Inference

Senior Software Engineer - Model Performance

Inference (posted 24 days ago)

Hybrid · Full Time · Senior · $220,000 - $320,000 · Engineering

About this role

Inference.net is seeking a technical expert to optimize and accelerate AI inference systems using GPU and CUDA technologies. The role involves deep technical work across the full inference stack, with the goal of improving the performance, latency, throughput, and cost efficiency of large language model serving. It offers the opportunity to work on cutting-edge AI infrastructure in a collaborative startup environment.


Required Skills

  • CUDA
  • GPU Programming
  • Inference Optimization
  • PyTorch
  • TensorRT
  • Quantization
  • Speculative Decoding
  • GPU Profiling
  • Model Serving
  • Distributed Inference

About Inference

inference.net

Inference.net is a platform specializing in AI inference solutions, enabling businesses to train and host custom large language models tailored to their specific needs. The company offers a range of services, including serverless API and batch inference capabilities, designed to deliver improved performance and cost efficiency compared to traditional models. With a focus on reducing latency and improving model accuracy, Inference.net helps organizations apply AI across modalities such as text, image, and video. Its mission is to provide high-quality, reliable AI solutions that streamline deployment and drive operational excellence for clients.
