Principal On-Device Model Inference Optimization Engineer
NVIDIA (4 months ago)
About this role
A Principal On-Device Model Inference Optimization Engineer at NVIDIA focuses on advancing on-device AI model performance and efficiency to support autonomous vehicle systems. The role contributes to production-grade, safety-critical AI deployments across NVIDIA’s hardware and software platforms and works within interdisciplinary engineering teams to deliver scalable inference solutions.
Required Skills
- Model Optimization
- Quantization
- Pruning
- Distillation
- CUDA
- C++
- Python
- PyTorch
- ONNX
- TensorRT
Qualifications
- MSc or PhD in Computer Science, Engineering, or related field
About NVIDIA
nvidia.com
NVIDIA invented the GPU and drives advances in AI, HPC, gaming, creative design, autonomous vehicles, and robotics.
Similar Jobs
Principal Machine Learning Researcher, On-Device Optimization
HP (1 month ago)
Senior Machine Learning Engineer, On-Device Optimization
HP (1 month ago)
Staff AI Software Engineer - Edge Model Optimization & Deployment
Field AI (20 days ago)
Senior Software Engineer - Model Performance
Inference (1 month ago)
Internship / Thesis Student for Edge AI Optimization Research & Engineering (f/m/d)
NXP Semiconductors (19 days ago)
Neural Network Optimization Engineer
Recraft (3 months ago)