Manager, Large Language Model Inference
NVIDIA
About this role
NVIDIA is seeking a hands-on Engineering Manager to lead development of next-generation LLM/VLM inference software for the TensorRT platform. The role combines technical ownership with people leadership: you will architect and ship production-grade inference runtimes across enterprise and edge GPUs, collaborating closely with researchers, GPU architects, and cross-functional teams to accelerate AI deployment and performance.
Skills
Qualifications
About NVIDIA
nvidia.com
NVIDIA invented the GPU and drives advances in AI, HPC, gaming, creative design, autonomous vehicles, and robotics.
Recent company news
NVIDIA Ignites the Next Industrial Revolution in Knowledge Work With Open Agent Development Platform
2 days ago
Nvidia CEO Says Company Is Firing Up H200 Production for China
1 day ago
Nvidia CEO Huang says company sees more than $1 trillion in sales through 2027
1 day ago
Nvidia's one of the fastest growing companies with one of the lowest valuations, says Jim Cramer
14 hours ago
Nvidia is reskinning games with AI. Gamers are angry about it, and wrong
1 day ago
About NVIDIA
Headquarters
Santa Clara, CA
Company Size
10,001+ employees
Founded
1993
Industry
Semiconductors
Glassdoor Rating
4.2 / 5
Leadership Team
Jensen Huang
Founder & Chief Executive Officer
Colette Kress
EVP & Chief Financial Officer
Michael Kagan
Chief Technology Officer
Jay Puri
EVP, Worldwide Field Operations
Debora Shoquist
EVP, Operations
Tim Teter
EVP & General Counsel
Salary
$184k – $357k
per year
More jobs at NVIDIA
Similar Jobs
Director of Engineering, Inference Services
CoreWeave
Spring 2023 GPU Workload Analysis Co-Op / Intern
Shelby County Government of Tennessee
AI Infrastructure Engineer
NIO
Engineering Manager, Inference ML Runtime
Cerebras Systems
Senior Engineer, GPU Performance Architect (PPA)
SEC
Senior ML Engineer (Token Factory)
Nebius