Software Engineer, Networking - Inference
OpenAI
About this role
As a Software Engineer on the Networking - Inference team at OpenAI, you will design and build a high-performance load balancer for research inference, routing traffic to large AI models with precision and reliability. The work includes architecting the system around long-lived connections, developing traffic routing strategies, and building observability tooling for complex distributed systems. You will collaborate with researchers and ML engineers to ensure infrastructure decisions enhance model performance, and you will own the system lifecycle from design through deployment. Candidates should have deep experience with distributed systems, especially load balancers, and with systems languages such as Rust or C++.
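To make the routing-strategy aspect concrete, here is a minimal sketch in Rust of a least-connections selection policy, a common choice for long-lived connections where round-robin falls short because it ignores how long each stream stays open. The `Backend` type and `pick_backend` function are illustrative assumptions, not OpenAI's actual implementation.

```rust
/// Illustrative backend record for a routing pool (hypothetical type).
struct Backend {
    addr: &'static str,
    active_connections: usize,
}

/// Pick the backend with the fewest active connections. For long-lived
/// inference streams this balances load better than round-robin, which
/// distributes new connections evenly regardless of how many are still open.
fn pick_backend(backends: &mut [Backend]) -> Option<&mut Backend> {
    backends.iter_mut().min_by_key(|b| b.active_connections)
}

fn main() {
    let mut pool = vec![
        Backend { addr: "10.0.0.1:8000", active_connections: 3 },
        Backend { addr: "10.0.0.2:8000", active_connections: 1 },
    ];
    if let Some(b) = pick_backend(&mut pool) {
        b.active_connections += 1; // route the new stream here
        println!("{}", b.addr);    // prints 10.0.0.2:8000
    }
}
```

A production balancer would track connection counts atomically and combine this with health checks, but the core selection logic is this simple.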
About OpenAI
openai.com
OpenAI is an AI research and deployment company that develops advanced artificial intelligence models and products, including ChatGPT and APIs, to help people and organizations solve problems, create content, and build intelligent applications.
Recent company news
OpenAI to acquire Promptfoo
2 days ago
Google And OpenAI Staff Unite Behind Anthropic As It Sues Pentagon
1 day ago
Workers at OpenAI show support for Anthropic as the company says it could lose $5 billion in its feud with the Pentagon
2 days ago
OpenAI is feeling the heat, but it’s not cooked
3 days ago
Metadata company Gracenote is the latest to sue OpenAI for copyright infringement
1 day ago
Company Facts
Headquarters
San Francisco, CA
Company Size
201-500 employees
Founded
2015
Industry
Technology
Glassdoor Rating
4.2 / 5
Leadership Team
Sarah Johnson
Chief Executive Officer
Michael Chen
Chief Technology Officer
Emily Williams
VP of Engineering
David Rodriguez
VP of Product
Jessica Thompson
Chief Financial Officer
Andrew Park
VP of Sales
Salary
$325k – $490k per year