AI Engineer, Product
Mistral AI (1 month ago)
About this role
This role sits within Mistral AI's product team and focuses on building the evaluation, observability, and release framework for its LLM-based products. The engineer in this role is responsible for ensuring reliable model rollouts and measurable improvements in quality, latency, safety, and reliability, collaborating closely with the Science team and other internal stakeholders to support safe experimentation and the shipping of AI features. The role is based primarily in Paris or London, with some flexibility for remote work within specific European countries.
Required Skills
- TypeScript
- Python
- LLM Evaluation
- A/B Testing
- Metric Design
- Data Analysis
- Observability
- Logging
- Tracing
- Dashboards
About Mistral AI
mistral.ai
Mistral AI builds frontier large language models and an enterprise AI platform that lets companies customize, fine‑tune, and deploy AI assistants, autonomous agents, and multimodal models. Its offering centers on open models, developer APIs, and professional services designed for secure, scalable integration into products and workflows. Mistral combines research-grade model development with tooling for building tailored assistants and autonomous agents for business use cases. It also publishes research and consumer-facing apps that showcase its models in action.