Applied AI, Evaluation Engineer
Mistral AI (13 days ago)
About this role
The Evaluation Engineer on Mistral's Applied AI team designs evaluation methodologies and builds scalable evaluation infrastructure to measure LLM capabilities for enterprise customers. The role defines production readiness across verticals and translates evaluation insights into model and product improvements while interfacing with research, engineering, and customer-facing teams.
Required Skills
- LLM Evaluation
- Benchmarking
- Evaluation Methodology
- Python
- APIs
- ML Infrastructure
- PyTorch
- HuggingFace
- Customer Engagement
- Research Collaboration
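As an illustration of the kind of work the role description covers, here is a minimal sketch of an evaluation harness: defining test cases, scoring model outputs with a metric, and aggregating results. All names (`EvalCase`, `run_eval`, the stub model) are hypothetical examples for this posting, not Mistral's actual infrastructure or API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One evaluation example: a prompt and its reference answer."""
    prompt: str
    expected: str

def exact_match(prediction: str, expected: str) -> float:
    # Simplest possible metric: 1.0 on a normalized exact match, else 0.0.
    return float(prediction.strip().lower() == expected.strip().lower())

def run_eval(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    # Score every case and report mean accuracy across the suite.
    scores = [exact_match(model(c.prompt), c.expected) for c in cases]
    return sum(scores) / len(scores)

# Usage with a stub "model" standing in for a real LLM API call:
cases = [
    EvalCase("Capital of France?", "Paris"),
    EvalCase("2 + 2 = ?", "4"),
]
stub_model = lambda prompt: "Paris" if "France" in prompt else "5"
print(run_eval(stub_model, cases))  # 0.5: one of the two cases passes
```

In practice the stub would be replaced by a real model client, exact match by task-appropriate metrics (rubric grading, LLM-as-judge, pass@k for code), and the list of cases by versioned benchmark datasets.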
About Mistral AI
mistral.ai — Mistral AI builds frontier large language models and an enterprise AI platform that lets companies customize, fine-tune, and deploy AI assistants, autonomous agents, and multimodal models. Its offering centers on open models, developer APIs, and professional services designed for secure, scalable integration into products and workflows, combining research-grade model development with tooling for building tailored assistants and agents for business use cases. Mistral also publishes research and consumer-facing apps that showcase its models in action.
Similar Jobs
AI Applied Scientist
Figma (1 month ago)
AI Platform Engineer, Applied AI
Circle.so (4 days ago)
Member of Technical Staff - Evaluations
Reflection AI (1 month ago)
Applied AI Engineer
Squint (1 month ago)
AI/ML Evaluation Engineer - Global Solutions Provider (Mexico)
Truelogic Software (2 months ago)
Applied AI Researcher
Tavily (30 days ago)