Anthropic

Research Engineer, Reward Models Platform

Anthropic (1 month ago)

Hybrid · Full Time · Senior · $315,000–$340,000 · Research Engineering

About this role

In this role, you will build scalable tooling and infrastructure that accelerates reward-signal development for Anthropic's fine-tuning teams, turning manual experimentation into fast, repeatable workflows. You will partner closely with researchers on the Rewards and Fine-Tuning teams to translate scientific needs into platform capabilities, and will occasionally contribute to research directly. The work focuses on enabling rapid iteration across rubric design, human-feedback experiments, reward-robustness evaluation, and detection of reward pathologies.

Required Skills

  • Python
  • ML Workflows
  • Data Pipelines
  • Infrastructure
  • Tooling
  • Automation
  • Monitoring
  • Observability
  • Experiment Tracking
  • Reward Modeling

Qualifications

  • Bachelor's Degree

About Anthropic

anthropic.com

Anthropic is an AI safety and research company focused on building reliable, interpretable, and steerable AI systems. It develops large language models (branded as Claude) and offers APIs and enterprise products that let organizations integrate conversational AI with safety-focused controls, moderation, and privacy features. The company prioritizes interpretability and alignment research, publishes technical work, and engages with policymakers to reduce risks from advanced AI. Customers choose Anthropic for its safety-first approach, controllability tools, and research-driven models.
