Anthropic

Anthropic AI Safety Fellow

Anthropic (1 month ago)

Hybrid · Full Time · Trainee · $81,709 - $109,293 (estimated) · Research

About this role

The Anthropic Fellows Program is a four-month, full-time fellowship that funds and mentors technical talent to conduct empirical AI safety research aimed at producing public outputs (e.g., papers). Fellows receive direct mentorship, access to shared workspaces in Berkeley or London (with remote options in the US/UK/Canada), compute funding, and a stipend. The program runs multiple cohorts per year and is intended to accelerate candidates into empirical AI safety research and potential full-time roles.

Required Skills

  • Python
  • Empirical Research
  • Large Language Models
  • Deep Learning
  • Experiment Management
  • Open Source
  • Communication
  • Collaboration
  • Rapid Prototyping
  • AI Safety

Qualifications

  • Bachelor's Degree in Related Field
  • Work Authorization (US/UK/Canada)

About Anthropic

anthropic.com

Anthropic is an AI safety and research company focused on building reliable, interpretable, and steerable AI systems. It develops large language models (branded as Claude) and offers APIs and enterprise products that let organizations integrate conversational AI with safety-focused controls, moderation, and privacy features. The company prioritizes interpretability and alignment research, publishes technical work, and engages with policymakers to reduce risks from advanced AI. Customers choose Anthropic for its safety-first approach, controllability tools, and research-driven models.
