Anthropic

Enforcement Operations Lead

Hybrid · Full Time · Senior · $230,000 - $270,000 · Trust and Safety Operations

About this role

Anthropic's Safeguards team is responsible for enforcing policies, protecting users, and ensuring AI model safety and compliance through evaluations, mitigations, and process development. The role involves cross-functional collaboration with policy, engineering, and legal teams, as well as managing content moderation and enforcement operations.

Required Skills

  • Content Moderation
  • Policy Enforcement
  • Data Analysis
  • Operational Reporting
  • Vendor Management
  • Regulatory Reporting
  • Content Review
  • Process Development
  • Copyright Enforcement
  • Safety Evaluations

About Anthropic

anthropic.com

Anthropic is an AI safety and research company focused on building reliable, interpretable, and steerable AI systems. It develops large language models (branded as Claude) and offers APIs and enterprise products that let organizations integrate conversational AI with safety-focused controls, moderation, and privacy features. The company prioritizes interpretability and alignment research, publishes technical work, and engages with policymakers to reduce risks from advanced AI. Customers choose Anthropic for its safety-first approach, controllability tools, and research-driven models.
