Anthropic

Staff Red Team Engineer, Safeguards

Anthropic (14 days ago)

Hybrid · Full Time · Senior · $300,000 - $405,000 · Security

About this role

A Red Team Engineer on Anthropic’s Safeguards team performs adversarial testing to ensure the safety of deployed AI systems and products. The role focuses on uncovering vulnerabilities and emergent abuse across the product ecosystem, simulating sophisticated threat actors, and helping translate findings into concrete improvements to make AI systems more reliable and steerable.

Required Skills

  • Penetration Testing
  • Red Teaming
  • Application Security
  • Web Security
  • Security Tooling
  • Automation
  • API Security
  • Rate Limiting
  • Authorization Bypass
  • Distributed Systems

Qualifications

  • Bachelor's Degree

About Anthropic

anthropic.com

Anthropic is an AI safety and research company focused on building reliable, interpretable, and steerable AI systems. It develops large language models (branded as Claude) and offers APIs and enterprise products that let organizations integrate conversational AI with safety-focused controls, moderation, and privacy features. The company prioritizes interpretability and alignment research, publishes technical work, and engages with policymakers to reduce risks from advanced AI. Customers choose Anthropic for its safety-first approach, controllability tools, and research-driven models.
