Staff Red Team Engineer, Safeguards
Anthropic (14 days ago)
About this role
A Red Team Engineer on Anthropic’s Safeguards team performs adversarial testing to ensure the safety of deployed AI systems and products. The role focuses on uncovering vulnerabilities and emergent abuse patterns across the product ecosystem, simulating sophisticated threat actors, and translating findings into concrete improvements that make AI systems more reliable and steerable.
Required Skills
- Penetration Testing
- Red Teaming
- Application Security
- Web Security
- Security Tooling
- Automation
- API Security
- Rate Limiting
- Authorization Bypass
- Distributed Systems
Qualifications
- Bachelor's Degree
About Anthropic
anthropic.com
Anthropic is an AI safety and research company focused on building reliable, interpretable, and steerable AI systems. It develops large language models (branded as Claude) and offers APIs and enterprise products that let organizations integrate conversational AI with safety-focused controls, moderation, and privacy features. The company prioritizes interpretability and alignment research, publishes technical work, and engages with policymakers to reduce risks from advanced AI. Customers choose Anthropic for its safety-first approach, controllability tools, and research-driven models.
Similar Jobs
Security Lead, Agentic Red Team
DeepMind (11 days ago)
Red Team Manager
Cloudflare Events (5 days ago)
AI Security Engineer - Red Team (United States, Remote)
Lakera (2 months ago)
Senior Internal Red Team Engineer
Horizon3.ai (2 months ago)
Red Team Operator
SixGen, Inc. (1 year ago)
1002- Security Red Teaming Specialist
GoFasti (20 days ago)