Offensive Security Research Engineer, Safeguards
Anthropic (2 hours ago)
About this role
Anthropic is seeking vulnerability researchers to analyze and mitigate risks associated with large language models (LLMs). The role involves researching how adversaries might misuse LLMs and developing defenses against these threats, contributing to safer and more trustworthy AI systems.
Required Skills
- Vulnerability Research
- Penetration Testing
- Security Exploitation
- Reverse Engineering
- Network Security
- Software Engineering
- AI Safety
- Bug Bounty
- Open Source
- Threat Modeling
About Anthropic
anthropic.com
Anthropic is an AI safety and research company focused on building reliable, interpretable, and steerable AI systems. It develops large language models (branded as Claude) and offers APIs and enterprise products that let organizations integrate conversational AI with safety-focused controls, moderation, and privacy features. The company prioritizes interpretability and alignment research, publishes technical work, and engages with policymakers to reduce risks from advanced AI. Customers choose Anthropic for its safety-first approach, controllability tools, and research-driven models.
Similar Jobs
Senior Advisor, Offensive Security
Desjardins (1 month ago)
Security Engineer | Mid - Senior | WebSec Team
Nord Security (17 days ago)
Offensive Security Associate
PwC Nederland (10 months ago)
Security Engineer - Offensive Security
Traveloka (8 months ago)
Security Engineer (macOS)
Prelude (24 days ago)
Offensive Security Specialist
AWE plc (1 month ago)