Security Lead, Agentic Red Team
DeepMind (11 days ago)
About this role
The Security Lead for the Agentic Red Team at Google DeepMind heads a specialized unit focused on assessing and improving the safety of agentic and generative AI systems. The role centers on closing the "Agentic Launch Gap" by integrating adversarial findings into development lifecycles and automated validation frameworks across product teams.
Required Skills
- Red Teaming
- Offensive Security
- Adversarial ML
- LLM Architectures
- Agentic Workflows
- Security Engineering
- Threat Intelligence
- Automation
- Python
Qualifications
- Bachelor's Degree in Computer Science
About DeepMind
deepmind.google
Google DeepMind is an AI research lab within Alphabet focused on solving intelligence and building safe, general-purpose artificial intelligence to advance science and benefit humanity. It develops cutting-edge machine learning methods, especially deep learning, reinforcement learning, and neuroscience-inspired models, and has produced landmark systems such as AlphaGo, AlphaZero, and AlphaFold. DeepMind partners with academia, healthcare providers, and other Google teams to apply AI to scientific discovery, medicine, energy efficiency, and real-world problems. The organization emphasizes safety, interpretability, and responsible deployment alongside publishing research and tools for the wider community.