Research Engineer
Center for AI Safety (3 years ago)
About this role
A Research Engineer at the Center for AI Safety conducts technical research to reduce catastrophic and existential risks from artificial intelligence, focusing on topics such as Power Aversion, Trojans, Machine Ethics, and Reward Hacking. The role involves collaborating with internal researchers and with academics at top universities, contributing to publications, and running experiments at scale on the organization's compute cluster.
Required Skills
- Machine Learning
- Deep Learning
- PyTorch
- HuggingFace
- Distributed Training
- NLP
- Reinforcement Learning
- Data Engineering
- Experimentation
- Collaboration
About Center for AI Safety
safe.ai
The Center for AI Safety (CAIS) is a nonprofit research and field-building organization focused on reducing societal-scale risks from advanced artificial intelligence. CAIS conducts technical AI safety research, builds infrastructure and pathways into the field (including a compute cluster and researcher support), and publishes resources such as blog posts and a newsletter. It also works on advocacy and standards development to align industry, academia, and policymakers around stronger AI safety practices. Researchers, funders, and decision-makers rely on CAIS for pragmatic research, tools, and guidance to mitigate catastrophic AI risks.