Center for AI Safety

Research Engineer

San Francisco, CA · Onsite · Full Time · Mid-level · $100,000–$180,000 · Research
About this role

A Research Engineer at the Center for AI Safety conducts technical research aimed at reducing catastrophic and existential risks from artificial intelligence, focusing on topics such as power aversion, trojans, machine ethics, and reward hacking. The role involves collaborating with internal researchers and academics at top universities, contributing to publications, and running experiments at scale on the organization's compute cluster.

Required Skills

  • Machine Learning
  • Deep Learning
  • PyTorch
  • HuggingFace
  • Distributed Training
  • NLP
  • Reinforcement Learning
  • Data Engineering
  • Experimentation
  • Collaboration
About Center for AI Safety

safe.ai

The Center for AI Safety (CAIS) is a nonprofit research and field-building organization focused on reducing societal-scale risks from advanced artificial intelligence. CAIS conducts technical AI safety research, builds infrastructure and pathways into the field (including a compute cluster and researcher support), and publishes resources such as blog posts and a newsletter. It also works on advocacy and standards development to align industry, academia, and policymakers around stronger AI safety practices. Researchers, funders, and decision-makers rely on CAIS for pragmatic research, tools, and guidance to mitigate catastrophic AI risks.
