Researcher, Alignment Oversight

OpenAI · San Francisco, CA
Full-time · Mid-level · Posted 1 day ago

About this role

ABOUT THE TEAM

The Alignment Oversight team at OpenAI develops techniques for improving control, accountability, and alignment as AI systems become more capable and agentic. We combine longer-horizon research with hands-on deployment: we study long-term questions about how increasingly intelligent systems can be supervised, constrained, and corrected, while also building oversight systems that are used in practice today, both internally and externally (see our recent work on code review https://alignment.openai.com/scaling-code-verification/ and action monitoring for Codex https://alignment.openai.com/auto-review/). We also study how to learn from real-world deployments: using oversight data and human interventions to train future models to be more aligned, while preserving the effectiveness and independence of the oversight systems themselves.

ABOUT THE ROLE

As a researcher on the Alignment Oversight team, you will design and run experiments that improve our ability to oversee increasingly capable models. You will work on hands-on model training, evaluation design, and research infrastructure, and translate promising oversight ideas into systems that can operate on real model traffic and real user workflows. The role combines longer-horizon research with shorter deployment sprints; projects are typically scoped to 3-6 month research timelines and aim to directly improve future model behavior.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

IN THIS ROLE, YOU WILL:

- Design and implement alignment experiments focused on oversight systems for increasingly agentic AI models.
- Deploy practical systems for action monitoring, red-teaming, and human-in-the-loop control.
- Develop evaluations for alignment failure modes of frontier models, such as overeagerness, instruction-following failures, covert actions, restriction avoidance, and scheming propensity.
- Analyze deployment data to understand model failures, oversight gaps, and opportunities for training more aligned models.
- Develop techniques for feeding oversight signals back into training while preserving the reliability and independence of the oversight process.
- Produce externally publishable research when results advance the broader science of alignment.
- Collaborate across research, product, security, safety, and engineering teams to turn alignment ideas into working systems.
- Move quickly from research intuition to working experiments, prototypes, and evidence that can shape future models.

YOU MIGHT THRIVE IN THIS ROLE IF YOU:

- Have strong hands-on experience training, evaluating, or debugging large ML models, especially LLMs.
- Have experience with reinforcement learning, post-training, preference optimization, scalable oversight, model evaluation, or adjacent empirical ML research.
- Have strong engineering execution and can turn ambiguous research ideas into reliable experiments, tools, training pipelines, and production-facing systems.
- Have research intuitions for which experiments are likely to teach us something, while staying grounded in implementation details and empirical results.
- Are a team player, willing to take on a variety of tasks that move the team forward.
- Enjoy fast-paced, collaborative research environments where priorities shift as models and evidence change.
- See safety and usefulness as coupled goals.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement https://cdn.openai.com/policies/eeo-policy-statement.pdf. Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the follo
