Model Policy

OpenAI · San Francisco, CA
Full-time · Mid-level · Posted 2 days ago

About this role

About the Team

Our Safety Systems team (https://openai.com/safety/safety-systems) is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency. Within Safety Systems, the Model Policy team aligns model behavior with desired human values and norms. We co-design policy with models and for models by driving rapid policy taxonomy iteration based on data and defining evaluation criteria for foundational models' ability to reason about safety.

About the Role

If you have a specific expertise or specialty related to this work, please note it in your application via your resume, cover letter, or application note.

Frontier AI systems are expanding what people can do across domains, creating both enormous opportunities and difficult safety questions: when should a model help, when should it refuse, and how do we make those boundaries clear enough to train, evaluate, and enforce? In this role, you will help define how OpenAI's models should behave in high-risk or high-ambiguity contexts, such as agentic systems, multimodal systems, user safety, privacy, and other emerging risk domains. This is an ideal role for someone who can move across unfamiliar topics, reason from first principles, and turn ambiguity into practical model behavior. You will work closely with research, engineering, product, preparedness, and operations teams to build policies that are technically grounded, measurable, and responsive to real-world risk.

In this role, you will:

- Design and maintain model policies across safety-relevant domains, including dual-use, agentic, and emerging frontier-risk areas.
- Translate risk and harm models into clear behavioral specifications, evaluation criteria, grading guidance, and system-level safeguards.
- Define practical boundaries between beneficial uses of AI and assistance that could materially enable harm, exploitation, misuse, or unsafe outcomes.
- Build policy artifacts that support model training, evaluation, and deployment.
- Partner with safety researchers, engineers, product teams, and other stakeholders to operationalize policy into scalable model behavior and measurable safeguards.
- Use red-teaming results, deployment data, model failures, over-refusals, under-refusals, and ambiguous edge cases to improve policy and evaluation quality over time.
- Identify emerging capability areas where frontier AI systems could create new safety challenges or lower barriers to harm.
- Study real-world deployments to identify where model behavior succeeds, fails, or drifts from the intended safety posture.
- Combine longer-horizon safety research with hands-on launch and deployment work.
- Contribute to system cards, safety reports, policy documentation, launch reviews, and external communications on OpenAI's approach to model safety and risk mitigation.
- Design and run human data campaigns, including gold set construction, labeling guidance, calibration, adjudication, and eval coverage analysis, to ensure policies can be reliably measured and improved.

You might thrive in this role if you:

- Have strong judgment about how advanced AI systems may affect real-world risk, especially in ambiguous, fast-moving, or high-impact areas.
- Have experience building or applying policies, taxonomies, harm models, threat models, or risk frameworks for complex technical, social, or adversarial systems.
- Can move across domains without needing to be the deepest subject-matter expert in every area, while knowing when to seek expert input.
- Can turn fuzzy questions into structured policy frameworks, evaluation criteria, operational guidance, and enforceable model behavior.
- Are comfortable using empirical evidence, including evaluations, red-teaming results, deployment observations, and model failure modes, to inform policy decisions.
- Think in systems across policy, data, graders, classifiers, training, deployment safeguards, measurement, monitoring, and escalation workflows.
- Have technical judgment about what model behavior can realistically be trained, measured, evaluated, and enforced at scale.
- Work well with research, engineering, product, policy, domain-expert, and operational teams.
- Write clearly about complex tradeoffs where safety, user value, and implementation constraints all matter.
- Take a pragmatic approach to safety, focused on reducing real-world risk while preserving legitimate, beneficial, and socially valuable uses of AI.
- Enjoy fast-paced, collaborative research environments where priorities shift as models, evidence, and risks change.
- Stay grounded in implementation details, empirical results, and what can actually be trained or measured.

Our relevant publications:

- Accelerating the cyber defense ecosystem that protects us all: https://openai.com/index/accelerating-cyber-defense-ecosystem/
- Trusted Access: https://openai.com/index/scaling-tru
