Data Scientist, Safety

OpenAI · London, UK
Full-time · Mid-level · Posted 15 hours ago

About this role

About the Team

OpenAI’s Safety teams work to ensure our products are safe, trusted, and resilient as frontier AI systems scale globally. We tackle some of the company’s most important challenges: understanding and preventing misuse and misalignment, intercepting fraud and abuse, and protecting vulnerable users. We are hiring Data Scientists to help build the analytical foundations that allow OpenAI to deploy increasingly capable AI responsibly. This is a high-impact role operating at the intersection of product, safety, policy, and research.

About the Role

As a Data Scientist, Safety, you will help solve complex and ambiguous problems where rigorous analysis directly informs critical decisions. Depending on your background and team alignment, you may work on areas such as:

- Measuring harmful or abusive behavior across OpenAI’s products
- Detecting fraud, manipulation, and coordinated misuse
- Evaluating and improving safety classifiers, rules systems, mitigation systems, and human review workflows
- Designing experiments and causal analyses to understand product, policy, and mitigation impacts
- Building prevalence estimators, dashboards, monitoring systems, and executive decision frameworks
- Diagnosing gaps in safety and integrity systems using behavioral and product data, and helping quantify and navigate false positive / false negative tradeoffs
- Translating ambiguous safety risks into measurable problems and evidence-based recommendations
- Partnering with Product, Engineering, Policy, Research, and Operations teams to improve safety outcomes
- Building zero-to-one analytical systems in rapidly evolving domains

Ideal Candidate

We’re looking for strong Data Scientists who thrive in ambiguous, high-leverage environments.
You may be a fit if you have:

- Strong statistical reasoning and analytical judgment
- Experience with experimentation, causal inference, or observational analysis
- Strong SQL and Python skills
- Experience working with messy, incomplete, or noisy datasets
- The ability to structure open-ended business or risk problems
- Excellent communication with technical and non-technical stakeholders
- High ownership and comfort operating independently

Helpful backgrounds include:

- Trust & Safety / Integrity
- Fraud & abuse
- Security analytics
- AI/ML model measurement and evaluation
- Alignment and AI safety research
- Biosecurity, synthetic biology, infectious diseases, or computational biology

Why This Role

- Work on mission-critical problems with global impact
- Help shape how frontier AI systems are deployed safely
- Operate with unusual ownership and visibility
- Solve novel problems where there are no existing playbooks
- Join highly collaborative teams working across OpenAI

Location

San Francisco, New York, or London, depending on team alignment and business need.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to deploy them safely to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement: https://cdn.openai.com/policies/eeo-policy-statement.pdf.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.

For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protecting computer hardware entrusted to you from theft, loss, or damage; returning all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintaining the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
