Machine Learning Engineer, API Multicloud

OpenAI · San Francisco, CA
Full-time · Senior · Posted 14 hours ago

About this role

ABOUT THE TEAM

OpenAI’s API Multicloud team sits within B2B Applications and is responsible for extending OpenAI’s API platform into strategic cloud environments, starting with AWS. The team’s mission is to distribute OpenAI’s API broadly and safely by enabling key API technologies in AWS-native environments, in close partnership with Amazon and internal teams across Codex, Research, Safety Systems, and Applied. The team is focused on bringing core developer and enterprise capabilities into cloud-native environments, including AWS-hosted Codex, model customization / post-training as a service, and new stateful runtime environments for agentic workloads. This work sits at the intersection of production ML systems, developer platforms, model behavior, and large-scale infrastructure.

ABOUT THE ROLE

We’re hiring Machine Learning Engineers to build and improve the AI systems that help strategic partners adapt OpenAI models to important use cases in cloud-native environments. This role spans post-training workflows, evaluation, data pipelines, model behavior, and API/infrastructure integration. You’ll work at the boundary between partner needs and core ML systems: helping teams understand what is and isn’t working, diagnosing issues in training and evaluation workflows, and turning those learnings into improvements to the underlying platform. You’ll collaborate closely with Research, Applied, Safety Systems, infrastructure teams, and external technical partners to solve ambiguous model-performance problems. When you succeed, strategic partners and internal teams will be able to improve model behavior with confidence, driving measurable product improvements while the systems behind that work become more reliable, scalable, and effective over time.

IN THIS ROLE, YOU WILL

- Partner with strategic customers and internal teams to define target model behaviors, diagnose failure modes, and translate real-world needs into training, evaluation, and system requirements.
- Build and scale production ML systems for model customization, post-training, and fine-tuning-as-a-service workflows.
- Investigate whether training and customization workflows are producing the intended outcomes, and identify changes to data, evaluation, training, or infrastructure that improve performance.
- Partner with backend and infrastructure engineers to integrate ML capabilities into AWS-native API environments.
- Feed learnings from partner deployments back into the platform by proposing and implementing improvements to post-training systems, tooling, APIs, and developer workflows.
- Work closely with Research and Applied teams to bring model improvements, training workflows, and evaluation best practices into production.
- Help design systems that allow strategic partners and enterprise customers to safely customize OpenAI models for high-value use cases.
- Debug and improve complex systems spanning model behavior, training data, APIs, distributed infrastructure, and customer-facing product surfaces.
- Operate with high ownership in a 0→1 environment where requirements are ambiguous, systems are evolving quickly, and reliability matters.

YOUR BACKGROUND MIGHT LOOK SOMETHING LIKE:

- Master’s or PhD in Computer Science, Machine Learning, or a related field, or equivalent practical experience.
- 7+ years of professional engineering experience in relevant ML, infrastructure, or product-driven engineering roles.
- Strong ML engineering experience building, training, fine-tuning, evaluating, or deploying production AI systems, with hands-on experience in deep learning, transformer models, and frameworks like PyTorch or TensorFlow.
- Familiarity with training and fine-tuning large language models, including methods like supervised fine-tuning, distillation, preference optimization, reinforcement learning, or other post-training techniques.
- Strong software engineering fundamentals, including data structures, algorithms, systems design, and high-quality production code in Python, Rust, or similar languages.
- Experience with model customization, evaluation systems, data pipelines, distributed systems, cloud infrastructure, or production ML platform tradeoffs.
- Ability to operate across model behavior, APIs, and infrastructure, while collaborating closely with Research, Safety, product engineering, infrastructure, and external technical partners.
- Comfort moving quickly through ambiguity, owning problems end-to-end, and learning whatever is needed to get the job done.
- Bonus: experience with AWS, Kubernetes, agents, tool use, runtime environments, AI developer platforms, or speech models.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely pow

