Software Engineer, Compute Infrastructure

OpenAI · San Francisco, CA
Full-time · Mid-level · Posted 2 days ago

About this role

About the Team

Compute Infrastructure builds the platform that turns enormous amounts of compute into a reliable engine for frontier AI. We design, provision, schedule, operate, and optimize the systems that connect accelerators, CPUs, networks, storage, data centers, orchestration software, agent infrastructure, developer tools, and observability into one coherent experience for researchers and product teams. Our work spans the entire stack: capacity planning and cluster lifecycle, bare-metal automation, distributed systems, Kubernetes and scheduling, deep system optimization, high-performance networking, storage, fleet health, reliability, workload profiling, benchmarking, and the developer experience that lets teams use enormous compute systems with confidence. At this scale, small improvements to communication, scheduling, hardware efficiency, or debugging workflows compound into meaningful gains in research velocity. We are hiring across Compute Infrastructure rather than for a single narrow team, and we use this opening to match strong engineers to the problems where they can have the most leverage.

About the Role

We are looking for engineers who want to build the compute platform behind OpenAI's research and products. You may be strongest in low-level systems, high-performance computing, distributed infrastructure, reliability, CaaS, agent infrastructure, developer platforms, tooling, or the user experience around infrastructure. What matters is that you can reason carefully about complex systems, write durable software, and raise the quality and velocity of the people around you.

Depending on your background and interests, you might work close to hardware, close to users, on CaaS and agent infrastructure, or on the control planes and data planes in between. You could help bring new supercomputing capacity online, optimize training workloads from profiler traces and benchmarks, improve NCCL and collective communication behavior, reason about GPUs, NICs, topology, firmware, thermals, and failure modes, or design abstractions that make heterogeneous clusters feel like one coherent platform.

We do not expect every candidate to have worked at every layer. Some engineers will go deep on systems performance, kernel or runtime behavior, large-scale networking protocols, RDMA, NCCL, GPU hardware behavior, benchmarking, scheduling, or hardware reliability; others will make the platform more usable through APIs, tools, workflows, and developer experience. The common thread is strong engineering judgment and excitement about making enormous compute systems faster, more reliable, and easier to use.

This is a general opening for Compute Infrastructure. We will consider candidates for teams across Compute Infrastructure and match you based on your strengths, the problems that motivate you, and where the infrastructure needs are highest.

Where you might work

- Compute Foundations: Build the low-level platform primitives that make heterogeneous hardware, providers, and data centers repeatable, automatable, and operable at scale.
- Fleet / Orchestration: Turn raw capacity into reliable, efficient clusters and scheduling systems that researchers and product teams can use with minimal friction and a great experience.
- Core Network Engineering: Build and operate the high-performance networking fabrics, protocols, and observability needed for the largest training and serving workloads.
- Hardware Health and Observability: Detect, diagnose, remediate, and prevent hardware and fleet-health issues so usable compute stays high across providers and accelerator generations.
- Storage: Build scalable, performant, durable storage abstractions that keep data movement and storage access from becoming a bottleneck for research or products.
- Agent Infrastructure: Build sandboxed execution infrastructure for agentic workloads across research and production, with strong isolation, reliability, and scale.

In this role, you will:

- Build and deeply optimize reliable system software for large-scale compute systems that run some of the world's most demanding AI workloads
- Design and operate infrastructure across accelerators, CPUs, NICs, switches, networking protocols, storage, data centers, cluster orchestration, scheduling, and fleet health
- Profile, benchmark, and optimize training workloads across compute, memory, storage, networking, NCCL and collective communication, and cluster scheduling bottlenecks
- Create hardware-aware automation that makes provisioning, firmware and driver upgrades, incident response, and day-to-day operations faster and less error-prone
- Build CaaS, agent infrastructure, profiling, observability, benchmarking, and platform tools that help researchers, product engineers, and operators launch, debug, and optimize workloads with less friction
- Turn operational lessons into better systems, stronger abstractions, and clearer ownership boundaries across teams
- Collaborate across
