Staff Software Engineer, Kubernetes Platform
Full-time · Lead · Posted 1 week ago
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
Anthropic runs some of the largest Kubernetes clusters in the industry. We have fleets of hundreds of thousands of nodes across multiple cloud providers and datacenters to train, research, and serve frontier AI models. The Kubernetes Platform team owns the Kubernetes control plane that makes those clusters work.
We are operating at a scale where the defaults stop working. We own the scheduler and extend it to place topology-sensitive ML workloads across thousands of accelerators at once. We scale the control plane itself — apiserver, etcd, controllers — so it stays responsive as object counts and node counts grow by orders of magnitude. And we build the core cluster services every workload depends on, like service discovery, so they hold up under the same pressure.
We make sure the control plane is fast, correct, and always available. Your work will directly determine whether Anthropic can keep reliably and safely training frontier models as our compute footprint continues to grow.
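To give a flavor of the placement problems above: gang scheduling means a multi-accelerator job is admitted all-or-nothing, since a partially placed training job makes no progress while holding capacity. A toy, dependency-free sketch of that admission decision (plain Go, no Kubernetes imports; the real kube-scheduler scheduling framework is far richer, and every name here is illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// admitGang decides, all-or-nothing, whether a gang of gangSize
// single-slot pods fits on the nodes described by freeSlots (free
// accelerator slots per node). It returns per-node assignment counts,
// or nil if the gang cannot be placed atomically.
func admitGang(freeSlots []int, gangSize int) []int {
	total := 0
	for _, s := range freeSlots {
		total += s
	}
	if total < gangSize {
		return nil // gang semantics: never place a partial gang
	}
	// Greedily pack onto the emptiest nodes first to use fewer nodes
	// and reduce fragmentation for the next gang.
	idx := make([]int, len(freeSlots))
	for i := range idx {
		idx[i] = i
	}
	sort.Slice(idx, func(a, b int) bool {
		return freeSlots[idx[a]] > freeSlots[idx[b]]
	})
	assign := make([]int, len(freeSlots))
	remaining := gangSize
	for _, i := range idx {
		n := freeSlots[i]
		if n > remaining {
			n = remaining
		}
		assign[i] = n
		remaining -= n
		if remaining == 0 {
			break
		}
	}
	return assign
}

func main() {
	fmt.Println(admitGang([]int{4, 2, 8}, 10)) // prints [2 0 8]
	fmt.Println(admitGang([]int{4, 2, 8}, 16)) // 14 free < 16: prints []
}
```

In a real scheduling-framework plugin this logic would live behind the Permit/PreFilter extension points rather than a single function, but the all-or-nothing shape of the decision is the same.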
Key responsibilities
Own, operate, and extend the Kubernetes scheduler for Anthropic's accelerator fleets, including custom scheduling plugins and policies for gang scheduling, topology awareness, and preemption
Scale the Kubernetes control plane (apiserver, etcd, controller-manager) to support clusters far beyond typical limits, and find the next bottleneck before it finds us
Design, build, and operate core cluster services such as service discovery that every workload in the fleet depends on
Build and maintain custom controllers, operators, and CRDs
Partner with research, training, and inference teams to understand workload shapes and turn their requirements into platform capabilities
Collaborate with cloud providers on required features and escalations
Participate in on-call, lead incident response, and design processes (postmortems, runbooks, SLOs) that help the team avoid repeating failures
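The custom controllers mentioned above all follow Kubernetes' level-triggered reconcile pattern: observe current state, compare against desired state, act on the difference, repeat. A minimal dependency-free sketch of that loop, assuming a toy replica-count resource (the types here are illustrative, not the real controller-runtime API):

```go
package main

import "fmt"

// ReplicaSet is a toy stand-in for a custom resource: a controller
// continuously drives Observed toward Desired.
type ReplicaSet struct {
	Desired  int
	Observed int
}

// reconcile computes the delta and "acts" on it; in a real controller
// the actions would be apiserver calls creating or deleting Pods.
func reconcile(rs *ReplicaSet) (created, deleted int) {
	switch {
	case rs.Observed < rs.Desired:
		created = rs.Desired - rs.Observed
	case rs.Observed > rs.Desired:
		deleted = rs.Observed - rs.Desired
	}
	rs.Observed = rs.Desired // act, then record the new observed state
	return
}

func main() {
	rs := &ReplicaSet{Desired: 5, Observed: 2}
	c, d := reconcile(rs)
	fmt.Printf("created=%d deleted=%d\n", c, d) // created=3 deleted=0
}
```

The key property is that reconcile is idempotent: running it again at steady state is a no-op, which is what makes controllers safe to retry after crashes or missed events.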
Minimum qualifications
Significant software engineering experience building and operating production distributed systems
Proficiency in at least one systems-appropriate language (e.g., Go, Python, Rust, or C++)
Deep, hands-on Kubernetes experience (well beyond “user of”) with the scheduler, controllers, or apiserver, or operating large multi-tenant clusters
Demonstrated ability to debug complex issues across the stack, from API behavior down to node and network-level root causes
A track record of designing for reliability, correctness, and clear failure semantics in systems other engineers depend on
Strong written and verbal communication; comfort building consensus with internal stakeholders
Preferred qualifications
Experience with Kubernetes internals or contributions: kube-scheduler / scheduling framework, apiserver, etcd, client-go, controller-runtime, or similar
Experience building or operating cluster schedulers or batch systems (e.g., Kueue, Volcano, Slurm, or in-house equivalents)
Background scaling control planes or coordination systems (etcd, ZooKeeper, Consul, or large DNS/service-mesh deployments)
Familiarity with ML infrastructure: GPUs, TPUs, or Trainium; gang scheduling; topology-aware placement; collective networking such as NCCL
Experience with GCP and/or AWS, including GKE/EKS internals and Infrastructure as Code
Low-level systems experience such as Linux kernel tuning, cgroups, or eBPF
8+ years of relevant industry experience, including time leading large, ambiguous infrastructure projects
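Topology-aware placement, mentioned above, is ultimately a bin-selection problem: collective operations (e.g. over NCCL) are fastest when a gang lands inside one network domain, so the scheduler prefers the tightest domain that still fits the whole job. A self-contained sketch under that assumption (domain names and slot counts are made up):

```go
package main

import "fmt"

// pickDomain chooses the best-fit topology domain (e.g. a rack or
// network block) that can hold an entire gang, minimizing both
// cross-domain collective traffic and wasted headroom.
func pickDomain(freeByDomain map[string]int, gangSize int) (string, bool) {
	best, bestFree := "", -1
	for d, free := range freeByDomain {
		// Feasible domain with the least spare capacity wins.
		if free >= gangSize && (bestFree == -1 || free < bestFree) {
			best, bestFree = d, free
		}
	}
	return best, bestFree != -1
}

func main() {
	free := map[string]int{"rackA": 8, "rackB": 4, "rackC": 16}
	d, ok := pickDomain(free, 4)
	fmt.Println(d, ok) // rackB true: tightest domain that fits the gang
}
```

Real placement also weighs preemption cost and multi-level topology (node, rack, block), but best-fit within a domain is the core idea.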
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Annual Salary:
$320,000 — $405,000 USD
Logistics
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply