Staff Software Engineer, Inference Cloud

Cerebras · Sunnyvale, CA
Full-time · Lead · Posted 1 year ago

About this role

Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This lets Cerebras deliver industry-leading training and inference speeds, and lets machine learning users run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of capacity, transforming key workloads with ultra-high-speed inference. Thanks to the wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

Location: Sunnyvale

We're hiring a Staff Engineer to own major areas of the architecture of our Inference Cloud Platform. This team owns the cloud layer behind our Inference Service, with responsibility for availability, latency, reliability, and global scale.

This is a hands-on IC role for an engineer who wants to work on the hardest distributed systems problems in the stack: multi-region traffic architecture, graceful degradation under bursty AI workloads, performance at high QPS, and the operating model for a platform that has to stay fast and available under load. You'll write code, lead key architectural decisions in your domain, debug production issues, and help shape technical direction across adjacent teams.

If you're interested in building the next-generation architecture of a globally distributed inference platform, we'd like to talk.

Responsibilities

- Platform Direction. Help shape the technical direction for the Inference Cloud Platform, including multi-region topology, failure domains, service boundaries, and system evolution over time, and own the roadmap for major technical areas.
- Core Cloud Systems. Design and build critical platform components such as service discovery, request routing, load balancing, caching, batching, and traffic management for AI inference workloads.
- Reliability & Performance. Architect active-active systems with rapid failover, graceful degradation, and clear SLOs. Drive system-level improvements in latency, throughput, capacity efficiency, and resilience under unpredictable demand.
- Traffic Control & Service Tiers. Define platform mechanisms for admission control, quota management, rate limiting, and differentiated quality of service across workload types and customer tiers (see the illustrative sketch at the end of this posting).
- Execution on Critical Paths. Write and review production code in the most important parts of the platform. Make high-consequence architectural decisions within your area and set the technical bar through design reviews, code reviews, and sound engineering judgment.
- Production Leadership. Lead on the hardest production issues and cross-system bottlenecks. Drive observability, incident response, capacity planning, and post-incident improvement with a high standard for operational rigor.
- Technical Influence. Partner with ML, Product, Infrastructure, and Platform teams to translate product and business requirements into scalable system designs, and drive alignment on shared technical decisions within your domain and adjacent platform surfaces.
- Mentorship. Raise the effectiveness of senior engineers through design feedback, pairing, and clear technical standards.

Skills & Qualifications

- 8+ years of experience in software engineering, with substantial individual contributor experience building and operating large-scale distributed systems or cloud infrastructure.
- Deep expertise in distributed systems architecture in cloud environments, including networking, compute orchestration, container platforms, and multi-region production services.
- Strong track record of making sound architectural decisions for highly available, latency-sensitive systems at scale.
- Experience optimizing latency, throughput, and efficiency in high-QPS systems. Experience with TTFT (time to first token) and tail-latency reduction is a strong plus.
- Strong proficiency in backend or systems languages such as Go, C++, or Python, with the expectation that you can contribute production code directly.
- Experience designing observability and reliability practices, including metrics, logging, tracing, alerting, incident response, and SLO-driven operations.
- Ability to influence senior engineers and cross-functional partners through technical …
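To give a concrete flavor of the traffic-control work this role owns, here is a minimal sketch of per-tier admission control using a token bucket, written in Go. It is purely illustrative: it assumes nothing about Cerebras' actual systems, and every name in it is hypothetical.

// Illustrative only: a minimal token-bucket admission check of the kind
// the Traffic Control & Service Tiers work might involve. All names are
// hypothetical, not Cerebras code.
package main

import (
	"fmt"
	"sync"
	"time"
)

// TokenBucket admits a request when a token is available; tokens refill
// at a fixed rate, up to a burst capacity.
type TokenBucket struct {
	mu       sync.Mutex
	tokens   float64
	capacity float64
	rate     float64 // tokens added per second
	last     time.Time
}

func NewTokenBucket(ratePerSec, burst float64) *TokenBucket {
	return &TokenBucket{tokens: burst, capacity: burst, rate: ratePerSec, last: time.Now()}
}

// Allow reports whether one request may proceed right now.
func (b *TokenBucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	// Refill proportionally to elapsed time, capped at capacity.
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	// Hypothetical per-tier limits: premium traffic gets a higher rate
	// and a larger burst than free-tier traffic.
	limits := map[string]*TokenBucket{
		"free":    NewTokenBucket(2, 2),
		"premium": NewTokenBucket(10, 20),
	}
	for i := 0; i < 5; i++ {
		fmt.Println("free request admitted:", limits["free"].Allow())
	}
}

In a real platform this per-key state would live behind the routing layer and be combined with quota accounting and load shedding; the sketch only shows the core mechanism for differentiating service tiers under load.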
