Engineering Manager, Inference ML Runtime

Cerebras · Sunnyvale, CA
Full-time · Lead · Posted 3 weeks ago

About this role

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs.

Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, transforming key workloads with ultra-high-speed inference.

Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

About the Role

The Inference ML Engineering team at Cerebras builds the runtime, APIs, and systems that power the fastest generative AI inference platform in the world. As an Engineering Manager, Inference ML Runtime, you will lead a team responsible for designing and scaling the systems that enable seamless execution of state-of-the-art AI models on Cerebras hardware. You will operate at the intersection of machine learning, distributed systems, and high-performance runtime engineering, translating cutting-edge research into production-ready infrastructure that serves a variety of text-only and multimodal models. This role combines technical leadership, people management, and execution ownership, with direct impact on Cerebras' core inference platform.

What You'll Do

Technical Leadership
- Own the architecture and evolution of the ML inference runtime and serving systems.
- Guide the design of high-throughput, low-latency inference pipelines; multimodal model execution (text, image, audio, video); and scalable serving infrastructure for concurrent workloads.
- Partner with cloud, compiler, core runtime, hardware, and ML teams to optimize end-to-end performance.

Team Leadership
- Build, manage, and grow a team of ML systems and infrastructure engineers.
- Provide technical direction, mentorship, and career development.
- Foster a culture of ownership, velocity, and engineering excellence.
- Recruit top talent in ML systems, distributed systems, and runtime engineering.

Execution & Delivery
- Drive execution of complex, cross-functional initiatives across ML engineering, compiler/runtime teams, and cloud and infrastructure teams.
- Own delivery of features such as advanced inference capabilities (structured outputs, sampling strategies; a brief sampling sketch follows this section); heterogeneous model types, including text-only and multimodal; performance optimization (latency, throughput, memory efficiency); and observability and reliability across the inference stack.
- Ensure high-quality releases through strong testing, validation, and operational rigor.

Platform & Performance Ownership
- Scale Cerebras' inference platform to handle large volumes of concurrent requests at very high speed.
- Drive improvements in latency, throughput, and compute efficiency.
- Identify and prioritize technical debt and system bottlenecks.
- Maintain Cerebras' industry-leading inference speed advantage.
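To make "sampling strategies" concrete, here is a minimal top-p (nucleus) sampling sketch in Python/NumPy. It is illustrative only: the function name and the NumPy-based formulation are assumptions for exposition, not Cerebras runtime code.

```python
import numpy as np

def top_p_sample(logits: np.ndarray, p: float = 0.9, temperature: float = 1.0) -> int:
    """Illustrative sketch (not Cerebras code): sample a token id from the
    smallest set of tokens whose cumulative probability exceeds p."""
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))   # numerically stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]           # tokens from most to least likely
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1      # keep just enough tokens to cover p
    keep = order[:cutoff]
    kept = probs[keep] / probs[keep].sum()    # renormalize over the nucleus
    return int(np.random.choice(keep, p=kept))
```

In a production runtime this selection rule would typically run fused on-device over batched logits; the NumPy version only shows the logic.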
Cross-Functional Collaboration
- Partner with ML researchers (model enablement), compiler teams (model execution optimization), and cloud/platform teams (deployment and scaling).
- Act as a bridge between research, infrastructure, and production systems.

What You Bring

Required
- 8+ years of experience in large-scale software engineering and ML systems or distributed systems.
- 2+ years of engineering management experience.
- Strong programming skills in Python (production systems) and C++ (performance-critical systems).
- Experience building and scaling large-scale inference systems (LLMs or multimodal).
- Experience working with cloud infrastructure and following best practices for building scalable microservices and applications.

Preferred
- Experience with LLM serving frameworks (e.g., vLLM, TensorRT-LLM, SGLang; a minimal serving sketch appears at the end of this posting); PyTorch and deep learning frameworks; and distributed systems and high-performance computing.
- Familiarity with ML runtime systems, model execution pipelines, and performance optimization for AI workloads.

Why This Role Matters

This team is central to Cerebras' mission of delivering the fastest AI inference in the world. Your work will directly enable real-time AI applications and unlock new capabilities across enterprise and frontier AI use cases.

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we
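For a concrete flavor of the preferred serving-framework experience, below is a minimal sketch using vLLM's offline-inference API. The model name and prompt are illustrative, and this is a generic vLLM usage pattern, not Cerebras' serving stack.

```python
# Minimal vLLM offline-inference sketch; model name is illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

outputs = llm.generate(["Explain wafer-scale inference in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)  # first completion for each prompt
```

The same knobs exposed by SamplingParams (temperature, top_p, max_tokens) map directly onto the sampling-strategy work described under Execution & Delivery above.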
