Senior Technical Product Manager - Serverless AI

Nebius · Amsterdam, Netherlands
Full-time · Senior · Posted 1 week ago

About this role

About Nebius

Nebius is leading a new era in cloud infrastructure for the global AI economy. We are building a full-stack AI cloud platform that supports developers and enterprises from data and model training through to production deployment, without the cost and complexity of building large in-house AI/ML infrastructure. Built by engineers, for engineers: from large-scale GPU orchestration to inference optimization, we own the hard problems across compute, storage, networking and applied AI. Listed on Nasdaq (NBIS) and headquartered in Amsterdam, we have a global footprint with R&D hubs across Europe, the UK, North America and Israel. Our team of 1,500+ includes hundreds of engineers with deep expertise across hardware, software and AI R&D.

The role

Nebius Serverless AI is our consumption-based compute platform for running AI workloads (training jobs, inference endpoints, and interactive development environments) without managing infrastructure. Users submit containerized workloads via CLI or UI, access GPU compute with pay-per-second billing, and the platform handles provisioning, lifecycle, and cleanup. We launched GA in Q1 2026 and are now scaling toward 1,000+ users while building the next generation of capabilities: autoscaling, multi-node distributed workloads, and developer-first tooling.

We are looking for a Senior Technical Product Manager to join the Serverless AI product team. You will divide ownership of the product surface with the other PM, and you will own your areas with full autonomy. This is not a role where you write requirements and hand them off. You will be the person who understands container runtimes, GPU scheduling, cold start optimization, and inference serving deeply enough to make correct technical trade-offs, and also the person who talks to customers, shapes the CLI experience, defines pricing, and drives adoption.
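The pay-per-second billing model described above can be illustrated with a short sketch. All names and rates here are hypothetical, for illustration only; they are not actual Nebius Serverless AI pricing.

```python
# Hypothetical per-second billing sketch. The hourly rate below is an
# illustrative assumption, not real Nebius pricing.

HOURLY_RATE_USD = 2.88            # assumed hourly price of one GPU
PER_SECOND_RATE = HOURLY_RATE_USD / 3600

def job_cost(runtime_seconds: int) -> float:
    """Cost of a workload billed per second of GPU time, rounded to cents-ish."""
    return round(runtime_seconds * PER_SECOND_RATE, 4)

# A short-lived 90-second inference job is billed for exactly 90 seconds,
# rather than a full hour as in reservation-based pricing.
print(job_cost(90))    # 0.072
print(job_cost(3600))  # 2.88
```

The point of the model is the second line of output: a workload that runs for exactly one hour costs the full hourly rate, while anything shorter costs proportionally less.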
We are building the next generation of AI cloud: infrastructure designed from the ground up for GPU-intensive workloads, not retrofitted from legacy cloud. This is a lean, high-impact team where every person shapes the product directly. You need to be the kind of PM who amplifies engineering output by making the right calls on what to build and what to skip.

What success looks like in 12 months

- Serverless AI has clear product-market fit, with measurable activation and retention metrics improving quarter over quarter.
- Multi-node jobs and autoscaling endpoints are shipped and adopted by customers running production workloads.
- Cold start time is reduced from 1-3 minutes to under 60 seconds for common workloads, through a combination of product and infrastructure improvements you drove.
- Developer experience (CLI, docs, error messages, onboarding flow) sets the standard that developers expect from a next-generation AI cloud.
- At least three product decisions you made are directly attributable to customer conversations or data analysis you conducted.

Your responsibilities will include

1. Product Ownership

- Co-own the Serverless AI product roadmap (Jobs, Endpoints, and DevPods), taking primary ownership of specific product areas while collaborating closely with the other PM on shared priorities and cross-cutting decisions.
- Write detailed, technically precise PRDs that engineering teams can execute against. Our PRDs specify CLI syntax, API contracts, state machines, and billing models, not abstract feature descriptions.
- Make build/buy/defer decisions on capabilities like autoscaling, multi-node orchestration, HTTPS termination, secret injection, and health checking, based on customer signal and strategic priorities.

2. Technical Depth

- Understand the full workload lifecycle (container image pull → VM provisioning → GPU attachment → workload execution → cleanup) well enough to identify bottlenecks and propose solutions.
- Evaluate technical trade-offs in areas like container cold start optimization (image caching, snapshot restore, warm pools), GPU scheduling and bin-packing, and storage mount performance.
- Work directly with engineers on architecture decisions for distributed training support, endpoint autoscaling policies, and fault tolerance mechanisms.
- Stay current on the fast-moving serverless GPU infrastructure space (new inference frameworks such as vLLM, TensorRT-LLM and SGLang, container runtimes, orchestration approaches) and translate trends into product direction.

3. Customer & Market

- Run customer discovery and feedback sessions with ML engineers and platform teams at AI startups and enterprises. Turn qualitative insight into specific product actions.
- Analyze usage data, activation funnels, and churn patterns to identify where users get stuck and what features drive retention.
- Track market dynamics, emerging technologies, and industry trends to inform product strategy and ensure Nebius stays ahead of where the market is heading.
- Define and iterate on pricing, packaging, and tiering.
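The workload lifecycle named under Technical Depth can be sketched as a simple state machine. This is a minimal illustration under assumed state names and transitions; the actual Nebius platform's lifecycle model is not public here.

```python
# Minimal sketch of the serverless workload lifecycle described above:
# image pull -> VM provisioning -> GPU attachment -> execution -> cleanup.
# State names and transitions are illustrative assumptions, not the real
# Nebius implementation.

from enum import Enum, auto

class State(Enum):
    PENDING = auto()
    PULLING_IMAGE = auto()
    PROVISIONING_VM = auto()
    ATTACHING_GPU = auto()
    RUNNING = auto()
    CLEANUP = auto()
    DONE = auto()

# Legal happy-path transitions; in a real system any state could also
# fail directly into CLEANUP.
TRANSITIONS = {
    State.PENDING: State.PULLING_IMAGE,
    State.PULLING_IMAGE: State.PROVISIONING_VM,
    State.PROVISIONING_VM: State.ATTACHING_GPU,
    State.ATTACHING_GPU: State.RUNNING,
    State.RUNNING: State.CLEANUP,
    State.CLEANUP: State.DONE,
}

def run_lifecycle() -> list[State]:
    """Walk a workload through the happy path and return the state trace."""
    state, trace = State.PENDING, [State.PENDING]
    while state is not State.DONE:
        state = TRANSITIONS[state]
        trace.append(state)
    return trace
```

Framing the lifecycle this way makes the cold start problem concrete: everything between PENDING and RUNNING is latency the user perceives, so image caching, snapshot restore, and warm pools are all techniques for shortening or skipping those intermediate states.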
