Product Manager, AI Platform

FluidStack · San Francisco, CA · $180k - $250k
Full-time · Senior · Posted 1 month ago

About this role

ABOUT FLUIDSTACK

At Fluidstack, we build the compute, data centers, and power that will fuel artificial superintelligence. We work with Anthropic, Google, Meta, AMI Labs, and Black Forest Labs to deploy gigawatts of compute at industry-defining speeds. We are investing tens of billions of dollars in US infrastructure. In 2026, we will deploy 1 GW; in 2027, 10 GW. Our team is small, fast, and obsessed with quality. We own outcomes end to end, challenge assumptions, and treat our customers' problems as our own. No task is beneath anyone here. There are a few thousand people who will shape the trajectory of superintelligence. Come and be one of them.

ABOUT THE ROLE

We're hiring a Product Manager to own our AI platform roadmap, including managed inference and agent platforms. You'll define how Fluidstack enables customers to deploy, scale, and optimize LLM inference workloads, from model serving and routing to agent orchestration and compound AI systems. This role requires balancing customer needs for low latency and high throughput against the operational realities of GPU utilization, cost efficiency, and platform reliability. You'll work across engineering, ML research, and go-to-market teams to position Fluidstack against inference-first competitors such as Together AI, Fireworks, Baseten, Modal, and Replicate.
WHAT YOU'LL DO

- Own the product strategy and roadmap for managed inference services, including model deployment, autoscaling, multi-LoRA serving, and inference optimization
- Define requirements for agent platform capabilities: structured outputs, function calling, memory primitives, tool integration, and multi-step reasoning workflows
- Drive decisions on which inference optimizations to prioritize: speculative decoding, continuous batching, KV cache management, quantization support, and custom kernel integration
- Partner with ML infrastructure engineers to design APIs, SDKs, and deployment workflows that support model fine-tuning, version management, and A/B testing
- Work with datacenter teams to optimize GPU allocation strategies, balancing dedicated vs. serverless deployments, cold-start latency, and cost-per-token economics
- Analyze competitive offerings from Together AI (inference optimization stack), Fireworks (custom inference engine), Baseten (training-to-inference integration), and Modal (serverless architecture)
- Define pricing models that align with customer usage patterns (tokens, requests, GPU-hours) while maintaining healthy unit economics
- Conduct customer research to understand inference workload requirements: latency SLAs, throughput targets, model size constraints, and integration needs
- Translate customer feedback into feature specifications, including support for new model architectures, framework integrations (vLLM, TensorRT-LLM, TGI), and observability tooling
- Build go-to-market materials: reference architectures, performance benchmarks, cost calculators, and migration guides for customers moving from self-hosted or competing platforms

ABOUT YOU

- 5+ years of product management experience, with at least 3 years focused on AI/ML infrastructure, inference platforms, or developer tools
- Strong technical understanding of transformer architectures, inference optimization techniques, and production ML systems
- Experience building products for technical users deploying LLMs in production (ML engineers, research scientists, AI application developers)
- Track record of shipping features that improved inference latency, throughput, or cost efficiency, backed by quantitative metrics
- Deep familiarity with the inference ecosystem: serving frameworks (vLLM, TensorRT-LLM, TGI), model formats (GGUF, SafeTensors), and API standards (OpenAI-compatible endpoints)
- Understanding of GPU memory constraints, batching strategies, and the tradeoffs between latency-optimized and throughput-optimized serving
- Ability to translate complex technical concepts (speculative decoding, PagedAttention, multi-LoRA) into clear customer value propositions
- Experience conducting competitive analysis in the inference market, including pricing elasticity, feature differentiation, and customer acquisition patterns
- Comfortable working with engineering teams to debug performance bottlenecks, analyze profiling data, and prioritize kernel-level optimizations
- Bonus: experience with agent frameworks (LangChain, LlamaIndex, AutoGPT), compound AI patterns, or model fine-tuning workflows

COMPENSATION

To provide greater transparency to candidates, we share base pay ranges for all US-based job postings. Our compensation package includes base salary, equity, benefits, and, for applicable roles, commission plans. Our cash compensation range for this role is $180,000-$250,000. Final offers vary based on geography, candidate experience, relevant credentials, and other factors. Outstanding candidates may be eligible for adjusted terms plus meaningful equity.
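The role twice references cost-per-token economics against GPU-hour pricing. As a minimal sketch of that unit-economics calculation (all prices, throughputs, and utilization figures below are hypothetical, not Fluidstack's actual numbers):

```python
# Hypothetical illustration of cost-per-token unit economics.
# None of these numbers reflect real Fluidstack pricing.

def cost_per_million_tokens(gpu_hour_price: float,
                            tokens_per_second: float,
                            utilization: float) -> float:
    """Blended serving cost per 1M tokens on a single GPU.

    gpu_hour_price    -- cost to run the GPU for one hour (USD)
    tokens_per_second -- sustained decode throughput on that GPU
    utilization       -- fraction of the hour spent serving paid traffic
    """
    effective_tokens_per_hour = tokens_per_second * 3600 * utilization
    return gpu_hour_price / effective_tokens_per_hour * 1_000_000

# Example: $2.50/GPU-hour, 1,000 tok/s sustained, 60% utilization
print(round(cost_per_million_tokens(2.50, 1000.0, 0.6), 2))  # → 1.16
```

The same arithmetic, run in reverse, is how a per-token price implies a utilization floor: halving utilization doubles the cost per token, which is why the dedicated-vs-serverless allocation tradeoff mentioned above matters.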
