Manager, HPC Storage Engineer

RunPod · Remote (US) · $150k - $240k
Full-time · Lead · Posted 2 months ago

About this role

Runpod is pioneering the future of AI and machine learning, offering cutting-edge cloud infrastructure for full‑stack AI applications. Founded in 2022, we are a rapidly growing, well‑funded, remote‑first company with a global team across the US, Canada, and Europe. Our mission is to create a foundational platform that enables developers and companies to build, deploy, and scale custom AI systems with speed and flexibility.

As AI workloads continue to push the limits of throughput, latency, and parallelism, Runpod is investing heavily in next-generation storage architectures purpose-built for GPU-centric compute. We are looking for an Engineering Manager, Datacenter Storage Engineering to lead the team responsible for Runpod's distributed storage infrastructure across all regions. This role owns the end-to-end storage stack — from NAND and NVMe devices through filesystems, transport protocols, and cluster-level deployment — ensuring performance, reliability, and scalability for AI workloads. You will manage engineers designing and operating large-scale SAN and NFS-based systems, including high-performance shared filesystems for training workloads. This role requires deep technical fluency and architectural leadership, combined with strong people management and operational discipline.

Responsibilities

- Own Distributed Storage Architecture: Define, evolve, and operate Runpod's global storage platforms, supporting training, inference, checkpointing, and dataset access at scale.
- Build the Storage Engineering Team: Manage and grow a team of storage and systems engineers. Set clear ownership, technical direction, and operational standards across regions.
- High-Performance Shared Filesystems: Design and operate large-scale SAN and NFS deployments, including performance-sensitive shared storage for GPU clusters.
- Advanced Filesystems & Platforms: Lead deployment and operation of VAST Data, along with Lustre or similar parallel filesystems used in HPC and AI environments.
- End-to-End Performance Ownership: Drive performance optimization from NAND and NVMe media through controllers, networking, and client access patterns.
- Next-Generation Storage Technologies: Evaluate and deploy cutting-edge capabilities such as NFS over RDMA, GPU Direct Storage (GDS), and low-latency data paths for accelerated workloads.
- Reliability & Scale: Establish best practices for replication, data tiering, data protection, failure recovery, capacity planning, and lifecycle management.
- Automation & Observability: Build automation for provisioning, expansion, upgrades, and monitoring. Ensure deep observability into throughput, latency, and error characteristics.
- Cross-Functional Collaboration: Partner with Datacenter Networking, GPU Platform, SRE, and Product teams to ensure storage systems meet evolving workload and customer needs.
- Vendor & Partner Management: Own technical relationships with storage vendors, hardware partners, and colocation providers; drive roadmap alignment and issue resolution.

Requirements

- Engineering Leadership Experience: 3+ years managing storage, systems, or infrastructure engineering teams in production environments.
- Distributed Storage Expertise: 8+ years designing and operating large-scale storage systems, including SAN and NFS architectures at multi-petabyte scale.
- VAST Data Experience: Hands-on experience deploying, operating, or deeply integrating VAST Data in production environments is required.
- Parallel Filesystems: Experience with Lustre or comparable HPC filesystems (e.g., GPFS, BeeGFS) supporting high-concurrency workloads.
- Low-Level Storage Knowledge: Deep understanding of NAND, NVMe, PCIe, storage controllers, and performance characteristics across the stack.
- High-Performance Data Paths: Proven experience with NFS over RDMA, RDMA-capable transports, or similar technologies. Familiarity with GPU Direct Storage strongly preferred.
- Linux Systems Expertise: Strong Linux internals knowledge, including filesystems, I/O scheduling, memory management, and tuning for performance workloads.
- Operational Excellence: Experience running 24/7 storage platforms with strong incident response, change management, and post-mortem discipline.
- Communication & Leadership: Ability to clearly communicate complex technical tradeoffs and lead teams through high-stakes infrastructure decisions.
- Successful completion of a background check.

Preferred Qualifications

- Experience supporting AI training pipelines, large-scale model checkpointing, and dataset streaming workloads.
- Familiarity with RDMA fabrics and close collaboration with datacenter networking teams.
- Experience designing storage systems for multi-tenant isolation and secure data access.
- Background in hyperscale, HPC, or AI-focused infrastructure environments.
- Experience building internal storage platforms or abstractions consumed by product teams.

What You'll Receive:

The competitive base pay for
