Principal Machine Learning Infrastructure Engineer

PhysicsX · London, UK
Full-time · Principal · Posted 3 days ago

About this role

About us

PhysicsX is a deep-tech company with roots in numerical physics and Formula One, dedicated to accelerating hardware innovation at the speed of software. We are building an AI-driven simulation software stack for engineering and manufacturing across advanced industries. By enabling high-fidelity, multi-physics simulation through AI inference across the entire engineering lifecycle, PhysicsX unlocks new levels of optimization and automation in design, manufacturing, and operations, empowering engineers to push the boundaries of possibility. Our customers include leading innovators in Aerospace & Defense, Materials, Energy, Semiconductors, and Automotive.

Note: We are currently recruiting for multiple positions; please apply only for the role that best aligns with your skill set and career goals.

The Role

The Principal ML Infrastructure Engineer will extend and operate the infrastructure that powers our research model training, fine-tuning, and serving pipelines. You will be embedded within our Research function, partnering directly with ML engineers and research scientists to ensure they can train Large Physics Models efficiently and reliably at scale.

Team Context

In this role, you will be vertically embedded in Research, working daily with:

- Research Scientists, who determine the model architectures and methods
- ML Engineers, who implement and develop the models
- Simulation Data Engineers, who are accountable for upstream data pipelines

You will have end-to-end responsibility for the research infrastructure, with the autonomy to make architectural decisions and the responsibility to keep data flowing reliably. Horizontally, you will be part of an infrastructure engineering group responsible for infrastructure across the company.

What you will do

Training Infrastructure
- Design and operate distributed training infrastructure for neural operator architectures (Transolver, Point Cloud Transformer, etc.) on our large NVIDIA DGX B200 platform.
- Optimize training pipelines for throughput, fault tolerance, and cost efficiency, including checkpointing strategies, gradient accumulation, and multi-node synchronization.
- Build and maintain experiment tracking and observability systems that give researchers clear visibility into training runs, hyperparameter sweeps, and model performance.

Data I/O and Performance
- Solve data loading bottlenecks for large-scale mesh datasets.
- Optimize data pipelines for efficient I/O from cloud storage, including prefetching, caching, and format optimization.
- Work with heterogeneous data sources of varying formats and resolutions.

Model Serving and Deployment
- Build serving infrastructure for pre-trained LPMs, supporting both zero-shot inference and uncertainty quantification (Monte Carlo Dropout).
- Design and implement model packaging pipelines for customer deployment; models must run reliably in customer environments with fine-tuning capabilities.
- Ensure reproducibility: any model checkpoint should be deployable with consistent behaviour.

Platform and Tooling
- Improve developer experience for the Research team with fast iteration cycles, reliable CI/CD, and clear debugging tools.
- Collaborate with the broader Infrastructure team on shared patterns and standards.

What you bring to the table

- Ability to scope and effectively deliver projects, prioritising activity as needed.
- Problem-solving skills and the ability to analyse issues, identify causes, and recommend solutions quickly.
- Excellent collaboration and communication skills, especially in a research setting: you can translate "the model isn't converging" into infrastructure hypotheses and solutions, and can bridge technical abstractions with implementations.
- 5+ years of experience building and operating ML infrastructure at scale:
  - Deep expertise in distributed training: you've debugged NCCL hangs, optimized collective communication, and know when to use FSDP vs. DDP vs. pipeline parallelism
  - Strong systems fundamentals: Linux, networking (including domain-specific NVLink and InfiniBand), storage I/O, profiling, and performance optimization
  - Production experience with Kubernetes and SLURM for job orchestration on GPU clusters
  - Proficiency in Python and ML frameworks (PyTorch strongly preferred)
  - Experience with cloud GPU infrastructure, ideally CoreWeave or similar GPU/HPC-focused clouds

Ideally

- Experience with geometric deep learning or neural operators: architectures that operate on meshes, point clouds, or graphs
- Background in HPC for simulation engineering, and familiarity with how CFD/FEA workflows generate and consume data
- Experience building model serving infrastructure with latency and throughput requirements
- Familiarity with experiment tracking tools (Weights & Biases, MLflow) and observability stacks (Prometheus, Grafana)
- Experience packaging models for deployment into customer environments (containers, model registries, versioning)

What we offer
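The data I/O responsibilities above mention prefetching to hide storage latency. As a rough illustration of that idea only (not PhysicsX code; all names are hypothetical), a background thread can read ahead from a slow source while the consumer processes earlier items:

```python
# Illustrative sketch of read-ahead prefetching, a common data-pipeline
# pattern; function and variable names here are hypothetical.
import queue
import threading


def prefetch(iterable, buffer_size: int = 4):
    """Yield items from `iterable`, reading ahead in a background thread
    so slow I/O overlaps with downstream processing."""
    q = queue.Queue(maxsize=buffer_size)
    done = object()  # sentinel marking the end of the stream

    def worker():
        for item in iterable:
            q.put(item)  # blocks once the buffer is full
        q.put(done)

    threading.Thread(target=worker, daemon=True).start()
    while (item := q.get()) is not done:
        yield item


# Example: wrap a (stand-in) data source; order is preserved.
batches = list(prefetch(range(10)))
```

In a real training pipeline the same role is usually played by the framework's loader (e.g. PyTorch `DataLoader` worker processes), but the buffering principle is the same.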

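The serving responsibilities above name Monte Carlo Dropout for uncertainty quantification. A minimal PyTorch sketch of the technique, assuming a generic model with dropout layers (illustrative only, not the company's implementation):

```python
# Monte Carlo Dropout sketch: keep dropout stochastic at inference time and
# aggregate repeated forward passes into a mean prediction and an
# uncertainty estimate. The toy model below is purely illustrative.
import torch
import torch.nn as nn


def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 32):
    """Return (mean, std) over `n_samples` stochastic forward passes."""
    model.eval()
    # Re-enable only the dropout layers, leaving the rest in eval mode.
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)


# Example with a toy regression head.
model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(16, 1)
)
mean, std = mc_dropout_predict(model, torch.randn(4, 8))
```

The per-sample standard deviation is the quantity a serving layer would expose alongside the prediction as a cheap uncertainty signal.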