Principal AI/ML Platform Engineer

Natera · Remote (US) · $174k - $218k
Full-time · Principal · Posted 4 months ago

About this role

Role Overview

The Principal AI/ML Platform Engineer is responsible for the build and delivery of the next generation of Natera’s Generative AI and ML platforms. This is a hands-on technical leadership role at the intersection of engineering excellence, platform design, and applied GenAI/ML innovation. It requires deep expertise in AI engineering at scale and a passion for building robust, compliant, high-performance systems that directly impact patient outcomes and clinical innovation.

You will design, build, and scale enterprise-grade GenAI and ML platforms and services that power internal workflows (R&D, Lab Ops, Clinical Trials, Billing, Patient/Provider engagement) as well as external-facing AI/ML products. As the most senior leader on the AI/ML engineering team, you will also set technical standards, mentor engineers, and drive adoption of cutting-edge techniques such as retrieval-augmented generation (RAG), advanced prompt engineering, vector search, GenAI governance, evaluation frameworks, ML/LLMOps, model experimentation, observability, and compliance-first AI pipelines. You will be responsible for developing a production-ready AI platform with reusable components used to deploy multiple AI solutions across Natera’s business units in a federated approach, and for establishing clear standards and best practices for AI/ML development across the organization.
Key Responsibilities

AI/ML Platform Architecture & Design

- Define the technical vision and architecture for Natera’s ML and GenAI platforms, ensuring scalability, reliability, and compliance across diverse use cases.
- Build, operate, and evolve core AI platform components: standardized data access; LLM model registries for versioning and lifecycle tracking; evaluation pipelines for model validation and monitoring; vector databases, RAG frameworks, and agent frameworks for GenAI applications; and prompt orchestration and guardrails for safe, compliant LLM deployments.
- Design, build, and operate end-to-end ML/DL/FM infrastructure (feature engineering, distributed training, evaluation, deployment, monitoring) that is modular, reproducible, and auditable.
- Design, build, and operate reusable GenAI services such as unstructured data extraction, classification, summarization, generation, retrieval from knowledge bases, and prompt optimization.

Hands-On Engineering & Solution Delivery

- Implement production-grade GenAI and ML services and APIs that power critical workflows, from genomics analytics to clinical trial optimization to patient-facing solutions.
- Lead the deployment and scaling of large models (custom-trained LLMs, multimodal, deep learning) using modern MLOps practices (Kubernetes, MLflow, AWS-native services).
- Deliver retrieval-augmented generation (RAG), agentic runtimes, agent orchestration frameworks, and domain-specific copilots in compliance-ready environments.
- Optimize inference latency, throughput, and cost efficiency through infrastructure design and algorithmic improvements.
- Build online and offline evaluation frameworks to ensure performance and real-world utility.

Governance, Security & Compliance Integration

- Embed governance and monitoring guardrails into AI and ML pipelines, including bias testing, safety, security, hallucination detection, explainability, PHI/PII redaction, and audit trails.
- Partner with the Head of Data & AI Governance to ensure adherence to HIPAA, CLIA, CAP, FDA, GxP, GDPR, and emerging AI regulations.
- Establish automated checks and controls in the CI/CD and SDLC processes to maintain compliance by design.

Technical Leadership & Mentorship

- Act as the principal technical authority in AI/ML engineering: set coding standards, review designs, and ensure best practices in reproducibility, monitoring, and observability.
- Mentor and guide engineers and data scientists, providing thought leadership on system design, optimization, and responsible AI.
- Influence cross-functional roadmaps by partnering with Product, Data Governance, and Engineering leadership to align delivery with business needs.

Innovation, Research & Tooling Strategy

- Evaluate and integrate emerging AI/ML technologies (foundation models, multimodal AI, biomedical reasoning agents, federated learning).
- Lead build-vs.-buy assessments for AI/ML tooling and platforms; integrate open-source or vendor solutions where appropriate.
- Prototype and productionize new approaches (e.g., foundation model fine-tuning, GenAI copilots for lab workflows, advanced monitoring frameworks).
- Represent Natera’s AI/ML technical capabilities externally at conferences, in publications, and in industry forums.

Qualifications

Required:

- 12+ years in software/data/ML engineering, with 8+ years in AI/ML engineering at scale.
- Expertise in building production-grade ML/LLM systems on the AWS tech stack (e.g., Python,
