Principal Research Scientist – Scaling

Databricks · San Francisco, CA · $280k - $350k
Full-time · Principal · Posted 6 days ago

About this role

Principal Research Scientist – Scaling (P-1227)

About Databricks AI

At Databricks, we are obsessed with enabling data teams to solve the world’s toughest problems, from security threat detection to cancer drug development, by building and running the world’s best data and AI platform. The Databricks AI Research organization enables companies to develop AI models and systems using their own data, from pre-training LLMs from scratch to state-of-the-art retrieval-augmented generation, by producing novel science and putting it into production. We believe a company’s AI models are a core part of their IP, and that high-quality AI models should be available to all.

About the Scaling Research Team

The Databricks AI Scaling team focuses on pushing the boundaries of large language model (LLM) training and inference efficiency beyond what is required to support existing models. The team explores novel avenues for scaling and efficiency improvements across algorithms, systems, and infrastructure, requiring researchers who can both drive independent research agendas and dive deep into low-level implementation details with engineering partners.

Role Summary

As a Principal Research Scientist – Scaling, you will lead a team of world-class researchers and engineers to advance the state of the art in large-scale machine learning, focusing on post-training, RL and inference efficiency, optimization, and scaling. You will define and execute a research roadmap that advances the Databricks AI platform and delivers tangible improvements to how customers train, serve, and adapt LLMs at scale, working closely with product, data, and engineering leaders to bring cutting-edge methods into production.

The Impact You Will Have

- Lead and grow a multidisciplinary research team focused on foundational and applied AI problems, with a particular emphasis on LLM scaling, efficiency, and systems performance.
- Define the scaling research roadmap in alignment with Databricks’ strategic objectives, prioritizing advances in foundation model efficiency and large-scale training and inference.
- Drive algorithmic innovations for large-scale neural network training and inference, including novel optimizers, low-precision techniques, and model adaptation methods, and guide your team in rigorous empirical validation against state-of-the-art approaches.
- Optimize end-to-end ML systems for distributed training and RL, memory efficiency, and compute efficiency through close collaboration with core systems and platform teams, ensuring that research ideas translate into performant, reliable infrastructure.
- Partner with product and engineering to translate research breakthroughs, especially around scaling and efficiency, into customer-impacting capabilities in the Databricks AI platform.
- Foster a culture of scientific excellence and openness, including high-quality research practices, reproducible experimentation, and effective internal knowledge sharing across Databricks AI.
- Represent Databricks AI research externally through top-tier publications, conference talks, and collaborations with academia and the open-source community, with a focus on optimization and efficiency for large-scale models.
- Mentor and develop talent, providing both technical guidance (research agendas, experimentation, implementation) and career development support for research scientists and engineers.

What You Will Do

- Define and lead independent research programs on foundation model efficiency, covering topics such as optimizer design, low-precision training/inference, scalable model architectures, and efficient adaptation methods.
- Oversee the design and execution of large-scale experiments, including benchmarking against state-of-the-art methods and evaluating trade-offs in quality, latency, throughput, and cost.
- Work hands-on with your team on high-quality, efficient code in Python and PyTorch for research implementation, rapid prototyping, and integration with Databricks’ production systems.
- Collaborate with distributed systems and infrastructure teams to push the limits of distributed training, parallelism strategies, memory management, and hardware utilization for LLMs and other large models.
- Establish metrics, evaluation protocols, and best practices for scaling-focused research (e.g., training efficiency, inference cost, energy usage) and drive their adoption across Databricks AI.
- Champion responsible and robust deployment of scaling innovations, ensuring that model behavior, reliability, and safety remain first-class considerations.

What We Look For

- Proven ability to lead a research team to develop novel techniques for foundation model efficiency and related topics, with a strong track record of industry impact.
- Deep expertise in at least one of: generative AI, LLMs, distributed ML systems, model optimization, or responsible AI, with a strong emphasis on scaling and efficiency for la
