Senior AI Researcher - Pre-training (f/m/d)

Aleph Alpha · Heidelberg
Full-time · Senior · Posted 6 hours ago

About this role

Our Mission

Aleph Alpha is one of the few companies in Europe doing serious foundation model pre-training. Our customers in finance, manufacturing, and public administration need models that understand German, meet European regulatory requirements, and work reliably in high-stakes settings. We are building that in Heidelberg.

We are hiring a Senior AI Researcher to join our Pre-training team and advance the architecture and training of our next generation of foundation models. If you are excited about designing inference-efficient architectures, optimising training recipes that scale reliably, and training models on a large-scale cluster (thousands of NVIDIA Blackwell GPUs), we would love to hear from you.

Team Culture

We foster a culture built on ownership, autonomy, and empowerment. Teams and individual contributors are trusted to take responsibility for their work and drive meaningful impact. We maintain a flat organisational structure with efficient, supportive management that enables quick decision-making, open communication, and a strong sense of shared purpose. We collaborate closely on complex technical problems, working in pairs or using mob programming to resolve challenging issues.

About the Role

As a Senior AI Researcher in Pre-training, you will work on the core technical problems that determine whether large-scale pre-training succeeds: architecture, optimisation, stability, and scaling up. You will work at the intersection of model architecture, training dynamics, and large-scale distributed training, translating empirical observations into principled training decisions. From small-scale proxy experiments to multi-thousand-GPU runs, you will ensure our models converge as expected and scale efficiently.

We are looking for someone who combines significant research experience with strong engineering ability. You should be comfortable reasoning mathematically about training behaviour, designing rigorous experiments, and maintaining a high-quality production codebase. Your work sits at a point of high leverage: the training decisions you make directly determine model quality, run reliability, inference efficiency, and how quickly we can improve the next generation of models. You will have direct influence on the models we ship.

Your Responsibilities

- Training Recipe Optimisation: Own and improve core elements of the training recipe, including optimiser settings, learning rate schedules, initialisation, regularisation, and other choices that materially affect convergence, stability, and final model quality.
- Scaling Strategy and Hyperparameter Transfer: Develop and validate scaling strategies for models and training recipes, including hyperparameter scaling, scale-up methodology, and empirical scaling laws. You will use carefully designed experiments to predict large-scale behaviour from smaller runs and reduce uncertainty in major training decisions (see the scaling-law sketch after this list).
- Model Architecture Development: Design, implement, and evaluate architectural improvements in PyTorch, with a focus on training stability, scalability, efficiency in training and inference, and overall model performance.
- Training Stability and Diagnostics: Investigate and resolve convergence issues such as loss spikes, divergence, optimiser pathologies, or numerical instability, and develop diagnostics that improve visibility into training health (see the monitoring sketch after this list).
- System-Model Co-Design: Collaborate with the Compute Performance, Data, Evaluation, and Post-Training teams to ensure full pipeline alignment across the model lifecycle, while satisfying performance requirements and hardware constraints (e.g., memory bandwidth and communication topology).
- Distributed Training Debugging: Diagnose and resolve complex failures in large-scale distributed runs, including communication failures, race conditions, synchronisation issues, and other hard-to-reproduce problems (see the desync probe after this list).
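For a concrete sense of the scaling-law work: a minimal sketch of fitting an empirical saturating power law to final losses from small proxy runs and extrapolating to a larger compute budget. The data points, the functional form L(C) = a * C**-b + c, and the fit settings are illustrative assumptions, not the team's actual methodology.

```python
# Minimal sketch: fit a saturating power law L(C) = a * C**-b + c to
# final losses from small proxy runs, then extrapolate to a target budget.
# All data points and constants below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

compute = np.array([1e18, 3e18, 1e19, 3e19, 1e20])  # proxy-run budgets (FLOPs)
loss = np.array([3.10, 2.85, 2.62, 2.44, 2.29])     # final validation losses

def power_law(c, a, b, floor):
    # Loss falls as c**-b toward an irreducible floor.
    return a * c**-b + floor

params, _ = curve_fit(
    power_law, compute, loss,
    p0=(1e3, 0.15, 1.5),                    # rough initial guesses
    bounds=([0, 0, 0], [np.inf, 1.0, 3.0]),
)
a, b, floor = params
target = 1e22  # planned large-scale run
print(f"fit: a={a:.3g}, b={b:.3f}, floor={floor:.3f}")
print(f"predicted loss at {target:.0e} FLOPs: {power_law(target, *params):.3f}")
```

In practice one would fit many more runs, inspect the residuals, and treat the extrapolation as a prior to be narrowed with intermediate-scale runs rather than as a point prediction.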
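To illustrate the kind of diagnostic the stability work involves, here is a minimal monitoring sketch: a spike detector that compares each new loss or gradient-norm reading against an exponential moving estimate of its recent mean and variance. The decay, threshold, and warmup values are placeholders, not a tuned recipe.

```python
# Minimal sketch of a training-health diagnostic: flag loss or grad-norm
# readings that jump far above their recent moving statistics.
import math

class SpikeDetector:
    def __init__(self, decay: float = 0.99, z_threshold: float = 6.0, warmup: int = 100):
        self.decay = decay
        self.z_threshold = z_threshold
        self.warmup = warmup
        self.steps = 0
        self.mean = 0.0
        self.var = 0.0

    def update(self, value: float) -> bool:
        """Return True if `value` is a spike relative to recent history."""
        self.steps += 1
        is_spike = False
        if self.steps > self.warmup:
            std = math.sqrt(self.var) + 1e-12
            is_spike = (value - self.mean) / std > self.z_threshold
        if not is_spike:
            # Exclude spikes from the statistics so one outlier does not
            # inflate the baseline it is measured against.
            d = self.decay if self.steps > 1 else 0.0
            delta = value - self.mean
            self.mean += (1 - d) * delta
            self.var = d * (self.var + (1 - d) * delta * delta)
        return is_spike

loss_monitor, grad_monitor = SpikeDetector(), SpikeDetector()

# Inside the training loop (sketch; `log_spike` is a hypothetical hook):
#   grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
#   if loss_monitor.update(loss.item()) or grad_monitor.update(float(grad_norm)):
#       log_spike(step)  # e.g. skip the batch or roll back to a checkpoint
```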
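Finally, one flavour of distributed-debugging tool, sketched under the assumption of pure data parallelism (where every rank should hold bitwise-identical weights): a cheap probe that compares a scalar fingerprint of the parameters across ranks to catch silent desynchronisation.

```python
# Minimal sketch: detect silent parameter desynchronisation across
# data-parallel ranks. Assumes pure data parallelism, so the fingerprints
# must match exactly on every rank.
import torch
import torch.distributed as dist

def check_ranks_in_sync(model: torch.nn.Module, step: int) -> None:
    with torch.no_grad():
        # A cheap scalar fingerprint of all parameters on this rank.
        fingerprint = torch.stack(
            [p.detach().float().sum() for p in model.parameters()]
        ).sum().reshape(1)
    lo, hi = fingerprint.clone(), fingerprint.clone()
    dist.all_reduce(lo, op=dist.ReduceOp.MIN)
    dist.all_reduce(hi, op=dist.ReduceOp.MAX)
    if bool(hi != lo):
        raise RuntimeError(
            f"step {step}: parameter fingerprints diverge across ranks "
            f"(min={lo.item()}, max={hi.item()})"
        )
```

Run every few hundred steps, a probe like this turns a silent divergence (one rank applying a different update) into an immediate, attributable failure instead of a mysteriously worse loss curve.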
Core Qualifications

- You are proficient in Python and deeply familiar with PyTorch-based training workflows.
- You have a strong track record in machine learning research and software engineering, demonstrated through shipped models, impactful open-source contributions, or published research.
- You have a strong mathematical foundation and are comfortable reasoning formally about optimisation, scaling behaviour, and training dynamics.
- You have a deep understanding of transformer training dynamics, optimisation, and the behaviour of large distributed training jobs.
- You can design rigorous experiments, reason clearly from noisy results, and translate empirical observations into robust training decisions.
- You apply strong software engineering practices, including writing maintainable, well-tested code and supporting reproducible experimentation workflows.
- You can implement complex model architectures efficiently and reliably, and debug complex issues across model code, training dynamics, and distributed systems.
- You collaborate effectively within a research team.
