Lead ML Engineer (VLA/ALM focused)
Full-time · Lead
Posted 1 day ago
About this role
May Mobility is transforming cities through autonomous technology to create a safer, greener, more accessible world. Based in Ann Arbor, Michigan, May develops and deploys autonomous vehicles (AVs) powered by our innovative Multi-Policy Decision Making (MPDM) technology, which reimagines the way AVs think. Our vehicles do more than drive themselves: they provide value to communities, bridge public transit gaps, and move people where they need to go safely, easily, and with a lot more fun. We're building the world's best autonomy system to reimagine transit by minimizing congestion, expanding access, and encouraging better land use in order to foster more green, vibrant, and livable spaces. Since our founding in 2017, we've given more than 500,000 autonomous rides to real people around the globe. And we're just getting started. We're hiring people who share our passion for building the future, today; solving real-world problems; and seeing the impact of their work. Join us.
Job Summary
May Mobility is entering an exciting phase of growth as we expand our first-of-its-kind autonomous shuttle and mobility services across the nation. Founded in 2017 by roboticists and software engineers with decades of experience fielding robotic systems in the wild, May Mobility is looking to grow its engineering team with candidates who have a background in robotics or autonomous vehicles.
Essential Responsibilities
Work independently with cross-functional teams to develop software and system requirements.
Design, implement, and test state-of-the-art perception features on schedule, delivering high-quality, industrial-grade production code.
Integrate Vision-Language Models (VLMs) and Large Language Models (LLMs) into the perception stack to improve semantic scene understanding and reasoning.
Track and trend technical performance of perception in the field.
Lead major feature development including feature design, code reviews, issue diagnosis, and resolution.
Lead extensive testing to validate features and satisfy release schedules.
Lead development related to data, development, and ML pipelines, specifically focused on multimodal data alignment for training foundation models.
Skills and Abilities
Success in this role typically requires the following competencies:
Familiarity with the ML development cycle, deployment, and optimization.
Deep understanding of data: data pipelines, data balancing, data mining, and data-driven performance improvement.
Knowledge of multimodal learning techniques, including contrastive learning and prompt engineering for zero-shot visual recognition.
Deep understanding of testing frameworks and workflows.
Excellent attention to detail and rigorous testing methodology.
Exceptional written and verbal communication skills and team leading abilities.
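To make the "contrastive learning and prompt engineering for zero-shot visual recognition" competency above concrete, here is a minimal sketch of CLIP-style zero-shot classification. The prompt strings and 4-d embeddings are toy stand-ins for the output of real image and text encoders; this is an illustration of the scoring step, not May Mobility's actual stack.

```python
import math

def zero_shot_scores(image_emb, text_embs, temperature=0.07):
    """CLIP-style zero-shot scoring: cosine similarity between one image
    embedding and a bank of text-prompt embeddings, softmaxed into
    per-class probabilities."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    img = norm(image_emb)
    logits = [sum(a * b for a, b in zip(norm(t), img)) / temperature
              for t in text_embs]
    m = max(logits)                       # numerically stable softmax
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical prompt bank; in practice these embeddings would come from
# a pretrained text encoder such as CLIP's.
prompts = ["a photo of a pedestrian", "a photo of a cyclist", "a photo of a vehicle"]
text_embs = [[0.9, 0.1, 0.0, 0.1],
             [0.1, 0.9, 0.1, 0.0],
             [0.0, 0.1, 0.9, 0.1]]
image_emb = [0.85, 0.15, 0.05, 0.1]       # toy "pedestrian-like" image embedding

probs = zero_shot_scores(image_emb, text_embs)
print(prompts[probs.index(max(probs))])   # prints "a photo of a pedestrian"
```

Because classes are defined purely by text prompts, new categories can be recognized without retraining, which is what makes this approach attractive for long-tail detection in the field.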
Qualifications and Experience
Candidates most successful in this role typically hold the following qualifications or comparable knowledge or experience:
Required
5+ years of industry experience working on real-world robotic systems while maintaining high-quality, industrial-grade code.
Master’s degree in Robotics, Computer Science, or Computer Engineering, or a field that requires a strong mathematical and/or engineering foundation.
Strong programming skills in C/C++/Python; software development in Linux environments.
Strong experience in ML/DL development with PyTorch or TensorFlow.
Direct experience developing or fine-tuning Vision-Language Models (e.g., CLIP, BLIP, or custom VLM backbones) for real-world applications.
Experience in several of the following real-time areas:
Computer Vision: Object detection, classification, segmentation.
Semantic scene understanding and open-vocabulary detection.
Multi-target Tracking and Sensor Fusion.
Localization/prediction/planning.
Extensive experience deploying features/ML/DL models in real-time systems with high accuracy and low latency.
Desirable
Experience utilizing VLMs for edge-case (long-tail) detection and explainable AI in autonomous systems.
Familiar with synthetic data in ML system development.
Familiar with Reinforcement Learning (RL) for ML systems.
Familiar with ML/DL optimization on real-time products with limited compute resources (e.g., quantization of large transformer models).
Strong track record of delivering high-quality capabilities to robots operating in the field.
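The quantization item above can be illustrated with a minimal sketch of symmetric per-tensor int8 quantization, the basic step behind shrinking large transformer weights for compute-constrained real-time inference. The weight values are toy numbers; production pipelines would use framework tooling (e.g., PyTorch's quantization APIs) rather than this hand-rolled version.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats onto
    [-127, 127] using a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [x * scale for x in q]

weights = [0.51, -1.27, 0.003, 0.89, -0.42]   # toy float weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max reconstruction error: {max_err:.4f}")
```

Storing 8-bit integers plus one scale cuts memory roughly 4x versus float32, at the cost of a reconstruction error bounded by about half the scale per weight.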
Physical Requirements
Standard office working conditions, which include but are not limited to:
Prolonged sitting
Prolonged standing
Prolonged computer use
Travel required: Moderate (11%–25%)
Benefits and Perks
Comprehensive healthcare suite including medical, dental, vision, life, and disability plans. Domestic partners who have been residing together for at least one year are also eligible to participate.
Health Savings and Flexible Spending Accounts