Software Engineer, GenAI
Full-time
Senior
Posted 1 month ago
About this role
ABOUT ABRIDGE
Abridge was founded in 2018 with the mission of powering deeper understanding in healthcare. Our AI-powered platform was purpose-built for medical conversations, improving clinical documentation efficiency while enabling clinicians to focus on what matters most—their patients.
Our enterprise-grade technology transforms patient-clinician conversations into structured clinical notes in real time, with deep EMR integrations. Powered by Linked Evidence and our purpose-built, auditable AI, we are the only company that maps AI-generated summaries to ground truth, helping providers quickly trust and verify the output. As pioneers in generative AI for healthcare, we are setting the industry standards for the responsible deployment of AI across health systems.
We are a growing team of practicing MDs, AI scientists, PhDs, creatives, technologists, and engineers working together to empower people and make care make more sense. We have offices located in the Mission District in San Francisco, the SoHo neighborhood of New York, and East Liberty in Pittsburgh.
THE ROLE
We are looking for GenAI Engineers of all levels who are passionate about making a positive impact. You’ll collaborate closely with a cross-functional team of researchers, clinicians, and engineers to translate cutting-edge language model capabilities into dependable, real-world clinical systems. Your focus will be on designing advanced LLM-driven workflows that can reason through complex clinical contexts, leverage agentic capabilities and structured tool use, navigate branching chains of LLM calls, integrate seamlessly with retrieval systems, and consistently generate outputs that meet the highest standards of clinical reliability and trust.
A major part of this role will involve developing and applying rigorous evaluation frameworks (both automated and human-in-the-loop) to continuously assess accuracy, robustness, multilingual capabilities, and more. This is an opportunity to design experiments that probe failure modes, simulate edge cases, and stress-test LLM workflows under realistic load and challenging real-world conditions. You’ll apply a disciplined, data-driven approach to understanding model behavior—developing tools to measure system performance, conducting A/B tests against established baselines, and generating clear, actionable insights that inform deployment decisions. This high-impact role will own the end-to-end productionization of LLM workflows: deploying models into low-latency, high-uptime environments, building monitoring and observability systems, implementing post-processing guardrails, and managing workflow versioning.
WHAT YOU’LL DO
- Design and build GenAI systems that turn LLMs into composable, dependable tools—leveraging retrieval, tool use, agentic reasoning, and structured outputs.
- Collaborate with ML and infra engineers to scale and optimize GenAI workflows, managing latency, context windows, and model choice.
- Write high-quality, modular code that’s graceful under failure, flexible to change, and easy to iterate on.
- Own major architectural decisions—how we design workflows, define data flow, cache intermediate state, and structure generative outputs.
- Drive rigorous evaluation: build benchmark datasets, develop automated and human-in-the-loop frameworks, design experiments to surface failure modes and edge cases, run A/B tests to inform deployment, and distill insights from clinician feedback to evaluate and guide model improvement.
- Leverage frontier capabilities: rapidly prototype with new models and model capabilities, open-source tools, and novel prompting techniques.
WHAT YOU’LL BRING
- 3+ years of experience building production-grade systems, with 1–2+ years focused on GenAI or LLM-powered products.
- Deep fluency with LLM APIs, prompting strategies, and orchestration patterns (e.g., LangChain, LlamaIndex, custom pipelines).
- Experience with retrieval systems (e.g., semantic and lexical retrieval, vector DBs, efficient kNN), function calling, tool use, or agentic workflows.
- Working knowledge of model evaluation, including experience building diverse datasets, conducting both automated and human-in-the-loop evaluations, running A/B tests, and working with subject matter experts to guide model improvement.
- Strong Python fundamentals—including the ability to write clean code and design comprehensive test cases, plus familiarity with core language features and the standard library; experience with async programming, performance profiling, packaging, and deployment tooling is strongly preferred.
- Good taste and intuition: you know when to move fast, ship, and iterate, and when to take a beat to tackle tech debt.
We value people who are eager to learn new things and recognize that great team members might not perfectly match a job description. If you’re interested in the role but aren’t sure whether you’re a good fit, we’d still like to hear from you.