Software Engineer, Safeguards Foundations (Internal Tooling)
Full-time
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
The Safeguards team is responsible for the systems that detect, review, and act on misuse of Anthropic's models — work that sits at the very centre of our mission to develop AI safely. Within Safeguards, the Foundations team builds the platforms, infrastructure, and internal tools that the rest of the organisation depends on to do this well.
We are looking for a software engineer to own and extend the internal tooling that powers human review — the case management, labelling, investigation, and enforcement interfaces our analysts and policy specialists use every day. These are back-office tools, but they are anything but low-stakes: the speed, clarity, and reliability of this tooling directly determines how quickly Anthropic can identify harmful behaviour, make sound enforcement decisions, and feed signal back into model training. You'll work closely with Trust & Safety operations, policy, and detection-engineering teams to turn messy operational workflows into well-designed, durable software.
This is a hands-on, full-stack role for someone who enjoys building products for internal users, sweats the details of usability and correctness, and wants their engineering work to have a clear line to real-world safety outcomes.
Responsibilities
Design, build, and maintain the internal review and enforcement tooling used by Safeguards analysts — including case queues, content review surfaces, decision/audit logging, and account-actioning workflows
Understand user workflows end to end and build coherent tooling for processes that are currently distributed across a number of tools and UIs
Develop the ‘base layer’ of reusable APIs, data storage, and backend services that let new review workflows be stood up quickly and safely
Partner with operations and policy teams to understand reviewer pain points, then translate them into clear product improvements that reduce handling time and decision error
Integrate tooling with upstream detection systems and downstream enforcement infrastructure so that flagged behaviour flows cleanly from signal → human review → action
Build in the guardrails that sensitive internal tools require: granular permissions, audit trails, data-access controls, and reviewer wellbeing features (e.g. content blurring, exposure limits)
Instrument the tools you ship — surfacing metrics on queue health, reviewer throughput, and decision quality so the team can see what's working
Contribute to the Foundations team's shared platform and on-call responsibilities
You may be a good fit if you
Have 4+ years of experience as a software engineer, with meaningful time spent building internal tools, operations platforms, or back-office products
Are comfortable using agentic coding tools (e.g. Claude Code) as a core part of your workflow, and can direct them to ship well-tested, production-quality software at a high cadence without lowering the bar (our stack is mostly React/TypeScript and Python)
Take a product-minded approach to internal users: you work with the people using your tools, watch where they struggle, and fix it
Are results-oriented, with a bias towards flexibility and impact
Pick up slack, even if it goes outside your job description
Communicate clearly with non-engineering stakeholders and can explain technical trade-offs to operations and policy partners
Care about the societal impacts of your work and want to apply your engineering skills directly to AI safety
Strong candidates may also
Have built tooling in a trust & safety, content moderation, fraud, integrity, or risk-operations setting
Have experience designing case-management or workflow systems (queues, SLAs, escalation paths, audit logs)
Have worked with sensitive data and understand the privacy, access-control, and reviewer-wellbeing considerations that come with it
Have experience with GCP/AWS, Postgres/BigQuery, and CI/CD in a production environment
Have used LLMs as a building block inside operational tools (e.g. assisted triage, summarisation, or classification in the review loop)
Representative projects
Rebuilding the analyst review queue so cases are routed by severity and skill, with full decision history and one-click escalation
Shipping a unified account-investigation view that pulls signals from multiple detection systems into a single, permissioned surface
Adding content-obfuscation and exposure-tracking features to protect reviewers working with harmful material
Building an internal labelling tool that feeds high-quality ground truth back to the detection and research teams
Candidates need not have
100% of the skills listed above