Manager, Scaled Abuse Countermeasures and Research
Full-time · Senior
About this role
Discord is used by over 200 million people every month for many different reasons, but there’s one thing that nearly everyone does on our platform: play video games. Over 90% of our users play games, spending a combined 1.5 billion hours playing thousands of unique titles on Discord each month. Discord plays a uniquely important role in the future of gaming. We are focused on making it easier and more fun for people to talk and hang out before, during, and after playing games.
Discord exists to give people the power to create space to find belonging — to talk regularly with the people they care about and build genuine relationships with friends and communities close to home or around the world.
We're looking for a Manager to lead our Scaled Abuse Countermeasures and Research (SCAR) team — Discord's first line of defense against scaled abuse. SCAR combines rapid incident response with deep threat research and signal generation across bulk fake account creation, login abuse, spam, scams, fraud, and other high-volume threats. This is an opportunity to reshape how the team operates: set a sharper strategy, stand up a structured research program, and lean heavily into ML and AI-powered automation to replace today's manual workflows. This role reports to the Director of Safety Automation.
What You'll Be Doing
Lead and grow the SCAR team, a group of Scaled Abuse Scientists who serve as Discord's first line of defense against bulk fake account creation, login abuse, spam, scams, fraud, and other high-volume threats.
Set a vision for the team that leans heavily into automation — scale SCAR's impact by partnering with Safety ML on ML-driven detection and building AI-powered incident response workflows that increase the team's leverage and reduce manual work.
Define scaled abuse north-star metrics and build a roadmap and prioritization framework that keeps the team focused on the highest-impact problems.
Stand up a research program where the team operates as scientists: surface the most important questions about attacker operations and signals, then systematically answer them through structured research and experimentation.
Close the loop with Safety ML so that SCAR's signals feed directly into model features and pipeline upgrades, and tactical wins translate into long-term, automated countermeasures.
Partner cross-functionally with Product, Safety ML, Data Science, Policy, Legal, Trust & Safety, and Revenue, influencing safety-by-design decisions upstream so abuse is prevented, not just mitigated.
Coach, hire, and grow Scaled Abuse Scientists.
Communicate attacker dynamics, economic incentives, and trade-offs clearly to senior leadership.
What you should have
2+ years of people management experience leading technical teams (engineers, ML engineers, data scientists, or applied researchers), or an equivalent track record of leading, mentoring, or coordinating technical work that has prepared you to step into this role.
4+ years of experience working on Trust & Safety, fraud, anti-abuse, or a closely adjacent adversarial domain at a consumer-scale platform.
Demonstrated ability to drive ML and automation adoption within a Trust & Safety or operations context. You don't need to build models yourself, but you need to understand them well enough to evaluate their quality, direct their development, and push a team toward automated solutions.
Strong analytical skills and fluency in SQL and Python for data investigation and pattern analysis; full software engineering proficiency is not required (see the sketch after this list).
Ability to think from first principles and apply behavioral-economics reasoning to adversarial systems: why are attackers doing this, what are their incentives, and what is the most cost-effective way to break their economics?
Excellent communication and cross-functional collaboration skills, with a history of partnering effectively across engineering, ML, data science, policy, legal, and product teams.
A growth mindset: you seek feedback, reflect on decisions, and actively help your team do the same.
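As a loose illustration of the SQL/Python pattern analysis mentioned above, the sketch below flags bursts of account signups that share a /24 IP prefix within a short window, a crude bulk-account-creation signal. The schema (the `ip` and `created_at` columns), the prefix grouping, and the thresholds are all illustrative assumptions, not Discord's actual detection logic.

```python
import pandas as pd

def flag_bulk_registrations(signups: pd.DataFrame,
                            window: str = "10min",
                            threshold: int = 50) -> pd.DataFrame:
    """Return (ip_prefix, time bucket) groups whose signup volume
    exceeds `threshold` -- a crude bulk-account-creation signal.
    Schema and thresholds are hypothetical, for illustration only."""
    df = signups.copy()
    # Collapse IPv4 addresses to a /24 prefix so accounts rotating
    # hosts inside one subnet still cluster together.
    df["ip_prefix"] = df["ip"].str.rsplit(".", n=1).str[0]
    # Bucket registrations into fixed time windows.
    df["bucket"] = df["created_at"].dt.floor(window)
    counts = (df.groupby(["ip_prefix", "bucket"])
                .size()
                .reset_index(name="signups"))
    return counts[counts["signups"] >= threshold]
```

For instance, flag_bulk_registrations(signups, window="5min", threshold=100) would surface subnets registering 100+ accounts in five minutes, candidates for rate limiting or challenge flows.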
Bonus Points
Experience leveraging LLMs or AI agents for incident response, investigation automation, or signal triage.
Experience tackling scaled abuse problems at large social platforms, marketplaces, or other high-volume consumer products.
Threat intelligence research background, including understanding of internet infrastructure and the tools and techniques attackers use.
A strong passion for Discord and/or gaming, and an appreciation for the communities we serve.
A relevant degree in Computer Science, Machine Learning, Statistics, or a related quantitative field, or equivalent practical experience.
Candidates must reside in or be willing to relocate to the San Francisco Bay Area (Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Solano, and Sonoma counties). Relocation assistance may be available.
The US base salary range for this full-time