Staff Attack Engineer, AI/LLM
Full-time | Lead
About this role
Get to Know Us
Horizon3.ai is a fast-growing, remote cybersecurity company on a mission to enable organizations to proactively find, fix, and verify exploitable attack vectors before criminals exploit them. Our flagship product, the NodeZero™ platform, delivers production-safe autonomous pentests and other key assessment operations that scale across the largest internal, external, cloud, and hybrid cloud environments. NodeZero has been adopted by organizations of all sizes, from small educational institutions to government agencies and Global 100 enterprises, and is used by ITOps/SecOps teams, consulting pentesters, MSSPs, and MSPs.
We are a fusion of former U.S. Special Operations cyber operators, startup engineers, and formerly frustrated cybersecurity practitioners. We're committed to helping solve our industry's common security problems: ineffective security tools, false positives and the resulting alert fatigue, blind spots, "checkbox" security culture, the cybersecurity skills shortage, and the long lead time and expense of hiring outside consultants. Collectively, we are a team of learn-it-alls, committed to a culture of respect, collaboration, ownership, and results.
SUMMARY
We are hiring a Staff Attack Engineer specializing in AI/LLM security to join our team. You will break AI and agentic systems and turn that research into automated attacks inside NodeZero, our autonomous pentesting platform.
This is not a consulting or manual pentesting role; the goal is to build repeatable, scalable attack patterns that run autonomously across customer environments. You'll also help drive our LLM-powered offensive capabilities and act as a technical leader for AI/LLM offense.
ESSENTIAL FUNCTIONS
ATTACKING AI/LLM SYSTEMS
- Break AI and agentic systems and translate that research into automated, repeatable attack modules for NodeZero.
- Design and execute prompt injection and defense evasion attacks, focusing on generalized, reusable patterns.
- Conduct tool-use exploitation, abusing LLM agents’ access to code, file systems, APIs, and databases for attacker-realistic outcomes (e.g., context poisoning, RCE, data exfiltration, privilege escalation).
- Target AI infrastructure (model serving, training pipelines, vector databases, GPU/MLOps tooling) with an understanding of real-world enterprise deployments and misconfigurations.
- Research and apply model and supply chain attacks (poisoning, training data extraction, adversarial inputs, deployment pipeline abuse).
- Perform threat modeling for agentic systems, mapping trust boundaries and attack surfaces and turning them into concrete attack paths.
- Apply a strong productization mindset, turning manual techniques into safe, reliable, and scalable automated tooling.
BUILDING WITH LLMS
- Build and extend LLM-powered applications (prompting, structured output, agentic workflows).
- Design with production concerns in mind: cost, safety and hallucination guardrails, reliability, and observability.
- Design and extend microservices that orchestrate LLM tasks and integrate with NodeZero and related offensive workflows.
COMPETENCIES / REQUIREMENTS
- Expert-level Python and software engineering skills.
- Solid penetration testing fundamentals and understanding of common attack chains.
- Familiarity with AI/LLM security frameworks (e.g., OWASP Top 10 for LLMs, MITRE ATLAS).
- Experience in a security product or offensive security team, ideally with shipped offensive capabilities or tooling.
- Proven ability to break AI/LLM and agentic systems.
- Clear understanding of trust boundaries around AI tools, data sources, and permissions, and how to systematically test and exploit them.
- Expert-level ownership – drives high-complexity, high-risk programs and sets strategy, not just execution.
- Self-motivated – identifies problems and builds solutions proactively.
- Industry obsessed – tracks the fast-moving AI security landscape and can speak to recent developments, new attacks, and where the field is heading.
NICE-TO-HAVE
- Experience with other cloud AI services (e.g., Azure OpenAI, GCP Vertex AI).
- Contributions to AI security research (blog posts, conference talks, CVEs, open-source tools).
- Experience with AWS Bedrock and AWS Agent Core.
- Familiarity with graph databases (e.g., Neo4j).
- Background in traditional exploit development or vulnerability research.
- CTF experience, particularly in AI/ML-focused challenge categories.
Perks of Horizon3.ai
- Inclusive Team: We value diversity and promote an inclusive culture where everyone can thrive.
- Growth Opportunities: Be part of a dynamic and growing team with numerous career development opportunities.
- Innovative Culture: Work in a collaborative environment that encourages creativity and out-of-the-box thinking.
- Remote Work: We are a 100% remote company. Enjoy the convenience and work-life balance that comes with remote work.