Forward Deployed Engineer, Ecosystem
Full-time
Senior
About this role
About Nebius:
Nebius is leading a new era in cloud infrastructure for the global AI economy. We are building a full-stack AI cloud platform that supports developers and enterprises from data and model training through to production deployment, without the cost and complexity of building large in-house AI/ML infrastructure.
Built by engineers, for engineers. From large-scale GPU orchestration to inference optimization, we own the hard problems across compute, storage, networking and applied AI.
Listed on Nasdaq (NBIS) and headquartered in Amsterdam, we have a global footprint with R&D hubs across Europe, the UK, North America and Israel. Our team of 1,500+ includes hundreds of engineers with deep expertise across hardware, software and AI R&D.
The role
Nebius builds the infrastructure serious AI teams run on — GPU clusters, inference runtimes, agent development environments, data pipelines — all of it purpose-built for the most demanding AI workloads. What we are now building is the ecosystem function that ensures the best AI companies choose to build on us, integrate with us, and stay.
As a Forward Deployed Engineer, Ecosystem, you will sit at the intersection of solution architecture and hands-on engineering. You assess how partner products actually work on our stack, define the reference architecture for each integration, build the working prototype that proves it, and translate what you find into product requirements that shape what Nebius ships next.
Your responsibilities will include:
Solutioning & Architecture
Design and prototype integrations between partner products and the Nebius platform — fast, hands-on, and technically sound
Define reference architectures for partner integrations — not just what works, but how it should work at scale and in production
Scope partner architectures against our platform — how does this product actually work on our stack, where does it snap together, where does it break
Build production-quality proof-of-concepts across the AI stack including agentic pipelines, RAG architectures, inference optimization patterns, and multi-model orchestration (a minimal RAG-style sketch follows this list)
Produce working proof-of-concepts that serve as the starting point for product creation — not a requirements doc, a working thing
Maintain a library of reference architectures and integration patterns that internal product and engineering teams can build from
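To make the proof-of-concept bullets above concrete, here is a minimal RAG-style prototype of the kind a forward deployed engineer might put together in an afternoon. It assumes an OpenAI-compatible inference endpoint; the base URL, API key variable, model names, and toy corpus are illustrative placeholders, not a documented Nebius interface.

```python
# Minimal RAG proof-of-concept sketch (illustrative only).
# Assumes an OpenAI-compatible endpoint; base URL and model names are placeholders.
import os
import math
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("INFERENCE_BASE_URL", "https://example-inference.invalid/v1"),
    api_key=os.environ["INFERENCE_API_KEY"],
)

DOCS = [
    "Cluster A exposes H100 GPUs with InfiniBand interconnect.",
    "The managed inference tier autoscales model replicas per region.",
    "Object storage buckets can be mounted read-only into training jobs.",
]

def embed(texts: list[str]) -> list[list[float]]:
    # One embeddings call for the whole batch.
    resp = client.embeddings.create(model="example-embedding-model", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str) -> str:
    doc_vecs = embed(DOCS)
    q_vec = embed([question])[0]
    # Pick the single most similar document as context for the model.
    best_doc = max(zip(DOCS, doc_vecs), key=lambda p: cosine(q_vec, p[1]))[0]
    resp = client.chat.completions.create(
        model="example-chat-model",
        messages=[
            {"role": "system", "content": f"Answer using this context: {best_doc}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(answer("How does inference scaling work?"))
```

A real partner PoC would swap the in-memory corpus for the partner's vector database and pin the models actually served on the platform.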
Technical Partner Scoping
Work directly with partner engineering teams to scope, prototype, and progress integrations
Assess partner architectures honestly — if the integration is painful, that is signal; if it snaps together in a weekend, that is also signal; report both
Provide technical guidance to partners on how to maximize performance, reliability, and cost efficiency on Nebius infrastructure
Produce technical scoping that gives your pod, partner teams, and internal teams a clear picture of integration feasibility, depth, and complexity
Internal
Translate external integration findings into actionable product requirements for Nebius platform teams
Work with ISV partners, SI teams, and field teams to scale solution adoption and drive revenue once a solution is ready
Surface recurring architectural patterns and integration gaps to inform platform roadmap decisions
Participate in platform planning as the technical voice of what you are seeing and building in the field
Ecosystem Presence
Represent Nebius at hackathons, in open source communities, and at technical events
Build in public — demos, reference architectures, and integrations that establish Nebius as the platform serious AI builders choose
Stay current with the AI tooling ecosystem — you know what shipped last week and what it means for our stack
Platform focus areas:
Depending on your background and mutual fit, you will focus on one or more of the following:
Agentic — agent frameworks, memory systems, tool integration, orchestration, MCP, guardrails
Managed Inference — inference runtimes, model serving, optimization tooling, speculative decoding, KV-cache routing (see the latency sketch after this list)
IaaS / Managed Infrastructure — cloud-native integrations, GPU orchestration, enterprise platform connectors
Data — vector databases, retrieval systems, RAG architectures, data pipeline integrations, synthetic data tooling
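As a rough illustration of the Managed Inference focus area, the snippet below streams a chat completion and reports time-to-first-token and total latency, the kind of quick probe that feeds performance guidance back to partners. It again assumes an OpenAI-compatible serving endpoint; the base URL and model name are placeholders rather than a documented Nebius API.

```python
# Rough latency probe for a streaming chat completion (illustrative only).
# Assumes an OpenAI-compatible endpoint; base URL and model name are placeholders.
import os
import time
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("INFERENCE_BASE_URL", "https://example-inference.invalid/v1"),
    api_key=os.environ["INFERENCE_API_KEY"],
)

def probe(prompt: str, model: str = "example-chat-model") -> None:
    start = time.perf_counter()
    first_token_at = None
    chunks = 0

    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        # Some chunks (e.g. the final one) may carry no content delta.
        delta = chunk.choices[0].delta.content if chunk.choices else None
        if delta:
            if first_token_at is None:
                first_token_at = time.perf_counter()
            chunks += 1

    total = time.perf_counter() - start
    ttft = (first_token_at - start) if first_token_at else float("nan")
    print(f"time to first token: {ttft:.3f}s, total: {total:.3f}s, chunks: {chunks}")

if __name__ == "__main__":
    probe("Summarize the tradeoffs of speculative decoding in two sentences.")
```

Repeating the probe across prompt lengths and concurrency levels gives a first-order view of serving behavior before any deeper optimization work.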
We expect you to have:
6+ years of hands-on engineering experience in AI application development, ML systems, or AI infrastructure
Deep working knowledge of the AI developer stack — LLM APIs, inference runtimes, orchestration frameworks, vector databases, RAG architectures, agentic pipelines — built through shipping, not reading
Hands-on experience with agentic frameworks such as LangChain, LangGraph, CrewAI, AutoGen, or equivalent (a minimal LangGraph sketch follows this section)
Strong Python programming skills and comfort prototyping end-to-end AI systems quickly
Experience defining reference architectures and technical integration patterns
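To anchor the agentic-framework expectation above, here is a minimal LangGraph-style state graph with a single stub node. Exact imports and method names can drift between langgraph versions, so treat it as a sketch of the shape of such a prototype rather than a pinned recipe.

```python
# Minimal single-node LangGraph sketch (node logic is a stub).
# Exact API details may vary across langgraph versions.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    answer: str

def answer_node(state: State) -> dict:
    # A real node would call a model or a partner tool here.
    return {"answer": f"stub answer for: {state['question']}"}

builder = StateGraph(State)
builder.add_node("answer", answer_node)
builder.set_entry_point("answer")
builder.add_edge("answer", END)
graph = builder.compile()

if __name__ == "__main__":
    print(graph.invoke({"question": "Which focus area fits my background?"}))
```

In a real integration the node would call a model or partner tool, and additional nodes and conditional edges would capture the actual agent workflow.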