Senior Software Engineer - Search Indexing
Full-time
Senior
Posted 2 weeks ago
About this role
Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1,400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
The Product
In a rapidly evolving world, trust in AI depends on AI agents being grounded in fresh, verified real-world data. Search is the foundation that makes this possible.
We are building an agent-native search platform designed specifically for AI systems rather than human users. Our product provides programmatic, low-latency, and observable search APIs that AI agents use to retrieve, filter, and reason over real-world information at scale.
The Role
We are looking for a Senior Software Engineer to work on the indexing and data processing layer of a novel search engine tailored for agentic AI consumption.
In this role, you will focus on building systems that ingest, process, and organise massive volumes of data into efficient, queryable structures. You will work primarily on offline and nearline pipelines, ensuring that data is fresh, complete, and efficiently accessible by downstream retrieval systems. You will operate in an environment where throughput, scalability, and correctness are critical — designing systems capable of handling tens of gigabytes per second across continuously evolving datasets.
In this position, you will:
Design, implement, and operate large-scale indexing systems and data pipelines that sit at the core of our search infrastructure
Build ingestion workflows for internal and external data sources, including web-scale crawling and structured feeds
Develop and optimise indexing strategies balancing performance, freshness, and resource efficiency
Work on storage formats, compaction strategies, and update mechanisms to keep data accessible and current
Ensure reliability and predictability of pipelines under high-throughput conditions
Build well-tested components with clear responsibilities and interaction contracts, while remaining flexible as the system evolves
Define and implement observability primitives, including structured logs, metrics, and data quality signals across offline and nearline pipelines
Monitor throughput, resource usage, and cost, and drive optimisations when business needs require it
Collaborate with runtime and ML teams to ensure indexing outputs meet retrieval and ranking requirements
Enable safe experimentation on indexing strategies and data processing logic through controlled rollouts and clearly defined quality signals
You may be a good fit if you:
Have 5+ years of experience building production backend or data systems
Have strong hands-on experience with Go in real-world, high-load services (experience with other systems languages such as C++ or Rust is a plus)
Have worked on systems at significant scale — such as 10k+ RPS or 10+ GiB/sec throughput
Have experience building or operating databases, data planes, or large-scale data pipelines
Understand distributed systems fundamentals, including fault tolerance, failure modes, and horizontal scalability
Have operated your own systems in production: handled incidents, made real-world tradeoffs, and understand what running data infrastructure at scale truly involves
Think in terms of systems and data flows rather than isolated components, reasoning end-to-end about how data moves through the stack
Can make pragmatic decisions under pressure without compromising long-term system health
Collaborate effectively across infrastructure, ML, and product teams, communicating clearly in cross-functional settings
Strong candidates may also have experience with:
Distributed data processing frameworks such as Spark, Flink, MapReduce, or Beam
Content systems including web crawling, scraping, proxying, or anti-bot infrastructure
Ad tech, social networks, or other large-scale content platforms
DBMS internals (open source or SaaS) and cloud infrastructure
Open-source contributions or active involvement in the engineering community
Competitive programming or CTF participation (ICPC, IOI, or similar)
SHAD or similar advanced technical programmes
Conference talks or technical publications
We conduct coding interviews as part of the process.
What we offer
Competitive salary and comprehensive benefits package.
Opportunities for professional growth within Nebius.
Flexible working arrangements.