Research Engineer (Scaling Multimodal Data)

World Labs · San Francisco, CA · $200k - $325k
Full-time · Senior · Posted 1 month ago

About this role

About World Labs

We build foundational world models that can perceive, generate, reason, and interact with the 3D world, unlocking AI's full potential through spatial intelligence by transforming seeing into doing, perceiving into reasoning, and imagining into creating. We believe spatial intelligence will unlock new forms of storytelling, creativity, design, simulation, and immersive experiences across both virtual and physical worlds. We bring together a world-class team united by shared curiosity, passion, and deep backgrounds in technology, from AI research to systems engineering to product design, creating a tight feedback loop between our cutting-edge research and products that empower our users.

About the Role

We're looking for a research engineer to help improve our in-house world models through better multimodal data. This role is about figuring out what data actually moves model quality, then building the datasets, pipelines, and experiments to prove it. The best generative models aren't just a product of model architecture and compute; they are a product of the training data. The model output reflects someone's obsession over what goes into the data, how it's processed, and what gets thrown away. We're looking for the person who does the obsessing and builds the tools to act on it at scale.

This isn't a role where someone hands you a dataset and asks you to clean it. You will decide what data we need, figure out where to get it, build the processing and curation systems, and close the loop with model training to make sure it actually works. You will need strong engineering skills to do this well, but engineering serves your judgement about data, not the other way around.

What You'll Do

- Discover, evaluate, and acquire training data. You will find, vet, and integrate data from diverse sources. You will write scrapers, work with APIs, and make judgement calls about whether a source is worth pursuing before investing days of effort.
- Build data processing and curation systems. Design and implement pipelines for filtering, deduplication, quality scoring, and curation (see the first sketch after this list). You will create well-abstracted systems that your teammates can pick up and extend.
- Look at the actual data constantly. You will sample outputs, spot distributional issues (e.g., too many screenshots, low-resolution crops, near-duplicates), and catch problems before they propagate to model training.
- Close the data → model → evaluation loop. You will diagnose model failures, trace them back to data issues, and design principled fixes that nip problems in the bud.
- Deploy ML models for data enrichment. Run captioning, quality scoring, text embedding, segmentation, and classification models over the data, and evaluate whether they actually help.
- Make systematic, documented decisions. Score thresholds, filtering criteria, mixture ratios: every processing choice should be reproducible, versioned, and auditable (see the second sketch after this list). You will set the standard for rigor on the team.
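The deduplication and quality-scoring work above is where data judgement meets engineering. Below is a minimal sketch of near-duplicate image filtering with perceptual hashes; the Hamming threshold, shard layout, and linear scan are illustrative assumptions, not a description of World Labs' pipeline:

```python
# Sketch: near-duplicate image filtering with perceptual hashes (imagehash).
# The threshold and paths are assumptions for illustration only.
from pathlib import Path

from PIL import Image
import imagehash

HAMMING_THRESHOLD = 6  # assumed cutoff; tune against a labeled duplicate set

def dedup_images(paths: list[Path]) -> list[Path]:
    """Keep the first image seen in each near-duplicate cluster."""
    seen: list[imagehash.ImageHash] = []
    kept: list[Path] = []
    for path in paths:
        h = imagehash.phash(Image.open(path))
        # Subtracting two ImageHash objects gives their Hamming distance.
        if all(h - prev > HAMMING_THRESHOLD for prev in seen):
            seen.append(h)
            kept.append(path)
    return kept

if __name__ == "__main__":
    survivors = dedup_images(sorted(Path("shards/000").glob("*.jpg")))
    print(f"kept {len(survivors)} images")
```

At billion scale the linear scan gives way to bucketing (e.g., LSH bands over the hash bits), but the judgement call that matters, where to set the threshold, stays the same.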
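For the "systematic, documented decisions" item, one pattern is to drive every processing run from a frozen, versioned config and emit an audit record next to the output. The field names, thresholds, and mixture ratios here are hypothetical:

```python
# Sketch: a versioned filter config plus an audit record per run.
# All field names and values are hypothetical, for illustration only.
import dataclasses
import datetime
import hashlib
import json

@dataclasses.dataclass(frozen=True)
class FilterConfig:
    version: str = "2024.07-a"       # bump on any threshold change
    min_quality_score: float = 0.45  # assumed model-score cutoff
    max_dup_hamming: int = 6
    mixture: dict = dataclasses.field(
        default_factory=lambda: {"web_video": 0.6, "synthetic": 0.4})

def audit_record(cfg: FilterConfig, n_in: int, n_out: int) -> dict:
    """Record exactly which config produced an output and how much it kept."""
    blob = json.dumps(dataclasses.asdict(cfg), sort_keys=True)
    return {
        "config_sha": hashlib.sha256(blob.encode()).hexdigest()[:12],
        "config": dataclasses.asdict(cfg),
        "kept_fraction": n_out / n_in,
        "run_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

print(json.dumps(audit_record(FilterConfig(), 1_000_000, 731_204), indent=2))
```

Hashing the serialized config gives every output shard a short, greppable provenance tag, which is what makes a threshold change auditable months later.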
Questions We Think About

- How do you sample data for large-scale world models, where the best practices for dense-frame video models don't apply?
- How do you caption large-scale video datasets for world generation?
- How do you measure the diversity of video datasets, where counting the raw number of hours or frames doesn't account for variation in content?
- How do we build data pipelines that are reproducible and robust?
- How can we improve the observability of billion-scale datasets so we can catch issues early?
- What does it mean for a dataset to have good "taste"? How do you operationalize aesthetic judgement at billion scale?
- How do you decide whether to filter aggressively for quality versus preserve diversity and coverage? Where's the line, and how do you find it empirically?
- How do you strike the balance between pre-processing data and computing things on the fly? One locks you into design decisions, while the other can bottleneck training throughput.

Most of these questions don't have clean answers. We want someone who thinks about them seriously.

What We Require

- Strong software engineering fundamentals. You write well-abstracted, readable code and build reusable tools with clear interfaces. You find messy, undocumented systems personally unacceptable because you've been burned by the alternative.
- Deep experience with image and video data at scale. You know the data formats and the processing libraries (OpenCV, PIL, FFmpeg, PyAV), and you have hard-won intuition for what goes wrong when you're processing billions of samples.
- Experience with distributed computing. You've used frameworks like Apache Beam, Spark, Kubernetes, or Ray to process datasets that don't fit on a single machine.
- Experience using ML models as components. You've built and run inference pipelines (e.g., filtering, scoring, captioning, and embedding) at billion scale, and evaluated whether they actually help (see the sketch after this list).
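To make the last requirement concrete, here is a hedged sketch of batch scoring with Ray Data, one of the frameworks named above. The quality model is a stand-in (mean brightness), and the bucket paths and cutoff are placeholders:

```python
# Sketch: enrichment-style inference over an image dataset with Ray Data.
# The "model" is a stand-in; swap in real Torch/ONNX inference in practice.
import numpy as np
import ray

def score_batch(batch: dict) -> dict:
    # Stand-in quality score: mean brightness in [0, 1].
    batch["quality"] = np.array([img.mean() / 255.0 for img in batch["image"]])
    return batch

ray.init()
ds = ray.data.read_images("s3://example-bucket/frames/", include_paths=True)
scored = ds.map_batches(score_batch, batch_size=64, batch_format="numpy")
kept = scored.filter(lambda row: row["quality"] > 0.5)  # assumed cutoff
kept.select_columns(["path", "quality"]).write_parquet("s3://example-bucket/kept/")
```

The evaluation half of the requirement, checking whether the enrichment model's scores actually correlate with downstream model quality, is the part that separates this role from plain pipeline work.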
