Senior Software Engineer - Data Lake & BI
Full-time
Senior
Posted 2 weeks ago
About this role
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.
What You'll Do:
CoreWeave is the top-rated AI cloud for high-performance GPU infrastructure across AI/ML, visual effects, rendering, and real-time inference. Our stack is engineered for speed, scale, and cost-efficiency: an unmatched alternative to traditional hyperscalers. At CoreWeave, infrastructure is the product.
About this role:
We're looking for a Senior Engineer to be a driving force on CoreWeave's Benchmarking & Performance team, with a singular focus on our planet-scale performance data warehouse. You will own the architecture and evolution of how we ingest, store, transform, and surface performance data across every data center in our global infrastructure—turning billions of raw events into the trusted, queryable insights that power our engineering and business decisions.
If you believe that the right storage format, the right schema, and the right query engine can turn a mountain of telemetry into a competitive advantage, this role was built for you. You will shape the data foundations that underpin industry-leading benchmark publications, internal performance SLAs, and executive-level reporting—working hand-in-hand with world-class partners and communities to ensure every number we publish is authoritative, reproducible, and actionable.
Key Responsibilities:
Data Lake Architecture - Design and build our core performance data lake on columnar storage foundations. Select, integrate, and optimize table formats (Apache Iceberg, Parquet, Avro) to balance query performance, storage cost, and schema evolution. Implement hot and cold storage tiering strategies that keep recent data instantly queryable while efficiently archiving historical benchmarks at petabyte scale (see the partitioning sketch after this list).
Schema Design & Data Modeling - Define and govern schemas for performance telemetry: latency distributions, throughput metrics, GPU utilization, cost-per-token, and hardware health signals. Establish naming conventions, partitioning strategies, and lifecycle policies that keep the warehouse fast, consistent, and self-documenting as new workloads and hardware generations come online.
Time-Series & Metrics Infrastructure - Own and extend our time-series database (TSDB) layer. Write and optimize PromQL/MetricsQL queries that power real-time dashboards, alerting, and trend analysis across thousands of GPUs and hundreds of benchmark runs. Bridge the gap between streaming metrics and batch-analytical workloads so engineers get sub-second answers and analysts get complete historical context from the same data (see the PromQL sketch after this list).
BI, Visualization & Data-Driven Processes - Build compelling, self-service BI views and dashboards (Grafana, Looker, or similar) that translate raw performance data into clear stories for engineers, product managers, and executives. Design playbooks and data-driven runbooks that tie benchmark regressions, capacity decisions, and competitive analyses directly to live data. Champion a culture where every performance claim is backed by a reproducible query and a versioned dataset.
Query Optimization & Performance - Profile and tune query engines against columnar and time-series stores; reduce scan times, optimize join strategies, and introduce materialized views or pre-aggregations where they matter most. Benchmark the benchmarking infrastructure itself, ensuring our data platform meets its own strict P99 latency and freshness SLAs (see the pre-aggregation sketch after this list).
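To make the data-lake and schema-design responsibilities concrete, here is a minimal sketch of Hive-style partitioning with a simple hot/cold split, written with PyArrow. The column names, paths, and 90-day cutoff are illustrative assumptions, not CoreWeave's actual schema or layout; a production lake would typically sit behind an Iceberg catalog rather than raw Parquet files.

```python
# Sketch only: Hive-style partitioning of benchmark telemetry with PyArrow.
# Column names, paths, and the 90-day hot/cold cutoff are assumptions, not
# CoreWeave's actual schema or layout.
from datetime import datetime, timedelta, timezone

import pyarrow as pa
import pyarrow.parquet as pq

# A toy batch of benchmark results; the real pipeline ingests billions of events.
batch = pa.table({
    "run_date": ["2025-06-01", "2025-06-01", "2024-11-20"],
    "datacenter": ["us-east-04", "us-east-04", "eu-west-01"],
    "gpu_model": ["H100", "H100", "A100"],
    "p99_latency_ms": [41.7, 43.2, 58.9],
    "throughput_tokens_s": [1840.0, 1795.5, 1210.3],
})

def tier_for(run_date: str, hot_days: int = 90) -> str:
    """Route recent partitions to the hot tier, older ones to cold storage."""
    age = datetime.now(timezone.utc).date() - datetime.strptime(run_date, "%Y-%m-%d").date()
    return "hot" if age <= timedelta(days=hot_days) else "cold"

for tier in ("hot", "cold"):
    mask = pa.array([tier_for(d) == tier for d in batch["run_date"].to_pylist()])
    subset = batch.filter(mask)
    if subset.num_rows == 0:
        continue
    # run_date=... / datacenter=... directories keep per-day, per-site scans cheap;
    # the hot/cold prefix would map to different storage classes in practice.
    pq.write_to_dataset(
        subset,
        root_path=f"perf_lake/{tier}",
        partition_cols=["run_date", "datacenter"],
    )
```

Partitioning by run date and data center keeps typical dashboard scans narrow, while the hot/cold prefix can be mapped to different storage classes and lifecycle policies.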
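For the time-series and metrics responsibility, the sketch below issues the kind of PromQL query that backs a latency dashboard, using the Prometheus HTTP API from Python. The endpoint URL and the metric and label names (benchmark_request_duration_seconds_bucket, datacenter) are assumptions for illustration.

```python
# Sketch only: querying Prometheus for P99 benchmark latency per datacenter.
# The endpoint and metric/label names are hypothetical.
import requests

PROMETHEUS_URL = "http://prometheus.internal:9090"  # assumed internal endpoint

# P99 over the last 5 minutes, computed from a Prometheus histogram.
query = (
    "histogram_quantile(0.99, "
    "sum(rate(benchmark_request_duration_seconds_bucket[5m])) by (le, datacenter))"
)

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    datacenter = series["metric"].get("datacenter", "unknown")
    _timestamp, value = series["value"]
    print(f"{datacenter}: p99 = {float(value) * 1000:.1f} ms")
```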
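For the query-optimization responsibility, the sketch below builds a pre-aggregated daily rollup over the partitioned lake with DuckDB, a materialized-view-style summary table that dashboards can hit instead of scanning raw events. Paths and columns follow the hypothetical partitioning sketch above.

```python
# Sketch only: a pre-aggregated rollup ("materialized view" style) built with DuckDB
# over the hot tier of the hypothetical Parquet lake from the previous sketch.
import duckdb

con = duckdb.connect("perf_rollups.duckdb")

# Daily per-datacenter, per-GPU rollup; refreshed on a schedule so dashboards
# scan a small summary table instead of raw benchmark events.
con.execute("""
    CREATE OR REPLACE TABLE daily_gpu_perf AS
    SELECT
        run_date,
        datacenter,
        gpu_model,
        avg(p99_latency_ms)      AS avg_p99_latency_ms,
        max(p99_latency_ms)      AS worst_p99_latency_ms,
        avg(throughput_tokens_s) AS avg_throughput_tokens_s,
        count(*)                 AS samples
    FROM read_parquet('perf_lake/hot/*/*/*.parquet', hive_partitioning = true)
    GROUP BY run_date, datacenter, gpu_model
""")

for row in con.execute(
    "SELECT * FROM daily_gpu_perf ORDER BY run_date DESC LIMIT 5"
).fetchall():
    print(row)
```

Rebuilding the rollup on a schedule trades a small amount of freshness for dashboard queries that touch megabytes instead of terabytes.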
Who you are
5+ years of experience building distributed systems, data platforms, or cloud services.
Strong coding skills in Python or Go (C++ a plus) and deep familiarity with networked systems and performance.
Hands-on experience with Kubernetes at production scale, CI/CD, and observability stacks (Prometheus, Grafana, OpenTelemetry).
Demonstrated expertise with data lake architectures, columnar databases, and modern table formats (Iceberg, Parquet, Avro); you understand the trade-offs between them and know when to reach for each.
Practical experience designing and managing hot/cold storage tiers for large-scale analytical workloads.
Strong schema design instincts—you think in partitions, sort keys, and evolution strategies, not just tables and columns.
Working knowledge of time-series databases and fluency in PromQL or MetricsQL for building dashboards, alerts, and ad-hoc analysis.
Experience building BI views, visualizations, and data-driven playbooks that turn raw data into organizational decision-making tools.
Strong communicator comfortable collaborating with engineers, product managers, and executives.