Sr. Staff SDET (Analytics)
Full-time · Lead
Posted 2 weeks ago
About this role
Netradyne harnesses the power of Computer Vision and Edge Computing to revolutionize the modern-day transportation ecosystem. We are a leader in fleet safety solutions. With growth exceeding 4x year over year, our solution is quickly being recognized as a significant disruptive technology. Our team is growing, and we need forward-thinking, uncompromising, competitive team members to sustain that growth.
About Netradyne
Netradyne provides AI-powered technologies for fleet management and safer roads. An award-winning industry leader in fleet safety and video telematics solutions, Netradyne empowers thousands of commercial fleet customers across North America, Europe, and Asia to enhance their driver performance, reduce risk, and optimize operations.
Netradyne sets the standard among transportation technology companies for enhancing and sustaining road safety, with an industry-leading 25+ billion miles vision-analyzed for risk and an industry-first driver scoring system that reinforces safe behaviors. Founded in 2015, Netradyne is headquartered in San Diego with offices in San Francisco, Nashville, the UK, and Bangalore. For more details, visit www.netradyne.com.
Role Overview
As a Sr. Staff SDET in the Analytics team, you will own the quality engineering strategy, automation architecture, and reliability validation for our offline batch data pipelines and end-to-end analytics/KPI outputs. This is a hands-on technical leadership role focused on building high-signal, deterministic validation that is deeply integrated into Jenkins CI/CD, along with strong test data management and leadership-grade visibility via dashboards.
You will design and standardize a PyTest-based validation framework (plus internal libraries and CLI tooling), establish quality gates that prevent silent regressions, and drive cross-team adoption of quality standards that materially improve reliability, correctness, and release confidence.
Key Responsibilities
Batch Data Pipeline Regression Automation
Design and implement automated regression validation for offline batch pipelines covering correctness (joins/aggregations/reconciliation), integrity (null/uniqueness/referential), and cross-table/time-window consistency.
Standardize validation approaches: golden datasets, snapshot/deterministic diff checks, and contract-based checks across pipeline boundaries.
Build reusable Python validation libraries and utilities that teams can extend across pipeline families.
Define strategies to handle expected variance (e.g., late-arriving data) without weakening correctness guarantees.
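To illustrate the snapshot/deterministic-diff style of validation described above, here is a minimal sketch. The function and column names (`id`, `v`) are illustrative assumptions, not Netradyne's actual schema; a real framework would wrap this in reusable PyTest helpers.

```python
# Illustrative sketch: diff a pipeline output against a "golden" snapshot.
# Row dicts and the key column name are hypothetical placeholders.

def diff_against_golden(actual_rows, golden_rows, key="id"):
    """Return (key, expected, actual) tuples where output diverges from the golden dataset."""
    golden_by_key = {row[key]: row for row in golden_rows}
    mismatches = []
    for row in actual_rows:
        expected = golden_by_key.get(row[key])
        if expected != row:
            mismatches.append((row[key], expected, row))
    return mismatches
```

In CI, an empty mismatch list would pass the gate; a non-empty one would fail with the diverging keys surfaced as actionable diagnostics.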
CI/CD for Data Pipelines – Jenkins
Build and operationalize Jenkins-based CI/CD gates for batch pipelines: pre-merge validations, nightly regressions, pre-prod checks, and release/promotions.
Improve CI signal quality by reducing flakes, ensuring deterministic execution, and producing actionable diagnostics (logs/artifacts/metadata).
Optimize runtime and reliability via tiered suites (smoke vs regression) and smart execution (parallelization/test selection).
Publish standardized CI reporting (pass/fail trends, top failure causes, time-to-detect/time-to-triage).
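One way the "smart execution" above can work is deterministic test sharding, so a Jenkins matrix job can split a regression suite across parallel executors and always assign the same test to the same shard. This is a hedged sketch under that assumption, not a prescribed implementation.

```python
# Illustrative sketch: stable sharding of test IDs for parallel CI execution.
import hashlib

def shard_for(test_id: str, num_shards: int) -> int:
    """Map a test ID to a shard index that is identical on every run (no randomness)."""
    digest = hashlib.sha256(test_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards
```

Because the mapping depends only on the test ID, reruns and retries land on the same executor, which keeps CI signal deterministic.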
Test Data Management
Own test data strategy for analytics validation: curated datasets, versioning/lifecycle, refresh cadence, retention, and reproducibility standards.
Establish best practices for deterministic fixtures, controlled dataset updates, and safe data handling (masking/anonymization as needed).
Enable teams to run validations reliably in CI and pre-prod without ad-hoc setup.
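Deterministic fixtures, as mentioned above, can be as simple as seeding a synthetic dataset generator. The schema and seed below are assumptions for illustration only; the point is that every CI run sees byte-identical test data.

```python
# Illustrative sketch: a seeded, reproducible synthetic dataset for validation tests.
import random

def make_trip_rows(n: int, seed: int = 42):
    """Generate the same n synthetic rows on every call with the same seed."""
    rng = random.Random(seed)  # fixed seed => identical data across runs and machines
    return [
        {"trip_id": i, "miles": round(rng.uniform(1.0, 500.0), 1)}
        for i in range(n)
    ]
```

Versioning the seed and schema alongside the test code gives the reproducibility standards the strategy calls for without ad-hoc setup.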
End-to-End Analytics & KPI Correctness
Implement automated KPI correctness checks: metric-definition compliance, aggregation sanity, source reconciliation, and regression detection on KPI distributions.
Build audit-style end-to-end validation from source → transforms → warehouse → KPI outputs → dashboards/reports.
Standardize KPI validation so new pipelines/KPIs inherit guardrails by default.
Deliver self-serve dashboards for pipeline health, data quality health, and KPI correctness signals used for release readiness.
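A minimal form of regression detection on KPI values is a relative-tolerance check against a baseline. The threshold below is an assumed default, not a stated requirement; distribution-level checks would extend the same idea to percentiles or histograms.

```python
# Illustrative sketch: flag a KPI that drifts beyond a relative tolerance from baseline.
def kpi_regressed(baseline: float, current: float, rel_tol: float = 0.05) -> bool:
    """True if `current` deviates from `baseline` by more than rel_tol (default 5%)."""
    if baseline == 0:
        return current != 0  # any movement off a zero baseline is a change
    return abs(current - baseline) / abs(baseline) > rel_tol
```

Wired into a release gate, a True result would block promotion and surface the offending KPI in the dashboard signals described above.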
Incident RCA, Prevention & Reliability Engineering
Lead investigations for pipeline failures/missed SLAs and KPI regressions; drive closure through preventive guardrails.
Convert incident learnings into automated checks, stronger gates, monitoring/alerts, and clear runbooks with ownership.
Track reliability metrics and demonstrate measurable reduction in repeat incidents and improved MTTR.
Cross-team Quality Leadership (Sr. Staff Expectation)
Partner with Data Engineering, Analytics, Platform, and Release stakeholders to define quality standards and release readiness criteria.
Mentor engineers/SDETs on validation architecture, best practices, and raising the bar for “done” on analytics pipelines.
Drive org-wide quality initiatives such as shared libraries, consistent validation tiering, and unified reporting.