Senior Data Engineer, AI and Systems Engineering
Full-time
Senior
Role Description
As a Senior Data Engineer on the CMDB and Asset Intelligence platform, you will help build the unified data foundation that powers asset visibility, cost optimization, and security insights across the company. You will design scalable pipelines and data models that bring together sources like ServiceNow, Okta, Oracle, and Jamf into a centralized lakehouse architecture, turning messy, multi-system data into trusted, decision-ready signals.
This role is a chance to raise the bar on data quality and governance while building systems that teams actually rely on day to day. You will partner closely with IT, Security, and Finance to define what “good” looks like, deliver high-impact solutions, and shape the long-term direction of the platform.
Our Engineering Career Framework is publicly viewable and describes what is expected of our engineers at each career level. Check out our blog post on this topic and more here.
Responsibilities
Design and build scalable data pipelines using Databricks and Spark to ingest, transform, and unify data from multiple enterprise systems
Develop and maintain medallion-architecture (Bronze, Silver, Gold) data models to create reliable, performant “Golden Record” datasets (see the first sketch after this list)
Implement data normalization, mapping, and entity resolution techniques (e.g., fuzzy matching, XREF tables; see the second sketch after this list) to unify asset data across disparate systems
Build data workflows to detect and surface Shadow IT across financial, identity, endpoint, and network signals and integrate results into CMDB systems
Partner with IT, Security, Finance, Procurement, and GRC teams to define and enforce data standards for critical CMDB attributes (e.g., ownership, approval status, lifecycle)
Develop and maintain data integrations and APIs to synchronize curated datasets into operational systems such as ServiceNow and Jira Assets
Monitor, troubleshoot, and improve data quality, reliability, and observability across the data platform
Occasional on-call work may be necessary to help address bugs, outages, or other operational issues, with the goal of maintaining a stable, high-quality experience for our customers.
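For orientation, here is a minimal PySpark sketch of the Bronze-to-Silver step in a medallion flow. It assumes a Databricks/Delta Lake environment; the landing path, table names (bronze.servicenow_assets, silver.assets), and columns are hypothetical placeholders, not the platform’s actual schema.

```python
# Minimal medallion-style flow (Bronze -> Silver) in PySpark.
# Paths, table names, and columns below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land the raw export as-is, adding only ingestion metadata.
bronze = (
    spark.read.json("/mnt/raw/servicenow/assets/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.servicenow_assets")

# Silver: cleaned, deduplicated, conformed records ready for joining
# with other sources (identity, endpoint, finance).
silver = (
    spark.table("bronze.servicenow_assets")
    .withColumn("serial_number", F.upper(F.trim("serial_number")))
    .dropDuplicates(["serial_number"])
    .select("serial_number", "asset_tag", "owner_email", "lifecycle_status")
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.assets")
```

Gold tables would then join the conformed Silver sources into the “Golden Record” datasets consumed by downstream teams.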
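And a minimal sketch of fuzzy-match entity resolution feeding an XREF table, using only the Python standard library. The record fields, identifiers, and the 0.85 threshold are illustrative assumptions, not the platform’s actual matching rules.

```python
# Fuzzy-match two asset sources and record the links in an XREF table.
# Fields, IDs, and the 0.85 threshold are illustrative assumptions.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1], case- and whitespace-insensitive."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

servicenow = [{"sys_id": "SN-1", "hostname": "lap-jdoe-01"},
              {"sys_id": "SN-2", "hostname": "srv-billing-02"}]
jamf = [{"jamf_id": "J-9", "device_name": "LAP-JD0E-01"},   # note the 0/O typo
        {"jamf_id": "J-7", "device_name": "lab-imac-03"}]

# XREF table: one row per matched pair, keyed by both source identifiers.
xref = []
for sn in servicenow:
    best = max(jamf, key=lambda j: similarity(sn["hostname"], j["device_name"]))
    score = similarity(sn["hostname"], best["device_name"])
    if score >= 0.85:  # assumed match threshold
        xref.append({"servicenow_sys_id": sn["sys_id"],
                     "jamf_id": best["jamf_id"],
                     "match_score": round(score, 2)})

print(xref)  # only the lap-jdoe-01 / LAP-JD0E-01 pair clears the threshold
```

In practice this logic would run as a Spark job over the Silver tables, with dedicated matchers (e.g., on serial numbers or MAC addresses) rather than a single string score.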
Requirements
9+ years of experience building and maintaining data pipelines and large-scale data platforms
Strong experience with Databricks, Apache Spark, and SQL for distributed data processing and transformation
Experience designing data models and architectures such as medallion architecture, data lakes, or lakehouse systems
Proficiency in Python or similar programming languages for data engineering and ETL development
Experience integrating data from multiple enterprise systems (e.g., SaaS tools, financial systems, identity systems)
Strong understanding of data quality, data governance, and entity resolution techniques across heterogeneous datasets
Excellent collaboration and communication skills, with experience working cross-functionally with technical and non-technical stakeholders
Preferred Qualifications
Experience working with CMDB systems such as Jira Assets or ServiceNow
Familiarity with identity, security, or IT asset management systems (e.g., Okta, Jamf, Zscaler)
Experience implementing cost-optimized data processing strategies in cloud environments
Exposure to financial data systems (e.g., Oracle, Concur) and spend analytics use cases
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field
Compensation
Poland Pay Range
239 700 – 324 300 PLN