Senior Software Engineer, Data Ingestion Platform

Block · San Francisco, CA · $185k - $277k
Full-time · Senior · Posted 10 hours ago

About this role

Block builds simple, powerful tools that make progress toward an economy that’s truly open to all. Each of our brands unlocks different aspects of the economy for more people. Square makes commerce and financial services accessible to sellers. Cash App is the easy way to spend, send, and store money. Afterpay is transforming the way customers manage their spending over time. TIDAL is a music platform that empowers artists to thrive as entrepreneurs. Bitkey is a simple self-custody wallet built for bitcoin. Proto is a suite of bitcoin mining products and services. Together, we’re helping build a financial system that is open to everyone. Join us.

The Role

The Data Ingestion team is part of Block's AI, Data & Analytics organization and is responsible for building and operating the platforms that replicate and ingest data into Block's Lakehouse, powered by Databricks and Snowflake. The team owns Block's Change Data Capture (CDC) platform, streaming data connectors, and data loading infrastructure — ensuring that fresh, reliable data from production databases, event streams, and third-party sources is available for analytics, machine learning, and AI initiatives across Square, Cash App, and Afterpay.

As a Senior Software Engineer on the team, you will design and build the next generation of data ingestion infrastructure — including Kafka Iceberg connectors, database replication pipelines, and unified ingestion frameworks. You will drive the modernization of our CDC platform, help consolidate multiple ingestion paths into a cohesive architecture, and collaborate with partner teams across Block to ensure data flows reliably from source to Lakehouse. In this role, you will have a direct impact on the scalability, reliability, and cost-efficiency of Block's data ecosystem.

Work from anywhere: This role can be performed from any location in the US or Canada.
You Will

- Design, build, and operate scalable data replication and ingestion pipelines that move data from production databases, event streams, and third-party sources into Block's Lakehouse.
- Develop and enhance Kafka Iceberg connectors and data loading frameworks, enabling reliable, low-latency data delivery to Snowflake and Databricks.
- Drive the modernization of Block's CDC platform — evaluating and implementing next-generation approaches for database replication, including cloud-native alternatives and Iceberg-based ingestion patterns.
- Build self-service tooling and observability features that empower internal teams to onboard, monitor, and troubleshoot their own data pipelines with minimal support.
- Collaborate with data engineering, platform infrastructure, and product teams to define data contracts, improve service encapsulation, and reduce tight coupling between operational databases and analytics consumers.
- Contribute to the unification of Block's data ingestion architecture by identifying opportunities to consolidate overlapping systems and reduce infrastructure complexity.
- Design and implement solutions for PII detection, masking, and privacy-compliant data handling within ingestion pipelines, ensuring sensitive data is properly classified, protected, and governed in accordance with Block's privacy policies and regulatory requirements (e.g., GDPR, CCPA).
- Establish and promote best practices for data pipeline reliability, cost optimization, schema management, and compliance across the ingestion platform.

You Have

- 8+ years of experience in software engineering or data platform development, with a focus on building scalable data systems or distributed infrastructure.
- Strong programming proficiency in languages such as Java, Python, Scala, or Go, with experience developing data frameworks, libraries, or services.
- Hands-on experience with streaming data systems and technologies such as Apache Kafka, Kafka Connect, or similar distributed messaging platforms.
- Solid understanding of Change Data Capture (CDC), database replication patterns, and data lake or Lakehouse architectures.
- Experience with modern data storage formats and table formats such as Apache Iceberg or Delta Lake.
- Experience with cloud-based data ecosystems (AWS, GCP, or Azure) and infrastructure-as-code tools.

Technologies We Use and Teach

Streaming & Messaging: Apache Kafka, Schema Registry, Kafka Connect, Debezium
Data Platform: Databricks, Snowflake
Data Processing & Storage: Apache Spark, Apache Iceberg, Delta Lake, Apache Airflow
Cloud & Infrastructure: AWS, Terraform

We’re working to build a more inclusive economy where our customers have equal access to opportunity, and we strive to live by these same values in building our workplace. Block is an equal opportunity employer evaluating all employees and job applicants without regard to identity or any legally protected class. We will consider qualified applicants with arrest or conviction records for employment in accordance with state and local laws and “fair chance” ordinances.
