Description:
Seattle, WA

Job Description:
- Design and implement data pipelines in Azure Databricks for ingesting and transforming data from upstream systems
- Optimize ETL/ELT workflows for performance and scalability
- Collaborate with Java/API developers to integrate event-driven triggers into data pipelines
- Implement data quality checks, schema validation, and error handling (a minimal sketch follows the list)
- Support batch and near-real-time data flows for operational and analytics use cases
- Work with Boomi and WFM teams to ensure …
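For orientation, here is a minimal sketch of the kind of pipeline step the listing describes: a PySpark batch read on Databricks with schema enforcement, corrupt-record capture, and a quarantine split for failed quality checks. The landing path, table names, and schema are hypothetical illustrations, not details from the posting.

```python
# Illustrative sketch only: a minimal batch ingestion step with schema
# validation and basic error handling. All paths, table names, and the
# schema below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("upstream-ingest").getOrCreate()

# Expected schema for the upstream feed (hypothetical).
schema = StructType([
    StructField("event_id", StringType(), nullable=False),
    StructField("event_time", TimestampType(), nullable=True),
    StructField("amount", DoubleType(), nullable=True),
])

# Enforce the schema at read time; malformed rows are captured in a
# _corrupt_record column (PERMISSIVE mode) instead of failing the job.
raw = (
    spark.read
    .schema(schema.add(StructField("_corrupt_record", StringType(), True)))
    .option("mode", "PERMISSIVE")
    .option("columnNameOfCorruptRecord", "_corrupt_record")
    .json("/mnt/upstream/events/")  # hypothetical landing path
)

# Simple data-quality split: rows failing checks go to a quarantine table
# for inspection; clean rows flow on to the analytics table.
passes = F.col("_corrupt_record").isNull() & F.col("event_id").isNotNull()
good = raw.filter(passes).drop("_corrupt_record")
bad = raw.filter(~passes)

good.write.mode("append").saveAsTable("analytics.events_clean")       # hypothetical
bad.write.mode("append").saveAsTable("analytics.events_quarantine")  # hypothetical
```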
Jan 13, 2026
from: dice.com