Description:
Responsibilities:
- Design, develop, and maintain ETL/ELT pipelines using PySpark on Databricks
- Build and optimize batch and streaming data pipelines
- Implement Delta Lake solutions (Delta tables, time travel, ACID transactions)
- Collaborate with data scientists, analysts, and architects to deliver analytics-ready datasets
- Optimize Spark jobs for performance, scalability, and cost
- Integrate data from multiple sources (RDBMS, APIs, files, cloud storage)
- Implement data quality checks, validation …
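As a purely illustrative aside (not part of the posting): the "data quality checks" responsibility above typically means row-level validation with a quarantine path for bad records. A minimal plain-Python sketch of that pattern, with a hypothetical schema and rules chosen only for illustration:

```python
# Hypothetical example of row-level data quality checks in a pipeline.
# Field names and rules are assumptions for illustration, not from the posting.

REQUIRED_FIELDS = {"id", "event_ts", "amount"}


def validate_row(row: dict) -> list:
    """Return a list of rule violations for one record (empty list = clean)."""
    errors = []
    missing = REQUIRED_FIELDS - row.keys()
    if missing:
        errors.append("missing fields: %s" % sorted(missing))
    amount = row.get("amount")
    if amount is not None and amount < 0:
        errors.append("amount must be non-negative")
    return errors


def partition_rows(rows):
    """Split records into (clean, rejected) lists -- a common quarantine pattern."""
    clean, rejected = [], []
    for row in rows:
        (rejected if validate_row(row) else clean).append(row)
    return clean, rejected
```

In a PySpark pipeline the same idea is usually expressed as DataFrame filters or expectation rules rather than per-row Python, but the quarantine structure is the same.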
Jan 22, 2026
From: dice.com