Description:
Job Requirements:
- A robust background in software engineering with significant experience in data engineering.
- Expert-level proficiency in Python, PySpark, and SQL, with the ability to architect and optimize complex data workflows.
- Understanding of Apache Spark architecture, including its execution model and memory management, with the ability to analyze Spark job metrics (task execution times, shuffle operations, and executor memory usage) to optimize ETL pipelines.
- Proven experience in designing re
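The metrics-analysis requirement above can be illustrated with a small sketch. Assuming per-task metrics have already been pulled (for example, from the Spark UI or its REST API) into plain dictionaries, a simple straggler/skew check might look like the following; the field names, sample values, and threshold factor are illustrative assumptions, not part of any Spark API:

```python
# Hypothetical per-task metrics, as one might collect them from the
# Spark UI or its REST API for a single stage (illustrative values).
tasks = [
    {"task_id": 0, "duration_ms": 1000, "shuffle_read_mb": 40},
    {"task_id": 1, "duration_ms": 1100, "shuffle_read_mb": 42},
    {"task_id": 2, "duration_ms": 1050, "shuffle_read_mb": 41},
    {"task_id": 3, "duration_ms": 5000, "shuffle_read_mb": 310},  # skewed partition
]

def find_skewed_tasks(tasks, factor=2.0):
    """Flag tasks whose duration exceeds `factor` x the median duration.

    A common heuristic for spotting data skew: most tasks in a stage
    finish in similar time, so outliers usually indicate one partition
    received far more data (visible here as a larger shuffle read).
    """
    durations = sorted(t["duration_ms"] for t in tasks)
    median = durations[len(durations) // 2]
    return [t for t in tasks if t["duration_ms"] > factor * median]

skewed = find_skewed_tasks(tasks)
# Tasks flagged here would prompt a look at partitioning (e.g. salting a
# hot key, or tuning spark.sql.shuffle.partitions) rather than more memory.
```

This is only a sketch of the kind of analysis the posting describes; in practice the same heuristic would be applied to real stage metrics rather than hand-written dictionaries.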
Posted: Dec 29, 2025
Source: dice.com