About
- Build and maintain PySpark data pipelines in the Databricks environment
- Optimize Spark job performance and resource usage, addressing bottlenecks and inefficiencies
- Design, develop, and maintain high-quality backend software components and services
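As a rough illustration of the pipeline work described above, here is a minimal, standard-library-only sketch of an extract-transform-load step. In a real Databricks job this logic would run as PySpark DataFrame operations over cloud storage; the record shape, field names (`region`, `amount`), and the in-memory CSV input here are all hypothetical stand-ins.

```python
import csv
import io

def transform(rows):
    """Filter out malformed records and compute per-region totals
    (a stand-in for the aggregation a real pipeline might perform)."""
    totals = {}
    for row in rows:
        # Skip records missing the fields we aggregate on.
        if not row.get("region") or not row.get("amount"):
            continue
        try:
            amount = float(row["amount"])
        except ValueError:
            # Skip records whose amount is not numeric.
            continue
        totals[row["region"]] = totals.get(row["region"], 0.0) + amount
    return totals

# Extract: parse CSV input (an in-memory string here, rather than cloud storage).
raw = "region,amount\neast,10.5\nwest,3.0\neast,4.5\n,9.9\nwest,oops\n"
rows = csv.DictReader(io.StringIO(raw))

# Transform + "load": a real job would write the result back to a table.
print(transform(rows))  # → {'east': 15.0, 'west': 3.0}
```

The two malformed rows (one with an empty region, one with a non-numeric amount) are dropped rather than failing the whole run, a common robustness choice in batch pipelines.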
Required Qualifications:
- Bachelor's degree in Computer Science, Computer Engineering, or a related field, and 8 years of experience
- Strong experience with Python and Apache Spark
- Solid understanding of data modeling, ETL processes, and distributed computing
- Experience with Agile development methodologies
Languages
- English