This job offer is no longer available
About
- Build and maintain object-oriented data pipelines to ingest, integrate, and transform data from various sources
- Write efficient, scalable Python code for processing large data volumes on Azure, including unit tests and automated deployments
- Analyze Spark execution plans to identify bottlenecks and optimize data pipelines
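The responsibilities above mention object-oriented pipelines with unit tests. As a rough illustration of what that might look like, here is a minimal sketch in plain Python; the class and field names (`Record`, `CleanAmounts`, `Pipeline`) are hypothetical and not taken from the posting:

```python
from dataclasses import dataclass


@dataclass
class Record:
    """A single input row (hypothetical schema for illustration)."""
    user_id: str
    amount: float


class CleanAmounts:
    """One transform step: drop records with non-positive amounts."""

    def apply(self, records):
        return [r for r in records if r.amount > 0]


class Pipeline:
    """Minimal object-oriented pipeline: runs transform steps in order."""

    def __init__(self, steps):
        self.steps = steps

    def run(self, records):
        for step in self.steps:
            records = step.apply(records)
        return records


# Unit-test-style check of the pipeline's behavior
raw = [Record("a", 10.0), Record("b", -5.0)]
cleaned = Pipeline([CleanAmounts()]).run(raw)
assert cleaned == [Record("a", 10.0)]
```

In a real Azure deployment, steps like these would typically wrap Data Factory or Databricks operations rather than in-memory lists; this sketch only shows the object-oriented structure and the kind of unit test the role calls for.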
Required Qualifications
- Experience with Azure services such as Data Lake, Data Factory, and Databricks
- Proficiency in Python programming
- Knowledge of Apache Spark and its execution plans
- Ability to work independently with minimal supervision
- Experience in building scalable data pipelines
Languages
- English