About
- Build and maintain object-oriented data pipelines to ingest, integrate, and transform data from various sources
- Write efficient, scalable Python code for processing large data volumes on Azure, including unit tests and automated deployments
- Analyze Spark execution plans to identify bottlenecks and optimize data pipelines
Required Qualifications
- Experience with Azure services such as Data Lake, Data Factory, and Databricks
- Proficiency in Python programming
- Knowledge of Apache Spark and its execution plans
- Ability to work independently with minimal supervision
- Experience in building scalable data pipelines
Language Skills
- English
Note for Applicants
This job listing comes from a partner platform of TieTalent. Click "Apply Now" to submit your application directly on their website.