About
- Design, develop, and maintain scalable data pipelines using Azure Databricks and Azure Data Lake.
- Integrate data from various sources into the Databricks platform.
- Implement data integration and ETL processes using Azure Data Factory.
- Develop and optimize data processing workflows and pipelines using PySpark.
- Support business use cases involving Bloomberg data acquisition and transformation.
- Collaborate with data scientists and analysts to support data-driven decision-making.
- Ensure data quality and integrity across data sources and storage solutions.
- Monitor and troubleshoot data pipeline performance and reliability.
- Assist with dashboarding and data visualization using Power BI.
Desirable Skills
- Azure Data Factory
- Power BI
- PySpark
- Python
Work Experience
- Data Engineer
- Data Infrastructure
- Data Analyst
Languages
- English
Note for Users
This job posting comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their website.