About
- Design, develop, and maintain scalable data pipelines using Azure Databricks and Azure Data Lake.
- Integrate data from various sources into the Databricks platform.
- Implement data integration and ETL processes using Azure Data Factory.
- Develop and optimize data processing workflows and pipelines using PySpark.
- Support business use cases involving Bloomberg data acquisition and transformation.
- Collaborate with data scientists and analysts to support data-driven decision-making.
- Ensure data quality and integrity across data sources and storage solutions.
- Monitor and troubleshoot data pipeline performance and reliability.
- Assist with dashboarding and data visualization using Power BI.
Nice-to-have skills
- Azure Data Factory
- Power BI
- PySpark
- Python
Work experience
- Data Engineer
- Data Infrastructure
- Data Analyst
Languages
- English
Notice for Users
This job listing comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.