About
Requires expertise in Azure Databricks, using PySpark and Scala for large-scale distributed data processing, and in Delta Lake for efficient storage, data versioning, and ACID-compliant operations. Key responsibilities and qualifications:

- Design and implement structured data pipelines using the Medallion architecture (Bronze, Silver, and Gold layers).
- Orchestrate end-to-end data workflows with Azure Data Factory (ADF) and enable advanced analytics and integration with Azure Synapse Analytics.
- Strong proficiency in SQL, including Azure SQL Hyperscale.
- Manage and optimize data storage in ADLS Gen1 and Gen2; integrate real-time and event-driven data streams using Event Hub and Service Bus.
- Familiarity with NoSQL databases such as Cosmos DB.
- Implement robust monitoring and observability with Azure Monitor and Application Insights; ensure secure data access and secret management with Azure Key Vault.
- Optimize data pipelines for performance and cost, troubleshoot complex production issues, and consistently deliver high-quality, scalable data solutions in a distributed, enterprise-scale ecosystem.
Languages
- English