About
- Design, develop, and maintain robust and scalable data pipelines using Apache Spark and cloud-native data services.
- Build, optimize, and support ETL/ELT workflows to enable analytics, reporting, and downstream applications.
- Implement and manage data solutions using Databricks, Delta Lake, and Unity Catalog.
- Ensure data quality, reliability, and performance across large-scale and complex datasets.
- Collaborate with cross-functional teams to gather data requirements and translate them into effective technical solutions.
- Apply data engineering best practices for scalability, security, monitoring, and maintainability.
- Support the continuous improvement of data architecture, pipeline performance, and operational stability in a cloud environment.
Requirements
- 7+ years of experience in data engineering or a related technical role.
- Strong hands-on experience with Databricks and Apache Spark is required.
- Experience with Delta Lake and Unity Catalog for data management and governance.
- Solid understanding of data pipeline design, ETL/ELT patterns, and data modeling concepts.
- Experience with cloud data platforms such as Azure, AWS, or GCP.
- Proven ability to design scalable, reliable, and high-performing data solutions.
- Strong problem-solving skills and the ability to work effectively in collaborative, fast-paced environments.
Benefits
- Health insurance
- Retirement plans
- Paid time off
- Flexible work arrangements
- Professional development
Languages
- English