This job posting is no longer available
About
The role
Build and optimise data pipelines on the Databricks Lakehouse Platform
Design scalable ETL/ELT and structured streaming pipelines
Develop enterprise‑grade data processing and analytics solutions
Optimise Spark jobs and Databricks clusters for performance and cost
Implement data quality, monitoring and governance standards
Apply security, access control and cataloguing best practices
Work closely with data scientists, analysts and business stakeholders
Contribute to Agile delivery, code reviews and technical knowledge sharing
Experience
6+ years’ experience in data engineering roles
Hands‑on experience with Databricks and Apache Spark
Strong Python and SQL skills with solid data modelling knowledge
Experience building ETL/ELT pipelines and lakehouse architectures
Cloud experience, ideally AWS
Familiarity with Delta Lake, Unity Catalog and governance frameworks
Experience with real‑time or streaming data is a plus
Exposure to AI/ML use cases or using AI tools in development is advantageous
Strong problem‑solving skills and the confidence to work in complex, regulated environments
Language skills
- English
Notice to users
This posting was published by one of our partners. You can view the original posting here.