
SC Cleared Databricks Data Engineer – Azure Cloud

Montash
  • London, England, United Kingdom

About

AI & Data Experts - Senior Delivery Consultant Job Title:
SC Cleared Databricks Data Engineer – Azure Cloud
Contract Type:
12 month contract
Day Rate:
Up to £400 a day inside IR35
Location:
Remote or hybrid (as agreed)
Start Date:
January 5th 2026
Clearance required:
Must be holding active SC Clearance
We are seeking an experienced Databricks Data Engineer to design, build, and optimise large-scale data workflows within the Databricks Data Intelligence Platform.
The role focuses on delivering high-performing batch and streaming pipelines using PySpark, Delta Lake, and Azure services, with additional emphasis on governance, lineage tracking, and workflow orchestration. Client information remains confidential.
Key Responsibilities
  • Build and orchestrate Databricks data pipelines using Notebooks, Jobs, and Workflows
  • Optimise Spark and Delta Lake workloads through cluster tuning, adaptive execution, scaling, and caching
  • Conduct performance benchmarking and cost optimisation across workloads
  • Implement data quality, lineage, and governance practices aligned with Unity Catalog
  • Develop PySpark-based ETL and transformation logic using modular, reusable coding standards
  • Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel
  • Integrate Databricks assets with Azure Data Lake Storage, Key Vault, and Azure Functions
  • Collaborate with cloud architects, data analysts, and engineering teams on end-to-end workflow design
  • Support automated deployment of Databricks artefacts via CI/CD pipelines
  • Maintain clear technical documentation covering architecture, performance, and governance configuration
Required Skills and Experience
  • Strong experience with the Databricks Data Intelligence Platform
  • Hands-on experience with Databricks Jobs and Workflows
  • Deep PySpark expertise, including schema management and optimisation
  • Strong understanding of Delta Lake architecture and incremental design principles
  • Proven Spark performance engineering and cluster tuning capabilities
  • Unity Catalog experience (data lineage, access policies, metadata governance)
  • Azure experience across ADLS Gen2, Key Vault, and serverless components
  • Familiarity with CI/CD deployment for Databricks
  • Solid troubleshooting skills in distributed environments
Preferred Qualifications
  • Experience working across multiple Databricks workspaces and governed catalogs
  • Knowledge of Synapse, Power BI, or related Azure analytics services
  • Understanding of cost optimisation for data compute workloads
  • Strong communication and cross-functional collaboration skills
Seniority level: Mid-Senior level
Employment type: Contract
Job function: Information Technology
Industries: Information Services

Languages

  • English
Notice for Users

This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.