
Senior/Lead Data Engineer - Azure & Databricks

Jconnect Inc
  • United States

About

Hi,
Hope you are doing well!
This is Aditya from Jconnect Inc.
Job Title: Senior/Lead Data Engineer - Azure & Databricks
Location: Alpharetta, GA
Type: Full-Time
Visa: Visa-dependent candidates accepted
Job Description
We are seeking an experienced Senior/Lead Data Engineer with strong hands-on expertise in Azure, Databricks, Spark, Delta Lake, and modern Lakehouse architecture. The ideal candidate can architect, design, and build complex data pipelines, optimize large-scale workloads, and lead data engineering initiatives in an enterprise environment.
Key Responsibilities
  • Architect, design, and implement scalable data platforms and pipelines on Azure + Databricks.
  • Build and optimize batch and real-time data ingestion, processing, and transformation workflows.
  • Develop ETL/ELT pipelines using Spark (PySpark), ADLS, Delta Lake, and Databricks Jobs/Workflows.
  • Design conceptual, logical, and physical data models for analytics and operational workloads.
  • Manage Delta Lake features: ACID transactions, schema evolution, Z-Order, OPTIMIZE, Time Travel, DLT.
  • Build streaming pipelines using Auto Loader, Spark Streaming, and checkpointing.
  • Implement CI/CD and deployment automation using Git, ADO Pipelines, and Databricks Asset Bundles.
  • Ensure strong data governance using Unity Catalog (permissions, lineage, security).
  • Collaborate with architects, analysts, and data scientists to translate business needs into solutions.
  • Troubleshoot platform issues, optimize performance, and ensure reliability of large-scale workflows.
  • Mentor junior engineers and contribute to engineering best practices and reusable frameworks.
Required Skills & Experience
Databricks & Lakehouse
  • Strong experience with Azure Databricks runtimes, the Spark engine, and newly launched features.
  • Hands-on with Delta Lake: ACID properties, schema evolution, Time Travel, OPTIMIZE, Z-Order.
  • Strong understanding of Lakehouse architecture and scalable data platform design.
Spark & Programming
  • Expert-level PySpark experience handling TB-scale data workloads.
  • Strong experience in performance tuning: cluster optimization, file size tuning, parallelism.
Data Engineering
  • Strong ETL/ELT pipeline development, orchestration, and scheduling.
  • Hands-on with Databricks Workflows, Jobs, Tasks, and CI/CD deployment.
  • Strong SQL: complex queries, optimization, table design, schema evolution.
Streaming
  • Experience with Auto Loader, Spark Streaming, and Event Hubs/Kafka (preferred).
Security & Governance
  • Unity Catalog (permissions, lineage, governance, encryption/decryption).
Cloud & Tools
  • Azure experience: ADF, Functions, ADLS, Event Hubs.
  • CI/CD using Azure DevOps & Git.
  • IaC: Terraform/ARM (preferred).
  • Experience designing full end-to-end architecture.
Soft Skills
  • Strong communication, architecture justification, and collaboration skills.
  • Ability to lead technical discussions and problem-solving.
Nice-to-Have
  • Azure Purview (governance)
  • Experience with ML or BI workloads on Databricks
  • Terraform/ARM
  • Azure Event Hubs/Kafka streaming
Please send me your updated resume ASAP along with the details below:
Full Legal Name:
Current Location:
Willing to Relocate:
E-mail:
Cell:
Rate/Salary:
Availability:
Are you still in a project:
Work Status:
LinkedIn Profile ID:

Languages

  • English
Note for Users

This job listing comes from a partner platform of TieTalent. Click "Apply Now" to submit your application directly on their website.