This job offer is no longer available
- District of Columbia, United States
About
Data Engineer experienced in Azure Databricks, ADF, and PySpark to build and optimize ETL pipelines for a Data Lakehouse architecture.
Responsibilities
Build and maintain ETL processes using ADF, PySpark, Databricks
Migrate legacy Informatica ETL workflows to the cloud
Ensure data quality, lineage, and performance
Create self-service data products using semantic layers
Work closely with data architects and business teams
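To illustrate the kind of data-quality work the responsibilities above describe, here is a minimal sketch in plain Python. In the actual stack this logic would run as a PySpark DataFrame transformation on Databricks; the record shape and field names (`trade_id`, `amount`) are hypothetical, chosen only for the example.

```python
# Illustrative data-quality step: drop incomplete records, then
# deduplicate on a key field. Field names are hypothetical; a real
# pipeline would express this as PySpark DataFrame operations.

def clean_records(records, required_fields=("trade_id", "amount")):
    """Keep records with all required fields present, deduped on trade_id."""
    seen = set()
    cleaned = []
    for rec in records:
        if any(rec.get(f) is None for f in required_fields):
            continue  # fails the completeness check
        if rec["trade_id"] in seen:
            continue  # duplicate key already kept
        seen.add(rec["trade_id"])
        cleaned.append(rec)
    return cleaned

raw = [
    {"trade_id": 1, "amount": 100.0},
    {"trade_id": 1, "amount": 100.0},   # duplicate key
    {"trade_id": 2, "amount": None},    # missing amount
    {"trade_id": 3, "amount": 250.0},
]
print(len(clean_records(raw)))  # → 2
```

The same filter-then-dedupe pattern maps directly onto `DataFrame.dropna` and `DataFrame.dropDuplicates` in PySpark.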
Required Skills
8+ years in data engineering
Strong skills in Databricks, PySpark, SQL, Azure
Experience in legacy ETL migration
Familiarity with financial risk datasets, data marts
Agile project exposure and strong problem-solving skills
Seniority level Mid-Senior level
Employment type Contract
Job function Information Technology
Industries IT Services and IT Consulting
Nice-to-have skills
- PySpark
- ETL
- SQL
- Azure
Work experience
- Data Engineer
Languages
- English