This job posting is no longer available
- District of Columbia, United States
About
Data Engineer experienced in Azure Databricks, ADF, and PySpark to build and optimize ETL pipelines for a Data Lakehouse architecture.
Responsibilities
- Build and maintain ETL processes using ADF, PySpark, and Databricks
- Migrate legacy Informatica ETL workflows to the cloud
- Ensure data quality, lineage, and performance
- Create self-service data products using semantic layers
- Work closely with data architects and business teams
Required Skills
- 8+ years in data engineering
- Strong skills in Databricks, PySpark, SQL, and Azure
- Experience migrating legacy ETL workloads
- Familiarity with financial risk datasets and data marts
- Agile project exposure and strong problem-solving skills
Seniority level: Mid-Senior level
Employment type: Contract
Job function: Information Technology
Industries: IT Services and IT Consulting
Ideal skills
- PySpark
- ETL
- SQL
- Azure
Professional experience
- Data Engineer
Language skills
- English
Notice to users
This posting was published by one of our partners. You can view the original posting here.