About
- Create and maintain processes to acquire, validate, and enrich data from various sources
- Support the migration of on-premise data systems to a cloud-based lakehouse architecture
- Develop and optimize ETL/ELT pipelines using PySpark and Spark SQL
Required Qualifications
- 1+ years of work experience in a data engineering role
- Bachelor's degree or higher in a quantitative field, or equivalent practical experience
- Hands-on experience with Databricks (Spark, PySpark, Delta Lake) and/or migrating RDBMS systems to a data lakehouse
- Experience with common types of healthcare data from various sources
- Hands-on experience with SQL and Python (including PySpark) for distributed data processing
Language Skills
- English
Note for Users
This job posting comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their website.