This job posting is no longer available
About
This specialty insurance company is looking for a Senior Data Engineer to join their Enterprise Data Hub (EDH) team. The role is highly hands-on, focusing on coding in PySpark, building and optimizing Databricks-based frameworks, and developing scalable data pipelines. You will work directly with the VP of Architecture & Engineering, participate in daily status calls, and collaborate with cross-functional teams to ingest, transform, and expose data across all systems within the organization.

Key responsibilities include optimizing Databricks, implementing job orchestration and automation using Azure tools, and building reusable frameworks for ingestion and transformation. You will also create notebooks, contribute to governance models, and ensure best practices in coding and deployment.

An excellent candidate will have deep technical expertise in Databricks and Azure, thrive in a hands-on coding environment, and bring a strong engineering mindset with leadership capabilities. Experience with Unity Catalog, Delta Lake, and MLflow will set you apart.

Skills and Requirements:
- 5-7+ years of experience designing and developing data pipelines with Databricks and Apache Spark
- Hands-on coding with PySpark/Spark SQL
- Job orchestration and automation using Azure Data Factory, Azure Functions, and Azure DevOps
- End-to-end workflow ownership: scheduling, triggers, monitoring, error handling, CI/CD deployment
- Experience with Unity Catalog
- Technical leadership: code reviews, architecture guidance, mentoring junior engineers
- Strong Azure ecosystem experience: Data Lake, Data Factory, Synapse, Functions
- Excellent problem-solving and communication skills
- Familiarity with data governance and security best practices
- Exposure to Agile methodologies and DevOps practices
Language skills
- English
Note for users
This job posting was published by one of our partners. You can view the original posting here.