About
• Provide technical leadership and mentorship to data engineers across multiple delivery initiatives
• Lead design and implementation of complex data pipelines and data products
• Ensure engineering standards, architectural patterns, and best practices are consistently applied
• Review code, pipelines, and technical designs to maintain high engineering quality
• Act as the primary technical escalation point for complex engineering challenges
• Design, build, and optimize batch and streaming data pipelines from source systems to the enterprise data platform
• Implement scalable ingestion, transformation, and enrichment frameworks using approved architectural patterns
• Lead implementation of medallion / lakehouse data architecture patterns
• Optimize pipelines for performance, reliability, and cost efficiency
• Define and implement data quality frameworks, validation rules, and monitoring controls
• Ensure pipeline observability, alerting, and operational reliability
• Lead troubleshooting of pipeline failures and coordinate root cause analysis
• Establish reliability standards and SLAs for critical data products
• Lead adoption of engineering best practices including version control, CI/CD pipelines, automated testing, and infrastructure-as-code
• Partner with platform engineering teams to optimize compute usage and platform performance
• Contribute to reusable data frameworks, templates, and shared libraries
• Partner closely with Data Architects, Analytics Engineers, Data Scientists, Platform Engineering teams, and product & business stakeholders
• Translate business requirements into scalable technical solutions
• Provide technical guidance during sprint planning and backlog refinement
• Ensure delivery timelines are met and proactively manage risks or blockers
• Define and maintain standards for data pipeline documentation, data lineage and metadata, and operational procedures
• Ensure compliance with data governance, security, and enterprise architecture standards
• Contribute to engineering playbooks, standards, and knowledge sharing
Minimum Education and/or Experience:
• 6–10+ years of experience in data engineering, analytics engineering, or software engineering
• Experience leading delivery of enterprise-scale data engineering initiatives
• Proven experience mentoring or leading data engineers
• Experience working in Agile delivery environments
Additional knowledge and skills:
• Advanced SQL expertise and experience designing complex transformations
• Strong programming skills in Python, Scala, or similar languages
• Experience with modern data platforms such as Databricks, Snowflake, or BigQuery
• Deep familiarity with ELT / ETL patterns, orchestration frameworks, data observability and monitoring
• Experience with cloud platforms (AWS, Azure, or GCP)
• Strong understanding of lakehouse / medallion architectures
• Experience implementing large-scale data lakehouse platforms
• Experience supporting AI/ML workloads and feature pipelines
• Experience with data governance and metadata management tools
• Cloud or data platform certifications
Language skills
• English
Note for users
This job offer comes from a TieTalent partner platform. Click "Apply now" to submit your application directly on their website.