About the Role
• Design and build scalable batch and streaming data pipelines from enterprise source systems to the data platform
• Implement ingestion, transformation, and enrichment processes using approved architectural patterns
• Optimize pipelines for performance, reliability, and cost efficiency
• Develop reusable frameworks and components to accelerate pipeline development
• Implement data engineering solutions aligned with lakehouse / medallion architecture patterns (Bronze, Silver, Gold layers)
• Support integration of enterprise systems with the modern data platform
• Partner with platform engineering teams to improve pipeline performance and platform scalability
• Contribute to engineering patterns and reusable templates across the data platform
• Implement data validation, reconciliation, and monitoring processes
• Ensure data pipelines meet enterprise data quality, reliability, and observability standards
• Troubleshoot pipeline failures and support root cause analysis
• Help define and maintain SLAs for critical data pipelines and data products
• Write clean, well-tested, and maintainable code
• Implement engineering best practices including version control, CI/CD pipelines, automated testing, and infrastructure-as-code
• Conduct peer code reviews and maintain engineering standards
• Partner closely with Data Architects, Analytics Engineers, Data Scientists, Platform Engineering teams, and product & business stakeholders
• Translate business requirements into scalable technical solutions
• Provide technical guidance during sprint planning and backlog refinement
• Ensure delivery timelines are met and proactively manage risks or blockers
• Define and maintain standards for data pipeline documentation, data lineage and metadata, and operational procedures
• Ensure compliance with data governance, security, and enterprise architecture standards
• Contribute to engineering playbooks, standards, and knowledge sharing
Minimum Education and/or Experience:
• 5–8+ years of experience in data engineering, analytics engineering, or software engineering
• Proven experience building and maintaining production-grade data pipelines in modern data platforms
• Experience working in Agile delivery environments
• Experience supporting enterprise analytics or AI/ML data use cases
Additional knowledge and skills:
• Strong SQL expertise and experience designing complex transformations
• Strong programming skills in Python, Scala, or similar languages
• Experience with modern data platforms such as Databricks, Snowflake, or BigQuery
• Experience with ELT/ETL frameworks and patterns, workflow orchestration tools, and data quality and observability frameworks
• Experience with cloud platforms (AWS, Azure, or GCP)
• Strong understanding of lakehouse / medallion architectures
• Experience implementing large-scale data lakehouse platforms
• Experience supporting AI/ML workloads and feature pipelines
• Experience with data governance and metadata management tools
• Cloud or data platform certifications
Language Skills:
• English