About
Key Responsibilities
Design, build, and optimize scalable ETL/ELT data pipelines
Develop and maintain data models in data warehouses and data lakes
Ensure data reliability, quality, and performance across systems
Collaborate with analysts and stakeholders to deliver analytics-ready data
Lead technical design reviews and contribute to architectural decisions
Monitor and troubleshoot pipeline failures and performance issues
Implement best practices for testing, version control, and documentation
Mentor junior data engineers and review code
Contribute to data platform roadmap and continuous improvement
Qualifications
Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience)
5+ years of experience in data engineering or related roles
Advanced SQL and strong proficiency in Python (or similar language)
Experience with data warehousing and data modeling techniques
Proven experience building production-grade data pipelines
Strong understanding of cloud-based data architectures
Preferred Experience
Cloud platforms (AWS, Azure, or GCP)
Orchestration and transformation tools (Airflow, dbt, Spark, Kafka, etc.)
Experience with streaming or near-real-time data systems
Familiarity with data governance, observability, and security best practices
Language Skills
English