This job posting is no longer available
About
Autodesk is seeking a Principal Data Engineer to lead the design and development of data architecture and pipelines for our data team. In this role, you will shape the future of our data ecosystem, driving innovation across data pipelines, architecture, and cloud platforms. You'll partner closely with analysts, data scientists, AI/ML engineers, and product teams to deliver scalable solutions that power insights and decision-making across the company. This is an exciting opportunity for a principal data engineer who thrives on solving complex problems, driving best practices, and mentoring high-performing teams.
Responsibilities
- Lead and mentor a team of data engineers responsible for building and maintaining scalable data pipelines and infrastructure on AWS, Snowflake, and Azure
- Architect and implement end-to-end data pipeline solutions, ensuring high performance, resilience, and cost efficiency across both batch and real-time data flows
- Define and drive the long-term vision for data engineering in alignment with Autodesk's data platform strategy and analytics roadmap
- Collaborate with analysts, data scientists, FinOps engineers, and product/engineering teams to translate business needs into reliable, scalable data solutions
- Establish and enforce standards for data quality, governance, observability, and operational excellence, defining "what good looks like" across the data lifecycle
- Design and optimize data models, ELT/ETL processes, and data architectures to support analytics, BI, and machine learning workloads
- Champion best practices in CI/CD, testing frameworks, and deploying data pipelines
- Leverage modern data integration tools such as Fivetran, Nexla, and Airflow for batch ingestion and transformation workflows
- Apply AI-driven approaches to anomaly detection, pipeline optimization, and automation
- Stay current with emerging trends in data engineering and proactively evolve the team's capabilities and toolset
Minimum Qualifications
- 10+ years of experience in data engineering, with at least 3 years in a lead role
- Demonstrated success in delivering large-scale, enterprise-grade data pipeline architectures and leading technical teams
- Expertise with cloud data platforms; AWS and Azure experience is a strong plus
- Proficiency in SQL, Python, and modern data modeling practices
- Hands-on experience with batch and streaming frameworks (e.g., Spark, Kafka, Kinesis, Hadoop)
- Proven track record of building and maintaining real-time and batch data pipelines at scale
- Deep understanding of ETL and ELT paradigms, including traditional ETL and modern ELT tools
- Experience with data integration tools (Fivetran, Nexla, etc.) and orchestration platforms
- Familiarity with data lakehouse architectures, data mesh concepts, and hybrid/multi-cloud strategies
- Strong communication, leadership, and stakeholder management skills
- Ability to drive scalable architecture decisions through platform systems design and modern engineering patterns
Language skills
- English
Note for users
This job posting was published by one of our partners. You can view the original posting here.