About
The Data Engineer will design, develop, and maintain scalable data pipelines to support analytics and business intelligence initiatives. The role involves building data warehouse solutions, leading data migration efforts, developing ETL and ELT processes, and ensuring data quality, security, and performance across the data ecosystem.

Requirements/Must-Haves:
- Strong hands-on experience in data engineering, including data pipeline development and large-scale data processing.
- Deep expertise in Snowflake architecture, including virtual warehouses, micro-partitioning, clustering, performance optimization, and security access controls.
- Proven experience with data migration projects, including assessment, planning, execution, and validation.
- Minimum of five years of experience in software engineering or analytics, building enterprise data architectures and distributed systems.
- Strong SQL skills and experience with data modeling, including dimensional modeling or data vault techniques.
- Strong knowledge of core SQL database concepts, including creating DDL and DML scripts and optimizing SQL queries (see the illustrative sketch after this list).
- Experience working with cloud platforms such as AWS, Azure, or GCP.
- Experience working with orchestration and data integration tools.
- Strong problem-solving skills and the ability to work independently in a fast-paced environment.
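By way of illustration only (not part of the posting's requirements): a minimal sketch of the kind of DDL/DML and Snowflake clustering work described above, using the snowflake-connector-python package. All credentials, database, warehouse, and table names here are placeholder assumptions.

```python
# Illustrative sketch only: placeholder credentials, database, and table names.
# Assumes the snowflake-connector-python package is installed.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="my_password",    # placeholder
    warehouse="ANALYTICS_WH",  # placeholder virtual warehouse
    database="ANALYTICS_DB",   # placeholder database
    schema="PUBLIC",
)
cur = conn.cursor()

# DDL: a simple fact table, clustered on the date column so Snowflake's
# micro-partition pruning can skip irrelevant partitions on date filters.
cur.execute("""
    CREATE TABLE IF NOT EXISTS fact_orders (
        order_id   NUMBER,
        order_date DATE,
        amount     NUMBER(12, 2)
    )
    CLUSTER BY (order_date)
""")

# DML: load a row, then run an aggregate query that benefits from pruning.
cur.execute("INSERT INTO fact_orders VALUES (1, '2024-01-15', 99.50)")
cur.execute("""
    SELECT order_date, SUM(amount)
    FROM fact_orders
    WHERE order_date >= '2024-01-01'
    GROUP BY order_date
""")
print(cur.fetchall())

cur.close()
conn.close()
```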
Experience:
- Designing and implementing Snowflake data warehouse solutions, including data modeling and performance tuning.
- Leading and supporting data migration initiatives from legacy platforms to cloud-based solutions.
- Developing and managing ETL and ELT processes using modern data integration frameworks.
- Working with relational database systems and optimizing complex queries.
- Working with distributed data processing platforms and NoSQL databases.
- Using version control systems and code repositories for collaborative development.
Responsibilities:
- Design, develop, and maintain scalable and reliable data pipelines to support analytics and reporting needs.
- Architect and implement data engineering solutions that meet business and stakeholder requirements.
- Develop and optimize data warehouse solutions, including performance tuning and cost optimization.
- Lead and support migration of data from legacy platforms to modern cloud-based solutions.
- Develop and manage ETL and ELT workflows using modern data integration tools.
- Participate in requirements gathering, data modeling, and architecture design discussions.
- Prepare high-level and detailed technical specifications aligned with security and architecture standards.
- Develop project plans and accurate estimates for build, testing, and implementation phases.
- Collaborate with data architects, analytics teams, and business stakeholders to implement technical solutions.
- Ensure data quality, security, governance, and compliance throughout the data lifecycle.
- Develop and execute unit tests, system integration tests, and acceptance tests.
- Troubleshoot and resolve performance issues, data integrity problems, and pipeline reliability challenges.
- Document architecture, data flows, and operational procedures.
Should Have:
- Knowledge of distributed processing frameworks such as MapReduce or Spark (a brief illustrative sketch follows this list).
- Experience with programming languages such as Java, Python, or Bash scripting.
- Experience working with enterprise workload automation tools.
- Experience with data visualization and analytics platforms.
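For illustration only: a minimal PySpark sketch of the distributed processing mentioned above. The input path, column names, and application name are placeholder assumptions.

```python
# Illustrative sketch only: placeholder input path and column names.
# Assumes the pyspark package is installed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-rollup").getOrCreate()

# Read a CSV of orders (placeholder path; schema inference for brevity).
orders = spark.read.csv("/data/orders.csv", header=True, inferSchema=True)

# Distributed group-by: Spark shuffles rows by customer_id across executors
# before computing per-customer totals.
totals = (
    orders.groupBy("customer_id")
          .agg(F.sum("amount").alias("total_amount"))
)
totals.show()

spark.stop()
```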
Skills:
- Data pipeline development and large-scale data processing.
- Data warehouse architecture and Snowflake optimization.
- SQL development and database performance tuning.
- ETL and ELT development and orchestration.
- Cloud data platform architecture.
- Data modeling and data governance practices.
Qualifications and Education:
- Bachelor's degree in Information Technology, Computer Science, or a related field.
Language Skills
- English
Notice to Users
This offer comes from a TieTalent partner platform. Click "Apply now" to submit your application directly on their site.