About
Role Overview:
- Hands-on experience with Azure services: Databricks, Data Factory, and DevOps
- Proficient in Python, Apache Spark, and PySpark
- Write advanced, complex SQL queries, with solid experience in database management
- Develop and maintain data models, data warehouses, and data marts in Snowflake
- Automation, building reusable transformations, and performance tuning
- Streamline complex ETL logic and transformations
- Collaborate with data scientists, analysts, and stakeholders to gather requirements and define data models that effectively support business needs
- Demonstrate decision-making, analytical, and problem-solving abilities
- Strong verbal and written communication skills to manage client discussions
- Familiar with Agile methodologies: daily scrum, sprint planning, backlog refinement
Key Responsibilities & Skillsets:
- Ability to create data transformation logic in Azure Databricks
- Strong programming skills in Python and experience with SQL; able to write complex SQL, Transact-SQL, and stored procedures
- ETL tools: Azure Data Factory, Databricks
- Experience with data modelling and DWH (Snowflake)
- Excellent communication skills and stakeholder management
- Ability to work independently in an IC (individual contributor) role
Good to have:
- Knowledge of the Snowflake platform
- Knowledge of Power BI
- Familiarity with CI/CD practices and version control (e.g., Git)
- Familiarity with Azure DevOps
Nice-to-have skills
- Azure Data Factory
- Data Modeling
- PySpark
- Python
- SQL
Work experience
- Data Engineer
- Data Infrastructure
Languages
- English
Notice for Users
This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.