About
- Design and build scalable, reliable data pipelines using AWS services to process and transform large datasets from utility systems
- Orchestrate workflows across data pipelines using AWS Step Functions (preferred over Airflow)
- Implement ETL/ELT processes using PySpark, Python, and Pandas to clean, transform, and integrate data from multiple sources (a minimal sketch follows this list)
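To make the ETL/ELT responsibility concrete, here is a minimal PySpark sketch of the kind of pipeline described: extract raw records from S3, transform and aggregate them, and load the result back as partitioned Parquet. The bucket paths, dataset, and column names (meter_id, ts, kwh) are hypothetical illustrations, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("utility-etl-sketch").getOrCreate()

# Extract: read raw meter readings landed in S3 (hypothetical path).
raw = spark.read.json("s3://example-raw-bucket/utility/readings/")

# Transform: drop malformed rows, normalize timestamps, and
# aggregate consumption per meter per day.
daily = (
    raw.dropna(subset=["meter_id", "ts", "kwh"])
       .withColumn("ts", F.to_timestamp("ts"))
       .withColumn("day", F.to_date("ts"))
       .groupBy("meter_id", "day")
       .agg(F.sum("kwh").alias("daily_kwh"))
)

# Load: write the integrated result as partitioned Parquet for
# downstream consumers (e.g. Glue tables or Redshift Spectrum).
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-curated-bucket/utility/daily_consumption/"
)

spark.stop()
```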
Required Qualifications
- Minimum of 5 years of experience in data engineering
- Proficiency in AWS services such as Step Functions, Lambda, Glue, S3, DynamoDB, and Redshift (see the orchestration sketch after this list)
- Strong programming skills in Python, with experience using PySpark and Pandas for large-scale data processing
- Hands-on experience with distributed systems and scalable architectures
- Knowledge of ETL/ELT processes for integrating diverse datasets into centralized systems
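As an illustration of the Step Functions proficiency listed above, the sketch below starts a state machine execution from Python with boto3 and polls it to completion. The state machine ARN and input payload are hypothetical placeholders, assuming a state machine already exists.

```python
import json
import time

import boto3

sfn = boto3.client("stepfunctions")

# Start an execution of an existing state machine (hypothetical ARN).
start = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:example-etl",
    input=json.dumps({"run_date": "2024-01-01"}),
)

# Poll until the execution reaches a terminal state.
while True:
    desc = sfn.describe_execution(executionArn=start["executionArn"])
    if desc["status"] != "RUNNING":
        break
    time.sleep(5)

print(desc["status"])  # SUCCEEDED, FAILED, TIMED_OUT, or ABORTED
```

In practice the polling loop is often replaced by an EventBridge rule or a callback task so the caller is not blocked, but a synchronous poll keeps the sketch self-contained.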
Languages
- English