About
Responsibilities:
- Architect, build, and maintain scalable and reliable data pipelines using AWS and distributed processing frameworks.
- Develop ETL/ELT processes using Python, PySpark, and AWS services such as Glue and Lambda.
- Implement workflow orchestration using Airflow and AWS-native automation tools.
- Optimize data storage, querying, and processing using Athena and other AWS analytics services.
- Create and manage infrastructure-as-code components with Terraform (basic proficiency expected).
- Work closely with data architects, analysts, and application teams to deliver robust data solutions.
- Ensure best practices for data quality, security, governance, and monitoring across the data ecosystem.

Must-Have Skills:
- AWS Lambda
- AWS Glue
- PySpark
- Python
- Airflow
- Athena
- Terraform (Basic)

Good-to-Have Skills:
- AWS Step Functions
- DynamoDB
- ECS
- EKS
- OpenSearch
- Kinesis
- SNS / SQS
- Lake Formation

Preferred Experience:
- Building and optimizing data lakes and real-time/streaming data solutions.
- Working in cloud-native architectures with a strong emphasis on automation.
- Implementing CI/CD pipelines for data engineering workloads.
- Experience with distributed systems and performance tuning.
Language skills
- English
Notice to users
This offer comes from a TieTalent partner platform. Click "Apply now" ("Postuler maintenant") to submit your application directly on their site.