About
- Design, develop, and maintain streaming and batch data pipelines using modern data engineering tools and frameworks.
- Work with large volumes of structured and unstructured data, ensuring high performance and scalability.
- Collaborate with cloud providers and data platform vendors (e.g., AWS, Microsoft Azure, Databricks) to conduct PoCs for data platform solutions.
- Evaluate PoC outcomes and provide comprehensive documentation, including architecture, performance benchmarks, and recommendations.

Required Experience & Skills:
- Proven experience as a Data Engineer with a strong focus on streaming and batch processing.
- Hands-on experience with cloud-based data platforms such as AWS or Databricks.
- Strong programming skills in Python, Scala, or Java.
- Experience with data modeling, ETL/ELT processes, and data warehousing.
- Experience conducting and documenting PoCs with hyperscalers or data platform vendors.

Preferred Qualifications:
- Certifications in AWS, Azure, or Databricks.
- Experience with Snowflake, IBM DataStage, or other enterprise data tools.
- Knowledge of CI/CD pipelines and infrastructure as code (e.g., Terraform, CloudFormation).
Languages
- English
Notice for Users
This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.