About
PySpark, AWS

Core responsibilities:
- Data pipeline development: Design, develop, and maintain high-performance data pipelines using PySpark.
- Performance optimization: Optimize and tune existing data processing workflows for better performance and efficiency.
- Data transformation: Implement complex data transformations and integrations, such as reading from external sources, merging data, and ensuring data quality (a sketch of this kind of work follows below).
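The following is a minimal, hypothetical sketch of the kind of PySpark work described above: reading from external sources, merging data, and applying basic data-quality checks. The bucket paths, table names, and columns (orders, customers, order_id, customer_id) are illustrative assumptions, not part of the role description.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example_pipeline").getOrCreate()

# Read from two external sources (hypothetical S3 paths).
orders = spark.read.parquet("s3://example-bucket/orders/")
customers = spark.read.option("header", True).csv("s3://example-bucket/customers.csv")

# Merge: enrich orders with customer attributes.
enriched = orders.join(customers, on="customer_id", how="left")

# Data quality: drop duplicate orders and rows missing key fields.
clean = (
    enriched
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull() & F.col("customer_id").isNotNull())
)

# Write the cleaned, merged result back out.
clean.write.mode("overwrite").parquet("s3://example-bucket/orders_enriched/")
```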
Languages
- English