About
Coding proficiency in SQL, Python, and PySpark is mandatory; Unix scripting is desirable. Candidates must have worked on 2 to 3 real projects as an AWS data engineer and be able to build data pipelines (ingest, transform, and deliver) using S3, Glue, and Redshift/RDS. Knowledge of streaming ingestion using Kafka/Kinesis or Spark Streaming is desirable. AWS So...
Nice-to-have skills
- Kafka
- PySpark
- Python
- SQL
Work experience
- Data Engineer
Languages
- English
Notice for Users
This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on the partner's site.