About
- Collect, store, process, and analyze large sets of data while maintaining optimal solutions
- Implement batch and real-time data ingestion/extraction processes between diverse source and target systems
- Design and build data solutions focusing on performance, scalability, and reliability
Required Qualifications
- 3 years of experience in data handling, building ETLs, and using data visualization tools
- Experience in building stream-processing systems with technologies like Kafka or Spark Streaming
- Familiarity with Big Data tools such as Spark, Hive, and NoSQL databases
- Strong experience with database technologies and data governance
- Proficiency in programming languages such as Java, Scala, or Python
Languages
- English
Notice for Users
This job is posted through a TieTalent partner platform. Click "Apply Now" to submit your application directly on the partner's site.