About
- Collect, store, process, and analyze large data sets while maintaining optimal solutions
- Implement batch and real-time data ingestion/extraction processes between diverse source and target systems
- Design and build data solutions focused on performance, scalability, and reliability
Required Qualifications
- 3 years of experience in data handling, building ETLs, and using data visualization tools
- Experience building stream-processing systems with technologies such as Kafka or Spark Streaming
- Familiarity with Big Data tools such as Spark, Hive, and NoSQL databases
- Strong experience with database technologies and data governance
- Proficiency in programming languages such as Java, Scala, or Python
Language Skills
- English
Notice to Users
This job offer comes from a TieTalent partner platform. Click "Apply now" to submit your application directly on their site.