This job posting is no longer available
About
- Collect, store, process, and analyze large sets of data while maintaining optimal solutions
- Implement batch and real-time data ingestion/extraction processes between diverse source and target systems
- Design and build data solutions focusing on performance, scalability, and reliability
Required Qualifications
- 3 years of experience in data handling, building ETLs, and using data visualization tools
- Experience in building stream-processing systems with technologies like Kafka or Spark-Streaming
- Familiarity with Big Data tools such as Spark, Hive, and NoSQL databases
- Strong experience with database technologies and data governance
- Proficiency in programming languages such as Java, Scala, or Python
Language Skills
- English
Note for Users
This job posting was published by one of our partners. You can view the original posting here.