About
The candidate must demonstrate proficiency in:
· Strong expertise in the Hadoop ecosystem – HDFS, Hive, Spark (PySpark/Scala), MapReduce
· Hands-on experience with Sqoop for data ingestion
· Strong programming skills in Python / Scala / Java
· Advanced knowledge of SQL and query optimization
· Experience in data lake architecture and maintenance
· Strong understanding of distributed computing and parallel processing
· Experience in ETL/ELT pipeline design and optimization
· Exposure to cloud migration strategies and readiness planning
· Experience with data governance tools and frameworks
· Knowledge of CI/CD pipelines and DevOps practices
Nice-to-have skills
· Experience with cloud platforms (GCP preferred – Dataproc, BigQuery, Cloud Storage)
· Familiarity with workflow orchestration tools (Airflow / Cloud Composer)
Qualifications
· Overall 8+ years of experience, with 7-10 years of relevant work experience in Big Data/Hadoop
· B.Tech., M.Tech., or MCA degree from a reputed university
Language skills
- English
Notice to users
This job offer comes from a TieTalent partner platform. Click "Apply now" to submit your application directly on their site.