About
Job Description:
- Minimum 5+ years of experience in the IT industry, with 2+ years of experience in Big Data implementing complete Hadoop solutions, along with Scala.
- Knowledge of the complete Hadoop ecosystem; good working experience using Apache Hadoop ecosystem components such as MapReduce, HDFS, Hive, Sqoop, Pig, Oozie, Flume, HBase, and ZooKeeper.
- Extensive experience working with PySpark and Spark, performing ETL using Spark Core and Spark SQL, and real-time data processing using Spark Streaming.
- Expertise in working with the RDD, DataFrame, and Dataset APIs.
- Experience importing and exporting data between HDFS and Hive using Sqoop.
- Extensive knowledge of SQL queries for backend database analysis.
- Strong knowledge of NoSQL column-oriented databases such as HBase and Cassandra, and their integration with a Hadoop cluster.
Roles & Responsibilities:
- Well-versed in Agile and other SDLC methodologies; able to coordinate with owners and SMEs.
- Experience working on different operating systems, including UNIX, Linux, and Windows.
- Must have hands-on experience in Python/R/Scala.
- Experience with AI platform(s) that enable integrated data access, exploration, model management, automation, and data insights.
Thanks & Best Regards, Swamy A | Direct: 585-532-7074 | ************* | 687 Lee Road, Suite # 208, Rochester, NY 14606 | http://www.avanitechsolutions.com
Language Skills
- English
Note for Users
This job listing comes from a partner platform of TieTalent. Click "Apply Now" to submit your application directly on their website.