About
Location: Phoenix, AZ
Job Description:
Must Have Technical/Functional Skills
• In-depth knowledge of the Hadoop ecosystem, including HDFS, MapReduce, Spark, Scala, and Hive.
• Expertise in Hive for data warehousing and querying large datasets.
• Strong experience with Apache Spark for distributed data processing and real-time analytics.
• Advanced proficiency in Scala programming for big data application development.
• Familiarity with data modeling, ETL processes, and data integration techniques.
• Experience with cloud platforms and big data tools (e.g., Google BigQuery).
• Strong analytical and problem-solving skills to address complex data challenges.
• Basic knowledge of Java and Spring Boot.
• Strong SQL skills.
Roles & Responsibilities • Architect and design scalable big data solutions leveraging technologies like Hadoop, Hive, Spark, and Scala. • Develop and optimize data pipelines and workflows for processing and analyzing large-scale datasets. • Manage and fine-tune distributed computing frameworks to ensure high performance and reliability. • Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to gather and define data requirements. • Implement data security, governance, and compliance measures in big data environments. • Diagnose and resolve technical issues in big data systems to maintain operational efficiency.
Generic Managerial Skills
Decision Making, Problem Solving, Leadership, Time Management
Salary range: $105,000 to $120,000 per year
Language Skills
- English