
"Data Engineer"

Navtech
  • United States

About

I have an opportunity for a "Data Engineer with GCP" (100% remote) and am looking for a candidate who can join immediately. If you are interested, please reply with your updated resume; if you could refer someone, I would really appreciate it.
Position: Data Engineer with GCP
Location: Preferably Wellesley, MA; Hartford, CT; or NYC. Remote will be considered.
Duration: 12 months
Must Haves:
  • 5+ years of experience
  • Unix/Linux
  • Pig (Hive good to have)
  • Python (good to have)
  • Scala
  • CI/CD - GitLab
  • GCP
Primary Responsibilities:
  • Understand the business objectives; analyze and dissect system requirements and technical specifications
  • Interpret data, analyze results using statistical techniques, and provide ongoing reports
  • Develop and implement databases, data collection systems, data analytics, and other strategies that optimize statistical efficiency and quality
  • Explore the scope for automation wherever possible
  • Acquire data from primary or secondary data sources and maintain databases/data systems
  • Identify, analyze, and interpret trends or patterns in complex data sets
  • Display strong technical knowledge in product analysis and debugging
  • Show willingness to learn new technologies and adapt to product needs
  • Work independently with little or no supervision
  • Provide technical mentoring and troubleshooting support for team members
  • Understand design concepts and product architecture; propose solutions for performance and security issues
  • Identify process gaps and drive initiatives to address them
  • Partner with product owners and developers to identify areas for improved efficiency
  • Share and communicate ideas, both verbally and in writing, to staff, business sponsors, managers, and technical resources in clear, concise language appropriate to the target audience
  • Participate in communities of practice to share knowledge, learn, and innovate
  • Research and implement tools that support delivery
  • Assess and interpret customer needs and requirements
  • Solve moderately complex problems and/or conduct moderately complex analyses
  • Analyze and investigate; provide explanations and interpretations within the area of expertise
  • Query structured and unstructured data and perform exploratory data analysis for further advanced modeling
  • Guide and implement optimization techniques across projects
Required Qualifications:
  • 5+ years of IT experience in the architecture of Big Data technologies, ETL, and other automation techniques
  • Strong experience with Unix shell commands and scripting
  • Experience writing SQL, including complex queries
  • Good knowledge of databases and Big Data, especially Hive, HBase, and Spark
  • Knowledge of Hadoop architecture and HDFS commands; experience designing and optimizing queries against data in the HDFS environment
  • Experience with bash shell scripts, UNIX utilities, and UNIX commands
  • Experience with Hadoop applications such as Spark, Scala, HBase, Hive, Pig, and Sqoop
  • Knowledge of tools such as Kibana and Splunk
  • Knowledge of Kafka streaming
  • Knowledge of implementing data integration projects from sourcing through auditing, and of implementing controls for each stage of integration
  • Knowledge of large-scale search applications and building high-volume data pipelines
  • Deeply analytical
  • Good communication and presentation skills
  • Problem-solving skills, with the ability to think laterally and with a medium- and long-term perspective
Regards,
Alex Keylor
NAVTECH INC
1600 Golf Road, Suite 1200, Rolling Meadows, IL 60008
Ph: (224) 348-1340 | Email: alex@navtechusa.com | www.navtechusa.co
E-Verified Company

Languages

  • English
Notice for Users

This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.