This job posting is no longer available
Data Analytics Engineer II
- Irving, Texas, United States
About
Summary:
A Data and Analytics Engineer II is responsible for the development, expansion, and maintenance of data pipelines within the data ecosystem. A Data and Analytics Engineer uses programming skills to develop, customize, and manage integration tools, databases, warehouses, and analytical systems.
The Data and Analytics Engineer II is responsible for implementing optimal solutions to integrate, store, process, and analyze very large data sets. This includes an understanding of methodology, specifications, programming, delivery, monitoring, and support standards. The individual must have advanced knowledge of designing and developing data pipelines and delivering advanced analytics with open-source Big Data processing frameworks such as Hadoop technologies, along with proven competency in programming using distributed computing principles.
Responsibilities:
- Meets expectations of the applicable OneCHRISTUS Competencies: Leader of Self, Leader of Others, or Leader of Leaders.
- Responsible for analyzing and understanding data sources, participating in requirements gathering, and providing insights and guidance on data technology and data modeling best practices.
- Analyzes ideas and business and functional requirements to formulate a design strategy.
- Produces a workable application design and coding parameters with essential functionalities.
- Works in collaboration with team members to identify and address issues by implementing a viable technical solution that is time- and cost-effective, ensuring that it does not affect performance quality.
- Develops code following the industry's best practices and adheres to the organization's development rules and standards.
- Participates in the evaluation of proposed system acquisitions or solution development and provides input to the decision-making process regarding compatibility, cost, resource requirements, operations, and maintenance.
- Integrates software components, subsystems, facilities, and services into the existing technical systems environment; assesses the impact on other systems and works with cross-functional teams within Information Services to ensure positive project impact. Installs, configures, and verifies the operation of software components.
- Participates in the development of standards and in the design and implementation of proactive processes to collect and report data and statistics on assigned systems.
- Participates in the research, design, development, and implementation of applications, databases, and interfaces using the technology platforms provided.
- Researches, designs, implements, and manages programs.
- Fixes problems arising across test cycles and continuously improves the quality of deliverables.
- Documents each phase of development for future reference and maintenance operations.
- Uses critical and analytical thinking skills and an understanding of programming principles and design.
- Uses strong technical knowledge of Enterprise Application/Integration design to develop systems, databases, operating systems, and Information Services.
- Uses experience in large-scale data lake and data warehouse implementations and demonstrates proficiency in open-source technologies such as Python, Spark, Hive, HDFS, and NiFi.
- Demonstrates substantial experience and deep knowledge of data mining techniques and of relational and non-relational databases.
- Demonstrates experience with data integration using ETL techniques and frameworks.
- Works with Big Data querying tools such as Hive, Impala, and Spark SQL.
- Builds stream-processing systems using solutions such as NiFi or Spark Streaming.
- Demonstrates a good understanding of Lambda Architecture.
- Uses an intermediate level of SQL programming and query performance tuning techniques for data integration and consumption, designing for optimum performance against large data assets within transactional, MPP, and columnar architectures.
- Demonstrates a solid understanding of the BI and analytics landscape, preferably in large-scale development environments.
Job Requirements:
Education/Skills
- Bachelor's degree in Computer Science, Engineering, Math, or a related field is required.
Experience
- Minimum of three (3) years of experience in MapReduce and Spark programming.
- Minimum of three (3) years of experience developing analytics solutions with large data sets within an OLAP and MPP architecture.
- Minimum of five (5) years of experience with the design, architecture, and development of enterprise-scale platforms built on open-source frameworks.
- Three (3) years of experience working in a Microsoft SQL Server environment is preferred.
- One (1) year of Healthcare IT experience is preferred.
- Two (2) years of experience working with Microsoft SQL Server Integration Services (SSIS) is preferred.
Licenses, Registrations, or Certifications
- Certifications in Hadoop or Java are a plus.
Work Schedule:
8AM - 5PM Monday-Friday
Work Type:
Full Time
Language Skills
- English
This posting was published by one of our partners.