Data Engineer - Hybrid
cyberThink
United States

This job posting is no longer available


About

Job Description: As a Data Engineer, you will support the design, development, and optimization of enterprise data pipelines and ETL processes across large-scale, distributed environments. You will collaborate with cross-functional teams to deliver scalable data solutions, build high-performance pipelines using big data frameworks, contribute to data quality initiatives, and support experimentation-driven analytics within an agile engineering culture.
Key Responsibilities:
  • Support the design, implementation, and maintenance of enterprise ETL processes for data platforms.
  • Develop scalable and efficient code to process large datasets and ensure timely data availability.
  • Use big data processing frameworks such as Apache Spark and Hadoop to build and optimize data pipelines.
  • Collaborate with senior engineers to address data challenges and maintain high data quality.
  • Assist in the data delivery process to support accurate, high-value solutions for diverse clients.
  • Build strong working relationships with team members and contribute to both local and global projects.
  • Apply best practices including version control, code reviews, and data validation.
  • Write and optimize SQL queries to support large-scale data processing.
  • Design, implement, and maintain data pipelines using ETL frameworks, orchestration tools, and distributed engines.
  • Participate in automation efforts to streamline recurring data tasks.
  • Ensure compliance with internal policies and external regulatory requirements.
Required Skills, Experiences, Education, and Competencies:
  • Experience as a Data Engineer or similar role with a strong understanding of data engineering concepts.
  • Strong SQL skills for retrieving, manipulating, and analyzing data efficiently.
  • Experience with big data technologies including Apache Spark, PySpark, Spark SQL, Spark Streaming, and Hadoop ecosystem components such as HDFS, Hive, and YARN.
  • Experience designing and maintaining ETL pipelines and data workflows.
  • Understanding of data modeling concepts and database design.
  • Familiarity with Python for data processing and automation.
  • Ability to analyze and troubleshoot data issues with minimal supervision.
  • Basic understanding of data validation and testing practices.
  • Excellent verbal and written communication skills.
  • Bachelor's degree in Engineering, Mathematics, Finance, Business, or a related quantitative field, or equivalent experience.
The hourly range for roles of this nature is $40.00 to $80.00/hr. Rates depend heavily on skills, experience, location, and industry.
cyberThink is an Equal Opportunity Employer.

Languages

  • English
Note for users

This job posting was published by one of our partners. You can view the original posting here.