
This job posting is no longer available


Data Engineer - III

Compunnel
  • United States

About

Job Summary
The Data Engineer III will design, develop, and maintain scalable, secure, and efficient data pipelines in support of enterprise data product initiatives.
This role involves working with modern cloud-based data technologies, building reliable data processing solutions, and supporting a unified data platform.
The engineer will collaborate across cross-functional teams to collect, parse, manage, analyze, and visualize large datasets while ensuring consistency, data quality, and operational excellence.
Key Responsibilities
  • Design, develop, and maintain robust data pipelines to ingest, transform, catalog, and deliver high-quality data into the Common Data Platform.
  • Participate in Agile ceremonies and follow established Scaled Agile (SAFe) processes.
  • Deliver high-quality data products and services aligned with program standards and best practices.
  • Identify, troubleshoot, and resolve issues with data pipelines and analytical data environments.
  • Implement monitoring, alerting, and automated remediation for data pipelines and data stores.
  • Apply a security-first approach and follow testing, automation, and data engineering best practices.
  • Collaborate with product teams, data scientists, analysts, and business partners to understand data needs and deliver the necessary infrastructure and tooling.
  • Stay informed on emerging technologies and recommend improvements to enhance data engineering processes and efficiencies.
Required Qualifications
  • Bachelor's degree in computer science, information systems, or a related field, or equivalent experience.
  • Skills across Databricks (PySpark), SQL (Starburst is a plus), GitLab, CI/CD pipelines, Python, and Tableau.
  • Two or more years of experience with tools such as Databricks, Collibra, and Starburst.
  • Three or more years of experience with Python and PySpark.
  • Experience coding and unit testing in Jupyter notebooks.
  • Recent experience working with relational and NoSQL data stores, including STAR and dimensional modeling techniques.
  • Two or more years of experience with modern data stack components such as S3, Spark, Airflow, lakehouse architectures, and cloud data warehouses such as Redshift or Snowflake.
  • Broad data engineering experience across traditional ETL and big data technologies.
  • Experience implementing data engineering solutions in AWS environments.
  • Experience building end-to-end pipelines for unstructured and semi-structured data using Spark.
Preferred Qualifications
  • Experience with real-time data processing solutions.
  • Experience contributing to large-scale enterprise data platform initiatives.
  • Familiarity with data governance, metadata management, and data quality principles.
  • Experience working in Agile or SAFe environments.
  • Strong analytical, problem-solving, and communication skills.

Language Skills

  • English
Note for Users

This job posting was published by one of our partners. You can view the original posting here.