Sr. Data Engineer

Cyient
  • United States

About

Job Description
Job Description for BDS DADI Data Platform Data Ingestion Project
Project: Boeing BDS DADI Data Platform - Data Ingestion & ETL
Duration: April 1, 2026 - September 30, 2026 (6 months)
Location: Onsite (US-based)
Eligibility: All resources must be U.S. Citizens or Green Card holders

Role 1: Sr. Data/Cloud Engineer (2 Positions)
Experience Required: 10-14 years
Role Summary: The Sr. Data/Cloud Engineer is responsible for designing, building, testing, and deploying end-to-end data ingestion connectors and ETL/ELT pipelines on the Boeing-provided framework. Working in two-person pods, each pod will deliver one data source to production per month across a variety of ingestion patterns (batch, streaming, CDC). This role is the core delivery engine of the project.
Key Responsibilities:
  • Design and build connectors for prioritized data sources including SFTP, REST APIs, RDBMS (CDC), Kafka, S3 file drops, and mainframe extracts.
  • Define source-specific ingestion patterns (batch windows, CDC, streaming) and map data to canonical landing zones in the lakehouse architecture.
  • Implement reusable ETL/ELT pipelines on the IT-provided framework (e.g., AWS Glue, Spark, dbt) across the raw, curated, and consumption layers.
  • Develop transformation logic, handle schema evolution, implement partitioning strategies, and capture metadata for lineage tracking.
  • Embed data quality checks (completeness, schema conformance, record counts, freshness) with fail/alert behavior within pipelines.
  • Write unit, integration, and end-to-end tests; validate pipelines in CI/CD and staging environments prior to production promotion.
  • Produce connector runbooks, data contracts, transformation specs, and onboarding guides.
  • Collaborate with source system owners to obtain access, sample data, and schema/contract details.
  • Participate in 2-week Agile sprints under Boeing's sprint planning and task assignment process.

Required Skills & Qualifications:
  • 5-8 years of hands-on experience in data engineering, cloud data platforms, and ETL/ELT pipeline development.
  • Strong proficiency in Python, SQL, and Spark (PySpark or Scala).
  • Hands-on experience with AWS data services: Glue, S3, Kinesis, Lambda, Redshift, Athena, or equivalent.
  • Experience building ingestion pipelines for diverse source types: SFTP, REST APIs, RDBMS (JDBC/CDC), Kafka/streaming, and flat file processing.
  • Working knowledge of lakehouse architectures (Delta Lake, Iceberg, or Hudi).
  • Experience with dbt or similar transformation frameworks.
  • Familiarity with CI/CD pipelines for data workloads (e.g., GitHub Actions, CodePipeline, Jenkins).
  • Understanding of data quality frameworks and schema evolution handling.
  • Strong documentation skills for runbooks, data contracts, and technical specifications.
  • Experience working in Agile/Scrum delivery models.

Preferred Skills:
  • Experience with mainframe data extraction and integration.
  • Familiarity with Apache Kafka (producers, consumers, connect, schema registry).
  • Exposure to data cataloging and lineage tools (e.g., AWS Glue Catalog, Apache Atlas, DataHub).
  • Prior experience in aerospace, defense, or regulated industries.
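To make the data-quality responsibility above concrete, here is a minimal sketch in plain Python of the four checks the posting names (completeness, schema conformance, record counts, freshness). The column names, thresholds, and function name are illustrative assumptions, not project specifics; in the actual role these would come from each source's data contract and run inside the Glue/Spark pipeline.

```python
from datetime import datetime, timezone

# Hypothetical contract values for illustration only; a real pipeline
# would load these from the source's data contract.
REQUIRED_COLUMNS = {"id", "event_ts", "payload"}
MIN_ROW_COUNT = 1
MAX_STALENESS_HOURS = 24

def run_quality_checks(rows: list) -> list:
    """Return the names of failed checks; an empty list means the batch passes."""
    failures = []

    # Record count: reject empty or undersized batches.
    if len(rows) < MIN_ROW_COUNT:
        failures.append("record_count")
        return failures  # the remaining checks need at least one row

    # Schema conformance: every row must carry the contracted columns.
    if any(not REQUIRED_COLUMNS <= row.keys() for row in rows):
        failures.append("schema_conformance")

    # Completeness: no nulls in the key field.
    if any(row.get("id") is None for row in rows):
        failures.append("completeness")

    # Freshness: the newest event must be recent enough.
    newest = max(
        (row["event_ts"] for row in rows if row.get("event_ts")),
        default=None,
    )
    stale = newest is None or (
        datetime.now(timezone.utc) - newest
    ).total_seconds() > MAX_STALENESS_HOURS * 3600
    if stale:
        failures.append("freshness")

    return failures
```

The "fail/alert behavior" the posting describes would then be a matter of the pipeline aborting the run (or paging an operator) whenever the returned list is non-empty.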

Languages

  • English