
Senior Data Engineer

Euclid Innovations
  • United States

About

Position Summary

We are seeking an experienced Data Engineer to design, build, and enhance data ingestion pipelines and metadata frameworks that support data lineage, modernization, and analytics enablement. The ideal candidate has hands-on experience with AWS Glue, Python/PySpark, ETL pipelines, and metadata management tools, and the ability to work in a dynamic, cloud-centric data environment.
Key Responsibilities

  • Design, develop, and maintain data ingestion and ETL pipelines for metadata integration across systems like SQL Server, Informatica, AWS Glue, and Azure Data Factory.
  • Implement end-to-end data lineage tracing, including source-to-target mapping, column-level lineage, and job dependency tracking.
  • Automate metadata ingestion and transformation for lineage reporting and quality validation.
  • Identify static, orphaned, or unused tables/jobs and contribute to data-quality improvement.
  • Collaborate with data architects, analysts, and AI strategists to align metadata ingestion with modernization goals.
  • Participate in the deployment and hardening of solutions in AWS (and Azure as applicable).
  • Contribute to modernization and AI-readiness initiatives through metadata standardization and automation.

Required Skills & Experience
  • 8-10 years of experience in Data Engineering / ETL Development.
  • Strong experience with AWS Glue, S3, Redshift, and Lambda.
  • SQL (SQL Server, Oracle, or Snowflake): complex queries, stored procedures, performance tuning.
  • Python / PySpark for metadata parsing and ETL automation.
  • ETL Tools: Informatica, AWS Glue, Azure Data Factory.
  • Data Lineage / Metadata Management: Collibra, Alation, or custom lineage repositories.
  • CI/CD & Orchestration: GitHub, Jenkins, or Azure DevOps.
Nice to Have

  • Experience with BladeBridge or similar lineage analysis tools.
  • Familiarity with Client Batch Processing.
  • Exposure to Airflow, dbt, or Databricks for orchestration and transformation.
  • Basic understanding of AI/ML data pipeline readiness.
  • Azure data platform exposure (ADF, Synapse, Data Lake).

Ideal skills

  • AWS Lambda
  • PySpark
  • Python
  • Metadata Management

Professional experience

  • Data Engineer

Language skills

  • English
Notice to users

This job offer comes from a TieTalent partner platform. Click "Apply now" to submit your application directly on their site.