About
Data Engineer (Level 3)
Location:
Houston, TX
Address:
1100 Louisiana St, Houston, TX 77002
Schedule:
5 days a week on-site, with half days on Fridays
Duration:
6-month contract; high possibility of extension
# of Positions:
1
Start Date:
ASAP
Local candidates only; any work authorization accepted
Why Open:
Backfill
Mission:
We are seeking a Senior Data Engineer (Level 3) to design, build, and optimize large-scale, high-reliability data pipelines and lakehouse architectures. The ideal candidate combines deep data engineering expertise with strong software engineering fundamentals to deliver modular, scalable, and testable data systems. This role involves leading core architectural decisions and end-to-end patterns across ingestion, transformation, data modeling, and delivery, including partitioning strategies and partition key design for high-performance analytics.
Day to Day:
- Design, build, and maintain ELT pipelines across ingestion, transformation, modeling, and delivery layers (bronze → silver → gold).
- Implement incremental loads, change data capture (CDC), merge/upsert, and idempotent pipeline patterns to ensure reliability and repeatability.
- Define and apply data architecture patterns (e.g., layered lakehouse, domain-oriented datasets, and semantic models) aligned to business objectives.
- Engineer physical data designs, including partitioning strategies, partition key selection, clustering/micro-partitioning, and compaction, for performance and cost efficiency.
- Develop curated datasets and data marts that enable analytics and self-service BI.
- Implement data quality, observability, and lineage (validations, profiling, SLAs, monitoring, and alerting).
- Optimize performance on cloud data platforms (e.g., Snowflake tasks/streams, compute sizing, query optimization).
- Design and manage lakehouse table formats (e.g., Apache Iceberg or Delta Lake) on object storage, including schema evolution and maintenance.
- Collaborate with Data Architects, Analytics Engineering, and business stakeholders to translate requirements into scalable data solutions.
- Mentor junior engineers, lead design reviews, and contribute to engineering standards and reusable frameworks.
- Automate and optimize the data lifecycle using CI/CD and infrastructure as code; apply DevOps principles to data pipelines.
Must haves:
- 10+ years of Data Engineering experience, with strong Software Engineering experience as well.
- Expert SQL (crafting and testing) and very strong Python.
- AWS S3, Snowflake, and Iceberg are ALL cloud must-haves.
- dbt (cloud transformation tool) experience.
- Airflow/Astronomer (job scheduling) experience.
- CI/CD expertise and end-to-end data quality ownership.
Plusses:
- Midstream industry experience (supporting the eStream product); supply chain logistics or chemical industry experience also works.
- Experience with commercial systems such as OpenLink (Endur/RightAngle) or Quorum is highly valued.
Soft Skills:
Must be a strong communicator and a self-starter with a go-getter attitude; no hand-holding required.
Red flags:
The hiring manager prefers to avoid heavily research-oriented PhD candidates, as they tend to be more theoretical and have less hands-on experience.
Languages
- English