About
Data Engineer
Location: Warrendale, PA / Pittsburgh, PA (Onsite)
Job Type: Contract
Role Overview
We are seeking an experienced Data Engineer with strong expertise in Databricks, Python, and Spark to design and build scalable data pipelines. The ideal candidate will have hands-on experience with ETL/ELT development and large-scale data processing using Spark and PySpark.
Key Responsibilities
- Design, develop, and maintain ETL/ELT data pipelines for large-scale data processing.
- Build scalable data solutions using Databricks and Apache Spark (PySpark / Spark SQL).
- Develop and optimize Python-based data processing frameworks.
- Work closely with data analysts and data scientists to deliver high-quality datasets.
- Optimize performance and scalability of data pipelines.
- Implement data quality checks and monitoring solutions.
- Collaborate with cross-functional teams to support analytics and reporting initiatives.
Required Skills
- Advanced Databricks (hands-on experience)
- Strong Python programming
- Spark / PySpark / Spark SQL
- ETL / ELT pipeline development
- Data pipeline optimization and performance tuning
- Strong SQL skills
Languages
- English
Notice for Users
This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on the partner's site.