About
Department: Enterprise Data Services
Location: Nashville, TN or Sterling, VA (Hybrid)
Position Overview
As a Staff Data Engineer, you will lead the design and delivery of scalable, high-quality data pipelines that power analytics and reporting across the enterprise. This role combines deep hands‑on engineering with technical leadership, driving best practices for data ingestion, transformation, and data modeling.
You will play a critical role in building and optimizing data solutions using Databricks, Delta Lake, and cloud-native AWS technologies, ensuring reliable and efficient movement of data from source systems to curated data assets. This position focuses on delivering well‑structured, trusted data through robust pipeline development and modern data engineering practices. You will also leverage AI‑assisted development tools to improve coding efficiency, validation, and documentation.
Essential Duties and Responsibilities
Lead the design and development of scalable data pipelines and data products using Databricks, Spark, and Delta Lake
Develop and optimize data transformations and ELT workflows using SQL and Python
Design and implement data models and curated datasets to support analytics and reporting use cases
Ensure data quality, consistency, and reliability through validation, monitoring, and testing practices
Optimize pipeline performance, scalability, and cost efficiency within AWS environments
Apply best practices for data partitioning, storage optimization, and query performance tuning
Collaborate with product, analytics, and business teams to translate requirements into efficient data solutions
Provide technical leadership and mentorship to engineers, including code reviews and design guidance
Leverage AI tools for coding, validation, and documentation assistance to enhance productivity and code quality
Troubleshoot and resolve data pipeline failures, latency issues, and data inconsistencies
Continuously evaluate and improve data engineering workflows and tooling
Here’s What You’ll Bring to the Team
8+ years of experience in data engineering or data pipeline development
Strong hands‑on experience with Databricks, Apache Spark, and Delta Lake
Advanced proficiency in SQL and Python for building and optimizing data pipelines
Experience developing robust ETL/ELT pipelines and handling complex data transformations
Hands‑on experience with AWS cloud services (e.g., S3, EMR, Lambda, Glue, Redshift, Kinesis)
Strong understanding of data modeling and data warehousing concepts
Experience working with terabyte-scale or larger datasets in distributed environments
Knowledge of data quality frameworks, validation techniques, and monitoring practices
Familiarity with CI/CD pipelines and modern development workflows
Experience using AI‑assisted development tools for code generation, validation, or documentation
Strong problem‑solving skills with the ability to debug complex data issues
Leadership & Impact
Acts as a technical leader and subject matter expert in data pipeline development
Drives best practices for pipeline design, data transformation, and reliability
Mentors engineers and elevates team capabilities through hands‑on guidance and reviews
Leads complex, cross‑functional data initiatives with measurable business impact
Balances hands‑on execution with technical leadership
Required Education & Experience
Bachelor’s degree in Computer Science, Engineering, or related field
8+ years of relevant experience in data engineering
Proven experience leading technical initiatives or large‑scale data pipeline projects
Preferred Qualifications
Master’s degree in a technical field
Experience in large‑scale, enterprise data environments
Cloud certifications (AWS, Databricks)
Languages
- English