About
Key Responsibilities

Leadership & Mentorship
- Lead and mentor a team of Data Engineers, providing technical direction, coaching, and feedback to support skill development and high-quality delivery.
- Champion engineering best practices, code quality, and scalable design principles.
- Support team members through architectural decisions, troubleshooting, and solution planning.

Hands-On Data Engineering
- Design, develop, and enhance end-to-end ETL/ELT pipelines using Databricks, Python, PySpark, and SQL.
- Build reliable systems for large-scale data ingestion, transformation, processing, and analytics.
- Develop and maintain real-time and batch data pipelines, leveraging cloud-native tools and distributed processing frameworks.

Collaboration & Strategy
- Partner with product, engineering, and business teams to understand data needs and translate them into effective technical solutions.
- Contribute to architectural roadmaps and participate in long-term planning for data platforms and infrastructure.
- Interpret complex processes and recommend solutions that improve speed, scalability, and overall data quality.

Data Architecture & Optimization
- Assist in data modeling, schema design, and database performance optimization across relational, NoSQL, and big data environments.
- Implement best practices for data governance, metadata management, and data lifecycle management.
- Ensure pipeline reliability through monitoring, alerting, and proactive troubleshooting.

Continuous Improvement & Innovation
- Stay informed about emerging trends in data engineering, cloud technologies, and distributed systems.
- Identify opportunities to improve existing architectures and introduce modern tools and techniques.
- Maintain documentation, participate in code reviews, and uphold engineering standards across the team.
Qualifications
- 15+ years of IT experience, including 8+ years in data engineering, data architecture, or similar fields.
- Strong hands-on experience with Databricks, PySpark, Python, and advanced SQL.
- Background working with multiple data technologies such as SQL/NoSQL databases, Hadoop ecosystem tools, and ETL platforms.
- Experience developing large-scale data solutions both on-premises and in cloud environments (AWS preferred).
- Familiarity with tools and services such as Redshift, Snowflake, RDS, Informatica/Syncsort, Jenkins, and real-time AWS components (SQS/SNS, OpenSearch).
- Understanding of mainframe or legacy data structures (VSAM) is a plus.
- Proven ability to design scalable, performant data architectures and pipelines.
- Strong communication skills with the ability to collaborate across technology and business teams.
- Experience working with enterprise stakeholders, ideally in an insurance or other highly regulated industry.
Who You Are
- A hands-on engineering leader who enjoys solving complex problems and helping others grow.
- A clear, confident communicator who can engage technical and non-technical audiences.
- A collaborative team player with strong emotional intelligence and the ability to build trust across functions.
- Adaptable, curious, and able to navigate evolving business priorities with ease.
- Committed to building high-quality, sustainable, and scalable data solutions.
Compensation
Up to $150,000 a year + bonus. Compensation is based on a range of factors, including relevant experience, knowledge, skills, and other job-related qualifications.
Languages
- English
Notice for Users
This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.