About
You will partner directly with business leaders and cross-functional teams to understand requirements, translate them into scalable technical solutions, and design pipelines and workflows built on modern data platforms. This position requires strong expertise in SQL, PySpark, and Databricks, including job scheduling and orchestration using tools such as Control-M or similar.
Key Responsibilities
Technical Leadership & Ownership (20%)
- Provide technical guidance and mentorship to offshore engineers, ensuring quality, consistency, and adherence to best practices.
- Act as the lead engineer on critical projects, setting standards for code quality, architecture, and delivery.
- Support planning and prioritization for a lean engineering team, with the opportunity to formally grow into managing a small team.

Hands-On Engineering (80%)
- Design, develop, and maintain scalable data pipelines using Databricks, SQL, and PySpark.
- Build and optimize ETL/ELT workflows for the ingestion, transformation, and processing of large datasets.
- Manage Databricks jobs, including scheduling, automation, and orchestration using Control-M or a similar scheduling platform.
- Develop high-quality, production-ready solutions that support analytics, reporting, and operational data needs.
- Diagnose and remedy pipeline issues, performance bottlenecks, and data quality challenges.

Collaboration & Business Engagement
- Work directly with business stakeholders to gather requirements, understand use cases, and translate needs into robust technical designs.
- Partner with cross-functional teams, including product, analytics, and architecture groups, to deliver integrated, scalable solutions.
- Communicate technical concepts clearly to both technical and non-technical audiences.

Architecture & Optimization
- Support data modeling, schema design, and performance tuning across cloud and on-prem data systems.
- Implement data management best practices: governance, observability, documentation, and operational standards.
- Continuously assess and improve pipelines, architecture, and tooling to enhance reliability and speed.

Qualifications
- 15+ years of IT experience, with 8+ years in Data Engineering or related fields.
- Deep expertise with SQL, PySpark, and Databricks, including job orchestration and scheduling.
- Proven experience building and optimizing large-scale ETL/ELT pipelines.
- Strong understanding of cloud data ecosystems (AWS preferred) and data warehousing platforms such as Redshift or Snowflake.
- Experience working with SQL/NoSQL databases and modern data integration patterns.
- Bonus: experience with Fivetran or similar ingestion tools.
- Familiarity with Hadoop ecosystem tools and ETL platforms is a plus.
- Excellent communication skills and a demonstrated ability to interact with business stakeholders.
- Prior experience in insurance or regulated industries is advantageous.

Who You Are
- A hands-on technical lead who enjoys building and delivering high-quality data solutions.
- Someone who can work independently, drive initiatives, and take ownership from concept to deployment.
- A natural mentor who supports and guides offshore resources while still owning the most complex engineering tasks.
- An effective communicator who can gather business requirements and translate them into strong technical plans.
- A problem-solver who thrives in evolving environments and stays current with modern data engineering practices.

Compensation
Up to $150,000 annually + bonus. Compensation is based on a range of factors, including relevant experience, knowledge, skills, and other job-related qualifications.
Language Skills
- English
Note for Users
This job listing comes from a TieTalent partner platform. Click "Jetzt Bewerben" (Apply Now) to submit your application directly on their website.