About
The Enterprise Data Services Department's Personal Insurance team is seeking a hands‑on Sr Staff Data Engineer to build and scale its data assets on the Snowflake and AWS platforms. The role centers on technical leadership: integrating data from new sources and curating and transforming it into high‑quality data products for actionable insights, using a mix of solutions spanning AI and cloud technologies.
Ideal candidates bring deep expertise in data engineering frameworks and tools, proficiency in programming languages, and experience with DevOps / DataOps pipelines, cloud platforms, and agile methodologies. Strong problem‑solving, communication, and collaboration skills are essential, along with a proactive mindset and the ability to thrive in a complex, fast‑paced environment.
Key Responsibilities
Design, develop, and optimize highly scalable batch and near‑real‑time data pipelines supporting structured and semi‑structured data sources (XML, JSON, Parquet).
Lead the delivery of curated, analytics‑ready data products supporting reporting, advanced analytics, regulatory, and machine learning use cases.
Implement robust error handling, reconciliation, restartability, and performance optimization to ensure platform reliability and data integrity.
Partner with the Data Governance team on metadata management, data lineage, data quality monitoring, and data privacy controls.
Evaluate and apply AI‑assisted engineering tools to improve developer productivity, accelerate delivery, and enhance data solutions.
Provide technical mentorship while partnering with architects and stakeholders to influence and deliver scalable, trusted AI and data pipelines grounded in best practices and reusable standards.
Qualifications, Required Skills & Experience
Candidates must be authorized to work in the US without company sponsorship. The company will not support the STEM OPT I‑983 Training Plan endorsement for this position.
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related discipline.
5+ years of progressive experience in data engineering, with significant hands‑on expertise developing and deploying large‑scale data and analytics applications on cloud platforms such as AWS and Snowflake.
5+ years of hands‑on experience with Python and PySpark / Spark for data ingestion, transformation, and pipeline development.
Deep hands‑on experience with Snowflake, including SQL development, ELT design, performance optimization, and semi‑structured data handling.
Solid experience working with disparate data sources: structured and semi‑structured data (flat files, XML, JSON, Parquet) as well as unstructured data.
Solid experience with version control, CI/CD pipelines, and DevOps tools such as GitHub, Jenkins, Nexus, and uDeploy.
A strong background in data profiling, data modeling, and data governance concepts is key to this role.
Nice to have: Certifications in AWS data and analytics services, AI, and Snowflake.
Nice to have: Experience using AI‑assisted development tools to improve productivity in SQL development, data pipeline creation, testing, and documentation.
Nice to have: Experience in the Insurance industry and policy administration data environments.
Nice to have: Experience with Informatica Data Management Cloud.
This role will have a hybrid work schedule, with the expectation of working in an office (Columbus, OH; Chicago, IL; Hartford, CT; or Charlotte, NC) 3 days a week (Tuesday through Thursday).
Compensation
The listed annualized base pay range for this role is: $125,760 – $188,640.
Equal Opportunity Employer/Sex/Race/Color/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age