
Data Engineer

NationsBenefits
  • United States
Apply Now

About

Company Overview

NationsBenefits is recognized as one of the fastest-growing companies in America and a healthcare fintech provider of supplemental benefits, flex cards, and member engagement solutions. We partner with managed care organizations to provide innovative healthcare solutions that drive growth, improve outcomes, reduce costs, and bring value to their members. Through our comprehensive suite of supplemental benefits, fintech payment platforms, and member engagement solutions, we help health plans deliver high-quality benefits that address the social determinants of health and improve member health outcomes and satisfaction. Our compliance-focused infrastructure, proprietary technology systems, and premier service delivery model allow our health plan partners to deliver high-quality, value-based care to millions of members. We offer a fulfilling work environment that attracts top talent and encourages all associates to deliver premier service to internal and external customers alike. Our goal is to transform the healthcare industry for the better! We provide career advancement opportunities from within the organization across multiple locations in the US, South America, and India.
Description

We are seeking a Data Engineer to join our Data Platforms team and focus on building and maintaining the critical data pipelines that power our data-driven organization. In this role, you will work with modern data stack technologies, including Databricks, Airflow, and Azure cloud services, to deliver reliable, high-quality data products that support business analytics, reporting, and decision-making across the enterprise. You will collaborate closely with data platform engineers, architects, and business stakeholders to design, implement, and optimize ETL/ELT workflows that ingest, transform, and deliver data at scale. This role emphasizes hands-on development of data pipelines using Python and SQL, working within our established metadata-driven frameworks and cloud-native infrastructure. The ideal candidate is passionate about data engineering fundamentals, comfortable with large-scale data processing, and committed to delivering reliable data products in a regulated healthcare environment. You will contribute to a collaborative team environment where data quality, operational excellence, and continuous improvement are paramount.
Key Responsibilities

  • Design, build, and maintain ETL/ELT pipelines using metadata-driven frameworks within Airflow, Databricks, and our broader data platform stack (see the illustrative sketch after this list).
  • Implement data ingestion processes from various source systems into our data platform, including databases, APIs, file-based systems, and streaming sources.
  • Build and optimize data delivery mechanisms to support analytics, reporting, and downstream data products consumed by business users.
  • Collaborate with team leads, architects, and stakeholders to implement data solutions that align with architectural standards and business requirements.
  • Monitor and troubleshoot data pipelines to ensure reliable, timely data delivery with appropriate error handling and alerting.
  • Implement comprehensive data quality and integrity checks throughout the ETL/ELT process to ensure reliable data delivery.
  • Participate in code reviews and contribute to team knowledge sharing and best practices around data engineering patterns.
  • Support data consumers by optimizing data access patterns and query performance on cloud-native table formats.
  • Write high-quality, maintainable code in Python and SQL that follows software engineering best practices.
  • Maintain comprehensive documentation for data pipelines, transformations, and data flows.
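As a rough illustration of the pipeline and data-quality work described above (not the team's actual framework), the following minimal PySpark sketch ingests a file drop, applies basic integrity checks, and appends to a Delta table. The paths, table, and column names (claims, member_id, claim_id) are hypothetical, and the Delta write assumes a Databricks or Delta Lake-enabled Spark runtime.

    # Illustrative only: paths, tables, and columns below are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("claims_ingest_sketch").getOrCreate()

    # Ingest a raw file drop (hypothetical landing path).
    raw = (
        spark.read
        .option("header", "true")
        .csv("/mnt/landing/claims/2024-06-01/")
    )

    # Basic integrity checks before anything lands in the curated zone:
    # reject the batch if required keys are missing or duplicated.
    total = raw.count()
    null_keys = raw.filter(F.col("member_id").isNull()).count()
    dup_keys = total - raw.dropDuplicates(["claim_id"]).count()

    if null_keys > 0 or dup_keys > 0:
        raise ValueError(
            f"Quality gate failed: {null_keys} null member_ids, "
            f"{dup_keys} duplicate claim_ids"
        )

    # Light transformation, then append to a Delta table
    # (assumes a Databricks / Delta Lake runtime is available).
    curated = raw.withColumn("ingested_at", F.current_timestamp())
    curated.write.format("delta").mode("append").saveAsTable("curated.claims")

In practice the quality gate would come from the platform's data quality framework rather than hand-rolled counts; the point of the sketch is the pattern of validating a batch before it reaches downstream consumers.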
Required Skills & Qualifications: • Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field. • 3-5 years of
data engineering experience
with hands-on expertise in
ETL/ELT development and data pipeline implementation. • Strong proficiency in
Python and SQL
for data processing, transformation, and analysis. • Experience with
workflow orchestration tools
such as
Airflow , or similar technologies for scheduling and managing data pipelines. • Strong Hands-on experience with
PySpark. • Hands-on experience with cloud data platforms, preferably Azure, and modern data stack technologies. • Familiarity with
database systems (SQL Server, PostgreSQL, or similar)
and modern table formats such as
Delta Lake or Iceberg. • Strong understanding of data quality frameworks and experience implementing data validation and integrity checks. • Experience with version control systems (Git) and familiarity with DevOps processes and CI/CD concepts. • Excellent problem-solving skills and ability to work collaboratively in a team environment. • Strong communication skills with ability to explain technical concepts to diverse audiences.
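For the orchestration and metadata-driven framework experience called out above, a hedged sketch of what such an Airflow pipeline can look like is below: tasks are generated from a small metadata list rather than hand-written one by one. The SOURCES entries and the ingest_source callable are hypothetical stand-ins (in practice the callable would typically submit a Databricks or PySpark job), and the schedule argument assumes Airflow 2.4 or later.

    # Illustrative only: SOURCES and ingest_source are hypothetical stand-ins
    # for whatever metadata-driven framework a team actually uses.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Pipeline definitions driven by metadata rather than hand-written tasks.
    SOURCES = [
        {"name": "claims", "path": "/mnt/landing/claims/"},
        {"name": "members", "path": "/mnt/landing/members/"},
    ]

    def ingest_source(name: str, path: str) -> None:
        # In practice this would submit a Databricks/PySpark job for the source.
        print(f"Ingesting {name} from {path}")

    with DAG(
        dag_id="metadata_driven_ingest_sketch",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        for src in SOURCES:
            PythonOperator(
                task_id=f"ingest_{src['name']}",
                python_callable=ingest_source,
                op_kwargs={"name": src["name"], "path": src["path"]},
            )

Adding a new source then means adding one metadata entry, not writing a new DAG, which is the usual motivation for this pattern.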
Preferred Qualifications

  • Experience with Databricks and Unity Catalog for data lakehouse implementations.
  • Knowledge of streaming data processing and real-time data pipelines using Kafka, EventHub, or similar technologies.
  • Experience working in regulated industries or with sensitive data, particularly HIPAA compliance knowledge.
  • Familiarity with Infrastructure as Code tools such as Terraform for managing data infrastructure.
  • Experience with dbt (data build tool) for analytics engineering and data transformation.
  • Knowledge of data modeling principles and dimensional modeling techniques.
  • Understanding of data governance, metadata management, and data cataloging practices.
  • Experience with monitoring and observability tools for data pipeline reliability.

Language Skills

  • English
Notice to Users

This offer comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.