About
We're looking for a Senior Software Engineer to join us and work with our client's Data Platform team. Our client is a leading healthcare technology company dedicated to transforming the patient and provider experience through innovative, data-driven solutions. You will architect and build core services, automation tools, and integrations that power our client's data ecosystem. You'll own high-impact platform components, improve pipeline reliability and observability, and partner closely with data engineering, analytics, and DevOps to advance the scalability and developer experience of our client's data platform.
What You'll Do
- Build Automation & Tooling: Develop scalable backend services, APIs, and internal tools to automate data platform workflows (e.g., data onboarding, validation, pipeline orchestration, schema tracking, quality monitoring).
- Data Platform Integration: Integrate tools with core data infrastructure, building pipelines (Airflow, Spark, dbt, Kafka, Snowflake, or similar) to expose capabilities via APIs and UIs.
- Observability & Governance: Build visualization and monitoring components for data lineage, job health, and quality metrics.
- Collaboration: Work cross-functionally with data engineering, product, and DevOps teams to define requirements and deliver end-to-end solutions.
What We're Looking For
You must live in the USA and have all the documentation necessary to work under an independent contractor agreement.
Must-Have Experience
- 7+ years of experience in data engineering or software development, with at least 5 years building production-grade data or platform services.
- Strong programming skills in Python and SQL on at least one major data platform (Snowflake, BigQuery, Redshift, or similar).
- Experience developing tooling for schema evolution, data contracts, and developer self-service.
- Deep experience with streaming, distributed compute, or S3-based table formats (Spark, Kafka, Iceberg/Delta/Hudi).
- Experience with schema governance, metadata systems, and data quality frameworks.
- Understanding of orchestration tools (Airflow, Dagster, Prefect, etc.).
- Solid grasp of CI/CD and Docker.
- At least 2 years of experience with AWS.
- Experience building data pipelines using dbt.
Nice-to-haves
- Experience with data observability, data catalog, or metadata management tools.
- Experience working with healthcare data (X12, FHIR).
- Proven experience in data migration projects (from legacy technologies to modern ones).
- Experience building internal developer platforms or data portals.
- Understanding of authentication/authorization (OAuth2, JWT, SSO).
Our Recruitment Process
- Technical interview (45 min)
- Screening interview with the client's hiring manager
- Client technical interview
- A follow-up technical interview
What We Offer
- Fully remote within the USA, full-time (40h/week)
- Work hours: US Eastern time office hours
Languages
- English