
Senior Data Engineer

Aloola.io
  • United States

About

About the Role

We are looking for a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. You will work cross-functionally with analytics, engineering, and business teams to ensure reliable, high-quality data flows that power decision-making across the organization.
Responsibilities

  • Design, develop, and maintain robust ELT/ETL pipelines using dbt and Apache Airflow for orchestration
  • Build and optimize data workflows using Google Cloud Dataflow for large-scale stream and batch processing
  • Manage and optimize the Amazon Redshift data warehouse, including schema design, query performance tuning, and cluster maintenance
  • Collaborate with data analysts and scientists to model data in a way that supports self-service analytics and reporting
  • Implement and enforce data quality checks, monitoring, and alerting across pipelines
  • Develop and maintain dbt models, tests, and documentation to ensure data consistency and lineage transparency
  • Partner with platform and DevOps teams on infrastructure-as-code, CI/CD pipelines, and deployment of data assets
  • Contribute to the development of data engineering best practices, standards, and reusable frameworks

Requirements

  • 5+ years of experience in a data engineering role
  • Strong proficiency with Amazon Redshift: schema design, query optimization, distribution/sort keys, and workload management
  • Hands-on experience with dbt (Core or Cloud): building models, writing tests, managing sources, and maintaining documentation
  • Experience orchestrating workflows with Apache Airflow (Cloud Composer or self-managed), including DAG development, scheduling, and dependency management
  • Experience building pipelines with Google Cloud Dataflow (Apache Beam), including both batch and streaming use cases
  • Proficiency in SQL and Python
  • Familiarity with data modeling concepts (star schema, Kimball, data vault)
  • Experience with version control (Git) and collaborative development workflows

Nice to Have

  • Experience with Google Cloud Platform (BigQuery, GCS, Pub/Sub)
  • Familiarity with Terraform or other infrastructure-as-code tools
  • Knowledge of data observability tools (Monte Carlo, Great Expectations, etc.)
  • Experience in a healthcare or regulated data environment
  • Exposure to streaming architectures (Kafka, Pub/Sub)

What We're Looking For
A self-sufficient engineer who takes ownership end-to-end — from pipeline design through production monitoring — and communicates clearly with both technical and non-technical stakeholders.

Language Skills

  • English
Note for Users

This job listing comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their website.