This job posting is no longer available
Senior Data Engineer
- Atlanta, Georgia, United States
About
Role: Senior Data Engineer
Experience: 12+ Years
Location: Atlanta, GA
Employment Type: W2 Only
Job Summary:
We are seeking a highly experienced Senior Data Engineer with 12+ years of hands-on expertise in designing, building, and optimizing large-scale data platforms. The ideal candidate will have deep knowledge of modern data engineering, cloud technologies, data warehousing, and distributed processing frameworks. You will play a key role in architecting end-to-end data solutions, building robust pipelines, and ensuring data quality, reliability, and scalability across the organization.
Key Responsibilities:
Design, build, and maintain scalable, secure, and high-performance ETL/ELT pipelines for batch and real-time data processing.
Architect and optimize data lake, data warehouse, and analytics platforms on cloud environments (AWS/Azure/GCP).
Lead the development of data models, schemas, and storage structures to support analytics, BI, and ML use cases.
Implement and enhance data governance, cataloging, lineage, and quality frameworks.
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions.
Optimize data workflows, query performance, and storage costs across distributed systems.
Integrate data from various structured and unstructured sources using modern ETL tools and frameworks.
Mentor junior engineers, review code, and enforce engineering best practices.
Ensure security, compliance, and reliability of data platforms in production environments.
Troubleshoot production issues, perform root cause analysis, and implement long-term fixes.
Required Skills & Qualifications:
12+ years of experience in data engineering, data architecture, or related fields.
Strong expertise in SQL, data modeling, and performance tuning.
Deep hands-on experience with cloud platforms (AWS / Azure / GCP) and managed data services such as:
AWS: Redshift, Glue, EMR, S3, Lambda, Kinesis
Azure: Data Factory, Synapse, Databricks, ADLS
GCP: BigQuery, Dataflow, Composer
Proficiency in Python or Scala for data engineering workloads.
Strong background in distributed data processing frameworks:
Apache Spark, Kafka, Hive, Airflow, etc.
Experience with modern data lakehouse architectures (Delta Lake / Iceberg / Hudi).
Hands-on experience with ETL/ELT tools, workflow orchestration, and automation.
Strong understanding of CI/CD, DevOps concepts, and version control (Git).
Experience with Docker/Kubernetes is a plus.
Language skills
- English
This posting was published by one of our partners. You can view the original posting here.