This job posting is no longer available.

Data Engineer

Precision Technologies
North Carolina, United States

About

Job Title: Data Engineer (8+ Years)

Location: North Carolina (Onsite)

Employment: Full-Time / W2 (No C2C)

Job Summary:
We are seeking an experienced Senior Data Engineer with 8+ years of hands-on experience designing, building, and optimizing scalable, high-performance data platforms. The ideal candidate will have strong expertise in data ingestion, transformation, data warehousing, cloud platforms, big-data technologies, and analytics enablement. This role involves close collaboration with Data Architects, Analytics, Data Science, Product, and Engineering teams to deliver reliable, secure, and analytics-ready data solutions.

Key Responsibilities:

  • Design, develop, and maintain end-to-end data pipelines for structured and semi-structured data using batch and real-time processing frameworks.
  • Build and optimize ETL/ELT pipelines using cloud-native and big-data tools to ingest data from databases, APIs, files, event streams, and third-party sources.
  • Develop data transformation logic using SQL, Python, PySpark, and Spark SQL to support analytics, reporting, and data science workloads.
  • Implement and manage cloud-based data platforms leveraging Azure, AWS, or GCP, including Data Lakes, Lakehouse architectures, and Data Warehouses.
  • Design and optimize Data Lake (Bronze/Silver/Gold) layers and Delta/Parquet formats, applying partitioning strategies and performance-tuning techniques (a PySpark sketch of a Bronze-to-Silver job follows this list).
  • Build and maintain Data Warehouses and analytical models (star/snowflake schemas) to support BI, dashboards, and regulatory reporting.
  • Work with streaming data technologies such as Kafka, Event Hubs, Kinesis, or Pub/Sub to support near real-time ingestion and processing.
  • Ensure data quality, validation, reconciliation, and lineage, implementing robust error handling, logging, and monitoring frameworks.
  • Collaborate with Data Analysts, BI teams, and Data Scientists to deliver analytics-ready datasets and curated views.
  • Implement security, governance, and compliance controls, including RBAC, encryption, masking, auditing, and metadata management.
  • Support CI/CD pipelines, version control, and automated deployments for data engineering solutions.
  • Participate in Agile/Scrum ceremonies, providing accurate estimates and documentation and contributing to continuous improvement.
  • Troubleshoot and resolve performance bottlenecks, data issues, and production incidents.
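
As a concrete illustration of the pipeline and medallion-layer responsibilities above, here is a minimal PySpark sketch of a Bronze-to-Silver batch job. It assumes a Delta-enabled Spark session; the /lake/... paths, the orders dataset, and all column names are hypothetical placeholders, not details from this posting.

```python
# Minimal Bronze-to-Silver batch transformation sketch (hypothetical dataset).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("bronze_to_silver_orders").getOrCreate()

# Read raw landings from the (hypothetical) Bronze layer.
bronze = spark.read.format("delta").load("/lake/bronze/orders")

# Keep only the latest record per business key.
latest = Window.partitionBy("order_id").orderBy(F.col("order_ts").desc())

silver = (
    bronze
    # Basic validation: drop records missing required keys.
    .filter(F.col("order_id").isNotNull() & F.col("order_ts").isNotNull())
    # Deduplicate on the business key.
    .withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)
    .drop("rn")
    # Standardize types and derive a partition column.
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Write analytics-ready rows to the Silver layer as partitioned Delta.
(silver.write.format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .save("/lake/silver/orders"))
```

The same shape extends to Gold-layer aggregates, or to streaming sources by swapping spark.read for spark.readStream.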

Required Skills:

  • Strong proficiency in SQL, Python, PySpark, and Spark for large-scale data processing.
  • Hands-on experience with cloud platforms: Azure (ADF, ADLS, Synapse, Databricks, Fabric), AWS (S3, Glue, Redshift, EMR), or GCP (BigQuery, Dataflow, Dataproc).
  • Solid experience with Data Warehousing concepts, dimensional modeling, and analytical data design (a star-schema sketch follows this list).
  • Experience building ETL/ELT pipelines using tools such as ADF, SSIS, Glue, Airflow, Informatica, or dbt.
  • Knowledge of Big Data ecosystems, including HDFS, Hive, Spark, Kafka, and distributed computing concepts.
  • Familiarity with BI and reporting tools such as Power BI, Tableau, Looker, or SSRS.
  • Strong understanding of data quality, governance, metadata, and master data management.
  • Experience working in Agile environments with tools like JIRA, Confluence, and Git.
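
To make the dimensional-modeling requirement concrete, here is a short star-schema sketch, expressed as Spark SQL run from Python to keep one language across these examples: a fact table keyed to a dimension by a surrogate key, plus a typical BI rollup. All table and column names are hypothetical, and a Delta-enabled session is assumed (plain Parquet tables would work the same way).

```python
# Star-schema sketch: hypothetical dimension/fact tables and a BI-style rollup.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

# Dimension: one row per customer, addressed by a surrogate key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key  BIGINT,
        customer_name STRING,
        region        STRING
    ) USING DELTA
""")

# Fact: one row per order, joined to dimensions only via surrogate keys.
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_sales (
        order_id     STRING,
        customer_key BIGINT,
        order_date   DATE,
        amount       DECIMAL(18,2)
    ) USING DELTA
    PARTITIONED BY (order_date)
""")

# Typical dashboard query: join fact to dimension, aggregate by an attribute.
spark.sql("""
    SELECT d.region, SUM(f.amount) AS total_sales
    FROM fact_sales f
    JOIN dim_customer d ON f.customer_key = d.customer_key
    GROUP BY d.region
""").show()
```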

Preferred Qualifications:

  • Experience with Lakehouse architectures and Delta Lake (a Delta MERGE sketch follows this list).
  • Exposure to Microsoft Fabric, Synapse Analytics, and modern analytics platforms.
  • Knowledge of DevOps and CI/CD for data platforms (Azure DevOps, GitHub Actions, Jenkins).
  • Experience with containerization and orchestration (Docker, Kubernetes).
  • Background in regulated industries such as Banking, Healthcare, or Insurance.
  • Certifications such as Azure Data Engineer, AWS Data Analytics, or GCP Data Engineer are a plus.
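
For the Lakehouse/Delta Lake item above, a brief sketch of a Delta MERGE (upsert), one of the operations that distinguishes a Lakehouse table from plain Parquet. It assumes the delta-spark package is installed and configured on the session; the paths and the order_id key are hypothetical.

```python
# Hypothetical incremental upsert into a Silver Delta table via MERGE.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta_merge_demo").getOrCreate()

target = DeltaTable.forPath(spark, "/lake/silver/orders")
updates = spark.read.format("delta").load("/lake/bronze/orders_increment")

# Update matching orders in place; insert orders seen for the first time.
(target.alias("t")
       .merge(updates.alias("u"), "t.order_id = u.order_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```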

Language Skills

  • English