
Data Engineer

Lorven Technologies
  • United States

About

Data Engineer
Location: Atlanta, GA (Day 1 onsite, 5 days/week in office; local candidates only; face-to-face interview required)
Experience: 13+ years (required)
Duration: 12 Months
Job Type: Contract
Job Overview: We are seeking a skilled Data Engineer with strong expertise in SQL, ETL development, and hands-on experience with Rhine (or similar metadata-driven orchestration frameworks). The ideal candidate will play a key role in building scalable data pipelines, managing data transformation workflows, and supporting analytics initiatives across the enterprise.
Mandatory Skills:
DataStage (6+ years required, including on the current project), Microsoft SQL Server, data warehousing, ETL, SSIS, PowerShell, Python, Informatica, Talend. Rhine is not mandatory.
Key Responsibilities:
  • Design, develop, and maintain ETL pipelines using best practices and enterprise data architecture standards.
  • Write advanced SQL queries for data extraction, transformation, and analysis from structured and semi-structured data sources.
  • Work with Rhine-based pipelines to enable dynamic, metadata-driven data workflows.
  • Collaborate with data architects, analysts, and business stakeholders to understand data requirements and implement robust solutions.
  • Ensure data quality, consistency, and integrity across systems.
  • Participate in performance tuning, optimization, and documentation of data processes.
  • Troubleshoot and resolve issues in data pipelines and workflows.
  • Support deployment and monitoring of data jobs in production environments.
Required Qualifications:
  • Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
  • Strong hands-on experience with SQL (complex joins, window functions, CTEs, performance tuning).
  • Proven experience in ETL development using tools like Informatica, Talend, DataStage, or custom Python/Scala frameworks.
  • Familiarity with or experience in using Rhine for metadata-driven pipeline orchestration.
  • Working knowledge of data warehousing concepts and dimensional modeling.
  • Exposure to cloud platforms (AWS, Azure, or GCP) and tools such as Snowflake, Redshift, or BigQuery is a plus.
  • Experience with version control (e.g., Git) and CI/CD for data jobs.
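To illustrate the kind of SQL skills the qualifications call for (CTEs combined with window functions), here is a minimal, self-contained sketch using Python's built-in sqlite3 module. The table, column names, and data are hypothetical and exist only for this example; they are not part of the role's actual systems.

```python
import sqlite3

# In-memory database with a hypothetical orders table (illustration only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, '2024-01-05', 100.0),
        (1, '2024-02-10', 150.0),
        (2, '2024-01-20', 200.0);
""")

# A CTE narrows the data to 2024 orders; a window function then ranks each
# customer's orders by date, e.g. to find every customer's first order.
rows = conn.execute("""
    WITH orders_2024 AS (
        SELECT customer_id, order_date, amount
        FROM orders
        WHERE order_date >= '2024-01-01'
    )
    SELECT customer_id, order_date, amount,
           ROW_NUMBER() OVER (
               PARTITION BY customer_id ORDER BY order_date
           ) AS order_rank
    FROM orders_2024
    ORDER BY customer_id, order_rank
""").fetchall()

for row in rows:
    print(row)
```

Window functions require SQLite 3.25+, which ships with any recent Python; the same query shape carries over to Microsoft SQL Server with minor dialect changes.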

Languages

  • English
Note for users

This job listing comes from a TieTalent partner platform. Click "Apply now" to submit your application directly on their website.