About
Location: Salt Lake City, Utah (5 days onsite)
Experience: 8+ years

We are seeking an ETL Data Engineer with strong experience building and supporting large-scale data pipelines. The role involves designing, developing, and optimizing ETL processes using tools such as DataStage, SQL, Python, and Spark. You will work closely with architects, engineers, and business teams to create efficient data solutions. The job includes troubleshooting issues, improving performance, and handling data migration and transformation tasks. You will also support Test, QA, and Production environments while ensuring smooth deployments. Strong skills in databases, scripting, and version control are essential for this position.

Responsibilities
- Collaborate with architects, engineers, analysts, and business teams to develop and deliver enterprise-level data platforms that support data-driven solutions.
- Apply strong analytical, organizational, and problem-solving skills to design and implement technical solutions based on business requirements.
- Develop, test, and optimize software components for data platforms, improving performance and efficiency.
- Troubleshoot technical issues, identify root causes, and recommend effective solutions.
- Work closely with data operations teams to deploy updates into production environments.
- Provide support across Test, QA, and Production environments and perform additional tasks as needed.

Required Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, or a related discipline.
- Strong experience in Data Warehousing, Operational Data Stores, ETL tools, and data management technologies.
- 8+ years of hands-on expertise in ETL (IBM DataStage), SQL, UNIX/Linux scripting, and Big Data distributed systems.
- 4+ years of experience with Teradata (Vantage), SQL Server, Greenplum, Hive, and delimited text data sources.
- 3+ years of experience with Python programming, orchestration tools, and ETL pipeline development using Python/Pandas.
- Deep understanding of data migration, data analysis, data transformation, large-volume ETL processing, database modeling, and SQL performance tuning.
- Experience creating DDL scripts, stored procedures, and database functions.
- Practical experience with Git for version control and release processes.
- Familiarity with the Spark framework, including RDDs using Python or Scala.

Seniority level
Mid-Senior level

Employment type
Full-time

Job function
Information Technology

Company: Rockwoods Inc
Languages
- English
Notice for Users
This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.