This job offer is no longer available
About
Our client is a private investment firm combining capital, strategic insight, and engineering capabilities to build and scale complex businesses. They manage diverse asset strategies and collaborate closely with portfolio companies on operational, simulation, and engineering infrastructure.
Responsibilities
- Lead the design, development, and maintenance of scalable data pipelines, ETL/ELT processes, and data models
- Design and implement reusable data processing frameworks that support different data types and patterns
- Develop and optimize data storage solutions, including data warehouses, data lakes, and NoSQL databases on cloud platforms
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver solutions
- Ensure data quality, governance, and security through best practices and robust monitoring systems
- Ensure efficient data ingestion, storage, and processing using big data tools
Requirements
- Bachelor's degree in computer science, engineering, or a related field
- 7-10 years of experience as a data engineer or in a similar role
- Experience with data warehousing, ETL/ELT processes, and data modeling
- Proficiency in Python programming
- Hands-on experience with big data technologies such as Kafka, Spark, HDFS, Flink, Trino, and Iceberg
- Experience with the AWS cloud platform
- Strong SQL skills and experience with both relational and NoSQL databases
- Experience with high-performance file systems
Salary Range
$160,000-$210,000
Languages
- English