About
Mission
You will be responsible for designing, developing, and maintaining the infrastructure and systems required for data storage, processing, and analysis. You will play a crucial role in building and managing the data pipelines that enable efficient and reliable data integration, transformation, and delivery for all data users across the enterprise.
What your responsibilities will be
Collaborate with data scientists and analysts to optimize models and algorithms for data quality, security, and governance.
Ensure data consistency and integrity during the integration process, performing data validation and cleaning as needed.
Transform raw data into a usable format by applying data cleansing, aggregation, filtering, and enrichment techniques.
Optimize data pipelines and data processing workflows for performance, scalability, and efficiency.
Monitor and tune data systems, identify and resolve performance bottlenecks, and implement caching and indexing strategies to enhance query performance.
Implement data quality checks and validations within data pipelines to ensure the accuracy, consistency, and completeness of data.
Take authority, responsibility, and accountability for exploiting the value of enterprise information assets and the analytics used to render insights for decision-making, automated decisions, and augmentation of human performance.
Establish the governance of data and algorithms used for analysis, analytical applications, and automated decision-making.
Who you are
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skills, education, and/or ability required. Reasonable accommodation may be made to enable individuals with disabilities to perform the essential functions.
Technical Skills
Bachelor's degree in computer science, data science, software engineering, information systems, or a related quantitative field; Master's degree preferred.
Advanced English language skills (spoken and written).
Experience in Data Modeling is a plus.
Minimum of 5 years of experience in data management disciplines—including data integration, modeling, optimization, and data quality—or in other areas directly related to data engineering responsibilities. Experience with SAP BW, SAP Datasphere, or Microsoft Fabric is highly valued.
Deep knowledge of Apache technologies such as Spark to build scalable and efficient data pipelines.
Experience with database technologies such as SQL and Oracle. Experience with NoSQL databases, including graph databases, will be a plus.
Hands‑on experience in programming languages such as Python.
Ability to design, build, and deploy data solutions that capture, explore, transform, and utilize data to support AI, ML, and BI.
Soft Skills
Excellent business acumen and interpersonal skills; able to work across business lines at a senior level to influence and effect change to achieve common goals.
Excellent communication skills and the ability to work under pressure.
Ability to collaborate with cross‑functional teams and align diverse stakeholders.
Problem‑solving and critical‑thinking abilities.
Benefits
Contract: permanent position.
Hybrid work model under the Flexibility for U Program.
Flexible schedule: Monday-Thursday, start between 7:00 and 10:00 and finish between 16:00 and 19:00; Friday 8:00-15:00 (with the same flexible start time).
Grifols is an equal opportunity employer.
Language skills
- English
Note for users
This job posting comes from a TieTalent partner platform. Click "Apply now" to submit your application directly on their website.