- Switzerland
About
For our client in the telecommunications industry in Zurich, we are looking for a highly experienced, motivated, and open-minded Data Engineer - CMDB
Your tasks:
Design, develop, and maintain data pipelines and workflows using Apache Airflow for efficient data ingestion, transformation, and loading into the CMDB
Develop and optimize PL/SQL queries and stored procedures for data manipulation and retrieval within the CMDB environment
Utilize NoSQL databases for handling and processing large volumes of configuration data
Integrate data from various sources into the CMDB using MuleSoft and other integration platforms
Conduct data reconciliation activities to ensure data accuracy and consistency across multiple systems and sources
Develop and implement inventory data models based on the Common Information Model (CIM) to accurately represent IT assets and their relationships
Design and implement Extract, Transform, and Load (ETL) processes to populate and update the CMDB with accurate and up-to-date information
Collaborate with cross-functional teams to understand data requirements and ensure the CMDB meets business needs
Troubleshoot and resolve data-related issues, ensuring data integrity and availability
Document data processes, data models, and configurations to maintain knowledge and facilitate collaboration
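The ingestion, transformation, and loading responsibilities above can be sketched as a minimal extract-transform-load flow. In production this would typically run as Apache Airflow tasks; the sketch below uses plain Python with hypothetical source rows and an in-memory CMDB, so all record and field names are illustrative assumptions.

```python
from datetime import datetime, timezone

# Hypothetical raw records, e.g. as pulled from a discovery tool's API.
SOURCE_ROWS = [
    {"hostname": "srv-01", "ip": "10.0.0.1", "os": "linux"},
    {"hostname": "srv-02", "ip": "10.0.0.2", "os": "windows"},
    {"hostname": "srv-01", "ip": "10.0.0.1", "os": "linux"},  # duplicate
]

def extract():
    """Extract: return raw rows from the (mocked) source system."""
    return list(SOURCE_ROWS)

def transform(rows):
    """Transform: normalize field names and deduplicate by hostname."""
    seen = {}
    for row in rows:
        ci = {
            "name": row["hostname"].upper(),
            "ip_address": row["ip"],
            "os_family": row["os"],
            "last_seen": datetime.now(timezone.utc).isoformat(),
        }
        seen[ci["name"]] = ci  # later records win, which deduplicates
    return list(seen.values())

def load(cmdb, cis):
    """Load: upsert configuration items into the (in-memory) CMDB."""
    for ci in cis:
        cmdb[ci["name"]] = ci
    return cmdb

cmdb = {}
load(cmdb, transform(extract()))
print(sorted(cmdb))  # -> ['SRV-01', 'SRV-02']
```

In an Airflow deployment, each of the three functions would map naturally onto its own task, with the CMDB upsert replaced by PL/SQL stored-procedure calls.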
Your profile:
Proven experience in data engineering and data modeling
Scripting skills in languages such as Python, Perl, or similar
Strong understanding of Service Asset and Configuration Management (SACM) principles and best practices with systems such as Micro Focus Asset Manager, Peregrine AssetCenter, or similar (not the ITSM part)
In-depth knowledge of the Common Information Model (CIM) from DMTF.org
Proficiency in Apache Airflow for workflow orchestration and automation
Knowledge of container solutions such as iKube 2.0 (preferred), Kubernetes, or others
Extensive experience with PL/SQL for database operations and data manipulation
Experience working with NoSQL databases (e.g., MongoDB)
Hands-on experience with MuleSoft or other integration platforms
Strong data reconciliation and data quality management skills
Expertise in inventory data modeling and implementation
Solid understanding of Extract, Transform, and Load (ETL) processes using various tools
Basic Anchor Modelling skills
Excellent problem-solving and analytical skills and strong collaboration abilities
Fluency in English is mandatory, knowledge of German is an advantage
Experience building web frontends as well as front-end and back-end loading mechanisms
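The CIM-based inventory modelling mentioned in the profile amounts to representing configuration items and typed relationships between them. The toy model below is CIM-inspired only; the class and relationship names are illustrative assumptions, not actual DMTF CIM schema classes.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """A single inventory item, e.g. a server or an installed application."""
    name: str
    ci_class: str  # illustrative, e.g. "ComputerSystem", "SoftwareElement"

@dataclass
class Relationship:
    """A typed, directed link between two configuration items."""
    source: ConfigurationItem
    target: ConfigurationItem
    kind: str  # illustrative, e.g. "InstalledOn", "DependsOn"

@dataclass
class Inventory:
    items: list = field(default_factory=list)
    relationships: list = field(default_factory=list)

    def related(self, ci, kind):
        """Return all CIs linked from `ci` by relationships of `kind`."""
        return [r.target for r in self.relationships
                if r.source is ci and r.kind == kind]

inv = Inventory()
host = ConfigurationItem("srv-01", "ComputerSystem")
app = ConfigurationItem("billing-app", "SoftwareElement")
inv.items += [host, app]
inv.relationships.append(Relationship(app, host, "InstalledOn"))
print([ci.name for ci in inv.related(app, "InstalledOn")])  # -> ['srv-01']
```

In a real CMDB these entities would live in relational or NoSQL storage, and reconciliation would compare such relationship graphs across source systems.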
Desirable skills
- NoSQL
- Python
- Perl
- Kubernetes
- MongoDB
- ETL
Professional experience
- Data Engineer
- Data Infrastructure
Language skills
- English