About
Accepting applications until:
1 May 2026
Job Description
Your role: Data Engineer

A hands-on role building scalable data infrastructure that powers AI-driven products and audience intelligence.

Key Responsibilities

As a Data Engineer at Global, you will:
• Data Platform & Pipeline Engineering (60%): Design, build and maintain scalable batch and near real-time pipelines across ingestion, transformation and serving layers. Develop reusable data models and optimise performance, reliability and cost.
• Platform Evolution & Engineering Excellence (20%): Shape the Global:IQ data platform through best practices in architecture, tooling, CI/CD and infrastructure as code. Create reusable components and maintain clear technical documentation.
• Quality & Governance (10%): Implement robust data validation, testing, lineage and observability to ensure high-quality, trusted datasets. Support governance and privacy-conscious data handling.
• Collaboration & Enablement (10%): Partner with Data Science, MLOps, Product and commercial teams to deliver production-ready data solutions. Support and mentor others while communicating clearly with stakeholders.

What You'll Love About This Role
• Think Big: Build a data platform from the ground up that will scale with a cutting-edge AI and ML product.
• Own It: Take responsibility for production-grade data systems that directly power targeting, optimisation and measurement.
• Keep it Simple: Apply pragmatic engineering to deliver reliable, maintainable solutions without over-engineering.
• Better Together: Work in a highly collaborative, cross-functional team spanning technical and commercial expertise.

What Success Looks Like
In your first few months, you'll have:
• Developed a strong understanding of the Global:IQ platform and its core use cases
• Successfully onboarded key datasets with robust ingestion and quality standards
• Delivered reliable pipelines supporting live production use cases
• Established or improved data engineering standards and best practices
• Built strong working relationships across Data, Product and commercial teams
• Identified opportunities to improve scalability, reliability and efficiency

What You'll Need
• Programming & Data Skills: Strong Python and SQL skills, with experience building production-grade data pipelines
• Data Platform Experience: Hands-on experience with modern data tools (e.g. Snowflake, Airflow, dbt) and cloud environments (preferably AWS)
• Engineering Best Practice: Knowledge of CI/CD, testing, version control and infrastructure as code
• Data Quality & Governance: Understanding of observability, validation and maintaining reliable data systems
• Collaboration & Communication: Ability to translate business and data science needs into scalable solutions and communicate clearly with stakeholders
• Mindset & Approach: Pragmatic, ownership-driven and curious, with a passion for building impactful data products
Languages
- English