Data Engineer - Crypto Market Data Infrastructure
Gunvor Group
- London, England, United Kingdom
About
Build and operate a robust, low-latency, fault-tolerant market data platform that powers trading and analytics. Within our broader data engineering unit, you will champion standardization and reuse, delivering clean, consistent crypto market data and reusable APIs and blueprints that accelerate teams across the organization.
Main Responsibilities
Design, build, and maintain a robust, low-latency, fault-tolerant market data pipeline.
Aggregate order books, trades, and funding data from multiple crypto exchanges into a single standardized feed (see the normalization sketch after this list).
Implement redundancy, error handling, and data validation mechanisms to ensure high reliability of live data.
Develop monitoring tools and alerts for data quality, latency, and system health.
Work closely with developers, quants, and traders to ensure seamless integration of data into execution and analytics systems.
Document and continuously improve data ingestion, transformation, and storage processes.
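To make the aggregation and validation responsibilities concrete, here is a minimal sketch of normalizing trade messages from two venues into one standard schema with a basic sanity check. The venue names, raw payload shapes, and field names are illustrative assumptions, not any real exchange's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass(frozen=True)
class Trade:
    """Standardized trade record shared by all downstream consumers."""
    exchange: str
    symbol: str    # normalized instrument id, e.g. "BTC-USD"
    price: float
    size: float
    ts: datetime   # exchange event time, UTC

# Hypothetical raw payload shapes; real venue schemas differ.
def from_exchange_a(raw: dict) -> Trade:
    return Trade("exchange_a", raw["symbol"], float(raw["price"]),
                 float(raw["qty"]),
                 datetime.fromtimestamp(raw["ts_ms"] / 1000, tz=timezone.utc))

def from_exchange_b(raw: dict) -> Trade:
    return Trade("exchange_b", raw["instrument"].replace("/", "-"),
                 float(raw["px"]), float(raw["size"]),
                 datetime.fromisoformat(raw["time"]))

NORMALIZERS: dict[str, Callable[[dict], Trade]] = {
    "exchange_a": from_exchange_a,
    "exchange_b": from_exchange_b,
}

def validate(t: Trade) -> Trade:
    """Reject obviously bad ticks before they reach the standardized feed."""
    if t.price <= 0 or t.size <= 0:
        raise ValueError(f"non-positive price/size from {t.exchange}: {t}")
    return t

tick = validate(NORMALIZERS["exchange_a"](
    {"symbol": "BTC-USD", "price": "64250.5", "qty": "0.25", "ts_ms": 1704067200000}))
print(tick)
```

Keeping venue-specific parsing at the edge in a normalizer registry means everything downstream consumes a single Trade type.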
Strategic Collaboration & Business Alignment
Partner with trading desks, quantitative teams, and risk functions to translate business needs into data solutions that enhance decision-making and operational efficiency.
Act as a senior liaison between engineering and business stakeholders, ensuring alignment on data priorities and delivery timelines.
Prioritize a value-based backlog (e.g., faster close/settlement, improved forecast accuracy, reduced balancing penalties) and measure business impact.
Align data models and domain ownership with business processes (bids/offers, nominations, positions, exposures, outages).
Liaise with Cybersecurity, Compliance, and Legal on sector-specific controls (REMIT/NERC-CIP considerations, data retention, segregation).
Innovation & Product Development
Incubate and industrialize data products: curated marts, feature stores, real-time decision APIs, and event streams for forecasting and optimization.
Introduce modern patterns (CDC, schema evolution, Delta/Iceberg, stream–batch unification) to improve freshness and resilience (a CDC sketch follows this list).
Evaluate and integrate external data (weather, fundamentals, congestion, capacity postings) and internal/external vendor systems (e.g., ETRM) safely and at scale.
Collaborate with quantitative analysts to productionize ML pipelines (forecasting load/renewables, anomaly detection, etc.) with monitoring and rollback.
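As a hedged illustration of the CDC pattern mentioned above, the sketch below applies Debezium-style change events (the standard before/after/op envelope, where op is c/u/d plus r for snapshot reads) to an in-memory view. A production pipeline would consume these from Kafka Connect topics and write to a Delta/Iceberg table; the positions table here is a made-up example.

```python
import json

# In-memory materialized view keyed by primary key; a real pipeline would
# write to a lakehouse table (Delta/Iceberg) or a serving store instead.
positions: dict[str, dict] = {}

def apply_cdc_event(raw: bytes) -> None:
    """Apply one Debezium-style change event to the view."""
    event = json.loads(raw)["payload"]
    op, before, after = event["op"], event.get("before"), event.get("after")
    if op in ("c", "u", "r"):        # create / update / snapshot read
        positions[after["id"]] = after
    elif op == "d":                  # delete
        positions.pop(before["id"], None)

# Example: an update event for the hypothetical 'positions' table.
apply_cdc_event(json.dumps({"payload": {
    "op": "u",
    "before": {"id": "BTC-desk1", "qty": 10},
    "after":  {"id": "BTC-desk1", "qty": 12},
}}).encode())
print(positions)  # {'BTC-desk1': {'id': 'BTC-desk1', 'qty': 12}}
```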
Mentorship & Technical Oversight
Coach engineers through design reviews, pair programming, and clear contribution guidelines; raise the bar on code quality and documentation.
Lead incident reviews and architectural forums; provide pragmatic guidance on trade-offs (latency vs. cost, simplicity vs. flexibility).
Develop growth paths and learning plans focused on energy domain fluency and modern data engineering practices.
Operational Excellence
Implement robust monitoring and alerting, and maintain runbooks for incident response (a metrics sketch follows this section).
Ensure security and compliance by design: least-privilege access, secrets management, encryption, auditability, and disaster recovery testing.
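A hedged sketch of the monitoring side, using the Python prometheus_client library to expose feed-latency and staleness metrics that an alerting rule could fire on; the metric names and the simulated feed are illustrative assumptions.

```python
import random
import time

from prometheus_client import Gauge, Histogram, start_http_server

# Illustrative metric names; align with your team's conventions.
TICK_LATENCY = Histogram(
    "md_tick_latency_seconds",
    "Exchange event time to internal publish time, per venue.",
    ["exchange"],
)
LAST_TICK_TS = Gauge(
    "md_last_tick_timestamp_seconds",
    "Unix timestamp of the most recent tick, per venue.",
    ["exchange"],
)

def record_tick(exchange: str, event_ts: float) -> None:
    now = time.time()
    TICK_LATENCY.labels(exchange).observe(now - event_ts)
    LAST_TICK_TS.labels(exchange).set(now)

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://localhost:9100/metrics
    while True:
        # Simulated tick arriving ~10-50 ms after event time.
        record_tick("exchange_a", time.time() - random.uniform(0.01, 0.05))
        time.sleep(1)
```

A PromQL alert such as `time() - md_last_tick_timestamp_seconds > 5` would then flag a silent venue; the 5-second threshold is an assumption to tune per feed.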
Profile
Bachelor’s degree or higher in Computer Science, Engineering, or related field.
3+ years’ relevant experience in data engineering, trading systems, or financial technology.
Proven experience building and operating tick-level data pipelines for financial or crypto markets.
Prior experience in low-latency or high-availability systems preferred.
Skills
Azure: ADLS Gen2, Event Hubs, Synapse Analytics, Azure Databricks (Spark), Azure Functions, Azure Data Factory/Databricks Workflows, Key Vault, Azure Monitoring/Log Analytics; IaC with Terraform/Bicep; CI/CD with Azure DevOps or GitHub Actions.
Snowflake (on Azure or multi-cloud): Warehousing design, Streams & Tasks, Snowpipe/Snowpipe Streaming, Time Travel & Fail-safe, RBAC & row/column security, external tables over ADLS, performance tuning & cost governance.
Kafka / Streaming: Confluent Platform/Cloud, Kafka Streams/Spring Kafka, ksqlDB, Schema Registry (Avro/Protobuf), Kafka Connect (Debezium CDC), MirrorMaker 2; patterns for exactly-once/at-least-once delivery, backpressure, and idempotency (an idempotent-consumer sketch follows this list).
Programming & Engineering Practices: Strong OOP in Python and/or Java/Scala; SDLC, DevOps mindset, TDD/BDD, code reviews, automated testing (unit/integration/contract), packaging and dependency management, API design (REST/gRPC).
Orchestration & Quality: Airflow/ADF/Databricks Jobs, data contracts (a contract-check sketch also follows this list), Great Expectations (or similar), lineage/catalog (e.g., Purview), metrics/observability (Prometheus/Grafana/Application Insights).
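To illustrate the at-least-once-plus-idempotency pattern named above, here is a minimal sketch using the confluent-kafka Python client: offsets are committed only after processing, and a processed-key set deduplicates redelivered messages. The broker, group, and topic names are placeholders.

```python
from confluent_kafka import Consumer

def process(payload: bytes) -> None:
    print(payload)  # stand-in for normalization/publication logic

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "md-normalizer",            # placeholder consumer group
    "enable.auto.commit": False,            # commit only after processing
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["trades.raw"])          # placeholder topic

# At-least-once delivery implies duplicates on redelivery; dedupe on a
# stable message key. In production this set lives in a shared store.
seen_keys: set[bytes] = set()

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        key = msg.key() or b""
        if key not in seen_keys:            # idempotent processing
            seen_keys.add(key)
            process(msg.value())
        consumer.commit(message=msg)        # commit after side effects
finally:
    consumer.close()
```

Committing after the side effect gives at-least-once delivery, and key-level dedupe makes redelivery harmless; together they approximate exactly-once semantics downstream.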
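And for the data-contract point, a sketch of enforcing a contract at ingestion, here with pydantic (v2) as a lightweight stand-in for Great Expectations-style checks; the fields and the symbol pattern are illustrative.

```python
from datetime import datetime

from pydantic import BaseModel, Field, ValidationError

class TradeContract(BaseModel):
    """Illustrative ingestion contract; violating records are quarantined."""
    exchange: str
    symbol: str = Field(pattern=r"^[A-Z0-9]+-[A-Z0-9]+$")  # e.g. "BTC-USD"
    price: float = Field(gt=0)
    size: float = Field(gt=0)
    ts: datetime

def check(record: dict) -> TradeContract | None:
    try:
        return TradeContract(**record)
    except ValidationError as err:
        print(f"contract violation -> quarantine: {err}")  # or a dead-letter topic
        return None

print(check({"exchange": "exchange_a", "symbol": "BTC-USD",
             "price": 64250.5, "size": 0.25, "ts": "2024-01-01T00:00:00Z"}))
```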
Additional Skills
Highly numerate, rigorous, and resilient in problem-solving.
Ability to prioritize, multitask, and deliver under time constraints.
Strong written and verbal communication in English.
Self-motivated, proactive, and detail-oriented.
Comfortable working under pressure in a fast-paced environment.
Able to explain technical topics clearly to non-technical stakeholders.
Team player with ability to collaborate across engineering, quant, and trading teams.
Our people make all the difference in our success.
Language Skills
- English