Machine Learning Engineer
Octopus Energy
- London, England, United Kingdom
About
Kraken is the operating system for utilities of the future. Built in-house at Octopus Energy, Kraken powers energy companies and utilities around the globe – in 10 countries and counting – licensing software to organisations such as Origin Energy in Australia and Tokyo Gas in Japan. We’re on a mission to accelerate the renewable transition and bring affordable green energy to the world. We’ve reinvented energy products with smart, data‑driven tariffs to balance customer demand with renewable generation, and Kraken’s platform controls more than half of the grid‑scale batteries in the UK. Our platform supports engineers in the field, making energy specialists more productive with a suite of AI tools.

We hire clever, curious, and self‑driven people, enabling them with modern tools and infrastructure and giving them autonomy. Our ML team consists of ML, front‑end and back‑end engineers, enabling rapid prototyping and the deployment of innovative tools at speed. We’ve had success using AI to improve service for customers, and we want to extend that success across the business.

You’ll join a small expert team tackling the most pressing problems, whether it’s internal AI tooling to boost developer productivity or automating processes to accelerate migration for new clients. You’ll work across the product lifecycle, exploring new technologies, validating ideas with stakeholders, and rapidly prototyping. Your work will define the pattern for AI success at Kraken.

What you’ll do
- Work with a high-performance team of LLM, MLOps, backend and frontend engineers
- Tackle the biggest problems facing the company, with the freedom to define novel approaches
- Help LLMs understand and interact with Kraken’s codebase, leveraging techniques such as GraphRAG, agentic workflows, finetuning and reinforcement learning
- Apply classic ML and NLP techniques to complement LLM systems
- Act as a centre of excellence for AI across the business, consulting teams on LLM usage and lifting product quality
- Stay at the forefront of AI advancements and their technical implications for the team and business

What you’ll need
- Curious and self‑driven – the ability to make decisions independently and solve novel problems
- 1+ year of production experience with LLMs, plus deep technical understanding of techniques to adapt LLMs to domains (e.g., advanced RAG, tool calling, finetuning, RL)
- 3+ years of experience with traditional ML techniques, including training and deploying non‑LLM models and monitoring production models with feedback loops
- A keen interest in Gen AI and classic ML, with the ability to apply trends to real‑world objectives

Nice to have
- Experience working with large codebases and collaborating with multiple engineering teams in large companies
- Experience with diverse LLM deployment methods (e.g., hosted finetuned models via services like Bedrock, or engines like vLLM; see the sketch below)
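For illustration only, here is a minimal sketch of the vLLM-style deployment mentioned above: a finetuned model served behind vLLM’s OpenAI-compatible server and queried with the standard OpenAI client. The model name, endpoint and prompt are placeholder assumptions, not Kraken’s actual setup.

```python
# A minimal sketch, assuming a finetuned model is already being served with
# vLLM's OpenAI-compatible server, e.g.:
#   vllm serve my-org/my-finetuned-model --port 8000
# The model name, endpoint and prompt below are illustrative placeholders.
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; the API key is not checked by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="my-org/my-finetuned-model",  # hypothetical finetuned checkpoint
    messages=[
        {"role": "system", "content": "You are a concise assistant for energy-support staff."},
        {"role": "user", "content": "Summarise the customer's tariff options in two sentences."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```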
Job details
- Seniority level: Not Applicable
- Employment type: Full-time
- Job function: Engineering and Information Technology
- Industries: Utilities and Environmental Services

Equal opportunity and data privacy
We are an equal opportunity employer. We do not discriminate on the basis of protected attributes. For privacy information related to applications, refer to our Applicant and Candidate Privacy Notice and related notices on our website.
Nice-to-have skills
- Machine Learning
Work experience
- Frontend
- Machine Learning
- NLP
Languages
- English