Software Engineer, Internal Infrastructure
- San Francisco, California, United States
About
Who are we?
Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.
We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what's best for our customers.
Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.
Join us on our mission and shape the future
Why this team?
The Internal Infrastructure team is responsible for building world-class infrastructure and tools used to train, evaluate, and serve Cohere's foundation models. By joining our team, you will work in close collaboration with AI researchers to support their cutting-edge AI workloads, with a strong focus on stability, scalability, and observability. You will be responsible for building and operating Kubernetes GPU superclusters across multiple clouds. Your work will directly accelerate the development of the industry-leading AI models that power North, Cohere's platform.
We're hiring software engineers at multiple levels. Whether you're early in your career or a seasoned staff engineer, you'll find opportunities to grow and make an impact here.
Please Note:
All of our infrastructure roles require participation in a 24x7 on-call rotation, for which you are compensated.
As a Software Engineer in the Internal Infrastructure team, you will:
- Build and operate Kubernetes compute superclusters across multiple clouds
- Partner with cloud providers to optimize infrastructure costs, performance, and reliability for AI workloads
- Work closely with research teams to understand their infrastructure needs and identify ways to improve stability, performance, and efficiency of novel model training techniques
- Design and build resilient, scalable systems for training AI models, with a focus on intuitive user interfaces that empower researchers to troubleshoot and resolve problems on their own
- Encourage software best practices across our company and participate in team processes such as knowledge sharing, reviews, and on-call
You May Be a Good Fit If You:
- Have deep experience running Kubernetes clusters at scale and/or scaling and troubleshooting Cloud Native infrastructure, including Infrastructure as Code
- Have strong programming skills in Go or Python
- Prefer contributing to open-source solutions over building from the ground up
- Are self-directed and adaptable, and excel at identifying and solving key problems
- Draw motivation from building systems that help others be more productive
- See mentorship and knowledge sharing as a core part of your role
Languages
- English