This job posting is no longer available
About
This role is in the early stages of hiring, and conversations are exploratory at this point.
A product-led SaaS company is integrating LLM-powered features directly into its core platform. This isn’t experimental; AI is central to the roadmap.
They’re looking for an engineer who understands how to build reliable, scalable LLM systems in production, not just prompting or API integration.
You’ll work on:
- Designing and deploying RAG pipelines
- Improving inference latency and reliability
- Embedding AI features alongside backend and product teams
Tech environment:
- Python
- LangChain (or similar)
- AWS or GCP
- Docker
What matters:
- Experience shipping LLM/NLP systems into production
- Strong backend fundamentals (APIs, architecture, performance)
- Understanding real-world trade-offs: cost, latency, scale
Language skills
- English
Note for users
This job posting was published by one of our partners. You can view the original posting here.