About
Build ingestion frameworks using AWS Glue, Lambda, Kinesis, and Step Functions to support large-scale AI workloads.
Develop embedding pipelines, feature stores, and vector database integrations (Pinecone, FAISS, Chroma, Amazon OpenSearch) to power semantic retrieval.
Transform unstructured data (documents, text, images, logs) into AI-ready assets for LLM applications.
Integrate & Orchestrate LLM Architectures
Build end-to-end GenAI pipelines connecting enterprise data with LLMs, including Anthropic Claude, Amazon Titan, OpenAI GPT, and Llama 3.
Use LangChain, LlamaIndex, and Bedrock Agents to deliver context-rich RAG, prompt-chaining, and conversational intelligence.
Develop LLM-powered APIs enabling natural language querying, summarization, search, and generative workflows.
Optimize prompts, context windows, model evaluation, and response quality.
Scale AI Infrastructure & MLOps
Deploy, monitor, and optimize LLM workflows on AWS Bedrock and other cloud AI platforms.
Implement CI/CD pipelines for GenAI systems using Airflow, Prefect, GitHub Actions, or AWS CodePipeline.
Establish data and model observability frameworks to track drift, accuracy, latency, and performance.
Partner with Data Science and MLOps teams to streamline fine-tuning, deployment, and scalable model operations.
Champion Governance, Security & Responsible AI
Implement data lineage, access controls, encryption, and governance for AI datasets.
Enforce Responsible AI practices, ensuring transparency, risk mitigation, and ethical use of LLMs.
Maintain prompt logs, telemetry, and audit documentation supporting SOC 2, GDPR, and CCPA compliance.
What You Bring
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
5+ years of data engineering experience, including 2+ years developing GenAI or LLM-based solutions.
Strong proficiency in:
AWS Bedrock, SageMaker, or Vertex AI
LangChain or LlamaIndex
Snowflake, Redshift, or Databricks
Python, SQL, and API integrations
Vector databases (Pinecone, FAISS, Chroma, OpenSearch)
Proven experience building RAG pipelines, embeddings, and prompt-chaining architectures.
Deep understanding of data modeling, orchestration, and MLOps best practices.
Ability to integrate LLM capabilities into enterprise SaaS products and data platforms.
Preferred Qualifications
Experience with advanced GenAI frameworks such as AutoGen, CrewAI, or Semantic Kernel.
Familiarity with data observability tools (Monte Carlo, DataHub, Marquez).
Experience with Docker, Kubernetes, and CI/CD deployment automation.
Relevant certifications, such as:
AWS Certified Machine Learning - Specialty
Google Cloud Generative AI Engineer
MIT or Stanford AI/ML Certificates
DeepLearning.AI Generative AI Specialization
Success Measures
Deployment of reliable GenAI pipelines supporting Dynatron's automation and analytics initiatives.
Improved latency, accuracy, and consistency of LLM-generated outputs.
Reduction in hallucinations and data quality gaps across AI workflows.
Increased adoption of AI- and LLM-enabled interfaces across Dynatron's product ecosystem and business functions.
Compensation
Base Salary: $110,000 - $135,000
Benefits Summary
Comprehensive health, vision, and dental insurance
Employer-paid short- and long-term disability and life insurance
401(k) with competitive company match
Flexible vacation policy and 9 paid holidays
Remote-first culture
Ready to Build the Future of AI at Dynatron?
Join us and help architect the intelligent systems that power smarter decisions, stronger performance, and next-generation automation across the automotive service industry.
Language Skills
- English