
This job posting is no longer available


Solutions Architect, Inference Deployments

NVIDIA
  • United States

About

Solutions Architect (Inference Focus)

We're forming a team of innovators to roll out and enhance AI inference solutions at scale, demonstrating NVIDIA's GPU technology and Kubernetes. As a Solutions Architect (Inference Focus), you'll collaborate closely with our engineering, DevOps, and customer success teams to foster enterprise AI adoption. Together, we'll introduce generative AI to production!

What you'll be doing:

  • Help customers craft, deploy, and maintain scalable, GPU-accelerated inference pipelines on Kubernetes for large language models (LLMs) and generative AI workloads.
  • Enhance performance tuning using TensorRT/TensorRT-LLM, NVIDIA NIM, and Triton Inference Server to improve GPU utilization and model efficiency (see the client sketch after this description).
  • Collaborate with multi-functional teams (engineering, product) and offer technical mentorship to customers implementing AI at scale.
  • Architect zero-downtime deployments, autoscaling (e.g., Kubernetes HPA with custom metrics), and integration with cloud-native tools (e.g., OpenTelemetry, Prometheus, Grafana).

What we need to see:

  • 5+ years in Solutions Architecture with a proven track record of moving AI inference from POC to production on Kubernetes.
  • Experience architecting GPU allocation using the NVIDIA GPU Operator and NVIDIA NIM Operator: troubleshooting complex GPU orchestration, optimizing with Multi-Instance GPU (MIG), and ensuring efficient utilization in Kubernetes environments.
  • Proficiency with TensorRT-LLM, Triton, and TensorRT for model optimization and serving.
  • Success stories optimizing LLMs for low-latency inference in enterprise environments.
  • BS in CS/Engineering or equivalent experience.

Ways to stand out from the crowd:

  • Prior experience deploying NVIDIA NIM microservices for multi-model inference.
  • Serverless inference: knowledge of FaaS patterns (e.g., Google Cloud Run, AWS Lambda, NVCF) with NVIDIA GPUs.
  • NVIDIA Certified AI Engineer certification or similar.
  • Active contributions to Kubernetes SIGs or AI inference projects (e.g., KServe, Dynamo, SGLang, or similar).
  • Familiarity with networking concepts that support multi-node inference, such as MPI, LWS, or similar.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 148,000 USD - 235,750 USD. You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until November 25, 2025. NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
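For readers unfamiliar with the serving stack named above, here is a minimal sketch of an inference request against a Triton Inference Server using the tritonclient Python package. The server address, model name ("my_llm"), and tensor names ("INPUT_IDS", "OUTPUT") are illustrative placeholders, not details from this posting.

```python
# Minimal sketch: query a Triton Inference Server over HTTP.
# Assumes a server at localhost:8000 serving a hypothetical model
# "my_llm" with one INT64 input "INPUT_IDS" and one output "OUTPUT".
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request tensor: a single batch of token IDs (placeholder values).
input_ids = np.array([[101, 2023, 2003, 1037, 3231, 102]], dtype=np.int64)
infer_input = httpclient.InferInput("INPUT_IDS", list(input_ids.shape), "INT64")
infer_input.set_data_from_numpy(input_ids)

# Send the request and read back the named output tensor as a NumPy array.
response = client.infer(model_name="my_llm", inputs=[infer_input])
print(response.as_numpy("OUTPUT"))
```

In a production deployment of the kind this role describes, the same request path would typically be load-balanced across GPU-backed Triton pods on Kubernetes and autoscaled on custom metrics such as request queue time or GPU utilization.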

Language skills

  • English
Notice to users

This job posting was published by one of our partners. You can view the original posting here.