About
Integrate AI solutions seamlessly with existing electronics and engineering infrastructure, ensuring reliability and performance.
Deploy and fine-tune domain-specific AI models, particularly LLM-powered architectures, in cloud environments to enhance operational efficiency and insights.
Ship to cloud: containerize services, automate CI/CD, provision runtime/inference services, and operate them with basic observability and cost controls.
Leverage LangChain/LangGraph and/or LlamaIndex to build agentic workflows, tools, and routers; evaluate models and prompts with repeatable harnesses.
Stand up and tune vector/hybrid search with appropriate metadata filtering, hybrid scoring, and guardrails.
Collaborate with electrical/RF/firmware teams; translate problem statements into AI features, validate with real datasets and on‑bench workflows.
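To illustrate the kind of hybrid search work described above, here is a minimal, self-contained sketch that blends dense-vector similarity with keyword overlap. The toy documents, three-dimensional "embeddings", and the `alpha` weighting are illustrative assumptions, not part of the role; in practice the vectors would come from an embedding model and the store would be a real vector database.

```python
import math

# Toy corpus: (text, "embedding"). Real embeddings come from a model.
DOCS = {
    "d1": ("BLE antenna tuning notes", [0.9, 0.1, 0.0]),
    "d2": ("PCB bring-up checklist", [0.1, 0.8, 0.1]),
    "d3": ("LoRa link budget worksheet", [0.2, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors; 0.0 if either is zero."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    """Fraction of query words that appear in the document text."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_search(query, query_vec, alpha=0.7):
    """Blend dense and keyword scores; higher alpha favors the vectors."""
    scored = []
    for doc_id, (text, vec) in DOCS.items():
        score = alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text)
        scored.append((doc_id, round(score, 3)))
    return sorted(scored, key=lambda s: s[1], reverse=True)
```

For example, `hybrid_search("antenna tuning", [1.0, 0.0, 0.0])` ranks `d1` first, since it wins on both the dense and keyword components.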
Minimum Qualifications
2–3+ years of professional software engineering; 1+ years building and deploying LLM‑based apps in production or serious pilots.
Strong Python; experience shipping services (FastAPI/Flask), writing clean APIs, and working in Linux/Git/Docker.
Hands‑on RAG experience (indexing/embedding, retrieval, reranking, evaluation) and one or more vector stores.
Practical use of LangChain/LangGraph and/or LlamaIndex for orchestration and data frameworks.
Cloud familiarity (AWS/GCP/Azure) for deploying model‑backed services and data infra.
Hardware literacy: can read schematics/PCBs, reason about components/BOM, and converse fluently about electronics; exposure to at least one of RF/wireless/PCB bring‑up/test.
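The RAG retrieval/reranking experience asked for above can be illustrated with one common fusion step: reciprocal rank fusion (RRF), which merges the ranked lists produced by separate retrievers (e.g., dense and sparse). This is a generic sketch, not a method the posting prescribes; the document IDs and the conventional damping constant `k=60` are illustrative.

```python
from collections import defaultdict

def rrf(rankings, k=60):
    """Reciprocal rank fusion over several ranked lists of doc IDs.

    Each list contributes 1 / (k + rank) per document; the constant k
    damps top ranks so no single retriever dominates the fused order.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Two retrievers disagree; RRF produces a single consensus ordering.
dense = ["d2", "d1", "d3"]
sparse = ["d1", "d3", "d2"]
fused = rrf([dense, sparse])
```

Here `d1` wins the fused ranking: it places high in both lists, while `d2` and `d3` each place last in one of them.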
Preferred Qualifications
MS in Electrical Engineering, Computer Engineering, Computer Science, or related.
Experience across ECAD/EDA and hardware workflows (Altium/KiCad; Cadence Allegro/OrCAD or Virtuoso; Siemens Xpedition; Synopsys/Cadence EDA).
Wireless/RF background (e.g., BLE/Wi‑Fi/LoRa/5G fundamentals; link budgets, S‑parameters, impedance, SDR).
LLM depth: agent/tool use, function‑calling/routing, evaluation frameworks (e.g., LangSmith/Ragas/DeepEval), prompting at scale, caching/cost controls, LoRA/fine‑tuning, multimodal (text+image/OCR).
MLOps/DevOps: Kubernetes/KServe, Terraform, metrics/logging/tracing, secrets management; experience operating inference stacks (e.g., vLLM/TensorRT‑LLM, OpenAI/Anthropic/Azure/Open‑source models).
Benefits
Pairwise offers competitive compensation and a range of employee benefits, including premium PPO and HMO medical insurance coverage, dental, vision, FSA, life insurance, short/long-term disability insurance, a company-matched 401k plan, and a generous PTO policy.
We are proud to work with engineers and professionals who want to make an impact through their craft. If you're looking to be part of a team that values technical excellence, autonomy, and practical problem solving, we'd love to hear from you!
Pairwise is an equal opportunity employer. We are committed to building a diverse team and fostering an inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, disability, veteran status, or any other legally protected status.
Languages
- English