Machine Learning Lead

Autolane, Inc.
  • Austin, Texas, United States

About

Description

Location: Remote US (Bay Area, Austin preferred)

About Autolane

Autolane is on a mission to revolutionize last-mile logistics by empowering autonomous vehicle owners to unlock the value of their vehicles. Our flagship product is the industry's first orchestration layer for autonomous deliveries—coordinating heterogeneous autonomous systems (AVs, humanoid robots, delivery bots) to achieve zero-wait handoffs and maximum fleet utilization. We integrate directly with retailers, commercial real-estate operators, and AV fleets, building the AI infrastructure that enables autonomy at scale.

The Role

As Machine Learning Lead at Autolane, you'll architect and build the AI brain that orchestrates autonomous last-mile logistics. You'll design and deploy the core learning systems—Graph Neural Networks for spatial reasoning, Transformers for temporal prediction, and Multi-Agent Reinforcement Learning for heterogeneous agent coordination—that enable our platform to optimize deliveries across AVs, humanoid robots, and delivery bots in real time. You'll work directly with our CTO to build AI systems that scale from pilot deployments to thousands of coordinated deliveries per day, establishing the intelligence layer that makes autonomous logistics commercially viable.

Core Responsibilities
  • Graph Neural Networks: Design and implement 6-layer Graph Attention Networks for modeling spatial relationships between agents, locations, and resources using PyTorch Geometric
  • Temporal Prediction: Build Transformer-based architectures for multi-horizon arrival time prediction, task duration forecasting, and optimal scheduling sequences
  • Multi-Agent RL: Architect QMIX-based coordination systems with Conservative Q-Learning for safe exploration across heterogeneous agent types (Teslas, Unitree G1 humanoids, PUDU bots)
  • Ensemble Systems: Design robust decision-making through model diversity, weighted voting mechanisms, and uncertainty quantification with confidence-based fallbacks
  • Real-time Inference: Optimize models for …

Heterogeneous Agent Coordination

  • Agent Abstraction: Design unified state representations across vehicle types with distinct capability profiles
  • Cooperative Policy Learning: Train agents to optimize joint actions—vehicle routing, robot task assignment, and handoff timing
  • Reward Engineering: Develop composite reward structures balancing efficiency, wait time reduction, success rates, and safety constraints
  • Cross-Agent Communication: Implement learned communication protocols for decentralized coordination

Simulation & Training Infrastructure

  • Environment Design: Build high-fidelity simulation environments with physics engines for safe policy exploration
  • Offline Training: Architect pipelines for learning from historical ridehail coordination data and synthetic scenarios
  • Transfer Learning: Leverage logistics datasets and pre-trained models to accelerate domain adaptation
  • Online Learning: Design shadow-mode deployment, A/B testing infrastructure, and continuous learning with replay buffers

Production ML Systems

  • MLOps Pipeline: Build end-to-end training, validation, and deployment infrastructure on GCP
  • Model Monitoring: Implement drift detection, performance tracking, and automated retraining triggers
  • Feature Engineering: Design spatial graph construction, temporal sequence encoding, and agent state representation pipelines
  • Safety Validation: Ensure policy safety through Conservative Q-Learning, human-in-the-loop validation, and confidence thresholds

Edge AI Integration

  • Model Optimization: Quantize and optimize models for edge deployment alongside embedded systems
  • Sensor Fusion: Integrate ML predictions with edge sensor data (cameras, LiDAR, ultrasonic) for ground-truth validation
  • Hybrid Architecture: Design cloud-edge inference strategies balancing latency and computational requirements

Required Qualifications
Technical Foundation

  • 5+ years of machine learning engineering with production deployment experience
  • Expert proficiency in PyTorch and deep learning frameworks
  • Deep expertise with Graph Neural Networks (PyTorch Geometric, DGL) for relational reasoning
  • Strong foundation in Transformer architectures and attention mechanisms
  • Hands-on experience with Reinforcement Learning (single-agent and multi-agent systems)
  • Proven ability to take models from research to production at scale

Core ML Competencies

  • Proven experience with temporal sequence modeling and time-series prediction
  • Working knowledge of model ensemble techniques and uncertainty quantification
  • Strong foundation in optimization algorithms, hyperparameter tuning, and neural architecture search
  • Ability to design and debug complex training pipelines with distributed computing

Production & Infrastructure Skills

  • Strong understanding of cloud ML infrastructure (GCP Vertex AI, Cloud Run, Pub/Sub preferred)
  • Knowledge of model serving, latency optimization, and real-time inference
  • Proven ability to build observable, debuggable ML systems in production environments

AI Development Fluency

  • Active daily use of AI coding assistants (Claude Code, Cursor, GitHub Copilot) for ML development
  • Demonstrated ability to leverage LLMs for rapid prototyping, debugging, and code generation
  • Experience using AI tools for experiment tracking, documentation, and analysis

Preferred Qualifications
Advanced ML Experience

  • Multi-Agent Reinforcement Learning algorithms (QMIX, MAPPO, COMA, VDN)
  • Conservative Q-Learning or offline RL for safe policy learning
  • Graph Attention Networks for dynamic graph reasoning
  • Imitation Learning and learning from demonstrations
  • Sim-to-Real Transfer for robotics applications

Domain Experience

  • Autonomous vehicles or robotics ML systems
  • Fleet optimization or logistics scheduling
  • Real-time coordination systems at scale
  • Spatial-temporal prediction for transportation
  • Multi-robot coordination or swarm intelligence

Robotics & Edge ML

  • ROS2 integration for ML inference and sensor fusion
  • ONNX Runtime or TensorRT for embedded deployment
  • Model quantization and pruning for edge inference
  • Sensor fusion with heterogeneous data sources
  • Isaac Sim or Gazebo for robotics simulation

  • Publications in top ML/robotics venues (NeurIPS, ICML, ICRA, CoRL)
  • Experience translating research into production systems
  • Open-source contributions to ML frameworks or RL libraries
  • Familiarity with latest advances in foundation models for robotics

At Autolane, we're building the intelligence layer for autonomous logistics—combining cutting-edge ML with real-world robotics to create systems that learn and adapt:

  • Rapid Iteration: Move from Jupyter exploration to production deployment in days, not quarters
  • AI-Augmented Development: Use LLMs to accelerate research, prototyping, and production code
  • Real-World Impact: Your models will coordinate actual autonomous vehicles and robots in production
  • Cross-Functional Innovation: Collaborate with embedded engineers, roboticists, and operations teams
  • Research-to-Production: Bridge the gap between academic ML and deployed systems

Why Join Our AI/ML Team?
  • Cutting-Edge Stack: Work with GNNs, Transformers, and MARL at the intersection of ML and robotics
  • Direct Impact: Your algorithms will orchestrate millions of autonomous deliveries
  • Technical Leadership: Work directly with the CTO and Head of R&D on architectural decisions
  • Growth Trajectory: Build the AI foundation as we scale from pilots to nationwide deployment
  • Innovation Freedom: Experiment with novel architectures, reward structures, and training paradigms
  • Mission-Critical Work: Build the intelligence that makes autonomous logistics safe, efficient, and commercially viable

  • Location: Remote US, with Portland, Bay Area, or Austin preferred for occasional hardware collaboration
  • Compute Resources: Access to GCP GPU clusters, TPUs, and simulation infrastructure
  • Hardware Integration: Collaboration opportunities with Unitree G1, Tesla vehicles, and delivery bots
  • Collaboration: Direct partnership with the CTO and Head of R&D on architecture decisions
  • Pace: Fast-moving startup environment where shipping working models matters

Interview Process Note

Be prepared to:

  • Walk through ML systems you've designed and deployed to production
  • Demonstrate your AI-augmented development workflow for research and prototyping
  • Discuss trade-offs in model architecture selection (when to use a GNN vs a Transformer vs RL)
  • Show examples of designing reward functions and training multi-agent systems
  • Explain how you'd approach coordinating heterogeneous autonomous agents in real time
  • Show working MARL systems or multi-agent coordination demos
  • Share metrics from deployed ML systems (latency, accuracy, business impact)
  • Describe experience with robotics simulation (Isaac Sim, Gazebo) or real robots
  • Present creative solutions to sim-to-real transfer, sample efficiency, or safety constraints
  • Point to publications or open-source contributions in relevant areas
  • Discuss real-world deployments involving autonomous vehicles or fleet optimization

Language Skills

  • English
Note for Users

This job listing comes from a partner platform of TieTalent. Click "Apply Now" to submit your application directly on their website.