Senior/Staff Machine Learning Engineer, Perception
Agtonomy
United States

This job posting is no longer available.

About

Senior/Staff Machine Learning Engineer, Perception
At Agtonomy, we're not just building tech: we're transforming how vital industries get work done. Our Physical AI and fleet services turn heavy machinery into intelligent, autonomous systems that tackle the toughest challenges in agriculture, turf, and beyond. Partnering with industry-leading equipment manufacturers, we're creating a future where labor shortages, environmental strain, and inefficiencies are relics of the past. Our team is a tight-knit group of bold thinkers (engineers, innovators, and industry experts) who thrive on turning audacious ideas into reality. If you want to shape the future of industries that matter, this is your shot.

What You'll Do

  • Develop computer vision and machine learning models for real-time perception systems, enabling tractors to identify crops, obstacles, and terrain in varying, unpredictable conditions.
  • Build sensor fusion algorithms that combine camera, LiDAR, and radar data, creating robust 3D scene understanding that handles challenges like crop occlusions or GNSS drift (a projection sketch follows this list).
  • Optimize models for low-latency inference on resource-constrained hardware, balancing accuracy and performance.
  • Design and test data pipelines to curate and label large sensor datasets, ensuring high-quality inputs for training and validation, with tools to visualize and debug failures.
  • Analyze performance metrics and iterate on algorithms to improve the accuracy and efficiency of the various perception subsystems.
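
Aside: the camera/LiDAR fusion work above starts from a routine building block, projecting LiDAR points into the camera image so 3D points can be associated with pixels. A minimal NumPy sketch, assuming a known 4x4 extrinsic transform and pinhole intrinsics (all names here are illustrative, not Agtonomy code):

    import numpy as np

    def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
        # points_lidar: (N, 3) points in the LiDAR frame.
        # T_cam_from_lidar: (4, 4) rigid transform, LiDAR frame -> camera frame.
        # K: (3, 3) pinhole camera intrinsic matrix.
        n = points_lidar.shape[0]
        pts_h = np.hstack([points_lidar, np.ones((n, 1))])  # homogeneous (N, 4)
        pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]     # into camera frame
        pts_cam = pts_cam[pts_cam[:, 2] > 0.0]              # keep points in front of camera
        uv_h = (K @ pts_cam.T).T                            # pinhole projection
        uv = uv_h[:, :2] / uv_h[:, 2:3]                     # (M, 2) pixel coordinates
        return uv, pts_cam[:, 2]                            # pixels and their depths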

What You'll Bring

  • An MS or PhD in Computer Science, AI, or a related field, or 5+ years of industry experience building vision-based perception systems.
  • Deep expertise in developing and deploying machine learning models, particularly for perception tasks such as object detection, segmentation, mono/stereo depth estimation, sensor fusion, and scene understanding.
  • A strong understanding of integrating data from multiple sensors such as cameras, LiDAR, and radar.
  • Experience handling large datasets efficiently and organizing them for labeling, training, and evaluation.
  • Fluency in Python and experience with ML/CV frameworks such as TensorFlow, PyTorch, or OpenCV, with the ability to write efficient, production-ready code for real-time applications.
  • A proven ability to design experiments, analyze performance metrics (e.g., mAP, IoU, latency), and optimize algorithms to meet stringent performance requirements in dynamic settings (see the IoU sketch after this list).
  • An eagerness to get your hands dirty, and agility in a fast-moving, collaborative, small-team environment with lots of ownership.
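
For reference, the IoU metric named above is simply intersection area over union area. A minimal sketch for axis-aligned 2D boxes (illustrative code, not part of the original posting):

    def box_iou(a, b):
        # a, b: boxes as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    print(box_iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 1/3: 50 / (100 + 100 - 50)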

What Makes You A Strong Fit

  • Experience architecting multi-sensor ML systems from scratch.
  • Experience with foundation models for robotics or Vision-Language-Action (VLA) models.
  • Experience with compute-constrained pipelines, including optimizing models to balance the accuracy vs. performance tradeoff by leveraging TensorRT, model quantization, etc. (a minimal quantization sketch follows at the end of this section).
  • Experience implementing custom operations in CUDA.
  • Publications at top-tier perception/robotics conferences (e.g., CVPR, ICRA).
  • Passion for sustainable agriculture and securing our food supply chain.

Compensation

The US base salary range for this full-time position is $180,000 to $250,000 a year, plus equity, benefits, and unlimited PTO. Benefits include 100% covered medical, dental, and vision for the employee (coverage for a partner, children, or family is additional), commuter benefits, a flexible spending account (FSA), life insurance, short- and long-term disability coverage, a 401k plan, stock options, and a collaborative work environment alongside a passionate, mission-driven team.
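
On the model-optimization item above (TensorRT, quantization): one of the simplest levers is post-training quantization. A minimal PyTorch sketch of dynamic int8 quantization on a toy model (illustrative only; an embedded deployment would more likely use TensorRT engines or quantization-aware training):

    import torch
    import torch.nn as nn

    # Toy stand-in for a model head; real perception models are conv-heavy
    # and are typically exported to TensorRT or quantized statically instead.
    model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10)).eval()

    # Weights are stored as int8; activations are quantized on the fly.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    with torch.no_grad():
        print(quantized(torch.randn(1, 256)).shape)  # torch.Size([1, 10])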

Desired Skills

  • Computer Vision
  • Machine Learning
  • Segmentation

Professional Experience

  • Machine Learning
  • Computer Vision

Language Skills

  • English

Note for Users

This job posting was published by one of our partners. You can view the original posting here.