Staff Machine Learning Infrastructure Engineer
Dyna Robotics
- United States
About
Dyna Robotics was founded by repeat founders Lindon Gao and York Yang, who sold Caper AI for $350 million, and by former DeepMind research scientist Jason Ma. The company has raised over $140M from top investors, including CRV and First Round. We're positioned to redefine the landscape of robotic automation. Join us to shape the next frontier of AI-driven robotics!
Learn more at dyna.co
Position Overview: We are seeking an experienced Machine Learning Infrastructure Engineer to join our team and help scale our ML training platform. In this role, you will be responsible for designing, implementing, and maintaining large-scale ML infrastructure to accelerate model iteration and improve training performance across an expanding GPU ecosystem. You will work on cutting-edge high-performance computing systems, optimize distributed training environments, and ensure system reliability as we scale.
Key Responsibilities:
Infrastructure Design & Scalability:
Architect and implement large-scale ML training pipelines that leverage parallel GPU processing on platforms like GCP or AWS.
Enhance our existing infrastructure to fully exploit parallelism and design for future expansion, ensuring that our system is ready to support growth.
High‑Performance ML Computing & Distributed Systems:
Manage and optimize high-performance computing resources.
Develop robust distributed computing solutions, addressing challenges like race conditions, memory optimization, and resource allocation.
Optimize model training with techniques such as mixed precision, ZeRO, and LoRA.
Job Scheduling & Reliability:
Design systems for job rescheduling, automated retries, and failure recovery to maximize uptime and training efficiency.
Implement intelligent job queuing mechanisms to optimize training workloads and resource utilization.
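To illustrate the automated-retry behaviour this area covers, here is a minimal sketch in Python. The function name and parameters are hypothetical, not part of any existing Dyna Robotics system; a production scheduler would also persist job state and reschedule across machines.

```python
import random
import time

def run_with_retries(job, max_retries=3, base_delay=1.0, jitter=0.5):
    """Run a job callable, retrying transient failures with exponential backoff.

    `job` is any callable that raises an exception on failure. This is a
    single-process sketch of automated retries; a real training scheduler
    would checkpoint state and reschedule onto healthy nodes as well.
    """
    for attempt in range(max_retries + 1):
        try:
            return job()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted: surface the failure for rescheduling
            # Exponential backoff with jitter to avoid thundering-herd restarts.
            delay = base_delay * (2 ** attempt) + random.uniform(0, jitter)
            time.sleep(delay)
```

The backoff-with-jitter pattern spreads out simultaneous restarts so that a shared resource (a storage system, a cluster API) is not hammered by every failed worker retrying at once.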
Storage & Data Handling:
Evaluate tradeoffs between local and networked storage solutions, and implement the best fit to improve data throughput and access.
Develop strategies for caching training data to optimize performance.
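As a sketch of the caching strategy mentioned above: keep recently used data shards close to the trainer (in memory or on local NVMe) and evict the least recently used when capacity runs out. The class and the `load_shard` loader below are hypothetical illustrations, not an existing API.

```python
from collections import OrderedDict

class ShardCache:
    """Tiny LRU cache for training-data shards.

    `load_shard` is a hypothetical loader standing in for a slow read
    from networked storage; capacity would normally be sized to local
    memory or NVMe, not a shard count.
    """
    def __init__(self, load_shard, capacity=2):
        self._load = load_shard
        self._capacity = capacity
        self._cache = OrderedDict()

    def get(self, shard_id):
        if shard_id in self._cache:
            self._cache.move_to_end(shard_id)  # mark as most recently used
            return self._cache[shard_id]
        data = self._load(shard_id)            # slow path: fetch from remote storage
        self._cache[shard_id] = data
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)    # evict least recently used shard
        return data
```

With a roughly sequential shard access pattern, repeated epochs hit the local cache instead of re-reading from networked storage, which is where most of the data-throughput win comes from.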
Collaboration & Continuous Improvement:
Work closely with ML researchers and data scientists to understand training requirements and bottlenecks.
Continuously monitor system performance, identify areas for improvement, and implement best practices to enhance scalability and reliability.
Required Qualifications:
Bachelor’s degree or higher in Computer Science or a related field.
At least 7 years of professional experience in the software industry, with a minimum of 2 years in a tech lead role.
Proven experience with high-performance computing environments and distributed systems.
Demonstrated ability to scale ML training systems and optimize resource utilization.
Hands‑on experience with job scheduling systems and managing cloud GPU environments (GCP, AWS, etc.).
Deep understanding of distributed computing concepts, including race conditions, memory optimization, and parallel processing.
Hands‑on experience in ML model tuning for performance.
Experience with common ML training and inference tools such as PyTorch, TensorRT, Triton, and Accelerate.
Strong analytical and problem‑solving skills with the ability to troubleshoot complex system issues.
Excellent communication skills to collaborate effectively with cross‑functional teams.
Preferred Qualifications:
Experience with container orchestration tools (e.g., Kubernetes) and infrastructure‑as‑code frameworks.
If you're passionate about building scalable ML systems and optimizing high-performance computing infrastructures, we'd love to hear from you.
Languages
- English