About
We are building the next generation of large-scale AI training systems that power pre-training, post-training, distillation, fine-tuning, and reinforcement learning workloads with unprecedented scale and efficiency.
You will design and develop high-performance distributed software that orchestrates massive compute and data pipelines across heterogeneous clusters. Your work will push the limits of concurrency, throughput, and scalability—enabling efficient execution of models with trillions of parameters. This role sits at the intersection of systems engineering and machine learning performance, demanding both architectural depth and low-level implementation skills. You will help shape how models are trained and optimized end-to-end, from data ingestion to distributed execution, across cutting-edge hardware platforms.
Responsibilities
- Design and implement distributed runtime components to efficiently manage large-scale model training, fine-tuning, and RL workloads.
- Develop and optimize high-performance data and communication pipelines that fully utilize CPU, memory, storage, and network resources.
- Enable scalable execution across multiple compute nodes, ensuring high concurrency and minimal bottlenecks.
- Collaborate closely with ML and compiler teams to integrate new model architectures, training regimes, and hardware-specific optimizations.
- Diagnose and resolve complex performance issues across the software stack using profiling and instrumentation tools.
- Contribute to overall system design, architecture reviews, and roadmap planning for large-scale AI workloads.
Minimum Qualifications
- 3 years of experience developing high-performance or distributed system software.
- Strong programming skills in C/C++, with expertise in multi-threading, memory management, and performance optimization.
- Experience with distributed systems, networking, or inter-process communication.
- Solid understanding of data structures, concurrency, and system-level resource management (CPU, I/O, and memory).
- Proven ability to debug, profile, and optimize code across scales—from threads to clusters.
- Bachelor's or Master's degree, or equivalent experience, in Computer Science, Electrical Engineering, or a related field.
Preferred Qualifications
- Familiarity with machine learning training or inference pipelines, especially distributed training and large-model scaling.
- Exposure to Python and PyTorch, particularly in the context of model training or performance tuning.
- Experience with compiler internals, custom hardware interfaces, or low-level protocol design.
- Prior work on high-performance clusters, HPC systems, or custom hardware/software co-design.
- Deep curiosity about how to unlock new levels of performance for large-scale AI workloads.
Language Skills
- English