
Machine Learning Research Engineer

Parametric
  • United States

About

About Parametric
Parametric is building robots to reliably automate frontline physical labor, starting with laundry folding. We are moving beyond traditional hard-coded automation by developing generalizable, learning-based agents capable of operating in unstructured environments. We have spent the last few months validating our core technology and fundraising, and we are now building a team to scale a fleet across commercial deployments.

About The Role
As a Machine Learning Research Engineer, you will architect the neural backbones that drive our robots. This is not just a "tuning" role; you will define how we apply state-of-the-art developments in Transformers and World Modeling to physical control problems. You will work at the intersection of perception and action, designing novel algorithms that allow robots to understand complex scenes and execute precise tasks. You will own the full research-to-deployment loop: reading papers, prototyping in PyTorch, training at scale, and deploying to hardware.

What You'll Do
  • Architect Neural Policies: Design and train large-scale Transformer-based policies that integrate multimodal inputs (vision, proprioception) for end-to-end robotic control.
  • Advance World Modeling: Develop predictive world models that allow agents to reason about future states and physical interactions, reducing the sample complexity of real-world training.
  • Reinforcement Learning at Scale: Implement and refine advanced RL algorithms (specifically PPO, GRPO, and Q-Learning variants) to solve complex manipulation and navigation tasks.
  • Vision Foundation Models: Leverage and fine-tune modern self-supervised vision backbones (e.g., DINOv2, SigLIP) to provide dense, semantic understanding of the robot's environment.
  • Reward Engineering: Design robust reward modeling architectures that align agent behavior with high-level task goals, utilizing techniques like inverse reinforcement learning or preference-based learning.
  • High-Performance Engineering: Write production-grade PyTorch code. You may also explore or implement components in JAX for high-throughput simulation and training.

What We're Looking For
  • Deep Learning Proficiency: 3+ years of experience in deep learning research or engineering. You have expert-level familiarity with PyTorch and can debug complex computation graphs without relying on high-level abstractions.
  • RL Fundamentals: Strong intuition for reinforcement learning dynamics. You have implemented algorithms like PPO, GRPO (Group Relative Policy Optimization), or Q-Learning from scratch, or significantly modified them for custom environments.
  • Modern Architecture Knowledge: Practical experience with Transformer architectures (attention mechanisms, positional encodings, tokenization) applied to non-NLP domains (e.g., Vision Transformers, Decision Transformers).
  • Computer Vision: Familiarity with modern vision encoders and foundation models, specifically DINOv2 or similar self-supervised architectures.
  • JAX Familiarity: Experience with JAX/Flax/Optax is highly appreciated as we explore high-performance training stacks.
  • Startup DNA: You thrive in 0→1 environments where you must balance research rigor with the need to ship working code to physical robots.

Parametric PBC is a public benefit corporation building robots to benefit all humans. We're a proud equal-opportunity employer and encourage applications from all individuals regardless of race, color, religion, sex, gender, national origin, disability, age, or veteran status. We firmly believe the best version of the future includes everyone, so we encourage you to apply even if you don't strictly meet all the requirements.

Language skills

  • English
Notice to users

This job offer comes from a TieTalent partner platform. Click "Apply now" to submit your application directly on their site.