
This job posting is no longer available


Research Engineer

Meta
  • US
    Menlo Park, California, United States

About

Meta is seeking Research Engineers to join the Safety System and Foundations team within Meta Superintelligence Labs, dedicated to advancing the safe development and deployment of Superintelligent AI. Our mission is to pioneer robust and foundational safety techniques that empower Meta's most ambitious AI capabilities, ensuring billions of users experience our products and services securely and responsibly.
Responsibilities
  • Design, implement, and evaluate novel, systemic, and foundational safety techniques for large language models and multimodal AI systems
  • Create, curate, and analyze high-quality datasets for safety systems and foundational safety research
  • Fine-tune and evaluate LLMs to adhere to Meta's safety policies and evolving global standards
  • Build scalable infrastructure and tools for safety evaluation, monitoring, and rapid mitigation of emerging risks
  • Work closely with researchers, engineers, and cross-functional partners to integrate safety solutions into Meta's products and services
  • Lead complex technical projects end-to-end
Minimum Qualifications
  • Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
  • PhD in Computer Science, Machine Learning, or a relevant technical field
  • 3+ years of industry research experience in LLM/NLP, computer vision, or related AI/ML model training
  • Experience as a technical lead on a team and/or leading complex technical projects from end-to-end
  • Publications at peer-reviewed conferences (e.g. ICLR, NeurIPS, ICML, KDD, CVPR, ICCV, ACL)
  • Programming experience in Python and hands-on experience with frameworks such as PyTorch
Preferred Qualifications
  • Hands-on experience applying state-of-the-art techniques to build robust AI system solutions for safety and policy adherence
  • Experience developing, fine-tuning, or evaluating LLMs across multiple languages and capabilities (text, image, voice, video, reasoning, etc.)
  • Demonstrated ability to innovate in foundational safety research, including custom guideline enforcement, dynamic policy adaptation, and rapid hotfixing of model vulnerabilities
  • Experience designing, curating, and evaluating safety datasets, including adversarial and borderline prompt cases
  • Experience with distributed training of LLMs (hundreds/thousands of GPUs), scalable safety mitigations, and automation of safety tooling

Language skills

  • English
Note for users

This job posting was published by one of our partners. You can view the original posting here.