
This job posting is no longer available


Machine Learning Security Research Specialist

Trail of Bits
  • United States

About

About Us:
Founded in 2012 by three expert hackers with no outside investment, Trail of Bits is a leading security company where experts can boldly advance the field and tackle the newest and most complex risks in technology. We have secured some of the world's most targeted organizations and devices. Our research, combined with practical solutions, significantly reduces the security risks our clients face from emerging technologies. Our work advances both the security industry and public understanding of the technologies shaping our world.

Why Join Us:
Cybersecurity preparedness is an ever-evolving challenge, and companies like Trail of Bits are at the forefront of the battle against cyber adversaries. Through our research-based, custom-engineering approach, we elevate our clients' capabilities to the pinnacle of what is available. For organizations that depend on security, a proactive, tailored strategy is essential to stay ahead of attackers.

Community Engagement:
We believe in democratizing security information. As part of our mission, we provide ongoing informational support through blogs, whitepapers, newsletters, meetups, and open-source tools. The more the community understands security, the more it will appreciate the unique value we offer.

The Role:
Trail of Bits is looking for a Machine Learning Security Research Specialist to join our expanding AI Assurance team. In this role, you will conduct pioneering security research on machine learning systems used by leading AI organizations. You will identify novel attack vectors, failure modes, and vulnerabilities across sophisticated ML systems, from training pipelines and model architectures to deployment infrastructure and inference systems. You will collaborate directly with leading AI labs and developers to ensure their systems are resilient against emerging threats. This role focuses on research and requires a deep understanding of AI/ML; no prior application security experience is necessary. You will also contribute to the AI/ML security research community through tool development, threat modeling frameworks, and publications, while helping to shape secure AI development practices.

Key Responsibilities:
  • Original ML Security Research: Explore advanced machine learning systems to uncover novel attack vectors such as adversarial examples, model poisoning, data extraction attacks, and jailbreaks affecting large language models.
  • Client Engagement: Work with top-tier AI organizations to assess and strengthen the security posture of their sophisticated ML systems, matching your expertise with their internal research.
  • AI/ML Security Tools Development: Design innovative security testing frameworks, evaluation methodologies, and open-source tools tailored for AI/ML security research, including adversarial robustness testing and automated vulnerability discovery systems.
  • Threat Intelligence and Modeling: Develop comprehensive threat models for new AI/ML deployment patterns, anticipate future attack vectors, and build scalable security frameworks that keep pace with rapidly evolving AI capabilities.
  • Research Community Contribution: Publish research findings, present at conferences, and engage in the broader AI/ML security discourse through papers, blogs, and open-source contributions.
  • Cross-Disciplinary Collaboration: Act as a bridge between AI/ML research and security engineering, translating complex adversarial AI/ML concepts for diverse stakeholders and collaborating with Trail of Bits' larger security research teams.

Qualifications:
  • Advanced AI/ML Research Background: PhD-level expertise in machine learning or deep learning, with proven research contributions.
  • AI/ML Security Knowledge: Deep understanding of adversarial machine learning, including attack paradigms such as evasion attacks, poisoning attacks, and model inversion. Experience in adversarial ML and AI safety research is highly valued.
  • Deep Technical ML Proficiency: Hands-on experience with modern ML frameworks (PyTorch, JAX, TensorFlow), transformer architectures, and the complete ML development lifecycle from data pipelines to deployment. CUDA programming or GPU optimization skills are a plus.
  • Research Excellence: A strong track record of high-quality research, demonstrated through publications, preprints, or contributions recognized in the ML community. Publications at prestigious ML conferences or security venues are valued but not mandatory.
  • Programming Skills: Robust software engineering skills in Python and at least one systems language (C/C++, Rust, etc.), along with experience building research prototypes and tools.
  • Intellectual Curiosity: Ability to quickly learn new domains, identify security-critical scenarios, and think adversarially about complex systems; explicit application security experience is not required.
  • Communication Skills: Ability to translate complex AI/ML security research into clear, actionable recommendations for both technical and executive audiences, and to present findings effectively to clients who are themselves AI/ML experts.

Compensation:
The base salary for this full-time role ranges from $175,000 to $300,000, not including benefits and potential bonuses. Our salary ranges vary with the specific role, seniority level, geographic location, and type of employment contract. The final offer within this range will depend on your individual skills, experience, and educational background; these figures reflect starting salaries for all U.S. locations. For a salary estimate tailored to your preferred location, please ask during the hiring process.

Language Skills

  • English
Note for Users

This job posting was published by one of our partners. You can view the original posting here.