This job posting is no longer available
Machine Learning Security Research Specialist
Trail of Bits
- United States
About
Explore advanced machine learning systems to uncover unique attack vectors such as adversarial examples, model poisoning, data extraction attacks, and jailbreaks affecting large language models.

- Client Engagement: Work with top-tier AI organizations to assess and enhance the security posture of their sophisticated ML systems, matching your expertise with their internal research.
- AI/ML Security Tools Development: Design innovative security testing frameworks, evaluation methodologies, and open-source tools tailored for AI/ML security research, including adversarial robustness testing and automated vulnerability discovery systems.
- Threat Intelligence and Modeling: Develop comprehensive threat models for new AI/ML deployment patterns, anticipate future attack vectors, and build scalable security frameworks to keep pace with rapidly evolving AI capabilities.
- Research Community Contribution: Publish research findings, present at conferences, and actively engage in the broader AI/ML security discourse through papers, blogs, and open-source contributions.
- Cross-Disciplinary Collaboration: Act as a bridge between AI/ML research and security engineering, simplifying complex adversarial AI/ML concepts for diverse stakeholders and collaborating with Trail of Bits' larger security research teams.

Qualifications

- Advanced AI/ML Research Background: PhD-level expertise in machine learning or deep learning, with proven contributions to research.
- AI/ML Security Knowledge: Deep understanding of adversarial machine learning, including familiarity with attack paradigms such as evasion attacks, poisoning attacks, and model inversion. Experience in adversarial ML and AI safety research is highly valued.
- Deep Technical ML Proficiency: Hands-on experience with modern ML frameworks (PyTorch, JAX, TensorFlow), transformer architectures, and the complete ML development lifecycle from data pipelines to deployment. Skills in CUDA programming or GPU optimization are a plus.
- Research Excellence: A strong track record of high-quality research, shown through publications, preprints, or contributions recognized in the ML community. Publications at prestigious ML conferences or security venues are valued but not mandatory.
- Programming Skills: Robust software engineering skills in Python and at least one systems language (C/C++, Rust, etc.), along with experience building research prototypes and tools.
- Intellectual Curiosity: Ability to quickly learn new domains, identify security-critical scenarios, and think adversarially about complex systems without needing explicit application security experience.
- Communication Skills: Ability to translate complex AI/ML security research into clear, actionable recommendations for both technical and executive audiences, and to present findings effectively to clients who are themselves AI/ML experts.

Compensation

The base salary for this full-time role ranges from $175,000 to $300,000, excluding benefits and potential bonuses. Variations in our salary range relate to the specific role, seniority level, geographic location, and the nature of the employment contract. The final offer within this range will depend on an individual's unique skills, experience, and educational background, reflecting starting salaries for all U.S. locations. For a precise salary estimate tailored to your preferred location, please ask during the hiring process.
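To illustrate the evasion-attack paradigm the posting refers to, here is a minimal FGSM-style sketch against a toy logistic-regression classifier. All model weights, inputs, and the epsilon value are hypothetical, and plain NumPy is used instead of the ML frameworks named above; this is a sketch of the concept, not an implementation used by Trail of Bits.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step against a logistic-regression classifier.

    Moves x by eps in the direction of the sign of the loss gradient;
    for the cross-entropy loss of a linear model that gradient with
    respect to the input is (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical toy model that classifies the clean input correctly.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # clean input, true label 1
y = 1.0

p_clean = sigmoid(np.dot(w, x) + b)    # above 0.5: correct prediction
x_adv = fgsm_perturb(x, y, w, b, eps=1.0)
p_adv = sigmoid(np.dot(w, x_adv) + b)  # pushed below 0.5: misclassified
```

A small input shift bounded by eps in each coordinate is enough to flip the prediction, which is the core failure mode that adversarial-robustness testing probes for.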
Language skills
- English