Remote Software Engineer – AI Research & Evaluation (US-based)
Nerdleveltech • San Francisco, California, United States
This job posting is no longer available.
About
Ideal Background
This role is ideal for engineers who have worked at the frontier of AI — at companies like OpenAI, NVIDIA, Databricks, Palantir, Snowflake, or similar organizations pushing the boundaries of intelligent systems. We especially welcome graduates from leading programs such as Harvard, Columbia, Princeton, Yale, University of Pennsylvania, and comparable institutions — though exceptional experience and skill always take precedence over pedigree.
Project Overview
As a Software Engineering evaluator, you will create cutting‑edge datasets for training, benchmarking, and advancing large language models, collaborating closely with researchers. This includes curating code examples, providing precise solutions, and making corrections in Python, C/C++, Rust, Go, Java, and JavaScript (including ReactJS) — with particular emphasis on systems‑level code, performance‑critical applications, and infrastructure. You will evaluate and refine AI‑generated code for efficiency, scalability, and reliability, and work with cross‑functional teams to enhance enterprise‑level AI‑driven coding solutions.
Typical Day
Work on AI model training initiatives by curating code examples, building solutions, and correcting code in Python, C/C++, Rust, Go, Java, and JavaScript (including ReactJS).
Evaluate and refine AI‑generated code with an emphasis on systems‑level correctness, performance, and reliability.
Collaborate with cross‑functional teams to enhance AI‑driven coding solutions against industry performance benchmarks.
Build agents that can verify the quality of systems‑level and infrastructure code and identify error patterns.
Form hypotheses about stages of the software engineering lifecycle (prototyping, architecture design, API design, production implementation, launch, experiments, monitoring, operational maintenance) and evaluate model capabilities at each stage.
Design verification mechanisms that can automatically verify a solution to a software engineering task.
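To give a concrete sense of the last two duties above, here is a minimal sketch of an automatic verification mechanism for a coding task, assuming solutions arrive as Python source that defines a named entry-point function. All names here (`TaskSpec`, `run_checks`, `add`) are illustrative, not from the posting.

```python
# Hypothetical verification harness: loads a candidate solution and
# scores it against executable acceptance checks for the task.
from dataclasses import dataclass, field


@dataclass
class TaskSpec:
    """A software engineering task with executable acceptance checks."""
    entry_point: str                              # function the solution must define
    checks: list = field(default_factory=list)    # (args, expected) pairs


def run_checks(solution_source: str, task: TaskSpec) -> dict:
    """Execute a candidate solution and report how many checks it passes."""
    namespace: dict = {}
    try:
        exec(solution_source, namespace)          # load the candidate code
    except Exception as exc:
        return {"passed": 0, "total": len(task.checks), "error": repr(exc)}

    func = namespace.get(task.entry_point)
    if not callable(func):
        return {"passed": 0, "total": len(task.checks),
                "error": f"missing entry point {task.entry_point!r}"}

    passed = 0
    for args, expected in task.checks:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass                                  # a crashing check counts as a failure
    return {"passed": passed, "total": len(task.checks), "error": None}


# Example: verify a trivial AI-generated solution against two checks.
task = TaskSpec(entry_point="add", checks=[((1, 2), 3), ((-4, 4), 0)])
candidate = "def add(a, b):\n    return a + b\n"
result = run_checks(candidate, task)
print(result)  # {'passed': 2, 'total': 2, 'error': None}
```

A production verifier would of course sandbox execution and add timeouts; the point is only that a task plus machine-checkable assertions lets pass rates be computed without human review.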
Required Skills
At least 3 years of software engineering experience.
Strong expertise in systems programming, infrastructure, or backend development using languages like Python, C/C++, Rust, and Go.
Experience building and deploying scalable, production‑grade software using modern languages and tools.
Deep understanding of software architecture, design, development, debugging, and code quality/review assessment.
Excellent oral and written communication skills for clear, structured evaluation rationales.
Engagement Details
Commitment: flexible engagement, minimum 10 hrs/week, up to 40 hrs/week.
Type: Contractor (no medical/paid leave).
Duration: 1 month (potential extensions based on performance and fit).
Location: Candidates must be based in the United States.
Evaluation Process
The application process takes 15–30 minutes.
Completion of an AI video interview is required.
Language Skills
- English
Notice to Users
This posting was published by one of our partners. You can view the original posting on the partner's site.