Mercor

AI Red-Teamer — Adversarial AI Testing


Contract · Entry Level · Remote
Posted January 5, 2026 · $13.87 per hour

Job Description

At Mercor, we believe the safest AI is the one that's already been attacked: by us. We are assembling a red team for this project: human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red-team data that makes AI safer for our customers. This project involves reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources.

Responsibilities:
- Red team conversational AI models and agents: jailbreaks, prompt injections, misuse cases, bias exploitation, multi-turn manipulation.
- Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks.
- Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent.
- Document reproducibly: produce reports, datasets, and attack cases customers can act on.

Requirements:
- Prior red-teaming experience (AI adversarial work, cybersecurity, socio-technical probing).
- Curious and adversarial mindset.
- Structured approach using frameworks or benchmarks.
- Communicative: able to explain risks clearly.
- Adaptable across projects.
- Fluent in English and Hindi (native level).

Location: Remote; restricted to India.
Compensation: Hourly contract at $13.87 per hour.

Required Skills

AI red teaming, jailbreaks, prompt injections, misuse cases, bias exploitation, multi-turn manipulation, adversarial inputs, vulnerability annotation, cybersecurity, socio-technical probing, RLHF/DPO attacks, model extraction, penetration testing, exploit development, reverse engineering, harassment/disinfo probing, abuse analysis, conversational AI testing, psychology, acting, writing

Interested in this role?

Apply directly on the company website

