robertoecf/adversarial-review-coding

3 stars · Last commit 2026-03-27

Review triad plugin for Claude Code: prompt optimization, adversarial red-team review, plan validation, and intelligent orchestration.

# adversarial-review-coding

## Adversarial Review with LLMs

Adversarial review is the practice of submitting work to an independent model for red-team analysis, then cross-validating the findings against your primary model's own assessment. Two models examining the same artifact from different angles catch more issues than either alone — each has different training biases, blind spots, and reasoning patterns. Disagreements between models surface the highest-value findings: the ones a single reviewer would miss.
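The cross-validation step can be sketched in plain Python. This is an illustration only, not the plugin's actual code: the set-based data model and the third `claude-only` tag are assumptions about how findings from the two reviewers might be merged.

```python
# Sketch of cross-validation (hypothetical data model): findings reported
# by both models are promoted to high confidence; findings unique to one
# reviewer are flagged for human review.

def cross_validate(external: set[str], claude: set[str]) -> dict[str, str]:
    """Tag each finding by which reviewer(s) reported it."""
    tagged = {}
    for finding in external | claude:
        if finding in external and finding in claude:
            tagged[finding] = "cross-validated"   # both agree: high confidence
        elif finding in external:
            tagged[finding] = "external-only"     # flagged for review
        else:
            tagged[finding] = "claude-only"       # flagged for review
    return tagged

report = cross_validate(
    external={"SQL injection in /login", "unbounded retry loop"},
    claude={"SQL injection in /login", "missing auth check on /admin"},
)
# report["SQL injection in /login"] == "cross-validated"
```

The asymmetry is the point: agreement raises confidence, while single-source findings are exactly the disagreements worth a human look.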

This plugin applies adversarial review to **coding workflows** inside [Claude Code](https://docs.anthropic.com/en/docs/claude-code). It spawns background subagents that call external models (Codex CLI / Gemini CLI), cross-validate against Claude's analysis, and return unified findings — without blocking your main session. The result: security vulnerabilities, logic errors, and architectural gaps caught earlier, with higher confidence, and with clear severity ratings.
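The non-blocking pattern itself is ordinary process management, sketched below in plain Python. This is not the plugin's implementation; the `codex exec` invocation and the review prompt are assumptions for illustration.

```python
# Fire-and-forget sketch: start an external reviewer in the background and
# collect its findings only when the caller is ready. The external CLI name
# and arguments are assumptions, not the plugin's actual command line.
import subprocess

def start_external_review(
    artifact_path: str,
    cmd: tuple[str, ...] = ("codex", "exec"),  # hypothetical external CLI
) -> subprocess.Popen:
    """Launch an external model review as a background process."""
    prompt = (
        f"Red-team review the code in {artifact_path}; "
        "list issues with severity ratings."
    )
    return subprocess.Popen(
        [*cmd, prompt],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )

def collect_findings(proc: subprocess.Popen, timeout: float = 300.0) -> str:
    """Block only at the point where the results are actually needed."""
    out, _ = proc.communicate(timeout=timeout)
    return out
```

Because `Popen` returns immediately, the main session keeps working; the review only becomes synchronous at `collect_findings`, which is where the unified report would be folded back in.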

## How It Works

```
Main Session (Opus) ─── continues working, not blocked
       │
       └─► spawns background Agent
                │
                ├─ 1. Send artifact to external model (Codex/Gemini)
                ├─ 2. Run independent Claude analysis
                ├─ 3. Cross-validate both sets of findings
                │     ├─ [cross-validated] = high confidence
                │     └─ [external-only]   = flagged for review
                └─ 4. Return unified findings to main session
```