
19 Commands.
One philosophy.

Every command produces cited evidence or writes structured state. No command produces a PASS without proof. The Validation suite runs your journeys; the Forge suite rebuilds what breaks.

ValidationForge Commands

19 total commands · 13 Validation · 6 Forge · 0 mock flags · 1 install command

The core validation suite. Run against the real system, capture evidence, write verdicts.

/vf-setup · Validation · 30–60s
First-time project setup. Detects platform (Next.js, iOS, API, CLI, etc.), writes .vf/config.json, registers 7 enforcement hooks, and appends e2e-evidence/ to .gitignore.
/vf-setup
/vf-setup --strict --platform web
/vf-setup --global
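As a rough illustration, the generated .vf/config.json might carry fields like these; the key names below are guesses for illustration, not the plugin's documented schema.

```json
{
  "platform": "web",
  "strict": false,
  "evidenceDir": "e2e-evidence/",
  "hooksRegistered": 7
}
```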
/validate · Validation · 2–15 min
Full 7-phase pipeline: RESEARCH → PLAN → PREFLIGHT → EXECUTE → ANALYZE → VERDICT → SHIP. Asks for journey approval before executing (unless --ci).
/validate
/validate --platform ios
/validate --scope src/auth/ --fix
/validate --parallel --verbose
/validate-plan · Validation · 15–30s
Define a validation journey by name. Scans routes/screens/endpoints, generates YAML with measurable PASS criteria and evidence specs. No code runs — this is the contract.
/validate-plan user-signup
/validate-plan checkout-flow --platform web
/validate-plan api-health --starting-point /api/health
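The generated journey contract might look roughly like the sketch below. Every key name here is an illustrative guess; the page only specifies that the YAML contains measurable PASS criteria and evidence specs.

```yaml
# Hypothetical journey contract for /validate-plan user-signup
journey: user-signup
starting_point: /signup
pass_criteria:
  - "form submit returns HTTP 200"
  - "confirmation page renders the user's email"
evidence:
  - screenshot: signup-complete.png
  - log: signup-run.log
```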
/validate-audit · Validation · 30–90s
Read-only audit. Classifies issues by severity (CRITICAL / HIGH / MEDIUM / LOW) without executing any journeys. Safe to run on any branch without side effects.
/validate-audit
/validate-audit --scope src/
/validate-audit --severity critical
/validate-fix · Validation · 5–45 min
3-strike fix loop. Reads the FAIL verdict, proposes a minimal source fix for the first failing PASS criterion, applies it, re-runs the single failing journey. Marks UNFIXABLE after 3 unique-hypothesis attempts.
/validate-fix
/validate-fix --journey user-signup
/validate-fix --max-attempts 2
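The 3-strike rule can be sketched in Python. This is a sketch of the described behavior, not the plugin's implementation: `rerun` stands in for "apply the fix and re-run the single failing journey", and the unique-hypothesis rule means a repeated hypothesis does not consume a strike.

```python
def fix_loop(hypotheses, rerun, max_attempts=3):
    """Try distinct fix hypotheses until one passes or strikes run out."""
    tried = set()
    for hypothesis in hypotheses:
        if hypothesis in tried:
            continue                 # duplicates don't count as strikes
        tried.add(hypothesis)
        if rerun(hypothesis):        # re-run the failing journey
            return "PASS"
        if len(tried) == max_attempts:
            break                    # three unique strikes
    return "UNFIXABLE"
```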
/validate-sweep · Validation · 5–30 min
Autonomous fix-and-revalidate loop. Runs the full pipeline, fixes failures, re-validates until all journeys PASS or max attempts exhausted. The CI workhorse command.
/validate-sweep
/validate-sweep --max-attempts 5
/validate-sweep --platform ios --team
/validate-sweep --scope 'auth flow'
/validate-ci · Validation · 2–15 min
Non-interactive CI/CD mode. Exits 0 on PASS, 1 on FAIL. No approval prompts, no interactive output. Designed for GitHub Actions, GitLab CI, and Jenkins pipelines.
/validate-ci
/validate-ci --platform web --exit-on-first-fail
/validate-ci --json-output > vf-report.json
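A GitHub Actions job might wire this up as below. The `claude -p` invocation is an assumption about how a slash command runs non-interactively; this page only guarantees the exit-code contract (0 on PASS, 1 on FAIL), which is what makes the step fail the job automatically.

```yaml
# Sketch only — step wiring is assumed, not specified by this page.
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run ValidationForge in CI mode
        run: claude -p "/validate-ci --json-output" > vf-report.json
        # non-zero exit (FAIL) fails this step, and therefore the job
```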
/validate-team · Validation · 3–20 min
Multi-agent parallel platform validation. Spawns one validator per platform in dependency-aware waves (DB → API → Web/iOS). Each validator owns its evidence directory exclusively.
/validate-team
/validate-team --platforms web,api
/validate-team --wave-mode strict
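The dependency-aware wave order (DB → API → Web/iOS) reduces to repeatedly taking every platform whose dependencies have already been validated, a Kahn-style topological grouping. A minimal sketch, not the plugin's scheduler:

```python
def waves(deps: dict[str, set[str]]) -> list[set[str]]:
    """Group platforms into waves; each wave depends only on earlier ones."""
    done, order = set(), []
    while len(done) < len(deps):
        ready = {p for p, d in deps.items() if p not in done and d <= done}
        if not ready:
            raise ValueError("dependency cycle between platforms")
        order.append(ready)
        done |= ready
    return order

print(waves({"db": set(), "api": {"db"}, "web": {"api"}, "ios": {"api"}}))
```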
/validate-consensus · Validation · 3× single-run
Multi-validator consensus (v1.5 preview). Spawns N independent validators on the same journey list. Synthesizer produces one confidence-scored verdict (HIGH/MEDIUM/LOW) from the agreement ratio.
/validate-consensus
/validate-consensus --validators 3
/validate-consensus --validators 5 --journey checkout
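One plausible mapping from agreement ratio to a confidence bucket; the thresholds below are illustrative guesses, not ValidationForge's documented cutoffs.

```python
def consensus_confidence(verdicts: list[str]) -> tuple[str, str]:
    """Majority verdict plus HIGH/MEDIUM/LOW from the agreement ratio."""
    majority = max(set(verdicts), key=verdicts.count)
    ratio = verdicts.count(majority) / len(verdicts)
    if ratio == 1.0:
        level = "HIGH"       # unanimous
    elif ratio >= 2 / 3:
        level = "MEDIUM"     # clear majority
    else:
        level = "LOW"        # weak agreement
    return majority, level
```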
/validate-benchmark · Validation · ~10s
Score project validation posture: Coverage 35% / Evidence Quality 30% / Enforcement 25% / Speed 10%. Appends to .vf/benchmark-history.json. Prints delta vs prior run.
/validate-benchmark
/validate-benchmark --since 7d
/validate-benchmark --export csv
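The composite score follows directly from the published weights. A small sketch of the arithmetic, assuming each dimension is scored 0–100:

```python
# Documented weights: Coverage 35%, Evidence Quality 30%,
# Enforcement 25%, Speed 10%.
WEIGHTS = {"coverage": 0.35, "evidence_quality": 0.30,
           "enforcement": 0.25, "speed": 0.10}

def posture_score(subscores: dict[str, float]) -> float:
    """Combine 0-100 subscores into one weighted posture score."""
    return round(sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS), 1)

print(posture_score({"coverage": 80, "evidence_quality": 90,
                     "enforcement": 100, "speed": 60}))  # → 86.0
```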
/validate-dashboard · Validation · ~5s
Generate a self-contained HTML evidence dashboard. Thumbnails all screenshots, plots a PASS-rate trend graph, hyperlinks every evidence file. Open via file:// — no server needed.
/validate-dashboard
/validate-dashboard --compress
/validate-dashboard --open
/validate-team-dashboard · Validation · ~10s
Aggregate multi-platform validation posture dashboard. Shows wave results, blocked validators, evidence coverage per platform, and trend across team runs.
/validate-team-dashboard
/validate-team-dashboard --since 14d
/vf-telemetry · Validation · ~3s
Show local telemetry: run counts, median sweep time, PASS rate over last 30 days, hook activation frequency, most-failed journey. All data is local — nothing phones home.
/vf-telemetry
/vf-telemetry --export json

The autonomous build-fix-revalidate engine. The full suite is planned for v2.0; /forge-install-rules and /forge-plan are available in v1.0.

/forge-setup · Forge · 10–20s
Initialize the FORGE engine. Writes .validationforge/forge-config.json, creates the runs/ archive directory, and wires the forge-state.json persistence layer.
/forge-setup
/forge-setup --max-attempts 3
/forge-plan · Forge · 30–90s
Plan a forge execution run. Optionally --consensus to generate 3 competing implementation plans synthesized into one. Outputs a phase-by-phase execution plan.
/forge-plan
/forge-plan --consensus
/forge-plan --scope src/billing/
/forge-execute · Forge · 5–60 min
Execute the forge pipeline with hard phase gates: PREFLIGHT → BUILD → VALIDATE → (fix loop). Each phase must pass before the next begins. State persists to forge-state.json.
/forge-execute
/forge-execute --dry-run
/forge-execute --resume
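The hard-gate behavior with forge-state.json persistence can be sketched as follows. The state-file layout shown is a guess; the point is that a failing phase halts the run and progress is written after every phase, which is what makes --resume possible.

```python
import json

PHASES = ["PREFLIGHT", "BUILD", "VALIDATE"]

def run_pipeline(runners, state_path="forge-state.json"):
    """Run phases in order; persist state each phase; halt at first failure."""
    state = {"completed": []}
    for phase in PHASES:
        ok = runners[phase]()                     # run this phase
        state["completed"].append({"phase": phase, "ok": ok})
        with open(state_path, "w") as f:
            json.dump(state, f)                   # persist before gating
        if not ok:
            return phase                          # gate closed: stop here
    return "SHIPPED"
```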
/forge-team · Forge · 3× /forge-execute
Run forge execution with a multi-agent validation team. Validators spawn in dependency-aware waves. Supports up to 5 validators per run.
/forge-team
/forge-team --validators 3 --platform web
/forge-benchmark · Forge · ~10s
Benchmark FORGE runs: fix-loop efficiency, attempt distribution, UNFIXABLE rate, root-cause diversity. Appends to .validationforge/forge-benchmark-history.json.
/forge-benchmark
/forge-benchmark --since 30d
/forge-install-rules · Forge · ~5s
Install all 9 ValidationForge rules to .claude/rules/vf-*.md. Safe to re-run — existing rules are updated, not duplicated. Use after upgrading the plugin.
/forge-install-rules
/forge-install-rules --global
| Command | Category | Typical Runtime | Primary Output |
| --- | --- | --- | --- |
| /vf-setup | Validation | 30–60s | First-time project setup. |
| /validate | Validation | 2–15 min | Full 7-phase pipeline: RESEARCH → PLAN → PREFLIGHT → EXECUTE → ANALYZE → VERDICT → SHIP. |
| /validate-plan | Validation | 15–30s | Define a validation journey by name. |
| /validate-audit | Validation | 30–90s | Read-only severity audit. |
| /validate-fix | Validation | 5–45 min | 3-strike fix loop. |
| /validate-sweep | Validation | 5–30 min | Autonomous fix-and-revalidate loop. |
| /validate-ci | Validation | 2–15 min | Non-interactive CI/CD mode. |
| /validate-team | Validation | 3–20 min | Multi-agent parallel platform validation. |
| /validate-consensus | Validation | 3× single-run | Multi-validator consensus (v1.5 preview). |
| /validate-benchmark | Validation | ~10s | Posture score: Coverage 35% / Evidence Quality 30% / Enforcement 25% / Speed 10%. |
| /validate-dashboard | Validation | ~5s | Self-contained HTML evidence dashboard. |
| /validate-team-dashboard | Validation | ~10s | Aggregate multi-platform posture dashboard. |
| /vf-telemetry | Validation | ~3s | Local telemetry: run counts, PASS rate, hook activity. |
| /forge-setup | Forge | 10–20s | Initialize the FORGE engine. |
| /forge-plan | Forge | 30–90s | Plan a forge execution run. |
| /forge-execute | Forge | 5–60 min | Forge pipeline with hard phase gates: PREFLIGHT → BUILD → VALIDATE → (fix loop). |
| /forge-team | Forge | 3× /forge-execute | Forge execution with a multi-agent validation team. |
| /forge-benchmark | Forge | ~10s | Benchmark FORGE runs: fix-loop efficiency, attempt distribution, UNFIXABLE rate. |
| /forge-install-rules | Forge | ~5s | Install all 9 ValidationForge rules to .claude/rules/. |
Flag patterns (bash):

```bash
# Force platform (skip auto-detect)
/validate --platform ios|web|api|cli|fullstack

# Limit scope to a directory or feature
/validate --scope src/auth/
/validate-sweep --scope "checkout flow"

# CI/CD — no prompts, exit codes
/validate-ci --json-output > report.json

# Auto-fix after sweep (3-strike cap)
/validate --fix
/validate-fix --max-attempts 2

# Parallel journeys for large projects
/validate --parallel

# Consensus: N independent validators, synthesized verdict
/validate-consensus --validators 3

# Forge: plan with 3 competing perspectives
/forge-plan --consensus
```