# Code Graders
Code graders (the type `code-judge` is also accepted for backward compatibility) are scripts that evaluate agent responses deterministically. Write them in any language: Python, TypeScript, Node, or anything that runs as an executable.
## Contract

Code graders communicate via stdin/stdout JSON:
**Input (stdin):**

```json
{
  "input_text": "What is 15 + 27?",
  "criteria": "Correctly calculates 15 + 27 = 42",
  "output_text": "The answer is 42.",
  "expected_output_text": "42"
}
```
**Output (stdout):**

```json
{
  "score": 1.0,
  "assertions": [
    { "text": "Answer contains correct value (42)", "passed": true }
  ]
}
```

| Output Field | Type | Description |
|---|---|---|
| `score` | number | 0.0 to 1.0 |
| `assertions` | `Array<{ text, passed, evidence? }>` | Per-aspect results with verdict and optional evidence |
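As a quick sanity check, a grader's parsed stdout can be validated against this contract before it is wired into an eval. `validate_result` below is a hypothetical helper, not part of any SDK:

```python
# Hypothetical helper (not part of the agentv SDK): checks that a parsed
# grader result conforms to the stdout contract above.
def validate_result(result: dict) -> list[str]:
    errors = []
    score = result.get("score")
    if not isinstance(score, (int, float)) or not 0.0 <= score <= 1.0:
        errors.append("score must be a number between 0.0 and 1.0")
    assertions = result.get("assertions", [])
    if not isinstance(assertions, list):
        errors.append("assertions must be an array")
        return errors
    for i, a in enumerate(assertions):
        if not isinstance(a.get("text"), str):
            errors.append(f"assertions[{i}].text must be a string")
        if not isinstance(a.get("passed"), bool):
            errors.append(f"assertions[{i}].passed must be a boolean")
    return errors
```

Run it against `json.loads` of whatever your grader prints; an empty list means the output is contract-shaped.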
## Python Example

```python
import json, sys

data = json.load(sys.stdin)
output_text = data.get("output_text", "")

assertions = []
if "42" in output_text:
    assertions.append({"text": "Output contains correct value (42)", "passed": True})
else:
    assertions.append({"text": "Output does not contain expected value (42)", "passed": False})

passed = sum(1 for a in assertions if a["passed"])
score = passed / len(assertions) if assertions else 0.0

print(json.dumps({
    "score": score,
    "assertions": assertions,
}))
```

## TypeScript Example
```typescript
import { readFileSync } from "fs";

const data = JSON.parse(readFileSync("/dev/stdin", "utf-8"));
const outputText: string = data.output_text ?? "";

const assertions: Array<{ text: string; passed: boolean }> = [];

if (outputText.includes("42")) {
  assertions.push({ text: "Output contains correct value (42)", passed: true });
} else {
  assertions.push({ text: "Output does not contain expected value (42)", passed: false });
}

const passed = assertions.filter(a => a.passed).length;

console.log(JSON.stringify({
  score: passed > 0 ? 1.0 : 0.0,
  assertions,
  reasoning: `Passed ${passed} check(s)`,
}));
```

## Referencing in Eval Files
```yaml
assertions:
  - name: my_validator
    type: code-grader
    command: [./validators/check_answer.py]
```

## `@agentv/eval` SDK

The `@agentv/eval` package provides a declarative API with automatic stdin/stdout handling. Use `defineCodeGrader` (formerly `defineCodeJudge`) to skip boilerplate:
```typescript
#!/usr/bin/env bun
import { defineCodeGrader } from '@agentv/eval';

export default defineCodeGrader(({ outputText, criteria }) => {
  const assertions: Array<{ text: string; passed: boolean }> = [];

  if (outputText.includes(criteria)) {
    assertions.push({ text: 'Output matches expected outcome', passed: true });
  } else {
    assertions.push({ text: 'Output does not match expected outcome', passed: false });
  }

  const passed = assertions.filter(a => a.passed).length;
  return {
    score: assertions.length === 0 ? 0 : passed / assertions.length,
    assertions,
  };
});
```

SDK exports: `defineCodeGrader`, `Message`, `ToolCall`, `TraceSummary`, `CodeGraderInput`, `CodeGraderResult`
## Target Access

Code graders can call an LLM through a target proxy for metrics that require multiple LLM calls (contextual precision, semantic similarity, etc.).
### Configuration

Add a target block to the evaluator config:
```yaml
assertions:
  - name: contextual-precision
    type: code-grader
    command: [bun, scripts/contextual-precision.ts]
    target:
      max_calls: 10  # Default: 50
```

Use `createTargetClient` from the SDK:
```typescript
#!/usr/bin/env bun
import { createTargetClient, defineCodeGrader } from '@agentv/eval';

export default defineCodeGrader(async ({ inputText, outputText }) => {
  const target = createTargetClient();
  if (!target) {
    return { score: 0, assertions: [{ text: 'Target not configured', passed: false }] };
  }

  const response = await target.invoke({
    question: `Is this relevant to: ${inputText}? Response: ${outputText}`,
    systemPrompt: 'Respond with JSON: { "relevant": true/false }'
  });

  const result = JSON.parse(response.rawText ?? '{}');
  return { score: result.relevant ? 1.0 : 0.0 };
});
```

Use `target.invokeBatch(requests)` for multiple calls in parallel.
Environment variables (set automatically when target is configured):
| Variable | Description |
|---|---|
| `AGENTV_TARGET_PROXY_URL` | Local proxy URL |
| `AGENTV_TARGET_PROXY_TOKEN` | Bearer token for authentication |
## Advanced Input Fields

Beyond the basic text fields (`input_text`, `output_text`, `expected_output_text`, `criteria`), code graders receive additional structured context:
| Field | Type | Description |
|---|---|---|
| `input_files` | `string[]` | Paths to input files referenced in the eval |
| `input` | `Message[]` | Full resolved input message array |
| `expected_output` | `Message[]` | Expected agent behavior including tool calls |
| `output` | `Message[]` | Actual agent execution trace with tool calls |
| `trace` | `TraceSummary` | Lightweight execution metrics (tool calls, errors) |
| `token_usage` | `{ input, output }` | Token consumption |
| `cost_usd` | number | Estimated cost in USD |
| `duration_ms` | number | Total execution duration |
| `start_time` | string | ISO timestamp of first event |
| `end_time` | string | ISO timestamp of last event |
| `file_changes` | `string \| null` | Unified diff of workspace file changes (when `workspace_template` is configured) |
| `workspace_path` | `string \| null` | Absolute path to the workspace directory (when `workspace_template` is configured) |
### `trace` structure

```json
{
  "event_count": 5,
  "tool_names": ["fetch", "search"],
  "tool_calls_by_name": { "search": 2, "fetch": 1 },
  "error_count": 0,
  "llm_call_count": 2
}
```

| Field | Type | Description |
|---|---|---|
| `event_count` | number | Total trace events (tool and LLM calls) |
| `tool_names` | `string[]` | Unique tool names used |
| `tool_calls_by_name` | `Record<string, number>` | Count per tool |
| `error_count` | number | Failed tool calls |
| `llm_call_count` | number | Number of LLM calls (assistant messages) |
Use `expected_output` for retrieval context in RAG evals (tool calls with outputs) and `output` for the actual agent execution trace from live runs.
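Put together, a grader can score execution quality from these structured fields. A minimal sketch follows; the specific checks (requiring a `search` tool call, a $0.10 cost cap) are hypothetical examples, not agentv defaults:

```python
# Sketch: grades execution quality from the structured input fields above.
# The thresholds below are hypothetical, illustrative choices.
def grade(data: dict) -> dict:
    trace = data.get("trace") or {}
    assertions = [
        {"text": "Agent used the search tool",
         "passed": trace.get("tool_calls_by_name", {}).get("search", 0) > 0},
        {"text": "No failed tool calls",
         "passed": trace.get("error_count", 0) == 0},
        {"text": "Cost stayed under $0.10",
         "passed": (data.get("cost_usd") or 0.0) < 0.10},
    ]
    passed = sum(1 for a in assertions if a["passed"])
    return {"score": passed / len(assertions), "assertions": assertions}
```

Hook it up to the contract with `print(json.dumps(grade(json.load(sys.stdin))))`.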
## Workspace Access

When `workspace_template` is configured on a target, code graders receive the workspace path in two ways:

- **JSON payload:** the `workspace_path` field in the stdin input
- **Environment variable:** `AGENTV_WORKSPACE_PATH`
This enables functional grading: running commands like `npm test`, `pytest`, or `cargo test` directly in the agent's workspace.
### Example: Deploy-and-Test Pattern

```typescript
#!/usr/bin/env bun
import { readFileSync } from "fs";
import { execFileSync } from "child_process";

const input = JSON.parse(readFileSync("/dev/stdin", "utf-8"));
const cwd = input.workspace_path;

const assertions: Array<{ text: string; passed: boolean }> = [];

// Stage 1: Install dependencies
try {
  execFileSync("npm", ["install"], { cwd, stdio: "pipe" });
  assertions.push({ text: "npm install passed", passed: true });
} catch {
  assertions.push({ text: "npm install failed", passed: false });
}

// Stage 2: Typecheck
try {
  execFileSync("npx", ["tsc", "--noEmit"], { cwd, stdio: "pipe" });
  assertions.push({ text: "typecheck passed", passed: true });
} catch {
  assertions.push({ text: "typecheck failed", passed: false });
}

// Stage 3: Run tests
try {
  execFileSync("npm", ["test"], { cwd, stdio: "pipe" });
  assertions.push({ text: "tests passed", passed: true });
} catch {
  assertions.push({ text: "tests failed", passed: false });
}

const passed = assertions.filter(a => a.passed).length;
console.log(JSON.stringify({
  score: assertions.length > 0 ? passed / assertions.length : 0,
  assertions,
}));
```

Paired with a target that provisions the workspace:

```yaml
targets:
  - name: my_agent
    provider: cli
    command: "my-agent --task {INPUT_FILE} --output {OUTPUT_FILE}"
    workspace_template: ./workspace-template
```
```yaml
# dataset.eval.yaml
tests:
  - id: implement-feature
    criteria: Agent implements the feature correctly
    input: "Implement the TODO functions in src/index.ts"
    assertions:
      - name: functional-check
        type: code-grader
        command: [bun, scripts/functional-check.ts]
```

See `examples/features/functional-grading/` for a complete working example.
## Testing Locally

### With agentv eval assert

Run a grader from `.agentv/graders/` by name; no manual JSON piping required:
```sh
# Pass agent output and input directly
agentv eval assert rouge-score --agent-output "The fox jumps over the dog" --agent-input "Summarise this"

# Or pass a JSON file with { output, input } fields
agentv eval assert rouge-score --file result.json
```

The command:
- Discovers the grader script by walking up directories looking for `.agentv/graders/<name>.{ts,js,mts,mjs}`
- Passes `{ output_text, output, input, input_text }` to the script via stdin
- Prints the grader's JSON result to stdout
- Exits 0 if `score >= 0.5`, exits 1 otherwise
This is the same interface that agent-orchestrated evals use: the `EVAL.yaml` transpiler emits `agentv eval assert` instructions for code graders so external grading agents can run them directly.
### With stdin pipe

Pipe JSON directly to the grader script for full control:

```sh
echo '{"input_text":"What is 2+2?","criteria":"4","output_text":"4","expected_output_text":"4"}' | python validators/check_answer.py
```
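The same pipe can be automated in a normal test runner. `run_grader` below is a hypothetical helper, not an agentv API:

```python
import json
import subprocess

# Sketch of a CI helper (hypothetical): runs a grader script with a JSON
# payload on stdin and returns its parsed stdout result.
def run_grader(command: list[str], payload: dict) -> dict:
    proc = subprocess.run(
        command,
        input=json.dumps(payload),
        capture_output=True,
        text=True,
        check=True,  # a non-zero exit means the grader itself crashed
    )
    return json.loads(proc.stdout)
```

For example, `run_grader(["python", "validators/check_answer.py"], {"input_text": "What is 2+2?", "criteria": "4", "output_text": "4", "expected_output_text": "4"})` lets a test suite assert on the returned score.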