Experiment Guide¶
Status: 🚧 Work in Progress - This guide will evolve as the experiment system matures.
This guide explains how to run AI experiments using the podcast_scraper benchmarking framework. Experiments allow you to test different models, prompts, and parameters on canonical datasets and compare results against frozen baselines.
Getting Started¶
Workflow Order: You must follow these steps in order:
- Prepare Source Data (Step 0) - Generate metadata and source indexes from RSS XML files
- Create a Dataset (Step 1) - Create a canonical dataset from your eval data
- Materialize Dataset (Step 1a) - Validate and materialize the dataset (optional but recommended)
- Create a Baseline (Step 2) - Create a baseline using that dataset
- Run Experiments (Step 3) - Run experiments that compare against the baseline
Why this order?
- Datasets require source data with transcripts
- Baselines require a dataset to know which episodes to process
- Experiments require both a dataset (for input) and a baseline (for comparison)
- Materialization validates dataset integrity before use
Overview¶
The experiment system consists of several components:
- Source Data - Raw RSS XML files, transcripts, and metadata in `data/eval/sources/`
- Datasets - Canonical, frozen sets of episodes with transcripts and golden references
- Materialized Datasets - Validated, copied datasets in `data/eval/materialized/`
- Baselines - Frozen reference results from a known system state
- Experiments - Runs that test new configurations against datasets and compare to baselines
Key Concepts¶
- Source ID: Identifier for a source directory (e.g., `curated_5feeds_raw_v1`)
- Dataset ID: Identifier for a canonical dataset (e.g., `curated_5feeds_smoke_v1`)
- Baseline ID: Identifier for a frozen baseline (e.g., `bart_led_baseline_v1`)
- Experiment ID: Unique identifier for an experiment run (e.g., `summarization_openai_long_v2`)
Understanding Baselines, Experiments, and App Defaults¶
This section explains the correct way to think about baselines, experiments, and how they relate to your application's default behavior.
What is a Baseline?¶
A baseline represents how the default app behaves for a given task, dataset, and provider.
Key points:
- ✅ Baseline params = default app params - The configuration used in a baseline should match what users get by default
- ✅ Baseline pipeline = default app pipeline - The processing steps should match the default workflow
- ✅ Baseline output = what users should expect today - The results represent current production behavior
This is not just "ok" — this is the point of a baseline.
Critical Clarification: Baseline ≠ "Whatever the App Happens to Do"¶
❌ Wrong approach: Baseline = "whatever the app happens to do right now"
✅ Correct approach: Baseline = explicitly frozen snapshot of default app behavior
Why this matters:
Your app may evolve, but your baseline must not drift silently. The rule is:
The app defaults should always be derived from a baseline, not the other way around.
The Correct Relationship Between App and Baseline¶
Ideal flow (what you should aim for):
- You define a config (like a YAML experiment config)
- You run it through the evaluation system
- You promote that run to a baseline
- That baseline config becomes the app default
- Future app changes are compared against that baseline
Visually:
baseline config ──► app default behavior
▲
│
experiments / changes
Not:
app default (mutable) ──► baseline (moving target) ❌
Why This Distinction is Important¶
If you treat baseline as "whatever the app currently does":
- ❌ Regressions slip in unnoticed
- ❌ Metrics history becomes meaningless
- ❌ You can't explain why quality changed
- ❌ Rollbacks become guesswork
If you treat baseline as the authority:
- ✅ App behavior is intentional
- ✅ Changes are deliberate
- ✅ Comparisons are meaningful
- ✅ Rollbacks are trivial
How This Applies to Your Setup¶
For your case:
`baseline_bart_small_led_long_fast` → default summarization behavior in dev
Later you'll likely have:
`baseline_prod_authority_benchmark_v1` → default summarization behavior in prod
Those baselines should correspond 1:1 with:
- The model IDs
- Generation params
- Preprocessing logic
- Chunking strategy (once added)
Practical Guideline¶
If a user asks "what does the app do by default?", you should be able to answer: "it runs baseline X."
If you can't answer that, the baseline isn't doing its job.
What Baselines Should NOT Be Used For¶
Just to be clear:
- ❌ Baselines are not "best possible quality"
- ❌ Baselines are not "experiments"
- ❌ Baselines are not "aspirational targets"
Those are:
- Capability baselines - for exploring what's possible
- Silver/gold references - for quality targets
- Experiments - for testing new approaches
Different roles, different purposes.
How This Ties Back to Configuration¶
Your instinct was right:
- Putting `max_length: 150` in the baseline config would literally mean: "the app default produces very short summaries"
That's why fixing the baseline params is so important.
Once you fix and promote the baseline:
- Those params become your app default
- Everything else is compared against them
One-Sentence Rule to Remember¶
A baseline is the contract for default app behavior — frozen, explicit, and intentional.
From Baseline to App Default¶
The workflow for promoting a baseline to app default:
- Create baseline - Run evaluation with your intended default config
- Validate baseline - Ensure metrics meet acceptance criteria
- Promote baseline - Mark it as the authoritative default
- Promote baseline into the Model Registry (RFC-044) - Create a named mode in code
- Update app defaults - Use the promoted `summary_mode_id` (mode) as the default
- Verify alignment - Confirm app behavior matches baseline
Future changes:
- Run experiments against the baseline
- Compare metrics and quality
- If better, create new baseline and update app defaults
- If worse, reject the change
This ensures all app behavior is intentional and traceable.
Registry Promotion (RFC-044)¶
The app runtime never imports data/eval/. Instead, proven baseline configs are
promoted into the code registry as modes:
make registry-promote BASELINE_ID=baseline_ml_prod_authority_v1 MODE_ID=ml_prod_authority_v1
Then, set summary_mode_id: ml_prod_authority_v1 in config (or rely on the production default).
Step 0: Prepare Source Data¶
Prerequisites: You need RSS XML files and transcript files in data/eval/sources/.
Before creating datasets, you should:
- Generate episode metadata from RSS XML files
- Generate source indexes for inventory management
Generate Episode Metadata¶
Generate metadata JSON files from RSS XML files:
make metadata-generate INPUT_DIR=data/eval/sources
This will:
- Scan `data/eval/sources/` recursively for RSS XML files
- Parse each RSS feed and extract episode metadata
- Generate `*.metadata.json` files next to each XML file
Output format:
Each {episode_id}.metadata.json contains:
{
"source_episode_id": "p01_e01",
"feed_name": "Singletrack Sessions",
"feed_url": "http://localhost/",
"episode_title": "Episode 1: Building Trails That Last...",
"published_at": "2025-09-01",
"duration_seconds": 630,
"language": "en",
"scraped_at": "2026-01-13T12:07:56.657450Z"
}
Optional parameters:
make metadata-generate \
INPUT_DIR=data/eval/sources \
OUTPUT_DIR=data/eval/metadata \
LOG_LEVEL=DEBUG
Generate Source Index¶
Create an inventory index for each source directory:
make source-index SOURCE_DIR=data/eval/sources/curated_5feeds_raw_v1
This will:
- Scan the source directory for feed subdirectories
- Find all transcript and metadata files
- Compute SHA256 hashes for transcripts
- Generate `index.json` in the source directory
Output format:
The index.json contains:
{
"source_id": "curated_5feeds_raw_v1",
"created_at": "2026-01-13T12:11:46.314843Z",
"episodes": [
{
"source_episode_id": "p01_e01",
"feed": "feed-p01",
"transcript_path": "feed-p01/p01_e01.txt",
"transcript_sha256": "a650e729cc8b7379c94fd5b29c092bcd32a8c7e4c2086f1321d6ed496718b9b4",
"meta_path": "feed-p01/p01_e01.metadata.json"
}
]
}
Process all sources:
make source-index SOURCE_DIR=data/eval/sources ALL=1
Benefits of source indexes:
- Programmatic dataset generation
- Drift detection (hash changes)
- Dataset definition validation
- Avoid ad-hoc directory scanning
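Conceptually, drift detection is just recomputing each transcript's SHA256 and comparing it to the hash recorded in `index.json`. A minimal sketch of that check (the helper names `sha256_of` and `detect_drift` are illustrative, not part of the codebase):

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA256 of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def detect_drift(source_dir: Path) -> list[str]:
    """Return episode IDs whose transcript no longer matches index.json."""
    index = json.loads((source_dir / "index.json").read_text())
    drifted = []
    for ep in index["episodes"]:
        actual = sha256_of(source_dir / ep["transcript_path"])
        if actual != ep["transcript_sha256"]:
            drifted.append(ep["source_episode_id"])
    return drifted
```

An empty return value means the source directory still matches its index; any listed episode ID indicates a transcript changed after the index was generated.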
Step 1: Create a Dataset¶
Prerequisites: You need evaluation data in data/eval/ with transcript files (.txt files in any subdirectory).
Datasets are canonical, frozen sets of episodes stored as JSON files. The script recursively finds all .txt files in subdirectories and treats each as a transcript.
Quick Start: Predefined Datasets¶
For the curated 5 feeds source, we have three predefined datasets:
Smoke Test Dataset (first episode per feed):
make dataset-smoke
Creates data/eval/datasets/curated_5feeds_smoke_v1.json with 5 episodes.
Benchmark Dataset (first 2 episodes per feed):
make dataset-benchmark
Creates data/eval/datasets/curated_5feeds_benchmark_v1.json with 10 episodes.
Raw Dataset (all episodes):
make dataset-raw
Creates data/eval/datasets/curated_5feeds_raw_v1.json with all episodes.
Custom Dataset Creation¶
Using the Make Command (Recommended):
make dataset-create \
DATASET_ID=indicator_v1 \
EVAL_DIR=data/eval \
DESCRIPTION="Lenny's Podcast evaluation episodes (interview style)"
Default values:
- `EVAL_DIR` defaults to `data/eval` (can be omitted)
- `OUTPUT_DIR` defaults to `benchmarks/datasets` (can be omitted)
- `DESCRIPTION` defaults to "Dataset {DATASET_ID}" (can be omitted)
With all options:
make dataset-create \
DATASET_ID=indicator_v1 \
EVAL_DIR=data/eval \
OUTPUT_DIR=data/eval/datasets \
DESCRIPTION="Lenny's Podcast evaluation episodes (interview style)" \
CONTENT_REGIME=narrative \
MAX_EPISODES_PER_FEED=2
Filtering episodes:
Use MAX_EPISODES_PER_FEED to limit episodes per feed:
- `MAX_EPISODES_PER_FEED=1` - First episode per feed (smoke test)
- `MAX_EPISODES_PER_FEED=2` - First 2 episodes per feed (benchmark)
- Omit the parameter - All episodes (full dataset)
Using the Script Directly¶
python scripts/eval/create_dataset_json.py \
--dataset-id indicator_v1 \
--eval-dir data/eval \
--output-dir data/eval/datasets \
--description "Lenny's Podcast evaluation episodes (interview style)" \
--max-episodes-per-feed 2
How it works:
- Recursively scans `data/eval/` for all `.txt` files
- Derives episode IDs from filenames (without extension)
- Looks for associated files:
  - `{episode_id}.metadata.json` - Episode metadata (new format)
  - `metadata.json` - Episode metadata (old format)
  - `{episode_id}.raw.txt` - Raw transcript
  - `{episode_id}.summary.gold.long.txt` - Long golden summary
  - `{episode_id}.summary.gold.short.txt` - Short golden summary
- Computes SHA256 hashes for transcripts
- Creates dataset JSON with all episode information
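The discovery step above can be sketched in a few lines. This is an illustrative reimplementation of the documented lookup rules, not the actual logic of `scripts/eval/create_dataset_json.py`; the function name `discover_episodes` is hypothetical:

```python
from pathlib import Path

def discover_episodes(eval_dir: str) -> dict[str, dict]:
    """Map episode_id -> associated files, per the documented rules."""
    episodes = {}
    for txt in sorted(Path(eval_dir).rglob("*.txt")):
        name = txt.name
        # Skip derived files so only cleaned transcripts define episodes
        if name.endswith(".raw.txt") or ".summary.gold." in name:
            continue
        episode_id = txt.stem  # filename without extension
        entry = {"transcript_path": str(txt)}
        meta = txt.with_name(f"{episode_id}.metadata.json")
        if meta.exists():
            entry["metadata_path"] = str(meta)
        episodes[episode_id] = entry
    return episodes
```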
Dataset JSON Structure¶
A dataset JSON looks like this:
{
"dataset_id": "curated_5feeds_smoke_v1",
"version": "1.0",
"description": "Smoke test dataset: first episode per feed from curated_5feeds_raw_v1",
"created_at": "2026-01-13T12:22:41.258855Z",
"content_regime": "explainer",
"num_episodes": 5,
"episodes": [
{
"episode_id": "p01_e01",
"title": "Episode 1: Building Trails That Last (with Liam Verbeek)",
"transcript_path": "data/eval/sources/curated_5feeds_raw_v1/feed-p01/p01_e01.txt",
"transcript_hash": "a650e729cc8b7379c94fd5b29c092bcd32a8c7e4c2086f1321d6ed496718b9b4",
"preprocessing_profile": "cleaning_v3",
"duration_minutes": 10.5
}
]
}
Manual Dataset Creation¶
You can also create dataset JSONs manually. Each episode must have:
- `episode_id`: Unique identifier
- `transcript_path`: Path to cleaned transcript file
- `transcript_hash`: SHA256 hash of transcript content
Optional fields:
- `title`: Episode title
- `preprocessing_profile`: Profile used for cleaning (see Preprocessing Profiles Guide)
- `transcript_raw_path`: Path to raw transcript
- `golden_summary_long_path`: Path to long golden summary
- `golden_summary_short_path`: Path to short golden summary
- `duration_minutes`: Episode duration in minutes
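When writing entries by hand, the main thing to get right is the hash, which must be the SHA256 of the transcript's bytes. A small helper sketch (assumed, not a project API) that builds one episode entry with the required fields and passes optional fields through:

```python
import hashlib
from pathlib import Path

def make_episode_entry(episode_id: str, transcript_path: str, **optional) -> dict:
    """Build one dataset episode entry with the three required fields.

    Optional fields (title, duration_minutes, ...) are passed through as-is.
    """
    content = Path(transcript_path).read_bytes()
    entry = {
        "episode_id": episode_id,
        "transcript_path": transcript_path,
        "transcript_hash": hashlib.sha256(content).hexdigest(),
    }
    entry.update(optional)
    return entry
```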
Step 1a: Materialize Dataset (Recommended)¶
Prerequisites: You must have created a dataset first (Step 1).
Materialization validates dataset integrity and creates a clean, reproducible copy of all transcripts.
Why Materialize?¶
Materialization proves:
- Dataset JSON is correct
- Paths resolve correctly
- Hashes match expected values
- Materialization is reproducible
Materializing a Dataset¶
Using the Make Command (Recommended):
make dataset-materialize DATASET_ID=curated_5feeds_smoke_v1
With custom output directory:
make dataset-materialize \
DATASET_ID=curated_5feeds_smoke_v1 \
OUTPUT_DIR=data/eval/materialized
Using the Script Directly:
python scripts/eval/materialize_dataset.py \
--dataset-id curated_5feeds_smoke_v1 \
--output-dir data/eval/materialized
What Materialization Does¶
- Validates dataset JSON - Checks that all required fields are present
- Resolves paths - Verifies all transcript files exist
- Validates hashes - Computes SHA256 and compares to expected hash
- Copies transcripts - Creates clean copies in materialized directory
- Creates metadata - Generates episode and dataset metadata files
Hash validation:
If a transcript hash doesn't match, materialization fails with a clear error:
ERROR: Episode p01_e01: HASH MISMATCH - transcript file has been modified!
Expected hash: abc123...
Actual hash: def456...
File: data/eval/sources/curated_5feeds_raw_v1/feed-p01/p01_e01.txt
This indicates the transcript file has changed since the dataset was created.
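The validate-then-copy core of materialization can be sketched as follows. This mirrors the documented steps only; the real implementation lives in `scripts/eval/materialize_dataset.py` and the function name here is illustrative:

```python
import hashlib
import shutil
from pathlib import Path

def materialize_episode(episode: dict, out_dir: Path) -> None:
    """Validate one episode's transcript hash, then copy it into out_dir."""
    src = Path(episode["transcript_path"])
    actual = hashlib.sha256(src.read_bytes()).hexdigest()
    if actual != episode["transcript_hash"]:
        raise ValueError(
            f"Episode {episode['episode_id']}: HASH MISMATCH - "
            f"expected {episode['transcript_hash']}, got {actual}"
        )
    out_dir.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(src, out_dir / f"{episode['episode_id']}.txt")
```

Because the copy only happens after the hash check passes, a modified source transcript fails fast instead of silently propagating into the materialized dataset.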
Materialized Dataset Structure¶
data/eval/materialized/curated_5feeds_smoke_v1/
├── meta.json # Dataset-level metadata
├── p01_e01.txt # Copied transcript
├── p01_e01.meta.json # Episode metadata
├── p02_e01.txt
├── p02_e01.meta.json
└── ...
Dataset metadata (meta.json):
{
"dataset_id": "curated_5feeds_smoke_v1",
"source_dataset_file": "data/eval/datasets/curated_5feeds_smoke_v1.json",
"num_episodes": 5,
"materialized_at": "2026-01-13T12:22:41.258855Z",
"episodes": [
{
"episode_id": "p01_e01",
"transcript_path": "p01_e01.txt",
"meta_path": "p01_e01.meta.json"
}
]
}
Episode metadata ({episode_id}.meta.json):
{
"episode_id": "p01_e01",
"transcript_path": "p01_e01.txt",
"transcript_hash": "a650e729cc8b7379c94fd5b29c092bcd32a8c7e4c2086f1321d6ed496718b9b4",
"source_transcript_path": "/path/to/source/p01_e01.txt",
"preprocessing_profile": "cleaning_v3",
"title": "Episode 1: Building Trails That Last...",
"duration_minutes": 10.5
}
Reproducibility¶
Materialization is reproducible - you can delete the materialized directory and regenerate it byte-for-byte:
rm -rf data/eval/materialized/curated_5feeds_smoke_v1
make dataset-materialize DATASET_ID=curated_5feeds_smoke_v1
Step 2: Create a Baseline¶
Prerequisites: You must have created a dataset first (Step 1). The baseline will use that dataset to know which episodes to process.
Baselines are frozen reference results from a known system state. They serve as comparison points for experiments.
Creating a Baseline with Make (Recommended)¶
Use the make command to materialize a baseline:
make baseline-create \
BASELINE_ID=bart_led_baseline_v1 \
DATASET_ID=curated_5feeds_smoke_v1
With optional experiment config:
make baseline-create \
BASELINE_ID=bart_led_baseline_v1 \
DATASET_ID=curated_5feeds_smoke_v1 \
EXPERIMENT_CONFIG=data/eval/configs/baseline_config.yaml \
PREPROCESSING_PROFILE=cleaning_v3
Creating a Baseline with the Script¶
Alternatively, you can call the script directly:
python scripts/eval/materialize_baseline.py \
--baseline-id bart_led_baseline_v1 \
--dataset-id curated_5feeds_smoke_v1 \
--experiment-config data/eval/configs/baseline_config.yaml \
--preprocessing-profile cleaning_v3
This will:
- Load the dataset JSON (created in Step 1)
- Process each episode using the specified configuration
- Save predictions to `benchmarks/baselines/{baseline_id}/predictions/`
- Generate metadata, fingerprints, and metrics

Important: Baselines are immutable - you cannot overwrite an existing baseline.
Baseline Structure¶
A baseline directory contains:
benchmarks/baselines/bart_led_baseline_v1/
├── metadata.json # Baseline metadata (dataset_id, git commit, stats)
├── fingerprint.json # System fingerprint (model, version, device)
├── metrics.json # Aggregate metrics
├── config.yaml # Experiment config used (if provided)
├── predictions/ # Individual episode predictions
│ ├── ep01.json
│ ├── ep02.json
│ └── ...
└── artifacts/ # Additional artifacts (if any)
Baseline Metadata¶
The metadata.json includes:
- `baseline_id`: Unique identifier
- `dataset_id`: Dataset used
- `created_at`: Timestamp
- `git_commit`: Git commit SHA when the baseline was created
- `git_is_dirty`: Whether the repo had uncommitted changes
- `provider_type`: Provider used (e.g., "OpenAIProvider")
- `model_name`: Model name
- `preprocessing_profile`: Preprocessing profile ID (see Preprocessing Profiles Guide)
- `stats`: Processing statistics (num_episodes, avg_time, compression, etc.)
Step 3: Run an Experiment¶
Experiments test new configurations against datasets and compare results to baselines.
Creating an Experiment Config¶
Create a YAML file (e.g., data/eval/configs/my_experiment.yaml):
id: "summarization_openai_long_v2"
task: "summarization"

backend:
  type: "openai"
  model: "gpt-4o-mini"

prompts:
  system: "summarization/system_v1"
  user: "summarization/long_v2_more_narrative"
  params:
    paragraphs_min: 3
    paragraphs_max: 6

data:
  dataset_id: "curated_5feeds_smoke_v1"  # Use dataset-based mode (recommended)

params:
  max_output_tokens: 900
  temperature: 0.7

# Contract fields (RFC-015)
dataset_id: "curated_5feeds_smoke_v1"
baseline_id: "bart_led_baseline_v1"
golden_required: true
golden_ref: "data/eval"  # Path to golden references
Grounded insights (GIL) and knowledge graph (KG) experiments¶
For transcript-only evaluation on a materialized dataset (no RSS run), use separate configs and runs—one task per YAML:
- `task: grounded_insights` with `backend.type: eval_stub` → `predictions.jsonl` rows include `output.gil` (a GIL-shaped dict). Sample: `data/eval/configs/gil_eval_stub_curated_5feeds_smoke_v1.yaml`.
- `task: knowledge_graph` with `backend.type: eval_stub` → `output.kg`. Sample: `data/eval/configs/kg_eval_stub_curated_5feeds_smoke_v1.yaml`.
Details, the gold reference layout (`references/gold/gil/`, `references/gold/kg/`), and the metrics schemas are documented in `data/eval/README.md` and `data/eval/configs/README.md`. Provider and coupled-summary modes are not wired into `run_experiment` yet (only `eval_stub` is validated today).
Data Configuration Modes¶
The experiment runner supports two data configuration modes:
Dataset-Based Mode (Recommended)¶
data:
  dataset_id: "curated_5feeds_smoke_v1"
This loads episode information from data/eval/datasets/curated_5feeds_smoke_v1.json (or benchmarks/datasets/ if not found). Episode IDs are taken directly from the dataset JSON.
Glob-Based Mode (Legacy)¶
data:
  episodes_glob: "data/episodes/ep*/transcript.txt"
  id_from: "parent_dir"  # or "stem"
This uses glob patterns to discover files. Episode IDs are derived from paths using the id_from rule.
Note: You cannot specify both dataset_id and episodes_glob in the same config.
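The two `id_from` rules can be illustrated with a short sketch (the function name is hypothetical; the actual derivation lives in the experiment runner):

```python
from pathlib import Path

def episode_id_from_path(path: str, id_from: str) -> str:
    """Derive an episode ID from a transcript path, per the id_from rule."""
    p = Path(path)
    if id_from == "parent_dir":
        return p.parent.name  # data/episodes/ep01/transcript.txt -> "ep01"
    if id_from == "stem":
        return p.stem         # data/episodes/ep01.txt -> "ep01"
    raise ValueError(f"unknown id_from rule: {id_from}")
```

Use `parent_dir` when each episode lives in its own directory with a generic filename like `transcript.txt`, and `stem` when the filename itself carries the episode ID.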
Running an Experiment¶
Prerequisites: You must have created both a dataset (Step 1) and a baseline (Step 2). The experiment will use the dataset for input and compare against the baseline.
Running an Experiment with Make (Recommended)¶
export OPENAI_API_KEY="your-key-here"
make experiment-run CONFIG=data/eval/configs/my_experiment.yaml
With custom log level:
make experiment-run CONFIG=data/eval/configs/my_experiment.yaml LOG_LEVEL=DEBUG
Running an Experiment with the Script¶
Alternatively, you can call the script directly:
export OPENAI_API_KEY="your-key-here"
python scripts/eval/run_experiment.py data/eval/configs/my_experiment.yaml
The experiment runner will:
- Validate the experiment contract (dataset_id, baseline_id, etc.)
- Load the dataset and discover input files
- Process each episode with the specified provider
- Save predictions to
results/{experiment_id}/predictions.jsonl - Generate metadata, fingerprints, and statistics
Experiment Results¶
Results are saved to results/{experiment_id}/:
results/summarization_openai_long_v2/
├── predictions.jsonl # One JSON object per episode (input/output/hashes/timing)
├── run_metadata.json # Experiment metadata (config, stats, contract info)
└── fingerprint.json # System fingerprint
Understanding Predictions¶
Each line in predictions.jsonl contains:
{
"episode_id": "p01_e01",
"input_path": "data/eval/sources/curated_5feeds_raw_v1/feed-p01/p01_e01.txt",
"input_hash": "a650e729cc8b7379c94fd5b29c092bcd32a8c7e4c2086f1321d6ed496718b9b4",
"output": "Summary text here...",
"output_hash": "abc123...",
"processing_time_seconds": 2.5,
"input_length_chars": 50000,
"output_length_chars": 500
}
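Since `predictions.jsonl` is one JSON object per line, post-hoc analysis is straightforward. A minimal sketch (the helper names and the derived compression ratio are illustrative, assuming only the fields shown above):

```python
import json

def load_predictions(path: str) -> list[dict]:
    """Read predictions.jsonl: one JSON object per line, one line per episode."""
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]

def compression_ratio(pred: dict) -> float:
    """Input chars over output chars, from the documented length fields."""
    return pred["input_length_chars"] / pred["output_length_chars"]
```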
Step 4: Evaluate Results¶
Evaluation is handled automatically by the experiment runner. When you run an experiment with --baseline and/or --reference flags, the system automatically:
- Computes intrinsic metrics (gates, length, performance, cost)
- Computes vs_reference metrics (ROUGE, embedding similarity) if references are provided
- Computes deltas vs baseline if baseline is provided
Metrics Calculation Flow¶
experiment-run → run_experiment.py → score_run() → metrics.json
- Run Experiment: `scripts/eval/run_experiment.py` processes episodes and generates `predictions.jsonl`
- Compute Metrics: `score_run()` in `src/podcast_scraper/evaluation/scorer.py` reads predictions and computes metrics
- Save Results: Metrics are saved to `data/eval/runs/<run_id>/metrics.json` and `metrics_report.md`
Running Experiments with Evaluation¶
To run an experiment with full evaluation, use the --baseline and/or --reference flags:
make experiment-run \
CONFIG=experiments/my_experiment.yaml \
BASELINE=bart_led_baseline_v1 \
REFERENCE=silver_gpt52_v1,gold_human_v1
Arguments:
- `CONFIG` (required) - Experiment config YAML
- `BASELINE` (optional) - Baseline ID for comparison
- `REFERENCE` (optional, comma-separated) - Reference IDs for evaluation (can be silver/gold)
- `LOG_LEVEL` (optional) - Logging level
Evaluation Architecture¶
The evaluation system consists of three separate roles that work together:
- Runner - Produces outputs (predictions + fingerprint + run metadata)
- Scorer - Computes metrics (gates, stability, cost/latency, and optionally "vs reference" metrics)
- Comparator - Computes deltas vs baseline
These roles are kept separate in code, even though they can be wired together in one script.
Runner (Execution)¶
The runner executes the experiment and produces:
- `predictions.jsonl` - Model outputs for all episodes
- `fingerprint.json` - System fingerprint (reproducibility)
- `run_metadata.json` - Experiment metadata
Location: scripts/eval/run_experiment.py (runner phase)
Scorer (Metrics)¶
The scorer computes metrics from predictions. Metrics are divided into two categories:
Intrinsic Metrics¶
Intrinsic metrics are computed from predictions alone and don't require reference summaries. They include:
1. Quality Gates
Detect common issues in generated summaries:
- `boilerplate_leak_rate`: Fraction of episodes with promotional/sponsor content leaks
  - Patterns detected: "subscribe to our newsletter", "follow us on", "rate and review", etc.
- `speaker_label_leak_rate`: Fraction of episodes with speaker labels leaking through (FAIL gate)
  - Patterns detected: "Host:", "Guest:", "Speaker 1:", "Interviewer:", etc.
  - This is the main summarization gate - should be 0.0
- `truncation_rate`: Fraction of episodes that appear truncated
  - Detected by truncation markers ("...", "[TRUNCATED]") or suspiciously short outputs
- `failed_episodes`: List of episode IDs that failed quality gates
Warnings (Not Gates):
- `speaker_name_leak_rate`: Fraction of episodes with actual speaker names leaking through (WARN only)
  - Detects actual names from metadata (e.g., "Alice", "Bob") appearing in summaries
  - This is tracked for monitoring but does not cause gate failures
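A gate like `speaker_label_leak_rate` boils down to pattern matching over the generated summaries. A simplified sketch using only the example patterns listed above (the real gate lives in `src/podcast_scraper/evaluation/scorer.py` and may use a different pattern list):

```python
import re

# Illustrative patterns from the guide, not the production list.
SPEAKER_LABEL_RE = re.compile(r"^\s*(Host|Guest|Interviewer|Speaker \d+):", re.MULTILINE)

def speaker_label_leak_rate(summaries: list[str]) -> float:
    """Fraction of summaries containing a speaker label (FAIL gate; should be 0.0)."""
    if not summaries:
        return 0.0
    leaked = sum(1 for s in summaries if SPEAKER_LABEL_RE.search(s))
    return leaked / len(summaries)
```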
2. Length Metrics
Token-based length statistics:
- `avg_tokens`: Average number of tokens per summary (estimated as chars/4)
- `min_tokens`: Minimum tokens across all summaries
- `max_tokens`: Maximum tokens across all summaries
3. Performance Metrics
Latency measurements:
- `avg_latency_ms`: Average processing time per episode in milliseconds
  - Extracted from `metadata.processing_time_seconds` in predictions
4. Cost Metrics (OpenAI Only)
Note: Cost metrics are only included for OpenAI runs. ML model runs skip this section entirely.
- `avg_cost_usd`: Average cost per episode in USD
- `total_cost_usd`: Total cost for all episodes in USD
Cost is computed from:
- `metadata.cost_usd` (if directly provided by the provider)
- `metadata.usage` (token counts) with model-specific pricing:
  - GPT-4o-mini: $0.15/1M input, $0.60/1M output
  - GPT-4o: $2.50/1M input, $10.00/1M output
Location: src/podcast_scraper/evaluation/scorer.py
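The token-count fallback above amounts to a rate table lookup. A minimal sketch using the rates listed in this guide (the `PRICING` table and field names are assumptions for illustration; the real computation is in the scorer):

```python
# USD per 1M tokens, from the rates documented above.
PRICING = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def episode_cost_usd(model: str, usage: dict) -> float:
    """Cost from token counts, used when metadata.cost_usd is not provided."""
    rates = PRICING[model]
    return (usage["input_tokens"] * rates["input"]
            + usage["output_tokens"] * rates["output"]) / 1_000_000
```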
Comparator (Deltas)¶
The comparator computes deltas between experiment and baseline:
- Cost deltas
- Latency deltas
- Gate regressions
- ROUGE deltas (if both have same references)
Location: src/podcast_scraper/evaluation/comparator.py
Reference Model¶
References are optional evaluation targets. You can have:
- Baseline (optional but usually required for experiments) - for regression detection
- Silver references (optional) - machine-generated, higher quality
- Gold references (optional) - human-verified summaries
Key principle: A reference is anything that looks like a run output (predictions.jsonl + fingerprint.json + baseline.json).
vs_reference Metrics¶
vs_reference metrics compare your predictions against reference summaries (golden or silver standards). These are optional and only computed when references are provided.
When is vs_reference null?¶
vs_reference is null when:
- No references were provided via the `--reference` CLI argument or the `REFERENCE_IDS` Makefile variable
- The experiment was run without reference evaluation
This is the normal state for most runs - references are optional and only needed when you want to compare against golden/silver standards.
How to provide references¶
# Single reference via Makefile
make experiment-run CONFIG=... REFERENCE_IDS=golden_v1
# Multiple references via Makefile
make experiment-run CONFIG=... REFERENCE_IDS="golden_v1 silver_v2"
# Via CLI
python scripts/eval/run_experiment.py config.yaml --reference golden_v1 --reference silver_v2
Reference Structure¶
References can be:
- Baselines: `data/eval/baselines/<baseline_id>/`
- References:
  - Silver: `data/eval/references/silver/<reference_id>/`
  - Gold NER: `data/eval/references/gold/ner_entities/<reference_id>/`
  - Gold Summarization: `data/eval/references/gold/summarization/<reference_id>/`
  - Gold GIL: `data/eval/references/gold/gil/<reference_id>/` (`{episode_id}.json` per episode)
  - Gold KG: `data/eval/references/gold/kg/<reference_id>/` (`{episode_id}.json` per episode)
- Legacy baselines: `benchmarks/baselines/<baseline_id>/`
Reference payloads: Silver references and summarization-style gold references often include a `predictions.jsonl` with the same episode IDs as your run. Gold NER, GIL, and KG references may instead use per-episode JSON files only (no `predictions.jsonl`); `rescore_baseline` and baseline materialization accept either pattern when resolving references.
vs_reference Metrics Computed¶
When references are provided, the following metrics are computed:
- `reference_quality`: Metadata about the reference (episode count, quality level, etc.)
- ROUGE Scores (requires the `rouge-score` package):
  - `rouge1_f1`: ROUGE-1 F1 score (unigram overlap) - measures coverage
  - `rouge2_f1`: ROUGE-2 F1 score (bigram overlap) - measures local coherence
  - `rougeL_f1`: ROUGE-L F1 score (longest common subsequence) - measures structural similarity
- BLEU Score (requires the `nltk` package):
  - `bleu`: BLEU score (n-gram precision with brevity penalty)
- WER (Word Error Rate) (requires the `jiwer` package):
  - `wer`: Word-level edit distance normalized by reference length
- Embedding Similarity (requires the `sentence-transformers` package):
  - `embedding_similarity`: Cosine similarity between embeddings of predictions and references
- `numbers_retained`: Fraction of reference numbers retained in predictions (averaged over episodes); implemented in `evaluation/scorer.py` (`_extract_numbers`, `_compute_numbers_retained`). Omitted when the reference has no numbers.
Example vs_reference Structure¶
{
"vs_reference": {
"golden_v1": {
"reference_quality": {
"episode_count": 5,
"quality_level": "gold"
},
"rouge1_f1": 0.45,
"rouge2_f1": 0.32,
"rougeL_f1": 0.42,
"bleu": 0.38,
"wer": 0.15,
"embedding_similarity": 0.87
},
"silver_v2": {
"reference_quality": {
"episode_count": 5,
"quality_level": "silver"
},
"rouge1_f1": 0.42,
"rouge2_f1": 0.19,
"rougeL_f1": 0.39,
"bleu": 0.35,
"wer": 0.18,
"embedding_similarity": 0.85
}
}
}
Key points:
- Each reference ID becomes a key in the `vs_reference` dictionary
- All metrics are computed independently for each reference
- Missing dependencies (e.g., `rouge-score` not installed) will result in `null` values for those metrics
- You can compare against multiple references in a single run
Metrics Structure¶
metrics.json¶
The scorer generates a metrics.json file with the following structure:
{
"dataset_id": "curated_5feeds_benchmark_v1",
"run_id": "run_2026-01-16_12-10-03",
"episode_count": 10,
"intrinsic": {
"gates": {
"speaker_label_leak_rate": 0.0,
"boilerplate_leak_rate": 0.0,
"truncation_rate": 0.0,
"failed_episodes": []
},
"length": {
"avg_tokens": 420,
"min_tokens": 310,
"max_tokens": 560
},
"performance": {
"avg_latency_ms": 1800
},
"cost": {
"total_cost_usd": 0.14,
"avg_cost_usd": 0.014
}
},
"vs_reference": null
}
Key points:
- `intrinsic` - Always present (computed from predictions alone)
- `vs_reference` - `null` when no references are provided, or a dictionary with reference IDs as keys when references are provided
- The cost section is only included for OpenAI runs (ML models skip it entirely)
metrics_report.md¶
Human-readable markdown report with formatted metrics, suitable for viewing in GitHub or documentation. Includes formatted tables and summaries of all computed metrics.
comparisons/vs_{baseline_id}.json¶
The comparator generates comparison files with deltas:
{
"baseline_id": "baseline_prod_authority_v1",
"dataset_id": "curated_5feeds_benchmark_v1",
"experiment_run_id": "run_2026-01-16_12-10-03",
"deltas": {
"cost_total_usd": -0.05,
"avg_latency_ms": 120,
"gate_regressions": [],
"rougeL_f1_vs_silver_gpt52_v1": 0.01
}
}
Key points:
- Deltas are computed as: `experiment_value - baseline_value`
- `gate_regressions` is a list of gate names that regressed
- ROUGE deltas are included if both experiment and baseline have the same reference
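The delta rule above is a straight subtraction over shared numeric metrics. A minimal sketch (the function name is illustrative; the real comparator in `src/podcast_scraper/evaluation/comparator.py` also handles gate regressions and nested metric sections):

```python
def compute_deltas(experiment: dict, baseline: dict) -> dict:
    """Deltas as experiment_value - baseline_value for shared numeric metrics."""
    return {
        key: experiment[key] - baseline[key]
        for key in experiment
        if key in baseline and isinstance(experiment[key], (int, float))
    }
```

With this convention, a negative cost delta means the experiment is cheaper than the baseline, and a positive latency delta means it is slower.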
Reference Validation¶
For every reference (baseline/silver/gold), the system enforces:
- Episode ID match: Episode IDs match exactly (no missing/extra)
- Immutable: Reference is write-once (cannot be overwritten)
If any of these fail → scoring refuses to run.
Reference Pack Structure¶
A reference pack should contain at minimum:
# Silver references
references/silver/{reference_id}/
├── predictions.jsonl # Reference text per episode
├── fingerprint.json # How reference was generated
├── baseline.json # Reference metadata (reference_quality)
└── config.yaml # Config used (optional)
# Gold NER references
references/gold/ner_entities/{reference_id}/
├── index.json # Index of episodes
├── {episode_id}.json # Gold entities per episode
└── README.md # Reference documentation
# Gold summarization references
references/gold/summarization/{reference_id}/
├── predictions.jsonl # Gold summaries per episode
└── README.md # Reference documentation
# Gold GIL / KG references (eval vs_reference)
references/gold/gil/{reference_id}/
├── {episode_id}.json # Gold GIL payload per episode (same shape as output.gil)
└── README.md # Optional
references/gold/kg/{reference_id}/
├── {episode_id}.json # Gold KG payload per episode (same shape as output.kg)
└── README.md # Optional
Note: A baseline can be promoted to a reference pack if you want. That's fine.
Evaluation Results¶
When you run an experiment with evaluation, results are saved to results/{experiment_id}/:
results/summarization_openai_long_v2/
├── predictions.jsonl # Model outputs for all episodes
├── fingerprint.json # System fingerprint
├── run_metadata.json # Experiment metadata
├── metrics.json # Intrinsic + vs_reference metrics
└── comparisons/
└── vs_baseline_prod_authority_v1.json # Deltas vs baseline
Key Design Decisions¶
1. Separation of Concerns¶
- Runner = execution only
- Scorer = metrics computation
- Comparator = delta computation
This allows:
- Recomputing metrics without re-running inference
- Recomputing comparisons without re-running inference
- Testing each component independently
2. Optional References¶
References are optional because:
- You can do rigorous evaluation without goldens (Phase 1)
- You can add references incrementally (Phase 2/3)
- Different experiments may need different references
3. Reference as "Anything"¶
A reference is anything that looks like a run output:
- Baseline can be a reference
- Silver reference can be a reference
- Gold reference can be a reference
This keeps the system flexible.
4. Metrics vs Comparisons¶
- Metrics = absolute facts about this run (+ vs reference scores)
- Comparisons = deltas between two runs
This separation allows recomputing comparisons later without re-running inference.
Complete Workflow Example¶
Here's a complete example workflow:
# Step 0: Prepare source data
make metadata-generate INPUT_DIR=data/eval/sources
make source-index SOURCE_DIR=data/eval/sources/curated_5feeds_raw_v1
# Step 1: Create datasets
make dataset-smoke # Creates curated_5feeds_smoke_v1 (5 episodes)
make dataset-benchmark # Creates curated_5feeds_benchmark_v1 (10 episodes)
make dataset-raw # Creates curated_5feeds_raw_v1 (all episodes)
# Step 1a: Materialize dataset (validate integrity)
make dataset-materialize DATASET_ID=curated_5feeds_smoke_v1
# Step 2: Create baseline
make baseline-create \
BASELINE_ID=bart_led_baseline_v1 \
DATASET_ID=curated_5feeds_smoke_v1
# Step 3: Run experiment
export OPENAI_API_KEY="your-key-here"
make experiment-run CONFIG=data/eval/configs/my_experiment.yaml
# Step 4: Run experiment with evaluation
make experiment-run \
CONFIG=experiments/my_experiment.yaml \
BASELINE=bart_led_baseline_v1 \
REFERENCE=silver_gpt52_v1
# Results are automatically computed:
# - results/{experiment_id}/metrics.json (intrinsic + vs_reference)
# - results/{experiment_id}/comparisons/vs_{baseline_id}.json (deltas)
# Review results:
cat results/summarization_openai_long_v2/metrics.json | jq '.intrinsic'
cat results/summarization_openai_long_v2/metrics.json | jq '.vs_reference'
cat results/summarization_openai_long_v2/comparisons/vs_bart_led_baseline_v1.json
Best Practices¶
Source Data Management¶
- Generate metadata first: Always generate metadata from RSS XML before creating datasets
- Create source indexes: Use indexes for inventory management and drift detection
- Freeze source data: Once datasets are created, avoid modifying source transcripts
Dataset Management¶
- Freeze datasets: Once created, datasets should be immutable
- Version datasets: Use versioned IDs (e.g., `curated_5feeds_smoke_v1`, `curated_5feeds_smoke_v2`)
- Document datasets: Include clear descriptions and content regime
- Materialize datasets: Always materialize datasets to validate integrity before use
- Use appropriate sizes: smoke datasets for quick tests, benchmark datasets for evaluation, raw datasets for comprehensive analysis
Baseline Management¶
- Create baselines on clean commits: Avoid creating baselines with uncommitted changes
- Document baseline purpose: Use descriptive baseline IDs
- Version baselines: Use versioned IDs (e.g., `bart_led_baseline_v1`, `bart_led_baseline_v2`)
- Never overwrite: Baselines are immutable - create new ones for changes
Experiment Management¶
- Use descriptive IDs: Include model, task, and version in experiment ID
- Always specify baseline: Experiments must compare against a baseline
- Validate contracts: Ensure dataset_id matches between experiment and baseline
- Track golden references: Use `golden_required: true` when evaluation is needed
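The "validate contracts" rule can be enforced mechanically before launching a run. A minimal sketch, assuming the baseline's `metadata.json` carries a `dataset_id` field (the function name is hypothetical):

```python
import json
from pathlib import Path

def check_dataset_contract(experiment_config: dict, baseline_id: str,
                           baselines_dir: Path = Path("benchmarks/baselines")) -> None:
    """Fail fast when experiment and baseline target different datasets (sketch)."""
    metadata_path = baselines_dir / baseline_id / "metadata.json"
    baseline_dataset = json.loads(metadata_path.read_text())["dataset_id"]
    if experiment_config["dataset_id"] != baseline_dataset:
        raise ValueError(
            f"Dataset mismatch: experiment uses {experiment_config['dataset_id']!r}, "
            f"baseline {baseline_id!r} was built on {baseline_dataset!r}"
        )
```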
Workflow¶
- Prepare source data → `make metadata-generate` → `make source-index`
- Create dataset → `make dataset-smoke` / `make dataset-benchmark` / `make dataset-raw`
- Materialize dataset → `make dataset-materialize DATASET_ID=...` (recommended)
- Create baseline → `make baseline-create BASELINE_ID=... DATASET_ID=...`
- Run experiment → `make experiment-run CONFIG=...`
- Run experiment with evaluation → `make experiment-run CONFIG=... BASELINE=... REFERENCE=...` (evaluation is automatic)
Experiment Lifecycle Management¶
When iterating on ML models and preprocessing, you'll make many small changes. Having a clear strategy for what to keep and what to delete prevents clutter while preserving important reference points.
The General Strategy¶
┌─────────────────────────────────────────────────────────────────────────┐
│ EXPERIMENT LIFECYCLE │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ KEEP DELETE │
│ ──── ────── │
│ • Configs you might reuse • Most intermediate runs │
│ • One "before" run per major • Most exploratory configs │
│ change (frozen) • Failed experiment attempts │
│ • All promoted baselines • Superseded parameter sweeps │
│ • Committed references │
│ │
└─────────────────────────────────────────────────────────────────────────┘
What to Keep¶
1. Configs You Might Reuse¶
Archive useful configurations before deleting them:
# Archive all current configs with today's date
make configs-archive
# Creates: data/eval/configs/_archive/configs_YYYY-MM-DD/
This preserves your parameter sweep configurations for reference without cluttering the active configs directory.
2. One "Before" Run Per Major Change (Frozen)¶
Before implementing a significant change (new preprocessing profile, model switch, etc.), freeze one representative run:
# Freeze a run as a baseline comparison point
make run-freeze RUN_ID=baseline_bart_v1 REASON="Pre-cleanup baseline for comparison"
# Creates: data/eval/runs/_frozen_pre_cleanup/<run_id>/
# Adds: NOTE.md with reason and date
Why freeze runs?
- Quantify improvement after the change
- Track metrics like: repetition rate, garbage tokens, coherence, speaker label leakage
- Provides rollback reference if the change regresses quality
3. All Promoted Baselines/References¶
Baselines that become app defaults and reference summaries (gold/silver) should be:
- Committed to version control
- Never deleted (they're immutable)
- Located in `benchmarks/baselines/` (promoted) or `data/eval/references/` (gold/silver)
What to Delete¶
1. Most Intermediate Runs¶
During parameter sweeps, you'll generate many runs. Delete all except the best performer:
# Delete multiple runs at once
make runs-delete RUN_IDS="run_v2 run_v3 run_v4 run_v5"
# Keep only the winning configuration
2. Most Exploratory Configs¶
After a parameter sweep, archive then clean:
# First archive for reference
make configs-archive
# Then clean, optionally keeping one
make configs-clean KEEP=baseline_bart_best.yaml
Makefile Commands Reference¶
| Command | Purpose | Example |
|---|---|---|
| `make configs-archive` | Archive all `baseline_*.yaml` configs | `make configs-archive` |
| `make configs-clean` | Delete `baseline_*.yaml` configs | `make configs-clean KEEP=best.yaml` |
| `make run-freeze` | Freeze a run for baseline comparison | `make run-freeze RUN_ID=my_run REASON="Pre-X baseline"` |
| `make runs-delete` | Delete multiple runs | `make runs-delete RUN_IDS="run1 run2 run3"` |
| `make experiment-run FORCE=1` | Re-run experiment, deleting existing results | `make experiment-run CONFIG=... FORCE=1` |
| `make report-multi-run` | Generate multi-run comparison report (baseline + N runs) | See Multi-run comparison report below |
Multi-run comparison report¶
The multi-run comparison report builds a single markdown table from one optional baseline and any number of experiment runs, using their metrics.json and the same vs-reference metrics (ROUGE, BLEU, embedding, coverage, WER) and latency. Use it to compare baseline vs tier1 vs tier2, or any set of runs, in one view.
Make target: report-multi-run
- Default: With no arguments, uses baseline `baseline_ml_prod_authority_smoke_v1`, runs `hybrid_ml_tier1_smoke_v1` and `hybrid_ml_tier2_qwen25_7b_smoke_v1`, reference `silver_gpt4o_smoke_v1`, and writes `docs/wip/multi_run_comparison.md`.
- Tier 2 (32B): For larger hardware, eval config `hybrid_ml_tier2_qwen25_32b_smoke_v1` is available (`ollama pull qwen2.5:32b`). Add it to `RUN_IDS` when comparing against 7B or tier1.
make report-multi-run
- Custom baseline + runs: Specify baseline, comma-separated run IDs, and reference. Output path is optional.
make report-multi-run \
BASELINE_ID=baseline_ml_prod_authority_smoke_v1 \
RUN_IDS=hybrid_ml_tier1_smoke_v1,hybrid_ml_tier2_qwen25_7b_smoke_v1 \
REFERENCE_ID=silver_gpt4o_smoke_v1 \
OUTPUT=docs/wip/my_comparison.md
- Runs only (no baseline): Omit `BASELINE_ID` and pass only `RUN_IDS` and `REFERENCE_ID`.
make report-multi-run \
RUN_IDS=run_a,run_b,run_c \
REFERENCE_ID=silver_gpt4o_smoke_v1
Options (make variables):
| Option | Required | Description |
|---|---|---|
| `REFERENCE_ID` | Yes (or default) | Reference ID for vs_reference metrics (e.g. `silver_gpt4o_smoke_v1`). Default: `silver_gpt4o_smoke_v1` when using default baseline/runs. |
| `BASELINE_ID` | No | Baseline ID; included as first row. Looked up in `data/eval/baselines/`. |
| `RUN_IDS` | No* | Comma-separated run IDs. Looked up in `data/eval/runs/`. *At least one of `BASELINE_ID` or `RUN_IDS` required. |
| `OUTPUT` | No | Output markdown path. Default: `docs/wip/multi_run_comparison.md`. |
| `TITLE` | No | Report title (default: "Multi-Run Comparison"). |
| `LABELS` | No | Comma-separated labels for each row (same order: baseline first, then runs). Default: use the ID. |
| `DATASET_ID` | No | Dataset ID for the report subtitle (default: from first metrics). |
| `BASELINES_DIR` | No | Baselines directory (default: `data/eval/baselines`). |
| `RUNS_DIR` | No | Runs directory (default: `data/eval/runs`). |
Direct script usage:
python scripts/eval/smoke_three_way_report.py \
--reference-id silver_gpt4o_smoke_v1 \
--baseline-id baseline_ml_prod_authority_smoke_v1 \
--run-ids hybrid_ml_tier1_smoke_v1,hybrid_ml_tier2_qwen25_7b_smoke_v1 \
--output docs/wip/smoke_three_way_comparison.md \
[--title "Smoke comparison"] [--labels "Prod,Tier1,Tier2"]
All script options are documented in the script's help: python scripts/eval/smoke_three_way_report.py --help.
Typical Workflow: Parameter Sweep¶
# 1. Create multiple experiment configs
# baseline_bart_v1.yaml, baseline_bart_v2.yaml, ...
# 2. Run experiments
make experiment-run CONFIG=data/eval/configs/baseline_bart_v1.yaml
make experiment-run CONFIG=data/eval/configs/baseline_bart_v2.yaml
# ...
# 3. Compare results, pick winner (e.g., v3)
# 4. Archive configs before cleanup
make configs-archive
# 5. Clean configs, keeping winner
make configs-clean KEEP=baseline_bart_v3.yaml
# 6. Delete non-winning runs
make runs-delete RUN_IDS="baseline_bart_v1 baseline_bart_v2 baseline_bart_v4"
# 7. Optionally freeze winning run if it's a major milestone
make run-freeze RUN_ID=baseline_bart_v3 REASON="Best params before preprocessing change"
Typical Workflow: Major Change¶
# 1. Freeze current best run as "before"
make run-freeze RUN_ID=baseline_bart_current REASON="Pre-cleaning_v4 baseline"
# 2. Implement the change (e.g., new preprocessing profile)
# 3. Run new experiment
make experiment-run CONFIG=data/eval/configs/baseline_bart_cleaning_v4.yaml
# 4. Compare frozen "before" vs new "after"
# - Check metrics.json for improvements
# - Verify no regressions in gates
# 5. If improved: promote new run, delete old intermediates
# If regressed: investigate, iterate, compare against frozen baseline
Directory Structure After Cleanup¶
data/eval/
├── configs/
│ ├── _archive/
│ │ └── configs_2026-01-30/ # Archived parameter sweeps
│ │ ├── baseline_bart_v1.yaml
│ │ └── ...
│ └── baseline_bart_best.yaml # Current best config
├── runs/
│ ├── _frozen_pre_cleanup/ # Frozen baseline runs
│ │ └── baseline_bart_v1/
│ │ ├── NOTE.md # Why it was frozen
│ │ ├── metrics.json
│ │ └── ...
│ ├── baseline_bart_best/ # Current best run
│ └── README.md
└── ...
Key Principles¶
- Always archive before delete - You may need to reference old configs
- Freeze before major changes - Enables quantitative comparison
- Keep promoted baselines forever - They're your quality contracts
- Delete aggressively otherwise - Clutter obscures signal
- Document frozen runs - The `NOTE.md` explains why they matter
Visual run comparison (RFC-047)¶
To compare experiment or baseline runs side by side (artifact status, KPI tiles, deltas vs a chosen baseline, token/latency charts, optional map/reduce diagnostics, and per-episode diffs), use the Streamlit tool described in the run_compare README in the repository.
pip install -e '.[run_compare]'
make run-compare
The optional `BASELINE` variable picks the default row in the "Baseline (for deltas)" dropdown when that run is selected (see the README). On load, all runs matching the category filter are selected; use Select all / Deselect all in the sidebar as needed. This complements text reports such as `make runs-compare` and `make report-multi-run`.
Troubleshooting¶
"Dataset definition not found"¶
- Check that `data/eval/datasets/{dataset_id}.json` or `benchmarks/datasets/{dataset_id}.json` exists
- Verify the dataset_id in your experiment config matches the JSON filename
"Baseline not found"¶
- Check that `benchmarks/baselines/{baseline_id}/` exists
- Verify the baseline_id in your experiment config is correct
- Create the baseline first using `make baseline-create`
"Dataset mismatch"¶
- The experiment's `dataset_id` must match the baseline's `dataset_id`
- Check `benchmarks/baselines/{baseline_id}/metadata.json` to see which dataset was used
"No input files found"¶
- For dataset mode: Verify transcript paths in the dataset JSON exist
- For glob mode: Check that the glob pattern matches files in your directory
"Episode not found in dataset"¶
- The transcript path in the dataset JSON must match the actual file path
- Use absolute paths or paths relative to the project root
"Hash mismatch" (during materialization)¶
- The transcript file has been modified since the dataset was created
- Regenerate the dataset or restore the original transcript file
- Check `data/eval/sources/` for the original files
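To see which transcript drifted, you can recompute content hashes yourself. A sketch, assuming each episode entry in the dataset JSON carries `transcript_path` and a sha256 hex digest under `transcript_sha256` (both field names are assumptions):

```python
import hashlib
import json
from pathlib import Path

def find_drifted_transcripts(dataset_json: Path) -> list:
    """Report episode IDs whose transcript no longer matches its recorded hash."""
    dataset = json.loads(dataset_json.read_text())
    drifted = []
    for episode in dataset["episodes"]:
        # Hash the file bytes on disk and compare to the frozen digest
        digest = hashlib.sha256(
            Path(episode["transcript_path"]).read_bytes()
        ).hexdigest()
        if digest != episode["transcript_sha256"]:
            drifted.append(episode["episode_id"])
    return drifted
```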
"Materialized directory already exists"¶
- The script will automatically remove and recreate the directory
- This ensures reproducible materialization
Next Steps¶
This guide will evolve as the experiment system matures. Planned additions:
- [ ] Automated evaluation integration
- [x] Comparison tools (experiment vs baseline) — see `make runs-compare`, `make report-multi-run`, and `make run-compare` (RFC-047)
- [ ] CI/CD integration
- [ ] Cost tracking
- [x] Visualization tools — `make run-compare` (Streamlit, RFC-047)
References¶
- RFC-015: AI Experiment Pipeline
- RFC-041: Benchmarking Framework
- Implementation Plan: `docs/wip/ai-quality-implementation-plan-sync.md`
- Dataset Format: `data/eval/datasets/curated_5feeds_smoke_v1.json` (example)
- Baseline Format: `benchmarks/baselines/` (examples)