Content · Intermediate

Benchmark & Comparison Playbook

Build data-driven comparison pages — from benchmark tables to full comparison articles to social distribution.

AGNT Growth Desk · 3 steps · 60-90 minutes · Claude Opus 4.6, Claude Sonnet 4.6

Why this playbook

Comparison pages are the highest-converting content type for B2B and developer products. They capture searchers who are actively evaluating options — the highest-intent traffic you can get from organic search.

But most comparison pages are either biased (strawmanning the competitor) or lazy (surface-level feature lists). This playbook produces rigorous, data-backed comparisons that readers trust and share.

The three-step workflow: build structured benchmark data, generate comparison articles from that data, then distribute. The data-first approach means your comparisons age well: update the benchmark data and regenerate the articles.

Prerequisites

  • Claude Code installed
  • Benchmark data or access to test both products
  • List of competitors/products to compare against

Input requirements

Products to compare (string[], required)
The products, platforms, or tools to compare. Minimum 2, maximum 5 for a single comparison page.

Benchmark data (JSON, CSV, or raw notes; optional)
Structured benchmark results if available. If not provided, the comparison builder generates a testing plan you can execute manually.

Comparison axes (string[], optional)
Specific dimensions to compare on (e.g., 'pricing', 'API coverage', 'latency', 'ecosystem'). If not specified, the builder selects axes from the product category.
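
To make the expected shape concrete, here is a minimal sketch of structured benchmark data. The field names are illustrative assumptions for this sketch, not a schema the playbook requires:

```typescript
// Illustrative benchmark input rows. Field names are assumptions,
// not a required schema.
interface BenchmarkRun {
  product: string;        // must match an entry in "Products to compare"
  axis: string;           // should match an entry in "Comparison axes"
  metric: string;         // what was measured, e.g. "p95 latency (ms)"
  value: number | string; // the measured result
  source: string;         // where the number came from: test run, docs, pricing page
  measuredAt: string;     // ISO date of the benchmark run
}

const runs: BenchmarkRun[] = [
  { product: "Product A", axis: "latency", metric: "p95 latency (ms)",
    value: 142, source: "load test, 10k requests", measuredAt: "2025-01-15" },
  { product: "Product B", axis: "latency", metric: "p95 latency (ms)",
    value: 198, source: "load test, 10k requests", measuredAt: "2025-01-15" },
];
```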

Step-by-step workflow

Step 1. Build structured comparison tables from benchmark data

This prompt takes raw benchmark data (or product documentation) and generates structured comparison tables. Each row is a comparison axis, each column is a product, and every cell includes the specific metric, not just 'yes/no'.

If you don't have benchmark data, the prompt generates a testing protocol: what to measure, how to measure it, and what sample size gives statistically meaningful results.
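
As a sketch of what 'structured' means here, the table could be modeled like this. The shape is an assumption for illustration, not the prompt's exact output format:

```typescript
// Illustrative shape for the structured table: one row per comparison axis,
// one cell per product, every cell carrying a metric and a source.
type CellStatus = "measured" | "not_documented" | "not_applicable";

interface ComparisonCell {
  product: string;
  status: CellStatus;
  metric?: string;     // e.g. "p95 latency: 142 ms"; absent unless measured
  source?: string;     // benchmark run, docs URL, or pricing page
  highlight?: boolean; // marks the strongest cell on this axis
}

interface ComparisonRow {
  axis: string;            // e.g. "latency", "pricing", "API coverage"
  cells: ComparisonCell[]; // one entry per product
}

type ComparisonTable = ComparisonRow[];
```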

Step 2. Generate full comparison articles

Feed the structured tables from step 1 into the comparison writer. It generates complete comparison articles: introduction, feature table, 'when to pick A/B' sections, verdict, and FAQ.

The writer enforces neutrality: no strawman arguments, no unsourced claims, and every advantage/disadvantage must cite a specific metric or feature from the benchmark data.

Inputs: comparison tables from step 1; products and axes.
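
A rough model of the article structure, using the section names above. The type is illustrative, not the prompt's actual output schema:

```typescript
// Illustrative article shape mirroring the sections the writer generates.
interface ComparisonArticle {
  title: string;                 // e.g. "Product A vs Product B"
  introduction: string;
  featureTable: ComparisonTable; // the structured table from step 1
  whenToPick: { product: string; reasons: string[] }[];
  verdict: string;               // every claim cites a metric from the table
  faq: { question: string; answer: string }[];
  lastBenchmarked: string;       // ISO date, surfaced on the published page
}
```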
Step 3. Create social distribution for comparison pages

Generate platform-optimized social posts for each comparison page. Comparison content performs especially well on X (hot takes drive engagement) and Reddit (community members share rigorous comparisons).

The generator creates multiple angles per page: a data-driven post for LinkedIn, a hot-take thread for X, and an 'I tested both, here's what I found' format for Reddit.

Inputs: comparison articles from step 2.
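
One way to model the resulting pack, with the platform list taken from the expected outputs below (the field names are assumptions):

```typescript
// Illustrative shape for the per-page distribution pack.
interface SocialPack {
  pageUrl: string;
  xThread: string[];   // hot-take thread, one entry per post
  linkedin: string;    // single data-driven post
  reddit: string;      // "I tested both, here's what I found" write-up
  hnSubmission: { title: string; url: string };
}
```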

Expected outputs

Structured comparison tables (JSON + Markdown)
Axis-by-product comparison matrix with specific metrics, sources, and highlight markers. Produced by step 1.

Comparison articles (Markdown or TypeScript data)
Complete comparison pages with intro, table, verdict, when-to-pick sections, and FAQ. Produced by step 2.

Social distribution pack (Markdown)
Platform-optimized posts per comparison page: X thread, LinkedIn post, Reddit comment, HN submission. Produced by step 3.

Tool requirements

  • Claude Code
  • Benchmark data (structured or raw)
  • Target competitors or products to compare

Troubleshooting

Comparison tables have empty cells for one product
The product's documentation likely doesn't cover that axis. Mark it as 'not documented' rather than 'N/A': not documented means no public source confirms the capability, while N/A means the axis doesn't apply to that product. Consider reaching out to the vendor or testing manually to fill gaps.
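In the illustrative table shape from step 1, that distinction maps onto the cell status rather than onto a missing value:

```typescript
// "not_documented": the vendor may support this, but no public source confirms it.
const undocumented: ComparisonCell = { product: "Product B", status: "not_documented" };
// "not_applicable": the axis genuinely does not apply to this product.
const notApplicable: ComparisonCell = { product: "Product B", status: "not_applicable" };
```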
Verdict reads as biased toward your product
Remove the verdict and regenerate with the 'neutral_observer' flag in the prompt input. Also have someone outside your team read the comparison — bias is hard to self-detect. The prompt includes a self-audit checklist for this.
Benchmark data is outdated by the time the page publishes
Include a 'last benchmarked' date on every comparison page and set up a quarterly re-benchmark reminder. The comparison table builder can diff old vs new data to highlight what changed.
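If your benchmark data follows the sketch from the input example, a minimal diff could look like this hand-rolled sketch (not a built-in feature of the prompt):

```typescript
// Sketch of a re-benchmark diff: report every product/axis pair whose value
// changed between two snapshots, assuming the BenchmarkRun shape above.
function diffBenchmarks(oldRuns: BenchmarkRun[], newRuns: BenchmarkRun[]): string[] {
  const key = (r: BenchmarkRun) => `${r.product}::${r.axis}::${r.metric}`;
  const previous = new Map(oldRuns.map((r) => [key(r), r.value]));
  const changes: string[] = [];
  for (const run of newRuns) {
    const before = previous.get(key(run));
    if (before !== undefined && before !== run.value) {
      changes.push(`${run.product} / ${run.axis}: ${before} -> ${run.value}`);
    }
  }
  return changes;
}
```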


Run the playbook.

Open each prompt in order, feed the outputs forward, and ship the workflow end-to-end.