Benchmark & Comparison Playbook
Build data-driven comparison pages — from benchmark tables to full comparison articles to social distribution.
Why this playbook
Comparison pages are the highest-converting content type for B2B and developer products. They capture searchers who are actively evaluating options — the highest-intent traffic you can get from organic search.
But most comparison pages are either biased (they straw-man the competitor) or lazy (surface-level feature lists). This playbook produces rigorous, data-backed comparisons that readers trust and share.
The three-step workflow: build structured benchmark data, generate comparison articles from that data, then distribute. Data-first means your comparisons age well — update the benchmark data and regenerate.
Prerequisites
- Claude Code installed
- Benchmark data, or access to test the products yourself
- List of competitors/products to compare against
Input requirements
| Input | Type | Required | Description |
|---|---|---|---|
| Products to compare | string[] | Yes | The products, platforms, or tools to compare. Minimum 2, maximum 5 for a single comparison page. |
| Benchmark data | JSON, CSV, or raw notes | No | Structured benchmark results if available. If not provided, the comparison builder generates a testing plan you can execute manually. |
| Comparison axes | string[] | No | Specific dimensions to compare on (e.g., 'pricing', 'API coverage', 'latency', 'ecosystem'). If not specified, the builder selects axes from the product category. |
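The inputs above can be combined into a single structured file before step 1. A minimal sketch in Python of one possible shape (the product names, axes, metrics, and schema are all illustrative assumptions, not a format the playbook prescribes):

```python
# Hypothetical benchmark input: each axis maps each product to a
# specific measured value plus its source, never a bare yes/no.
benchmark = {
    "products": ["ProductA", "ProductB"],  # 2-5 products per page
    "axes": {
        "p95 latency (ms)": {
            "ProductA": {"value": 42, "source": "internal load test, n=500"},
            "ProductB": {"value": 61, "source": "internal load test, n=500"},
        },
        "pricing (USD/mo, team tier)": {
            "ProductA": {"value": 49, "source": "pricing page, 2024-05"},
            "ProductB": {"value": 79, "source": "pricing page, 2024-05"},
        },
    },
}

def validate(data):
    """Return (axis, product) pairs missing a sourced value."""
    missing = []
    for axis, cells in data["axes"].items():
        for product in data["products"]:
            cell = cells.get(product)
            if not cell or "value" not in cell or not cell.get("source"):
                missing.append((axis, product))
    return missing

# An empty list means the comparison matrix is complete.
print(validate(benchmark))  # -> []
```

A check like this catches empty cells before the comparison writer ever sees the data.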
Step-by-step workflow
Build structured comparison tables from benchmark data
This prompt takes raw benchmark data (or product documentation) and generates structured comparison tables. Each row is a comparison axis, each column is a product, and every cell includes the specific metric, not just 'yes/no'.
If you don't have benchmark data, the prompt generates a testing protocol: what to measure, how to measure it, and what sample size gives statistically meaningful results.
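The "statistically meaningful sample size" the testing protocol asks for can be approximated with a standard two-sample power calculation. A sketch assuming a normal approximation, roughly 5% significance, and 80% power (the z constants are the textbook values; everything else is illustrative):

```python
import math

def sample_size_per_group(effect_size, z_alpha=1.96, z_beta=0.84):
    """Approximate runs per product needed to detect a difference of
    `effect_size` standard deviations between two product means,
    at ~5% two-sided significance and ~80% power."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Detecting a "medium" difference (0.5 SD) between two products:
print(sample_size_per_group(0.5))  # -> 63 runs per product
```

In practice this means a handful of benchmark runs per product is rarely enough; budget dozens of runs per product for anything but very large differences.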
Generate full comparison articles
Feed the structured tables from step 1 into the comparison writer. It generates complete comparison articles: introduction, feature table, 'when to pick A/B' sections, verdict, and FAQ.
The writer enforces neutrality: no strawman arguments, no unsourced claims, and every advantage/disadvantage must cite a specific metric or feature from the benchmark data.
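The "every claim must cite a metric" rule can also be enforced mechanically on the writer's output. A sketch that flags advantage claims not backed by an axis in the benchmark data (the claim tuple format and axis names are assumptions for illustration):

```python
def uncited_claims(claims, benchmark_axes):
    """Each claim is (statement, cited_axis). Flag any claim that is
    uncited, or that cites an axis absent from the benchmark data."""
    return [
        statement
        for statement, axis in claims
        if axis is None or axis not in benchmark_axes
    ]

axes = {"p95 latency (ms)", "pricing (USD/mo, team tier)"}
claims = [
    ("ProductA responds faster under load", "p95 latency (ms)"),
    ("ProductA feels more polished", None),  # unsourced -> flagged
]
print(uncited_claims(claims, axes))  # -> ['ProductA feels more polished']
```

Anything the check flags either gets a metric attached or gets cut before publishing.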
Create social distribution for comparison pages
Generate platform-optimized social posts for each comparison page. Comparison content performs especially well on X (hot takes drive engagement) and Reddit (community members share rigorous comparisons).
The generator creates multiple angles per page: a data-driven post for LinkedIn, a hot-take thread for X, and an 'I tested both, here's what I found' format for Reddit.
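Each platform variant has to respect that platform's length limits before it ships. A sketch with commonly cited limits (the numbers are assumptions and change over time, so verify them against each platform's current rules):

```python
# Commonly cited length limits; treat these as assumptions, not facts.
LIMITS = {"x": 280, "linkedin_post": 3000, "reddit_title": 300}

def fits(platform, text):
    """Return (ok, chars_used, chars_allowed) for a draft post."""
    limit = LIMITS[platform]
    return (len(text) <= limit, len(text), limit)

draft = ("Benchmark & Comparison Playbook. Build data-driven "
         "comparison pages, from benchmark tables to social posts.")
print(fits("x", draft))
```

Running every generated variant through a check like this keeps the distribution step from producing posts that get truncated on publish.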
Expected outputs
Structured comparison tables
Axis-by-product comparison matrix with specific metrics, sources, and highlight markers.
Produced by step 1.

Comparison articles
Complete comparison pages with intro, table, verdict, when-to-pick sections, and FAQ.
Produced by step 2.

Social distribution pack
Platform-optimized posts per comparison page: X thread, LinkedIn post, Reddit comment, HN submission.
Produced by step 3.

Tool requirements
- Claude Code
- Benchmark data (structured or raw)
- Target competitors or products to compare
Troubleshooting
- Comparison tables have empty cells for one product
- Verdict reads as biased toward your product
- Benchmark data is outdated by the time the page publishes
Share as social post
Benchmark & Comparison Playbook. 3 steps, 60-90 minutes. Build data-driven comparison pages — from benchmark tables to full comparison articles to social distribution. https://agntdot.com/playbooks/benchmark-comparison-playbook