Deterministic headline scoring · compare mode · public calibration log

Score the headline
before you ship the post.

Start with one headline, compare two options, then ship the stronger one. ContentForge gives you a grade, specific feedback, and rewrite help before you publish. Same input, same score, every time.

Try Headline Scorer → Add to Chrome → View Calibration
🧩

Chrome extension is live on the Web Store! 🎉

Score badges on Twitter/X, LinkedIn, Instagram, Threads, and Facebook while you write. Free to install, and it works immediately.

Add to Chrome →

See the Difference

Start with a weak headline, rewrite it, then check whether the new version actually earns the pass.

Before
"Tips for better landing pages"
Score: 52   Grade: D   REVIEW
Too generic · no number · no specific promise
After
"7 landing page headline fixes that lifted demo requests 32%"
Score: 82   Grade: B   PASSED   +30 points
Specific number · outcome driven · clearer promise

Public Calibration Log

When the scorer gets a ranking right, the evidence lands here. When it misses, the miss is logged too.

Cohorts Tested
0
Historical draft groups scored against real outcomes
Drafts Scored
0
Rows processed by the calibration harness
Top Pick Accuracy
0%
How often the top predicted draft matched the top real outcome
Average Correlation
n/a
Rank correlation between score and outcome inside each cohort
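The average-correlation metric above can be sketched directly. This is a minimal Spearman rank correlation for one cohort, assuming no tied values; the real harness may use average ranks for ties:

```python
def rank(values):
    """Rank positions (1 = highest). Ties keep input order -- a
    simplification; full Spearman averages tied ranks."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        ranks[i] = pos
    return ranks

def spearman(scores, outcomes):
    """Rank correlation between predicted scores and real outcomes
    in one cohort: +1.0 means the scorer ordered the drafts exactly
    as the outcomes did, -1.0 means it ordered them backwards."""
    n = len(scores)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(scores), rank(outcomes)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Three drafts: the scorer's ranking matches the real-outcome ranking.
print(spearman([82, 65, 52], [120, 90, 40]))  # 1.0
```

A cohort average near 1.0 would mean the scorer consistently orders drafts the way real outcomes do.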

Early Launch Notes

Before the public calibration log fills up, the launch itself is already teaching us what people trust and what they ignore.

Signal · r/webdev

Blind Taste Test announcement

Score: 5
Comments: 6
Views: ~3.8K
Takeaway: proof ask
The strongest response came when the uncertainty was explicit and the calibration challenge gave people a concrete role.
Signal · r/SideProject

Broad API feature pitch

Score: 2
Comments: 1
Views: 226
Takeaway: too broad
The first impression was doing too many jobs at once. The project read as a platform before it read as one useful tool.
Signal · r/webdev

Technical explainer without fast proof

Score: 0
Upvote Ratio: 0.14
Takeaway: show usage
Fix: compare first
Explainability matters, but the extension and the before/after workflow are easier to trust than architecture on first contact.

Developer Access

The landing page should stay focused on the scorer, but the technical path is still here if you want to wire it into your own workflow.

⚔️

A/B test drafts with the API

Use the compare and scoring endpoints when you want to rank drafts in code instead of by hand. The API is secondary on this page, but it remains the path for automation and paid usage.
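In code, ranking two drafts reduces to one request. The field names below ("platform", "drafts") and the idea of a single compare endpoint are illustrative assumptions for this sketch, not the documented contract; check the RapidAPI listing for the real schema:

```python
def build_compare_payload(draft_a: str, draft_b: str,
                          platform: str = "twitter") -> dict:
    """Assemble a request body asking the API to rank two drafts.

    Field names here are assumptions for this sketch, not the
    documented schema.
    """
    return {"platform": platform, "drafts": [draft_a, draft_b]}

payload = build_compare_payload(
    "Tips for better landing pages",
    "7 landing page headline fixes that lifted demo requests 32%",
)
# POST this as JSON to the compare endpoint with your RapidAPI key in
# the headers, then ship whichever draft the response ranks highest.
```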

🧪

Self-host and inspect the rules

The deterministic scoring logic is open source, readable, and runnable locally. If the core promise is “same input, same score,” the source needs to stay auditable.

RapidAPI Listing · View Source

The Deterministic Advantage

Interpretable content intelligence — every deduction has a traceable rule, every score has an audit trail.

📐

A Digital Ruler, Not a Black Box

A ruler doesn't need a dataset to measure 12 inches — it just needs to be calibrated. ContentForge's heuristic engine gives the same score to the same input, every time. No variance. No hallucinations. Every deduction is itemised and traceable to a specific rule.

<50ms — No LLM in the Scoring Path

All 12 platform scorers are pure Python heuristics. No network calls, no model inference, no API quota consumed. AI (Ollama or Gemini) is reserved for generation endpoints — rewrites, hooks, subject lines.

🔬

White-Box Scoring — Fully Auditable

The scoring logic is on GitHub. You can trace exactly why a post scored 74 and not 83 — every signal is a readable Python condition. When a client asks "why did this fail the quality gate?", you have a defensible answer, not "the AI said so."
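The shape of that audit trail can be sketched in a few lines. The two rules below come straight from the on-page hints (30–80 characters is optimal; add a number or question), but the point values are invented for illustration and are not ContentForge's actual weights:

```python
def score_headline(text: str) -> tuple[int, list[str]]:
    """Start at 100 and itemise every deduction as a named rule.

    Rules mirror the on-page hints; the weights are illustrative only.
    """
    score, deductions = 100, []
    if not (30 <= len(text) <= 80):
        score -= 20
        deductions.append("length outside 30-80 chars: -20")
    if not (any(ch.isdigit() for ch in text) or "?" in text):
        score -= 15
        deductions.append("no number or question: -15")
    return score, deductions

# Same input, same score, every time -- and every point lost is named.
score, why = score_headline("Tips for better landing pages")
print(score)  # 65
print(why)    # both rules fired, each with its deduction spelled out
```

Because every deduction is a plain condition, answering "why 65 and not 80?" is a matter of reading the returned list, not re-prompting a model.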

Try It Live

Instant heuristic scoring in the browser. Score one draft or compare two before you publish.

Score Your Content

Headline · 30–80 chars is optimal · add a number or question

⚔️ A/B Compare

Paste two drafts and see which one wins across platforms. The current defaults mirror an early launch lesson: concrete proof beats broad product description.

Why Not Just Use ChatGPT?

Ask GPT to score the same tweet twice. You'll get two different answers. That's the problem.

LLM-based scoring
score("Great tweet!") → 74
score("Great tweet!") → 79
score("Great tweet!") → 71
1.2s avg · $0.003/call · different every time
ContentForge
score("Great tweet!") → 42
score("Great tweet!") → 42
score("Great tweet!") → 42
18ms · free · same answer forever

Use the scorer first

Try the live compare workflow, inspect the rules, then decide whether you need the API or the extension.

Try The Live Scorer →
Developer Access on RapidAPI

Questions? captainarmoreddude@gmail.com

🔒
No data stored
Your content is scored and discarded. We never log, store, or sell your text.
🛡️
HTTPS only
All API traffic encrypted in transit. No plaintext. No exceptions.
📖
Open source
Scoring logic is public on GitHub. Audit it yourself.
🔑
No account needed
RapidAPI handles auth. We never see your password or payment info.

Frequently Asked Questions

Is ContentForge free?

The live scorer on this page is free to try. The API is available separately through RapidAPI if you want programmatic access and higher-volume usage.

Do you store or read my content?

No. Text is scored in memory and discarded immediately. We do not log, store, or analyze your content after the response is sent. See our Terms of Use.

Why is the first request slow?

ContentForge runs on Render's free tier, which puts the server to sleep after inactivity. The first request wakes it up (~15-30 seconds). Subsequent requests are fast (<500ms). Paid infrastructure is on the roadmap.

What platforms does the Chrome extension support?

The extension is live on the Chrome Web Store. It shows a floating score badge on Twitter/X, LinkedIn, Instagram, Threads, and Facebook while you write. You can also use the popup on any page to score content for all supported platforms.

Is the scoring AI or rule-based?

The instant scoring endpoints use deterministic rule-based analysis (no LLM, no latency, no hallucinations). The AI endpoints use Ollama-backed language models for content generation, rewriting, and deep analysis.

Can I still use the API?

Yes. The API still exists for compare mode, scoring, and automation. It is just no longer the headline offer on this page while the core scorer is still proving itself.

Can I see the scoring logic?

Yes. ContentForge is open source: github.com/CaptainFredric/ContentForge. Every scoring rule is public and auditable.

What did early launch feedback actually show?

The clearest pattern was that public calibration and before/after comparison earned more trust than a long feature list. The live notes are tracked in docs/reddit-launch-notes.md, and they are already shaping the product narrative.