How scores are calculated
Every ranking on this site is produced by a single, transparent formula. No hidden weights, no paid boosts in the ranking, no editorial thumb-on-the-scale beyond one capped input. Here's exactly what goes in.
The five signals
For each approved listing in a category, we gather five signals, normalize them, weight them, and sum them into a single score on a 0–10 scale.
1. Community votes
(rolling 30 days) Count of votes cast for this listing in its category over the last 30 days. One vote per person per day (deduped by a daily voter hash). Votes from authenticated GitHub users and anonymous visitors both count.
2. GitHub stars
(refreshed hourly) Pulled directly from the GitHub API for any listing that has a GitHub repo linked. Sites without a repo are excluded from the stars calculation entirely — they're not penalized for missing it, and they don't drag the normalization floor down for repos that do have one.
3. Traffic
(rolling 30 days) Real outbound clicks from this site to the listing's URL, summed over the last 30 days. No external traffic-estimation API is involved — it's measured from our own audience, which is exactly the audience the ranking is for.
4. Recency
(half-life 60 days) Exponential decay from the last GitHub release (falling back to the last push to the default branch). A fresh release scores 1.0, a 60-day-old release scores 0.5, a 4-month-old release scores 0.25, and a year-old release is effectively zero. Only applies to listings with a GitHub repo.
5. Editorial score
(0–10, manual) A capped 0–10 rating set by our editors to reflect quality signals the other four metrics can't capture — documentation quality, maintainership, real-world reliability. Default is 5/10 for every new listing; we only move it when there's a clear reason to.
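The one-vote-per-person-per-day dedup described under community votes can be sketched as follows. This is a minimal illustration, not our production code: the hash inputs, SHA-256 choice, and function names here are assumptions; the only thing the source guarantees is that votes are deduped per voter per day.

```python
import hashlib
from datetime import date, datetime, timezone

def daily_voter_hash(voter_id: str, listing_id: str, day: date) -> str:
    # One stable hash per voter + listing + calendar day (hypothetical scheme).
    payload = f"{voter_id}:{listing_id}:{day.isoformat()}".encode()
    return hashlib.sha256(payload).hexdigest()

def record_vote(seen: set, voter_id: str, listing_id: str, day: date = None) -> bool:
    # Count the vote only if this voter hasn't already voted for this
    # listing today; returns True when the vote counted.
    day = day or datetime.now(timezone.utc).date()
    h = daily_voter_hash(voter_id, listing_id, day)
    if h in seen:
        return False
    seen.add(h)
    return True
```

A second vote from the same voter on the same day is a no-op, but the same voter can vote again the next day.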
How they combine
Inside each category, the five signals are min-max normalized to a 0–1 range, so the formula compares listings against their peers rather than across unrelated categories. Each category has its own weights — stars matter more for dev tools, votes matter more for vibe-coded projects.
score =
      w_votes     · normalize(votes_30d)
    + w_stars     · normalize(github_stars)          // github sites only
    + w_traffic   · normalize(outbound_30d)
    + w_recency   · 0.5 ^ (days_since_release / 60)  // github sites only
    + w_editorial · normalize(editorial_0_10)

The final score is rescaled by the sum of the weights that actually applied to that listing, then multiplied by 10 for display. A listing with no GitHub repo therefore competes on three signals instead of five without being silently demoted, and the result is always on a consistent 0–10 scale.
What doesn't move the ranking
- Paid sponsor slots. Sponsored listings are surfaced above the ranking in a separate, clearly labeled section; they don't alter any listing's score.
- Submission recency. A brand-new submission starts with the same neutral editorial score as everything else; it wins or loses on the other four signals like any listing.
- Click-throughs from paid ads or referrals from spammers. If a domain's clicks spike inorganically, we review it manually.
Refresh cadence
Signals are pulled hourly. Rankings recompute every hour and on every new vote. All data is rolling: a listing that stops shipping releases, drawing traffic, or collecting votes will slide down; a newcomer that does the opposite will climb.
Spot something wrong with a score or the weights for a category? Drop us a note — the whole ranking is designed to be auditable, including by the people whose projects are on it.