This post is a continuation of the microservice I've been building. You can check out my last post in this series here.
Over the years, I've come across plenty of headlines that turned out to be half-truths or outright hoaxes. Lately, I've also been spending a lot of time practicing microservice development in Go, so I started wondering: why not build something that combines both interests? 🤷
That idea became source-score, a project that aims to rate news sources based on how often the claims they publish turn out to be true. It’s still very early and nowhere near finished 🫣, but I have a demo instance up and running. In this post, I’ll briefly walk through what the project is meant to do, how it currently works under the hood, and where I want to take it next.
TL;DR
Three repos work together to turn YAML documents into a live dashboard:
| Repo | What it does | Key tech |
|---|---|---|
| sources | Stores and validates YAML docs (sources, claims, proofs) | Python, YAML, GitHub Actions |
| source‑score | Go microservice that verifies claims and calculates scores | Go, Gin, REST, Swagger |
| source‑score‑dashboard | Static UI that polls the API and shows sources, claims, proofs | HTML, CSS, vanilla JS |
The demo instance is live at https://satyalens.github.io/source-score-dashboard/.
I've manually added 5 news sources, with 2 claims for each source and 1 proof backing each claim.
The app is deployed on a Render free instance, so it might take a few seconds for it to come online and return data 🙏
The Idea
The model is intentionally simple for now.
A source is a media outlet or information provider. A claim is something that source has published. A proof is another link that either supports or refutes that claim.
Once claims have proofs attached, the API can verify them. Right now, the verification logic is basic: if a claim has more supporting proofs than refuting proofs, it is marked valid. A source score is then calculated like this:
valid checked claims / total checked claims
So if a source has two checked claims, and both are valid, the score is 1.
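To make that rule concrete, here is a minimal sketch of the verification step. The `Proof` type and its `Supports` field are my illustrative shorthand, not the actual OpenAPI-generated models from the repo:

```go
// A proof either supports or refutes the claim it is attached to.
// (Illustrative type; the real service uses OpenAPI-generated models.)
type Proof struct {
	Supports bool
}

// verifyClaim marks a claim valid when supporting proofs outnumber refuting ones.
func verifyClaim(proofs []Proof) bool {
	var supporting, refuting int
	for _, p := range proofs {
		if p.Supports {
			supporting++
		} else {
			refuting++
		}
	}
	return supporting > refuting
}
```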
This is not supposed to be the final credibility algorithm. I wanted the first version to be understandable, testable, and easy to argue with. A simple score gives me something concrete to improve instead of starting with a scoring model that looks impressive but is hard to explain.
The Architecture
The project is split into three parts because I wanted the data, backend, and dashboard to stay separate.
sources: The Data Pipeline
The sources repo stores structured documents under three folders:
sources/
claims/
proofs/
Each document is written in YAML and validated against an OpenAPI schema. A source document describes the outlet. A claim document points back to a source using the source URI digest. A proof document points back to a claim and records whether the proof supports the claim.
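To give a feel for the shapes, here is how I'd sketch the three document types in Go. The field names are my shorthand for illustration; the real schema lives in the OpenAPI spec:

```go
// Illustrative shapes only; the actual fields are defined by the OpenAPI schema.
type SourceDoc struct {
	URI  string `yaml:"uri"`  // the outlet's canonical HTTPS URI
	Name string `yaml:"name"` // human-readable name of the outlet
}

type ClaimDoc struct {
	URI          string `yaml:"uri"`           // link to the published claim
	SourceDigest string `yaml:"source_digest"` // digest of the source URI this claim belongs to
}

type ProofDoc struct {
	URI         string `yaml:"uri"`          // link to the supporting or refuting material
	ClaimDigest string `yaml:"claim_digest"` // digest of the claim URI this proof targets
	Supports    bool   `yaml:"supports"`     // whether the proof supports the claim
}
```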
This repo also has a small CI flow:
- validate new YAML documents on pull requests
- post newly added documents to the API after merge
- refresh claim verification and source scores on demand or after the document post workflow is completed successfully
I like this setup because it makes the dataset reviewable. Instead of manually sending API requests every time I want to add a source or proof, I can add a structured file, validate it, and let GitHub Actions handle the posting step. This repo also acts as a user-friendly interface for someone who is not super technical to add documents.
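The posting step itself boils down to an authenticated POST per document. A rough sketch of what the workflow ends up doing; the endpoint path and content type here are my assumptions, based on the API described below:

```go
import (
	"bytes"
	"fmt"
	"net/http"
)

// postDocument sends one validated document to the API.
// The "/claims" path is assumed for illustration; each document type
// would go to its own endpoint.
func postDocument(apiURL, apiKey string, body []byte) error {
	req, err := http.NewRequest(http.MethodPost, apiURL+"/claims", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-API-Key", apiKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode >= 300 {
		return fmt.Errorf("post failed: %s", resp.Status)
	}
	return nil
}
```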
source-score: The API
The source-score repo is the backend service.
It is a Go API built with Gin, GORM, PostgreSQL, and OpenAPI-generated types. It exposes endpoints for creating and reading sources, claims, and proofs. It also has endpoints to verify claims and calculate source scores.
The main flow looks like this:
source -> claim -> proof -> claim validation -> source score calculation
There are a few technical choices in the first version that I wanted to keep simple:
- HTTPS URIs identify sources, claims, and proofs.
- SHA-256 URI digests are used as stable IDs (see the sketch after this list).
- OpenAPI defines the request shapes (I thank my past self for this choice because now I can use the OpenAPI schema to validate documents before ingesting them).
- The API can be protected with an `X-API-Key` header.
- The dashboard is allowed through CORS for the live demo.
- Unit and acceptance tests cover the main source, claim, and proof flows.
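The digest IDs are straightforward to sketch; whether the service hex-encodes the hash is an assumption on my part:

```go
import (
	"crypto/sha256"
	"encoding/hex"
)

// digestID derives a stable identifier from an HTTPS URI.
func digestID(uri string) string {
	sum := sha256.Sum256([]byte(uri))
	return hex.EncodeToString(sum[:])
}
```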
The scoring logic is intentionally small right now. Checked claims are grouped by source, valid claims are counted, and the source score is updated as a ratio.
You can access the demo instance's Swagger UI, deployed on Render, here. Shout out to Render for helping people like me test their side projects with their free tier.
source-score-dashboard: The Demo UI
The source-score-dashboard repo is a small static dashboard.
No framework. Just HTML, CSS, and JavaScript. The code is mostly AI-generated. I'm frontend-ally disabled, so please cut me some slack here 🥲
It has three pages:
index.html -> sources
claims.html -> claims for one source
proofs.html -> proofs for one claim
The dashboard calls the live API and refreshes every few seconds. Clicking a source opens its claims. Clicking a claim opens the proofs attached to it.
It is plain on purpose. I wanted a working view of the data before spending time on UI polish.
How the Score is Computed
```go
// pkg/domain/source/source_service.go
func (s *Service) RecalculateScore(ctx context.Context, sourceID string) error {
	claims, err := s.repo.ListClaimsForSource(ctx, sourceID)
	if err != nil {
		return err
	}
	var valid, total int
	for _, c := range claims {
		if !c.Checked {
			continue // skip unverified claims
		}
		total++
		if c.Validity {
			valid++
		}
	}
	if total == 0 {
		return nil // no checked claims yet; avoid dividing zero by zero
	}
	score := float64(valid) / float64(total)
	return s.repo.UpdateSourceScore(ctx, sourceID, score)
}
```
Why this matters: The score is a simple ratio, but it gives a quick sanity check. A source with a score of 0.9 has 9 valid claims for every 10 verified claims – a strong signal that the outlet is generally reliable.
What I want to improve next
The next step is to keep adding more sources, claims, and proofs.
After that, the scoring model needs to get smarter. A simple ratio is fine as a prototype, but it does not capture enough nuance. Some claims matter more than others. Some proofs are stronger than others. Some claims are too ambiguous and hard to prove right or wrong conclusively. Sources publish across different topics, and I probably should not treat a sports claim and a geopolitical claim as if they carry the same weight.
Some improvements I want to work on:
- better scoring rules (one exploratory sketch follows this list)
- topic-wise source scores
- richer proof metadata
- easier contribution flow for new YAML documents
- dashboard filtering and sorting
- clearer handling for conflicting proofs
- some score associated with unverified claims
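To make the first item less hand-wavy, here is one purely exploratory shape a weighted score could take. Every field and weight here is hypothetical; none of it exists in the API today:

```go
// Purely exploratory: weight each checked claim by how much it matters
// and how strong its proofs are.
type weightedClaim struct {
	Valid       bool
	Importance  float64 // hypothetical: how consequential the claim is
	ProofWeight float64 // hypothetical: aggregate strength of its proofs
}

func weightedScore(claims []weightedClaim) float64 {
	var earned, possible float64
	for _, c := range claims {
		w := c.Importance * c.ProofWeight
		possible += w
		if c.Valid {
			earned += w
		}
	}
	if possible == 0 {
		return 0 // no weighted claims yet
	}
	return earned / possible
}
```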
For now, Source Score is a personal side project born out of a very practical annoyance: I want a better way to know which news sources I should trust.
Maybe, over time, it can help others wondering the same thing.


