I maintain a small open-source project called pubm.
pubm is a tool for complex publish and release workflows. Since the project is still small, I use GitHub issues as my planning system. Every feature idea, rough product thought, and future workflow becomes an
issue.
One of those issues was about release channels (stable, beta, rc, canary, nightly) and how pubm should treat them as first-class release workflows.
Then someone commented.
At first, it felt good. The comment sounded helpful. The person talked about testing, validating workflows, and helping me get better feedback on the project.
I was excited.
That detail matters. Small open-source projects do not get much attention. When someone shows up and says they might help, it is easy to lean forward too quickly. English is not my first language either, so I tend
to give vague wording extra patience.
I replied. Then the conversation continued.
The more I read, the more something felt wrong. The replies were friendly, but broad. They sounded adjacent to the issue without really engaging with the release-channel design problem.
Then the conversation started moving toward Telegram or email.
That was the point where I stopped.
I checked the account more carefully. I saw limited public activity, similar outreach patterns across repositories, and enough public context to make me uncomfortable treating it like a normal contributor
conversation.
This is the public review comment my bot posted on the issue:
https://github.com/syi0808/pubm/issues/36#issuecomment-4364206862
That comment became the first real case study for Get Out Spam.
The problem is suspicious GitHub comments
A suspicious comment is not always dangerous by itself.
The risk starts when the comment opens a trust path.
A maintainer replies. The conversation moves from GitHub to Telegram, email, or another private channel. Later, the discussion may turn into requests for collaborator access, package-publishing permissions, CI access,
test-environment access, or release-workflow help.
For a small open-source project, this can happen with almost no process.
There may be no security team. There may not even be a second maintainer. The issue tracker is the whole process.
I wanted a tool that helps identify suspicious GitHub comments before that trust path starts.
Not a tool that says "this person is bad."
A tool that says:
This comment has public signals worth reviewing before you move off GitHub.
What Get Out Spam does
Get Out Spam is a spam-comment review bot for GitHub maintainers.
More specifically, it is a GitHub App that helps maintainers review suspicious issue comments before replying, moving off-platform, or sharing access.
GitHub App page:
https://github.com/apps/get-out-spam
The current MVP checks public signals such as:
- recently created accounts
- sparse public profile metadata
- broad repository comment activity
- similar public outreach patterns
- off-platform contact requests
- prior public moderation evidence, when available
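To make the idea concrete, here is a minimal sketch of the kind of public-signal review the MVP performs. The field names, thresholds, and wording below are illustrative assumptions, not the real implementation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PublicSignals:
    """Hypothetical container for public signals about a commenting account."""
    account_created: date
    profile_fields_filled: int      # bio, company, location, etc.
    repos_commented_on: int         # distinct repos with recent comments
    asks_offplatform_contact: bool  # mentions Telegram, email, etc.

def review_signals(s: PublicSignals, today: date) -> list[str]:
    """Return neutral, human-readable notes worth reviewing.

    Each note names a public signal; none of them claims intent.
    Thresholds (90 days, 20 repos) are illustrative only.
    """
    notes = []
    account_age_days = (today - s.account_created).days
    if account_age_days < 90:
        notes.append(f"account created {account_age_days} days ago")
    if s.profile_fields_filled == 0:
        notes.append("sparse public profile metadata")
    if s.repos_commented_on > 20:
        notes.append("broad comment activity across many repositories")
    if s.asks_offplatform_contact:
        notes.append("comment requests off-platform contact")
    return notes
```

The point of the shape is that every check yields a note describing a public fact, never a verdict about the person.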
The output is intentionally neutral.
It does not say someone is a scammer. It does not claim intent. It does not auto-block or report anyone.
The recommendation is simple:
Keep the discussion on GitHub and ask for a concrete technical proposal before sharing private access or moving off-platform.
That is the sentence I wish I had seen earlier.
Why it is a review bot, not a verdict bot
A spam-comment review bot can easily become harmful if it speaks too strongly.
New contributors can have sparse profiles. Non-native English speakers can write broad comments. Legitimate contributors sometimes ask for easier communication channels.
So Get Out Spam is designed to help review suspicious comments, not to make final judgments about people.
The bot comment in the pubm issue shows the shape I want:
https://github.com/syi0808/pubm/issues/36#issuecomment-4364206862
It names public signals, links to public evidence where possible, and says clearly that it is not a spam verdict.
That constraint matters. The tool should help maintainers slow down, not encourage public shaming.
The maintainer moment I want to protect
The most dangerous moment is not when a spam comment appears.
It is when the maintainer wants it to be real.
That was me.
A small project got attention. Someone sounded helpful. I wanted to continue the conversation. I almost treated vague outreach as a real contributor path before checking the public context carefully.
Get Out Spam exists to add friction at that exact moment.
A good outcome looks like this:
- A new account comments on an issue.
- The bot checks public comment and account signals.
- If the review level is high enough, it posts a neutral note.
- The maintainer keeps the discussion on GitHub.
- The maintainer asks for a concrete technical proposal before sharing access.
That would have helped me.
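The flow above can be sketched as a small decision step plus a neutral note renderer. The threshold and the note wording are assumptions for illustration, not the app's actual output:

```python
def should_post_review_note(signals: list[str], threshold: int = 2) -> bool:
    """Post a note only when enough independent public signals stack up.

    The threshold of 2 is an illustrative assumption; a single weak
    signal (e.g. a new account) should not trigger a comment on its own.
    """
    return len(signals) >= threshold

def render_note(signals: list[str]) -> str:
    """Render a neutral review note: signals first, disclaimer last."""
    lines = ["Public signals worth reviewing before moving off GitHub:"]
    lines += [f"- {s}" for s in signals]
    lines.append("This is not a spam verdict.")
    return "\n".join(lines)
```

Requiring multiple signals before posting is the design choice that keeps the bot quiet for ordinary new contributors, who often match one signal but rarely several at once.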
Current status
Get Out Spam is still an MVP.
The GitHub App page is public:
https://github.com/apps/get-out-spam
The source repo is still private while I finish the public release checklist, policies, and beta testing. I do not want to publish a tool like this before the wording, correction process, and false-positive
handling are in better shape.
Right now, the app is already posting review comments in my own repo. The public example from pubm is here:
https://github.com/syi0808/pubm/issues/36#issuecomment-4364206862
The project is meant for open-source maintainers, especially small projects that do not have a formal security or contributor review process.