🎮 A Twitch bot that allows users to send ChatGPT commands through Twitch chat.
🧠 Built with Python 3.13, Twitch EventSub WebSockets, and the OpenAI API.
💾 Repo: https://github.com/swallace100/ChatGPT-Powered-Twitch-Bot-With-Logging
Intro
Many Twitch channels have offline chats where viewers hang out and message each other while the streamer is not live. I decided to build a chatbot for those offline chats that lets users send commands to ChatGPT through Twitch chat.
The app listens to messages through Twitch’s EventSub WebSocket API, uses the OpenAI API to generate creative responses, and logs everything per channel and day for review.
With minor adjustments, it could be used when the streamer is online. I know many streamers try to keep bot messages to a minimum, though, so it defaults to offline only.
In this post, I will walk you through how it works, how to set it up, and lessons learned.
Tech Stack
• Language: Python 3.13
• APIs: Twitch EventSub WebSocket, Helix REST, OpenAI API (GPT-4)
• Libraries: asyncio, requests, websockets, python-dotenv, openai
• Tools: Makefile, pre-commit, pytest, ruff
• Persistence: Local logs
Everything runs locally. No hosted backend is required.
Architecture Overview
bot/
app.py # main entrypoint
eventsub_bot.py # Twitch EventSub WebSocket handling
commands/ # command registry and built-in handlers
services/ # OpenAI + logging helpers
handlers.py # routes messages to command handlers
resources/appSettings.env
Message flow:
Twitch → EventSub WebSocket → Command Registry → OpenAI → Twitch Helix (chat message)
All chat messages are timestamped and logged to disk.
Command System
The bot’s CommandRegistry routes messages by prefix ($ by default).
Example: a Twitch user types $joke, triggering:
async def joke(self, ctx, arg):
    # Remind the model of recent jokes so it does not repeat itself
    banlist = self._join_recent(self.recent_jokes)
    prompt = (
        "Tell one short, original, Twitch-friendly joke. "
        "Avoid repeating these recent ones:\n" + banlist
    )
    joke = self.ai.chat(prompt)
    if joke:
        self.recent_jokes.append(joke)
    return joke
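Here self.ai.chat is a thin wrapper around the OpenAI SDK. A minimal sketch of what such a wrapper can look like (the class name and defaults are mine, not lifted from the repo):

from openai import OpenAI

class AIClient:
    """Minimal wrapper around OpenAI chat completions."""

    def __init__(self, model="gpt-4"):
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.model = model

    def chat(self, prompt):
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=200,
        )
        return resp.choices[0].message.content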
Built-in commands include $about, $inputs, $story, $trivia, $touchgrass, and $image <description> for AI-generated pictures.
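Under the hood, a registry like this boils down to a dict that maps command names to async handlers, plus a dispatch step that strips the prefix and splits off the argument. A stripped-down sketch of that pattern (the repo's actual class will differ in detail):

from collections.abc import Awaitable, Callable

Handler = Callable[..., Awaitable[str | None]]

class CommandRegistry:
    """Maps command names (without the prefix) to async handlers."""

    def __init__(self, prefix="$"):
        self.prefix = prefix
        self._handlers: dict[str, Handler] = {}

    def register(self, name, handler):
        self._handlers[name] = handler

    async def dispatch(self, ctx, text):
        if not text.startswith(self.prefix):
            return None
        name, _, arg = text[len(self.prefix):].partition(" ")
        handler = self._handlers.get(name.lower())
        if handler is None:
            return None
        return await handler(ctx, arg.strip())

Handlers register once at startup (something like registry.register("joke", bot.joke)), and dispatch runs for every prefixed chat line.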
Twitch Integration
Instead of the older IRC chat interface, this bot uses EventSub WebSockets to listen for channel.chat.message events:
r = requests.post(
    f"{HELIX}/eventsub/subscriptions",
    headers=self._headers,     # Client-Id and Authorization (Bearer) headers
    data=json.dumps(payload),  # subscription type, version, condition, and WebSocket transport
)
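The session_id this subscription payload needs comes from the WebSocket itself: Twitch sends it in a welcome frame right after the connection opens. A simplified sketch of that handshake and read loop with the websockets library (the callback names here are placeholders, not the repo's):

import json
import websockets

EVENTSUB_WS = "wss://eventsub.wss.twitch.tv/ws"

async def run_eventsub(subscribe, on_chat_message):
    """Connect, subscribe using the session id, then forward chat events to a callback."""
    async with websockets.connect(EVENTSUB_WS) as ws:
        welcome = json.loads(await ws.recv())
        session_id = welcome["payload"]["session"]["id"]
        subscribe(session_id)  # e.g. the requests.post call shown above

        async for raw in ws:
            msg = json.loads(raw)
            if msg["metadata"]["message_type"] == "notification":
                await on_chat_message(msg["payload"]["event"])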
When Twitch sends a new message event, the bot processes it asynchronously, calls OpenAI for a response, and posts it back through Helix’s /chat/messages endpoint.
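Posting the reply is one more Helix call. A rough sketch of that last step (the endpoint and field names follow Twitch's documented send-chat-message API; the helper itself is illustrative, not copied from the repo):

import requests

HELIX = "https://api.twitch.tv/helix"

def send_chat_message(text, broadcaster_id, bot_user_id, client_id, access_token):
    """Post a chat message back to the channel via Helix."""
    r = requests.post(
        f"{HELIX}/chat/messages",
        headers={
            "Client-Id": client_id,
            "Authorization": f"Bearer {access_token}",
        },
        json={
            "broadcaster_id": broadcaster_id,  # channel receiving the message
            "sender_id": bot_user_id,          # the bot account's user ID
            "message": text[:500],             # Twitch caps chat messages at 500 characters
        },
    )
    r.raise_for_status()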
Logging
Every message is saved under:
logs/<channel>/<YYYY-MM-DD>/<YYYY-MM-DD>.txt
Example line:
2025-10-07 13:22:54 user123: $trivia space
Logs make it easy to analyze chat trends or replay conversations later.
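The writer behind those files is small. A minimal sketch of the idea (function and parameter names are illustrative, not the repo's):

from datetime import datetime
from pathlib import Path

def log_message(channel, user, text, root="logs"):
    """Append a timestamped chat line to logs/<channel>/<YYYY-MM-DD>/<YYYY-MM-DD>.txt."""
    now = datetime.now()
    day = now.strftime("%Y-%m-%d")
    log_dir = Path(root) / channel / day
    log_dir.mkdir(parents=True, exist_ok=True)
    with open(log_dir / f"{day}.txt", "a", encoding="utf-8") as f:
        f.write(f"{now:%Y-%m-%d %H:%M:%S} {user}: {text}\n")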
Environment Setup
- Copy and edit your config:
cp resources/appSettings.env_example resources/appSettings.env
- Fill in TWITCH_CLIENT_ID, then run:
python get_tokens.py
This launches Twitch’s Device Flow to populate access tokens.
- Install and run:
make install-dev
make run
The bot connects automatically and starts logging messages.
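At startup, the settings are loaded with python-dotenv, roughly like this (only TWITCH_CLIENT_ID is confirmed above; OPENAI_API_KEY is an assumed name, so treat appSettings.env_example as the source of truth):

import os
from dotenv import load_dotenv

# Load credentials from the env file created in the previous step
load_dotenv("resources/appSettings.env")

client_id = os.getenv("TWITCH_CLIENT_ID")  # confirmed key from the setup steps
openai_key = os.getenv("OPENAI_API_KEY")   # assumed name; check the example file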
Lessons Learned
• EventSub WebSockets are cleaner and more future-proof than the older IRC interface.
• Proper async design prevents dropped messages under high chat volume.
• ChatGPT initially repeated the same responses over and over, so I added a response history to each prompt so the model knows which answers it has already given and avoids repeating them.
• A simple 10-second cooldown timer prevents spam effectively.
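That cooldown is little more than a per-command timestamp check. A sketch under that assumption (names are illustrative):

import time

COOLDOWN_SECONDS = 10
_last_used = {}  # command name -> last time it ran

def on_cooldown(command):
    """Return True if the command ran within the last COOLDOWN_SECONDS."""
    now = time.monotonic()
    if now - _last_used.get(command, 0.0) < COOLDOWN_SECONDS:
        return True
    _last_used[command] = now
    return False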
Repository + License
📂 Full source: https://github.com/swallace100/ChatGPT-Powered-Twitch-Bot-With-Logging
⚖️ License: MIT