I made progress on that MUD I’ve been working on.
Try it out (yea, naming things is hard):
```bash
git clone -b v0.1.0_share https://gitlab.com/modle13/queue-mud.git
cd queue-mud
docker-compose up
```
Over the weekend, I got it working nearly back to the state it was in before dockerizing. There’s a bit less logic (the creature decision point just returns a string), and the Angular frontend is still disabled. I had had a publisher and consumer working back in early January, but now it has the whole chain: command > validate > attack (if the command is an attack and there’s a target) > response queue.
The main problem I solved was how to have separate queues for the different decision points. I also solved the problem of getting gunicorn and python to output stdout and stderr properly to the docker-compose logs (check out the Dockerfile CMD flags: `-u` for python and `"--access-logfile", "-", "--error-logfile", "-"` for gunicorn). I learned some things about naming services and containers, querying the RabbitMQ API, `.env` files for docker-compose, and using custom python modules across multiple containers.
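For context, the relevant CMD lines look roughly like this; the file names, port, and app module are illustrative placeholders, not necessarily what's in the repo.

```dockerfile
# Consumer containers (illustrative): -u turns off python's output buffering,
# so prints show up in `docker-compose logs` immediately.
CMD ["python", "-u", "consumer.py"]

# Ingester container (illustrative): "-" routes gunicorn's access and error
# logs to stdout/stderr, which docker-compose then collects.
CMD ["gunicorn", "--access-logfile", "-", "--error-logfile", "-", "--bind", "0.0.0.0:5000", "app:app"]
```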
After I added the 2nd and 3rd queues, I had a strong urge to abstract the boilerplate into a module (that’s what modules are for: extensible boilerplate :D), so I did that and it was glorious. I added a module for publishing too, and split the responsibility of the publisher flask app into an ingester (flask `POST` and `status` endpoints only) and a publisher module (simple python, used in all decision points and the ingester). This way I don’t have to worry about the boilerplate at all. I just set the `queue` and `exchange` parameters and a custom `on_message` function when initializing a decision point to set up a new queue, and pass the name of the target exchange along with the message to the publisher module.
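The shared module itself isn't inlined in this post, so here's a rough sketch of the shape I'm describing, using pika; the names (`QueueConsumer`, `publish`), the fanout exchange type, and the `rabbitmq` hostname are illustrative assumptions, not the repo's actual API.

```python
# consumer_module.py -- illustrative sketch of the shared boilerplate, not the
# repo's actual code. A decision point only supplies queue, exchange, and an
# on_message callback; everything else lives here.
import json

import pika


class QueueConsumer:
    def __init__(self, queue, exchange, on_message, host="rabbitmq"):
        self.queue = queue
        self.on_message = on_message
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
        self.channel = self.connection.channel()
        # Fanout keeps the sketch simple: no routing keys to worry about.
        self.channel.exchange_declare(exchange=exchange, exchange_type="fanout")
        self.channel.queue_declare(queue=queue, durable=True)
        self.channel.queue_bind(queue=queue, exchange=exchange)

    def _callback(self, channel, method, properties, body):
        self.on_message(json.loads(body))
        channel.basic_ack(delivery_tag=method.delivery_tag)

    def run(self):
        self.channel.basic_consume(queue=self.queue, on_message_callback=self._callback)
        self.channel.start_consuming()


def publish(exchange, message, host="rabbitmq"):
    # Shared publisher: callers pass the target exchange name and a dict.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    channel.exchange_declare(exchange=exchange, exchange_type="fanout")
    channel.basic_publish(exchange=exchange, routing_key="", body=json.dumps(message))
    connection.close()
```

With something like that in place, a decision point boils down to `QueueConsumer(queue="validate", exchange="commands", on_message=validate_command).run()`, and forwarding a message is a single `publish("attacks", payload)` call (again, hypothetical names).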
The key focus today was cleaning it up so using it sucked less. Writing the shared modules was a big step. Now new modules only need to set the two parameters and override the `on_message` method. I should probably do this with inheritance, but this’ll do for now. I cleaned up the `README`s and listed required and optional system dependencies (only `docker-ce` and `docker-compose` are required). I added a tmuxinator profile (so cool) that spins up the custom `queue_watcher` script (queries the RabbitMQ API and parses out some data using python), the log tailer (the messages output by the response queue consumer), docker-compose (the thing actually running the game), and a verification step (`verify.sh`, which `POST`s a few example values to the ingester and checks the `status` endpoint).
The `status` endpoint also has an unnecessary feature. It uses a goodreads API to pull random George Orwell quotes. Here’s the module. Don’t make fun of the `try`/`except`.
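The module itself is linked rather than inlined, so as a stand-in here's the general shape of that pattern: a nice-to-have lookup wrapped so it can never take the `status` endpoint down. The quotes URL and function name below are placeholders, not the actual goodreads endpoint or the repo's code.

```python
# orwell_quotes.py -- illustrative stand-in for the linked module.
import random

import requests

QUOTES_URL = "https://example.com/api/quotes?author=George+Orwell"  # placeholder
FALLBACK = "It was a bright cold day in April, and the clocks were striking thirteen."


def random_quote():
    # The try/except in question: if the quote lookup fails for any reason,
    # the status endpoint still gets something to return.
    try:
        quotes = requests.get(QUOTES_URL, timeout=2).json()
        return random.choice(quotes)
    except Exception:
        return FALLBACK
```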
The next steps are to figure out how to handle world state and how to manage creatures; both likely have the same set of problems. Currently responses just get dumped to the volume-mounted log file. This is fine for now (it could be playable locally), but eventually there will need to be size considerations. It would also be interesting to be able to replay the history, and for that something a little smarter would be needed.