The Chinese Open-Source Model That Draws Pelicans Better Than GPT-4o
GLM-5.1 just landed from Z.ai - a 754B parameter, 1.51TB, MIT-licensed model that is free on Hugging Face and available via OpenRouter.
The model is the same size as their previous GLM-5 release, but something changed in how it handles creative tasks.
Simon Willison ran his pelican test (asking models to generate an SVG of a pelican riding a bicycle). Most models produce static graphics. GLM-5.1 did something unexpected: it generated a full HTML page with CSS animations.
The pelican's beak has a wobble animation. The wheels spin. It's not perfect - the animation broke positioning on the first try - but when prompted to fix it, the model correctly diagnosed the problem:
"The issue is that CSS transform animations on SVG elements override the SVG transform attribute used for positioning, causing the pelican to lose its placement and fly off to the top-right. The fix is to separate positioning (SVG attribute) from animation (inner group) and use animateTransform for SVG rotations since it handles coordinate systems correctly."
And then it fixed it.
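The pattern the model described can be sketched in a few lines of markup. This is illustrative only, not GLM-5.1's actual output: an outer group carries positioning via the SVG transform attribute, an inner group carries the CSS animation so the two never fight, and a SMIL animateTransform shows the alternative that works natively in SVG coordinates.

```html
<!-- Illustrative sketch, not the model's real output. -->
<svg viewBox="0 0 200 200" xmlns="http://www.w3.org/2000/svg">
  <style>
    /* Animating the CSS transform property on the SAME element that
       uses a transform attribute would override that attribute and
       throw the shape back to the origin - the bug described above. */
    .wobble {
      animation: wobble 1s ease-in-out infinite alternate;
      transform-origin: center;
      transform-box: fill-box;
    }
    @keyframes wobble {
      from { transform: rotate(-6deg); }
      to   { transform: rotate(6deg); }
    }
  </style>

  <!-- Outer group: positioning only, via the SVG transform attribute -->
  <g transform="translate(120 80)">
    <!-- Inner group: animation only, so the attribute stays untouched -->
    <g class="wobble">
      <polygon points="0,0 30,5 0,10" fill="orange"/> <!-- the beak -->
    </g>
  </g>

  <!-- Alternative for rotations: SMIL animateTransform, which rotates
       in the element's own SVG coordinate system -->
  <g transform="translate(60 150)">
    <circle r="20" fill="none" stroke="black"/>
    <line x1="-20" y1="0" x2="20" y2="0" stroke="black">
      <animateTransform attributeName="transform" type="rotate"
                        from="0" to="360" dur="2s"
                        repeatCount="indefinite"/>
    </line>
  </g>
</svg>
```

The key design choice is the nesting: CSS transforms and the SVG transform attribute share one slot per element, so splitting positioning and animation across two groups is the standard way to keep them from clobbering each other.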
This is different from most open-weight releases. The Chinese AI ecosystem has been catching up on benchmarks, but GLM-5.1 shows competence in a domain few models touch: understanding that graphics exist in a rendering context, not just as static output.
The pelican test is not just cute. It's a proxy for whether a model understands that code runs somewhere - that SVG has coordinate systems, CSS has cascade rules, and "fix it" means understanding both.
What to watch: MIT-licensed at 754B parameters is unusually permissive for a model this size. If inference costs continue dropping, GLM-5.1 becomes the "good enough" baseline for anyone who does not want to rent from OpenAI or Anthropic.
The real test is not whether it draws pelicans. It's whether organizations start shipping it in production because the license lets them.