This post was originally written in Japanese and translated into English by the author using ChatGPT. The original post is available here.
Introduction
AI coding tools have become much better, and I now write code by hand far less often than before. I use GitHub Copilot in VSCode through the OSS benefit program, and recently I have also been trying Claude Code and Codex.
Porting libraries to the Crystal language
Compared with other programming languages, Crystal does not have a very large library ecosystem.
However, many open source libraries have licenses that allow porting, and AI can provide a lot of help with porting work. Because of this, when a library is missing, it is becoming realistic to consider porting as an option.
In this article, I want to put into words and record what I actually do when porting libraries.
Choosing a reference library
The first step is to choose the library to use as a reference.
In Crystal, compared with C or Go, a higher level of abstraction is often expected. On the other hand, Crystal cannot use metaprogramming as freely as Ruby or Python.
In that sense, Crystal is a language somewhere in the middle, and it is not always possible to choose a single reference library. Sometimes it is necessary to look at multiple libraries and think about what would be best. In my case, I often look at active Rust and Go projects, while using Ruby and Python APIs as references. When the reference is written in Rust, binding to it may sometimes be better than porting it.
Deciding between porting and binding in that way is the first step. If I decide that porting is the right approach, I move on. There may also be cases where porting and bindings are mixed.
Checking the license
The most important point is the license. I check whether the original library is under a license that allows porting, such as MIT, BSD, Apache-2.0, or another license whose conditions I can comply with. I try to make the new library inherit the license terms of the original.
I also clearly state where the original project came from. In my case, I add the reference repository as a whole using git submodule. This also pins the exact version of the code that Coding Agents refer to, which helps avoid unnecessary misunderstandings and trouble.
When creating a final "tool", it may be reasonable to have multiple reference sources. But when creating a "library" by porting or binding, I think it is easier to maintain the project later if the main reference source is kept to one.
Making an overall plan with the web version of ChatGPT
First, I upload an archive of the source library, either a tarball or a zip file, to ChatGPT and ask for an overall policy for porting the whole library to Crystal. I upload an archive simply because there is a limit on the number of files that can be uploaded.
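For what it's worth, `git archive` is one convenient way to produce such an archive from the reference repository (the paths here are placeholders):

```sh
# Pack the reference library's tracked files at HEAD into a tarball.
git -C path/to/reference-lib archive --format=tar.gz -o reference-lib.tar.gz HEAD
```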
Why do I start with the web version of ChatGPT instead of a Coding Agent?
The reason is based on experience: I tend to get better results when I first discuss the whole thing with the web version. This is only a guess, but I think there are probably two reasons.
The first is the efficient use of internet search. Compared with local Coding Agents, the web version of ChatGPT is better at search. It can look through websites, technical blogs, and GitHub, and explore the policy from a wider point of view. When something is vague or unknown, it can search on its own and refine the plan.
The second reason is probably that the cloud environment has more efficient access to documentation. Compared with searching a codebase locally while spending tokens, searching code in the cloud seems to work better, at least in my experience.
Once the overall policy is produced, I ask additional questions about unclear points or my own requirements. At this stage, it may be useful to limit the amount of information by saying something like, "Please answer in three lines or less," so that ChatGPT's responses do not become too long or go off on a tangent.
When building a Crystal library as a binding to a static language such as C, C++, or Rust, it is important to discuss the boundary of the binding, the build system, and the infrastructure, and to make sure the understanding is aligned.
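For a sense of what that boundary looks like on the Crystal side, a binding is typically a `lib` declaration plus a thin wrapper. Everything below is a placeholder sketch, not a real library:

```crystal
# Raw C functions are declared in a lib block ("foo" is a placeholder).
@[Link("foo")]
lib LibFoo
  fun foo_version : LibC::Char*
end

# The wrapper module is the boundary: raw pointers stay inside,
# idiomatic Crystal types are exposed outside.
module Foo
  def self.version : String
    String.new(LibFoo.foo_version)
  end
end
```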
The desirable Crystal API design is often different from the language of the original library, such as Rust or Go. In such cases, I may also upload Ruby libraries as reference material and ask what kind of API would be desirable. However, Crystal is not Ruby, so simply making the API the same as Ruby is not necessarily the right answer. If I really want to make something good, I have to check it myself.
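As a small, hypothetical illustration of that difference: a literal port of a Go- or Rust-style API, versus the block-based form that reads more naturally in Crystal and guarantees cleanup, much like File.open:

```crystal
# Literal port of a Go/Rust-style API (Piyo::Reader is hypothetical).
reader = Piyo::Reader.new("data.bin")
begin
  data = reader.read_all
ensure
  reader.close
end

# A block form feels more at home in Crystal (and Ruby),
# and closes the reader automatically.
Piyo::Reader.open("data.bin") do |reader|
  data = reader.read_all
end
```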
Once the architecture and design are agreed on, I ask ChatGPT to write it down as a self-contained and executable plan, usually in a file such as PLAN.md. I often have PLAN.md written in Japanese so that I can read it easily.
However, PLAN.md may contain rough notes, wrong assumptions, or unorganized working context. For that reason, I usually do not publish this file as-is. Instead, I try to preserve important design decisions and caveats in a form that can be read later, such as in the README, issues, or commit messages.
That said, if the tool is not just for myself and I want many people to use it, I may ask for PLAN.md to be written in English from the beginning.
Doing the porting work locally
Because I want to make use of Copilot's free quota, I use VSCode locally. Recently, however, CLI agents have also become more capable, and it is becoming possible to use editors of one's choice, such as Zed.
First, I initialize the project repository.
I decide the Crystal project name, create a skeleton with `crystal init lib piyo`, add the reference repository as a submodule, and place PLAN.md in the repository.
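In shell terms, the setup is roughly this (the reference URL is made up; piyo is the placeholder name from above):

```sh
crystal init lib piyo
cd piyo
# Pin the original implementation so the Coding Agent always
# sees one fixed version of the reference code.
git submodule add https://github.com/example/reference-lib reference
git add PLAN.md  # the plan written with ChatGPT goes in the repo root
```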
I still do not know whether PLAN.md should be made more like a TODO list.
I do not want to lose the work, so I create a new private repository on GitHub and prepare to push the project there. After that, I ask the Coding Agent to proceed with the project while looking at PLAN.md.
During the work cycle, I periodically run `crystal tool format` for formatting and `crystal spec` for tests.
In my case, I run the linter (`ameba --no-color`) myself afterwards, then pass the results to the Agent and ask it to fix the issues in a batch.
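Concretely, the cycle is just these commands, with the lint output saved so it can be handed to the agent in one go:

```sh
crystal tool format           # normalize formatting
crystal spec                  # run the test suite
ameba --no-color > lint.txt   # collect lint results for the agent
```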
I also periodically ask it to review the plan, update it, or delete parts that have been completed.
Coding agents make rapid progress at first, but at some point they often start taking very small steps, and the work stops moving forward.
In such cases, I stop the agent for a while and ask questions such as: "What is the current problem?", "Are there places that should be refactored before continuing?", "Is there anything unrealistic in the plan itself?", or "Is there anything you want me to do, such as setting up the environment?"
In some cases, I make a tarball of the current state, upload it to ChatGPT again, ask it to evaluate the whole situation from a broader perspective, and have it create PLAN2.md.
However, this kind of workflow is only possible because I am working alone and will eventually publish everything on GitHub. I do not think this is a possible workflow when multiple people are coding for work.
Writing tests with fixtures
Some reference libraries provide fixtures for tests. When the original repository is added as a submodule, those fixtures can also be referenced, so they can be used in the Crystal-side tests as well.
However, my honest impression is that aiming for parity is not easy, because of differences caused by floating point errors, random numbers, race conditions, and so on.
To absorb differences in random numbers, one option is to write a small script in the original language that generates the random values, save those values as a fixture in JSON or another format, and feed them into the Crystal tests. This does not always work, but there are cases where it does.
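As a rough sketch of the Crystal side, assuming a fixture file and a hypothetical `Piyo::Sampler` that accepts injected values instead of a live RNG:

```crystal
require "spec"
require "json"

# random_values.json is a hypothetical fixture produced once by a small
# script in the reference library's language: the random values it
# consumed, plus the results it computed from them.
fixture  = JSON.parse(File.read("spec/fixtures/random_values.json"))
inputs   = fixture["inputs"].as_a.map(&.as_f)
expected = fixture["expected"].as_a.map(&.as_f)

describe "parity with the reference implementation" do
  it "reproduces the reference results from the recorded random stream" do
    sampler = Piyo::Sampler.new(inputs) # consumes recorded values, not a live RNG
    expected.each do |value|
      sampler.next.should be_close(value, 1e-9)
    end
  end
end
```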
Adding GitHub Actions
Once the project has taken some shape, I add several GitHub Actions workflows.
- `docs.yml` for generating documentation
- `build.yml` for building and releasing CLI tools
- `test.yml` for tests (a minimal sketch follows the list)
- `dependabot.yml` (or Renovate) for updating GitHub Actions
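As an illustration, a minimal test.yml might look like this; crystal-lang/install-crystal is the commonly used setup action, and the rest is a sketch to adapt:

```yaml
name: test
on: [push, pull_request]
jobs:
  spec:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true # fixtures may live in the reference submodule
      - uses: crystal-lang/install-crystal@v1
      - run: shards install
      - run: crystal spec
      - run: crystal tool format --check
```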
I end up adding these almost every time. In my case, I have a toy project called lolcat.cr, and I often copy workflows from it and then modify them.
After adding them, I adjust things until the tests and builds pass, and then add badges to the README.
Keeping README.md simple
This depends on the purpose. In my case, most of the AI-ported libraries I create are for my own use. Compared with the time when I wrote libraries by hand, my feeling has changed a little: I do not really think that I want the library to become widely used. Maybe I instinctively want to avoid losing time to maintenance or spending more money on tokens.
If README.md is too decorated, it starts to look like it was made by AI. A project with a gorgeous README but no maintenance feels, to me, like the abandoned ruins of a theme park. It does not leave a good impression.
So I usually ask for README.md to be written in a "simple, plain, and purely practical" style.
Deciding the granularity of commits
As a general principle, in order to run a trustworthy project, it is desirable to avoid force push and leave a transparent commit log. This is especially true when developing together in an open source community. Through pull requests and reviews, you can interact with people around the world and come to know what kind of people they are. I think this is one of the pleasures of participating in an open source community.
However, best practices for Git workflows in solo personal development that depends heavily on AI have not yet been established. It is necessary to record why a certain piece of code was included, but I think it is better to lean on Git than to attach a separate memory system only for AI.
That said, Git is chronological. When reordering commits, even if the final state of the code is the same, force push becomes necessary. In the AI era, I feel that we may need a version control method based on semantics, one that can rebase without depending so strongly on chronological order.
For now, personally, I ask the AI to write commit messages, and then I make the commits myself.
I think this works as a minimal check to confirm that the human goal and the AI's work target are aligned. But there is also criticism that this is laundering or hiding AI-written code as if it had been written by a human, and I do not think it can be called a best practice.
Embarrassingly, I also use force push a lot.
Especially in the early stage of a private repository where I am progressing through a plan, I repeatedly use --amend and force push, effectively using Git as a kind of "overwrite save" without leaving much history.
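That loop is nothing more than this (sketched here with --force-with-lease, the slightly safer variant of a plain force push):

```sh
git add -A
git commit --amend --no-edit   # fold new changes into the previous commit
git push --force-with-lease    # overwrite the remote branch (private repo only)
```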
Of course, this is mainly about private repositories before publication. The same thing should not be done on a shared public branch.
Conclusion
What I have written here reflects the situation as of April 2026.
I hope that when I reread this later, I will be able to think, "So that was what things were like at that time."
Coding Agents have made it possible to ask for explanations of algorithms and ideas that were previously difficult to understand, at any desired level of detail. They have also made it easy to quickly implement and try out ideas that come to mind. This really is revolutionary.
At the same time, I sometimes get too absorbed in AI coding, work for too many hours, and feel that it may harm my health. I think I need to be a little careful about that. Health comes first.
At the beginning, I wrote that this article was written by hand, not by AI.
Note added for the English version: This refers to the original Japanese version. This English translation was made with ChatGPT.
I wrote that on purpose because these days I often generate text with AI too, and I wanted to deliberately do something different this time.
That is all for this article.