# I'm a Korean garlic farmer with no PC. I built a programming language on my phone using only AI conversations.
TL;DR: No coding experience. No computer. Just a smartphone, copy-paste, and conversations with AI. The result is GarlicLang — a Python-based scripting language that tells you when AI is lying.
## What happened
I'm a garlic farmer in South Korea. I don't have a PC. I don't know how to code. But I wanted a way to give commands to AI and verify whether the output is real or hallucinated.
So I started talking to Claude (Anthropic's AI) on my phone. I described what I wanted in Korean. Claude designed the language. I copied the code, pasted it into ChatGPT's sandbox, and ran it. When tests failed, I carried the error messages back to Claude. When Claude needed execution results, I carried them from ChatGPT.
I was the human relay between AIs, using nothing but copy and paste.
The language is called GarlicLang. It's written in pure Python (standard library only, zero dependencies), and it runs inside AI sandboxes like ChatGPT's Code Interpreter.
## What makes it different
GarlicLang has a command that no other language has:
```
try
run "python3 script.py"
verify output contains "expected answer"
on hallucination
print "AI lied."
```
`on hallucination` triggers when the command succeeds (exit code 0) but the output doesn't match what you expected. It's designed specifically to catch AI fabrication: not crashes, not errors, but confident wrong answers.
## What it can do
Write files, run commands, verify results, define functions, use arrays, loop with while/break/continue, import other scripts, and catch errors or hallucinations. All in a syntax designed to be readable by non-programmers.
Example — check if AI wrote the correct file:
```
write "hello.py" "print('hello from garlic farm!')"
run "python3 hello.py"
verify output contains "hello"
```
Example — sum 1 to 100 with a loop:
```
let sum = 0
let i = 1
while i <= 100
let sum = sum + i
let i = i + 1
end
print sum
verify output contains "5050"
```
## The numbers (all verified by actual execution)
| Test suite | PASS | FAIL | Notes |
|---|---|---|---|
| Phase 1 — basics | 4 | 0 | file ops, run, verify |
| Phase 2 — error handling | 9 | 2 | 2 failures are intentional (test the error handlers) |
| Phase 3 — variables & print | 13 | 0 | enabled by v0.3.1 bug fix |
| Phase 4 — arrays, loops, functions | 16 | 0 | all v0.4 features verified |
| Total | 42 | 2 | 44 tests, 2 intentional failures |
Additional tests passed: recursion (5! = 120), nested arrays, Korean special characters, error recovery (try/on-fail with division by zero), and summing a 100-element array (= 5050).
All tests were executed in ChatGPT's Code Interpreter sandbox. Process ID, working directory, and file system contents were verified.
## The honest problems
Three bugs were found and documented:
- **Bug 1:** `while` treats the string `"0"` as true, but `if` treats it as false. Same condition, different behavior.
- **Bug 2:** `verify file "variable_name" contains "text"` doesn't resolve the variable; it looks for a file literally named `variable_name`. Reproduced and confirmed.
- **Bug 3:** After `verify run "command" contains "text"`, the interpreter doesn't save the output, so `on hallucination` checks the wrong data.
ChatGPT rated the project 6.0/10: originality 8, usability 6, completeness 5, stability 4, extensibility 6.
These are real scores from an AI that actually ran the code, not my own rating.
## How it was built
| AI | Role |
|---|---|
| Claude Opus 4.6 | Designed the language, wrote docs, analyzed bugs |
| ChatGPT (Code Interpreter) | Saved files, ran all tests, reproduced bugs |
| Me (garlic farmer) | Relayed messages between AIs via copy-paste on phone |
No git. No IDE. No terminal. Just chat windows and a clipboard.
## What I learned
AI estimates of line counts were consistently wrong (guesses ranged from 578 to 1,697; the actual count was 783 lines for the main module, measured with `wc -l`). Never trust AI estimates; always measure.
`pip install` fails in some AI sandboxes. The workaround is `sys.path.insert(0, '.')`. If that fails, a standalone build script merges all modules back into one file.
If you give AI too many instructions at once, it fails. Breaking tasks into single steps works.
## Current state
Version 0.4.1. Eight Python modules, ~2,000 total lines. Works in ChatGPT sandbox. Three known bugs documented with fix instructions ready. No external dependencies.
The source code isn't public yet. I'm still deciding how to release it.
Built with no code, no PC, no experience. Just garlic, a phone, and AI.