TL;DR
What Happened
- Dec 2025: Symbolica AI released @symbolica/agentica
- Same name as our Feb 2025 project @agentica
- Nearly identical unplugin-typia code
- Same obscure WebSocket RPC pattern from my 2015 library
- Oct 2025: Discussed our projects in Ryoppippi's interview
- Dec 2025: Released their version claiming "independent development"
Suspicious
- Code similarity: unplugin-typia ≈ unplugin-agentica
- Timeline: Interview (Oct) → Their release (Dec)
- Ryoppippi testimony: "Discussed wrtnlabs/agentica in interview"
- MIT violation: Removed credits, added only after complaint
- Identical concepts: Compiler-driven schema generation
- Same RPC pattern: Low-level ws + Proxy (extremely rare choice)
- Timing: Building transformer on legacy platform weeks before TypeScript 7.0 (Go) release
My Question
Is this convergent evolution or concept borrowing without attribution?
1. Summary
In December 2025, US AI-startup company "Symbolica AI" released @symbolica/agentica.
As an open source developer, I was surprised to find striking similarities to projects I've been developing since 2015—not just in concepts, but in naming, architecture, and even specific implementation patterns.
1.1. Observed Similarities
- Identical Project Name: @agentica (WrtnLabs, Feb 2025) = @symbolica/agentica (Dec 2025)
- Identical Core Concept: Auto-generating LLM schemas from TypeScript types via Compiler API (Compiler-Driven Development → Code Mode)
- Code Replication: unplugin-typia (Ryoppippi) = unplugin-agentica
- Identical RPC Approach: tgrid (2015) WebSocket RPC ≈ WARPC (JS Proxy + Promise pattern)
- Similar Documentation: Validation Feedback, TypeScript Controller, JSDoc parsing strategies
- Questionable Code Maturity: 17k LOC claims to replicate 400k+ LOC functionality, without any test files
- Puzzling Timing: Starting a TypeScript Compiler API transformer in late 2025—weeks before TypeScript 7.0 (Go-based) obsoletes the current architecture
1.2. My Request
I politely emailed Symbolica AI requesting proper attribution and suggesting they simply use the MIT-licensed typia directly instead of imitating and reinventing it under a commercial license. With TypeScript 7.0's Go-based compiler releasing in early 2026, building a new transformer on the legacy platform seemed particularly puzzling—I offered to handle the migration myself.
Symbolica AI responded that "everything except unplugin-typia was independently developed"—while claiming unfamiliarity with typia, whose name is literally in unplugin-TYPIA.
1.3. Ryoppippi's X Tweet (Jan 12, 2026)
Ryoppippi, author of unplugin-typia, tweeted about Symbolica AI.
Symbolica AI attempted to hire him, then after the hiring failed, copied his OSS code, removed credits, and only added them back belatedly after he raised concerns. He also stated that "samchon's OSS side is also quite problematic" and that wrtnlabs/agentica was "discussed in the interview."
Ryoppippi's tweets emerged while I was writing this article, so my perspective has evolved along the way.
1.4. Purpose of This Article
I seek the community's perspective on whether this represents coincidence/convergent evolution, or concept borrowing without proper attribution.
2. Preface
Hello, I'm an open source developer using the GitHub username samchon. I've created personal projects typia and tgrid, and at my current employer Wrtn Technologies (South Korea), I'm developing open source projects @agentica and @autobe.
Recently, US AI startup company "Symbolica AI" released their Agentica project (@symbolica/agentica) on GitHub, promoting its core concepts as their novel inventions.
After that, many people contacted me suggesting Symbolica AI had appropriated my open source projects, with some expressing frustration at what they viewed as ethically questionable.
The concepts in question resemble those introduced on typia's intro page and README, with links to related documentation. Specifically: automatically extracting function calling or structured output schemas from TypeScript types, and using them to build AI agents.
//----
// in typia
//----
typia.llm.application<BbsArticleService>();
typia.llm.structures<IBbsArticle>();
//----
// @agentica of wrtnlabs
//----
const agent: MicroAgentica = new MicroAgentica({
vendor: {
api: new OpenAI({ apiKey: "*****" }),
model: "openai/gpt-4.1-mini",
},
controllers: [
typia.llm.controller<ArixvService>(
"arixv",
new ArixvService(),
),
typia.llm.controller<BbsArticleService>(
"bbs",
new BbsArticleService(),
),
],
});
await agent.conversate("Hello, I want to create an article referencing a paper.");
//----
// @symbolica/agentica
//----
const agent = await spawn(
{
premise: 'Answer questions by searching the web.',
model: 'google/gemini-2.5-flash',
},
{ database },
);
await agent.call<Map<UserID, string>>(
"For each user, summarise their spending habits.",
);
When I first saw @symbolica/agentica's documentation, I was startled by how similar the concepts were to mine—even sharing the same project name. However, I had to consider convergent evolution: when people seek optimal solutions, they often arrive at the same conclusions. Before typia, projects like typescript-is and ts-runtime-checks attempted runtime validation using pure TypeScript types via compiler APIs.
I carefully analyzed @symbolica/agentica's source code. While the concepts matched, the code differed and seemed incomplete (17k lines attempting to replicate what took us 400k+ lines and years of testing, with no test files), so I was leaning toward convergent evolution—until I discovered two shocking facts. First, not my typia but Ryoppippi's supporting library unplugin-typia had been nearly identically replicated. Second, among countless possible approaches for agent server/client communication, they used the exact WebSocket RPC pattern from my 10+ year-old tgrid project (started in 2015), the same pattern Symbolica AI calls WARPC.
While unplugin-typia code replication seemed undeniable, and I was weighing whether typia/@agentica concepts were borrowed or independently developed by Symbolica AI, seeing my server/client communication approach also replicated tipped my judgment. When coincidences accumulate, they begin to look inevitable.
MIT licenses permit copying code and borrowing concepts freely. So I politely emailed Symbolica requesting they add "inspired by unplugin-typia/typia/tgrid/agentica" to their README. I also suggested, given the apparent implementation gaps (17k LOC vs 400k+, zero tests), that rather than reinventing these technologies under a commercial license, they might consider simply using typia directly—it's MIT-licensed and freely available for commercial use. Contrary to my expectations, Symbolica responded that besides unplugin-typia, everything was independently researched and developed by Symbolica AI.
What do you think? Is this truly coincidental convergent evolution? Or did they study my and my colleagues' open source projects comprehensively, borrow concepts, then promote them as original inventions without acknowledging sources? I'm unsure how to respond to this situation, so I'm writing to seek your advice.
Here is the list of open source projects directly related to this article.
| Package | License | Links | Since |
|---|---|---|---|
| tgrid | MIT | Github / Homepage | 2015 (renamed from samchon) |
| typia | MIT | Github / Homepage | 2022 (renamed from typescript-json) |
| @samchon/openapi | MIT | Github | 2022 (separated from typia) |
| @ryoppippi/unplugin-typia | MIT | Github | 2024 |
| @agentica/* | MIT | Github / Homepage | 2025-02 (separated from @nestia) |
| @symbolica/agentica | Commercial | Github / Homepage | 2025-12 |
And below are our other related open-source projects.
| Package | License | Links | Summary |
|---|---|---|---|
| @nestia/* | MIT | Github / Homepage | NestJS helper library at the compiler level |
| @autobe/* | GPL v3 | Github / Homepage | Backend coding agent, the final purpose of @agentica |
3. Agentica vs Agentica
3.1. @agentica
import { MicroAgentica } from "@agentica/core";
import OpenAI from "openai";
import typia from "typia";
import { ArixvService } from "./services/ArixvService";
import { BbsArticleService } from "./services/BbsArticleService";
const agent: MicroAgentica = new MicroAgentica({
vendor: {
api: new OpenAI({ apiKey: "*****" }),
model: "openai/gpt-4.1-mini",
},
controllers: [
typia.llm.controller<ArixvService>(
"arixv",
new ArixvService(),
),
typia.llm.controller<BbsArticleService>(
"bbs",
new BbsArticleService(),
),
],
});
await agent.conversate("Hello, I want to create an article referencing a paper.");
Agentica (official package name @wrtnlabs/*), which I developed as open source at Wrtn Technologies, is an agent library specialized for LLM function calling.
As you can see, the core functionality is: pass in TypeScript class types and instances, and AI automatically invokes their functions via function calling. In the example above, functions from the ArixvService and BbsArticleService classes can be automatically called through AI agent conversation. The key is the typia.llm.controller<Class>() function, which analyzes the ArixvService and BbsArticleService class types at the compiler level and converts them to LLM function calling schemas.
My colleagues and I are using this methodology and skillset to build @autobe, a backend coding agent. By structuring compiler AST as function calling (e.g., AutoBeDatabase and AutoBeOpenApi), we've successfully automated the initial generation of backend server DB/API design and development, and are now tackling maintenance automation.
import { MicroAgentica } from "@agentica/core";
import OpenAI from "openai";
import typia from "typia";

// AutoBeDatabase / AutoBeOpenApi are the compiler-AST types of the @autobe project.
class AutoBeApplication {
  /** Design the database schema from the gathered requirements. */
  public async database(p: {
    models: AutoBeDatabase.IModel[];
  }): Promise<void> {}

  /** Design the OpenAPI document (API specifications). */
  public async interface(p: {
    document: AutoBeOpenApi.IDocument;
  }): Promise<void> {}
}

const agent = new MicroAgentica({
  vendor: {
    api: new OpenAI(),
    model: "qwen/qwen3-next-80b-a3b-instruct",
    baseURL: "http://localhost:1234",
  },
  controllers: [
    typia.llm.controller<AutoBeApplication>(
      "autobe",
      new AutoBeApplication(),
    ),
  ],
});
await agent.conversate("I wanna make an e-commerce service...");
await agent.conversate("Design database from my requirements.");
await agent.conversate("Design API specifications.");
3.2. @symbolica/agentica
import { spawn } from '@symbolica/agentica';
import { UserID, Database } from '@some/sdk';
const database = new Database(...);
const agent = await spawn(
{
premise: 'Answer questions by searching the web.',
model: 'google/gemini-2.5-flash',
},
{ database },
);
await agent.call<Map<UserID, string>>(
"For each user, summarise their spending habits.",
);
Symbolica's @symbolica/agentica is a library specialized for LLM structured output.
As shown, when you specify type T in agent.call<T>, it analyzes this at compiler level, converts it to JSON schema, and internally uses AI's structured output feature to generate data of the specified T type. In typia terms, this corresponds to the typia.llm.parameters<T>() function.
Symbolica calls this "code mode" and introduces it as a new paradigm.
Symbolica AI's README states:
"Agentica is a type-safe AI framework that lets LLM agents integrate with your code—functions, classes, live objects, even entire SDKs. Instead of building MCP wrappers or brittle schemas, you pass references directly; the framework enforces your types at runtime, constrains return types, and manages agent lifecycle."
Type-safe AI framework, passing TypeScript types directly, runtime type validation, return type constraints... these are all features typia has long provided. typia.llm.application<Class>() auto-generates LLM function calling schemas from TypeScript types and includes typia.validate<T>() for runtime type validation. typia.llm.parameters<T>() provides type constraints for structured output.
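To illustrate how these long-standing typia features compose, here is a minimal sketch following the call signatures quoted in this article (IArticle and llmResponseText are illustrative names, not part of any of the libraries discussed):
import typia, { tags } from "typia";

declare const llmResponseText: string; // raw JSON text returned by the LLM (illustrative)

// A single TypeScript type is the only "schema" ever written by hand.
interface IArticle {
  /** Title shown in the list view. */
  title: string;
  /** View count; must be a non-negative integer. */
  views: number & tags.Type<"uint32">;
}

// Structured-output schema derived from the type at compile time.
const schema = typia.llm.parameters<IArticle>();

// Runtime validation of whatever the LLM actually returned.
const result = typia.validate<IArticle>(JSON.parse(llmResponseText));
if (result.success === false) {
  console.error(result.errors); // detailed type errors, ready to feed back to the model
}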
Yet nowhere in Symbolica's README is there mention of typia, @agentica, or tgrid. Everything is presented as innovations independently developed by Symbolica AI.
3.3. Convergent Evolution
At first glance, this seemed plausible—until I examined further.
Using TypeScript Compiler API to automatically generate AI function calling or JSON schemas from TypeScript types can be understood as convergent evolution.
Also, since Agentica is a compound word (Agent+ica) and the company name is Symbolica, coincidentally matching names isn't impossible. Perhaps they coincidentally pondered the same topic, coincidentally invented the same methodology, and thus coincidentally arrived at the same project name. Maybe I just thought of it and implemented it slightly earlier, while someone else at a different time independently invented the same approach through their own effort and research—that's entirely possible, right?
Therefore, even if Symbolica AI introduced this as new technology, grandly claiming to have opened a new paradigm through their own research and development, and promoted it extensively, I could have written it off as a small, innocent delusion.
4. Perspective of typia
4.1. What is typia?
import typia from "typia";
typia.is<number>(3); // returns true
typia.assert<number>("three"); // throws TypeGuardError
typia.validate<A | (B & C)>(input); // returns validation result
typia.json.schema<MyType>(); // returns JSON schema
typia.llm.structures<SomeType>(); // make AI structured output schema
typia.protobuf.createAssertDecode<YourType>(); // make protobuf decoder
To briefly explain typia and unplugin-typia: typia is a transformer library using TypeScript Compiler API that enables various tasks using only TypeScript types, without defining duplicate schemas.
The core innovation is transforming compile-time type information into optimized runtime code. When you call one of typia's generic functions, the transformer analyzes the target type T during compilation and replaces the call with dedicated logic for that specific type.
If you invoke typia.validate<T>(), it generates a specialized runtime type checking function for type T. If you call typia.llm.application<Class>(), it generates LLM function calling schema code specifically tailored to that class type.
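To make the idea concrete, here is a rough before/after sketch of what such a transformation amounts to (the "after" shape is illustrative only, not typia's exact generated output):
import typia from "typia";

interface IMember {
  id: string;
  age: number;
}

// Source code as the developer writes it:
export const checkWritten = (input: unknown): boolean => typia.is<IMember>(input);

// Roughly what the transformer substitutes at compile time
// (illustrative shape only, not typia's actual generated code):
export const checkGenerated = (input: any): boolean =>
  typeof input === "object" &&
  input !== null &&
  typeof input.id === "string" &&
  typeof input.age === "number";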
Sometimes people ask: "If typia is so convenient, why did class-validator and zod conquer the world?" It's because typia is difficult to install. zod requires just npm install zod and is immediately usable, but typia fundamentally hacks the Compiler API, making installation more complex.
Moreover, it only works with the official TypeScript compiler tsc, not with third-party compilers like SWC or esbuild, nor with environments that use them, such as Next.js and Vite. Given their prominence in the frontend ecosystem, this is a fatal limitation compared to the mass adoption of class-validator and zod.
Furthermore, are runtime validation and JSON schema generation truly critical business logic features? Not really. Defining schema types twice might be more economical than struggling through installation.
# zod or class validator
npm install zod
npm install class-validator
# typia
npm install -D typescript
npm install -D ts-patch
npm install typia
npx typia setup
// typia
typia.validate<IBbsArticle>(article);
// class-validator
import { ApiProperty } from "@nestjs/swagger";
import { Type } from "class-transformer";
import { IsArray, IsObject, IsOptional, ValidateNested } from "class-validator";

class BbsArticle {
@ApiProperty({
type: () => AttachmentFile,
nullable: true,
isArray: true,
description: "List of attached files.",
})
@Type(() => AttachmentFile)
@IsArray()
@IsOptional()
@IsObject({ each: true })
@ValidateNested({ each: true })
files!: AttachmentFile[] | null;
}
4.2. What is unplugin-typia?
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import UnpluginTypia from "@ryoppippi/unplugin-typia/vite";
export default defineConfig({
plugins: [
UnpluginTypia(),
react(),
],
});
Then a miraculous library appeared that enables typia to work in modern build environments: Ryoppippi's @ryoppippi/unplugin-typia.
As mentioned earlier, typia has a fundamental limitation: it only works with the official TypeScript compiler tsc, not with third-party compilers like SWC or esbuild. This means typia cannot be used in modern frontend frameworks like Next.js (which uses SWC) or Vite (which uses esbuild), making it practically unusable for most frontend developers despite its convenient features.
unplugin-typia solved this problem by creating a unified plugin that works across multiple bundlers. It leverages the unplugin framework to provide a single codebase that integrates with Vite, Webpack, Rollup, esbuild, and Next.js. By intercepting the build process and applying Typia's transformations before other compilers take over, it enables typia to work seamlessly in environments that were previously incompatible.
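For readers unfamiliar with how such a plugin works, here is a conceptual sketch of the unplugin mechanism itself (not unplugin-typia's actual implementation; applyTypiaTransform is a hypothetical placeholder for whatever runs the tsc-based transformer):
import { createUnplugin } from "unplugin";

// One plugin definition, usable from Vite, Webpack, Rollup, esbuild, etc.
export const TypiaLikePlugin = createUnplugin(() => ({
  name: "typia-like-transform",
  enforce: "pre", // run before the bundler's own compiler (esbuild/SWC) sees the file
  transformInclude(id) {
    return id.endsWith(".ts") || id.endsWith(".tsx");
  },
  async transform(code, id) {
    // Hypothetical helper: applies the tsc-based typia transformation to this module.
    const transformed = await applyTypiaTransform(code, id);
    return { code: transformed, map: null };
  },
}));

// Declared only to keep this sketch self-contained.
declare function applyTypiaTransform(code: string, id: string): Promise<string>;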
Now, here's where things get interesting. Symbolica AI's @symbolica/agentica also generates AI structured output schemas by hacking the TypeScript Compiler API via ts-patch, just as typia does. While their schema generator logic is self-developed (albeit incomplete), examining @symbolica/agentica code piece by piece revealed that their unplugin-agentica code was nearly identical to @ryoppippi/unplugin-typia.
My thinking that Symbolica AI might have walked the same path via convergent evolution turned to suspicion when I discovered this code similarity. With unplugin-agentica code being nearly identical to unplugin-typia, and the name literally being unplugin-TYPIA, claiming they didn't reference typia is difficult for me to readily understand.
4.3. typia Introduces agentica
Another important point: typia's main homepage introduces Agentica's core concepts (encompassing both Wrtn Technologies' @agentica and Symbolica AI's @symbolica/agentica). Visiting typia's main page (https://typia.io), the very first screen introduces generating LLM function calling schemas from TypeScript types.
The first slide of that page presents the typia.llm.application<Class>() function as one of the main features. The "code mode" concept that Symbolica AI claims to have independently conceived and developed (and promotes through their homepage and blog) has long been introduced as a main feature on the first page, first slide of typia's homepage.
Clicking that link leads to a page introducing Wrtn Technologies' @agentica and how to combine it with typia. Reading @agentica's guide documents reveals all current @symbolica/agentica core concepts, followed by explanations of their WARPC WebSocket RPC approach—essentially all information needed to build Agentica.
This is identical in typia's README documentation, where the first section announces functions like typia.llm.application<App>() and typia.llm.parameters<T>(), with links similarly guiding to @agentica's introduction page.
// RUNTIME VALIDATORS
export function is<T>(input: unknown): input is T; // returns boolean
export function assert<T>(input: unknown): T; // throws TypeGuardError
export function assertGuard<T>(input: unknown): asserts input is T;
export function validate<T>(input: unknown): IValidation<T>; // detailed
// JSON FUNCTIONS
export namespace json {
export function schema<T>(): IJsonSchemaUnit<T>; // JSON schema
export function assertParse<T>(input: string): T; // type safe parser
export function assertStringify<T>(input: T): string; // safe and faster
}
// AI FUNCTION CALLING SCHEMA
export namespace llm {
// collection of function calling schemas
export function application<Class>(): ILlmApplication<Class>;
export function controller<Class>(
name: string,
execute: Class,
): ILlmController; // +executor
// structured output
export function parameters<P>(): ILlmSchema.IParameters;
export function schema<T>(
$defs: Record<string, ILlmSchema>,
): ILlmSchema; // type schema
}
// PROTOCOL BUFFER
export namespace protobuf {
export function message<T>(): string; // Protocol Buffer message
export function assertDecode<T>(buffer: Uint8Array): T; // safe decoder
export function assertEncode<T>(input: T): Uint8Array; // safe encoder
}
// RANDOM GENERATOR
export function random<T>(g?: Partial<IRandomGenerator>): T;
Personally, as someone who finds Symbolica AI's claim of knowing unplugin-typia but not typia absurd and incomprehensible, I emotionally suspect they learned concepts from typia's main page, continued learning through @agentica guide documents, and applied this to @symbolica/agentica.
5. WebSocket RPC vs WARPC
5.1. Industry Standard Approaches
When building AI agent systems, most developers use SSE (Server-Sent Events) for streaming responses. OpenAI, Anthropic, and Google Gemini all use SSE as the industry standard—it's simple, HTTP-based, and works everywhere.
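For contrast, this is roughly what SSE-based streaming looks like on the consumer side (a minimal sketch; the endpoint URL and the "delta" field are made up for illustration):
// Minimal sketch of the SSE streaming pattern most AI vendors use.
async function streamCompletion(prompt: string): Promise<void> {
  const response = await fetch("https://api.example.com/v1/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, stream: true }),
  });
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done || value === undefined) break;
    // SSE frames arrive as "data: {...}" lines over plain HTTP.
    for (const line of decoder.decode(value).split("\n")) {
      if (line.startsWith("data: ") && line !== "data: [DONE]") {
        process.stdout.write(JSON.parse(line.slice(6)).delta ?? ""); // "delta" is a made-up field
      }
    }
  }
}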
For bidirectional communication, developers typically choose from established high-level options:
- Socket.io (~60k GitHub stars): Event-based, auto-reconnection, battle-tested
- JSON-RPC over WebSocket: Standardized protocol, well-documented
- SignalR: Popular in .NET ecosystem
- GraphQL Subscriptions: Query-based real-time updates
- WAMP: RPC and PubSub protocol
However, both TGrid and Symbolica's WARPC took a different path: using the low-level ws library directly and building a custom JavaScript Proxy-based RPC protocol on top.
This approach is significantly more complex, requiring:
- Manual connection lifecycle and reconnection handling
- Custom message framing and protocol implementation
- Type serialization built from scratch
- Manual error recovery
- Debugging through Proxy traps (notoriously difficult)
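To make that complexity concrete, here is a minimal sketch of the ws + Proxy + Promise pattern in question (illustrative only; this is neither TGrid's nor WARPC's actual code):
import WebSocket from "ws";

export function connectRpc<T extends object>(url: string): Promise<T> {
  const socket = new WebSocket(url);
  const pending = new Map<number, (value: unknown) => void>();
  let sequence: number = 0;

  // Resolve the matching pending call when the server answers with { id, value }.
  socket.on("message", (raw) => {
    const { id, value } = JSON.parse(raw.toString());
    pending.get(id)?.(value);
    pending.delete(id);
  });

  // Every property access on the proxy becomes a remote method call returning a Promise.
  const remote = new Proxy({} as T, {
    get: (_target, method) =>
      (...args: unknown[]) =>
        new Promise((resolve) => {
          const id = ++sequence;
          pending.set(id, resolve);
          socket.send(JSON.stringify({ id, method: String(method), args }));
        }),
  });

  return new Promise((resolve) => socket.on("open", () => resolve(remote)));
}

// Usage sketch: const calc = await connectRpc<ICalculator>("ws://127.0.0.1:37000");
//               await calc.plus(10, 20);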
5.2. TGrid's Context and Evolution
import { Controller } from "@nestjs/common";
import { WebSocketRoute } from "@nestia/core";
import { Driver } from "tgrid";
@Controller("calculate")
export class CalculateController {
@WebSocketRoute()
public async connect(
@WebSocketRoute.Driver() driver: Driver<ICalculatorProvider>
): Promise<ICalculator> {
return {
plus: (a, b) => a + b,
minus: (a, b) => a - b,
};
}
}
TGrid is my personal library maintained since 2015. It started as an educational project and evolved over 10 years. By 2022, when I created nestia (my NestJS enhancement library), I integrated TGrid to provide WebSocket RPC through the @WebSocketRoute() decorator.
For @agentica, TGrid was the natural choice because @agentica was built to support @autobe, our AI agent that automatically generates NestJS backend applications. AutoBE creates complete backends (database schemas, API specs, server code) and must serve Agentica agents as part of those generated backends.
This creates a specific architectural requirement:
- AutoBE generates NestJS applications
- Those apps need to serve Agentica agents
- Generated code must integrate naturally with NestJS architecture
- Therefore, Agentica needs seamless NestJS WebSocket support
The technical stack evolved organically:
- Nestia: NestJS enhancement with the @WebSocketRoute() decorator
- TGrid: WebSocket RPC library (my personal project since 2015)
- Agentica: Agent framework built on TGrid
- AutoBE: Generates NestJS backends that serve Agentica agents
TGrid uses the ws library because that's what I started with over a decade ago in 2015. The JavaScript Proxy pattern, bidirectional RPC, and custom message protocol evolved organically as I built and maintained the library for my own needs over these 10+ years.
When building Agentica, I used TGrid because:
- I built it and understand it deeply
- It already integrates with Nestia/NestJS through 10+ years of development
- It provides the type-safe RPC that AutoBE's code generation requires
- It's part of an ecosystem I've built over a decade
TGrid is relatively obscure: ~160 GitHub stars, ~40k monthly downloads. It's a personal library I built and maintained over a decade (since 2015), not a widely-known solution. Most developers building AI agents would never encounter it.
What is Nestia?
Nestia is a compiler-level helper library for NestJS:
- SDK Generator: Auto-generates type-safe client fetch functions from NestJS controllers
- @WebSocketRoute() Decorator: Integrates TGrid's WebSocket RPC directly into NestJS (this is how Agentica serves agents)
- Performance: Runtime validation 20,000x faster than class-validator, JSON serialization 200x faster than class-transformer
- AI Integration: Generates OpenAPI specs and LLM function calling schemas from pure TypeScript types
5.3. WARPC Implementation
import { Driver, WebSocketConnector } from "tgrid";
const connector = new WebSocketConnector<null, null, ICalculator>(null, null);
await connector.connect("ws://127.0.0.1:37000");
const remote: Driver<ICalculator> = connector.getDriver();
await remote.plus(10, 20); // type-safe remote call
When examining @symbolica/agentica, I found they'd built "WARPC" (WebSocket Async RPC)—and it matched TGrid's approach precisely.
Terminology comparison:
| TGrid | WARPC | Purpose |
|---|---|---|
| Communicator | Frame | WebSocket connection management |
| Provider | FrameContext.resources | Objects exposed by server |
| Driver<T> | Virtualizer | Client-side proxy for remote objects |
| Invoke.IFunction | RequestMsg | RPC request message format |
| Invoke.IReturn | ResponseMsg | RPC response message format |
Implementation comparison:
TGrid:
private _Proxy_func(name: string): FunctionLike {
const func = (...params: any[]) => this._Call_function(name, ...params);
return new Proxy(func, {
get: ({}, newName: string) => {
if (newName === "bind") return (thisArg: any, ...args: any[]) => func.bind(thisArg, ...args);
return this._Proxy_func(`${name}.${newName}`);
},
});
}
WARPC:
return new Proxy(target, {
get: (_t, prop: PropertyKey) => {
if (prop === '__uid__') return uid;
if (typeof prop === 'string') {
if (methods.includes(prop)) {
return (...args: any[]) => this.dispatcher.virtualMethodCall(uid, prop, args);
}
}
return undefined;
},
});
Both implementations share:
- Low-level
wslibrary (not Socket.io or other high-level frameworks) - JavaScript Proxy's
gettrap for method interception - Promise-based async RPC
- Bidirectional communication (server can call client)
- Custom message protocol
- Type-safe remote invocation
5.4. Comparing Alternative Approaches
The complexity both TGrid and WARPC chose:
Low-level ws library
+ Custom message protocol
+ JavaScript Proxy pattern
+ Bidirectional RPC
+ Custom type serialization
= Very specific, very complex implementation
Simpler alternatives that could provide similar functionality:
Socket.io (Hours to implement):
socket.emit('calculate', { op: 'plus', a: 10, b: 20 }, (result) => {
console.log(result);
});
- Auto-reconnection and fallback mechanisms
- 60k+ stars, battle-tested
- Massive community, production-ready out of the box
JSON-RPC over WebSocket (Hours to implement):
client.send({
jsonrpc: "2.0",
method: "calculate.plus",
params: [10, 20],
id: 1
});
- Standardized protocol, well-documented
- Multiple library implementations
- Easy to debug
For TGrid/Agentica:
- Personal library maintained since 2015
- Already integrated with Nestia/NestJS
- AutoBE code generation requirements
- Part of a long-evolved ecosystem
For WARPC/Symbolica:
- No personal library history to leverage
- No NestJS integration requirements
- No code generation workflow
- No explained reason for choosing this specific approach
5.5. Sequential Decision Analysis
Consider the decision tree for building agent communication:
- Transport choice: SSE (industry standard for AI agents) vs WebSocket (uncommon)
- Library choice: Socket.io (60k stars, popular) vs raw ws (complex, manual)
- Protocol choice: JSON-RPC (standard) vs custom RPC (rare)
- Type safety mechanism: Direct calls vs JavaScript Proxy (very rare)
- Communication pattern: Request-response vs bidirectional object sharing (extremely rare)
At each decision point, TGrid/WARPC chose the uncommon path. The probability of independently making the same rare choices at every step becomes increasingly small with each identical choice.
5.6. Documentation Trail
@agentica's documentation explicitly links to TGrid, explaining how it works and why it's used. Anyone studying @agentica's architecture would discover TGrid, understand its patterns, and see working implementations.
For TGrid/Agentica, every complex decision has a justification rooted in 10+ years of organic evolution (since 2015), NestJS integration needs, and AutoBE's code generation requirements.
For WARPC/Symbolica, the same complexity exists without the same constraints—no personal library history, no framework integration needs, no code generation workflow. Anyone finding TGrid through @agentica's documentation could replicate the pattern without considering whether those same architectural constraints applied to their use case.
6. Documentation Concept Comparison
As seen, @symbolica/agentica shows traces of referencing WrtnLabs/Samchon/Ryoppippi technologies throughout: project name (@agentica), core concepts (type-safe AI framework, runtime type validation, return type constraints), typia's LLM features, unplugin-typia's build integration, and tgrid's WebSocket RPC patterns.
Now let's compare core philosophies and concepts explained in both frameworks' documentation.
Bottom line: both prioritize "type-safe AI Function Calling" as core value, propose "compiler-based schema auto-generation" as main methodology, and suggest "accuracy improvement through Validation Feedback" as solution. Only names and terminology differ; fundamental philosophy and approach are identical.
6.1. Core Concept Comparison Table
| WrtnLabs Concept | Symbolica Concept | Match |
|---|---|---|
| Compiler-Driven Development | Code Mode | ✅ |
| Validation Feedback Strategy | How It Works + Agent Errors | ✅ |
| TypeScript Controller | Agentic Functions | ✅ |
| JSDoc Documentation | JSDoc parsing (implemented but undocumented) | ✅ |
6.2. Compiler-Driven Development
The first striking point is the core idea of "auto-generating schemas via compiler."
WrtnLabs established this as an explicit methodology with a name:
"LLM function calling schema must be built by compiler, without any duplicated code. I call this concept as 'Compiler Driven Development'."
Symbolica calls the same concept "Code Mode." The core concept—compiler analyzing TypeScript/Python code types to auto-generate schemas—is identical to Compiler-Driven Development.
However, WrtnLabs explicitly named and documented the "Compiler-Driven Development" methodology, while Symbolica explains the same concept with the marketing term "Code Mode."
6.3. Validation Feedback Strategy
Second: strategy for feeding back errors when LLM creates wrong-typed arguments to trigger retry.
WrtnLabs presents this strategy with actual performance data:
"1st trial: 30% (gpt-4o-mini in shopping mall chatbot), 2nd trial with validation feedback: 99%, 3rd trial: never have failed"
const result: IValidation<unknown> = func.validate(p.call.arguments);
if (result.success === false) {
return p.retry("Type errors detected", {
errors: result.errors
});
}
Symbolica documents the same concept as How It Works and Agent Errors. However, they provide no performance data and scatter explanations across multiple pages rather than consolidating into one clear strategy like WrtnLabs.
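As a minimal sketch of what such a validation-feedback loop amounts to (illustrative only, not @agentica's internal implementation; IArticleInput and the ask callback are made-up names):
import typia, { IValidation } from "typia";

// The argument type the LLM is expected to produce (illustrative).
interface IArticleInput {
  title: string;
  body: string;
}

// Validate what the model produced; on failure, hand the detailed type errors
// back so the model can correct itself on the next attempt.
async function createArticleWithFeedback(
  ask: (errors?: IValidation.IError[]) => Promise<unknown>, // wraps the LLM call
): Promise<IArticleInput> {
  let errors: IValidation.IError[] | undefined;
  for (let attempt = 0; attempt < 3; attempt++) {
    const candidate = await ask(errors);
    const result: IValidation<IArticleInput> = typia.validate<IArticleInput>(candidate);
    if (result.success) return result.data;
    errors = result.errors; // e.g. { path: "$input.title", expected: "string", value: 123 }
  }
  throw new Error("LLM kept producing type-invalid arguments");
}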
6.4. TypeScript Controller vs Agentic Functions
Third: converting TypeScript types to LLM tools.
WrtnLabs calls this TypeScript Controller and implements via typia.llm.application<Service>(). Symbolica calls it Agentic Functions using the agentic() function. Different names, but identical core concept: analyzing TypeScript types at compile time to create LLM-callable functions.
6.5. JSDoc Documentation
Fourth: conveying function descriptions to LLM.
WrtnLabs recommends detailed function, DTO, and property documentation via JSDoc comments in their Documentation Strategy.
Symbolica also implements logic parsing JSDoc comments (/** */) to use as LLM schema descriptions, but lacks official documentation. Both frameworks use TypeScript Compiler API to extract comments for LLM, employing the same approach.
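A small sketch of the shared approach (assuming typia's call style quoted earlier; the service class and its methods are illustrative):
import typia from "typia";

class PaymentService {
  /**
   * Refund a captured payment.
   *
   * @param props.paymentId ID of the payment to refund
   * @param props.reason Human-readable reason, shown to the customer
   */
  public async refund(props: { paymentId: string; reason: string }): Promise<void> {}
}

const app = typia.llm.application<PaymentService>();
// The JSDoc text above ends up as the function/parameter descriptions in the
// generated schema, so the model knows when and how to call refund().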
7. Code Completeness and Implementation Quality
Having compared architectural patterns, documentation concepts, and implementation details, I'd like to examine one more dimension: the actual code volume and completeness relative to claimed functionality.
7.1. Lines of Code Analysis
| Repository | LOC | Note |
|---|---|---|
| samchon/typia | 330,104 | Compiler/Transformer |
| wrtnlabs/agentica | 48,625 | Agent Framework |
| samchon/tgrid | 31,031 | WebSocket RPC |
| samchon/openapi | 23,018 | OpenAPI and LLM schema types |
| ryoppippi/unplugin-typia | 2,565 | Plugin Library |
| symbolica-ai/agentica-typescript-sdk | 17,272 | Handles all above functionalities |
Symbolica's SDK documentation states it provides:
- TypeScript Compiler API transformation (typia's core domain: 330k LOC)
- Type-safe WebSocket RPC (tgrid: 31k LOC)
- Agent framework architecture (@agentica: 48k LOC)
- Build tool integration (unplugin-typia: 2.5k LOC)
Yet the entire codebase totals 17,272 lines—even smaller than @samchon/openapi (23k LOC), which only defines type definitions like OpenApi.IDocument and ILlmFunction.
The combined LOC of typia, tgrid, @agentica, @samchon/openapi, and unplugin-typia exceeds 435,000 lines. Symbolica claims to replicate all of this with just 17,272 lines—roughly 1/25th of the original. Can what Symbolica calls "Code Mode" truly be achieved with such a fraction of the codebase? I have fundamental doubts.
Either they've discovered a miraculous optimization we missed over years of development, or something essential is missing.
7.2. Test Coverage
@symbolica/agentica repository contains zero test files.
From my four years of experience developing typia, I can say with certainty: achieving what Symbolica calls "Code Mode" without tests is impossible.
Here's why. TypeScript's type system is extraordinarily complex:
- Union & Intersection Types: A | B, A & B, and their nested combinations like A & (B | C)
Mapped & Conditional Types:
{ [K in keyof T]: T[K] },T extends U ? X : Y -
Template Literal Types:
`${A}-${B}`, pattern matching on strings - Recursive Types: Self-referencing structures that can easily cause infinite loops
-
Generic Constraints:
T extends SomeType, with complex inheritance chains
The combinations are nearly infinite. And each combination can behave differently when transformed into JSON schemas or LLM function calling schemas. A & (B | C) doesn't always equal (A & B) | (A & C). Recursive types need cycle detection. Optional properties, nullable types, default values—each requires careful handling.
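A small taste of the kinds of types a schema generator has to get right (illustrative types, not taken from any of the projects discussed):
// Recursion: needs cycle-aware handling (e.g. $defs/$ref) to avoid infinite loops.
type Category = {
  name: string;
  children: Category[];
};

// Intersection over a union: must be distributed correctly into the schema.
type Audit = { createdAt: string } & (
  | { kind: "user"; userId: string }
  | { kind: "system" }
);

// Mapped and template literal types still have to collapse into plain JSON schema.
type Flags = { [K in "draft" | "published"]: boolean };
type Route = `/${string}`;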
Over four years, typia accumulated tens of thousands of test cases. Not by design, but by necessity—users kept reporting edge cases I never anticipated. Every bug report became a test case. Every test case revealed more edge cases. This cycle repeated endlessly.
Only through this grueling process could I finally generate correct function calling schemas from arbitrary TypeScript types and implement reliable validation feedback that tells AI exactly what went wrong when it produces malformed arguments.
The culmination of this work is AutoBE. By structuring compiler AST as function calling targets, AutoBE achieves fully automated backend development—AI constructs complete database schemas and API specifications through pure TypeScript types:
(Demonstrations: Claude Sonnet 4.5 / Qwen3 Next 80B A3B)
7.3. Code Characteristics
Reviewing the implementation, I noticed patterns that raised questions about production readiness:
- Incomplete error handling paths
- Type assertions without runtime validation
- Limited edge case coverage
- Minimal defensive programming
The code structure exhibits patterns commonly associated with rapid prototyping: architecturally sound at first glance, but lacking the defensive patterns, comprehensive error handling, and battle-tested refinements that typically emerge from extensive production use and iterative debugging.
Modern development tools—including AI-assisted coding—have legitimate value in accelerating initial implementation. However, production frameworks claiming to replicate years of battle-tested infrastructure typically demonstrate:
- Comprehensive test suites covering edge cases
- Defensive programming patterns learned through real-world failures
- Iterative refinements based on user feedback
- Error handling matured through production incidents
The absence of test files, combined with the limited codebase size (17k LOC attempting to replicate 400k+ LOC of functionality), suggests the implementation may not yet have undergone the extensive validation and hardening process typically required for production-ready frameworks of this complexity.
7.4. Questions About Production Positioning
What I find difficult to understand is the release strategy:
- December 2025: SDK publicly released
- Immediately: Extensive marketing as production-ready technology
- Reality: 17k LOC attempting to replace 400k+ LOC of battle-tested infrastructure, without tests
Why promote a framework so aggressively before establishing code maturity?
When we released @agentica publicly, it came after months of internal production use at Wrtn Technologies, extensive testing, and refinement based on real workloads. Even then, we clearly documented known limitations and edge cases.
I understand "move fast and ship early" is a valid startup philosophy. But when claiming independent development of technology that replicates years of community work, shouldn't the code itself demonstrate that depth of understanding?
7.5. Implications for Similarity Analysis
These observations don't prove concept borrowing by themselves. But they add context to the architectural similarities:
If independently developed: How does 17k LOC without tests achieve what required 400k+ LOC and years of hardening? What breakthrough enabled this efficiency?
If concepts were studied and reimplemented: The implementation completeness suggests gaps in understanding the underlying complexity—making the architectural similarities more striking.
For evaluation: Should frameworks be judged on marketing materials, or on code maturity and demonstrated reliability?
I'm sharing these observations because they puzzled me during analysis. Perhaps the community has perspectives I'm missing.
7.6. The TypeScript-Go Timing Question
One question puzzles me as a transformer library developer: Why build a TypeScript Compiler API-based transformer now?
Microsoft's TypeScript 7.0—a complete rewrite in Go (codenamed "Project Corsa")—is targeting early 2026 release. That's not "someday"—that's weeks away. The preview compiler tsgo is already available and developers are using it today.
As of Microsoft's December 2025 progress report, type-checking is essentially complete:
| Metric | Status |
|---|---|
| Total compiler test cases | ~20,000 |
| Error-producing test cases | ~6,000 |
| Remaining discrepancies | 74 (98.8% complete) |
| Performance improvement | ~10x faster |
| --incremental, --build, project references | ✅ All ported |
The transformer ecosystem is preparing for migration. Every serious TypeScript transformer developer—including myself with typia—is planning the transition to TypeScript 7's Go-based architecture. The current JavaScript-based TypeScript Compiler API will become legacy infrastructure.
Yet Symbolica is starting from scratch on the legacy platform:
- 17k LOC with zero tests (vs. typia's 330k+ LOC with 18,000+ test cases)
- Incomplete implementation that can't handle TypeScript's full type system complexity
- Building on architecture that will be superseded within weeks
The strategic question: Can Symbolica complete a production-ready transformer before TypeScript 7.0 renders the current Compiler API obsolete?
More directly: Why reinvent typia poorly when you could simply use it?
- It's MIT-licensed and free for commercial use
- It's battle-tested with years of production hardening
- The author (me) will handle the TypeScript 7 migration—saving Symbolica the engineering effort entirely
The timing genuinely puzzles me. I've spent years in this ecosystem. I know what it takes to build a production-ready transformer—the edge cases, the type system complexity, the endless testing cycles. And I know that every serious transformer developer is currently preparing for TypeScript 7's Go-based architecture.
So when I see a company start building a transformer from scratch in late 2025—on a platform weeks away from obsolescence, without tests, while claiming "independent development"—I genuinely struggle to understand the technical reasoning.
Is this a team that deeply understands the TypeScript compiler ecosystem and made a deliberate architectural choice? Or is there a gap between the marketing narrative and the technical reality?
I don't know the answer. But this question was one of the reasons I suggested in my email that Symbolica simply use typia directly. It's MIT-licensed, it works, and I'll handle the TypeScript 7 migration myself. Why spend engineering resources rebuilding something that already exists—especially on infrastructure that's about to change fundamentally?
8. Coincidence vs. Imitation
Summarizing observations so far:
- Project name: @agentica (identical)
- Core concept: Auto-generating LLM schemas via TypeScript Compiler API (Compiler-Driven Development → Code Mode)
- Build integration: Nearly identical code patterns as unplugin-typia
- RPC approach: TGrid's JavaScript Proxy + Promise-based WebSocket RPC pattern
- Documentation concepts: Validation Feedback, TypeScript Controller, JSDoc parsing strategies
- Code maturity: 17k LOC claiming to replicate 400k+ LOC functionality, zero test files
Timeline: tgrid (2015), typia (2022), unplugin-typia (2024-07), @agentica (2025-02), @symbolica/agentica (2025-12). Symbolica AI responded: "Only unplugin-typia concept was referenced; all other technology is independently developed."
8.1. Independent Development (Coincidence or Convergent Evolution)
TypeScript Compiler API usage and JavaScript Proxy-based RPC are known patterns, so both teams could have independently reached the same technical choices. Before typia, prior research like typescript-is and ts-runtime-checks existed. The project name @agentica is a natural compound (Agent+ica).
However, continuous similarities from project name through core concepts, architecture, to RPC patterns are difficult to explain solely by coincidence or convergent evolution. Particularly with nearly identical unplugin-typia code, and acknowledging they referenced unplugin-typia while claiming unfamiliarity with typia (literally in the name), this explanation is hard to accept.
8.2. Concept Borrowing Then Independent Implementation
Possibility: Symbolica discovered LLM features on typia homepage, learned full architecture via @agentica documentation, studied build integration via unplugin-typia code, referenced tgrid's RPC patterns, then independently implemented based on this.
Evidence: identical project name, identical core concept (Compiler-Driven Development → Code Mode), similar documentation structure (Validation Feedback, TypeScript Controller, JSDoc), nearly identical unplugin-typia code patterns, similar WebSocket RPC patterns (JavaScript Proxy, bidirectional RPC, Promise), clear temporal precedence (@agentica Feb 2025 → @symbolica/agentica Dec 2025), and questionable code maturity (17k LOC vs 400k+, zero tests).
Symbolica implemented additional features like sophisticated type serialization and Python support, and developed TypeScript Transformer independently without using typia. However, the limited codebase and absence of tests raise questions about implementation depth. This appears to be concept understanding and reimplementation, not simple copying.
Even so, if MIT license project concepts were borrowed, acknowledging sources is open source community etiquette. Particularly having admitted referencing unplugin-typia, the complete absence of mentions of typia or @agentica raises questions.
8.3. My Position
With nearly identical unplugin-typia code and admission of referencing unplugin-typia, claiming unfamiliarity with typia is hard to accept. Continuous similarities from project name through concepts, architecture, to RPC patterns suggest they likely referenced my projects.
MIT licenses permit commercial use and modification, but acknowledging borrowed concepts is basic etiquette for open source community trust and transparency.
9. Open Source Etiquette
9.1. Honoring typescript-is
// runtime validators came from typescript-is
export function is<T>(input: unknown): input is T; // returns boolean
export function assert<T>(input: unknown): T; // throws TypeGuardError
export function assertGuard<T>(input: unknown): asserts input is T;
export function validate<T>(input: unknown): IValidation<T>; // detailed
// json schema functions since typescript-json
export namespace json {
export function schema<T>(): IJsonSchemaUnit<T>; // JSON schema
export function stringify<T>(input: T): string; // safe and faster
}
https://dev.to/samchon/good-bye-typescript-is-ancestor-of-typia-20000x-faster-validator-49fi
I originally created typescript-json; when maintenance of the runtime validator library typescript-is was discontinued, I adopted its validation functions while renaming typescript-json to typia, and wrote a tribute post to typescript-is on the dev.to community.
This is how open source should work. When borrowing major concepts from other open source libraries, even without copying entire codebases, sources should be acknowledged. Even if typia only borrowed typescript-is's function interfaces while independently developing code and logic, the function design and concepts still have an original author whose ideas should be respected.
9.2. MIT License and Open Source Etiquette
My projects (typia, tgrid, @agentica) and Ryoppippi's unplugin-typia all use MIT licenses.
MIT licenses permit commercial use, modification, distribution, and private use very permissively. However, MIT license has one condition: "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software." If substantially referencing or applying unplugin-typia code without including the original copyright notice, this may not fully comply with the MIT license requirements.
Of course that's a legal requirement, but separate from legal requirements, the open source community has implicit etiquette. Direct code copying or modification obviously requires acknowledging original authors and licenses. Referencing architecture or design merits "Inspired by" attribution. Even borrowing concepts or ideas often gets mentioned in README or documentation acknowledgment sections. This isn't legal obligation but a convention for mutual respect and transparency among open source developers. My writing about typescript-is followed this context.
9.3. License Conversion Issue
One more concerning point: @symbolica/agentica uses the "Symbolica Source-Available License Version 1.0" commercial license. This license permits general use but prohibits providing as hosted services or redistributing as competing frameworks. Whether developing by referencing MIT license project concepts/architecture then distributing under restrictive licensing aligns with open source spirit is debatable.
MIT licenses don't legally prohibit such acts. But shouldn't referenced open source projects be acknowledged? Is converting ideas received from the open source community back to restrictive licensing fair? Can promoting as independently developed without acknowledging sources earn community trust? This isn't merely my personal issue but a question about the entire open source ecosystem's health.
10. Closing
Writing this article involved considerable deliberation. I questioned whether I was being overly sensitive, and whether this could truly be coincidence and I was hasty in my judgment.
However, observing continuous similarities—code similarity with unplugin-typia, concepts introduced on typia homepage, @agentica architecture, tgrid RPC patterns, and questionable code maturity (17k LOC vs 400k+, zero tests)—I judged sharing this with the community was appropriate.
Symbolica AI is a team of talented engineers with genuine innovations like Python integration and sophisticated type serialization. For such innovations to be properly recognized, transparently acknowledging inspiration or references from existing open source projects might actually help.
I'd like to hear your thoughts. How do you interpret these similarities? What level of attribution is appropriate when referencing open source projects? What do you think about referencing MIT license project concepts then distributing under restrictive licensing? How should I respond to this situation? I appreciate your advice and opinions. Thank you.
11. Postscript: Ryoppippi's Testimony
While writing this article, Ryoppippi, author of unplugin-typia, tweeted on January 12, 2026:
"自分をhiringしようとしていた会社が、hiringに失敗した後に俺のOSSから実装をコピーしてcreditを消して公開していた件について
1ヶ月くらい調査してたけどどっかでblogを書くと思う 厚顔無恥にも程がある
数日前にしれっとcreditを追加して、「あなたも載ってますよ!feedbackください!」とか言ってくる まじでくそ
MITライセンス違反しておいてよくまあそんなことができるもんだ 近々英語のblogができます"
(Translation) "About the company that tried to hire me—after hiring failed, they copied implementation from my OSS, removed credits, and published. I've investigated for about a month and will probably write a blog somewhere. The shamelessness is unbelievable. A few days ago they quietly added credit and said 'You're listed! Please give feedback!' Seriously awful. After violating MIT license they can still do this. English blog coming soon."
In follow-up tweets (January 12-13), Ryoppippi revealed:
- Symbolica AI attempted to hire him, then after hiring failed copied unplugin-typia code
- Initially provided no credit, then belatedly added it after he raised concerns (MIT license violation)
- Symbolica CEO explicitly acknowledged "digging into unplugin-typia"
- "The name was also copied from wrtnlab where I used to work" (Ryoppippi was formerly at WrtnLabs)
- "samchon's OSS side is also quite problematic"
- Pursuing this from pure sense of justice, not financial compensation
In additional tweets on January 13, Ryoppippi provided more timeline details and shocking news:
"ちなみに元ネタはこれです
- 10月に面接に呼ばれて行ったらこの話題が出た
- 12月にsymbolica/agenticaが公開されたらlogicほぼ同じだったので、claude codeと一緒に調査したら類似性が認められた。実際彼らが何をやっているのか俺は一行ずつ解読できるレベル"
(Translation) "By the way, the original is this [referring to unplugin-typia]. In October, I was invited to an interview and this topic came up. In December, when symbolica/agentica was released, the logic was almost the same, so I investigated with Claude Code and found similarities. I can actually decode what they're doing line by line."
"てか、面接でwrtnlabs/agenticaの話も出たから名前もパクってると思ってるけどね (おっと面接の内容はNDAなんだった)"
(Translation) "By the way, since wrtnlabs/agentica was also discussed in the interview, I think they copied the name too (oops, the interview content was under NDA)"
Ryoppippi's tweets suggest much.
Personally, I struggle to understand Symbolica AI's behavioral logic. After the hiring failed, copying Ryoppippi's OSS code, omitting credits, promoting it as self-developed and self-invented, then belatedly adding credits only after concerns were raised while saying "You're listed! Please give feedback!"—whether this attitude befits a company that values open source community trust and transparency is questionable.
For reference, Symbolica AI's quiet credit addition resulted from my December 2025 email to Symbolica requesting attribution with this document's content, specifically pointing out that unplugin-typia code had been substantially copied. This also makes it somewhat understandable why Symbolica AI acknowledged only unplugin-typia rather than consistently claiming "independent development" across all the MIT-licensed projects, and why that partial acknowledgment ended up inviting further negative inferences.
Moreover, Ryoppippi's revelation that @agentica was explicitly discussed during his October 2025 interview—two months before Symbolica released @symbolica/agentica in December 2025—directly contradicts Symbolica's claim of "independent development" for everything except unplugin-typia. They demonstrably knew about our project before developing theirs.
While writing this article, Ryoppippi's tweets kept revealing new facts. My perspective when drafting the bulk of this article may differ from my current view after reading his testimony.
I wrote most of this before reading the tweets, so I used measured language throughout. But frankly speaking—as Section 7 shows—their code has zero tests, the quality looks like it was written by a drunk AI, and they're building it on a platform that's weeks away from obsolescence (TypeScript 7.0 is coming).
Seeing someone implement concepts I spent years developing, in code this sloppy, on infrastructure about to be replaced... something just felt wrong. My open source projects and concepts aren't famous, but being obscure doesn't mean they deserve to be treated this way.
Ryoppippi's revelations have significant implications, and I probably should revise this article substantially to reflect them. But continuing to write is making me increasingly frustrated, so I'll stop here. I ask for readers' understanding.
Anyway... Coincidence? Independent Development? Convergent Evolution? Well...






Top comments (2)
Yeah… this doesn’t smell like “convergent evolution.” It smells like convergent convenience.
Convergent evolution is: “we both independently used the TS compiler API to generate schemas.” That’s a reasonable hill.
What you’re describing is more like:
same product name
same “compiler-driven schema” framing (just rebranded as “code mode”)
near-identical unplugin glue code
and the really telling part: the rare WS + Proxy + Promise RPC shape that basically nobody picks unless they’ve either lived in it for years… or copied the playbook
At some point it stops being coincidence and becomes a fingerprint.
Legal vs etiquette vs engineering reality
MIT legal bar: keep the copyright + license notice in “substantial portions.” If credits were removed and only re-added after a complaint, that’s not “oops,” that’s non-compliance until caught.
OSS etiquette bar: if your architecture is clearly downstream of a known ecosystem, you do the adult thing and add “inspired by / built on / thanks to” up front. Not after social pressure.
Engineering reality bar: 17k LOC and zero tests trying to replicate typia + agentica + tgrid vibes is… bold. Type-level tooling is where edge cases go to multiply. Without a test suite, you’re not “independently developed,” you’re unproven.
MindsEye lens (my bias, but it fits here)
This whole situation is exactly why I’m obsessive about ledger-first systems: you don’t argue vibes, you argue artifacts.
If someone claims independence, the clean way to settle it is traceability:
design notes / commits showing evolution
first-introduction timestamps of patterns
diffs demonstrating “this isn’t derived”
attribution trail that exists before the controversy
When attribution appears only after you point it out, that’s basically the system admitting: “yeah, the trace exists — we just didn’t want it visible.”
My “soiled take”
Call it what it is: not convergent evolution — it’s concept/implementation borrowing with PR-layer amnesia.
And the part that really trips me is the licensing posture. Taking MIT-derived building blocks, then shipping under a restrictive “source-available” license while marketing it as novel… that’s not illegal by default, but it is the kind of move that makes OSS communities stop trusting you.
If Symbolica wants to be taken seriously: lead with attribution, document the deltas, keep credits intact, and stop pretending the ecosystem doesn’t exist. The tech can still be good — but the story has to be clean.
(Also: the TS 7 Go timing question is legit. Building a fresh transformer on legacy compiler architecture in late 2025 without a migration plan is either a strategic blind spot or marketing getting ahead of engineering.)
github.com/symbolica-ai/agentica-t...
A single ~30k LOC commit was found at one point. Can't be sure what happened before that.