Kyungsu Kang

Easy and simple AI development for web developers!

Introduction

Hello! Lately I’ve been writing a lot about AI, but these days it can sound like something from another world. Terms like AI, artificial intelligence, singularity, agent, and MCP each carry so much weight that if you dive deep into any one of them, an entire day can vanish in an instant. Such discussions may seem far removed from what most developers, especially frontend and backend developers, usually work on. Still, with all the talk about a tough job market, you might feel you should learn at least the basics of AI to stay competitive. For those people, I’d like to introduce a newly released library. Interestingly, it was developed by a Korean startup and is called Agentica.

https://github.com/wrtnlabs/agentica

"Interest, questions, feedback, and stars for the library serve as strong motivation for open-source developers."

What is Agentica?

https://github.com/wrtnlabs/agentica

Agentica is a library that, as shown above, converts TypeScript classes and Swagger documents into LLM function-calling schemas. It helps anyone developing multi-agent orchestration or agentic AI. That might sound overly complicated, but here are the key points to focus on:

  • Anyone who knows TypeScript can develop AI. → Now even frontend developers can build AI!
  • Anyone who can create Swagger can develop AI. → Now even backend developers can build AI!

Is it really possible to develop AI with just these two ingredients? Aren’t fields like AI, with their complex terms such as machine learning or regression analysis, intimidating enough that one must memorize and master them all? If you think so, consider this: How did frontend development work in the days before React or jQuery? And how was development done before frameworks like Spring, NestJS, or Django existed? Development back then was incredibly challenging—more so than we can imagine today.

"The reason AI development is difficult now is because there aren’t yet enough libraries and frameworks to assist with it."

My conclusion is that AI is no different. The AI ecosystem is still in its infancy, and surprisingly, practical development is lagging behind theoretical advancements. This is why various new libraries are emerging. Personally, I find Agentica extremely intriguing, and before diving into further details, let me first share how you can use it.

Function Calling Through TypeScript Classes in Agentica

There’s no need to think of the term Function Calling as something difficult. Function Calling means that the LLM can invoke functions during a conversation. In simpler terms, it’s not just a chatbot that converses; it can perform actions when needed—like ordering food or transferring money—taking appropriate actions at the right moment for the user.

"Function calling provides a powerful and flexible way for OpenAI models to interface with your code or external services. This guide will explain how to connect the models to your own custom code to fetch data or take action." (from the OpenAI documentation)
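For context, when you use function calling against a chat API directly, you have to describe every callable function with a hand-written JSON schema. Here is a sketch for a single hypothetical `sendEmail` function (the name and parameters are my own illustration, not part of any library). Generating these schemas automatically from TypeScript types is exactly the chore Agentica takes over:

```typescript
// A hand-written tool schema of the kind raw function calling expects.
// Every field here is something you would otherwise maintain by hand
// and keep in sync with your actual code.
const sendEmailTool = {
  type: "function",
  function: {
    name: "sendEmail",
    description: "Send an email to a single recipient.",
    parameters: {
      type: "object",
      properties: {
        to: { type: "string", description: "Recipient email address" },
        subject: { type: "string", description: "Subject line" },
        body: { type: "string", description: "Plain-text body" },
      },
      required: ["to", "subject", "body"],
    },
  },
};
```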

Now, let’s look at a simple code example that demonstrates how to use Function Calling.

Example: Creating an AI Agent Using Gmail Code

You can read more about this on this tutorial page.

```typescript
import { Agentica } from "@agentica/core";
import typia from "typia";
import dotenv from "dotenv";
import { OpenAI } from "openai";

import { GmailService } from "@wrtnlabs/connector-gmail";

dotenv.config();

export const agent = new Agentica({
  model: "chatgpt",
  vendor: {
    api: new OpenAI({ apiKey: process.env.OPENAI_API_KEY! }), // Insert your OpenAI key
    model: "gpt-4o-mini", // Set the model
  },
  controllers: [
    {
      name: "Gmail Connector", // A descriptive name to help the LLM choose the function
      protocol: "class",
      application: typia.llm.application<GmailService, "chatgpt">(), // Provide the class
      execute: new GmailService({ ... }), // Pass an instance of the class
    },
  ],
});

const main = async () => {
  console.log(await agent.conversate("What can you do?")); // Now conversation is possible!
};

main();
```

How is that? The code is much shorter than you might expect.

Looking at the code, here’s what’s happening:

  • Model: the schema format that the function-calling metadata targets; here we use chatgpt.
  • Vendor: the service that actually provides the LLM; here we use OpenAI with gpt-4o-mini.
    • If you need a key, you can obtain one from this link and then add about 5 dollars of credit.
  • Controller: the set of functions the agent can call. By registering a class (as shown in the comments), the LLM can freely access its public methods.

Additional Explanation About the Controller

```typescript
export const agent = new Agentica({
  model: "chatgpt",
  vendor: {
    api: new OpenAI({ apiKey: process.env.OPENAI_API_KEY! }), // Insert your OpenAI key
    model: "gpt-4o-mini", // Set the model
  },
  controllers: [],
});
```

In Agentica, a controller represents a collection of functions that the LLM can call. When you supply a class, the LLM reads the class information—as interpreted by the TypeScript compiler—and understands its public member functions. Moreover, since the actual calls are made by an instance, you must provide an instance.
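To make this concrete, here is a minimal sketch of a custom controller class of my own (`CalculatorService` is illustrative, not a published connector). The compiler-visible types and doc comments are what the LLM learns from; the instance is what executes:

```typescript
/**
 * A minimal service class. Each public method, its parameter types, and
 * these doc comments are what the TypeScript compiler exposes to the LLM
 * as function schemas.
 */
class CalculatorService {
  /** Add two numbers and return the sum. */
  add(props: { x: number; y: number }): number {
    return props.x + props.y;
  }

  /** Multiply two numbers and return the product. */
  multiply(props: { x: number; y: number }): number {
    return props.x * props.y;
  }
}

// The instance performs the actual calls, which is why Agentica asks
// for one in the `execute` field.
const calculator = new CalculatorService();
```

Following the pattern shown earlier, you would register it with `protocol: "class"`, `application: typia.llm.application<CalculatorService, "chatgpt">()`, and `execute: calculator`.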

```shell
npm install @wrtnlabs/connector-gmail
```
```typescript
export const agent = new Agentica({
  model: "chatgpt",
  vendor: {
    api: new OpenAI({ apiKey: process.env.OPENAI_API_KEY! }), // Insert your OpenAI key
    model: "gpt-4o-mini", // Set the model
  },
  controllers: [
    {
      name: "Gmail Connector", // A descriptive name to help the LLM choose the function
      protocol: "class",
      application: typia.llm.application<GmailService, "chatgpt">(), // Provide the class
      execute: new GmailService({ ... }), // Pass an instance of the class
    },
  ],
});
```

Thus, you can define and use the GmailService as shown here. In GmailService, public methods are defined for actions such as:

  • Sending emails
  • Drafting emails
  • Retrieving email lists and details

This means that you can now call these functions through conversation.

```typescript
(async function () {
  await agent.conversate("Can you forward an email to my acquaintance?");
  await agent.conversate("Their email is ABC@gmail.com."); // The conversation continues!
})();
```
  • If needed, you can also create an agent that directly manages computer files by redefining the fs module as a class. See this tutorial.
  • Likewise, frontend developers can now define and manipulate browser functions as classes.
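As a sketch of the fs idea (my own illustration, not the connector from the tutorial), wrapping a few fs calls in a class is enough to make them callable through conversation:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

/**
 * An illustrative wrapper around part of the fs module, exposing file
 * operations as public methods an LLM could call. The class name and
 * method set are hypothetical.
 */
class FileService {
  constructor(private readonly root: string) {}

  /** List the file names inside the root directory. */
  list(): string[] {
    return fs.readdirSync(this.root);
  }

  /** Write text content to a file under the root directory. */
  write(props: { name: string; content: string }): void {
    fs.writeFileSync(path.join(this.root, props.name), props.content, "utf8");
  }

  /** Read a file under the root directory as text. */
  read(props: { name: string }): string {
    return fs.readFileSync(path.join(this.root, props.name), "utf8");
  }
}

// Example usage against a temporary directory.
const root = fs.mkdtempSync(path.join(os.tmpdir(), "agent-fs-"));
const files = new FileService(root);
files.write({ name: "note.txt", content: "hello" });
```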

There are many pre-built controllers available, so it might be worthwhile to take a look.

Function Calling Through Swagger in Agentica

```shell
npm install @samchon/openapi
```
```typescript
import { Agentica } from "@agentica/core";
import { HttpLlm, OpenApi } from "@samchon/openapi";
import dotenv from "dotenv";
import OpenAI from "openai";

dotenv.config();

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export const SwaggerAgent = async () =>
  new Agentica({
    model: "chatgpt",
    vendor: {
      api: openai,
      model: "gpt-4o-mini",
    },
    controllers: [
      {
        name: "PetStore", // Name of the connector (can be any descriptive name)
        protocol: "http", // Indicates an HTTP-based connector
        application: HttpLlm.application({
          // Convert the Swagger JSON document to an OpenAPI model for Agentica.
          document: OpenApi.convert(
            await fetch("https://petstore.swagger.io/v2/swagger.json").then(
              (r) => r.json()
            )
          ),
          model: "chatgpt",
        }),
        connection: {
          // The actual API host where the requests will be sent.
          host: "https://petstore.swagger.io/v2",
        },
      },
    ],
  });
```

Next up is Swagger. Now, backend developers can create an agent simply with the Swagger documentation they built. Whether you load Swagger via fetch or as a file, once you read it in as JSON, you just need to substitute it for the application field as shown below:

```typescript
{
  ...
  application: HttpLlm.application({
    document: OpenApi.convert(swagger),
  }),
  ...
}
```

Then, the LLM will be able to call the API based solely on the Swagger document—without the need for class definitions for Function Calling!
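If your Swagger document lives on disk rather than behind a URL, reading it in as JSON works the same way. A minimal sketch (the document content below is a stand-in, not a real spec):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Write a minimal stand-in Swagger/OpenAPI document to a temp file,
// then read it back as JSON, the same way you would read your own file.
const swaggerPath = path.join(os.tmpdir(), "swagger.json");
fs.writeFileSync(
  swaggerPath,
  JSON.stringify({
    openapi: "3.0.0",
    info: { title: "demo", version: "1.0.0" },
    paths: {},
  })
);
const swagger = JSON.parse(fs.readFileSync(swaggerPath, "utf8"));
// `swagger` can now be handed to OpenApi.convert(...) exactly as in the
// fetch-based example.
```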

How Agentica Works

The Greatest Strength and Weakness of AI—and the Solution

The greatest strength of AI is that it always returns different results. If it always returned the same result, AI wouldn’t be as popular as it is. However, its greatest weakness is also that it always returns different results. This unpredictability is something traditional frontend and backend development has rarely had to deal with. Even if you try to guard against it with test code, the only option is to invoke it many times and verify stability probabilistically. In this kind of development, reaching 80% completeness might take just 20% of the effort, but pushing the remaining 20% toward perfection takes the other 80%. This is why there are still so few fully polished AI products.

Agentica tackles these issues by:

  • Using TSC (the TypeScript compiler) to provide compile-time stable documentation that teaches the AI about functions.
  • Similarly, using the TypeScript compiler to correct the AI’s mistakes with remarkable precision.

For example, you can instruct:

“The type of the element e in the array d, nested inside c inside b inside a is incorrect. Fix it.”

Inside Agentica, errors are corrected in a similar fashion:

```typescript
// Inside Agentica, errors are corrected as follows.
{
  success: false, // The operation failed,
  errors: [{ // and hints on how to fix it are provided at every step.
    path: "$input.a.b.c.d[0].e",
    expected: "number",
    value: "abc"
  }],
  data: { // The value that the AI actually provided.
    a: {
      b: {
        c: {
          d: [{
            e: "abc"
          }]
        }
      }
    }
  }
}
```

Every time the AI calls a function, it can determine that:

"The type of a.b.c.d[0].e is number, but I provided "abc"!"

It’s amazing how a field like AI—seemingly so distant—suddenly incorporates compilation!
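This feedback loop can be illustrated without any library at all. Here is a toy validator of my own (far simpler than the validators typia generates, but producing the same kind of path/expected/value hint):

```typescript
interface IValidationError {
  path: string;     // where in the input the problem is
  expected: string; // the type the schema requires
  value: unknown;   // the value the AI actually sent
}

// A toy check for the shape { a: { b: { c: { d: { e: number }[] } } } }.
// Real generated validators cover the whole type, but the hint format
// fed back to the LLM is the same idea.
function validateInput(input: any): { success: boolean; errors: IValidationError[] } {
  const errors: IValidationError[] = [];
  const d: any[] = input?.a?.b?.c?.d ?? [];
  d.forEach((item, i) => {
    if (typeof item?.e !== "number")
      errors.push({
        path: `$input.a.b.c.d[${i}].e`,
        expected: "number",
        value: item?.e,
      });
  });
  return { success: errors.length === 0, errors };
}

const result = validateInput({ a: { b: { c: { d: [{ e: "abc" }] } } } });
// result.errors[0] → { path: "$input.a.b.c.d[0].e", expected: "number", value: "abc" }
```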

Implications of Agentica

This concludes a brief introduction to Agentica. Even beyond Agentica, many open-source projects are emerging, and this discussion highlights what frontend and backend developers should be preparing for. In my view, there are four key points:

  1. As open-source projects increase, AI development will become easier and evolve to the level of service development.
  2. Code that cannot be explained to AI will eventually become unusable by AI, so code readability and documentation are essential.
  3. Similarly, to explain your work, you need to be proficient not only at the code level but also in business and domain knowledge.
  4. Just like with compilers, learning AI development requires a strong foundation in computer science—it will only become more important.

Agentica is not just a library for TypeScript. It should be seen as a tool for web developers—especially frontend developers—who make up more than half of all developers. As interest in AI-driven services continues to grow, I believe that frontend developers should familiarize themselves with libraries like this and prepare for the AI era. This will undoubtedly be advantageous for their careers.

Conclusion

If you enjoyed reading this, why not consider enhancing your project with Agentica? Whether by adding classes to your existing frontend or by generating Swagger for your existing server to create a chatbot, your project can become even more impressive. Thank you for reading.
