
Anton Malofeev


Thoughts about tooling | progress on semantic intents

Ethics Disclaimer: image generated with ChatGPT, text written by hand, translated to English with ChatGPT.

A quick update and a small eureka on semantic intents v2, which I resumed working on several weeks ago while building the AI system.

In the first version, my focus was on “how to design a document format that would be ideal for an agent.”

In the second version, the focus shifted to “how to simplify and automate the agent’s workflow” when working with those documents.

In simple terms, I started creating tools (Python and Bash scripts) specifically to automate the agent’s work — for example:

  • validating task execution
  • automatically gathering context
  • analyzing code
  • analyzing business logic
  • etc.

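To make the first item concrete, here is a minimal sketch of what such a task-validation tool might look like. Everything in it (the function name, the phrase-based checks, the file layout) is a hypothetical illustration, not the actual implementation:

```python
# Hypothetical sketch: validate an agent's task output against simple,
# hardcoded acceptance criteria (the output file exists and contains
# required phrases). Real tooling would read the criteria from the
# semantic-intent document itself.
import os

def validate_task(output_path, required_phrases):
    """Return a list of failed checks (an empty list means the task passed)."""
    failures = []
    if not os.path.exists(output_path):
        failures.append(f"missing output file: {output_path}")
        return failures
    with open(output_path, encoding="utf-8") as f:
        text = f.read()
    for phrase in required_phrases:
        if phrase not in text:
            failures.append(f"required phrase not found: {phrase!r}")
    return failures

# Usage: write a demo output file, then validate it.
with open("result.md", "w", encoding="utf-8") as f:
    f.write("## Summary\nAll tests passed.\n")

print(validate_task("result.md", ["## Summary", "tests passed"]))  # → []
```

The point is not the checks themselves but where they live: the agent only calls the validator and reads back a pass/fail list, instead of re-reading the output and reasoning about it from scratch every time.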
As a result, the role of the agent has changed.

Instead of launching countless agents to explore code or read documentation, agents now use tools / scripts / analytical dashboards — and the actual analysis, ranking, and problem detection have moved into hardcoded tools.

In essence, if we use an analogy:

Old format:
A human throws raw data into a viewer / manipulator / analytical tool (like Excel) → then works with the result of the analysis / transformation.

New format:
An agent launches a program, specifies where to pull raw data from (code, etc.), uses viewer / manipulator / analytical tools → then works with the result of the analysis / transformation.
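The new format can be sketched in a few lines. This is an assumed, simplified example (a trivial per-file line count standing in for real analysis); the function name and output shape are mine, not the author’s:

```python
# Hypothetical sketch of the "new format": a hardcoded tool does the
# heavy lifting (scanning the raw data), and the agent only consumes
# the structured result. Here the "analysis" is a trivial line count
# per Python file, ranked largest-first.
import json
import os

def analyze_codebase(root):
    """Walk `root` and return a ranked summary the agent can work with."""
    stats = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8") as f:
                    stats.append({"file": path, "lines": sum(1 for _ in f)})
    # Ranking happens inside the tool, not inside the agent.
    stats.sort(key=lambda s: s["lines"], reverse=True)
    return stats

# Usage: the agent points the tool at raw data and reads the summary.
os.makedirs("demo_src", exist_ok=True)
with open("demo_src/a.py", "w", encoding="utf-8") as f:
    f.write("x = 1\ny = 2\n")
print(json.dumps(analyze_codebase("demo_src"), indent=2))
```

The agent never touches the raw files; it receives a compact, already-ranked summary — exactly the Excel analogy above, with the agent in the human’s seat.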

In other words, the less repetitive work the agent has to do — including document analysis, issue analysis, etc. — the better.

Just as a human can be overloaded by large amounts of data and is forced to reach for tools to handle it,

the same applies to an agent / AI: it can be overloaded by that kind of work too. So, just as for humans, it is better to build tools that help it operate on large volumes of data.
