Alexander Loth

From Misinformation to Agentic AI: Where My Research Is Heading

Two years ago, I started studying AI-generated misinformation. I wanted to understand how large language models produce convincing false content, how quickly that content spreads, and whether humans can even tell the difference anymore. That work led to four papers at The Web Conference 2026, tools like JudgeGPT and RogueGPT, and a growing concern I could not shake: the problem is bigger than text.

Because while I was studying how AI breaks trust through information, something else was happening. AI was gaining the ability to act.

The Shift I Noticed

Misinformation research forced me to think carefully about trust. Can you trust that a piece of content is what it claims to be? Did a human write it? Is the source legitimate? Can verification tools keep pace with generation quality?

These are information integrity questions. But they turn out to be a specific instance of a broader challenge: how do you maintain trust and oversight when AI systems operate with increasing autonomy?

The same questions that apply to AI-generated text apply -- with more urgency -- to AI agents taking actions on your behalf. When an agent sends an email, edits a document, or executes a command in your environment, how do you know it did what you intended?

Misinformation is a trust problem in the information layer. Agentic AI is a trust problem in the action layer.

PowerSkills as a Practical Case Study

One project I have been building is PowerSkills -- Windows automation skills for AI agents. It gives agents structured access to Outlook (email and calendar), Edge browser via Chrome DevTools Protocol, desktop automation, and shell commands.

PowerSkills is open source (MIT license) and installable via AgentSkills:

npx skills add aloth/PowerSkills

Every command returns a consistent JSON envelope:

{
  "status": "success",
  "exit_code": 0,
  "data": { ... },
  "timestamp": "2026-03-06T16:00:00+01:00"
}
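A consumer of this envelope can validate it before trusting the result. Here is a minimal sketch in Python; the field names follow the envelope above, but the helper function is my own illustration, not part of PowerSkills:

```python
import json
from datetime import datetime

REQUIRED_FIELDS = {"status", "exit_code", "data", "timestamp"}

def parse_envelope(raw: str) -> dict:
    """Parse and validate a PowerSkills-style JSON envelope.

    Raises ValueError if required fields are missing or inconsistent,
    so an agent never acts on a malformed result.
    """
    envelope = json.loads(raw)
    missing = REQUIRED_FIELDS - envelope.keys()
    if missing:
        raise ValueError(f"envelope missing fields: {sorted(missing)}")
    # Cross-check status against exit_code: a "success" with a non-zero
    # exit code is a contradiction worth surfacing, not ignoring.
    if envelope["status"] == "success" and envelope["exit_code"] != 0:
        raise ValueError("status 'success' but non-zero exit_code")
    # Confirm the timestamp is well-formed ISO 8601 (with offset).
    datetime.fromisoformat(envelope["timestamp"])
    return envelope

raw = ('{"status": "success", "exit_code": 0, '
       '"data": {"sent": true}, "timestamp": "2026-03-06T16:00:00+01:00"}')
result = parse_envelope(raw)
print(result["data"])
```

The point of the consistent envelope is exactly this: one validation path works for every command, whether it touched Outlook, Edge, or the shell.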

Building it clarified something: the agent-tool interface is a design problem, not just an engineering one. Decisions about what to expose, what to restrict, and how to structure output all affect how safely and predictably an agent operates. A well-designed tool surface makes agent behavior more auditable.
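One way to make that design point concrete is to wrap every exposed command in an allow-list check and an audit log, so each action the agent takes is both constrained and recorded. This is a hypothetical sketch; the command names and the wrapper are illustrative assumptions, not PowerSkills' actual API:

```python
from datetime import datetime, timezone

# Hypothetical allow-list: only these commands are exposed to the agent.
ALLOWED_COMMANDS = {"outlook.send_mail", "edge.open_url", "shell.run"}

# Every call, allowed or rejected, lands here for later review.
AUDIT_LOG: list[dict] = []

def run_tool(command: str, args: dict) -> dict:
    """Dispatch an agent command through an allow-list and audit log,
    returning a JSON-style envelope either way."""
    timestamp = datetime.now(timezone.utc).isoformat()
    entry = {"command": command, "args": args, "timestamp": timestamp}
    if command not in ALLOWED_COMMANDS:
        entry["status"] = "rejected"
        AUDIT_LOG.append(entry)
        return {"status": "error", "exit_code": 1,
                "data": {"reason": f"command not allowed: {command}"},
                "timestamp": timestamp}
    # Placeholder for the real dispatch to Outlook, Edge, or the shell.
    entry["status"] = "executed"
    AUDIT_LOG.append(entry)
    return {"status": "success", "exit_code": 0,
            "data": {"command": command}, "timestamp": timestamp}

print(run_tool("registry.delete_key", {})["status"])
print(run_tool("edge.open_url", {"url": "https://example.com"})["status"])
```

The wrapper costs a few lines, but it turns "what did the agent do?" from a reconstruction exercise into a log query.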

Open Questions for Builders

If you are building agentic systems, these are the questions I think matter most:

  • Verification of agent actions. When an agent completes a task, how does a human confirm it did the right thing?
  • Trust calibration. How should trust in an agent accumulate or decay based on observed behavior?
  • Agent-tool interface design. How do you design tool interfaces that make unsafe actions harder and correct actions clearer?
  • Multi-agent oversight. As agents orchestrate other agents, who watches the watcher?

What Is Next

More empirical work with PowerSkills, expanding the agent-tool interface research, and connecting the information integrity thread to agentic AI. The trust questions do not go away just because the AI got more capable. They get harder.

Full post: alexloth.com
