Hagicode

Posted on • Originally published at docs.hagicode.com

Progressive Disclosure: Improving Human-Computer Interaction in AI Products with Less-is-More Philosophy


In AI product design, the quality of user input often determines the quality of output. This article shares a "progressive disclosure" interaction solution we practiced in the HagiCode project. Through step-by-step guidance, intelligent completion, and immediate feedback, we transform users' brief and vague inputs into structured technical proposals, significantly improving human-computer interaction efficiency.

Background

Those working on AI products have likely encountered this scenario: a user opens your application, excitedly types a requirement, but the AI returns completely irrelevant content. It's not that the AI isn't smart—it's simply that the user provided too little information. After all, mind-reading isn't something anyone does well.

This phenomenon was particularly evident during our development of HagiCode. HagiCode is an AI-powered code assistant where users describe requirements in natural language to create technical proposals and conversations. In actual usage, we found that user inputs often had these issues:

  • Uneven input quality: Some users only type a few words, like "optimize login" or "fix bug," lacking necessary context
  • Inconsistent technical terminology: Different users use different terms for the same thing—some say "frontend," others say "FE"
  • Missing structured information: No project background, no repository scope, no impact scope—these key pieces are absent
  • Repetitive issues: The same types of requirements appear repeatedly, requiring explanation from scratch each time

The direct consequences of these issues are: the AI struggles to comprehend, generated proposal quality is unstable, and the user experience suffers. Users think "this AI isn't good," while we feel a bit wronged too: with only one sentence to go on, how is the AI supposed to guess what you want?

Actually, this can't be helped. After all, understanding between people takes time, let alone between humans and machines.

To address these pain points, we made a bold decision: introduce the "progressive disclosure" design philosophy to improve human-computer interaction. This decision turned out to matter more than expected; at the time, we didn't realize how effective it would be.

About HagiCode

The solution shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI code assistant project designed to help developers complete code writing, technical proposal generation, code review, and other tasks through natural language interaction. Project repository: github.com/HagiCode-org/site.

This progressive disclosure solution was distilled through multiple iterations during our actual development process. If you find it valuable, HagiCode itself may be worth your attention as well; after all, good things are worth sharing.

What is Progressive Disclosure

"Progressive Disclosure" is a design principle originating from the HCI (Human-Computer Interaction) field. The core idea is simple: don't display all information and options to users at once; instead, gradually display necessary content based on user actions and needs.

This principle is particularly well-suited for AI products, because AI interaction is naturally progressive—users say a little, AI understands a little, then supplement a bit more, and understands more. Like communication between people, it has to be gradual—after all, no one bares their heart upon first meeting.

Specifically for HagiCode's scenario, we implemented progressive disclosure in four aspects:

1. Description Optimization Mechanism: Let AI Help You Speak Clearly

When users input brief descriptions, we don't hand them directly to the AI. Instead, we first trigger a "description optimization" process. The core of this process is structured output: transforming users' free text into a standard format. Like stringing scattered pearls into a necklace, it turns a mess into something orderly.

The optimized description must include the following standard sections:

  • Background: Problem background and context
  • Analysis: Technical analysis and thought process
  • Solution: Solution and implementation steps
  • Practice: Actual code examples and considerations

At the same time, we automatically generate a Markdown table displaying information such as target repository, path, and edit permissions, facilitating subsequent AI operations. After all, with a clear table of contents, finding things is more convenient.
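For a concrete picture, a generator for such a table might look like the sketch below. The field names and helper function are purely illustrative assumptions, not HagiCode's actual API:

```typescript
// Hypothetical sketch: RepositoryScope and buildScopeTable are illustrative
// names, not part of HagiCode's real codebase.
interface RepositoryScope {
  repository: string;
  path: string;
  editable: boolean;
}

function buildScopeTable(scopes: RepositoryScope[]): string {
  const header = "| Repository | Path | Edit Permission |\n| --- | --- | --- |";
  const rows = scopes.map(
    (s) => `| ${s.repository} | ${s.path} | ${s.editable ? "allowed" : "read-only"} |`
  );
  return [header, ...rows].join("\n");
}

const scopeTable = buildScopeTable([
  { repository: "HagiCode-org/site", path: "docs/", editable: true },
]);
```

Rendering this as Markdown gives the AI (and the user) a compact, scannable summary of where edits are allowed.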

Below is the actual code implementation:

// Core method in ProposalDescriptionMemoryService.cs
public async Task<string> OptimizeDescriptionAsync(
    string title,
    string description,
    string locale = "zh-CN",
    DescriptionOptimizationMemoryContext? memoryContext = null,
    CancellationToken cancellationToken = default)
{
    // Build query parameters
    var queryContext = BuildQueryContext(title, description);

    // Retrieve historical context unless the caller already supplied one
    memoryContext ??= await RetrieveHistoricalContextAsync(queryContext, cancellationToken);

    // Generate structured prompt
    var prompt = await BuildOptimizationPromptAsync(
        title,
        description,
        memoryContext,
        cancellationToken);

    // Call AI for optimization
    return await _aiService.CompleteAsync(prompt, cancellationToken);
}

The key to this process is "memory injection"—we inject historical context such as project conventions, similar cases, and negative patterns into the prompt, allowing the AI to reference past experiences when optimizing. After all, you learn from mistakes—past experiences shouldn't go to waste.

Notes:

  • Ensure current input takes priority over historical memory, avoiding overwriting user-specified information
  • HagIndex references must serve as factual sources and cannot be modified by historical cases
  • Low-confidence correction suggestions should not be injected as strong constraints
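The priority rules above can be sketched as a simple merge, where current input always overrides memory and low-confidence hints are dropped rather than injected as constraints. This is an illustrative TypeScript sketch; the interfaces and the confidence threshold are assumptions, not HagiCode's actual code:

```typescript
// Illustrative only: MemoryHint, mergeContext, and the 0.7 threshold are
// assumptions made for this sketch.
interface MemoryHint {
  field: string;
  value: string;
  confidence: number; // 0..1
}

function mergeContext(
  userInput: Record<string, string>,
  hints: MemoryHint[],
  minConfidence = 0.7
): Record<string, string> {
  const merged: Record<string, string> = {};
  for (const hint of hints) {
    // Low-confidence suggestions are dropped instead of becoming constraints
    if (hint.confidence >= minConfidence) {
      merged[hint.field] = hint.value;
    }
  }
  // Spreading userInput last means current input always beats memory
  return { ...merged, ...userInput };
}
```

The same ordering idea protects HagIndex references: factual sources are applied last (or kept read-only) so historical cases can never overwrite them.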

2. Voice Input Capability: Speaking is More Natural Than Typing

In addition to text input, we also support voice input. This is particularly useful when describing complex requirements—think about it, typing a technical requirement might take several minutes, but speaking might take just a few dozen seconds. The mouth is always faster than the hand.

The design focus of voice input is "state management"—users must clearly understand what state the system is currently in. We defined the following states:

  • Idle: System ready, can start recording
  • Waiting-upstream: Connecting to backend service
  • Recording: Recording user voice
  • Processing: Converting voice to text
  • Error: Error occurred, requires user handling

The frontend state model looks roughly like this:

interface VoiceInputState {
  status: 'idle' | 'waiting-upstream' | 'recording' | 'processing' | 'error';
  startTime?: number;      // Set when recording starts; duration derives from it
  duration: number;
  error?: string;
  deletedSet: Set<string>; // Fingerprint set of deleted results
}

// State transition when starting recording
const handleVoiceInputStart = async () => {
  // First enter waiting state, show loading animation
  setState({ status: 'waiting-upstream' });

  // Wait for backend ready confirmation
  const isReady = await waitForBackendReady();
  if (!isReady) {
    setState({ status: 'error', error: 'Backend service not ready' });
    return;
  }

  // Start recording
  setState({ status: 'recording', startTime: Date.now() });
};

// Handle recognition results
const handleRecognitionResult = (result: RecognitionResult) => {
  const fingerprint = normalizeFingerprint(result.text);

  // Check if already deleted
  if (state.deletedSet.has(fingerprint)) {
    return; // Skip deleted content
  }

  // Merge result into text box
  appendResult(result);
};

Here's a detail: we use a "fingerprint set" to manage deletion synchronization. When voice recognition returns multiple results, users might delete some of them. We store the fingerprints of deleted content, and if the same content appears later, we automatically skip it. It's like remembering which dishes you don't like—you won't order them again next time. After all, no one wants to be troubled by the same issue twice.
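For illustration, fingerprint normalization might look like the sketch below. The exact rules (case folding, stripping whitespace and punctuation) are assumptions, since the article does not show `normalizeFingerprint` itself:

```typescript
// Sketch of a deletion-fingerprint check; the normalization rules here
// are assumptions, not HagiCode's actual implementation.
function normalizeFingerprint(text: string): string {
  // Case-fold and strip whitespace/punctuation so trivially different
  // renderings of the same utterance collide on one fingerprint.
  return text.toLowerCase().replace(/[\s\p{P}]+/gu, "");
}

const deletedSet = new Set<string>();

function markDeleted(text: string): void {
  deletedSet.add(normalizeFingerprint(text));
}

function shouldSkip(text: string): boolean {
  return deletedSet.has(normalizeFingerprint(text));
}
```

With this, "Fix the login bug." and "fix the login bug" map to the same fingerprint, so a deleted result stays deleted even if the recognizer emits it again with different punctuation.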

3. Prompt Management System: Externalizing AI's "Brain"

HagiCode has a flexible prompt management system where all prompts are stored as files:

prompts/
├── metadata/
│   ├── optimize-description.zh-CN.json
│   └── optimize-description.en-US.json
└── templates/
    ├── optimize-description.zh-CN.hbs
    └── optimize-description.en-US.hbs

Each prompt consists of two parts:

  • Metadata file (.json): Defines the prompt's scenario, version, parameters, and other information
  • Template file (.hbs): Actual prompt content using Handlebars syntax

The format of the metadata file is like this:

{
  "scenario": "optimize-description",
  "locale": "zh-CN",
  "version": "1.0.0",
  "syntax": "handlebars",
  "syntaxVersion": "1.0",
  "parameters": [
    {
      "name": "title",
      "type": "string",
      "required": true,
      "description": "Proposal title"
    },
    {
      "name": "description",
      "type": "string",
      "required": true,
      "description": "Original description"
    }
  ],
  "author": "HagiCode Team",
  "description": "Optimize user input technical proposal description",
  "lastModified": "2026-04-05",
  "tags": ["optimization", "nlp"]
}

The template file uses Handlebars syntax and supports parameter injection:

You are a technical proposal expert.

<task>
Generate a structured technical proposal description based on the following information.
</task>

<input>
<title>{{title}}</title>
<description>{{description}}</description>
{{#if memoryContext}}
<memory_context>
{{memoryContext}}
</memory_context>
{{/if}}
</input>

<output_format>
## Background
[Describe problem background and context, including project information, repository scope, etc.]

## Analysis
[Technical analysis and thought process, explaining why this change is needed]

## Solution
[Solution and implementation steps, listing key code locations]

## Practice
[Actual code examples and considerations]
</output_format>

The benefits of this design are:

  • Prompts can be version-managed like code
  • Supports multiple languages, automatically switching based on user preferences
  • Parameterized design, allowing dynamic context injection
  • Completeness validation at startup, avoiding runtime errors

After all, if you don't write down what's in your head, who knows when you'll forget it? Better to record it properly from the start than regret it later.
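To make the startup completeness validation concrete, here is a minimal TypeScript sketch. It is not HagiCode's implementation: the real system uses Handlebars (including block helpers like `{{#if}}`), while this naive `{{name}}` substitution only handles flat parameters:

```typescript
// Illustrative sketch only: a real loader would use the Handlebars library
// and read metadata/templates from the prompts/ directory.
interface PromptParameter {
  name: string;
  required: boolean;
}

interface PromptMetadata {
  scenario: string;
  parameters: PromptParameter[];
}

function renderPrompt(
  meta: PromptMetadata,
  template: string,
  args: Record<string, string>
): string {
  // Completeness validation: fail fast at startup rather than mid-request
  for (const p of meta.parameters) {
    if (p.required && !(p.name in args)) {
      throw new Error(`Missing required parameter '${p.name}' for ${meta.scenario}`);
    }
  }
  // Naive {{name}} substitution standing in for Handlebars rendering
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => args[name] ?? "");
}
```

Running this check once over every metadata/template pair at startup catches a missing or misnamed parameter before any user request touches it.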

4. Progressive Wizard: Breaking Complex Tasks into Small Steps

For complex tasks (like first-time installation and configuration), we used a multi-step wizard design. Each step only requests necessary information and provides clear progress indicators. Life is like this too: you can't do everything in one bite, and taking things step by step is usually more reliable.

The wizard state model:

interface WizardState {
  currentStep: number;           // 0-3, corresponding to 4 steps
  steps: WizardStep[];
  canGoNext: boolean;
  canGoBack: boolean;
  isLoading: boolean;
  error: string | null;
}

interface WizardStep {
  id: number;
  title: string;
  description: string;
  completed: boolean;
}

// Step navigation logic
const goToNextStep = () => {
  if (wizardState.currentStep >= wizardState.steps.length - 1) return;
  // Only advance when the current step's input validates
  if (validateCurrentStep()) {
    wizardState.steps[wizardState.currentStep].completed = true;
    wizardState.currentStep++;
  }
};

const goToPreviousStep = () => {
  if (wizardState.currentStep > 0) {
    wizardState.currentStep--;
  }
};

Each step has independent validation logic, and completed steps have clear visual markers. Cancel operations pop up a confirmation dialog to prevent users from accidentally losing progress. After all, you can turn back if you go the wrong way, but if you tear up the road, there's really no way out.
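The snippets above call `validateCurrentStep` without showing it. As a purely hypothetical sketch (the per-step field names are assumptions, not HagiCode's actual schema), independent per-step validation could look like this:

```typescript
// Hypothetical per-step validation; field names per step are assumptions.
interface StepInput {
  [field: string]: string;
}

const requiredFieldsByStep: string[][] = [
  ["installPath"],   // Step 0: installation target
  ["repositoryUrl"], // Step 1: repository binding
  ["locale"],        // Step 2: preferences
  [],                // Step 3: confirmation, nothing to validate
];

function validateStep(step: number, input: StepInput): string | null {
  for (const field of requiredFieldsByStep[step] ?? []) {
    if (!input[field]?.trim()) {
      return `Field '${field}' is required`;
    }
  }
  return null; // null means the step passes
}
```

Returning an error message (rather than a bare boolean) makes it easy to show the user exactly which field blocks the "Next" button.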

Summary

Reviewing HagiCode's progressive disclosure practice, we can summarize several core principles:

  1. Step-by-step guidance: Break complex tasks into small steps, each requesting only necessary information
  2. Intelligent completion: Automatically complete information using historical context and project knowledge
  3. Immediate feedback: Every action has clear visual feedback and status indicators
  4. Fault tolerance mechanism: Allow users to undo and reset, avoiding irreversible losses from errors
  5. Diversified input: Support multiple input methods such as text and voice

The actual effect of this solution in HagiCode is: the average length of user input increased from less than 20 characters to structured 200-300 characters, the quality of AI-generated proposals significantly improved, and user satisfaction also rose.

Actually, this isn't surprising—the more information you provide, the more accurately the AI understands, and the better the returned results. This is no different from communication between people.

If you're also working on AI-related products, I hope these experiences provide some inspiration. Remember: users aren't unwilling to provide information—you just haven't asked the right questions yet. The core of progressive disclosure is finding the optimal timing and way to ask questions—it just takes some patience to explore that timing and method.

If this article helps you, feel free to give the project a Star on GitHub and follow HagiCode's future development. Public beta has begun, so you can install it now and experience the full functionality.


Thanks for reading. If this article helped, consider liking, bookmarking, or sharing it.
This article was created with AI assistance and reviewed by the author before publication.
