Calvince Moth for Syncfusion, Inc.

Posted on • Originally published at syncfusion.com

Build an AI Content Creation App in Blazor with AI AssistView

TL;DR: Build an AI-assisted content creation app in Blazor using Syncfusion AI AssistView as the chat surface. Add Blog/KB modes with different system instructions, suggestions, and output templates, then layer in file attachments, speech-to-text, and text-to-speech via JS interop.

If you’ve ever tried to use a single “write me content” prompt for everything, you’ve seen the problem: blog drafts turn into support docs, and KB articles start reading like marketing copy. The UI is fine, the instructions and structure are what break down.

In this walkthrough, you’ll build a small Blazor Web App (.NET 10) that uses the Syncfusion® Blazor AI AssistView component as the chat surface and routes requests through a mode-aware prompt builder so your output stays Blog or Knowledge Base consistently.

Before getting started with this implementation, make sure you’re familiar with the baseline setup (Syncfusion packages and AssistView rendering) described in the official documentation.

What’s new in the content creation app

Here’s what we’re adding on top of the getting-started baseline:

1. Blog vs KB modes

Blogs and KB articles have different success criteria. The mode switch lets you make those differences explicit, so the model stops guessing.

2. Mode-aware prompting

Instead of sending args.Prompt directly, we’ll build a final prompt that includes:

  • shared rules (markdown, accuracy, etc.)
  • a mode-specific output template

This is the part that typically improves consistency the most.

3. Real workflow features

  • Attachments: bring your own reference doc/log/draft.
  • Speech-to-text: capture ideas faster.
  • Text-to-speech: review long output hands-free.
  • Clear prompts: reset and start a fresh session.
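
As a taste of the voice features, text-to-speech needs only a small JS shim plus IJSRuntime, since the browser's SpeechSynthesis API can't be called with an utterance object directly from C#. A minimal sketch; the speakText function and the file path are illustrative assumptions, not part of the AssistView API:

```csharp
// wwwroot/speech.js (hypothetical shim, loaded via a <script> tag):
//   window.speakText = (text) => {
//       const utterance = new SpeechSynthesisUtterance(text);
//       window.speechSynthesis.speak(utterance);
//   };

@inject IJSRuntime JS   // in the Razor component

// Reads the latest AI response aloud via the browser's SpeechSynthesis API.
private async Task SpeakLastResponseAsync()
{
    var last = assistView?.Prompts?.LastOrDefault()?.Response;
    if (!string.IsNullOrWhiteSpace(last))
    {
        await JS.InvokeVoidAsync("speakText", last);
    }
}
```

Speech-to-text follows the same interop pattern in reverse: a JS callback (for example, around the browser's webkitSpeechRecognition) invokes a .NET method with the transcript and fills the prompt box.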

How it works (architecture in one minute)

The pattern is simple and scales well:

  1. UI (AI AssistView) collects the user prompt.
  2. A mode selector (Blog vs KB) chooses a configuration:
    • system instruction (high-level role + constraints)
    • prompt suggestions (starter prompts that match the mode)
    • output template (the structure you want back)
  3. Your PromptRequested handler builds a final prompt:
    • shared rules (Markdown, accuracy)
    • mode template (Blog vs KB outline)
    • user request
  4. You call Gemini (or any model) and return the response via UpdateResponseAsync.

This is a practical midpoint between raw chat and a full workflow engine.
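
The four steps above map onto a single handler. Here is a sketch of the shape only; CallModelAsync is a placeholder, and Steps 4–6 fill in the real Gemini call with the same names used later in this post:

```csharp
// Shape of the pipeline: mode config → final prompt → model call → UI update.
private async Task PromptRequest(AssistViewPromptRequestedEventArgs args)
{
    var cfg = CurrentMode;                           // step 2: mode selector picks the config
    var finalPrompt = BuildUserPrompt(args.Prompt);  // step 3: shared rules + template + request

    // step 4: CallModelAsync is a stand-in for the Gemini call shown in Step 6.
    var text = await CallModelAsync(finalPrompt, cfg.SystemInstruction);

    await assistView.UpdateResponseAsync(text);      // push the response back into the UI
}
```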

[Image: After selecting a prompt from the prompt suggestions]

[Image: After scrolling to the end of the AI response output]

That gives you a working AssistView UI. But if you keep one generic instruction and one mixed-suggestion list, users will get an inconsistent output structure. The rest of this post is about fixing that by making mode a first-class concept.

Prerequisites

  • .NET 10 SDK (or your target SDK)
  • A Blazor Web App with Syncfusion Blazor configured
  • Google.GenAI configured with an API key (Gemini)
  • Browser permission for microphone + speech synthesis (for voice features)

With the groundwork in place, let’s dive into the step-by-step implementation.

Step 1: Add a Blog/KB mode model

Create a mode enum and a config object:

public enum ContentMode
{
    Blog,
    KnowledgeBase
}

public sealed class ModeConfig
{
    public required string DisplayName { get; init; }
    public required string SystemInstruction { get; init; }
    public required List<string> Suggestions { get; init; }
}

Then define your mode configs:

private ContentMode mode = ContentMode.Blog;

private readonly Dictionary<ContentMode, ModeConfig> modes = new()
{
    [ContentMode.Blog] = new ModeConfig
    {
        DisplayName = "Blog",
        SystemInstruction = @"You are an expert content creator. Generate a developer blog post with clear headings, short paragraphs, and actionable steps. End with 3 FAQs.",
        Suggestions = new List<string>
        {
            "Generate a blog post outline on sustainable energy.",
            "Create a draft blog introduction about Blazor development.",
            "Suggest SEO keywords for a tech blog on AI.",
            "Rewrite this section to be more concise."
        }
    },
    [ContentMode.KnowledgeBase] = new ModeConfig
    {
        DisplayName = "KB",
        SystemInstruction = @"You are an expert support writer. Generate a developer knowledge base article with Summary, Environment, Symptoms, Resolution steps, and Troubleshooting. End with 3 FAQs.",
        Suggestions = new List<string>
        {
            "Write a KB article on troubleshooting network issues.",
            "Draft a KB: 'App fails to start' with resolution steps.",
            "Create a troubleshooting checklist for intermittent timeouts.",
            "Summarize the likely root causes and next diagnostics."
        }
    }
};

private ModeConfig CurrentMode => modes[mode];

Step 2: Wire mode suggestions into AI AssistView

Point AssistView at the current mode’s prompt suggestion list:

<SfAIAssistView @ref="assistView"
                PromptSuggestions="CurrentMode.Suggestions"
                PromptRequested="@PromptRequest">
</SfAIAssistView>

Step 3: Add a simple mode selector (and clear chat on switch)

The key detail here: clear prompts when switching modes. Otherwise, the previous tone/structure can leak into the next request.

<div style="width:155px; padding-bottom:20px; margin-left:34px">
    <label for="contentType">Content Type</label>
    <SfDropDownList TValue="string"
                    ID="contentType"
                    TItem="ContentType"
                    Placeholder="Select a content type"
                    DataSource="@ContentTypes"
                    @bind-Index="@selectedContentTypeIndex">
        <DropDownListFieldSettings Value="ID"
                                   Text="Text">
        </DropDownListFieldSettings>
        <DropDownListEvents TValue="string"
                            TItem="ContentType"
                            ValueChange="OnValueChange">
        </DropDownListEvents>
    </SfDropDownList>
</div>
public class ContentType
{
    public string? ID { get; set; }
    public string? Text { get; set; }
}
private int? selectedContentTypeIndex { get; set; } = 0;

List<ContentType> ContentTypes = Enum.GetValues(typeof(ContentMode))
    .Cast<ContentMode>()
    .Select(mode => new ContentType
    {
        ID = mode.ToString().ToLower(),
        Text = mode.ToString()
    })
    .ToList();

public void OnValueChange(ChangeEventArgs<string, ContentType> args)
{
    if (Enum.TryParse<ContentMode>(args.ItemData?.Text, out var parsed))
    {
        mode = parsed;

        // Prevent tone/structure “leakage” across modes
        assistView?.Prompts?.Clear();
        StateHasChanged();
    }
}

[Image: Blog and KB options on the page]

Step 4: Make SystemInstruction mode-aware

In your PromptRequested handler, set SystemInstruction based on the mode:

GenerateContentConfig config = new GenerateContentConfig()
{
    SystemInstruction = new Content()
    {
        Parts = new List<Part>
        {
            new Part { Text = CurrentMode.SystemInstruction }
        }
    }
};

Step 5: Add a structured prompt builder (shared rules + mode template)

This is where output consistency really improves:

private string BuildUserPrompt(string rawPrompt)
{
    var sharedRules = """
        Use Markdown.
        Use headings and bullet points where helpful.
        Be accurate and avoid making up product behaviors.
        """;

    var modeTemplate = mode switch
    {
        ContentMode.Blog => """
            Output structure:
            - # Title
            - Intro (direct answer + bullets)
            - Main sections with H2/H3 headings
            - Common mistakes
            - Conclusion
            - FAQs (3)
            """,
        ContentMode.KnowledgeBase => """
            Output structure:
            - # Title
            - Summary
            - Environment / Applies to
            - Symptoms
            - Resolution (numbered steps)
            - Troubleshooting
            - FAQs (3)
            """,
        _ => ""
    };

    return $"""
        {sharedRules}
        {modeTemplate}
        User request:
        {rawPrompt}
        """;
}

Step 6: Call Gemini with the final prompt

Then call Gemini with the final prompt and update AssistView:

private async Task PromptRequest(AssistViewPromptRequestedEventArgs args)
{
    var config = new GenerateContentConfig
    {
        SystemInstruction = new Content
        {
            Parts = new List<Part> { new Part { Text = CurrentMode.SystemInstruction } }
        }
    };

    try
    {
        var finalPrompt = BuildUserPrompt(args.Prompt);

        var content = await client.Models.GenerateContentAsync(
            "gemini-2.5-flash",
            finalPrompt,
            config);

        var responseText = content?.Candidates?[0].Content?.Parts?[0].Text
            ?? "No response received.";

        await assistView.UpdateResponseAsync(responseText);
        args.Response = assistView.Prompts[^1].Response;
    }
    catch (Exception ex)
    {
        args.Response = $"Error: {ex.Message}";
    }
}

Notes for reliability:

  • BuildUserPrompt(...) is where you centralize formatting and guardrails.
  • UpdateResponseAsync(…) pushes the model response back into the Syncfusion Blazor AI AssistView UI.
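
The “Clear prompts” feature from the feature list reduces to the same call used when switching modes in Step 3. A minimal sketch, assuming a Syncfusion SfButton is available in the page:

```csharp
@* A button that resets the session; reuses the same
   Prompts.Clear() call as the mode switch in Step 3. *@
<SfButton OnClick="ClearSession">Clear prompts</SfButton>

@code {
    private void ClearSession()
    {
        assistView?.Prompts?.Clear();  // drop the conversation history
        StateHasChanged();             // refresh the AssistView surface
    }
}
```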

This article was originally published at Syncfusion.com.
