Hagicode

Posted on • Originally published at docs.hagicode.com

From CLI Invocation to SDK Integration: Best Practices for GitHub Copilot in .NET Projects

The upgrade path from command-line invocation to official SDK integration has been quite a journey. Today, I'll share the pitfalls we encountered and lessons learned in the HagiCode project.

Background

After the official release of the GitHub Copilot SDK in 2025, we began integrating it into our AI capability layer. Prior to this, the project primarily used GitHub Copilot capabilities by directly calling the Copilot CLI command-line tool. However, this approach presented several obvious issues:

  • Complex process management: Need to manually manage CLI process lifecycle, startup timeouts, and process cleanup—after all, processes can crash without any warning
  • Incomplete event handling: Raw CLI invocation makes it difficult to capture fine-grained events during model inference and tool execution—like only seeing the result without witnessing the thinking process
  • Difficult session management: Lack of effective session reuse and recovery mechanisms, meaning starting from scratch every time, which gets quite exhausting
  • Compatibility issues: CLI parameters update frequently, requiring continuous maintenance of parameter compatibility logic—tilting at windmills, essentially

These issues gradually became apparent in daily development, especially when we needed real-time visibility into the model's reasoning process (thinking) and tool execution status. The limitations of CLI invocation became particularly obvious, and we eventually realized we needed a lower-level, more complete integration approach—after all, all roads lead to Rome; some roads are just easier to travel.

About HagiCode

The solution shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI code assistant project. During development, we needed deep integration of various GitHub Copilot capabilities—from basic code completion to complex multi-turn conversations and tool calls. These actual requirements drove our upgrade from CLI invocation to official SDK integration.

If you're interested in the practical solutions in this article, it means our engineering practices might be helpful to you—then the HagiCode project itself is worth checking out. Perhaps at the end of the article, you'll discover more information and links about the project, who knows...

Architecture Design

The project adopts a layered architecture to address CLI invocation issues:

┌─────────────────────────────────────────────────────────┐
│ hagicode-core (Orleans Grains + AI Provider Layer)      │
│ - CopilotAIProvider: AIRequest -> CopilotOptions        │
│ - GitHubCopilotGrain: Orleans execution interface       │
└─────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────┐
│ HagiCode.Libs (Shared Provider Layer)                   │
│ - CopilotProvider: CLI Provider implementation          │
│ - ICopilotSdkGateway: SDK invocation abstraction        │
│ - GitHubCopilotSdkGateway: sessions & event dispatch    │
└─────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────┐
│ GitHub Copilot SDK (Official .NET SDK)                  │
│ - CopilotClient: SDK client                             │
│ - CopilotSession: session management                    │
│ - SessionEvent: event stream                            │
└─────────────────────────────────────────────────────────┘

The technical advantages of this layered design are actually quite practical:

  1. Separation of concerns: Core business logic decoupled from SDK implementation details—after all, each layer has its responsibilities, not stepping on each other's toes
  2. Testability: Through the ICopilotSdkGateway interface, unit testing becomes straightforward, making testing less painful
  3. Reusability: HagiCode.Libs can be referenced by multiple projects, write once, use everywhere
  4. Maintainability: SDK upgrades only require modifying the Gateway layer; upper-layer code remains untouched, quite nice
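To make the testability point concrete, here's a minimal sketch of what the gateway abstraction enables. The interface member shown is a simplified assumption, not the project's real signature; the idea is that a scripted fake can stand in for the whole SDK in unit tests:

```csharp
using System.Runtime.CompilerServices;

// Simplified assumption: the real ICopilotSdkGateway in HagiCode.Libs has
// richer members; this sketch keeps only a prompt -> event-stream pair.
public interface ICopilotSdkGateway
{
    IAsyncEnumerable<CopilotSdkStreamEvent> SendPromptAsync(
        string prompt, CancellationToken ct = default);
}

// A scripted fake: unit tests exercise the provider layer with no SDK,
// no network, and no CLI process involved.
public sealed class FakeCopilotSdkGateway : ICopilotSdkGateway
{
    private readonly IReadOnlyList<CopilotSdkStreamEvent> _scripted;

    public FakeCopilotSdkGateway(IReadOnlyList<CopilotSdkStreamEvent> scripted)
        => _scripted = scripted;

    public async IAsyncEnumerable<CopilotSdkStreamEvent> SendPromptAsync(
        string prompt, [EnumeratorCancellation] CancellationToken ct = default)
    {
        foreach (var evt in _scripted)
        {
            ct.ThrowIfCancellationRequested();
            yield return evt;      // replay pre-recorded events in order
            await Task.Yield();    // keep the enumerator genuinely async
        }
    }
}
```

A test can then assert on exactly the event sequence the provider emits for a given scripted SDK response.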

Core Implementation

Authentication Flow

Authentication is the first and most important step in SDK integration—after all, if you can't get through the door, nothing else matters. We designed a flexible authentication configuration that supports multiple authentication sources:

// CopilotProvider.cs - Authentication source configuration
public enum CopilotAuthSource
{
    LoggedInUser,
    GitHubToken
}

public class CopilotOptions
{
    public bool UseLoggedInUser { get; set; } = true;
    public string? GitHubToken { get; set; }
    public string? CliUrl { get; set; }

    // Derived for clarity: token mode only when a token is configured
    public CopilotAuthSource AuthSource =>
        !UseLoggedInUser && GitHubToken is not null
            ? CopilotAuthSource.GitHubToken
            : CopilotAuthSource.LoggedInUser;
}

// Convert to SDK request
return new CopilotSdkRequest(
    GitHubToken: options.AuthSource == CopilotAuthSource.GitHubToken
        ? options.GitHubToken
        : null,
    UseLoggedInUser: options.AuthSource != CopilotAuthSource.GitHubToken
);

The benefits of this design are quite obvious:

  • Supports logged-in user mode (no token required), suitable for desktop scenarios—users log in with their own accounts
  • Supports GitHub Token mode, applicable for server-side deployment—unified management is convenient
  • Supports Copilot CLI URL override, convenient for enterprise proxy configuration—enterprise environments always have special rules

In actual use, this flexible authentication approach greatly simplifies configuration work for different deployment scenarios. Desktop clients can use the user's own Copilot login state, while servers can manage unified access through tokens. Different approaches for different needs, really.
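As a quick illustration of the two deployment shapes (property names taken from the CopilotOptions class above; the environment-variable name is just an example):

```csharp
// Desktop client: piggyback on the user's own Copilot login, no token needed.
var desktopOptions = new CopilotOptions
{
    UseLoggedInUser = true
};

// Server deployment: a centrally managed token, e.g. injected via environment.
var serverOptions = new CopilotOptions
{
    UseLoggedInUser = false,
    GitHubToken = Environment.GetEnvironmentVariable("GITHUB_TOKEN")
};
```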

Event Stream Processing

One of the SDK's most powerful capabilities is complete event stream capture. We implemented an event dispatch system capable of real-time processing of various SDK events—after all, knowing the process versus just knowing the result feels quite different:

// GitHubCopilotSdkGateway.cs - Event dispatch core logic
internal static SessionEventDispatchResult DispatchSessionEvent(
    SessionEvent evt, bool sawDelta)
{
    var events = new List<CopilotSdkStreamEvent>();

    switch (evt)
    {
        case AssistantReasoningEvent reasoningEvent:
            // Capture the model's reasoning process
            events.Add(new CopilotSdkStreamEvent(
                CopilotSdkStreamEventType.ReasoningDelta,
                Content: reasoningEvent.Data.Content));
            break;

        case ToolExecutionStartEvent toolStartEvent:
            // Capture tool call start
            events.Add(new CopilotSdkStreamEvent(
                CopilotSdkStreamEventType.ToolExecutionStart,
                ToolName: toolStartEvent.Data.ToolName,
                ToolCallId: toolStartEvent.Data.ToolCallId));
            break;

        case ToolExecutionCompleteEvent toolCompleteEvent:
            // Capture tool call completion and result
            events.Add(new CopilotSdkStreamEvent(
                CopilotSdkStreamEventType.ToolExecutionEnd,
                Content: ExtractToolExecutionContent(toolCompleteEvent)));
            break;

        default:
            // Unhandled events are preserved as RawEvent
            events.Add(new CopilotSdkStreamEvent(
                CopilotSdkStreamEventType.RawEvent,
                RawEventType: evt.GetType().Name));
            break;
    }

    return new SessionEventDispatchResult(events, sawDelta);
}

The value this implementation brings is, how should I put it, quite tangible:

  • Complete capture of model reasoning process (thinking): Users can see the AI's thinking process, not just the final result—like knowing how to think is better than just knowing the answer
  • Real-time tracking of tool execution status: Know which tools are running, when they complete, and what results they return
  • Zero event loss: Through fallback to RawEvent mechanism, ensures all events are recorded, nothing gets left behind

In HagiCode's actual use, these fine-grained events allow users to gain deeper understanding of the AI's work process, especially when debugging complex tasks—this is actually quite useful.
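For illustration, a consumer of this event stream might look roughly like the following. The accessor names (Type, Content, ToolName, RawEventType) mirror the record shown above, and RenderThinking/RenderToolStatus are hypothetical UI hooks, not HagiCode APIs:

```csharp
await foreach (var evt in gateway.SendPromptAsync(prompt, ct))
{
    switch (evt.Type)
    {
        case CopilotSdkStreamEventType.ReasoningDelta:
            RenderThinking(evt.Content);                    // live "thinking" panel
            break;
        case CopilotSdkStreamEventType.ToolExecutionStart:
            RenderToolStatus($"{evt.ToolName} running...");
            break;
        case CopilotSdkStreamEventType.ToolExecutionEnd:
            RenderToolStatus($"done: {evt.Content}");
            break;
        case CopilotSdkStreamEventType.RawEvent:
            logger.LogDebug("Unmapped SDK event: {Type}", evt.RawEventType);
            break;
    }
}
```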

CLI Compatibility Handling

After migrating from CLI invocation to SDK, we found that some original CLI parameters were no longer applicable in the SDK. To maintain backward compatibility, we implemented a parameter filtering system—after all, it's quite headache-inducing when old configurations don't work:

// CopilotCliCompatibility.cs - Parameter filtering
private static readonly Dictionary<string, string> RejectedFlags = new()
{
    ["--headless"] = "Unsupported startup parameter",
    ["--model"] = "Passed through SDK native fields",
    ["--prompt"] = "Passed through SDK native fields",
    ["--interactive"] = "Interaction managed by provider",
};

public static CopilotCliArgumentBuildResult BuildCliArgs(CopilotOptions options)
{
    // Filter unsupported parameters, retain compatible parameters
    // Generate diagnostic information
}

Benefits of doing this:

  • Automatically filters incompatible CLI parameters, avoiding runtime errors—program crashes are no joke
  • Generates clear error diagnostic information, helping developers quickly locate problems
  • Ensures SDK stability, unaffected by CLI parameter changes

During the upgrade process, this compatibility handling mechanism helped us transition smoothly. Old configuration files can still be used, only requiring gradual adjustments based on diagnostic information—consider it a gradual process.
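The elided BuildCliArgs body boils down to a filter pass over the incoming flags. A minimal sketch, assuming a simplified result shape (the real CopilotCliArgumentBuildResult carries more fields than a tuple):

```csharp
public static (List<string> Accepted, List<string> Diagnostics)
    FilterCliArgs(IEnumerable<string> args)
{
    var accepted = new List<string>();
    var diagnostics = new List<string>();

    foreach (var arg in args)
    {
        // "--model=gpt-5" style flags key on the part before '='
        var flag = arg.Split('=')[0];

        if (RejectedFlags.TryGetValue(flag, out var reason))
            diagnostics.Add($"{flag} dropped: {reason}");  // surfaced to the user
        else
            accepted.Add(arg);                             // compatible, pass through
    }

    return (accepted, diagnostics);
}
```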

Runtime Pooling

Creating Copilot SDK sessions is costly, and frequently creating and destroying sessions affects performance. We implemented a session pool management system—like water in a pool, better to keep it for next use rather than refilling each time:

// CopilotProvider.cs - Session pool management
await using var lease = await _poolCoordinator.AcquireCopilotRuntimeAsync(
    request,
    async ct => await _gateway.CreateRuntimeAsync(sdkRequest, ct),
    cancellationToken);

if (lease.IsWarmLease)
{
    // Reuse existing session
    yield return CreateSessionReusedMessage();
}

await foreach (var eventData in lease.Entry.Resource.SendPromptAsync(...))
{
    yield return MapEvent(eventData);
}

Benefits of session pooling:

  • Session reuse: Requests with the same sessionId can reuse existing sessions, reducing startup overhead
  • Supports session recovery: Previous session state can be recovered after network interruption—after all, who can guarantee networks are always stable
  • Automatic pool management: Automatically cleans expired sessions, avoiding resource leaks

In HagiCode's actual use, session pooling significantly improved response speed, especially when handling continuous conversations—this improvement is quite noticeable.
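The pool coordinator itself is beyond this article's scope, but the core reuse idea fits in a few lines. This toy version (names are mine, not the project's) skips leasing, concurrency limits, and disposal:

```csharp
using System.Collections.Concurrent;

public sealed class SessionPool<TSession>
{
    private readonly ConcurrentDictionary<string, (TSession Session, DateTimeOffset LastUsed)> _entries = new();
    private readonly TimeSpan _ttl;

    public SessionPool(TimeSpan ttl) => _ttl = ttl;

    public async Task<(TSession Session, bool Warm)> AcquireAsync(
        string sessionId, Func<Task<TSession>> factory)
    {
        // Warm hit: same sessionId, not yet expired
        if (_entries.TryGetValue(sessionId, out var entry) &&
            DateTimeOffset.UtcNow - entry.LastUsed < _ttl)
        {
            _entries[sessionId] = (entry.Session, DateTimeOffset.UtcNow);
            return (entry.Session, true);
        }

        // Cold miss: create a fresh session and cache it for next time
        var created = await factory();
        _entries[sessionId] = (created, DateTimeOffset.UtcNow);
        return (created, false);
    }
}
```

The `Warm` flag maps directly onto the `lease.IsWarmLease` check in the snippet above.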

Orleans Integration

HagiCode uses Orleans as its distributed framework, and we integrated the Copilot SDK into Orleans Grains—distributed systems, complex to talk about but quite smooth to use:

// GitHubCopilotGrain.cs - Distributed execution
public async IAsyncEnumerable<GitHubCopilotResponse> ExecuteCommandStreamAsync(
    string command,
    [EnumeratorCancellation] CancellationToken token = default)
{
    var startedAt = DateTimeOffset.UtcNow;
    var request = new AIRequest { Prompt = command };
    var provider = await aiProviderFactory.GetProviderAsync(AIProviderType.GitHubCopilot);

    await foreach (var chunk in provider.SendMessageAsync(request, null, token))
    {
        // Map to unified response format
        yield return BuildChunkResponse(chunk, startedAt);
    }
}

Advantages of Orleans integration:

  • Unified AI Provider abstraction: Can easily switch between different AI providers—use this today, that tomorrow, quite flexible
  • Multi-tenant isolation: Different users' Copilot sessions are isolated from each other, no interference
  • Persistent session state: Session state can recover across server restarts, no data loss from restarts

For scenarios needing to handle large volumes of concurrent requests, Orleans's distributed capabilities provide excellent scalability—after all, when a single machine can't handle it, distributed systems pick up the slack.
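A sketch of how the grain boundary gives that isolation almost for free: keying the grain on a tenant-qualified session id means each user's session lives in its own activation. The interface mirrors the example above; the key format and caller-side names are assumptions:

```csharp
public interface IGitHubCopilotGrain : IGrainWithStringKey
{
    IAsyncEnumerable<GitHubCopilotResponse> ExecuteCommandStreamAsync(
        string command, CancellationToken token = default);
}

// Caller side: one grain activation per (tenant, session) pair.
var grain = grainFactory.GetGrain<IGitHubCopilotGrain>($"{tenantId}:{sessionId}");

await foreach (var response in grain.ExecuteCommandStreamAsync("explain this repo"))
{
    Console.WriteLine(response);
}
```

Note that IAsyncEnumerable grain methods require a recent Orleans version (7.x or later).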

Practical Guide

Configuration Example

Here's a complete configuration example—copy, paste, modify, and it works:

{
  "AI": {
    "Providers": {
      "Providers": {
        "GitHubCopilot": {
          "Enabled": true,
          "ExecutablePath": "copilot",
          "Model": "gpt-5",
          "WorkingDirectory": "/path/to/project",
          "Timeout": 7200,
          "StartupTimeout": 30,
          "UseLoggedInUser": true,
          "NoAskUser": true,
          "Permissions": {
            "AllowAllTools": false,
            "AllowedTools": ["Read", "Bash", "Grep"],
            "DeniedTools": ["Edit"]
          }
        }
      }
    }
  }
}

Usage Considerations

In actual use, we've summarized some points needing attention—some are lessons learned from pitfalls:

Startup timeout configuration: First-time Copilot CLI startup takes longer, recommend setting StartupTimeout to at least 30 seconds. For first-time login, even more time may be needed—after all, first-time login requires verification, can't be helped.

Permission management: Avoid using AllowAllTools: true in production. Use AllowedTools whitelist to control available tools, use DeniedTools blacklist to prohibit dangerous operations. This effectively prevents AI from executing dangerous commands—when it comes to security, being careful is always right.

Session management: Requests with the same sessionId automatically reuse sessions. Session state is persisted through ProviderSessionId. Cancellation operations are passed through CancellationTokenSource—good session management leads to good user experience.

Diagnostic output: Incompatible CLI parameters generate diagnostic type messages. Original SDK events are preserved as event.raw type. Error messages include categorization (startup timeout, parameter incompatibility, etc.) for convenient troubleshooting—quick problem location provides some comfort.

Best Practices

Based on our actual experience, here are some best practices—consider it a summary:

1. Use tool whitelists

var request = new AIRequest
{
    Prompt = "Analyze this file",
    AllowedTools = new[] { "Read", "Grep", "Bash(git:*)" }
};

Explicitly specify allowed tools through whitelisting to avoid AI performing unexpected operations. Especially for tools with write permissions (like Edit), extra caution is needed—after all, nobody wants to experience database deletion.

2. Set reasonable timeouts

options.Timeout = 3600;  // 1 hour
options.StartupTimeout = 60;  // 1 minute

Set appropriate timeout values based on task complexity. Too short may cause task interruption, too long may waste resources waiting for unresponsive requests—moderation in all things, excess is as bad as deficiency.

3. Enable session reuse

options.SessionId = "my-session-123";

Setting the same sessionId for related tasks can reuse previous session context, improving response speed—context is sometimes quite important.

4. Handle streaming responses

await foreach (var chunk in provider.StreamAsync(request))
{
    switch (chunk.Type)
    {
        case StreamingChunkType.ThinkingDelta:
            // Handle reasoning process
            break;
        case StreamingChunkType.ToolCallDelta:
            // Handle tool call
            break;
        case StreamingChunkType.ContentDelta:
            // Handle text output
            break;
    }
}

Streaming responses can display AI processing progress in real-time, improving user experience. Especially for time-consuming tasks, real-time feedback is very important—watching progress is better than waiting blindly.

5. Error handling and retry

try
{
    await foreach (var chunk in provider.StreamAsync(request))
    {
        // Handle response
    }
}
catch (CopilotSessionException ex)
{
    // Handle session exception
    logger.LogError(ex, "Copilot session failed");
    // Decide whether to retry based on exception type
}

Proper error handling and retry mechanisms can improve system stability—no one can guarantee programs never fail, being able to handle failures well is what matters.
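The retry decision in that catch block can be as simple as a bounded loop with exponential backoff. Whether a given CopilotSessionException is worth retrying is an assumption you'd adapt to your own error taxonomy; `Handle` stands in for your normal chunk processing:

```csharp
var delay = TimeSpan.FromSeconds(1);
const int maxAttempts = 3;

for (var attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        await foreach (var chunk in provider.StreamAsync(request))
        {
            Handle(chunk);  // normal chunk processing
        }
        break;  // stream completed, stop retrying
    }
    catch (CopilotSessionException ex) when (attempt < maxAttempts)
    {
        logger.LogWarning(ex, "Attempt {Attempt} failed, retrying in {Delay}",
            attempt, delay);
        await Task.Delay(delay);
        delay *= 2;  // backoff: 1s, then 2s
    }
}
```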

Summary

The upgrade from CLI invocation to SDK integration brought significant value to the HagiCode project—how should I put it, this upgrade was quite worthwhile:

  • Improved stability: SDK provides more stable interfaces, unaffected by CLI version changes—no more worrying about version updates every day
  • Feature completeness: Can capture complete event streams, including reasoning process and tool execution status—can see both process and result
  • Development efficiency: Type-safe SDK interfaces make development more efficient, reducing runtime errors—with type checking, peace of mind
  • User experience: Real-time event feedback lets users more clearly understand the AI's work process—knowing what it's thinking is better than knowing nothing

This upgrade is not just a technical solution replacement, but an optimization of the entire AI capability layer architecture. Through layered design and abstract interfaces, we gained better maintainability and extensibility—once architecture is done well, subsequent tasks become easier.

If you're considering integrating GitHub Copilot into your .NET project, I hope the practical experiences in this article help you avoid some detours. The official SDK is indeed more stable and complete than CLI invocation, and worth investing time to understand and master. After all, the right tools get twice the result with half the effort, as the saying goes.

That's about all for this one. Technical articles are never truly finished; technology keeps evolving and we're always learning. If you have any questions or suggestions while using HagiCode, feel free to contact us anytime. Well, that's it for now.

Original Article & License

Thanks for reading. If this article helped, consider liking, bookmarking, or sharing it.
This article was created with AI assistance and reviewed by the author before publication.
