When I started building Code Telescope — a Telescope.nvim-inspired fuzzy finder for VS Code — one of the hardest parts wasn't the fuzzy matching or the file indexing. It was the preview panel.
The preview needs to show syntax-highlighted code, update in real time as you navigate between files, and do all of this inside a VS Code webview — which is essentially a sandboxed iframe with no access to VS Code's own editor APIs.
Here's how I built it.
The Constraints
VS Code webviews are isolated. You can't use the editor's built-in tokenizer, you can't access TextMate grammars directly, and you can't load arbitrary node modules. Everything that runs in the webview is browser code.
This meant I needed a full syntax highlighting pipeline running entirely in the browser, capable of:
- Loading language grammars on demand
- Supporting whatever color theme the user has active in VS Code
- Rendering large files without freezing the UI
- Updating fast enough to feel instant while navigating
I chose Shiki as the highlighting engine. It uses the same TextMate grammar engine as VS Code, which means it produces accurate, consistent results. But making it work inside a webview with dynamic grammars and themes required some non-trivial plumbing.
Loading Grammars Dynamically
Shiki normally loads grammars from bundled imports. That doesn't work in a webview — the bundle size would be enormous, and you'd be shipping grammars for hundreds of languages the user never needs.
Instead, I load grammars on demand from the extension host via a message bridge.
When the preview needs to highlight a file, it checks if the language grammar is already loaded. If not, it sends a request to the backend:
static async loadLanguageIfNeeded(langId: string): AsyncResult<LanguageGrammar> {
  if (this.loadedLanguages.has(langId) && this.langMap.has(langId)) {
    return { ok: true, value: this.langMap.get(langId)! };
  }
  const langGrammar = await MessageBridge.request<LanguageGrammar>("langGrammar", langId);
  return await this.registerLanguage(langGrammar);
}
On the backend, the extension reads the grammar directly from the VS Code extensions installed on the machine — not from a bundled copy:
for (const ext of vscode.extensions.all) {
  const grammars = ext.packageJSON.contributes?.grammars;
  const matched = grammars?.find((g: any) => g.scopeName === targetScope);
  if (!matched) continue;
  const grammarUri = vscode.Uri.joinPath(ext.extensionUri, matched.path);
  const bytes = await vscode.workspace.fs.readFile(grammarUri);
  const grammarJson = JsoncParser.parse(new TextDecoder().decode(bytes));
  // ...
}
This means Code Telescope automatically supports any language grammar that's installed in VS Code — including extensions the user has added themselves. No bundled grammar list to maintain.
The same approach applies to themes: the active color theme is resolved from the extension host and sent to the webview as raw JSON, which Shiki loads at runtime.
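The lookup half of that theme resolution can be sketched as a pure function. The types, field names, and the matching rule (VS Code's `workbench.colorTheme` setting stores the theme's `id`, falling back to its `label`) are my assumptions here, not code from the project:

```typescript
type ThemeContribution = { id?: string; label?: string; path: string };
type ExtensionLike = {
  extensionPath: string;
  contributes?: { themes?: ThemeContribution[] };
};

// Locate the active theme's JSON file among installed extensions.
// Pure lookup so it runs anywhere; in the extension host the inputs come
// from vscode.extensions.all and the workbench.colorTheme setting.
function findThemePath(extensions: ExtensionLike[], themeName: string): string | undefined {
  for (const ext of extensions) {
    const matched = ext.contributes?.themes?.find(
      (t) => t.id === themeName || t.label === themeName
    );
    if (matched) return `${ext.extensionPath}/${matched.path}`;
  }
  return undefined;
}
```

The extension host then reads the matched file with `vscode.workspace.fs.readFile`, parses it as JSONC, and posts the result to the webview, where Shiki's `loadTheme` can register it at runtime.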
Chunked Rendering for Large Files
Loading the entire file content into the DOM at once doesn't scale. A 5000-line TypeScript file would take hundreds of milliseconds to tokenize and render, freezing the UI entirely.
The solution is chunked rendering. The file is split into chunks of 30 lines, and only the chunks near the visible area are rendered. As the user scrolls, new chunks are appended or prepended on demand.
const CHUNK_SIZE = 30;

const renderChunk = async (chunkIndex: number, position: "append" | "prepend" = "append") => {
  const start = chunkIndex * CHUNK_SIZE;
  const end = Math.min(start + CHUNK_SIZE, totalLines);
  const chunkText = getChunkText(start, end);
  const result = this.highlighter.codeToTokens(chunkText, {
    lang: finalLanguageId,
    theme: finalThemeName,
    ...(prevGrammarState ? { grammarState: prevGrammarState } : {}),
  });
  const html = makeHtmlFromTokens(result.tokens, themeBg, themeFg, ...);
  this.insertChunkIntoDOM(previewElement, html, chunkIndex, position);
};
For very large files (content over 5000 characters), a LazyLineParser pre-indexes line offsets instead of splitting the entire string upfront:
private indexLines(): void {
  this.lineOffsets = [0];
  let pos = 0;
  while ((pos = this.text.indexOf("\n", pos)) !== -1) {
    pos++;
    this.lineOffsets.push(pos);
  }
}
This makes locating any line range an O(1) offset lookup: only the requested slice is materialized, rather than holding a full split() array in memory.
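Given the offset table, a range accessor is just two lookups and one slice. A sketch of how such a parser might expose it — `getLineRange` is an illustrative name, not necessarily the project's:

```typescript
class LazyLineParser {
  private lineOffsets: number[] = [0];

  constructor(private text: string) {
    this.indexLines();
  }

  // Record the starting offset of every line (same approach as above).
  private indexLines(): void {
    this.lineOffsets = [0];
    let pos = 0;
    while ((pos = this.text.indexOf("\n", pos)) !== -1) {
      pos++;
      this.lineOffsets.push(pos);
    }
  }

  // Slice lines [startLine, endLine) via the offset table; only the
  // requested range is ever materialized as a new string.
  getLineRange(startLine: number, endLine: number): string {
    const start = this.lineOffsets[startLine] ?? this.text.length;
    const end = this.lineOffsets[endLine] ?? this.text.length;
    return this.text.slice(start, end);
  }
}
```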
The GrammarState Problem
Shiki's tokenizer is stateful. To correctly highlight line 500 of a TypeScript file, it needs to know what state the parser was in at line 499 — whether it was inside a template literal, a block comment, a generic type, and so on.
When you jump to the middle of a file and start rendering from chunk 16, Shiki has no idea about the context accumulated in chunks 0–15. The result is incorrect highlighting — tokens that should be orange come out white.
Shiki exposes a grammarState object that can be passed between tokenization calls to carry this context forward. The fix is to run a warmup pass that tokenizes chunks 0 through N-1 without rendering them — just to accumulate the grammar states:
const warmupChunks = async (): Promise<boolean> => {
  let prevState: GrammarState | undefined;
  for (let i = 0; i < initialChunk; i += BATCH_SIZE) {
    if (this.abortController?.signal.aborted) return false;
    for (let j = i; j < Math.min(i + BATCH_SIZE, initialChunk); j++) {
      const start = j * CHUNK_SIZE;
      const chunkText = getChunkText(start, Math.min(start + CHUNK_SIZE, totalLines));
      const result = this.highlighter.codeToTokens(chunkText, {
        lang: finalLanguageId,
        theme: finalThemeName,
        ...(prevState ? { grammarState: prevState } : {}),
      });
      if (result.grammarState) {
        prevState = result.grammarState as GrammarState;
        this.grammarStates.set(j, prevState);
      }
    }
    // yield between batches so the main thread stays responsive
    await new Promise((r) => setTimeout(r, 0));
  }
  return true;
};
The warmup runs in the background after the initial render. The visible chunk is first rendered with whatever state is available (possibly wrong), then re-rendered with the correct state once the warmup completes.
Caching GrammarStates Per File
The warmup adds latency — tokenizing 30 chunks in the background takes time, especially on large files. And if the user navigates away before it finishes, the abort controller cancels the operation and the states are discarded.
The solution is to cache the grammar states per file path. Once a warmup completes successfully, the accumulated states are stored:
warmupChunks().then((completed) => {
  if (!completed || this.abortController?.signal.aborted) return;
  // only persist if the warmup completed without an abort
  this.grammarStateCache.set(cacheKey, new Map(this.grammarStates));
  // re-render the initial chunk with the correct state
  this.loadedChunks.delete(initialChunk);
  this.chunkHtmlCache.delete(initialChunk);
  renderChunk(initialChunk);
});
On subsequent visits to the same file, the cached states are restored before rendering — the warmup is skipped entirely and the first render is already correct.
The cache key is the file path. Partial warmups (aborted mid-way) are never persisted, so the cache always contains complete, reliable state maps.
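The restore step reduces to a small pure decision: cache hit means copy the states and skip the warmup; cache miss means start empty and warm up. A sketch under those assumptions — the function and parameter names here are illustrative:

```typescript
type GrammarState = unknown;

// Decide how to initialize per-chunk grammar states for a file.
// cacheKey is the file path, as described in the article.
function restoreOrWarmup(
  cache: Map<string, Map<number, GrammarState>>,
  cacheKey: string
): { states: Map<number, GrammarState>; needsWarmup: boolean } {
  const cached = cache.get(cacheKey);
  if (cached) {
    // copy, so in-progress tokenization can't mutate the persisted entry
    return { states: new Map(cached), needsWarmup: false };
  }
  return { states: new Map(), needsWarmup: true };
}
```

Copying on restore matters: the live state map is mutated as chunks render, and the cached entry must stay a complete, consistent snapshot.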
Scroll-Driven Chunk Loading
Once the initial chunks are rendered, a scroll listener loads more chunks as the user scrolls:
this.scrollHandler = () => {
  requestAnimationFrame(async () => {
    const { scrollTop, scrollHeight, clientHeight } = previewElement;
    if (scrollTop + clientHeight >= scrollHeight - SCROLL_THRESHOLD) {
      await renderChunk(this.maxLoadedChunk + 1, "append");
    }
    if (scrollTop <= SCROLL_THRESHOLD) {
      await renderChunk(this.minLoadedChunk - 1, "prepend");
    }
  });
};
When prepending chunks, the scroll position is adjusted to prevent the viewport from jumping:
const oldScrollTop = previewElement.scrollTop;
const oldScrollHeight = previewElement.scrollHeight;
previewElement.prepend(chunkContainer);
previewElement.scrollTop = oldScrollTop + (previewElement.scrollHeight - oldScrollHeight);
The Result
The preview panel renders large files without blocking the UI, supports any language grammar installed in VS Code, applies the user's active color theme, and correctly highlights code even when jumping to the middle of a file.
Most of the complexity is invisible to the user — they just see fast, accurate syntax highlighting as they navigate.
Code Telescope is open source. If you're a Neovim user who ended up in VS Code and misses Telescope, give it a try.
