In the past few weeks, I've been on the lookout for a solution to share code between multiple editors and platforms. I'm working on a CodeOwners platform, and part of the offering is various integrations with developers' own editors (like Visual Studio Code, neovim, Zed, etc.) and potentially LLM agents. Though I knew from the start that each editor would need its own integration, the pattern matching logic for CODEOWNERS rules stays the same across all of them; and it was important that this code produce consistent results whether it ran in Lua or Rust.
So the challenge was twofold: how to keep this logic consistent across platforms and languages, and how to keep it in sync when making updates. One idea was to use WebAssembly to encapsulate the logic, ensuring the same code handles pattern matching everywhere. However, there was another challenge: speed. Since the CODEOWNERS CLI reads every file to find ownership data, this part can't be done in WASM and has to live in each editor's own extension, written in whatever language that editor supports. Rust makes this fast, but that's not always an option in other languages. (Theoretically you could use a C binding, but things were getting complicated faster than I liked.)
While looking into extension support for editors like Helix that don't have a plugin system, I stumbled onto a completely different approach: LSP. What if I built an LSP server for CODEOWNERS rules? It seemed crazy at first; my intuition was that building LSP servers must be incredibly hard, an impression that came from setting up LSP servers for my Neovim config, which was a painful and buggy process. If merely installing one was that painful, how hard must building one be?
Turns out, not that hard.
How does an LSP server work?
LSP is a protocol. It defines a server that your editor communicates with. The simplest mental model for an LSP server: it's a server, reachable over stdio or a TCP socket, that receives JSON-RPC objects and answers with JSON-RPC objects.
The spec standardizes what those JSON objects look like, what method names mean (textDocument/completion, textDocument/hover, etc.), what fields to expect, and what to send back. The editor speaks the protocol, your server listens and responds, and because the protocol is the same everywhere, any editor that implements LSP can talk to any server that implements LSP.
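On the wire, the spec's base protocol frames each of those JSON objects with a Content-Length header followed by a blank line, much like an HTTP request without the status line. A minimal sketch of that framing (the hover payload here is just illustrative):

```rust
// Frame a JSON-RPC payload the way the LSP base protocol requires:
// a Content-Length header, a blank line, then the JSON body.
fn frame(body: &str) -> String {
    format!("Content-Length: {}\r\n\r\n{}", body.len(), body)
}

fn main() {
    let body = r#"{"jsonrpc":"2.0","id":1,"method":"textDocument/hover","params":{}}"#;
    // This is the byte sequence the editor writes to the server's
    // stdin (or socket); responses come back framed the same way.
    println!("{}", frame(body));
}
```

Libraries like tower-lsp-server handle this framing for you, which is why none of the code below deals with it directly.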
Setting up a basic server with Rust
For LSP with Rust there are several choices. The most popular one is tower-lsp, but unfortunately that project hasn't been updated in about three years. There isn't much activity around the alternatives either; LSP is kind of a niche thing. For this post, we'll use tower-lsp-server, a community fork of tower-lsp that's actively maintained.
First, add the dependency:
[package]
name = "lsp-fun"
version = "0.1.0"
edition = "2024"
[dependencies]
tower-lsp-server = "0.23.0"
Next, define a struct that implements the LanguageServer trait. The trait only requires two methods — initialize and shutdown. Everything else is optional and has a default no-op implementation. In its most bare-bones form, your server does nothing at all:
use tower_lsp_server::{
LanguageServer, LspService,
ls_types::{InitializeParams, InitializeResult},
};
#[derive(Debug)]
struct Backend {}
impl LanguageServer for Backend {
async fn initialize(
&self,
_params: InitializeParams,
) -> tower_lsp_server::jsonrpc::Result<InitializeResult> {
Ok(InitializeResult::default())
}
async fn shutdown(&self) -> tower_lsp_server::jsonrpc::Result<()> {
Ok(())
}
}
fn main() {
let lsp_service = LspService::new(|_client| Backend {});
dbg!(lsp_service);
}
Notice there are no dependencies on Tokio or anything async-runtime-specific yet. This code would even compile to WASM, which means you could run it in the browser.
Run cargo run and you'll see the full list of methods your service can route:
[src/main.rs:24:5] lsp_service = (
LspService {
inner: Router {
server: Backend,
methods: [
"textDocument/foldingRange",
"textDocument/references",
"workspace/symbol",
"textDocument/prepareTypeHierarchy",
"notebookDocument/didChange",
"textDocument/signatureHelp",
"textDocument/linkedEditingRange",
"textDocument/formatting",
"notebookDocument/didClose",
"textDocument/typeDefinition",
"workspace/didCreateFiles",
"$/cancelRequest",
"typeHierarchy/supertypes",
"textDocument/declaration",
"initialized",
"textDocument/didSave",
"textDocument/semanticTokens/full/delta",
"workspace/didRenameFiles",
"textDocument/definition",
"textDocument/inlayHint",
"textDocument/onTypeFormatting",
"textDocument/semanticTokens/full",
"documentLink/resolve",
"workspace/didChangeWorkspaceFolders",
"textDocument/rename",
"textDocument/didOpen",
"textDocument/prepareCallHierarchy",
"notebookDocument/didSave",
"textDocument/documentLink",
"workspace/willCreateFiles",
"textDocument/documentHighlight",
"workspaceSymbol/resolve",
"workspace/willDeleteFiles",
"callHierarchy/outgoingCalls",
"textDocument/completion",
"textDocument/didChange",
"workspace/didChangeWatchedFiles",
"codeLens/resolve",
"workspace/didChangeConfiguration",
"textDocument/willSaveWaitUntil",
"textDocument/codeLens",
"textDocument/inlineValue",
"textDocument/codeAction",
"workspace/diagnostic",
"callHierarchy/incomingCalls",
"textDocument/moniker",
"textDocument/colorPresentation",
"textDocument/documentColor",
"textDocument/willSave",
"textDocument/hover",
"textDocument/documentSymbol",
"typeHierarchy/subtypes",
"exit",
"textDocument/diagnostic",
"textDocument/implementation",
"workspace/didDeleteFiles",
"textDocument/didClose",
"completionItem/resolve",
"initialize",
"notebookDocument/didOpen",
"workspace/willRenameFiles",
"codeAction/resolve",
"textDocument/selectionRange",
"inlayHint/resolve",
"workspace/executeCommand",
"textDocument/rangeFormatting",
"shutdown",
"textDocument/prepareRename",
"textDocument/semanticTokens/range",
],
},
state: Uninitialized,
},
ClientSocket {
rx: Receiver {
closed: false,
},
pending: {},
state: Uninitialized,
},
)
LspService::new returns a tuple: the service itself and a ClientSocket, which is essentially a tx/rx channel you'll use later to push messages from the server to the editor and vice versa.
Poking the server without an editor
You don't even need an editor (or a server) to test the service. We can manually execute a JSON-RPC initialize request and inspect the response. Add tower-service, serde_json, and futures to your dependencies and try this:
use tower_lsp_server::{
LanguageServer, LspService,
ls_types::{InitializeParams, InitializeResult},
};
use tower_service::Service;
#[derive(Debug)]
struct Backend {}
impl LanguageServer for Backend {
async fn initialize(
&self,
_params: InitializeParams,
) -> tower_lsp_server::jsonrpc::Result<InitializeResult> {
Ok(InitializeResult::default())
}
async fn shutdown(&self) -> tower_lsp_server::jsonrpc::Result<()> {
Ok(())
}
}
fn main() {
let (mut lsp_service, _socket) = LspService::new(|_client| Backend {});
let raw_initialize_request = serde_json::json!({
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"capabilities": {},
"processId": null,
"rootUri": null
}
});
let initialize_request: tower_lsp_server::jsonrpc::Request =
serde_json::from_value(raw_initialize_request).unwrap();
let future = lsp_service.call(initialize_request);
let response = futures::executor::block_on(future);
println!("Server response: {:#?}", response);
}
Output:
Server response: Ok(
Some(
Response {
jsonrpc: Version,
result: Object {
"capabilities": Object {},
},
id: Number(
1,
),
},
),
)
The server replied with an empty capabilities object, which makes sense since we declared none. This separation between protocol and server is really useful, and it's one of the things I like most about Rust and its ecosystem; you can already see how unit tests could be written without needing to emulate a server or editor.
Connecting to a real editor
Now let's wire up a real TCP server and connect it to Neovim. Add Tokio:
# add tokio
tokio = { version = "1.50.0", features = [
"macros",
"rt-multi-thread",
"io-std",
"io-util",
"net",
] }
use tower_lsp_server::{
Client, LanguageServer, LspService, Server,
ls_types::{InitializeParams, InitializeResult, InitializedParams, MessageType},
};
#[derive(Debug)]
struct Backend {
client: Client,
}
impl LanguageServer for Backend {
async fn initialize(
&self,
_params: InitializeParams,
) -> tower_lsp_server::jsonrpc::Result<InitializeResult> {
Ok(InitializeResult::default())
}
async fn initialized(&self, _: InitializedParams) {
self.client
.log_message(MessageType::INFO, "server initialized!")
.await;
}
async fn shutdown(&self) -> tower_lsp_server::jsonrpc::Result<()> {
Ok(())
}
}
#[tokio::main]
async fn main() {
let address = "127.0.0.1:9292";
let listener = tokio::net::TcpListener::bind(address).await.unwrap();
println!("listening on {}", address);
// wait for the editor to connect to the TCP socket
let (stream, _) = listener.accept().await.unwrap();
let (read, write) = tokio::io::split(stream);
let (service, socket) = LspService::new(|client| Backend { client });
Server::new(read, write, socket).serve(service).await;
}
The Client struct (injected by the framework in the closure) is how you interact with the editor. Here we use it to send a log message right after initialization. The server can push info to the editor at any time through this handle.
With the server running, connect from Neovim with a single command:
:lua vim.lsp.start({ name = 'custom_tcp', cmd = vim.lsp.rpc.connect('127.0.0.1', 9292), root_dir = vim.fn.getcwd() })
Run :LspInfo and you'll see it's alive:
vim.lsp: Active Clients ~
- custom_tcp (id: 4)
- Version: ? (no serverInfo.version response)
- Root directory: ~/Documents/lsp-trials
- Command: <function @/usr/share/nvim/runtime/lua/vim/lsp/rpc.lua:626>
- Settings: {}
- Attached buffers: 24
And there we have a real working LSP server.
Endless possibilities
So what can you actually do with this? Let's start with something silly to get a feel for it, then build up to more nonsense.
Custom autocomplete
Let's add a completion handler that triggers on % and suggests one item. We advertise the capability in initialize, then implement the completion method:
use tower_lsp_server::{
Client, LanguageServer, LspService, Server,
ls_types::{
CompletionItem, CompletionItemKind, CompletionOptions, CompletionParams,
CompletionResponse, InitializeParams, InitializeResult, InitializedParams, MessageType,
ServerCapabilities,
},
};
#[derive(Debug)]
struct Backend {
client: Client,
}
impl LanguageServer for Backend {
async fn initialize(
&self,
_params: InitializeParams,
) -> tower_lsp_server::jsonrpc::Result<InitializeResult> {
Ok(InitializeResult {
capabilities: ServerCapabilities {
completion_provider: Some(CompletionOptions {
trigger_characters: Some(vec!["%".to_string()]),
..Default::default()
}),
..Default::default()
},
..Default::default()
})
}
async fn completion(
&self,
_params: CompletionParams,
) -> tower_lsp_server::jsonrpc::Result<Option<CompletionResponse>> {
// Create the autocomplete suggestion
let suggestion = CompletionItem {
label: "Select Me!".to_string(), // What shows up in the Neovim popup list
detail: Some("My Advice is to Select Me!".to_string()), // The helpful ghost text next to it
kind: Some(CompletionItemKind::VARIABLE), // Gives it a nice variable icon in Neovim
insert_text: Some("What's up??!".to_string()), // What actually gets inserted into the file!
..Default::default()
};
// Send it back to the editor as a list of options
Ok(Some(CompletionResponse::Array(vec![suggestion])))
}
async fn initialized(&self, _: InitializedParams) {
self.client
.log_message(MessageType::INFO, "server initialized!")
.await;
}
async fn shutdown(&self) -> tower_lsp_server::jsonrpc::Result<()> {
Ok(())
}
}
#[tokio::main]
async fn main() {
let address = "127.0.0.1:9292";
let listener = tokio::net::TcpListener::bind(address).await.unwrap();
println!("listening on {}", address);
// wait for the editor to connect to the TCP socket
let (stream, _) = listener.accept().await.unwrap();
let (read, write) = tokio::io::split(stream);
let (service, socket) = LspService::new(|client| Backend { client });
Server::new(read, write, socket).serve(service).await;
}
Type % in your editor, and the autocomplete popup appears. label is what shows in the list, detail is the ghost text next to it, and insert_text is what actually gets written into your file when you accept the suggestion.
EU Omniscient Chat Control
The server can also modify the document in response to changes. Here's something that some people will love: a server that watches for a specific phrase and replaces it on the fly using apply_edit:
use std::collections::HashMap;
use tower_lsp_server::{
Client, LanguageServer, LspService, Server,
ls_types::{
DidChangeTextDocumentParams, InitializeParams, InitializeResult, InitializedParams,
MessageType, Position, Range, ServerCapabilities, TextDocumentSyncCapability,
TextDocumentSyncKind, TextEdit, WorkspaceEdit,
},
};
#[derive(Debug)]
struct Backend {
client: Client,
}
impl LanguageServer for Backend {
async fn initialize(
&self,
_params: InitializeParams,
) -> tower_lsp_server::jsonrpc::Result<InitializeResult> {
Ok(InitializeResult {
capabilities: ServerCapabilities {
text_document_sync: Some(TextDocumentSyncCapability::Kind(
TextDocumentSyncKind::FULL,
)),
..Default::default()
},
..Default::default()
})
}
async fn did_change(&self, params: DidChangeTextDocumentParams) {
let uri = params.text_document.uri.clone();
// we do a full sync where we get the whole file,
// certainly not efficient but we are the government!
if let Some(change) = params.content_changes.first() {
let text = &change.text;
let new_text = text.replace("EU Commission sucks", "CSAM aficionado");
let line_count = text.lines().count() as u32;
let edit = TextEdit {
range: Range {
start: Position {
line: 0,
character: 0,
},
end: Position {
line: line_count + 1,
character: 0,
},
},
new_text,
};
let mut changes = HashMap::new();
changes.insert(uri, vec![edit]);
let workspace_edit = WorkspaceEdit {
changes: Some(changes),
document_changes: None,
change_annotations: None,
};
let _ = self.client.apply_edit(workspace_edit).await;
}
}
async fn initialized(&self, _: InitializedParams) {
self.client
.log_message(MessageType::INFO, "server initialized!")
.await;
}
async fn shutdown(&self) -> tower_lsp_server::jsonrpc::Result<()> {
Ok(())
}
}
#[tokio::main]
async fn main() {
let address = "127.0.0.1:9292";
let listener = tokio::net::TcpListener::bind(address).await.unwrap();
println!("listening on {}", address);
// wait for the editor to connect to the TCP socket
let (stream, _) = listener.accept().await.unwrap();
let (read, write) = tokio::io::split(stream);
let (service, socket) = LspService::new(|client| Backend { client });
Server::new(read, write, socket).serve(service).await;
}
Now every time you type EU Commission sucks, the text is silently rewritten. You could take it further, like sending an API request to alert the relevant parties. Think about the endless possibilities!
Building an AI chatbot inside your editor
How about something outright stupid (or maybe not)? Lines starting with ## and followed by an empty line trigger an API request to an OpenAI-compatible endpoint, and the response is inserted on the line below.
[dependencies]
tokio = { version = "1.50.0", features = [
"macros",
"rt-multi-thread",
"io-std",
"io-util",
"net",
] }
tower-lsp-server = "0.23.0"
reqwest = { version = "0.12", features = ["json"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
use std::collections::{HashMap, HashSet};
use std::sync::Arc;
use std::time::Duration;
use serde::{Deserialize, Serialize};
use tokio::sync::RwLock;
use tower_lsp_server::{
Client, LanguageServer, LspService, Server,
ls_types::{
DidChangeTextDocumentParams, InitializeParams, InitializeResult, InitializedParams,
MessageType, Position, Range, ServerCapabilities, TextDocumentSyncCapability,
TextDocumentSyncKind, TextEdit, WorkspaceEdit,
},
};
const API_ENDPOINT: &str = "https://api.z.ai/api/coding/paas/v4/chat/completions";
const API_KEY: &str = "your-api-key";
#[derive(Debug)]
struct Backend {
client: Client,
processing: Arc<RwLock<HashSet<String>>>,
}
#[derive(Serialize)]
struct ChatRequest {
model: String,
messages: Vec<ChatMessage>,
}
#[derive(Serialize, Deserialize)]
struct ChatMessage {
role: String,
content: String,
}
#[derive(Deserialize)]
struct ChatResponse {
choices: Vec<ChatChoice>,
}
#[derive(Deserialize)]
struct ChatChoice {
message: ChatMessage,
}
async fn query_ai(http_client: &reqwest::Client, prompt: &str) -> Result<String, String> {
let request = ChatRequest {
model: "glm-4.6".to_string(),
messages: vec![ChatMessage {
role: "user".to_string(),
content: prompt.to_string(),
}],
};
let response = http_client
.post(API_ENDPOINT)
.header("Authorization", format!("Bearer {}", API_KEY))
.header("Content-Type", "application/json")
.json(&request)
.timeout(Duration::from_secs(30))
.send()
.await
.map_err(|e| format!("Request failed: {}", e))?;
if !response.status().is_success() {
let status = response.status();
let body = response.text().await.unwrap_or_default();
return Err(format!("API error {}: {}", status, body));
}
let chat_response: ChatResponse = response
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
chat_response
.choices
.first()
.map(|c| c.message.content.clone())
.ok_or_else(|| "No response from API".to_string())
}
impl LanguageServer for Backend {
async fn initialize(
&self,
_params: InitializeParams,
) -> tower_lsp_server::jsonrpc::Result<InitializeResult> {
Ok(InitializeResult {
capabilities: ServerCapabilities {
text_document_sync: Some(TextDocumentSyncCapability::Kind(
TextDocumentSyncKind::FULL,
)),
..Default::default()
},
..Default::default()
})
}
async fn did_change(&self, params: DidChangeTextDocumentParams) {
let uri = params.text_document.uri.clone();
let uri_str = uri.to_string();
if let Some(change) = params.content_changes.first() {
let text = &change.text;
let lines: Vec<&str> = text.lines().collect();
for (line_idx, line) in lines.iter().enumerate() {
let key = format!("{}:{}", uri_str, line_idx);
// Check if this line starts with ##
if let Some(prompt) = line.strip_prefix("## ") {
if line_idx + 1 >= lines.len() {
continue; // Still typing, don't trigger yet
}
let next_line = lines[line_idx + 1].trim();
if !next_line.is_empty() {
continue; // Next line has content, not a fresh enter
}
// Skip if already processing
{
let processing = self.processing.read().await;
if processing.contains(&key) {
continue;
}
}
// Skip if there's already a response or loading indicator
if line_idx + 2 < lines.len() {
let after_next = lines[line_idx + 2];
if after_next.starts_with("⏳") || after_next.starts_with("→") {
continue;
}
}
// Mark as processing
{
let mut processing = self.processing.write().await;
processing.insert(key.clone());
}
// Build new text with loading indicator (replace the empty line after prompt)
let mut new_lines: Vec<String> = lines.iter().map(|s| s.to_string()).collect();
new_lines[line_idx + 1] = "⏳ querying...".to_string();
let new_text = new_lines.join("\n");
let line_count = text.lines().count() as u32;
let edit = TextEdit {
range: Range {
start: Position {
line: 0,
character: 0,
},
end: Position {
line: line_count,
character: 0,
},
},
new_text,
};
let mut changes = HashMap::new();
changes.insert(uri.clone(), vec![edit]);
let workspace_edit = WorkspaceEdit {
changes: Some(changes),
document_changes: None,
change_annotations: None,
};
if self.client.apply_edit(workspace_edit).await.is_err() {
let mut processing = self.processing.write().await;
processing.remove(&key);
continue;
}
// Store state for the async response
let client = self.client.clone();
let processing = self.processing.clone();
let key_clone = key.clone();
let prompt = prompt.to_string();
let uri_clone = uri.clone();
let original_lines: Vec<String> = lines.iter().map(|s| s.to_string()).collect();
// Spawn background task to query AI and update document
tokio::spawn(async move {
let http_client = reqwest::Client::new();
let response = match query_ai(&http_client, &prompt).await {
Ok(r) => r,
Err(e) => format!("Error: {}", e),
};
// Build final text with response (replace the empty line after prompt)
let mut final_lines = original_lines.clone();
final_lines[line_idx + 1] = format!("→ {}", response);
let final_text = final_lines.join("\n");
let final_line_count = final_lines.len() as u32;
let edit = TextEdit {
range: Range {
start: Position {
line: 0,
character: 0,
},
end: Position {
line: final_line_count + 1,
character: 0,
},
},
new_text: final_text,
};
let mut changes = HashMap::new();
changes.insert(uri_clone, vec![edit]);
let workspace_edit = WorkspaceEdit {
changes: Some(changes),
document_changes: None,
change_annotations: None,
};
let _ = client.apply_edit(workspace_edit).await;
// Remove from processing set
let mut proc = processing.write().await;
proc.remove(&key_clone);
});
return; // Process one query at a time
}
}
// Clean up processing keys for lines that no longer exist
let mut processing = self.processing.write().await;
processing.retain(|key| {
if let Some(line_part) = key.strip_prefix(&format!("{}:", uri_str)) {
if let Ok(line_num) = line_part.parse::<usize>() {
return line_num < lines.len();
}
}
false
});
}
}
async fn initialized(&self, _: InitializedParams) {
self.client
.log_message(
MessageType::INFO,
"AI LSP server ready! Type '## your question' then press Enter to query AI.",
)
.await;
}
async fn shutdown(&self) -> tower_lsp_server::jsonrpc::Result<()> {
Ok(())
}
}
#[tokio::main]
async fn main() {
let address = "127.0.0.1:9292";
let listener = tokio::net::TcpListener::bind(address).await.unwrap();
println!("AI LSP server listening on {}", address);
// Wait for the editor to connect to the TCP socket
let (stream, _) = listener.accept().await.unwrap();
let (read, write) = tokio::io::split(stream);
let processing = Arc::new(RwLock::new(HashSet::new()));
let (service, socket) = LspService::new(|client| Backend { client, processing });
Server::new(read, write, socket).serve(service).await;
}
The previous code was partly generated by LLMs, so it's more of a toy program than something I'd recommend using, especially since requests to LLM endpoints cost real money.
Why aren't LSP servers more popular?
So why aren't LSP servers used more widely beyond programming languages? Honestly, I'm not sure. I'm still new to this area, so I can't fully assess whether LSP would make sense as an alternative to MCP. You could argue that LSP has certain limitations because it's built around a fixed set of methods, or that it was designed specifically for editors. Then again, maybe that was for the best, given what's going on with AI right now.