Cursor Rules for Rust: The Complete Guide to AI-Assisted Rust Development
Cursor writes Rust that compiles. That's the easy part. The hard part is getting it to write Rust that's idiomatic — code that respects ownership, uses the type system properly, integrates with cargo workflows, and handles async correctly.
I've been refining .cursorrules configurations for Rust projects over the past year. This guide covers the rules that made the biggest difference, organized by the problems they solve: error handling, ownership, cargo integration, async patterns, and testing.
If you've been manually fixing AI-generated .unwrap() calls and unnecessary .clone() spam, these rules will save you hours.
How Cursor Rules Work for Rust Projects
Cursor reads rules from two places:
- `.cursorrules` — a single file at your project root
- `.cursor/rules/*.mdc` — modular rule files (recommended for larger projects)
For Rust projects, I use modular rules:
.cursor/
  rules/
    rust-core.mdc     # ownership, error handling, types
    rust-async.mdc    # tokio, async patterns
    rust-cargo.mdc    # cargo commands, dependencies
    rust-testing.mdc  # test conventions
Each .mdc file can include frontmatter that controls when the rule activates:
---
description: Rust async patterns for tokio-based services
globs: ["src/**/*.rs"]
alwaysApply: false
---
This means async rules only activate when you're editing .rs files — they don't pollute context when you're writing documentation or CI configs.
Rule 1: No .unwrap() in Production — Enforce ? and Context
The single most impactful rule. AI models treat .unwrap() as a default because it makes code compile with minimal effort.
Never use .unwrap() or .expect() in non-test code. Use the ? operator
with anyhow::Context or custom error types. Every error must carry
context about what operation failed and why.
For optional values, prefer:
- .unwrap_or_default() when a default makes sense
- .ok_or_else(|| ...)? to convert Option to Result
- if let Some(x) for conditional logic
Without this rule:
fn fetch_user_settings(user_id: &str) -> Settings {
    let client = reqwest::blocking::Client::new();
    let resp = client
        .get(format!("https://api.example.com/users/{user_id}/settings"))
        .send()
        .unwrap();
    let body = resp.text().unwrap();
    serde_json::from_str(&body).unwrap()
}
Three panics. If the API is down, your whole service crashes.
With this rule:
use anyhow::{Context, Result};

fn fetch_user_settings(user_id: &str) -> Result<Settings> {
    let client = reqwest::blocking::Client::new();
    let resp = client
        .get(format!("https://api.example.com/users/{user_id}/settings"))
        .send()
        .context("failed to reach settings API")?;
    let body = resp
        .text()
        .context("failed to read response body")?;
    let settings: Settings = serde_json::from_str(&body)
        .with_context(|| format!("invalid JSON in settings for user {user_id}"))?;
    Ok(settings)
}
Every failure explains what went wrong and which user was affected. Your logs become useful.
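The Option-handling bullets from this rule can be sketched with plain std types (the config map and key names here are hypothetical):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
struct MissingKey(&'static str);

// .unwrap_or_default(): absence has a sensible default, so no error is needed.
fn max_retries(config: &HashMap<&str, u32>) -> u32 {
    config.get("max_retries").copied().unwrap_or_default()
}

// .ok_or_else(...)?: absence is an error, so convert the Option into a Result.
fn require_port(config: &HashMap<&str, u16>) -> Result<u16, MissingKey> {
    let port = config.get("port").copied().ok_or_else(|| MissingKey("port"))?;
    Ok(port)
}

// if let Some(x): conditional logic when only the present case matters.
fn describe_override(config: &HashMap<&str, u32>) -> String {
    if let Some(limit) = config.get("rate_limit") {
        format!("rate limit overridden to {limit}")
    } else {
        String::from("rate limit at default")
    }
}
```

Each helper picks the conversion that matches whether absence is normal, an error, or just a branch.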
Rule 2: Respect Ownership — Stop Cloning Everything
AI models discover .clone() as a universal borrow-checker escape hatch. It works, but it's the Rust equivalent of using any in TypeScript.
Do not use .clone() to satisfy the borrow checker. Prefer:
- &T for read-only access
- &mut T for mutation
- &[T] instead of &Vec<T>
- &str instead of &String
If .clone() is genuinely needed (e.g., sending data to another thread),
add a brief comment explaining why.
Without this rule:
fn process_batch(orders: Vec<Order>, config: Config) {
    for order in orders.clone() {
        validate_order(order.clone(), config.clone());
        if order.total > config.threshold {
            flag_for_review(order.clone());
        }
    }
    write_summary(orders.clone(), config.clone());
}
With this rule:
fn process_batch(orders: &[Order], config: &Config) {
    for order in orders {
        validate_order(order, config);
        if order.total > config.threshold {
            flag_for_review(order);
        }
    }
    write_summary(orders, config);
}
Zero allocations. The function borrows everything it needs.
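When a clone genuinely is needed, the rule asks for a justifying comment. A std-only sketch of the thread-handoff case (`audit_in_background` and the threshold are made up for illustration):

```rust
use std::thread;

#[derive(Debug, Clone, PartialEq)]
struct Order {
    id: u32,
    total: u64,
}

fn audit_in_background(orders: &[Order]) -> thread::JoinHandle<usize> {
    // Clone is justified here: the spawned thread must own its data,
    // because it may outlive this function's borrow of `orders`.
    let owned: Vec<Order> = orders.to_vec();
    thread::spawn(move || owned.iter().filter(|o| o.total > 1_000).count())
}
```

In a test, the caller joins the handle when it needs the count: `let flagged = audit_in_background(&orders).join().unwrap();` — and the caller still owns the original `orders`.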
Rule 3: Integrate with Cargo Workflow
This is where most Cursor rule guides stop — but getting the cargo integration right matters as much as the code patterns.
When modifying Rust code:
- Always check that `cargo check` passes before suggesting changes are complete
- Run `cargo clippy -- -D warnings` and fix all warnings
- Run `cargo fmt` to ensure consistent formatting
- When adding dependencies, use `cargo add <crate>` not manual Cargo.toml edits
- Prefer workspace dependencies for multi-crate projects
- Always specify minimum versions: use "1.4" not "*"
- Check for unused dependencies with `cargo udeps` when removing code
Cargo.toml conventions:
- Group dependencies: std-lib extensions, then framework, then utilities
- Use workspace = true for shared deps in workspace members
- Always include edition = "2021" or "2024"
- Set rust-version to your MSRV
Without this rule, Cursor adds dependencies by manually editing Cargo.toml — often with incorrect version syntax, missing features, or duplicated entries:
# ❌ Bad: manual edit, wildcard version, missing feature flags
[dependencies]
serde = "*"
tokio = "1"
sqlx = "0.7"
With this rule:
# ✅ Good: specific versions, required features explicit
[dependencies]
serde = { version = "1.0", features = ["derive"] }
tokio = { version = "1.38", features = ["macros", "rt-multi-thread"] }
sqlx = { version = "0.8", features = ["runtime-tokio", "postgres"] }
Cargo Workspace Rules
For monorepo projects, add a workspace-specific rule:
This is a cargo workspace. Rules:
- Shared dependencies go in workspace Cargo.toml [workspace.dependencies]
- Member crates use `dep.workspace = true`
- Run `cargo check --workspace` not just `cargo check`
- Run `cargo test --workspace` to catch cross-crate breakage
- Use `cargo clippy --workspace` for full lint coverage
This prevents Cursor from adding the same dependency at different versions across workspace members.
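Under these workspace rules, the two Cargo.toml files might look like this (the crate names `api` and `core` are placeholders):

```toml
# Workspace root Cargo.toml
[workspace]
members = ["api", "core"]
resolver = "2"

[workspace.dependencies]
serde = { version = "1.0", features = ["derive"] }
tokio = { version = "1.38", features = ["macros", "rt-multi-thread"] }

# Member crate api/Cargo.toml — the version is declared exactly once, above
[dependencies]
serde.workspace = true
tokio.workspace = true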
Rule 4: Async/Await Patterns with Tokio
Async Rust is where AI models produce the most subtly broken code. The patterns look right but cause deadlocks, unnecessary boxing, or performance issues.
This project uses tokio as the async runtime.
Async rules:
- Never use .block_on() inside an async context — it panics
- Never hold a MutexGuard (std or tokio) across an .await point
- Prefer tokio::sync::Mutex only when the lock must be held across .await
- Use std::sync::Mutex for synchronous critical sections (even in async code)
- Use tokio::spawn for CPU-independent concurrent tasks
- Use tokio::task::spawn_blocking for CPU-heavy or blocking I/O work
- Always use select! with a timeout or cancellation branch
- Prefer structured concurrency: JoinSet over scattered spawn calls
- Stream processing: prefer tokio_stream or async-stream over manual polling
Without this rule, Cursor generates async code that deadlocks:
// ❌ Bad: holds a std::sync::MutexGuard across .await — the future
// becomes !Send and can deadlock under contention
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

async fn update_cache(cache: Arc<Mutex<HashMap<String, Value>>>, key: String) {
    let mut guard = cache.lock().unwrap();
    let fresh = fetch_from_db(&key).await; // .await while holding the lock!
    guard.insert(key, fresh);
}
With this rule:
// ✅ Good: fetch first, then lock briefly
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

async fn update_cache(cache: Arc<Mutex<HashMap<String, Value>>>, key: String) {
    let fresh = fetch_from_db(&key).await;
    let mut guard = cache.lock().unwrap();
    guard.insert(key, fresh);
    // guard dropped immediately
}
Structured Concurrency with JoinSet
Another pattern AI models consistently get wrong — spawning tasks without tracking them:
// ❌ Bad: fire-and-forget spawns, no error handling
async fn process_all(items: Vec<Item>) {
    for item in items {
        tokio::spawn(async move {
            process_item(item).await;
        });
    }
    // Who knows if they finished? Who catches panics?
}
// ✅ Good: JoinSet tracks all tasks, surfaces errors
use anyhow::{Context, Result};
use tokio::task::JoinSet;

async fn process_all(items: Vec<Item>) -> Result<()> {
    let mut set = JoinSet::new();
    for item in items {
        set.spawn(async move { process_item(item).await });
    }
    while let Some(result) = set.join_next().await {
        result.context("task panicked")?.context("processing failed")?;
    }
    Ok(())
}
Timeout and Cancellation
// ✅ Good: every external call has a timeout
use anyhow::Result;
use tokio::time::{timeout, Duration};

async fn fetch_with_fallback(url: &str) -> Result<reqwest::Response> {
    match timeout(Duration::from_secs(5), reqwest::get(url)).await {
        Ok(Ok(resp)) => Ok(resp),
        Ok(Err(e)) => Err(e.into()),
        Err(_) => Err(anyhow::anyhow!("request to {url} timed out after 5s")),
    }
}
Rule 5: Type-Driven Error Handling — Not Strings
Never use String as an error type. Use:
- thiserror::Error for library/domain errors (callers match on variants)
- anyhow::Error for application-level code (callers just propagate)
- Never implement Display manually when thiserror can derive it
Error enum variants should map to failure modes, not code locations.
Name them after what went wrong, not where it happened.
Without this rule:
fn parse_config(input: &str) -> Result<Config, String> {
    let parsed: Value = serde_json::from_str(input)
        .map_err(|e| format!("JSON error: {e}"))?;
    let port = parsed["port"].as_u64()
        .ok_or("missing port".to_string())?;
    Ok(Config { port: port as u16 })
}
With this rule:
use serde_json::Value;
use thiserror::Error;

#[derive(Debug, Error)]
enum ConfigError {
    #[error("invalid JSON: {0}")]
    InvalidJson(#[from] serde_json::Error),
    #[error("missing required field: {field}")]
    MissingField { field: &'static str },
    #[error("port {port} is out of valid range (1-65535)")]
    InvalidPort { port: u64 },
}

fn parse_config(input: &str) -> Result<Config, ConfigError> {
    let parsed: Value = serde_json::from_str(input)?;
    let port = parsed["port"]
        .as_u64()
        .ok_or(ConfigError::MissingField { field: "port" })?;
    if port == 0 || port > 65535 {
        return Err(ConfigError::InvalidPort { port });
    }
    Ok(Config { port: port as u16 })
}
Callers can match on ConfigError::MissingField vs ConfigError::InvalidJson and handle them differently.
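A caller sketch of that match (the enum is pared down to a std-only stand-in for the ConfigError above, and the 8080 fallback is a made-up policy):

```rust
// Std-only stand-in for the ConfigError defined above.
#[derive(Debug)]
enum ConfigError {
    InvalidJson(String),
    MissingField { field: &'static str },
    InvalidPort { port: u64 },
}

fn port_or_default(result: Result<u16, ConfigError>) -> Result<u16, ConfigError> {
    match result {
        Ok(port) => Ok(port),
        // A missing port can fall back to a default...
        Err(ConfigError::MissingField { field: "port" }) => Ok(8080),
        // ...but invalid JSON or an out-of-range port stays an error.
        Err(other) => Err(other),
    }
}
```

A String error type would force callers to parse the message instead of matching a variant.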
Rule 6: Idiomatic Patterns — Let Clippy Be Your Guide
All code must pass `cargo clippy -- -D warnings`. Common patterns:
- &[T] not &Vec<T>, &str not &String
- .is_empty() not .len() == 0
- if let instead of match with one meaningful arm
- Iterator chains over manual for loops with index
- collect into specific types, not intermediate Vecs
- Use From/Into impls instead of manual conversion functions
Without this rule:
fn find_active_users(users: &Vec<User>) -> Vec<String> {
    let mut result = Vec::new();
    for i in 0..users.len() {
        match users[i].status {
            Status::Active => {
                result.push(users[i].name.clone());
            }
            _ => {}
        }
    }
    result
}
With this rule:
fn find_active_users(users: &[User]) -> Vec<&str> {
    users
        .iter()
        .filter(|u| u.status == Status::Active)
        .map(|u| u.name.as_str())
        .collect()
}
Shorter, borrows instead of cloning, and zero clippy warnings.
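The From/Into bullet from the rule above, as a std-only sketch (ApiUser and DomainUser are hypothetical types):

```rust
struct ApiUser {
    name: String,
    active: bool,
}

struct DomainUser {
    display_name: String,
}

// A From impl instead of a free api_user_to_domain(user) helper:
// callers get .into() for free, and the conversion composes with iterators.
impl From<ApiUser> for DomainUser {
    fn from(u: ApiUser) -> Self {
        DomainUser { display_name: u.name }
    }
}
```

Callers convert with `let domain: DomainUser = api_user.into();`, or with `.map(DomainUser::from)` inside an iterator chain.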
Rule 7: Testing Conventions
Testing rules:
- Use #[cfg(test)] mod tests for unit tests in the same file
- Integration tests go in tests/ directory
- .unwrap() and .expect() are acceptable in test code
- Use assert_eq! with descriptive messages: assert_eq!(got, want, "user {id} should be active")
- Test error paths, not just happy paths
- Use #[should_panic(expected = "...")] for panic tests
- For async tests: use #[tokio::test] not manual runtime setup
- Prefer proptest or quickcheck for property-based testing on parsers/serializers
Without this rule:
#[test]
fn test_parse() {
    let result = parse_config(r#"{"port": 8080}"#);
    assert!(result.is_ok());
}
One test, only checks the happy path, no descriptive message.
With this rule:
#[cfg(test)]
mod tests {
    use super::*;
    use tokio::time::{timeout, Duration};

    #[test]
    fn parse_config_extracts_port() {
        let config = parse_config(r#"{"port": 8080}"#).unwrap();
        assert_eq!(config.port, 8080, "should extract port from valid JSON");
    }

    #[test]
    fn parse_config_rejects_missing_port() {
        let err = parse_config(r#"{"host": "localhost"}"#).unwrap_err();
        assert!(
            matches!(err, ConfigError::MissingField { field: "port" }),
            "should return MissingField for absent port, got: {err}"
        );
    }

    #[test]
    fn parse_config_rejects_invalid_json() {
        let err = parse_config("not json").unwrap_err();
        assert!(matches!(err, ConfigError::InvalidJson(_)));
    }

    #[tokio::test]
    async fn fetch_user_settings_times_out() {
        // Uses a mock server that never responds; assumes an async
        // variant of fetch_user_settings
        let result = timeout(Duration::from_millis(100), fetch_user_settings("test-user")).await;
        assert!(result.is_err(), "should time out on an unresponsive server");
    }
}
Tests cover happy path, error paths, and async behavior. Each assertion explains what it's checking.
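The `#[should_panic(expected = "...")]` convention from the list above, sketched against a hypothetical `checked_port` helper that panics rather than returning a Result:

```rust
/// Hypothetical helper that enforces its invariant with a panic.
fn checked_port(p: u64) -> u16 {
    assert!(p > 0 && p <= 65535, "port {p} is out of valid range");
    p as u16
}

#[cfg(test)]
mod tests {
    use super::*;

    // expected = "..." matches a substring of the panic message, so the
    // test fails if the code panics for a different reason.
    #[test]
    #[should_panic(expected = "out of valid range")]
    fn checked_port_rejects_zero() {
        checked_port(0);
    }
}
```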
Copy-Paste Ready: Complete .cursorrules for Rust
Drop this into your project root or .cursor/rules/rust.mdc:
# Rust Cursor Rules
## Error Handling
- Never use .unwrap() or .expect() in non-test code
- Use ? operator with anyhow::Context for application code
- Use thiserror for library/domain error types
- Never use String as an error type
- Error enum variants describe what failed, not where
## Ownership and Borrowing
- Never .clone() to escape the borrow checker without justification
- Accept &T for read-only, &mut T for mutation
- Use &[T] not &Vec<T>, &str not &String
- Comment any necessary .clone() explaining why
## Cargo Integration
- Run cargo clippy -- -D warnings before declaring code complete
- Run cargo fmt for consistent formatting
- Use cargo add to manage dependencies, not manual Cargo.toml edits
- Specify minimum versions and required feature flags
- For workspaces: shared deps in [workspace.dependencies]
## Async/Await (tokio)
- Never .block_on() inside async context
- Never hold a MutexGuard across .await
- Use std::sync::Mutex for sync-only critical sections
- Use tokio::sync::Mutex only when lock spans .await
- Prefer JoinSet over scattered tokio::spawn
- Always timeout external calls: tokio::time::timeout
- Use spawn_blocking for CPU-heavy work
## Idiomatic Patterns
- All code must pass cargo clippy with zero warnings
- Iterator chains over manual for loops
- if let over match with one arm
- .is_empty() over .len() == 0
- From/Into impls over manual conversion functions
- Elide lifetimes when the compiler allows it
## Testing
- .unwrap() is acceptable in test code
- Test error paths, not just happy paths
- Use #[tokio::test] for async tests
- Descriptive assert messages: assert_eq!(a, b, "reason")
- Property-based tests for parsers and serializers
Want 50+ Production-Tested Rules for Every Stack?
These 7 rules cover the core of idiomatic Rust development with Cursor. But every project has more layers — CI pipelines, database migrations, deployment configs, framework-specific patterns.
My Cursor Rules Pack v2 includes 50+ battle-tested rules covering Rust, TypeScript, React, Next.js, Go, Python, and more. Each rule is organized by language and priority, with before/after examples so you know exactly what changes.
Stop correcting AI output by hand. Give Cursor the rules it needs to write Rust the way you would.