Crates I Have Known And Loved
There are many reasons I enjoy writing Rust, but close to the top is how well-designed the tooling is. As far as I'm concerned, cargo is everything I wanted out of a package manager and then some, and you can even create your own subcommands if you need to.
The Book does a fantastic job of getting you up and running, but it doesn't touch the crate ecosystem. Now that I'm several projects deep into my Rust journey, I've settled on a few "must-haves" that almost every fresh project of mine ends up pulling in. I wish I'd had this list from the get-go - some of this functionality I had been hand-implementing for far too long before I found the existing solution. If you've got more, let me know!
error-chain
This is pretty much always my first addition to any project.
One of my favorite features of Rust is the ? operator. If an operation returns a Result<T, E> you can just tack a question mark onto it and get the logic you usually want - a success continues execution and a failure early-returns the Err.

However, this only works if the error type matches (or can be converted into) the one the enclosing function returns. That often isn't the case - you may have an app-specific error type for the containing function but call something from std::io inside it, and then ? won't work unless you implement the error-type conversions all over the place yourself. The error-chain crate lets you, well, chain errors:
fn get_dir_listing(dir_str: &str) -> errors::Result<Vec<PathBuf>> {
    let dir_listing: Vec<PathBuf> = read_dir(dir_str)
        .chain_err(|| "could not read dir!")?
    //etc...
}
Now my error is just a simple string, and I'll see it in the output chained to the underlying io::Error that was generated! The crate also provides a custom Result<T> type that automatically uses your error chain. Which is nice.

In application code, strings like that generally get the job done, but for a library you'll want a more robust custom error type - this crate also provides an opinionated structure for defining one, should you so choose. I've so far been content to leave the setup completely empty - all you need is the below in your main.rs and you're good to go:
#[macro_use]
extern crate error_chain;

mod errors {
    error_chain!{}
}
Then you change main() - this is straight from the docs:
fn main() {
    if let Err(ref e) = run() {
        error!("error: {}", e);

        for e in e.iter().skip(1) {
            debug!("caused by: {}", e);
        }

        if let Some(backtrace) = e.backtrace() {
            trace!("backtrace: {:?}", backtrace);
        }

        ::std::process::exit(1);
    }
}
In the above snippet, run() is really our main function, but it's properly error-chained (it returns an errors::Result) - your whole app is covered this way. The if let syntax expresses exactly the behavior we want in a concise, clear manner. Just add use errors::* anywhere you need it.
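For context, run() is just any function that returns the chained Result type. Here's a minimal, hypothetical sketch reusing the get_dir_listing example from above (the path is made up):

fn run() -> errors::Result<()> {
    // anything fallible inside can now use `?` freely
    let listing = get_dir_listing("./src")?;
    println!("{} entries found", listing.len());
    Ok(())
}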
structopt
Structopt feels like cheating. The gold standard for scaffolding command-line apps in Rust is clap, and structopt makes it even easier to use. It lets you write the following (from the docs):
#[derive(Debug, StructOpt)]
#[structopt(name = "example", about = "An example of StructOpt usage.")]
struct Opt {
    /// Activate debug mode
    #[structopt(short = "d", long = "debug")]
    debug: bool,

    /// Set speed
    #[structopt(short = "s", long = "speed", default_value = "42")]
    speed: f64,

    /// Input file
    #[structopt(parse(from_os_str))]
    input: PathBuf,

    /// Output file, stdout if not present
    #[structopt(parse(from_os_str))]
    output: Option<PathBuf>,
}
It will automatically generate a clap::App for you! The triple-slashed docstrings in the snippet become the help line for each argument. To compare, the below is the "usual" method, from a project I built before I was Enlightened:
let matches = App::new("ar-bot")
    .version(VERSION)
    .author("deciduously <ben@deciduously.com>")
    .about("Batching of auto email alerts")
    .arg(
        Arg::with_name("config")
            .short("c")
            .long("config")
            .value_name("CONFIG_FILE")
            .takes_value(true)
            .help("Specify an alternate toml config file"),
    )
    .arg(
        Arg::with_name("digest")
            .short("d")
            .long("digest")
            .takes_value(false)
            .help("Finalizes a digest with the emails in the brain. Make sure to preview first!"),
    )
    .arg(
        Arg::with_name("email")
            .short("e")
            .long("email")
            .takes_value(false)
            .help("Placeholder command for developing email functionality"),
    )
    .arg(
        Arg::with_name("preview")
            .short("p")
            .long("preview")
            .takes_value(false)
            .help("Displays the current contents of the batch"),
    )
    .arg(
        Arg::with_name("report")
            .short("r")
            .long("report")
            .takes_value(false)
            .help("Daily report comparing inputs to outputs for the day"),
    )
    .arg(
        Arg::with_name("verbose")
            .short("v")
            .multiple(true)
            .help("Set RUST_LOG verbosity. There are three levels: info, debug, and trace. Repeat the flag to set level: -v, -vv, -vvv."),
    )
    .get_matches();
It's a lot more typing for the same endgame, and with structopt everything ends up handily stored in your Opt struct at the end. Struct-opt :)
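Getting at the parsed values is one line in main() - from_args() comes from the StructOpt trait, so it just needs to be in scope (depending on your structopt version you may also need the usual extern crate line):

use structopt::StructOpt;

fn main() {
    // parses the arguments, prints help and exits on error, just like clap
    let opt = Opt::from_args();
    println!("{:?}", opt);
}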
envy
This crate is similar to structopt, but for environment variables. You define a struct and it can auto-fill it from any environment variables present:
#[macro_use]
extern crate serde_derive;
extern crate envy;

#[derive(Deserialize, Debug)]
struct Config {
    foo: u16,
    bar: bool,
    baz: String,
    boom: Option<u64>,
}

fn main() {
    match envy::from_env::<Config>() {
        Ok(config) => println!("{:#?}", config),
        Err(error) => panic!("{:#?}", error),
    }
}
Now it will automatically read the FOO, BAR, BAZ, and BOOM env vars at runtime. It's another task that's not necessarily difficult to do by hand, but it's tedious, and you end up doing it over and over in project after project.
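For a sense of what that saves you, here's a rough hand-rolled sketch covering just the first two fields of the Config struct above (the helper name is hypothetical) - one read, one parse, and one error message per field:

use std::env;

fn config_by_hand() -> Result<(u16, bool), String> {
    // read and parse FOO, surfacing either failure as a string
    let foo = env::var("FOO")
        .map_err(|e| format!("FOO: {}", e))?
        .parse::<u16>()
        .map_err(|e| format!("FOO: {}", e))?;
    // and again for BAR - and again for every other field...
    let bar = env::var("BAR")
        .map_err(|e| format!("BAR: {}", e))?
        .parse::<bool>()
        .map_err(|e| format!("BAR: {}", e))?;
    Ok((foo, bar))
}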
serde
Serde, at least to me, feels so intertwined with Rust that I'm sure this isn't a surprise to anyone, but it's a seriously solid solution. Super sound, stupendously speedy. Say that five times fast.

Mouthful aside, serde is a no-brainer when you need to do any serializing or deserializing, which is... usually. I'm not even including a snippet, because in most cases it can derive all the functionality you need with a single annotation, and it's not hard to hand-implement the traits yourself if you need to. It's fast and simple!
cargo-watch
Watch your files for changes and re-run the cargo subcommands of your choosing with, for example, cargo watch -x test -x run. I don't have anything more to say - that pretty much speaks for itself. This is a must-have for me.
pretty_env_logger
This is kind of a twofer - it's a colorful wrapper around env_logger. I didn't start using the latter until I found this crate, though, and the colors are nice.

env_logger lets you set the logging output level via an environment variable. Then you use the macros from the log crate: error!, warn!, info!, debug!, trace!. When you run your code, only the messages at or above the level you specified will display. This is a serious step up from println debugging - you can leave your debug printouts in, and they stay suppressed in normal usage unless you ask for a more verbose level.
I'm sure there's a better way to do this, but I've been dropping the below function into each project that uses the logging tools and it's working well enough for me:
use std::env::{set_var, var};
use errors::*; // error-chain's Result and chain_err, from earlier

fn init_logging(level: u64) -> Result<()> {
    let verbosity = match level {
        0 => "warn",
        1 => "info",
        2 => "debug",
        _ => "trace",
    };
    if verbosity == "trace" {
        set_var("RUST_BACKTRACE", "1");
    };
    set_var("RUST_LOG", verbosity);
    // init the logger before calling any log macros - anything logged earlier is silently dropped
    pretty_env_logger::init();
    info!(
        "Attempting to set logger to {}",
        var("RUST_LOG").chain_err(|| "Failed to set verbosity level")?
    );
    info!(
        "Set verbosity to {}",
        var("RUST_LOG").chain_err(|| "Failed to set verbosity level")?
    );
    Ok(())
}
It simplifies the levels a little to make them easy to map onto a verbosity flag that takes 0, 1, 2, or 3 occurrences (-v, -vv, -vvv), and if you ask for the trace level with -vvv it sets RUST_BACKTRACE for you as well.
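Hooking it up to the verbose flag from the earlier clap example is then something like the following - occurrences_of returns how many times -v was passed, as a u64:

use clap::{App, Arg};

fn main() {
    let matches = App::new("example")
        .arg(
            Arg::with_name("verbose")
                .short("v")
                .multiple(true)
                .help("Repeat to increase verbosity: -v, -vv, -vvv"),
        )
        .get_matches();
    // 0 through 3+ maps straight onto the match inside init_logging above
    init_logging(matches.occurrences_of("verbose")).expect("could not initialize logging");
}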
pretty_assertions
This is in a similar vein as pretty_env_logger. It's a drop-in replacement for assert_eq! with colored diff output - you just add the crate and pull in its macro, and none of your existing call sites need to change. Of course, you're a responsible developer and are using assert_eq! all over the place - this just makes the output a bit easier to read.
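To make that concrete, here's a quick sketch in the pre-2018 #[macro_use] style the rest of this post uses - the only "change" is pulling in the macro at the crate root; the test body (values are hypothetical) is exactly what you'd write anyway:

#[macro_use]
extern crate pretty_assertions; // shadows the std assert_eq!/assert_ne!

#[test]
fn listing_has_expected_entries() {
    // purely to show the call site doesn't change
    let expected = vec!["a.txt", "b.txt"];
    let actual = vec!["a.txt", "b.txt"];
    assert_eq!(expected, actual);
}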
indicatif
This crate provides progress bars and spinners (including multiple bars at once) for your command-line apps. See the GitHub README for some animated examples.
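The basic API is tiny - here's a hypothetical sketch of a bar ticking through a hundred units of work:

extern crate indicatif;

use indicatif::ProgressBar;

fn main() {
    let bar = ProgressBar::new(100);
    for _ in 0..100 {
        // ...do one unit of work here...
        bar.inc(1);
    }
    bar.finish();
}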
r2d2
This crate is likely familiar if you've done any database work, but I'll throw it in anyway because it's nice. It's a connection pool for your database. From the readme:
Opening a new database connection every time one is needed is both inefficient and can lead to resource exhaustion under high traffic conditions. A connection pool maintains a set of open connections to a database, handing them out for repeated use.
It's backend-agnostic and easy to drop into your app. An adapter exists to use it easily with the diesel ORM. Now, instead of connecting directly to your DB whenever you need it, you ask the Pool for a connection instead, and it all works as expected. I love minimal-effort drop-in performance gains, don't you?
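Setup is only a few lines. A hedged sketch, assuming diesel's built-in r2d2 adapter and a Postgres backend (the pool size and function name are made up - adjust for your setup):

extern crate diesel;

use diesel::pg::PgConnection;
use diesel::r2d2::{ConnectionManager, Pool};

fn build_pool(database_url: &str) -> Pool<ConnectionManager<PgConnection>> {
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    Pool::builder()
        .max_size(10) // hypothetical cap on open connections
        .build(manager)
        .expect("could not build connection pool")
}

From there, handlers call pool.get() to check a connection out, and it goes back to the pool when dropped.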
pest
This won't be useful in all projects, but it's my current go-to for parsing needs. It's much easier to get started with than a more do-it-yourself parser-combinator library like nom. You define your whole grammar in a separate file, and then in your Rust code:
#[derive(Parser)]
#[grammar = "grammar.pest"]
struct GrammarParser;
As an example, here's a small (in progress) prefix calculator's grammar:
COMMENT = _{ "/*" ~ (!"*/" ~ ANY)* ~ "*/" }
WHITESPACE = _{ " " }
num = @{ int ~ ("." ~ digit*)? }
int = { ("+" | "-")? ~ digit+ }
digit = { '0'..'9' }
symbol = @{ "+" | "-" | "*" | "/" | "%" | "^" | "add" | "sub" | "mul" | "div" | "rem" | "pow" | "max" | "min" | "list" | "eval" }
sexpr = { "(" ~ expr* ~ ")" }
qexpr = { "{" ~ expr* ~ "}" }
expr = { num | symbol | sexpr | qexpr }
blispr = { SOI ~ expr* ~ EOI }
And the corresponding code to read the parsed input:
fn lval_read(parsed: Pair<Rule>) -> Box<Lval> {
    match parsed.as_rule() {
        Rule::blispr | Rule::sexpr => {
            let mut ret = lval_sexpr();
            for child in parsed.into_inner() {
                // here is where you skip stuff
                if is_bracket_or_eoi(&child) {
                    continue;
                }
                ret = lval_add(&ret, lval_read(child));
            }
            ret
        }
        Rule::expr => lval_read(parsed.into_inner().next().unwrap()),
        Rule::qexpr => {
            let mut ret = lval_qexpr();
            for child in parsed.into_inner() {
                if is_bracket_or_eoi(&child) {
                    continue;
                }
                ret = lval_add(&ret, lval_read(child));
            }
            ret
        }
        Rule::num => lval_num(parsed.as_str().parse::<i64>().unwrap()),
        Rule::symbol => lval_sym(parsed.as_str()),
        _ => unreachable!(),
    }
}
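For completeness, kicking the whole thing off looks roughly like this - parse() comes from the pest::Parser trait and takes the rule to start from plus the input string (the wrapper function here is just a sketch):

use pest::Parser;

fn parse_blispr(input: &str) -> Box<Lval> {
    // parse() yields an iterator of Pairs; blispr is the top-level rule in the grammar above
    let parsed = GrammarParser::parse(Rule::blispr, input)
        .expect("parse error")
        .next()
        .unwrap();
    lval_read(parsed)
}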
This library is incredibly easy to use. I love how it maintains your grammar completely separate from your code, and the PEG format is easy to follow. Give it a whirl!
actix_web
This isn't so much for the general case, but if I'm writing a webserver, this is what I reach for without hesitation. A lot of the choice between webservers in the Rust ecosystem boils down to personal taste, but I like how fast this one is and that it's run on stable Rust since it launched.

I haven't had the opportunity to use the actor model outside the webserver, but it looks great too!
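For flavor, here's a minimal hello-world - a hedged sketch against the actix-web 1.x API, so double-check the docs for whichever version you're on:

use actix_web::{web, App, HttpResponse, HttpServer, Responder};

fn index() -> impl Responder {
    HttpResponse::Ok().body("hello from actix-web")
}

fn main() -> std::io::Result<()> {
    // one App factory per worker thread; routes are registered on the App
    HttpServer::new(|| App::new().route("/", web::get().to(index)))
        .bind("127.0.0.1:8080")?
        .run()
}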
ggez
This is a game framework inspired by LÖVE, with a Rustier API. It's quite easy to get up and running with - perfect for prototyping.

For a larger game I'd recommend looking at Amethyst. It seems to be the most promising engine at the moment and wraps specs, an Entity-Component System. specs is the only ECS I've ever personally used, so I can't really compare it to anything else... but that said, I think it's nice?
svgbob
This one gets an honorable mention because it's just cool, not because it's a library. Go check it out - it converts ASCII diagrams into SVG.
If I missed your favorite crate, holla at me below!
Top comments (4)
One thing I'm not really that sure of from your examples -- how does StructOpt do with documentation? For me, one of the nice things about clap is that it encourages you to thoroughly document your CLIs. From the example you're giving, it seems like you'd need to add documentation lines to StructOpt's field metadata. That feels kind of awkward to me, and I worry that it would discourage folks from thoroughly documenting their stuff!

StructOpt will use the doc comments above each field. In the struct Opt example above, the "Activate debug mode" doc comment ends up in the help() of the generated clap argument, so it is separate from the structopt annotation. I actually like this method because it encourages you to build the habit of using docstrings elsewhere in general - with just clap you need to define the help text using a separate syntax. I like keeping it consistent.

Is that what you were getting at?
Yeah! That makes sense. I see what you like about it, and I agree that getting folks in the habit of using docstrings is a good idea. I'm worried, though, about how non-obvious that API is. In a lot of ways, it turns that docstring into CODE, not "just" documentation. When I read your example at first, I didn't understand what was going on, because I'm used to the idea that code is code and comments are comments. Now that I know that, I'm fine, but... what about on a team project?
On team projects, there's usually a lot of "copy, paste, modify" going on. It's easy to dismiss this as laziness, but, really, it's just very human. Folks want to build features "efficiently" (in a way that minimizes time & effort) and in a way that none of their teammates will think is weird. Copying and pasting existing code is a great way to achieve both of those goals. Since other folks will copy/paste/modify our code, it's important to write code that can be copied and pasted! Sandi Metz calls this "exemplary" code -- code that's a good example for others.
So, now I'm putting myself in the mindset of someone who hasn't really used StructOpt before, and who needs to do something new with it. When I go to copy-and-paste this example code, I don't bother copying and pasting the docstrings -- after all, they're "just" comments. I'm supposed to discard them when I copypasta, right? But this results in code that's functionally different, too.
This is the kind of issue that code review is supposed to catch, but.... there's nothing "actively" wrong about the copypastaed section. It's missing something, but there aren't any bugs per se. The ways that it's not-quite-right wouldn't be obvious in a code review tool, so I give it a 50-50 chance of passing code review. And so now, we have a bad, undocumented example in the codebase that people will copy-paste from in turn.....
I do like the conciseness. Reusing docstrings is a clever anti-duplication mechanism. But it seems awfully fragile against "humans" to me!
Ah, okay, I get what you're saying now. And you're absolutely right - the onus is on the user to ensure you're properly documenting your code, and there is a bit of "magic" there. That said, I find Rust's docstrings pretty dang magical already so that third slash suggests to me this comment is "extra special" somehow, even if it's not how a normal doc comment is used.
I don't have the perspective of having worked on a team before, but I'm surprised to hear that you specifically omit the docstrings - that's not something I usually do when I'm copying code I didn't write specifically because I want to be sure I have all the information I need as I build on it. For this library, anyone would simply have to know that they're part of the code.
You do have the option to skip the docstrings, though, and define your help strings directly using the about attribute.
I think this circumvents the problem you're describing, but as before it's still up to your review process to ensure each field has this attribute defined.
I'd never understood why this would be preferable until now - thanks for the perspective!