Run tests from more places with Rust-Analyzer
TL;DR
Until now, rust-analyzer’s "Run" menu would only let you run tests from #[cfg(test)]
modules if your cursor was inside the test module, but not if you were editing some production code in the same file 😢.
Starting today, you can run your tests from anywhere in the file, without taking your hands off the keyboard!
Here’s how:
- Open the "Command Palette" with
ctrl
+shift
+p
(orcmd
+shift
+p
on macOS) - Use the Fuzzy Finder to choose the "rust-analyzer: Run" command ("rarun" will probably get you there)
- Choose from any of the test modules in the file, now available regardless of cursor location!
When I was young
At my first job as a developer, we used the Karma test runner for JavaScript, and had it set up to rerun all our frontend tests every time a file was saved. Having that kind of instantaneous feedback was eye-opening the first time I experienced it, and ever since I've made sure to prioritize quick and easy access to test feedback in my development setups.
The brute-force "just run all the tests every time" approach we used made sense for us:
- We were a tiny (3 person!) startup
- With a small frontend codebase
- Written in an interpreted language
Build times were essentially instantaneous, and it didn't take much effort to keep the entire test suite's run time around 5 seconds.
Not all languages, codebases, and test suites are as forgiving of the brute-force approach, however. I remember trying to use Gradle's continuous build feature to achieve a similar experience with a large Java project at a different job, and being less than pleased with the outcome. In larger projects with longer build times and larger test suites, a more targeted approach is often necessary, and luckily IDEs are well positioned to provide one. Being able to compile and run only the relevant code or tests can dramatically shorten the feedback cycle, providing great TDD flow even in much larger, more complex projects.
You are a (test) Runner
Rust-analyzer has support for this kind of workflow: the "Run" feature. When your cursor is on a #[test] fn, the "rust-analyzer: Run" command pulls up the option to run just that test, or to run all the tests in the current module, without taking your hands off the keyboard. Both of these options compile only the crates the test depends on, which can speed things up significantly, especially in a multi-crate workspace (like rust-analyzer itself, for instance). Even better, once you've run a test like this the first time, the next time you bring up the "Run" menu you'll have the option to re-run that same test (or test module), regardless of where your cursor is. Being able to run the same test over and over again while making changes to some code is excellent.
In Rust, unit tests are usually defined in the same file as the production code they are exercising. According to the book:
The purpose of unit tests is to test each unit of code in isolation from the rest of the code to quickly pinpoint where code is and isn't working as expected. You'll put unit tests in the src directory in each file with the code that they're testing. The convention is to create a module named tests in each file to contain the test functions and to annotate the module with cfg(test).
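For reference, here's a minimal example of that convention, with the unit tests living in the same file as the code they exercise:

```rust
// src/lib.rs
pub fn add(a: u32, b: u32) -> u32 {
    a + b
}

// Compiled and run only under `cargo test`, thanks to #[cfg(test)].
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn adds_two_numbers() {
        assert_eq!(add(2, 2), 4);
    }
}
```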
Unfortunately, rust-analyzer hasn't been supporting this particular convention as well as it could. If you're editing some production code in a file with a #[cfg(test)] mod tests, and you want to run those tests, you'd better hope that you've already run them once before. If so, you're in luck! You'll be able to "re-run", and continue about your business. If not, then you'll need to navigate down to the test module to get it to show up in the "Run" menu, potentially losing your focus and disrupting your flow 😢!
In an ideal world, you'd be able to run the unit tests associated with a file from anywhere in that file, not only from within the tests themselves. Edit the body of a function, hit a hotkey, and see your tests run, flow and context remaining fully intact.
Let's make the world a little more ideal, shall we?
So, we've got a sense of the behavior we want to change: the "Run" menu should contain entries for test modules defined within a file, even if the cursor isn't inside a test module. And we know where to start looking: the rust-analyzer manual entry for "Run" points us to runnables.rs.
Given this is an article about TDD, tests and test runners, the next step should probably be fairly obvious, right?
Write a test!
I started by adding a cfg'd-out test mod to fn test_runnables(), and sure enough, my new test mod didn't show up in the result. I assumed this had to do with the cfg, and started digging through the implementation looking for how cfg attributes were handled. It turns out that's a pretty complex topic! Bouncing around through the call stack, I found that CfgOptions were included as part of CrateData, which is stored by the RootDatabase in the CrateGraph. Did I really need to change the CrateGraph? Would that mean creating an entirely new database? A new Analysis? That doesn't feel right.
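To keep those structures straight, here's a rough mental model of the relationship described above (a sketch only; the names come from the article, but the shapes are approximate and not the real rust-analyzer definitions):

```rust
// Approximate mental model, not the actual rust-analyzer types.
struct CfgOptions;                              // the set of enabled cfg atoms, e.g. `test`
struct CrateData { cfg_options: CfgOptions }    // per-crate data, including its cfgs
struct CrateGraph { crates: Vec<CrateData> }    // held by the RootDatabase
```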
To help build my understanding, I tried writing a test for the current behavior that I knew worked. I moved the $0 cursor marker to be within the new test mod, since that's when the test mod runnable shows up in my daily usage. To my surprise and confusion, essentially nothing changed in the test output! Something different was happening in the tests than in the real world.
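For context, rust-analyzer's runnables tests are written against inline text fixtures. A simplified sketch of one is below (this is not the exact fixture from the real test, and the check helper and expected output are elided):

```rust
// Sketch of a runnables test fixture (simplified).
// `//- /lib.rs` starts a fixture file and `$0` marks the cursor position;
// the test feeds a string like this to a check helper and compares the
// resulting runnables against an expected listing.
const FIXTURE: &str = r#"
//- /lib.rs
fn production_code() {}

#[cfg(test)]
mod tests {
    #[test]
    fn my_test() {$0}
}
"#;
```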
Wait, What?
This was a bit of a discouraging setback, to be honest. I did some digging into how the test fixtures are set up, and compared that with how the application did it.
I was frustrated, so I took a break. I went outside. I fixed some other bugs.
I realized a massive clue had been staring me in the face.
That little "Run Tests" button above the mod
definition is called an "Annotation". If we can show an annotation that allows us to run all the tests in a module, regardless of where our cursor is, then by-golly we can do it from the âRunâ menu.
I eventually found the answer I was looking for: these lines in workspace::cargo_to_crate_graph:
```rust
// Add test cfg for local crates
if cargo[pkg].is_local {
    cfg_options.insert_atom("test".into());
}
```
In the real application, we add the test config to every local crate. In tests, not so much. Progress!
When I added cfg:test to my unit test from earlier, the test module showed up, regardless of the cursor location. So not only was there an unexpected difference between the application and the test setup, I was also looking in the wrong place. More progress?
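In fixture terms, the change amounts to an extra annotation on the fixture's file header; roughly like this, building on the simplified sketch above (and assuming the cfg: fixture meta is written this way):

```rust
// Adding `cfg:test` to the fixture header enables the `test` cfg for the
// fixture crate, mirroring what the real application does for local crates.
// With the cursor ($0) outside the test module, the test mod runnable now
// shows up in the results.
const FIXTURE: &str = r#"
//- /lib.rs cfg:test
fn production_code() {$0}

#[cfg(test)]
mod tests {
    #[test]
    fn my_test() {}
}
"#;
```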
I've started a discussion on Zulip with the team to see whether adding the test cfg to test fixtures by default would prevent more confusion than it would cause. For now, let's continue investigating.
Layers
Runnable annotations are powered by the same runnables feature I tried testing earlier, so it was a safe bet that whatever was preventing the test mod runnable from being available was happening somewhere above the runnables function in the stack.
Both annotations and runnables are exposed via handlers.rs in the top-level rust-analyzer crate. The one that calls annotations is handle_code_lens, and it is about as straightforward as can be, converting the result into its LSP protocol representation but otherwise returning it unchanged. handle_runnables, on the other hand, was a different story. It filters out runnables for multiple reasons, including the culprit:
```rust
if let Some(offset) = offset {
    if !runnable.nav.full_range.contains_inclusive(offset) {
        continue;
    }
}
```
If the runnable's full_range doesn't contain the cursor's offset, we don't include it in the result. This conforms with the behavior we've observed, and even makes a fair amount of logical sense. The user probably wants to run the thing their cursor is on, after all!
First, let's quickly extract a function from the conditional, and name it something like should_skip_for_offset. After that, we can slightly modify the filter logic:
- RunnableKind::TestMod runnables should always be available, regardless of cursor location
- Other runnables should continue to be filtered out based on their full_range
Voila:
```rust
fn should_skip_for_offset(runnable: &Runnable, offset: Option<TextSize>) -> bool {
    match offset {
        None => false,
        _ if matches!(&runnable.kind, RunnableKind::TestMod { .. }) => false,
        Some(offset) => !runnable.nav.full_range.contains_inclusive(offset),
    }
}
```
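And at the call site inside handle_runnables, the inlined conditional becomes a single call (roughly sketched here, not the exact diff):

```rust
// Inside the loop over runnables in handle_runnables:
if should_skip_for_offset(&runnable, offset) {
    continue;
}
```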
Happy Testing!