Test doubles in Rust: mockall vs autospy

Unit testing is one of many tools in a software engineer's arsenal for validating that their code does what they think it does. Unit testing aims to validate that an individual module, function or unit does what we expect in isolation.

To achieve this isolation, it is common when writing unit tests to use test doubles that mimic interfaces without relying on a real implementation.
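
For readers newer to the idea, here is a minimal sketch of what a hand-rolled test double might look like; the Clock trait and FixedClock stub are hypothetical examples, not from this article, but they show the kind of boilerplate the crates below aim to generate for you.

// A hand-rolled test double: a fixed implementation of a trait used only in tests.
trait Clock {
    fn now(&self) -> u64;
}

// A stub clock that always returns the same timestamp.
struct FixedClock(u64);

impl Clock for FixedClock {
    fn now(&self) -> u64 {
        self.0
    }
}

fn is_expired(clock: &impl Clock, deadline: u64) -> bool {
    clock.now() > deadline
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn expired_when_now_is_past_deadline() {
        // The stub isolates the test from the real system clock.
        let clock = FixedClock(100);
        assert!(is_expired(&clock, 50));
    }
}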

This is such a common scenario in fact that there are many crates that endeavour to simplify this process and reduce boilerplate, cumulatively racking up millions of downloads.

  • mockall
  • wiremock
  • faux
  • mockers
  • unimock
  • pseudo

Arrange, Act, Assert

One thing I noticed when moving from C++ to Rust, which is evident from the list above, is that mocks tend to be the preferred test double of choice in Rust. This was an interesting revelation coming from other languages, where the most common test doubles are typically fakes, stubs and spies.

You might be thinking: they're all test doubles, does it really make that much of a difference? The answer is yes; there are some obvious and not-so-obvious differences between mocks and other types of test doubles that I think should be taken into consideration.

Firstly, the "Arrange, Act, Assert" test structure I had become familiar with, and which is touted as "best practice", didn't seem to naturally fall out of tests that use mocks. It didn't feel like there was a clear divide between which part of the test was the arrange section and which was the assert.

Test structure is just one thing that differs between mocks and other test doubles; there are also some functional differences that will be covered later. To illustrate the "Arrange, Act, Assert" differences, let's compare a typical test structure using mocks versus spies with some example code...

Mocks: typical test structure

  • Configure the mock - this can include setting return values, specifying the expected arguments, defining call order or other expectations
  • Inject and use the mock - then assert the function under test produces the expected result
  • Panics during execution - if any of the expectations are violated, the mock will panic inside the function under test
#[cfg_attr(test, mockall::automock)]
trait SaveFile {
    fn save_file(&self, filename: &str, contents: &[u8]) -> anyhow::Result<()>;
}

fn save_file_to_disk(
    file_system: &impl SaveFile,
    filename: &str,
    contents: &[u8],
) -> anyhow::Result<()> {
    file_system.save_file(filename, contents)
}

#[cfg(test)]
mod tests {
    use super::*;
    use mockall::predicate::*;

    #[test]
    fn fails_to_save_file() {
        // Arrange & Assert ------------------------------------------------
        let mut mock = MockSaveFile::new();
        mock.expect_save_file()
            .with(eq("filename"), eq(b"contents".as_ref()))
            .times(1)
            .returning(|_, _| Err(anyhow::anyhow!("deliberate test error")));
        
        // Act -------------------------------------------------------------
        assert!(save_file_to_disk(&mock, "filename", b"contents").is_err());
    }
}

Spies: typical test structure

  • Configure the spy - this usually involves setting the return values
  • Inject and use the spy - then assert the function under test produces the expected result
  • Verify arguments - assert the spy was called with the expected arguments
#[cfg_attr(test, autospy::autospy)]
trait SaveFile {
    fn save_file(&self, filename: &str, contents: &[u8]) -> anyhow::Result<()>;
}

fn save_file_to_disk(
    file_system: &impl SaveFile,
    filename: &str,
    contents: &[u8],
) -> anyhow::Result<()> {
    file_system.save_file(filename, contents)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn fails_to_save_file() {
        // Arrange --------------------------------------------------------
        let spy = SaveFileSpy::default();
        spy.save_file
            .returns
            .set([Err(anyhow::anyhow!("deliberate test error"))]);
        
        // Act ------------------------------------------------------------
        assert!(save_file_to_disk(&spy, "filename", b"contents").is_err());
        
        // Assert ---------------------------------------------------------
        assert_eq!(
            [("filename".to_string(), b"contents".to_vec())],
            spy.save_file.arguments
        ) 
    }
}

Advantages of spies

Arrange, Act, Assert

As previously mentioned, the "Arrange, Act, Assert" pattern is the expected pattern for unit tests, and following it improves readability when people drop in and out of a codebase.

Crate specific syntax

Something else of note is the reduction in crate-specific syntax. In the mock example, to express our expectations we needed to use expect_fn(), with(), times() and returning(). These might read as obvious to a seasoned Rust veteran, or even a regular mock user; however, there is a cognitive load in understanding what each of these does, and additional complexity in the interface that would require a fresh pair of eyes to peruse the documentation.

In the spy example we can see this reduction in crate-specific syntax, with the only crate-specific function being set(). You might justifiably argue that returns and arguments are conceptually part of the library and therefore crate-specific, but through the lens of "Arrange, Act, Assert" they fall very clearly into one category or the other, which keeps the test structure consistent with unit test structures common in other languages.

Does not panic in function under test

One final difference, which depending on the situation can manifest as an advantage, is that spies don't panic during the function under test if expectations are not met. Why is this relevant? The first reason is that when a mock panics you don't get the nice error messages you get from Rust's asserts. For example, suppose we take the previous mock example, except that whilst writing it I make a mistake and misspell "filename" as "filenam". What happens?

#[test]
fn fails_to_save_file() { 
    let mut mock = MockSaveFile::new();
    mock.expect_save_file()
        .with(eq("filenam"), eq(b"contents".as_ref()))
        .times(1)
        .returning(|_, _| Err(anyhow::anyhow!("deliberate test error")));
        
    assert!(save_file_to_disk(&mock, "filename", b"contents").is_err());
}
failures:

---- mock::tests::fails_to_save_file stdout ----

thread 'mock::tests::fails_to_save_file' (44145) panicked at src/mock.rs:1:18:
MockSaveFile::save_file("filename", [99, 111, 110, 116, 101, 110, 116, 115]): No matching expectation found
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Okay, we have "No matching expectation found", but which expectation? We have nothing to compare against, and no indication of which argument is causing the issue. That might not seem like a big problem here, where we have two relatively simple arguments, but when we have multiple arguments, or the types become more complex, this quickly becomes far from ideal.

Let's compare to a spy:

#[test]
fn fails_to_save_file() { 
    let spy = SaveFileSpy::default();
    spy.save_file
       .returns
       .set([Err(anyhow::anyhow!("deliberate test error"))]);
        
    assert!(save_file_to_disk(&spy, "filename", b"contents").is_err());
        
    assert_eq!(
        [("filenam".to_string(), b"contents".to_vec())],
        spy.save_file.arguments
    )
}
failures:

---- spy::tests::fails_to_save_file stdout ----

thread 'spy::tests::fails_to_save_file' (49891) panicked at src/spy.rs:45:9:
assertion `left == right` failed
  left: [("filenam", [99, 111, 110, 116, 101, 110, 116, 115])]
 right: [("filename", [99, 111, 110, 116, 101, 110, 116, 115])]
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Looks like any other Rust test assert message to me!

Advantages of mocks

It wouldn't be a fair comparison without looking at some of the advantages of mocks, as there are some benefits that might swing your choice.

Line count

This isn't always the case, but mocks typically require fewer lines to achieve the same functionality.
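
As a rough illustration (a sketch reusing the SaveFile mock and spy from the examples above, with the deliberate error swapped for a success), the mock folds the argument check, call count and return value into one chained expression, while the spy needs a separate assertion after the call:

#[test]
fn mock_saves_file() {
    // Mock: argument check, call count and return value in one chained expression.
    let mut mock = MockSaveFile::new();
    mock.expect_save_file()
        .with(eq("filename"), eq(b"contents".as_ref()))
        .times(1)
        .returning(|_, _| Ok(()));

    assert!(save_file_to_disk(&mock, "filename", b"contents").is_ok());
}

#[test]
fn spy_saves_file() {
    // Spy: return value set up front, arguments verified with an extra assert afterwards.
    let spy = SaveFileSpy::default();
    spy.save_file.returns.set([Ok(())]);

    assert!(save_file_to_disk(&spy, "filename", b"contents").is_ok());

    assert_eq!(
        [("filename".to_string(), b"contents".to_vec())],
        spy.save_file.arguments
    );
}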

Flexibility

As previously mentioned, mocks typically come with a lot more crate-specific syntax: lots of bells and whistles, which give you as the author a bit more flexibility and more levers at your disposal when implementing tests.
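
As an illustration of a couple of those levers, here is a hedged sketch reusing the SaveFile trait from the examples above; the ordering scenario itself is made up for this example. It matches arguments with an arbitrary closure via withf() and enforces call order with a mockall Sequence.

#[cfg(test)]
mod flexibility_tests {
    use super::*;
    use mockall::predicate::*;

    #[test]
    fn saves_config_before_data() {
        let mut mock = MockSaveFile::new();
        let mut seq = mockall::Sequence::new();

        // Match the filename with an arbitrary closure rather than eq().
        mock.expect_save_file()
            .withf(|filename, _contents| filename.ends_with(".toml"))
            .times(1)
            .in_sequence(&mut seq)
            .returning(|_, _| Ok(()));

        // This expectation must be satisfied after the first one.
        mock.expect_save_file()
            .with(eq("data.bin"), always())
            .times(1)
            .in_sequence(&mut seq)
            .returning(|_, _| Ok(()));

        assert!(save_file_to_disk(&mock, "config.toml", b"key = 1").is_ok());
        assert!(save_file_to_disk(&mock, "data.bin", b"payload").is_ok());
    }
}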

Conclusion

You might have read all of this (in which case thanks!) and thought: well, you made autospy, of course you're going to suggest people use it... to which I say I hope this article is at least somewhat convincing to give spies a try 😄!

These observations might just be from the lens of someone who has always used spies and fakes, and you might be perfectly happy to continue using mocks! Power to you, but if you ever find yourself one day looking to try an alternative that you might enjoy more... autospy is always here!