Writing Tests

snforge lets you test standalone functions from your smart contracts. This technique is referred to as unit testing. You should write as many unit tests as possible, as they are faster than integration tests.

Writing Your First Test

First, add the following code to the src/lib.cairo file:

fn sum(a: felt252, b: felt252) -> felt252 {
    return a + b;
}

#[cfg(test)]
mod tests {
    use super::sum;

    #[test]
    fn test_sum() {
        assert(sum(2, 3) == 5, 'sum incorrect');
    }
}

It is a common practice to keep your unit tests in the same file as the tested code. Keep in mind that all tests in the src folder have to be in a module annotated with #[cfg(test)]. Integration tests, on the other hand, can be kept in separate files in the tests directory. You can find a detailed explanation of how snforge collects tests here.
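For example, an integration test could live in its own file in the tests directory - a minimal sketch, assuming the package above is named first_test and the file is called tests/test_sum.cairo (both names are illustrative):

use first_test::sum;

#[test]
fn test_sum_integration() {
    // Uses the public API of the package, just like external code would
    assert(sum(2, 3) == 5, 'sum incorrect');
}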

Now run snforge with the following command:

$ snforge test
Output:
Collected 1 test(s) from first_test package
Running 1 test(s) from src/
[PASS] first_test::tests::test_sum (gas: ~1)
Tests: 1 passed, 0 failed, 0 skipped, 0 ignored, 0 filtered out

Failing Tests

If your code panics, the test is considered failed. Here's an example of a failing test.

fn panicking_function() {
    let mut data = array![];
    data.append('panic message');
    panic(data)
}

#[cfg(test)]
mod tests {
    use super::panicking_function;

    #[test]
    fn failing() {
        panicking_function();
        assert(2 == 2, '2 == 2');
    }
}

$ snforge test
Output:
Collected 1 test(s) from panicking_test package
Running 1 test(s) from src/
[FAIL] panicking_test::tests::failing

Failure data:
    0x70616e6963206d657373616765 ('panic message')

Tests: 0 passed, 1 failed, 0 skipped, 0 ignored, 0 filtered out

Failures:
    panicking_test::tests::failing

When a contract fails, you can get backtrace information by setting the SNFORGE_BACKTRACE=1 environment variable. Read more about it here.
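For example, assuming a Unix-like shell:

$ SNFORGE_BACKTRACE=1 snforge test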

Expected Failures

Sometimes you want to mark a test as expected to fail. This is useful when you want to verify that an action fails as expected.

To mark a test as expected to fail, use the #[should_panic] attribute.

You can specify the expected failure message in three ways:

  1. With ByteArray:
    #[test]
    #[should_panic(expected: "This will panic")]
    fn should_panic_exact() {
        panic!("This will panic");
    }

    // here the expected message is a substring of the actual message
    #[test]
    #[should_panic(expected: "will panic")]
    fn should_panic_expected_is_substring() {
        panic!("This will panic");
    }

With this format, the expected error message needs to be a substring of the actual error message. This is particularly useful when the error message includes dynamic data such as a hash or address (see the additional sketch after the test output below).

  2. With a felt:
    #[test]
    #[should_panic(expected: 'panic message')]
    fn should_panic_felt_matching() {
        assert(1 != 1, 'panic message');
    }
  3. With a tuple of felts:
    use core::panic_with_felt252;

    #[test]
    #[should_panic(expected: ('panic message',))]
    fn should_panic_check_data() {
        panic_with_felt252('panic message');
    }

    // works for multiple messages
    #[test]
    #[should_panic(expected: ('panic message', 'second message',))]
    fn should_panic_multiple_messages() {
        let mut arr = ArrayTrait::new();
        arr.append('panic message');
        arr.append('second message');
        panic(arr);
    }

$ snforge test
Output:
Collected 5 test(s) from should_panic_example package
Running 5 test(s) from src/
[PASS] should_panic_example::tests::should_panic_felt_matching (gas: ~1)
[PASS] should_panic_example::tests::should_panic_multiple_messages (gas: ~1)
[PASS] should_panic_example::tests::should_panic_exact (gas: ~1)
[PASS] should_panic_example::tests::should_panic_expected_is_substring (gas: ~1)
[PASS] should_panic_example::tests::should_panic_check_data (gas: ~1)
Tests: 5 passed, 0 failed, 0 skipped, 0 ignored, 0 filtered out
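
As noted above, ByteArray matching compares substrings, so it can handle panic messages that embed runtime data. A minimal sketch, assuming panic! accepts format arguments (the message and value are illustrative):

#[test]
#[should_panic(expected: "unexpected value")]
fn should_panic_with_dynamic_data() {
    let value: u32 = 42;
    // The full message includes the runtime value;
    // only the static part needs to match.
    panic!("unexpected value: {}", value);
}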

Ignoring Tests

Sometimes you may have tests that you want to exclude during most runs of snforge test. You can achieve this with the #[ignore] attribute - tests marked with it will be skipped by default.

#[cfg(test)]
mod tests {
    #[test]
    #[ignore]
    fn ignored_test() { // test code
    }
}

$ snforge test
Output:
Collected 1 test(s) from ignoring_example package
Running 1 test(s) from src/
[IGNORE] ignoring_example::tests::ignored_test
Tests: 0 passed, 0 failed, 0 skipped, 1 ignored, 0 filtered out

To run only tests marked with the #[ignore] attribute, use snforge test --ignored. To run all tests regardless of the #[ignore] attribute, use snforge test --include-ignored.
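For example:

$ snforge test --ignored
$ snforge test --include-ignored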

Writing Assertions and the assert_macros Package

⚠️ Recommended only for development ⚠️

The assert_macros package provides a set of macros that can be used to write assertions, such as assert_eq!. In order to use it, your project must have the assert_macros dependency added to the Scarb.toml file. These macros are very expensive to run on Starknet, as they result in a huge number of steps, and they are not recommended for production use; they are only meant to be used in tests. For snforge v0.31.0 and later, this dependency is added automatically when creating a project with snforge init, but for earlier versions you need to add it manually. A short usage example follows the list of available macros below.

[dev-dependencies]
snforge_std = ...
assert_macros = "<scarb-version>"

Available assert macros are:

  • assert_eq!
  • assert_ne!
  • assert_lt!
  • assert_le!
  • assert_gt!
  • assert_ge!
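
A minimal sketch of how these macros might be used in a test (the values are illustrative):

#[cfg(test)]
mod tests {
    #[test]
    fn test_with_assert_macros() {
        let a: u32 = 5;
        let b: u32 = 5;
        // Each macro panics with a descriptive message when the comparison fails
        assert_eq!(a, b);
        assert_ne!(a, b + 1);
        assert_lt!(a, 10);
        assert_ge!(a, 5);
    }
}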