testthat 3e

testthat 3.0.0 introduces the idea of an “edition” of testthat. An edition is a bundle of behaviours that you have to explicitly choose to use, allowing us to make otherwise backward incompatible changes. This is particularly important for testthat because it is used by a very large number of packages (almost 5,000 at last count). Choosing to use the 3rd edition allows you to use our latest recommendations for ongoing and new work, while historical packages continue to use the old behaviour.

(We don’t anticipate creating new editions very often, and they’ll always be matched with a major version, i.e. if there’s another edition, it’ll be the 4th edition and will come with testthat 4.0.0.)

This vignette shows you how to activate the 3rd edition, introduces the main features, and discusses common challenges when upgrading a package. If you have a problem that this vignette doesn’t cover, please let me know, as it’s likely that the problem also affects others.

library(testthat)
local_edition(3)

Activating

The usual way to activate the 3rd edition is to add a line to your DESCRIPTION:

Config/testthat/edition: 3

This will activate the 3rd edition for every test in your package.

You can also control the edition used for individual tests with testthat::local_edition():

test_that("I can use the 3rd edition", {
  local_edition(3)
  expect_true(TRUE)
})
#> Test passed 🎉

This is also useful if you’ve switched to the 3rd edition and have a couple of tests that fail. You can use local_edition(2) to revert to the old behaviour, giving you some breathing room to figure out the underlying issue.

test_that("I want to use the 2nd edition", {
  local_edition(2)
  expect_true(TRUE)
})
#> Test passed 🎊

Changes

There are five major changes in the 3rd edition, described in the following sections.

Deprecations

A number of outdated functions have been deprecated. Most of these functions have not been recommended for a number of years, but before the introduction of the edition idea, I didn’t have a good way of preventing people from using them without breaking a lot of code on CRAN.

Fixing these deprecation warnings should be straightforward.
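
For example, if your tests still use expect_is() (one of the functions deprecated in the 3rd edition), you can usually swap in a more specific expectation such as expect_s3_class() or expect_type(). A minimal sketch:

test_that("model has the expected class", {
  fit <- lm(mpg ~ wt, data = mtcars)

  # 2nd edition style, now deprecated:
  # expect_is(fit, "lm")

  # 3rd edition style, explicit about the OO system:
  expect_s3_class(fit, "lm")
})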

Warnings

In the second edition, expect_warning() swallows all warnings regardless of whether or not they match the regexp or class:

f <- function() {
  warning("First warning")
  warning("Second warning")
  warning("Third warning")
}

local_edition(2)
expect_warning(f(), "First")

In the third edition, expect_warning() captures at most one warning so the others will bubble up:

local_edition(3)
expect_warning(f(), "First")
#> Warning in f(): Second warning
#> Warning in f(): Third warning

You can either add additional expectations to catch these warnings, or silence them all with suppressWarnings():

f() %>% 
  expect_warning("First") %>% 
  expect_warning("Second") %>% 
  expect_warning("Third")

f() %>% 
  expect_warning("First") %>% 
  suppressWarnings()

Alternatively, you might want to capture them all in a snapshot test:

test_that("f() produces expected outputs/messages/warnings", {
  expect_snapshot(f())  
})
#> Can't compare snapshot to reference when testing interactively
#> i Run `devtools::test()` or `testthat::test_file()` to see changes
#> i Current value:
#> Code
#>   f()
#> Warning <simpleWarning>
#>   First warning
#>   Second warning
#>   Third warning
#> ── Skip (???): f() produces expected outputs/messages/warnings ─────────────────
#> Reason: empty test

The same principle also applies to expect_message(), but message handling has changed in a more radical way, as described next.

Messages

For reasons that I can no longer remember, the 2nd edition of testthat silently ignores all messages. This is inconsistent with how other types of output are handled, so as of the 3rd edition, messages bubble up into your test results. You’ll have to explicitly ignore them with suppressMessages(), or, if they’re important, test for their presence with expect_message().
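
As a minimal sketch (using a hypothetical g() that emits a message), the two options look like this:

g <- function() {
  message("Loading data")
  42
}

test_that("g() returns the right value", {
  # Option 1: assert that the message is produced
  expect_message(g(), "Loading data")

  # Option 2: silence the message if it isn't relevant to this test
  expect_equal(suppressMessages(g()), 42)
})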

waldo

Probably the biggest day-to-day difference (and the biggest reason to upgrade!) is the use of waldo::compare() inside of expect_equal() and expect_identical(). The goal of waldo is to find and concisely describe the difference between a pair of R objects, and it’s designed specifically to help you figure out what’s gone wrong in your unit tests.

f1 <- factor(letters[1:3])
f2 <- ordered(letters[1:3], levels = letters[1:4])

local_edition(2)
expect_equal(f1, f2)
#> Error: `f1` not equal to `f2`.
#> Attributes: < Component "class": Lengths (1, 2) differ (string compare on first 1) >
#> Attributes: < Component "class": 1 string mismatch >
#> Attributes: < Component "levels": Lengths (3, 4) differ (string compare on first 3) >

local_edition(3)
expect_equal(f1, f2)
#> Error: `f1` (`actual`) not equal to `f2` (`expected`).
#> 
#> `class(actual)`:   "factor"          
#> `class(expected)`: "ordered" "factor"
#> 
#> `levels(actual)`:   "a" "b" "c"    
#> `levels(expected)`: "a" "b" "c" "d"

waldo looks even better in your console because it carefully uses colours to help highlight the differences.

The use of waldo also makes precise the difference between expect_equal() and expect_identical(): expect_equal() sets a tolerance so that waldo ignores small numerical differences arising from floating point computation; otherwise the two functions are identical.
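
For example (a small illustration, not from the original vignette), floating point error is ignored by expect_equal() but not by expect_identical():

x <- 0.1 + 0.2

expect_equal(x, 0.3)        # passes: the difference is within the default tolerance
# expect_identical(x, 0.3)  # would fail: without a tolerance, the tiny binary difference counts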

This change is likely to result in the most work during an upgrade, because waldo can give slightly different results to both identical() and all.equal() in moderately common situations. I believe that on the whole the differences are meaningful and useful, but you’ll need to handle them by tweaking your tests.

Reproducible output

In the 3rd edition, test_that() automatically calls local_reproducible_output(), which sets a number of options and environment variables so that output is as reproducible as possible across systems. See the documentation for local_reproducible_output() to learn exactly what it sets.
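
If an individual test needs different settings, you can call local_reproducible_output() again yourself inside that test to override them. A sketch (check ?local_reproducible_output for the full set of arguments):

test_that("wide output is handled", {
  local_reproducible_output(width = 120)
  expect_equal(getOption("width"), 120)
})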

Upgrading

The changes lend themselves to the following workflow for upgrading from the 2nd to the 3rd edition:

  1. Activate edition 3. You can let usethis::use_testthat(3) do this for you (see the sketch after this list).
  2. Remove or replace deprecated functions, following the advice in the deprecation warnings.
  3. If your output got noisy, quiet things down by either capturing or suppressing warnings and messages.
  4. Carefully inspect any failures where objects that used to compare as “all equal” no longer do, and tweak those tests accordingly.
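
In practice, step 1 might look like this in an interactive session (a sketch assuming you use usethis and devtools):

usethis::use_testthat(3)  # adds Config/testthat/edition: 3 to DESCRIPTION
devtools::test()          # then re-run your tests to see what needs attention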

Alternatives

You might wonder why we came up with the idea of an “edition”, rather than creating a new package like testthat3. We decided against making a new package because the 2nd and 3rd edition share a very large amount of code, so making a new package would have substantially increased the maintenance burden: the majority of bugs would’ve needed to be fixed in two places.

If you program in other languages, you might wonder why we can’t rely on semantic versioning. The main reason is that CRAN checks all packages that use testthat with the latest version of testthat, so simply incrementing the major version number doesn’t actually reduce R CMD check failures on CRAN.