Introduction

The mvnfast R package provides computationally efficient tools related to the multivariate normal and Student's t distributions. The tools are generally faster than those provided by other packages, thanks to the use of C++ code through the Rcpp/RcppArmadillo packages and to parallelization through the OpenMP API. The most important functions are:

- rmvn() and rmvt(), which simulate multivariate normal and Student's t random vectors;
- dmvn() and dmvt(), which evaluate the corresponding probability densities;
- maha(), which evaluates squared Mahalanobis distances;
- ms(), which implements the mean-shift mode seeking algorithm.
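As a minimal sketch of the interface (the values of mu and sigma below are made up purely for illustration), one can simulate a few random vectors and evaluate their log-density as follows:

library("mvnfast")
mu <- c(1, 2)                                # mean vector
sigma <- diag(2)                             # covariance matrix
x <- rmvn(5, mu = mu, sigma = sigma)         # simulate 5 bivariate normal vectors
dmvn(x, mu = mu, sigma = sigma, log = TRUE)  # evaluate their log-density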

In the following sections we will benchmark each function against equivalent functions provided by other packages, while in the final section we provide an example application.

Simulating multivariate normal or Student's t random vectors

Simulating multivariate normal random variables is an essential step in many Monte Carlo algorithms (such as MCMC or particle filters), hence this operation has to be as fast as possible. Here we compare the rmvn function with the equivalent functions rmvnorm (from the mvtnorm package) and mvrnorm (from the MASS package). In particular, we simulate \(10^4\) twenty-dimensional random vectors:

# microbenchmark does not work on all platforms, hence we need this small wrapper 
microwrapper <- function(..., times = 100L){
  ok <- "microbenchmark" %in% rownames(installed.packages())
  if( ok ){ 
    library("microbenchmark") 
    microbenchmark(list = match.call(expand.dots = FALSE)$..., times = times)
  }else{
    message("microbenchmark package is not installed")
    return( invisible(NULL) )
  }
}

library("mvtnorm")
library("mvnfast")
library("MASS")
# We might also need to turn off BLAS parallelism 
library("RhpcBLASctl")
blas_set_num_threads(1)

N <- 10000
d <- 20

# Creating mean and covariance matrix
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)

microwrapper(rmvn(N, mu, mcov, ncores = 2),
             rmvn(N, mu, mcov),
             rmvnorm(N, mu, mcov),
             mvrnorm(N, mu, mcov))
## Unit: milliseconds
##                           expr       min        lq      mean    median
##  rmvn(N, mu, mcov, ncores = 2)  6.202737  9.356082 11.555508 11.403996
##              rmvn(N, mu, mcov)  4.917440  5.160246  6.399035  5.265605
##           rmvnorm(N, mu, mcov) 14.690580 15.052695 16.869087 15.425904
##           mvrnorm(N, mu, mcov) 13.950568 14.384078 15.993633 14.815282
##         uq      max neval cld
##  13.485178 18.62414   100  b 
##   6.206907 30.99346   100 a  
##  17.740624 29.02458   100   c
##  16.781134 24.24790   100   c

In this example rmvn cuts the computational time, relative to the alternatives, even when a single core is used. This gain is attributable to the use of C++ code and of efficient numerical algorithms to simulate the random variables. Parallelizing the computation on two cores gives a further appreciable speed-up. To be fair, it is necessary to point out that rmvnorm and mvrnorm perform many more safety checks on the user's input than rmvn. This is true also for the functions described in the next sections.

Notice that this function does not use one of the Random Number Generators (RNGs) provided by R, but one of the parallel cryptographic RNGs described in Salmon et al. (2011). It is important to point out that this RNG can safely be used in parallel, without risk of collisions between parallel sequences of random numbers, as detailed in the above reference.
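In practice this does not change how seeding works from the user's point of view: judging by the use of set.seed later in this vignette, the C++ generator is seeded through R's RNG, so results stay reproducible. A minimal sketch, assuming default arguments:

set.seed(414)
x1 <- rmvn(2, mu, mcov)
set.seed(414)
x2 <- rmvn(2, mu, mcov)
identical(x1, x2)   # should be TRUE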

We get similar performance gains when we simulate multivariate Student's t random variables:

# Here we have a conflict between namespaces
microwrapper(mvnfast::rmvt(N, mu, mcov, df = 3, ncores = 2),
             mvnfast::rmvt(N, mu, mcov, df = 3),
             mvtnorm::rmvt(N, delta = mu, sigma = mcov, df = 3))
## Unit: milliseconds
##                                                expr       min        lq
##      mvnfast::rmvt(N, mu, mcov, df = 3, ncores = 2)  7.046654  9.444982
##                  mvnfast::rmvt(N, mu, mcov, df = 3)  6.676374  7.048788
##  mvtnorm::rmvt(N, delta = mu, sigma = mcov, df = 3) 17.613624 18.714233
##       mean    median        uq       max neval cld
##  12.523786 11.055335 15.583642  26.67929   100  b 
##   8.063657  7.283086  8.097293  14.45066   100 a  
##  22.455439 20.937000 22.397717 160.96288   100   c

When d and N are large, and rmvn or rmvt are called several times with the same arguments, it makes sense to create the matrix in which to store the simulated random variables upfront. This can be done as follows:

A <- matrix(nrow = N, ncol = d)
class(A) <- "numeric" # This is important. We need the elements of A to be of class "numeric".  

rmvn(N, mu, mcov, A = A) 

Notice that here rmvn returns NULL, not the simulated random vectors! These can be found in the matrix provided by the user:

A[1:2, 1:5]             
##            [,1]      [,2]      [,3]      [,4]     [,5]
## [1,] -2.8920115  6.320758  3.208751 -1.717283 3.259752
## [2,] -0.7204504 12.235853 -2.201966  9.493969 4.429800

Pre-creating the matrix of random variables saves some more time:

microwrapper(rmvn(N, mu, mcov, ncores = 2, A = A),
             rmvn(N, mu, mcov, ncores = 2), 
             times = 200)
## Unit: milliseconds
##                                  expr      min        lq     mean    median
##  rmvn(N, mu, mcov, ncores = 2, A = A) 6.044221  8.947618 10.71177  9.984821
##         rmvn(N, mu, mcov, ncores = 2) 6.494025 10.519379 13.11799 12.475034
##        uq      max neval cld
##  12.35714 18.84868   200  a 
##  15.16277 35.59698   200   b

Don't look only at the median time here: the mean is the more relevant comparison, because it is much more affected by the occasional memory re-allocations that pre-creating A avoids.

Evaluating the multivariate normal and Student's t densities

Here we compare the dmvn function, which evaluates the multivariate normal density, with the equivalent function dmvnorm (from the mvtnorm package). In particular, we evaluate the log-density of \(10^4\) twenty-dimensional random vectors:

# Generating random vectors 
N <- 10000
d <- 20
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)
X <- rmvn(N, mu, mcov)

microwrapper(dmvn(X, mu, mcov, ncores = 2, log = T),
             dmvn(X, mu, mcov, log = T),
             dmvnorm(X, mu, mcov, log = T), times = 500)
## Unit: milliseconds
##                                    expr      min       lq     mean   median
##  dmvn(X, mu, mcov, ncores = 2, log = T) 1.285882 1.406305 1.649631 1.532452
##              dmvn(X, mu, mcov, log = T) 2.246185 2.427389 2.595852 2.503581
##           dmvnorm(X, mu, mcov, log = T) 2.581662 2.792003 3.855094 2.892585
##        uq        max neval cld
##  1.717268   5.829845   500 a  
##  2.636381   6.663973   500  b 
##  3.130071 146.045025   500   c
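As a quick sanity check, which is not part of the benchmark, we can verify that the two implementations agree up to floating-point error:

all.equal(dmvn(X, mu, mcov, log = TRUE),
          dmvnorm(X, mu, mcov, log = TRUE))   # should be TRUE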

Again, we get some speed-up using C++ code and some more from the parallelization. We get similar results if we use a multivariate Student's t density:

# We have a namespace conflict
microwrapper(mvnfast::dmvt(X, mu, mcov, df = 4, ncores = 2, log = T),
             mvnfast::dmvt(X, mu, mcov, df = 4, log = T),
             mvtnorm::dmvt(X, delta = mu, sigma = mcov, df = 4, log = T), times = 500)
## Unit: milliseconds
##                                                         expr      min       lq
##      mvnfast::dmvt(X, mu, mcov, df = 4, ncores = 2, log = T) 4.102857 5.497727
##                  mvnfast::dmvt(X, mu, mcov, df = 4, log = T) 2.441430 2.740683
##  mvtnorm::dmvt(X, delta = mu, sigma = mcov, df = 4, log = T) 2.785010 3.108017
##      mean  median       uq       max neval cld
##  8.003070 6.31515 8.069671 457.94787   500   b
##  3.642025 3.12200 4.048915  11.53906   500  a 
##  4.733767 3.79991 5.641744 114.96263   500  a

Evaluating the Mahalanobis distance

Finally, we compare the maha function, which evaluates squared Mahalanobis distances, with the equivalent function mahalanobis (from the stats package). Also in this case we use \(10^4\) twenty-dimensional random vectors:

# Generating random vectors 
N <- 10000
d <- 20
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)
X <- rmvn(N, mu, mcov)

microwrapper(maha(X, mu, mcov, ncores = 2),
             maha(X, mu, mcov),
             mahalanobis(X, mu, mcov))
## Unit: milliseconds
##                           expr      min       lq     mean   median       uq
##  maha(X, mu, mcov, ncores = 2) 1.156353 1.253181 1.578370 1.306841 1.612218
##              maha(X, mu, mcov) 2.130691 2.276088 2.353069 2.311144 2.350686
##       mahalanobis(X, mu, mcov) 4.010210 4.178524 5.962155 4.285853 4.931068
##         max neval cld
##    4.270702   100  a 
##    4.530075   100  a 
##  116.211884   100   b
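As before, a quick sanity check, not part of the benchmark, confirms that both functions compute the same squared distances:

all.equal(maha(X, mu, mcov), mahalanobis(X, mu, mcov))   # should be TRUE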

The acceleration is similar to that obtained in the previous sections.

Example: mean-shift mode seeking algorithm

As an example application of the dmvn function, we implemented the mean-shift mode seeking algorithm. This procedure can be used to find the modes (local maxima) of a kernel density estimate, and it can also be used to set up clustering algorithms. Here we simulate \(10^4\) two-dimensional random vectors from a mixture of two normal distributions:

set.seed(5135)
N <- 10000
d <- 2
mu1 <- c(0, 0); mu2 <- c(2, 3)
Cov1 <- matrix(c(1, 0, 0, 2), 2, 2)
Cov2 <- matrix(c(1, -0.9, -0.9, 1), 2, 2)

bin <- rbinom(N, 1, 0.5)

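# bin is recycled across the columns of the N x d matrices below, so each row
# of X is drawn entirely from one of the two mixture components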
X <- bin * rmvn(N, mu1, Cov1) + (!bin) * rmvn(N, mu2, Cov2)
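To fix ideas, each mean-shift iteration moves the current point to a kernel-weighted average of the sample, with Gaussian weights centred at the current point. The helper below is a hypothetical sketch of a single iteration, using dmvn to compute the weights; the ms function used later implements the full algorithm:

# One mean-shift step: x is the current point, X the sample, H the bandwidth matrix
meanShiftStep <- function(x, X, H){
  w <- dmvn(X, mu = x, sigma = H)    # Gaussian kernel weights K_H(X_i - x)
  drop( crossprod(w, X) / sum(w) )   # kernel-weighted average of the sample
}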

Finally, we plot the resulting probability density and, starting from 10 initial points, we use mean-shift to converge to the nearest mode:

# Plotting
np <- 100
xvals <- seq(min(X[ , 1]), max(X[ , 1]), length.out = np)
yvals <- seq(min(X[ , 2]), max(X[ , 2]), length.out = np)
theGrid <- expand.grid(xvals, yvals) 
theGrid <- as.matrix(theGrid)
dens <- dmixn(theGrid, 
              mu = rbind(mu1, mu2), 
              sigma = list(Cov1, Cov2), 
              w = rep(1, 2)/2)
plot(X[ , 1], X[ , 2], pch = '.', lwd = 0.01, col = 3)
contour(x = xvals, y = yvals, z = matrix(dens, np, np),
        levels = c(0.002, 0.01, 0.02, 0.04, 0.08, 0.15 ), add = TRUE, lwd = 2)

# Mean-shift
library(plyr)
inits <- matrix(c(-2, 2, 0, 3, 4, 3, 2, 5, 2, -3, 2, 2, 0, 2, 3, 0, 0, -4, -2, 6), 
                10, 2, byrow = TRUE)
traj <- alply(inits,
              1,
              function(input)
                  ms(X = X, 
                     init = input, 
                     H = 0.05 * cov(X), 
                     ncores = 2, 
                     store = TRUE)$traj
              )

invisible( lapply(traj, 
                  function(input){ 
                    lines(input[ , 1], input[ , 2], col = 2, lwd = 1.5)
                    points(tail(input[ , 1], 1), tail(input[ , 2], 1)) # mark the final point
           }))

As we can see from the plot, each initial point leads to one of two points that are very close to the true modes. Notice that the bandwidth for the kernel density estimator was chosen by trial-and-error, and less arbitrary choices are certainly possible in real applications.
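For instance, one standard, less arbitrary starting point is the multivariate normal reference rule (a Silverman-type rule of thumb), which scales the sample covariance matrix. The helper below, silvermanH, is only a sketch of this rule, not a recommendation of the package:

# Normal reference bandwidth matrix for a Gaussian kernel density estimator
silvermanH <- function(X){
  N <- nrow(X); d <- ncol(X)
  (4 / (d + 2))^(2 / (d + 4)) * N^(-2 / (d + 4)) * cov(X)
}
H <- silvermanH(X)   # could replace the 0.05 * cov(X) used above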

References

Salmon, J. K., Moraes, M. A., Dror, R. O. and Shaw, D. E. (2011). Parallel random numbers: as easy as 1, 2, 3. In Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis (SC '11). ACM.