GPR - example 1

library(GPFDA)
require(MASS)

Simulating data from a GP with unidimensional input

We simulate \(30\) independent realisations from a zero-mean GP whose covariance function is the sum of a linear kernel and a squared exponential kernel. Each observed curve is sampled at \(15\) time points on \([0,1]\).

set.seed(123)
nrep <- 30
n <- 15
input <- seq(0, 1, length.out=n)
hp <- list('linear.a'=log(40), 'linear.i'=log(10),
           'pow.ex.v'=log(5), 'pow.ex.w'=log(15),
           'vv'=log(0.3))
Sigma <- cov.linear(hyper=hp, input=input) + 
  cov.pow.ex(hyper=hp, input=input, gamma=2) + 
  diag(exp(hp$vv), n, n)
Y <- t(mvrnorm(n=nrep, mu=rep(0,n), Sigma=Sigma))
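Before fitting, it can be helpful to visualise the simulated realisations; a quick base-R sketch using matplot:

```r
# Plot the 30 simulated curves against the common time grid
matplot(input, Y, type = "l", lty = 1, col = "grey60",
        xlab = "t", ylab = "y(t)", main = "Simulated realisations")
```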

Estimation

Estimation of the GPR model can be carried out without using the analytical gradient:

set.seed(111)
fitNoGrad <- gpr(input=input, response=Y, Cov=c('linear','pow.ex'), gamma=2, 
                 trace=4, nInitCandidates=1, useGradient=FALSE)
#> 
#>  --------- Initialising ---------- 
#> iter:  -loglik:     linear.a     linear.i     pow.ex.v     pow.ex.w     vv     
#>   0:     3711.8575:  1.71278  4.17194 -2.38691 0.274907 -2.25353
#>   4:     942.14397:  3.30895  3.66638 -2.96802 -0.666393 0.384504
#>   8:     917.64085:  3.81603  2.23017 -2.89464 -0.528616 0.687257
#>  12:     861.04967:  4.22513  2.13865 -0.245436  3.77629 0.544165
#>  16:     722.73438:  3.96940  2.14387  1.23493  2.88357 -1.10751
#>  20:     721.70119:  3.86146  2.11529  1.33133  2.82828 -1.20729
#>  24:     721.60108:  3.75021  2.06638  1.34295  2.83609 -1.20554
#>  28:     721.55626:  3.71439  2.04729  1.33449  2.81930 -1.20972
#> 
#>      optimization finished.

If one wants to use the analytical gradient:

set.seed(111)
fit <- gpr(input=input, response=Y, Cov=c('linear','pow.ex'), gamma=2, 
           trace=4, nInitCandidates=1, useGradient=TRUE)
#> 
#>  --------- Initialising ---------- 
#> iter:  -loglik:     linear.a     linear.i     pow.ex.v     pow.ex.w     vv     
#>   0:     3711.8575:  1.71278  4.17194 -2.38691 0.274907 -2.25353
#>   4:     934.72164:  3.33584  3.79766 -2.82219 -0.407991 0.530585
#>   8:     917.51640:  3.79340  2.07776 -2.64103 -0.0756082 0.692108
#>  12:     810.17635:  4.56448  2.49453  1.59043  1.79717 0.0581705
#>  16:     722.91927:  4.03888  2.33859  1.48098  2.76962 -1.20929
#>  20:     721.55718:  3.71214  2.05899  1.33235  2.82090 -1.20784
#> 
#>      optimization finished.

Note that fewer iterations are needed when analytical expressions for the gradient are used in the optimisation.

We can see that the hyperparameter estimates are very accurate despite the fairly small sample size:

sapply(fit$hyper, exp)
#> linear.a linear.i pow.ex.v pow.ex.w       vv 
#>  41.0373   7.7446   3.7981  16.7650   0.2983
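Since the data were simulated, the estimates can be checked directly against the true hyperparameter values; a quick sketch using the `hp` and `fit` objects defined earlier:

```r
# True values on the original (non-log) scale versus the estimates
trueVals <- sapply(hp, exp)
estVals  <- sapply(fit$hyper, exp)
round(rbind(true = trueVals, estimated = estVals), 4)
```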

The fitted model for the \(10\)th realisation can be plotted:

plot(fit, realisation=10)

#> Predicted values not found, ploting fitted values

Prediction

Predictions on a fine grid for the \(10\)th realisation can be obtained as follows:

inputNew <- seq(0, 1, length.out = 1000)
pred1 <- gprPredict(train=fit, inputNew=inputNew, noiseFreePred=TRUE)
plot(pred1, realisation=10)

If one wants to include the noise variance in the predictions at the new time points:

pred2 <- gprPredict(train=fit, inputNew=inputNew, noiseFreePred=FALSE)
plot(pred2, realisation=10)
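For a GP, the predictive variance of a new noisy observation equals the noise-free predictive variance plus the estimated noise variance \(\exp(\hat{vv})\). This relationship can be checked numerically; a sketch assuming the prediction objects expose pointwise predictive standard deviations as a component named `pred.sd`:

```r
# Noisy predictive variance = noise-free variance + estimated noise variance.
# (Assumes `pred.sd` is the component holding pointwise sd's; check str(pred1).)
sdFree  <- pred1$pred.sd
sdNoisy <- pred2$pred.sd
all.equal(sdNoisy, sqrt(sdFree^2 + exp(fit$hyper$vv)))
```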