- Fixed: `forget_past_performance` had no effect in `online()`.
- `online` can now be used with multivariate data. Pass a TxD matrix as `y` and a TxDxPxK array as experts.
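The multivariate input dimensions can be sketched as follows (a NumPy illustration of the shapes, not profoc's R interface; the assumption that `y` carries one column per marginal D follows from the expert array's layout):

```python
import numpy as np

T, D, P, K = 30, 2, 3, 4  # time points, marginals, quantiles, experts

# observations: one row per time point, one column per marginal
y = np.zeros((T, D))

# expert predictions: for each time point and marginal,
# each of the K experts provides P quantile forecasts
experts = np.zeros((T, D, P, K))
```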
- `summary.online` can be used to obtain selected parameters of `online` models.
- `online` uses Rcpp Modules to bundle data and functionality into an exposed C++ class.
- The `initial_weights` argument is replaced by `init`. `init` takes a named list; currently `init_weights` and `R0` (the initial weights and the initial cumulative regret) can be provided. They have to be PxK or 1xK.
- The `profoc` function was extended:
  - `regret` can now be passed as an array as before, or as a list, e.g. `list(regret = regret_array, share = 0.2)`, if the provided regret should be mixed with the regret calculated by `online`.
  - `loss` can also be provided as a list, see above.
- The `batch` function can now minimize an alternative objective function, the quantile-weighted CRPS, via `qw_crps = TRUE`.
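The idea behind the quantile-weighted CRPS can be sketched as follows: the CRPS equals the integral of twice the pinball loss over the quantile levels, so a (weighted) average of pinball losses over a quantile grid approximates it. This is a conceptual Python sketch, not profoc's implementation; the function names are made up:

```python
def pinball(q, y, tau):
    """Pinball (quantile) loss of prediction q for observation y at level tau."""
    return tau * (y - q) if y >= q else (1 - tau) * (q - y)

def qw_crps(quantile_preds, y, taus, weight=lambda tau: 1.0):
    """Quantile-weighted CRPS approximation over a grid of quantile levels.

    A uniform weight function recovers the plain CRPS; other weight
    functions emphasize, e.g., the tails of the distribution.
    """
    return sum(
        2 * weight(t) * pinball(q, y, t) for q, t in zip(quantile_preds, taus)
    ) / len(taus)

P = 99
taus = [p / (P + 1) for p in range(1, P + 1)]
preds = taus  # the quantiles of a Uniform(0, 1) forecast
value = qw_crps(preds, 0.5, taus)  # close to 1/12, the exact CRPS here
```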
- The `profoc` function was renamed to `online` for consistency.
- New `batch` function to apply batch learning.
- New `oracle` function to approximate the oracle.
- New methods for `online` and `batch` objects.

The spline functions were rewritten to add the ability to use a non-equidistant knot sequence and a penalty term defined on the Sobolev space. This change introduces breaking changes to small parts of the API:
- `ndiff` defines the degree of differencing for creating the penalty term. For values between 1 and 2, a weighted sum of the difference penalization matrices is used.
- `rel_nseg` is replaced by `knot_distance` (the distance between knots). Defaults to 0.025, which corresponds to the grid steps when `knot_distance_power = 1` (the default).
- `knot_distance_power` defines whether knots are uniformly distributed. Defaults to 1, which corresponds to the equidistant case. Values less than 1 create more knots in the center, while values above 1 concentrate more knots in the tails.
- `allow_quantile_crossing` defines whether quantile crossing is allowed. Defaults to `FALSE`, which means that predictions will be sorted.
- Internal functions can be accessed via the `package:::function` notation.
- `y` must now be a matrix of either \(\text{T} \times 1\) or \(\text{T} \times \text{P}\).
- `trace` specifies whether a progress bar is printed. Defaults to `TRUE`.
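The knot placement controlled by `knot_distance` and `knot_distance_power` described above can be sketched like this. The symmetric power transform used here is one plausible choice that reproduces the documented behavior (equidistant at power 1, center-heavy below 1, tail-heavy above 1); it is not necessarily profoc's exact formula:

```python
def make_knots(knot_distance=0.025, power=1.0):
    """Place knots on [0, 1] with optionally non-equidistant spacing."""
    n = int(round(1 / knot_distance))
    uniform = [i / n for i in range(n + 1)]
    knots = []
    for u in uniform:
        s = 2 * u - 1                      # map to [-1, 1]
        sign = 1 if s >= 0 else -1
        # power == 1: identity; power < 1: pull knots toward the center;
        # power > 1: push knots toward the tails
        knots.append(0.5 + 0.5 * sign * abs(s) ** (1 / power))
    return knots

equi = make_knots(0.025, power=1.0)    # 41 equidistant knots, spacing 0.025
center = make_knots(0.025, power=0.5)  # denser spacing around 0.5
```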
- `loss_function` lets you specify "quantile", "expectile", or "percentage". All functions are generalized as in Gneiting (2009). The power can be scaled by `loss_parameter`. The latter defaults to 1, which leads to the well-known quantile, squared, and absolute percentage losses.
- `gradient` lets you specify whether the learning algorithm should consider the actual loss or a linearized version using the gradient of the loss. Defaults to `TRUE` (gradient-based learning).
- `forget_performance` was added. It defines the share of the past performance that will be ignored when selecting the best parameter combination.
- The `forget`
parameter was renamed to `forget_regret` to emphasize its reference to the regret.
- New `init_weights` parameter. It has to be either a Kx1 or KxP matrix specifying the experts' starting weights.
- New `lead_time` parameter: an offset for expert forecasts. Defaults to 0, which means that experts predict t+1 at time t. Setting this to h means that experts' predictions at time t refer to t+1+h; the weight updates are delayed accordingly.
- `tau` is now optional. It defaults to `1:P / (P + 1)`. A scalar given to `tau` will be repeated P times, which is useful in multivariate settings.
- The `pinball_loss` and `loss_pred` functions were replaced by a more flexible function called `loss`.
- The `weights` object is changed from a \((\text{T}+1) \times \text{K} \times \text{P}\) array to a \((\text{T}+1) \times \text{P} \times \text{K}\) array to match other objects' dimensions. The following indexing scheme is now consistent throughout the package: (Time, Probabilities, Experts, Parameter combination).
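The dimension reordering of the `weights` object can be illustrated as follows (a NumPy sketch of the indexing scheme; the sizes are made up):

```python
import numpy as np

T, P, K = 10, 3, 4  # time points, probabilities (quantiles), experts

# old layout: (T + 1) x K x P
old_weights = np.full((T + 1, K, P), 1.0 / K)

# new layout: (T + 1) x P x K, consistent with the package-wide
# (Time, Probabilities, Experts, Parameter combination) scheme
new_weights = np.transpose(old_weights, (0, 2, 1))

# weights of all K experts for quantile p at time t:
t, p = 0, 1
w = new_weights[t, p, :]
```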
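The generalized loss and the `gradient` option described above can be sketched as follows. This is a conceptual illustration of full-loss versus gradient-based (linearized) evaluation for the quantile loss; the `power` argument mimics the role of `loss_parameter`, and none of this is profoc's internal code:

```python
def quantile_loss(pred, y, tau, power=1):
    """Generalized pinball loss; power = 1 gives the usual quantile loss."""
    err = abs(pred - y) ** power
    return (1 - tau) * err if y < pred else tau * err

def quantile_grad(pred, y, tau):
    """Gradient of the (power = 1) quantile loss with respect to pred."""
    return (1.0 if y < pred else 0.0) - tau

experts = [0.2, 0.6]   # two expert predictions
weights = [0.5, 0.5]
y, tau = 0.45, 0.5
combined = sum(w * e for w, e in zip(weights, experts))

# gradient = FALSE: each expert is judged by its actual loss
actual = [quantile_loss(e, y, tau) for e in experts]

# gradient = TRUE: linearize the loss at the combined forecast,
# so each expert is judged by gradient * own prediction
g = quantile_grad(combined, y, tau)
linearized = [g * e for e in experts]
```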