While adoptr also allows the implementation of custom scores via subclassing, for most applications simple point-wise arithmetic on scores is sufficient. For instance, consider a utility-maximizing approach to planning where not a hard constraint on power but rather a trade-off between power and expected sample size is required. The simplest utility function is a weighted sum of power (with negative weight, since we minimize costs!) and expected sample size.
Consider the following situation:
H_0 <- PointMassPrior(.0, 1)
H_1 <- PointMassPrior(.2, 1)
datadist <- Binomial(.1, two_armed = FALSE)
ess <- ExpectedSampleSize(datadist, H_1)
power <- Power(datadist, H_1)
toer <- Power(datadist, H_0)
Adoptr supports such CompositeScores via the composite function, e.g. with a weight of 50 on power:
objective <- composite({ess - 50*power})
The new unconditional score can be evaluated as usual, e.g.
design <- TwoStageDesign(
  n1        = 100,
  c1f       = .0,
  c1e       = 2.0,
  n2_pivots = rep(150, 5),
  c2_pivots = sapply(1 + adoptr:::GaussLegendreRule(5)$nodes, function(x) -x + 2)
)
evaluate(objective, design)
#> [1] 59.34104
Note that conditional and unconditional scores cannot be mixed in an expression passed to composite. Composite conditional scores, however, are possible as well:
cp  <- ConditionalPower(datadist, H_1)
css <- ConditionalSampleSize()
ccs <- composite({css - 50*cp})
Of course, composite conditional scores can also be integrated using expected and (due to linearity) the result is exactly the same as before:
evaluate(expected(ccs, datadist, H_1), design)
#> [1] 59.34104
Composite scores are not restricted to linear operations but support any valid numerical expression:
cs <- composite({log(css) - 50*sin(cp)})
evaluate(cs, design, c(0, .5, 1))
#> [1] -36.39139 -36.53500 -36.55095
Even control flow is supported:
cs <- composite({
  res <- 0
  for (i in 1:3) {
    res <- res + css
  }
  res
})
evaluate(cs, design, c(0, .5, 1))
#> [1] 750 750 750
The only real constraint is that the expression must be vectorized.
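For example, element-wise functions such as pmin() or ifelse() keep an expression vectorized, whereas a scalar if () condition on css would fail when the score is evaluated at a vector of interim outcomes. A minimal sketch (the cap of 200 is purely illustrative, not part of the examples above):

```r
# Vectorized: pmin() operates element-wise on the conditional sample size,
# so the composite score can still be evaluated at several x1 values at once.
# A scalar construct like `if (css > 200) 200 else css` would not vectorize.
cs <- composite({pmin(css, 200)})
evaluate(cs, design, c(0, .5, 1))
```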