Abstract
groupdata2 is a set of subsetting methods for easy grouping, windowing, folding and splitting of data. Create balanced folds for cross-validation or divide a time series into windows.
This vignette contains descriptions of functions and methods, along with simple examples of usage. For a gentler introduction to groupdata2, please see Introduction to groupdata2.
Contact author at r-pkgs@ludvigolsen.dk
You can either install the CRAN version or the GitHub development version.
# Uncomment:
# install.packages("groupdata2")
# Uncomment:
# install.packages("devtools")
# devtools::install_github("LudvigOlsen/groupdata2")
# Attaching groupdata2
library(groupdata2)
# Attaching other packages used in this vignette
library(dplyr)
library(tidyr)
library(ggplot2)
library(knitr)
# We will also be using plyr a few times, but we don't attach this
# because of possible conflicts with dplyr. Instead we use its functions
# like so: plyr::count()
groupdata2 is a set of functions and methods for easy grouping, windowing, folding and splitting of data.
There are 5 main functions:
group_factor()
Returns a factor with group numbers, e.g. 111222333.
This can be used to subset, aggregate, group_by, etc.
group()
Returns the given data as a dataframe with an added grouping factor made with group_factor(). The dataframe is grouped by the grouping factor for easy use with dplyr pipelines.
splt()
Splits the given data into the specified groups made with group_factor() and returns them in a list.
fold()
Creates (optionally) balanced folds for use in cross-validation. Balance folds on one categorical variable and/or ensure that all datapoints sharing an ID are in the same fold.
partition()
Creates (optionally) balanced partitions (e.g. training/test sets). Balance partitions on one categorical variable and/or ensure that all datapoints sharing an ID are in the same partition.
When working with time series we would often refer to the kind of groups made by group_factor(), group() and splt() as windows. In this vignette, these will be referred to as groups.
fold() creates balanced groups for cross-validation by using group(). These are referred to as folds.
Cross-validation with groupdata2
In this vignette, we go through the basics of cross-validation, such as creating balanced train/test sets with partition() and balanced folds with fold(). We also write up a simple cross-validation function and compare multiple linear regression models.
Time series with groupdata2
In this vignette, we divide up a time series into groups (windows) and subgroups using group() with the ‘greedy’ and ‘staircase’ methods. We do some basic descriptive stats of each group and use them to reduce the data size.
Automatic groups with groupdata2
In this vignette, we will use the ‘l_starts’ method with group() to allow transferring of information from one dataset to another. We will use the automatic grouping function that finds group starts all by itself.
In the examples we will be using knitr::kable() to visualize some of the data such as dataframes. You do not need to use kable() in any way when using the functions.
There are currently 9 methods for grouping the data.
It is possible to create groups based on the number of groups (default), group size, a list of group sizes, a list of group start positions, step size, or the prime number to start at. These can be passed as whole number(s) or percentage(s), while the ‘l_starts’ method also accepts ‘auto’.
Here we will take a look at the different methods.
‘greedy’ uses group size for dividing up the data.
Greedy means that each group grabs as many elements as possible (up to size), so there might be fewer elements available to the last group, but all groups other than the last are guaranteed to have the size specified.
Example
We have a vector with 57 values. We want to have group sizes of 10.
The greedy splitter will return groups with this many values in them:
10, 10, 10, 10, 10, 7
By setting force_equal to TRUE, we discard the last group if it contains fewer values than the other groups.
Example
We have a vector with 57 values. We want to have group sizes of 10.
The greedy splitter with force_equal set to TRUE will return groups with this many values in them:
10, 10, 10, 10, 10
meaning that 7 values have been discarded.
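As a minimal sketch of the two examples above (the group sizes in the comments are taken from the descriptions; console output is not shown):
# Group sizes of 10 on a vector with 57 values
groups <- group_factor(c(1:57), 10, method = 'greedy')
plyr::count(groups)  # expect group sizes 10, 10, 10, 10, 10, 7

# Discard the smaller last group
groups_equal <- group_factor(c(1:57), 10, method = 'greedy', force_equal = TRUE)
plyr::count(groups_equal)  # expect five groups of 10; 7 values discarded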
‘n_dist’ uses a specified number of groups to divide up the data.
First it creates equal groups as large as possible. Then, if there are any excess data points, it distributes them across the groups.
Example
We have a vector with 57 values. We want to get back 5 groups.
‘n_dist’ with default settings would return groups with this many values in them:
11, 11, 12, 11, 12
By setting force_equal to TRUE, ‘n_dist’ will create the largest possible, equally sized groups by discarding excess data elements.
Example
‘n_dist’ with force_equal set to TRUE would return groups with this many values in them:
11, 11, 11, 11, 11
meaning that 2 values have been discarded.
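A minimal sketch of the ‘n_dist’ examples above (expected sizes taken from the descriptions):
# Five distributed groups from a vector with 57 values
plyr::count(group_factor(c(1:57), 5, method = 'n_dist'))                      # expect 11, 11, 12, 11, 12
plyr::count(group_factor(c(1:57), 5, method = 'n_dist', force_equal = TRUE))  # expect 11, 11, 11, 11, 11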
‘n_fill’ uses a specified number of groups to divide up the data.
First it creates equal groups as large as possible. Then, if there are any excess data points, it places them in the first groups.
By setting descending to TRUE, the excess data points are instead placed in the last groups.
Example
We have a vector with 57 values. We want to get back 5 groups.
‘n_fill’ with default settings would return groups with this many values in them:
12, 12, 11, 11, 11
By setting force_equal to TRUE, ‘n_fill’ will create the largest possible, equally sized groups by discarding excess data elements.
Example
‘n_fill’ with force_equal set to TRUE would return groups with this many values in them:
11, 11, 11, 11, 11
meaning that 2 values have been discarded.
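A minimal sketch of the ‘n_fill’ examples above; the sizes shown for descending = TRUE follow from the description (excess elements go to the last groups) rather than from shown output:
# Excess elements are placed in the first groups
plyr::count(group_factor(c(1:57), 5, method = 'n_fill'))                     # expect 12, 12, 11, 11, 11
# ... or in the last groups with descending = TRUE
plyr::count(group_factor(c(1:57), 5, method = 'n_fill', descending = TRUE))  # expect 11, 11, 11, 12, 12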
‘n_last’ uses a specified number of groups to divide up the data.
With default settings, it tries to make the groups as equally sized as possible, but notice that the last group might contain fewer or more elements if the length of the data is not divisible by the number of groups. All groups but the last are guaranteed to contain the same number of elements.
Example
We have a vector with 57 values. We want to get back 5 groups.
‘n_last’ with default settings would return groups with this many values in them:
11, 11, 11, 11, 13
By setting force_equal to TRUE, ‘n_last’ will create the largest possible, equally sized groups by discarding excess data elements.
Example
‘n_last’ with force_equal set to TRUE would return groups with this many values in them:
11, 11, 11, 11, 11
meaning that 2 values have been discarded.
Notice that ‘n_last’ will always return the given number of groups. It will never return a group with zero elements. For some situations that means that the last group will contain a lot of elements. Asked to divide a vector with 57 elements into 20 groups, the first 19 groups will contain 2 elements, while the last group will itself contain 19 elements. Had we instead asked it to divide the vector into 19 groups, we would have had 3 elements in all groups.
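A minimal sketch of the ‘n_last’ behavior described above:
# All groups but the last have equal sizes
plyr::count(group_factor(c(1:57), 5, method = 'n_last'))   # expect 11, 11, 11, 11, 13
# With 20 groups, the last group absorbs the remaining elements
plyr::count(group_factor(c(1:57), 20, method = 'n_last'))  # expect nineteen groups of 2 and one group of 19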
‘n_rand’ uses a specified number of groups to divide up the data.
First it creates equal groups as large as possible. Then, if there are any excess data points, it places them randomly in the groups.
N.B.: It only places one extra element per group.
Example
We have a vector with 57 values. We want to get back 5 groups.
‘n_rand’ with default settings could return groups with this many values in them:
12, 11, 11, 11, 12
By setting force_equal to TRUE, ‘n_rand’ will create the largest possible, equally sized groups by discarding excess data elements.
Example
‘n_rand’ with force_equal set to TRUE would return groups with this many values in them:
11, 11, 11, 11, 11
meaning that 2 values have been discarded.
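A minimal sketch of ‘n_rand’; since the two excess elements are placed at random, we set a seed to make the result reproducible:
set.seed(1)
plyr::count(group_factor(c(1:57), 5, method = 'n_rand'))  # five groups of 11, two of which get an extra element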
‘l_sizes’ divides up the data by a list of group sizes.
Excess data points are placed in an extra group at the end.
n is a list/vector of group sizes.
Example
We have a vector with 57 values. We want to get back 3 groups containing 20%, 30% and 50% of the data points.
‘l_sizes’ with n = c(0.2, 0.3) would return groups with this many values in them:
11, 17, 29
By setting force_equal to TRUE, ‘l_sizes’ discards any excess elements.
Example
‘l_sizes’ with n = c(0.2, 0.3) and force_equal set to TRUE would return groups with this many values in them:
11, 17
meaning that 29 values have been discarded.
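A minimal sketch of the ‘l_sizes’ examples above:
# Groups of 20% and 30% of the 57 values; the remaining values form an extra group
plyr::count(group_factor(c(1:57), c(0.2, 0.3), method = 'l_sizes'))                      # expect 11, 17, 29
plyr::count(group_factor(c(1:57), c(0.2, 0.3), method = 'l_sizes', force_equal = TRUE))  # expect 11, 17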
‘l_starts’ starts new groups at specified values in a vector.
n is a list of starting positions. Skip values by c(value, skip_to_number), where skip_to_number is the nth appearance of the value in the vector. Groups automatically start from the first data point.
If passing n = ‘auto’ the starting positions are automatically found with find_starts().
If data is a dataframe, starts_col must be set to indicate the column to match starts.
Set starts_col to ‘index’ or ‘.index’ to match against the row names. ‘index’ first looks for a column named ‘index’ in data, while ‘.index’ ignores any column in data named ‘.index’.
Example
We have a vector with the 57 values 1:57. We want to get back groups starting at specific values in the vector.
‘l_starts’ with n = c(1, 3, 7, 25, 50) would return groups with this many values in them:
2, 4, 18, 25, 8
force_equal does not have any effect with method ‘l_starts’.
Groups can start at nth appearance of the value by using c(value, skip_to_number).
Example
We have a vector with the values c("a", "e", "o", "a", "e", "o") and want to start groups at the first "a", the first following "e" and the second following "o".
‘l_starts’ with n = list("a", "e", c("o", 2)) would return groups with this many values in them:
1, 4, 1
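A minimal sketch of the two ‘l_starts’ examples above:
# Start groups at the values 1, 3, 7, 25 and 50 of the vector 1:57
plyr::count(group_factor(c(1:57), c(1, 3, 7, 25, 50), method = 'l_starts'))  # expect 2, 4, 18, 25, 8
# Skip to the second "o" with c("o", 2)
plyr::count(group_factor(c("a", "e", "o", "a", "e", "o"),
                         list("a", "e", c("o", 2)), method = 'l_starts'))    # expect 1, 4, 1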
Using the find_starts() function, ‘l_starts’ is capable of finding the beginning of groups automatically.
A group start is a value which differs from the previous value.
Example
We have a vector with the values c("a", "a", "o", "o", "o", "a", "a") and want to automatically discover groups of data and group them.
‘l_starts’ with n = ‘auto’ would return groups with this many values in them:
2, 3, 2
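A minimal sketch of automatic group starts:
# Let 'l_starts' detect the group starts itself
plyr::count(group_factor(c("a", "a", "o", "o", "o", "a", "a"), 'auto', method = 'l_starts'))  # expect 2, 3, 2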
find_starts() finds group starts in a given vector.
A group start is a value which differs from the previous value.
Setting return_index to TRUE returns indices of group starts.
Example
We have a vector with the values c("a", "a", "o", "o", "o", "a", "a") and want to automatically discover group starts.
find_starts() would return these group starts:
"a", "o", "a"
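A minimal sketch of find_starts() on the vector above:
v <- c("a", "a", "o", "o", "o", "a", "a")
find_starts(v)                       # "a", "o", "a"
find_starts(v, return_index = TRUE)  # 1, 3, 6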
find_missing_starts() tells you the values and (optionally) skip_to numbers that would be recursively removed when using the ‘l_starts’ method with the remove_missing_starts argument set to TRUE.
Set return_skip_numbers to FALSE to get only the missing values without the skip_to numbers.
Example
We have a vector with the values c("a", "a", "o", "o", "o", "a", "a") and a vector of starting positions c("a", "d", "o", "p", "a").
find_missing_starts() would return this list of values and skip_to numbers:
list(c("d", 1), c("p", 1))
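A minimal sketch of the example above, assuming the data vector and the starts are passed as the first two arguments:
v <- c("a", "a", "o", "o", "o", "a", "a")
starts <- c("a", "d", "o", "p", "a")
find_missing_starts(v, starts)                               # list(c("d", 1), c("p", 1))
find_missing_starts(v, starts, return_skip_numbers = FALSE)  # "d", "p"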
‘staircase’ uses step_size to divide up the data.
For each group, the group size will be the step size multiplied by the group index.
Example
We have a vector with 57 values. We specify a step size of 5.
‘staircase’ with default settings would return groups with this many values in them:
5, 10, 15, 20, 7
By setting force_equal to TRUE, ‘staircase’ will discard the last group if it does not contain the expected number of values (step size multiplied by the group index).
Example
‘staircase’ with force_equal set to TRUE would return groups with this many values in them:
5, 10, 15, 20
meaning that 7 values have been discarded.
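A minimal sketch of the ‘staircase’ examples above:
# Step size of 5 on a vector with 57 values
plyr::count(group_factor(c(1:57), 5, method = 'staircase'))                      # expect 5, 10, 15, 20, 7
plyr::count(group_factor(c(1:57), 5, method = 'staircase', force_equal = TRUE))  # expect 5, 10, 15, 20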
When using the staircase method the last group might not have the size of the second last group + step size.
Use %staircase% to find the remainder.
If the last group has the size of the second last group + step size, %staircase% will return 0.
Example
%staircase% on a vector with size 57 and step size of 5 would look like this:
57 %staircase% 5
and return:
7
meaning that the last group would contain 7 values
‘primes’ creates groups with sizes following the prime numbers, in a staircase design. n is the prime number to start at (the size of the first group).
Prime numbers are generated with the ‘numbers’ package by Hans Werner Borchers.
Example
We have a vector with 57 values. We specify n (start at) as 5.
‘primes’ with default settings would return groups with this many values in them:
5, 7, 11, 13, 17, 4
By setting force_equal to TRUE, ‘primes’ will discard the last group if it does not contain the expected number of values.
Example
‘primes’ with force_equal set to TRUE would return groups with this many values in them:
5, 7, 11, 13, 17
meaning that 4 values have been discarded.
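A minimal sketch of the ‘primes’ examples above:
# Start at the prime number 5
plyr::count(group_factor(c(1:57), 5, method = 'primes'))                      # expect 5, 7, 11, 13, 17, 4
plyr::count(group_factor(c(1:57), 5, method = 'primes', force_equal = TRUE))  # expect 5, 7, 11, 13, 17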
When using the primes method, the last group might not have the size of the associated prime number, if there are not enough elements. Use %primes% to find the remainder.
Returns 0 if the last group has the size of the associated prime number.
Example
%primes% on a vector with size 57 and n (start at) as 5 would look like this:
57 %primes% 5
and return:
4
meaning that the last group would contain 4 values
data
Type: dataframe or vector
The data to process.
Used in: group_factor(), group(), splt(), fold()
n
Type: integer, numeric, character, or list
n represents either the number of groups (default), group size, list of group sizes, list of group starts, step size or prime number to start at, depending on which method is specified.
n can be given as whole number(s) (n > 1) or as percentage(s) (0 < n < 1).
Method l_starts allows n = ‘auto’.
Used in: group_factor(), group(), splt()
method
Type: character
Choose which method to use when dividing up the data.
Available methods: greedy, n_dist, n_fill, n_last, n_rand, l_sizes, l_starts, staircase, or primes
Used in: group_factor(), group(), splt(), fold()
starts_col
Type: character
Name of the column with values to match in method ‘l_starts’ when data is a dataframe.
Pass ‘index’ or ‘.index’ to use the row names. ‘index’ first looks for a column named ‘index’ in data, while ‘.index’ ignores any column in data named ‘.index’.
Used in: group_factor(), group(), splt(), fold()
force_equal
Type: logical (TRUE or FALSE)
If you need groups with the exact same size, set force_equal to TRUE.
Implementation is different in the different methods. Read more in their sections above.
Be aware that this setting discards excess datapoints!
Used in: group_factor(), group(), splt(), partition()
allow_zero
Type: logical (TRUE or FALSE)
If you set n to 0, you get an error.
If you don’t want this behavior, you can set allow_zero to TRUE, and (depending on the function) you will get the following output:
group_factor() will return the factor with NAs instead of numbers. It will be the same length as expected.
group() will return the expected dataframe with NAs instead of a grouping factor.
splt() will return the given data (dataframe or vector) in the same list format as if it had been split.
Used in: group_factor(), group(), splt()
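A minimal sketch of the allow_zero behavior described above:
# With allow_zero = TRUE, n = 0 returns NAs instead of raising an error
group_factor(c(1:5), 0, method = 'n_dist', allow_zero = TRUE)  # NAs of the same length as the data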
descending
Type: logical (TRUE or FALSE)
In methods like ‘n_fill’ where it makes sense to change the direction of the method, you can use this argument.
In ‘n_fill’ it fills up the excess data points starting from the last group instead of the first.
NB. Only some of the methods can use this argument.
Used in: group_factor(), group(), splt()
randomize
Type: logical (TRUE or FALSE)
After creating the grouping factor using the chosen method, it is possible to randomly reorganize it before returning it. Notice that this applies to all the functions that allow the argument, as group() and splt() use the grouping factor!
Used in: group_factor(), group(), splt()
N.B. fold() and partition() always use some randomization.
col_name
Type: character
Name of added grouping factor column. Allows multiple grouping factors in a dataframe.
Used in: group()
remove_missing_starts
Type: logical (TRUE or FALSE)
Recursively remove elements from the list of starts that are not found. For method ‘l_starts’ only.
Used in: group_factor(), group(), splt(), fold()
k
Type: integer or numeric
k represents either the number of folds (default), fold size, list of fold sizes, list of fold starts, step size or prime number to start at, depending on which method is specified.
k can be given as whole number(s) (k > 1), as percentage(s) (0 < k < 1), and/or as character.
Used in: fold()
p
Type: integer or numeric
Size(s) of the partition(s). Passed as a vector if specifying multiple partitions.
p can be given as whole number(s) (p > 1) or as percentage(s) (0 < p < 1).
Used in: partition()
cat_col
Type: categorical vector or factor (passed as column name)
Categorical variable to balance between folds.
E.g. when predicting a binary variable (a or b), it is necessary to have both represented in every fold.
N.B. If also passing id_col, cat_col should be a constant within IDs.
E.g. a participant must always have the same diagnosis (a or b) throughout the dataset. Else, the participant might be placed in multiple folds.
Used in: fold(), partition()
id_col
Type: Factor (passed as column name)
Factor with IDs. This will be used to keep all rows with an ID in the same fold (if possible).
E.g. If we have measured a participant multiple times and want to see the effect of time, we want to have all observations of this participant in the same fold.
Used in: fold(), partition()
list_out
Type: logical (TRUE or FALSE)
Return a list of partitions (TRUE) or a grouped dataframe (FALSE).
Used in: partition()
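partition() itself is not demonstrated later in this vignette, so here is a minimal, hypothetical sketch of how p, cat_col, id_col and list_out fit together. The dataframe mirrors the one used in the fold() examples below; exact partition sizes depend on rounding:
df_part <- data.frame("participant" = factor(rep(c('1', '2', '3', '4', '5', '6'), 3)),
                      "diagnosis" = rep(c('a', 'b', 'a', 'a', 'b', 'b'), 3),
                      "score" = sample(c(1:100), 18))
# Put roughly half of the participants from each diagnosis in the first partition,
# keeping all rows of a participant together
partitions <- partition(df_part, p = 0.5, cat_col = 'diagnosis',
                        id_col = 'participant', list_out = TRUE)
# partitions[[1]] holds the specified partition; the remaining rows end up in a second partition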
We will be using the method ‘n_dist’ on a dataframe to showcase the functions. Afterwards we will use and compare the methods.
Notice that you can also use vectors as data with all the functions.
See the necessary attached packages for running the examples under Attach Packages.
df <- data.frame("x"=c(1:12),
"species" = rep(c('cat','pig', 'human'), 4),
"age" = sample(c(1:100), 12))
groups <- group_factor(df, 5, method = 'n_dist')
groups
#> [1] 1 1 2 2 3 3 3 4 4 5 5 5
#> Levels: 1 2 3 4 5
df$groups <- groups
df %>% kable(align = 'c')
x | species | age | groups |
---|---|---|---|
1 | cat | 63 | 1 |
2 | pig | 7 | 1 |
3 | human | 21 | 2 |
4 | cat | 18 | 2 |
5 | pig | 66 | 3 |
6 | human | 37 | 3 |
7 | cat | 73 | 3 |
8 | pig | 47 | 4 |
9 | human | 67 | 4 |
10 | cat | 91 | 5 |
11 | pig | 35 | 5 |
12 | human | 70 | 5 |
aggregate(df[, 3], list(df$groups), mean) %>%
rename(group = Group.1, mean_age = x) %>%
kable(align = 'c')
group | mean_age |
---|---|
1 | 35.00000 |
2 | 19.50000 |
3 | 58.66667 |
4 | 57.00000 |
5 | 65.33333 |
Getting an equal number of elements per group with group_factor().
Notice that we discard the excess values so all groups contain the same number of elements. Since the grouping factor is shorter than the dataframe, we can’t combine them as they are. A way to do so would be to shorten the dataframe to the same length as the grouping factor.
df <- data.frame("x"=c(1:12),
"species" = rep(c('cat','pig', 'human'), 4),
"age" = sample(c(1:100), 12))
groups <- group_factor(df, 5, method = 'n_dist', force_equal = TRUE)
groups
#> [1] 1 1 2 2 3 3 4 4 5 5
#> Levels: 1 2 3 4 5
plyr::count(groups) %>%
rename(group = x, size = freq) %>%
kable(align = 'c')
group | size |
---|---|
1 | 2 |
2 | 2 |
3 | 2 |
4 | 2 |
5 | 2 |
First we make the dataframe the same size as the grouping factor. Then we add the grouping factor to the dataframe.
df <- head(df, length(groups)) %>%
mutate(group = groups)
df %>% kable(align = 'c')
x | species | age | group |
---|---|---|---|
1 | cat | 94 | 1 |
2 | pig | 22 | 1 |
3 | human | 64 | 2 |
4 | cat | 13 | 2 |
5 | pig | 26 | 3 |
6 | human | 37 | 3 |
7 | cat | 2 | 4 |
8 | pig | 36 | 4 |
9 | human | 81 | 5 |
10 | cat | 31 | 5 |
df <- data.frame("x"=c(1:12),
"species" = rep(c('cat','pig', 'human'), 4),
"age" = sample(c(1:100), 12))
df_grouped <- group(df, 5, method = 'n_dist')
df_grouped %>% kable(align = 'c')
x | species | age | .groups |
---|---|---|---|
1 | cat | 50 | 1 |
2 | pig | 19 | 1 |
3 | human | 82 | 2 |
4 | cat | 65 | 2 |
5 | pig | 77 | 3 |
6 | human | 11 | 3 |
7 | cat | 69 | 3 |
8 | pig | 39 | 4 |
9 | human | 76 | 4 |
10 | cat | 59 | 5 |
11 | pig | 71 | 5 |
12 | human | 100 | 5 |
2.2 Using group() with dplyr pipelines to get mean age
df_means <- df %>%
group(5, method = 'n_dist') %>%
dplyr::summarise(mean_age = mean(age))
df_means %>% kable(align = 'c')
.groups | mean_age |
---|---|
1 | 34.50000 |
2 | 73.50000 |
3 | 52.33333 |
4 | 57.50000 |
5 | 76.66667 |
Getting an equal number of elements per group with group().
Notice that we discard the excess rows/elements so all groups contain the same number of elements.
df <- data.frame("x"=c(1:12),
"species" = rep(c('cat','pig', 'human'), 4),
"age" = sample(c(1:100), 12))
df_grouped <- df %>%
group(5, method = 'n_dist', force_equal = TRUE)
df_grouped %>% kable(align = 'c')
x | species | age | .groups |
---|---|---|---|
1 | cat | 53 | 1 |
2 | pig | 79 | 1 |
3 | human | 3 | 2 |
4 | cat | 47 | 2 |
5 | pig | 71 | 3 |
6 | human | 66 | 3 |
7 | cat | 45 | 4 |
8 | pig | 81 | 4 |
9 | human | 41 | 5 |
10 | cat | 23 | 5 |
df <- data.frame("x"=c(1:12),
"species" = rep(c('cat','pig', 'human'), 4),
"age" = sample(c(1:100), 12))
df_list <- splt(df, 5, method = 'n_dist')
df_list %>% kable(align = 'c')
v = c(1:6)
splt(v, 3, method = 'n_dist')
#> $`1`
#> [1] 1 2
#>
#> $`2`
#> [1] 3 4
#>
#> $`3`
#> [1] 5 6
Getting an equal number of elements per group with splt().
Notice that we discard the excess rows/elements so all groups contain the same number of elements.
df <- data.frame("x"=c(1:12),
"species" = rep(c('cat','pig', 'human'), 4),
"age" = sample(c(1:100), 12))
df_list <- splt(df, 5, method = 'n_dist', force_equal = TRUE)
df_list %>% kable(align = 'c')
df <- data.frame("participant" = factor(rep(c('1','2', '3', '4', '5', '6'), 3)),
"age" = rep(sample(c(1:100), 6), 3),
"diagnosis" = rep(c('a', 'b', 'a', 'a', 'b', 'b'), 3),
"score" = sample(c(1:100), 3*6))
df <- df[order(df$participant),]
# Remove index
rownames(df) <- NULL
# Add session info
df$session <- rep(c('1','2', '3'), 6)
kable(df, align = 'c')
participant | age | diagnosis | score | session |
---|---|---|---|---|
1 | 44 | a | 72 | 1 |
1 | 44 | a | 61 | 2 |
1 | 44 | a | 92 | 3 |
2 | 71 | b | 13 | 1 |
2 | 71 | b | 82 | 2 |
2 | 71 | b | 53 | 3 |
3 | 40 | a | 25 | 1 |
3 | 40 | a | 100 | 2 |
3 | 40 | a | 57 | 3 |
4 | 32 | a | 14 | 1 |
4 | 32 | a | 73 | 2 |
4 | 32 | a | 31 | 3 |
5 | 73 | b | 24 | 1 |
5 | 73 | b | 41 | 2 |
5 | 73 | b | 23 | 3 |
6 | 20 | b | 6 | 1 |
6 | 20 | b | 37 | 2 |
6 | 20 | b | 83 | 3 |
df_folded <- fold(df, 3, method = 'n_dist')
# Order by folds
df_folded <- df_folded[order(df_folded$.folds),]
kable(df_folded, align = 'c')
participant | age | diagnosis | score | session | .folds |
---|---|---|---|---|---|
1 | 44 | a | 61 | 2 | 1 |
1 | 44 | a | 92 | 3 | 1 |
4 | 32 | a | 73 | 2 | 1 |
4 | 32 | a | 31 | 3 | 1 |
5 | 73 | b | 24 | 1 | 1 |
6 | 20 | b | 83 | 3 | 1 |
1 | 44 | a | 72 | 1 | 2 |
2 | 71 | b | 13 | 1 | 2 |
3 | 40 | a | 100 | 2 | 2 |
4 | 32 | a | 14 | 1 | 2 |
5 | 73 | b | 41 | 2 | 2 |
6 | 20 | b | 6 | 1 | 2 |
2 | 71 | b | 82 | 2 | 3 |
2 | 71 | b | 53 | 3 | 3 |
3 | 40 | a | 25 | 1 | 3 |
3 | 40 | a | 57 | 3 | 3 |
5 | 73 | b | 23 | 3 | 3 |
6 | 20 | b | 37 | 2 | 3 |
df_folded <- fold(df, 3, cat_col = 'diagnosis', method = 'n_dist')
# Order by folds
df_folded <- df_folded[order(df_folded$.folds),]
kable(df_folded, align = 'c')
participant | age | diagnosis | score | session | .folds |
---|---|---|---|---|---|
1 | 44 | a | 61 | 2 | 1 |
3 | 40 | a | 25 | 1 | 1 |
3 | 40 | a | 57 | 3 | 1 |
2 | 71 | b | 13 | 1 | 1 |
5 | 73 | b | 41 | 2 | 1 |
6 | 20 | b | 6 | 1 | 1 |
1 | 44 | a | 72 | 1 | 2 |
1 | 44 | a | 92 | 3 | 2 |
4 | 32 | a | 14 | 1 | 2 |
2 | 71 | b | 53 | 3 | 2 |
5 | 73 | b | 24 | 1 | 2 |
6 | 20 | b | 37 | 2 | 2 |
3 | 40 | a | 100 | 2 | 3 |
4 | 32 | a | 73 | 2 | 3 |
4 | 32 | a | 31 | 3 | 3 |
2 | 71 | b | 82 | 2 | 3 |
5 | 73 | b | 23 | 3 | 3 |
6 | 20 | b | 83 | 3 | 3 |
Let’s count how many of each diagnosis there are in each group.
df_folded %>% group_by(.folds) %>% count(diagnosis) %>% kable(align='c')
.folds | diagnosis | n |
---|---|---|
1 | a | 3 |
1 | b | 3 |
2 | a | 3 |
2 | b | 3 |
3 | a | 3 |
3 | b | 3 |
df_folded <- fold(df, 3, id_col = 'participant', method = 'n_dist')
# Order by folds
df_folded <- df_folded[order(df_folded$.folds),]
# Remove index (Looks prettier in the table!)
rownames(df_folded) <- NULL
kable(df_folded, align = 'c')
participant | age | diagnosis | score | session | .folds |
---|---|---|---|---|---|
3 | 40 | a | 25 | 1 | 1 |
3 | 40 | a | 100 | 2 | 1 |
3 | 40 | a | 57 | 3 | 1 |
5 | 73 | b | 24 | 1 | 1 |
5 | 73 | b | 41 | 2 | 1 |
5 | 73 | b | 23 | 3 | 1 |
2 | 71 | b | 13 | 1 | 2 |
2 | 71 | b | 82 | 2 | 2 |
2 | 71 | b | 53 | 3 | 2 |
6 | 20 | b | 6 | 1 | 2 |
6 | 20 | b | 37 | 2 | 2 |
6 | 20 | b | 83 | 3 | 2 |
1 | 44 | a | 72 | 1 | 3 |
1 | 44 | a | 61 | 2 | 3 |
1 | 44 | a | 92 | 3 | 3 |
4 | 32 | a | 14 | 1 | 3 |
4 | 32 | a | 73 | 2 | 3 |
4 | 32 | a | 31 | 3 | 3 |
Let’s see how participants were distributed in the groups.
df_folded %>% group_by(.folds) %>% count(participant) %>% kable(align='c')
.folds | participant | n |
---|---|---|
1 | 3 | 3 |
1 | 5 | 3 |
2 | 2 | 3 |
2 | 6 | 3 |
3 | 1 | 3 |
3 | 4 | 3 |
fold() first divides up the dataframe by cat_col and then creates k folds for each diagnosis. As there are only 3 participants per diagnosis, we can create at most 3 folds in this scenario.
df_folded <- fold(df, 3, cat_col = 'diagnosis', id_col = 'participant', method = 'n_dist')
# Order by folds
df_folded <- df_folded[order(df_folded$.folds),]
kable(df_folded, align = 'c')
participant | age | diagnosis | score | session | .folds |
---|---|---|---|---|---|
1 | 44 | a | 72 | 1 | 1 |
1 | 44 | a | 61 | 2 | 1 |
1 | 44 | a | 92 | 3 | 1 |
6 | 20 | b | 6 | 1 | 1 |
6 | 20 | b | 37 | 2 | 1 |
6 | 20 | b | 83 | 3 | 1 |
3 | 40 | a | 25 | 1 | 2 |
3 | 40 | a | 100 | 2 | 2 |
3 | 40 | a | 57 | 3 | 2 |
5 | 73 | b | 24 | 1 | 2 |
5 | 73 | b | 41 | 2 | 2 |
5 | 73 | b | 23 | 3 | 2 |
4 | 32 | a | 14 | 1 | 3 |
4 | 32 | a | 73 | 2 | 3 |
4 | 32 | a | 31 | 3 | 3 |
2 | 71 | b | 13 | 1 | 3 |
2 | 71 | b | 82 | 2 | 3 |
2 | 71 | b | 53 | 3 | 3 |
Let’s count how many of each diagnosis there are in each group and find which participants are in which groups.
df_folded %>% group_by(.folds) %>% count(diagnosis, participant) %>% kable(align='c')
.folds | diagnosis | participant | n |
---|---|---|---|
1 | a | 1 | 3 |
1 | b | 6 | 3 |
2 | a | 3 | 3 |
2 | b | 5 | 3 |
3 | a | 4 | 3 |
3 | b | 2 | 3 |
df <- data.frame("x"=c(1:12),
"species" = rep(c('cat','pig', 'human'), 4),
"age" = sample(c(1:100), 12))
groups <- group_factor(df, 5, method = 'n_dist', randomize = TRUE)
groups
#> [1] 5 3 1 2 3 2 4 5 4 1 5 3
#> Levels: 1 2 3 4 5
df_list <- splt(df, 5, method = 'n_dist', randomize = TRUE)
df_list %>% kable(align = 'c')
In this section we will take a look at the outputs we get from the different methods.
Below you’ll see a dataframe with counts of group elements when dividing up the same data with the different n_ methods. The forced_equal column is simply the result of setting force_equal to TRUE.
forced_equal: Since this is a setting to make sure that all groups are of the same size, it makes sense that all the groups have the same size.
n_dist: Compared to forced_equal, we see the 3 datapoints that forced_equal had discarded. These are distributed across the groups (in this example, groups 2, 4 and 6).
n_fill: The 3 extra datapoints are placed in the first 3 groups. Had we set descending to TRUE, it would have been the last 3 groups instead.
n_last: We see that n_last creates equal group sizes in all but the last group. This means that the last group can sometimes be much larger or smaller than the other groups. Here it is a third larger than the other groups.
n_rand: The extra datapoints are placed randomly, so we would see them located at different groups if we ran the script again, unless we use set.seed() before running the function.
#> x n_dist n_fill n_last n_rand forced_equal
#> 1 1 9 10 9 9 9
#> 2 2 10 10 9 9 9
#> 3 3 9 10 9 10 9
#> 4 4 10 9 9 9 9
#> 5 5 9 9 9 10 9
#> 6 6 10 9 12 10 9
Here is another example.
#> x n_dist n_fill n_last n_rand forced_equal
#> 1 1 10 11 11 11 10
#> 2 2 11 11 11 11 10
#> 3 3 10 11 11 11 10
#> 4 4 11 11 11 10 10
#> 5 5 11 11 11 11 10
#> 6 6 10 11 11 10 10
#> 7 7 11 11 11 11 10
#> 8 8 11 10 11 10 10
#> 9 9 10 10 11 11 10
#> 10 10 11 10 11 10 10
#> 11 11 11 10 7 11 10
Below you will see group sizes when using the method ‘greedy’ and asking for group sizes of 8, 15, and 20. What should become clear is that only the last group can have a different size than the one we asked for. This is important if, say, you want to split a time series into groups of 100 elements, but the time series is not divisible by 100. In that case you could use force_equal to remove the excess elements, if you need equal groups.
With a size of 8, we get 13 groups. The last group (13) only contains 4 elements, but all the other groups contain 8 elements as specified.
With a size of 15, we get 7 groups. The last group (7) contains only 10 elements, but all the other groups contain 15 elements as specified.
With a size of 20, we get 5 groups. As the 100 elements in the split vector are divisible by 20, the last group also contains 20 elements, and so we have equal groups.
Below you’ll see a plot with the group sizes at each group when using step sizes 2, 5, and 11.
At a step size of 2 elements, the group size simply increases by 2 for each group, until the last group (32), where it runs out of elements. Had we set force_equal to TRUE, this last group would have been discarded because of the lack of elements.
At a step size of 5 elements, it increases by 5 every time. Because of this, it runs out of elements faster. Again we see that the last group (20) has fewer elements.
At a step size of 11 elements, it increases by 11 every time. It seems that the last group is not too small, but it can be hard to see at this scale. Actually, the last group is missing just 1 element to be complete, and so it would have been discarded if force_equal was set to TRUE.
Below we will take a quick look at the cumulative sum of group elements to get an idea of what is going on under the hood.
Remember that the split vector had 1000 elements? That is why they all stop at 1000 on the y-axis. There are simply no more elements left!
Below you’ll see a plot with the group sizes at each group when starting from prime numbers 2, 5, and 11.
Below we will take a quick look at the cumulative sum of group elements to get an idea of what is going on under the hood.
Because the split vector had 1000 elements, the curve stops at 1000 on the y-axis. There are simply no more elements left!
You have reached the end! Now celebrate by taking the week off, splitting data and laughing!