funModeling quick-start {#quick_start}

funModeling

This package contains a set of functions related to exploratory data analysis, data preparation, and model performance. It is used by people coming from business, research, and teaching (professors and students).

funModeling is closely related to the Data Science Live Book (open source, 2017): most of its functionality is used to explain different topics addressed in the book.

Data Science Live Book

Opening the black-box

Some functions have in-line comments so the user can open the black box and learn how they were developed, or tune and improve any of them.

All the functions are well documented, explaining all the parameters with the help of many short examples. R documentation can be accessed by: help("name_of_the_function").
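For example, to open the help page of the freq function (shown later in this quick-start):

# open the R documentation of a given function, e.g. freq
help("freq")

# the question-mark shorthand works as well
?freq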


About this quick-start

This quick-start is focused only on the functions. All explanations around them, and the how and when to use them, can be accessed by following the “Read more here.” links below each section, which redirect you to the book.

Most of the funModeling functions are listed below, grouped by category.

Exploratory data analysis

status: Dataset health status (2nd version)

Similar to df_status, but it returns all percentages in the 0 to 1 range (not 0 to 100).

library(funModeling)

status(heart_disease)
##                  variable q_zeros   p_zeros q_na       p_na q_inf p_inf    type
## 1                     age       0 0.0000000    0 0.00000000     0     0 integer
## 2                  gender       0 0.0000000    0 0.00000000     0     0  factor
## 3              chest_pain       0 0.0000000    0 0.00000000     0     0  factor
## 4  resting_blood_pressure       0 0.0000000    0 0.00000000     0     0 integer
## 5       serum_cholestoral       0 0.0000000    0 0.00000000     0     0 integer
## 6     fasting_blood_sugar     258 0.8514851    0 0.00000000     0     0  factor
## 7         resting_electro     151 0.4983498    0 0.00000000     0     0  factor
## 8          max_heart_rate       0 0.0000000    0 0.00000000     0     0 integer
## 9             exer_angina     204 0.6732673    0 0.00000000     0     0 integer
## 10                oldpeak      99 0.3267327    0 0.00000000     0     0 numeric
## 11                  slope       0 0.0000000    0 0.00000000     0     0 integer
## 12      num_vessels_flour     176 0.5808581    4 0.01320132     0     0 integer
## 13                   thal       0 0.0000000    2 0.00660066     0     0  factor
## 14 heart_disease_severity     164 0.5412541    0 0.00000000     0     0 integer
## 15           exter_angina     204 0.6732673    0 0.00000000     0     0  factor
## 16      has_heart_disease       0 0.0000000    0 0.00000000     0     0  factor
##    unique
## 1      41
## 2       2
## 3       4
## 4      50
## 5     152
## 6       2
## 7       3
## 8      91
## 9       2
## 10     40
## 11      3
## 12      4
## 13      3
## 14      5
## 15      2
## 16      2

Note: df_status will be deprecated; please use status instead.

data_integrity: Dataset integrity check

A handy function that returns several vectors of variable names, aimed at quickly filtering variables with NA values, categorical variables (factor/character), numerical variables, and other types (boolean, date, POSIX).

It also returns a vector of variables with high cardinality.

It returns an 'integrity' object with two elements: 'status_now' (the output of the status function) and a 'results' list, where elements such as vars_cat, vars_num, and vars_num_with_NA can be found. Explore the object for more.

library(funModeling)

di=data_integrity(heart_disease)

# returns a summary
summary(di)
## 
## ◌ {Numerical with NA} num_vessels_flour
## ◌ {Categorical with NA} thal
# print all the metadata information
print(di)
## $vars_num_with_NA
##            variable q_na       p_na
## 1 num_vessels_flour    4 0.01320132
## 
## $vars_cat_with_NA
##   variable q_na       p_na
## 1     thal    2 0.00660066
## 
## $vars_cat_high_card
## [1] variable unique  
## <0 rows> (or 0-length row.names)
## 
## $MAX_UNIQUE
## [1] 35
## 
## $vars_one_value
## character(0)
## 
## $vars_cat
## [1] "gender"              "chest_pain"          "fasting_blood_sugar"
## [4] "resting_electro"     "thal"                "exter_angina"       
## [7] "has_heart_disease"  
## 
## $vars_num
## [1] "age"                    "resting_blood_pressure" "serum_cholestoral"     
## [4] "max_heart_rate"         "exer_angina"            "oldpeak"               
## [7] "slope"                  "num_vessels_flour"      "heart_disease_severity"
## 
## $vars_char
## character(0)
## 
## $vars_factor
## [1] "gender"              "chest_pain"          "fasting_blood_sugar"
## [4] "resting_electro"     "thal"                "exter_angina"       
## [7] "has_heart_disease"  
## 
## $vars_other
## character(0)
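As a quick illustration, each element (vectors of names or small summary tables) can be pulled out of the results slot; a minimal sketch, assuming the standard $ access on the integrity object:

# variable names grouped by type, taken from the integrity object
di$results$vars_num_with_NA
di$results$vars_cat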

plot_num: Plotting distributions for numerical variables

Plots only numeric variables.

plot_num(heart_disease)

[plot: distributions of the numerical variables in heart_disease]
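Since only numeric columns are plotted, a smaller chart can be obtained by simply passing a subset of the data frame:

# plot just a few numerical variables by subsetting the data frame
plot_num(heart_disease[, c("age", "oldpeak", "max_heart_rate")])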

Notes:

Read more here.


profiling_num: Calculating several statistics for numerical variables

Retrieves several statistics for numerical variables.

profiling_num(heart_disease)
##                 variable        mean    std_dev variation_coef   p_01  p_05
## 1                    age  54.4389439  9.0386624      0.1660330  35.00  40.0
## 2 resting_blood_pressure 131.6897690 17.5997477      0.1336455 100.00 108.0
## 3      serum_cholestoral 246.6930693 51.7769175      0.2098840 149.00 175.1
## 4         max_heart_rate 149.6072607 22.8750033      0.1529004  95.02 108.1
## 5            exer_angina   0.3267327  0.4697945      1.4378558   0.00   0.0
## 6                oldpeak   1.0396040  1.1610750      1.1168436   0.00   0.0
## 7                  slope   1.6006601  0.6162261      0.3849825   1.00   1.0
## 8      num_vessels_flour   0.6722408  0.9374383      1.3944978   0.00   0.0
## 9 heart_disease_severity   0.9372937  1.2285357      1.3107265   0.00   0.0
##    p_25  p_50  p_75  p_95   p_99   skewness kurtosis  iqr        range_98
## 1  48.0  56.0  61.0  68.0  71.00 -0.2080241 2.465477 13.0        [35, 71]
## 2 120.0 130.0 140.0 160.0 180.00  0.7025346 3.845881 20.0      [100, 180]
## 3 211.0 241.0 275.0 326.9 406.74  1.1298741 7.398208 64.0   [149, 406.74]
## 4 133.5 153.0 166.0 181.9 191.96 -0.5347844 2.927602 32.5 [95.02, 191.96]
## 5   0.0   0.0   1.0   1.0   1.00  0.7388506 1.545900  1.0          [0, 1]
## 6   0.0   0.8   1.6   3.4   4.20  1.2634255 4.530193  1.6        [0, 4.2]
## 7   1.0   2.0   2.0   3.0   3.00  0.5057957 2.363050  1.0          [1, 3]
## 8   0.0   0.0   1.0   3.0   3.00  1.1833771 3.234941  1.0          [0, 3]
## 9   0.0   0.0   2.0   3.0   4.00  1.0532483 2.843788  2.0          [0, 4]
##         range_80
## 1       [42, 66]
## 2     [110, 152]
## 3 [188.8, 308.8]
## 4   [116, 176.6]
## 5         [0, 1]
## 6       [0, 2.8]
## 7         [1, 2]
## 8         [0, 2]
## 9         [0, 3]
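The output is a regular data frame, so it can be subset to keep only the metrics of interest; for example, the 1st/99th percentiles and the 80% range:

# keep only some of the returned metrics
prof=profiling_num(heart_disease)
prof[, c("variable", "p_01", "p_99", "range_80")]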

Note:

Read more here.


freq: Getting frequency distributions for categoric variables

library(dplyr)

# Select only two variables for this example
heart_disease_2=heart_disease %>% select(chest_pain, thal)

# Frequency distribution
freq(heart_disease_2)

[plot: frequency distribution of chest_pain]

##   chest_pain frequency percentage cumulative_perc
## 1          4       144      47.52           47.52
## 2          3        86      28.38           75.90
## 3          2        50      16.50           92.40
## 4          1        23       7.59          100.00

[plot: frequency distribution of thal]

##   thal frequency percentage cumulative_perc
## 1    3       166      54.79           54.79
## 2    7       117      38.61           93.40
## 3    6        18       5.94           99.34
## 4 <NA>         2       0.66          100.00
## [1] "Variables processed: chest_pain, thal"

Notes:

Read more here.


Correlation

correlation_table: Calculates R statistic

Retrieves the R metric (the Pearson correlation coefficient) for all numeric variables, skipping the categoric ones.

correlation_table(heart_disease, "has_heart_disease")
##                 Variable has_heart_disease
## 1      has_heart_disease              1.00
## 2 heart_disease_severity              0.83
## 3      num_vessels_flour              0.46
## 4                oldpeak              0.42
## 5                  slope              0.34
## 6                    age              0.23
## 7 resting_blood_pressure              0.15
## 8      serum_cholestoral              0.08
## 9         max_heart_rate             -0.42

Notes:

Read more here.


var_rank_info: Correlation based on information theory

Calculates correlation based on several information theory metrics between all variables in a data frame and a target variable.

var_rank_info(heart_disease, "has_heart_disease")
##                       var    en    mi           ig           gr
## 1  heart_disease_severity 1.846 0.995 0.9950837595 0.5390655068
## 2                    thal 2.032 0.209 0.2094550580 0.1680456709
## 3             exer_angina 1.767 0.139 0.1391389302 0.1526393841
## 4            exter_angina 1.767 0.139 0.1391389302 0.1526393841
## 5              chest_pain 2.527 0.205 0.2050188327 0.1180286190
## 6       num_vessels_flour 2.381 0.182 0.1815217813 0.1157736478
## 7                   slope 2.177 0.112 0.1124219069 0.0868799615
## 8       serum_cholestoral 7.481 0.561 0.5605556771 0.0795557228
## 9                  gender 1.842 0.057 0.0572537665 0.0632970555
## 10                oldpeak 4.874 0.249 0.2491668741 0.0603576874
## 11         max_heart_rate 6.832 0.334 0.3336174096 0.0540697329
## 12 resting_blood_pressure 5.567 0.143 0.1425548155 0.0302394591
## 13                    age 5.928 0.137 0.1371752885 0.0270548944
## 14        resting_electro 2.059 0.024 0.0241482908 0.0221938072
## 15    fasting_blood_sugar 1.601 0.000 0.0004593775 0.0007579095

Note: It analyzes both numerical and categorical variables. Numerical variables are discretized first, using the same numeric discretization method as discretize_df.

Read more here.


cross_plot: Distribution plot between input and target variable

Retrieves the relative and absolute distribution between an input and a target variable. Useful for explaining and reporting whether a variable is important or not.

cross_plot(data=heart_disease, input=c("age", "oldpeak"), target="has_heart_disease")
## Plotting transformed variable 'age' with 'equal_freq', (too many values). Disable with 'auto_binning=FALSE'
## Plotting transformed variable 'oldpeak' with 'equal_freq', (too many values). Disable with 'auto_binning=FALSE'

[plots: cross_plot of age and of oldpeak against has_heart_disease]
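As the messages above indicate, the automatic equal-frequency binning can be disabled:

# keep the original values of oldpeak (no automatic binning)
cross_plot(data=heart_disease, input="oldpeak", target="has_heart_disease", auto_binning=FALSE)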

Notes:

Read more here.


plotar: Boxplot and density histogram between input and target variables

Useful for explaining and reporting whether a variable is important or not.

Boxplot:

plotar(data=heart_disease, input = c("age", "oldpeak"), target="has_heart_disease", plot_type="boxplot")
## Warning: `fun.y` is deprecated. Use `fun` instead.

## Warning: `fun.y` is deprecated. Use `fun` instead.

[plots: boxplots of age and oldpeak by has_heart_disease]

Read more here.


Density histograms:

plotar(data=mtcars, input = "gear", target="cyl", plot_type="histdens")

[plot: density histogram of gear by cyl]

Read more here.


categ_analysis: Quantitative analysis for binary outcome

Profiles a binary target based on a categorical input variable, reporting the representativeness (perc_rows) and the accuracy (perc_target) for each value of the input variable; for example, the rate of flu infection per country.

df_ca=categ_analysis(data = data_country, input = "country", target = "has_flu")

head(df_ca)
##          country mean_target sum_target perc_target q_rows perc_rows
## 1       Malaysia       1.000          1       0.012      1     0.001
## 2         Mexico       0.667          2       0.024      3     0.003
## 3       Portugal       0.200          1       0.012      5     0.005
## 4 United Kingdom       0.178          8       0.096     45     0.049
## 5        Uruguay       0.175         11       0.133     63     0.069
## 6         Israel       0.167          1       0.012      6     0.007

Note:

This function is used to analyze data when we need to reduce variable cardinality in predictive modeling.
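A minimal sketch of that idea, collapsing the countries with low representativeness into a single level (the 0.01 cutoff and the country_2 name are arbitrary, for illustration only):

# countries covering less than 1% of the rows
low_rep=df_ca$country[df_ca$perc_rows < 0.01]

# collapse them into a single "other" category
data_country$country_2=ifelse(data_country$country %in% low_rep, "other", as.character(data_country$country))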

Read more here.

Data preparation

Data discretization

discretize_get_bins + discretize_df: Convert numeric variables to categoric

We need two functions: discretize_get_bins, which returns the thresholds for each variable, and then discretize_df, which takes the result from the first function and converts the desired variables. The binning criterion is equal frequency.

Example converting only two variables from a dataset.

# Step 1: Getting the thresholds for the desired variables: "max_heart_rate" and "oldpeak"
d_bins=discretize_get_bins(data=heart_disease, input=c("max_heart_rate", "oldpeak"), n_bins=5)
## Variables processed: max_heart_rate, oldpeak
# Step 2: Applying the threshold to get the final processed data frame
heart_disease_discretized=discretize_df(data=heart_disease, data_bins=d_bins, stringsAsFactors=T)
## Variables processed: max_heart_rate, oldpeak

The following image illustrates the result. Please note that the variable name remains the same.

[image: data discretization, before and after]
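If the image does not render, the same result can be checked directly in the console:

# variable names are unchanged; values are now equal-frequency bins
head(heart_disease_discretized[, c("max_heart_rate", "oldpeak")])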

Notes:

Read more here.


equal_freq: Convert numeric variable to categoric

Converts a numeric vector into a factor using the equal-frequency criterion.

new_age=equal_freq(heart_disease$age, n_bins = 5)

# checking results
Hmisc::describe(new_age)
## new_age 
##        n  missing distinct 
##      303        0        5 
## 
## lowest : [29,46) [46,54) [54,59) [59,63) [63,77]
## highest: [29,46) [46,54) [54,59) [59,63) [63,77]
##                                                   
## Value      [29,46) [46,54) [54,59) [59,63) [63,77]
## Frequency       63      64      71      45      60
## Proportion   0.208   0.211   0.234   0.149   0.198

Read more here.


discretize_rgr: Variable discretization based on gain ratio maximization

This is a new method developed in funModeling to improve binning based on a binary target variable.

input=heart_disease$oldpeak
target=heart_disease$has_heart_disease

input2=discretize_rgr(input, target)

# checking:
summary(input2)
## [0.0,0.6) [0.6,1.0) [1.0,1.4) [1.4,1.9) [1.9,6.2] 
##       135        31        34        39        64

Adjust the maximum number of bins with max_n_bins (default: 5). Control the minimum sample size per bin with min_perc_bins (default: 0.1, i.e., 10%).
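For example, using the two parameters mentioned above (the values are illustrative):

# coarser binning: at most 3 bins, each holding at least 20% of the rows
input3=discretize_rgr(input, target, max_n_bins=3, min_perc_bins=0.2)
summary(input3)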

range01: Scales variable into the 0 to 1 range

Converts a numeric vector to the 0 to 1 range, mapping the minimum value to 0 and the maximum to 1.

oldpeak_scaled=range01(heart_disease$oldpeak)

# checking results
summary(oldpeak_scaled)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.0000  0.0000  0.1290  0.1677  0.2581  1.0000


Outliers data preparation

hampel_outlier and tukey_outlier: Get outlier thresholds

Both functions return a two-value vector with the thresholds beyond which values are considered outliers. They are used internally by prep_outliers.

Using Tukey's method:

tukey_outlier(heart_disease$resting_blood_pressure)
## bottom_threshold    top_threshold 
##               60              200

Read more here.


Using Hampel's method:

hampel_outlier(heart_disease$resting_blood_pressure)
## bottom_threshold    top_threshold 
##           85.522          174.478
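Either vector can be used directly to flag or count values falling outside the thresholds; a minimal sketch:

# count how many values fall outside Hampel's thresholds
thr=hampel_outlier(heart_disease$resting_blood_pressure)
sum(heart_disease$resting_blood_pressure < thr[1] | heart_disease$resting_blood_pressure > thr[2])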

Read more here.


prep_outliers: Prepare outliers in a data frame

Takes a data frame and returns the same data frame with the transformation applied to the variables specified in the input parameter. It also works with a single vector.

Example considering two variables as input:

# Get threshold according to Hampel's method
hampel_outlier(heart_disease$max_heart_rate)
## bottom_threshold    top_threshold 
##           86.283          219.717
# Apply function to stop outliers at the threshold values
data_prep=prep_outliers(data = heart_disease, input = c('max_heart_rate','resting_blood_pressure'), method = "hampel", type='stop')

Checking the before and after for variable max_heart_rate:

## [1] "Before transformation -> Min: 71; Max: 202"
## [1] "After transformation -> Min: 71; Max: 202"

The min value changed from 71 to 86.23, while the max value remained the same at 202.
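The check above can be reproduced with a couple of lines like these (a sketch, not the book's exact code):

# before: the original variable; after: the same variable in the prepared data frame
sprintf("Before transformation -> Min: %s; Max: %s", min(heart_disease$max_heart_rate), max(heart_disease$max_heart_rate))
sprintf("After transformation -> Min: %s; Max: %s", min(data_prep$max_heart_rate), max(data_prep$max_heart_rate))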

Notes:

Read more here.


Predictive model performance

gain_lift: Gain and lift performance curve

After computing the scores or probabilities for the class we want to predict, we pass them to the gain_lift function, which returns a data frame with performance metrics.

# Create machine learning model and get its scores for positive case 
fit_glm=glm(has_heart_disease ~ age + oldpeak, data=heart_disease, family = binomial)
heart_disease$score=predict(fit_glm, newdata=heart_disease, type='response')

# Calculate performance metrics
gain_lift(data=heart_disease, score='score', target='has_heart_disease')

[plot: cumulative gain and lift curves]

##    Population   Gain Lift Score.Point
## 1          10  20.86 2.09   0.8185793
## 2          20  35.97 1.80   0.6967124
## 3          30  48.92 1.63   0.5657817
## 4          40  61.15 1.53   0.4901940
## 5          50  69.06 1.38   0.4033640
## 6          60  78.42 1.31   0.3344170
## 7          70  87.77 1.25   0.2939878
## 8          80  92.09 1.15   0.2473671
## 9          90  96.40 1.07   0.1980453
## 10        100 100.00 1.00   0.1195511
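The metrics table can also be stored for later inspection; a sketch, assuming gain_lift returns the data frame it prints:

# keep the table and look up the gain at the top 30% of the population
gl=gain_lift(data=heart_disease, score='score', target='has_heart_disease')
gl[gl$Population == 30, ]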

Read more here.