The AI Fairness 360 toolkit is an open-source library to help detect and mitigate bias in machine learning models. The AI Fairness 360 R package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
This package is currently under CRAN review.
You can install the development version from GitHub with:
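A minimal sketch of the GitHub install, assuming devtools is available; the repository subdirectory below (the R package living under aif360/aif360-r in the Trusted-AI/AIF360 repository) is an assumption, so adjust the path if the package has moved:

# install devtools if needed, then install the development version from GitHub
# (the repository subdirectory here is an assumption)
# install.packages("devtools")
devtools::install_github("Trusted-AI/AIF360/aif360/aif360-r")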
Then, use the install_aif360() function to install AIF360:
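For example, from a fresh R session (install_aif360() is the package's own setup helper; it installs the toolkit's underlying dependencies):

# load the package and install the AIF360 dependencies
library(aif360)
install_aif360()

After installation, an ordinary data frame can be formatted for use with the toolkit: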
# load a toy dataset
data <- data.frame("feature1" = c(0, 0, 1, 1, 1, 1, 0, 1, 1, 0),
                   "feature2" = c(0, 1, 0, 1, 1, 0, 0, 0, 0, 1),
                   "label"    = c(1, 0, 0, 1, 0, 0, 1, 0, 1, 1))

# format the dataset
formatted_dataset <- aif360::aif_dataset(data_path = data,
                                         favor_label = 0,
                                         unfavor_label = 1,
                                         unprivileged_protected_attribute = 0,
                                         privileged_protected_attribute = 1,
                                         target_column = "label",
                                         protected_attribute = "feature1")
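Once formatted, the dataset can be passed to the package's fairness metrics. The snippet below is an illustrative sketch only: it assumes a binary_label_dataset_metric() constructor that takes privileged and unprivileged group specifications as list(attribute, value) pairs and returns an object with methods such as mean_difference(); check the package documentation for the exact interface.

# illustrative sketch (assumed API): compute a dataset-level fairness metric
# define privileged / unprivileged groups on the protected attribute
privileged   <- list("feature1", 1)
unprivileged <- list("feature1", 0)

dm <- binary_label_dataset_metric(formatted_dataset,
                                  privileged_groups = privileged,
                                  unprivileged_groups = unprivileged)

# difference in the rate of favorable outcomes between the two groups
dm$mean_difference()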
If you’d like to contribute to the development of aif360, please read these guidelines.
Please note that the aif360 project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.