alibi-detect is an open source Python library focused on outlier, adversarial and concept drift detection. The package aims to cover both online and offline detectors for tabular data, images and time series. The outlier detection methods should allow the user to identify global, contextual and collective outliers.
alibi-detect can be installed from PyPI:
```bash
pip install alibi-detect
```
We will use the VAE outlier detector to illustrate the API.
```python
from alibi_detect.od import OutlierVAE
from alibi_detect.utils.saving import save_detector, load_detector

# initialize and fit detector
od = OutlierVAE(threshold=0.1, encoder_net=encoder_net, decoder_net=decoder_net, latent_dim=1024)
od.fit(X_train)

# make predictions
preds = od.predict(X_test)

# save and load detectors
filepath = './my_detector/'
save_detector(od, filepath)
od = load_detector(filepath)
```
The predictions are returned in a dictionary with keys `meta` and `data`. `meta` contains the detector's metadata while `data` is itself a dictionary with the actual predictions: the outlier, adversarial or drift scores as well as the predictions of whether instances are e.g. outliers or not. The exact details can vary slightly from method to method, so we encourage the reader to become familiar with the types of algorithms supported.
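As an illustration, the returned dictionary for an outlier detector might look like the sketch below; the key names and values here are hypothetical and will differ per detector:

```python
# Hedged sketch of the structure returned by a detector's predict method;
# the scores and inner keys below are made up, not output from a real detector.
preds = {
    'meta': {'name': 'OutlierVAE', 'detector_type': 'offline', 'data_type': None},
    'data': {
        'instance_score': [0.23, 0.05],  # outlier score per instance
        'is_outlier': [1, 0],            # 1 if the score exceeds the threshold
    },
}

# the top-level keys are always 'meta' and 'data'
assert set(preds) == {'meta', 'data'}
```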
The save and load functionality for the Prophet time series outlier detector is currently experiencing issues in Python 3.6 but works in Python 3.7.
The following tables show the advised use cases for each algorithm. The column Feature Level indicates whether the detection can be done at the feature level, e.g. per pixel for an image. Check the algorithm reference list for more information with links to the documentation and original papers as well as examples for each of the detectors.
**Outlier Detection**

Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Isolation Forest | ✔ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ |
Mahalanobis Distance | ✔ | ✘ | ✘ | ✘ | ✔ | ✔ | ✘ |
AE | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✔ |
VAE | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✔ |
AEGMM | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ |
VAEGMM | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ |
Likelihood Ratios | ✔ | ✔ | ✔ | ✘ | ✔ | ✘ | ✔ |
Prophet | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ |
Spectral Residual | ✘ | ✘ | ✔ | ✘ | ✘ | ✔ | ✔ |
Seq2Seq | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✔ |
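To make one of the simpler detectors in the table concrete, the sketch below scores points by Mahalanobis distance using plain NumPy. It only illustrates the scoring idea; it is not the alibi-detect implementation, and the reference data and test points are made up:

```python
import numpy as np

# "fit": estimate mean and inverse covariance on toy reference data
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

def mahalanobis(x):
    """Distance of a point x from the reference distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# a point near the mean scores low, a distant point scores high
near = mahalanobis(np.zeros(2))
far = mahalanobis(np.array([5.0, 5.0]))
assert near < far
```

An outlier decision then amounts to comparing the score against a threshold, as in the `threshold` argument of the detectors above.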
**Adversarial Detection**

Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Adversarial AE | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ |
**Drift Detection**

Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Kolmogorov-Smirnov | ✔ | ✔ | ✘ | ✘ | ✔ | ✔ | ✔ |
Maximum Mean Discrepancy | ✔ | ✔ | ✘ | ✘ | ✔ | ✘ | ✘ |
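As an illustration of the idea behind the Kolmogorov-Smirnov drift detector in the table above, the following sketch computes the two-sample KS statistic from scratch. This is only the bare statistic on one feature, not the library's detector:

```python
import bisect

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the largest absolute
    difference between the empirical CDFs of x and y."""
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    for v in sorted(set(xs) | set(ys)):
        f_x = bisect.bisect_right(xs, v) / len(xs)
        f_y = bisect.bisect_right(ys, v) / len(ys)
        d = max(d, abs(f_x - f_y))
    return d

# identical samples show no drift; disjoint samples show maximal drift
assert ks_statistic([1, 2, 3], [1, 2, 3]) == 0.0
assert ks_statistic([0, 1, 2], [10, 11, 12]) == 1.0
```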
**Algorithm references**

- Isolation Forest (F. T. Liu et al., 2008)
- Mahalanobis Distance (Mahalanobis, 1936)
- Variational Auto-Encoder (VAE) (Kingma et al., 2013)
- Auto-Encoding Gaussian Mixture Model (AEGMM) (Zong et al., 2018)
- Variational Auto-Encoding Gaussian Mixture Model (VAEGMM)
- Likelihood Ratios (Ren et al., 2019)
- Prophet Time Series Outlier Detector (Taylor et al., 2018)
- Spectral Residual Time Series Outlier Detector (Ren et al., 2019)
- Sequence-to-Sequence (Seq2Seq) Outlier Detector (Sutskever et al., 2014; Park et al., 2017)
- Maximum Mean Discrepancy (Gretton et al., 2012)
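The Maximum Mean Discrepancy statistic (Gretton et al., 2012) referenced above can be sketched in a few lines of NumPy. This is a toy (biased) estimator with a fixed RBF bandwidth on made-up data, not the library's detector:

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """RBF kernel matrix between the rows of a and the rows of b."""
    sq_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
ref = rng.normal(size=(100, 2))
same = mmd2(ref, rng.normal(size=(100, 2)))            # same distribution
drift = mmd2(ref, rng.normal(loc=3.0, size=(100, 2)))  # shifted distribution
assert drift > same
```

A drift detector built on this statistic would compare it (or a permutation-test p-value derived from it) against a threshold.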
The package also contains functionality in `alibi_detect.datasets` to easily fetch a number of datasets for different modalities. For each dataset, either the data and labels or a Bunch object with the data, labels and optional metadata is returned. Example:
```python
from alibi_detect.datasets import fetch_ecg

(X_train, y_train), (X_test, y_test) = fetch_ecg(return_X_y=True)
```
Genome Dataset: `fetch_genome`

```python
from alibi_detect.datasets import fetch_genome

(X_train, y_train), (X_val, y_val), (X_test, y_test) = fetch_genome(return_X_y=True)
```
ECG 5000: `fetch_ecg`

NAB: `fetch_nab`. The list of available time series can be retrieved with `alibi_detect.datasets.get_list_nab()`.
CIFAR-10-C: `fetch_cifar10c`

`fetch_cifar10c` allows you to pick any severity level or corruption type. The list of available corruption types can be retrieved with `alibi_detect.datasets.corruption_types_cifar10c()`. The dataset can be used in research on robustness and drift. The original data can be found here. Example:

```python
from alibi_detect.datasets import fetch_cifar10c

corruption = ['gaussian_noise', 'motion_blur', 'brightness', 'pixelate']
X, y = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
```
Adversarial CIFAR-10: `fetch_attack`

```python
from alibi_detect.datasets import fetch_attack

(X_train, y_train), (X_test, y_test) = fetch_attack('cifar10', 'resnet56', 'cw', return_X_y=True)
```
KDD Cup '99: `fetch_kdd`

`fetch_kdd` allows you to select a subset of network intrusions as targets or pick only specified features. The original data can be found here.

Models and/or building blocks that can be useful outside of outlier, adversarial or drift detection can be found under `alibi_detect.models`. Main implementations:
- PixelCNN++: `alibi_detect.models.pixelcnn.PixelCNN`
- Variational Autoencoder: `alibi_detect.models.autoencoder.VAE`
- Sequence-to-sequence model: `alibi_detect.models.autoencoder.Seq2Seq`
- ResNet: `alibi_detect.models.resnet`
Pre-trained models can be fetched with `fetch_tf_model`:

```python
from alibi_detect.utils.fetching import fetch_tf_model

model = fetch_tf_model('cifar10', 'resnet32')
```
The integrations folder contains various wrapper tools to allow the alibi-detect algorithms to be used in production machine learning systems with examples on how to deploy outlier and adversarial detectors with KFServing.
The main dependencies are:

```
creme
fbprophet
holidays
matplotlib
numpy
pandas
opencv-python
Pillow
scipy
scikit-image
scikit-learn
tensorflow>=2.0.0
tensorflow_probability>=0.8
```
If you use alibi-detect in your research, please consider citing it.
BibTeX entry:
```bibtex
@software{alibi-detect,
  title = {{Alibi-Detect}: Algorithms for outlier and adversarial instance detection, concept drift and metrics.},
  author = {Van Looveren, Arnaud and Vacanti, Giovanni and Klaise, Janis and Coca, Alexandru},
  url = {https://github.com/SeldonIO/alibi-detect},
  version = {0.4.1},
  date = {2020-05-12},
}
```