Attention Regularization
Bootstrap-based regularization method that filters noisy attention scores to produce more interpretable explanations for vision transformers (ViTs).
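A minimal sketch of the bootstrap idea, not the package's implementation: assuming per-head attention scores over a set of tokens, resample the heads, form a confidence bound on each token's mean attention, and suppress tokens whose bound does not clear the uniform baseline. Function and parameter names here are illustrative assumptions.

```python
import numpy as np

def bootstrap_filter_attention(scores, n_boot=1000, alpha=0.05, seed=0):
    # scores: (n_heads, n_tokens) attention given to each token by each head.
    rng = np.random.default_rng(seed)
    n_heads, n_tokens = scores.shape
    baseline = 1.0 / n_tokens                  # uniform-attention baseline
    boot_means = np.empty((n_boot, n_tokens))
    for b in range(n_boot):
        idx = rng.integers(0, n_heads, size=n_heads)  # resample heads
        boot_means[b] = scores[idx].mean(axis=0)
    lower = np.quantile(boot_means, alpha / 2, axis=0)  # lower CI bound
    filtered = scores.mean(axis=0)
    filtered[lower <= baseline] = 0.0          # zero out noisy tokens
    return filtered
```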
Class-Discriminative Attention Maps (CDAM)
Explainable AI method for vision transformers (ViTs) that estimates importance scores of input features with respect to a class or concept.
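As a rough illustration of a class-discriminative map, under the simplifying assumption of a linear classification head on top of token features: each token's importance can be taken as the dot product of its features with the gradient of the class score, which for a linear head is just the class weight vector. This is a sketch of the general idea, not the method's actual implementation.

```python
import numpy as np

def cdam_like_scores(tokens, class_weights):
    # tokens: (n_tokens, d) feature vectors; class_weights: (d,) linear head.
    # d(score)/d(token) for a linear head is the weight vector itself,
    # so the gradient-times-activation score reduces to a dot product.
    grad = class_weights
    return tokens @ grad  # one importance score per token
```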
Concept Saliency Maps (CSM)
Evaluate and visualize latent representations of high-level concepts in generative models, such as variational autoencoders (VAEs).
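One way to sketch a concept saliency map, assuming an encoder function and a concept vector in latent space: take the gradient of the latent-concept similarity with respect to the input, here approximated with finite differences. The names `encode` and `concept_vec` are assumptions for illustration, not the method's API.

```python
import numpy as np

def concept_saliency(x, encode, concept_vec, eps=1e-4):
    # Concept score: dot product of the latent code with a concept vector.
    # Saliency: numerical gradient of that score w.r.t. each input feature.
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    base = encode(x) @ concept_vec
    for i in range(x.size):
        xp = x.copy()
        xp.flat[i] += eps
        grad.flat[i] = (encode(xp) @ concept_vec - base) / eps
    return grad
```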
Feature Perturbation Augmentation (FPA)
Data augmentation technique for training deep learning models that reduces perturbation artifacts in the downstream evaluation of explainable AI methods.
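A hedged sketch of perturbation-style augmentation (parameter names are illustrative): during training, a random fraction of each sample's features is replaced with a fill value, so the model already encounters the occlusion-like inputs later used by perturbation-based XAI evaluations.

```python
import numpy as np

def feature_perturbation_augment(batch, frac=0.3, fill=0.0, seed=None):
    # Replace roughly `frac` of all features with `fill`, independently
    # per element, leaving the rest of the batch untouched.
    rng = np.random.default_rng(seed)
    mask = rng.random(batch.shape) < frac
    out = batch.copy()
    out[mask] = fill
    return out
```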
Jaccard - Similarity tests for binary data
Statistical tests of similarity between binary data using the Jaccard/Tanimoto coefficient – the ratio of intersection to union.
- R Package (Stable) on CRAN
- R Package (Dev) on GitHub
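The Jaccard/Tanimoto coefficient itself is simple to compute; the sketch below pairs it with a naive permutation test for illustration only (the R package implements more rigorous statistical tests, which this does not reproduce):

```python
import numpy as np

def jaccard_coef(x, y):
    # |intersection| / |union| of the 1s in two binary vectors.
    x, y = np.asarray(x, bool), np.asarray(y, bool)
    union = np.logical_or(x, y).sum()
    return np.logical_and(x, y).sum() / union if union else 0.0

def jaccard_perm_test(x, y, n_perm=2000, seed=0):
    # Null distribution from shuffling one vector; returns (coef, p-value).
    rng = np.random.default_rng(seed)
    observed = jaccard_coef(x, y)
    null = [jaccard_coef(x, rng.permutation(np.asarray(y)))
            for _ in range(n_perm)]
    p = (1 + sum(j >= observed for j in null)) / (n_perm + 1)
    return observed, p
```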
Jackstraw - Statistical Inference for Unsupervised Learning
Statistical methods to evaluate associations between observed variables and their estimated latent variables. Latent variables may be estimated by principal component analysis (PCA), logistic factor analysis (LFA), clustering, and related techniques.
- R Package (Stable) on CRAN
- R Package (Dev) on GitHub
Tutorials
Association test with Principal Components
Statistical test of cluster memberships with the mtcars example
Unsupervised evaluation of cell identities in single cell genomics
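A minimal sketch of the jackstraw idea for the first principal component, under simplifying assumptions (absolute correlation as the association statistic; the R package is far more general): permute a few variables, re-estimate the PC, and use the permuted variables' statistics as an empirical null for the observed ones.

```python
import numpy as np

def jackstraw_pvalues(X, s=10, B=100, seed=0):
    # X: (n_samples, n_variables). Returns one empirical p-value per variable.
    rng = np.random.default_rng(seed)

    def pc1(M):
        u, _, _ = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
        return u[:, 0]

    def assoc(M, pc):
        # Association statistic: |correlation| of each variable with the PC.
        return np.array([abs(np.corrcoef(M[:, j], pc)[0, 1])
                         for j in range(M.shape[1])])

    obs = assoc(X, pc1(X))
    null = []
    for _ in range(B):
        Xb = X.copy()
        cols = rng.choice(X.shape[1], size=s, replace=False)
        for j in cols:                       # break association for s variables
            Xb[:, j] = rng.permutation(Xb[:, j])
        null.extend(assoc(Xb, pc1(Xb))[cols])  # null stats from permuted vars
    null = np.array(null)
    return np.array([(1 + (null >= t).sum()) / (1 + null.size) for t in obs])
```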
Jaws - Jackstraw weighted shrinkage estimation
Jackstraw weighted shrinkage estimation for high-dimensional latent variable models. The jackstraw is used to estimate sparse loadings (i.e., coefficients) from principal component analysis (PCA), logistic factor analysis (LFA), and related techniques.
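As a loose illustration of shrinkage guided by jackstraw significance (a crude surrogate, not the jaws methodology): each loading is scaled by an estimated probability of true association, here approximated as one minus the p-value, so loadings of unassociated variables shrink toward zero and the result is sparse.

```python
import numpy as np

def shrink_loadings(loadings, pvalues, pi0=1.0):
    # Weight = clipped (1 - pi0 * p): near 1 for significant variables,
    # near 0 for null variables. pi0 is an assumed null-proportion estimate.
    weights = np.clip(1.0 - pi0 * np.asarray(pvalues, dtype=float), 0.0, 1.0)
    return np.asarray(loadings, dtype=float) * weights
```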
Obz AI
Obz AI brings explainability, continuous monitoring, and advanced outlier detection to AI-powered computer vision systems. With support for modern XAI methods, Obz AI enables ML engineers and scientists to ensure transparency, reliability, and trustworthiness in their vision models.