Straightforward implementations of interpretable ML models + demos of how to use various interpretability techniques. Code is optimized for readability. Pull requests very welcome!
Docs • Implementations of imodels • Demo notebooks
Scikit-learn-style wrappers/implementations of different interpretable models. The models can easily be installed and used:
```
pip install git+https://github.com/csinva/imodels
```
(see here for more help)
```python
from imodels import RuleListClassifier, GreedyRuleListClassifier, SkopeRulesClassifier, IRFClassifier
from imodels import SLIMRegressor, RuleFitRegressor

model = RuleListClassifier()  # initialize Bayesian Rule List
model.fit(X_train, y_train)  # fit model
preds = model.predict(X_test)  # discrete predictions: shape is (n_test,)
preds_proba = model.predict_proba(X_test)  # predicted probabilities: shape is (n_test, n_classes)
```
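The regressors follow the same scikit-learn API (fit/predict). Here is a quick self-contained sketch using RuleFitRegressor; the synthetic data is purely illustrative, not from the docs:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from imodels import RuleFitRegressor

# illustrative synthetic data: any numeric X / y works here
X = np.random.randn(200, 5)
y = X[:, 0] + 2 * (X[:, 1] > 0) + 0.1 * np.random.randn(200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RuleFitRegressor()  # rules from trees + a sparse linear model over them
model.fit(X_train, y_train)
preds = model.predict(X_test)  # continuous predictions: shape is (n_test,)
```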
- bayesian rule list (docs, ref implementation, paper) - learns a compact rule list
- rulefit (docs, ref implementation, paper) - extracts rules from a tree ensemble and builds a sparse linear model with them
- skope-rules (docs, ref implementation, paper) - extracts rules from base estimators (e.g. decision trees), then tries to deduplicate them
- sparse integer linear model (docs, cvxpy implementation, paper)
- greedy rule list (docs, ref implementation) - uses CART to learn a list (only a single path) rather than a full decision tree (a toy sketch of this idea appears after this list)
- (in progress) iterative random forest (docs, ref implementation, paper)
- (in progress) optimal classification tree (docs, ref implementation, paper) - learns succinct trees using global optimization rather than greedy heuristics
- (coming soon) rule ensembles - e.g. SLIPPER, Lightweight Rule Induction, MLRules
- (coming soon) gams (generalized additive models)
- (coming soon) symbolic regression
- see readmes in individual folders within imodels for details
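To make the greedy rule-list idea above concrete, here is a toy from-scratch sketch: repeatedly pick the single threshold rule whose captured group is purest, predict that group's majority class, and recurse on the remaining samples. This is only an illustration of the general approach, not the imodels implementation:

```python
import numpy as np

def toy_greedy_rule_list(X, y, max_rules=3):
    """Toy greedy rule list for binary labels in {0, 1} (illustrative only)."""
    rules, remaining = [], np.ones(len(y), dtype=bool)
    for _ in range(max_rules):
        if remaining.sum() < 2 or len(np.unique(y[remaining])) < 2:
            break
        best = None  # (impurity of captured group, feature, threshold)
        Xr, yr = X[remaining], y[remaining]
        for j in range(X.shape[1]):
            for t in np.unique(Xr[:, j]):
                captured = Xr[:, j] <= t
                if 0 < captured.sum() < len(yr):
                    p = yr[captured].mean()
                    impurity = p * (1 - p)  # 0 when the captured group is pure
                    if best is None or impurity < best[0]:
                        best = (impurity, j, t)
        if best is None:
            break
        _, j, t = best
        captured = remaining & (X[:, j] <= t)
        rules.append((j, t, int(round(y[captured].mean()))))  # majority class
        remaining &= X[:, j] > t  # recurse only on samples the rule missed
    default = int(round(y[remaining].mean())) if remaining.any() else 0
    return rules, default  # rules: list of (feature, threshold, predicted class)
```

A fitted list reads top to bottom: if feature j <= t, predict c; otherwise fall through to the next rule, ending at the default class. This single-path structure is what distinguishes rule lists from full decision trees.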
The demos are contained in 3 main notebooks, following this cheat-sheet:
- model_based.ipynb - how to use different interpretable models and examples with the imodels package
- see an example of using this package to derive a clinical decision rule in this notebook
After fitting models, we can also do posthoc analysis as shown in these two notebooks:
- posthoc.ipynb - different simple analyses to interpret a trained model
- uncertainty.ipynb - code to get uncertainty estimates for a model
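As one generic example of the kind of posthoc analysis covered there, here is a sketch using scikit-learn's permutation importance. It reuses model, X_test, and y_test from the example above and is not code from the notebooks:

```python
from sklearn.inspection import permutation_importance

# shuffle one feature at a time and measure the drop in test-set score;
# features whose shuffling hurts most matter most to the fitted model
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```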
- Readings
- Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (molnar 2019, pdf) - book on interpretable ML
- Interpretable machine learning: definitions, methods, and applications (murdoch et al. 2019, pdf) - good quick review on interpretable ML
- Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead (rudin 2019, pdf) - good explanation of why one should use interpretable models
- Review on evaluating interpretability (doshi-velez & kim 2017, pdf)
- Reference implementations (also linked above): the code here heavily derives from (and in some cases is just a wrapper for) the wonderful work of previous projects. We seek to extract, combine, and maintain select relevant parts of these projects.
- sklearn-expertsys - by @tmadl and others based on original code by Ben Letham
- rulefit - by @christophM
- skope-rules - by the skope-rules team
For updates, star the repo, see this related repo, or follow @chandan_singh96. Feel free to cite the package using the BibTeX below, but more importantly, make sure to credit the authors of the original methods and base implementations:
```
@software{singh2020,
  title = {imodels python package for interpretable modeling},
  publisher = {Zenodo},
  year = {2020},
  author = {Chandan Singh},
  version = {v0.2.2},
  doi = {10.5281/zenodo.4026887},
  url = {https://doi.org/10.5281/zenodo.4026887}
}
```