Interpretable Machine Learning

Our lab focuses on developing interpretable algorithms, which more often than not means trading expressiveness for interpretability. A second benefit of building interpretable variants of algorithms is computational efficiency: the simpler variants run faster while keeping accuracy within user-supplied bounds. See, for example, this figure from our paper on making neural networks interpretable.
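The "user-supplied accuracy bound" idea can be sketched as a simple model-selection rule: prefer the interpretable model unless it falls more than a tolerance below a more expressive baseline. This is only an illustrative toy, not the method from the paper; the models (`one_rule`, `memorizer`) and the tolerance value are hypothetical.

```python
# Illustrative sketch: choose an interpretable model over an opaque one
# whenever its accuracy stays within a user-supplied bound `tol` of the
# opaque baseline. Names and data here are hypothetical, not from the paper.

def accuracy(model, data):
    """Fraction of (x, y) pairs the model predicts correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

# Toy 1-D dataset: label is 1 when x >= 5, with one noisy point.
data = [(x, int(x >= 5)) for x in range(10)]
data[3] = (3, 1)  # inject label noise

# "Expressive" model: memorizes the training set (perfect fit, opaque).
table = dict(data)
memorizer = lambda x: table[x]

# Interpretable model: a single human-readable threshold rule.
one_rule = lambda x: int(x >= 5)

tol = 0.15  # user-supplied accuracy bound
acc_complex = accuracy(memorizer, data)  # 1.0 on this toy data
acc_simple = accuracy(one_rule, data)    # 0.9 (misses the noisy point)

# Accept the interpretable model if it is within `tol` of the baseline.
chosen = one_rule if acc_complex - acc_simple <= tol else memorizer
print(f"simple={acc_simple:.2f} complex={acc_complex:.2f} -> "
      f"{'one_rule' if chosen is one_rule else 'memorizer'}")
```

Here the threshold rule gives up 0.10 accuracy on the noisy point, which is inside the 0.15 bound, so the interpretable model is selected.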