Discovering Classification Rules for Interpretable Learning with Linear Programming

M. Hakan Akyüz*, Ş. İlker Birbil

*Corresponding author for this work

Research output: Working paper › Preprint › Academic

Abstract

Rules are if-then statements, each comprising one or more conditions, that classify a subset of the samples in a dataset. In many applications, such classification rules are considered interpretable by decision makers. We introduce two new algorithms for interpretability and learning. Both algorithms take advantage of linear programming and are therefore scalable to large datasets. The first algorithm extracts rules for interpreting trained models that are based on tree/rule ensembles. The second algorithm generates a set of classification rules through a column generation approach. The proposed algorithms return a set of rules along with their optimal weights, which indicate the importance of each rule for classification. Moreover, our algorithms allow assigning cost coefficients that relate to different attributes of the rules, such as rule length, estimator weight, or number of false negatives. Decision makers can therefore adjust these coefficients to steer the training process and obtain a set of rules better suited to their needs. We have tested the performance of both algorithms on a collection of datasets and present a case study to elaborate on optimal rule weights. Our results show that the proposed algorithms achieve a good compromise between interpretability and accuracy.
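As a rough illustration of the idea (not the paper's exact formulation), LP-based rule weighting can be written as

\[
\min_{w \ge 0,\; v \ge 0} \;\; \sum_{j} c_j w_j + \lambda \sum_{i} v_i
\quad \text{s.t.} \quad \sum_{j} a_{ij}\, w_j + v_i \ge 1 \;\; \text{for all samples } i,
\]

where \(a_{ij} = 1\) if rule \(j\) correctly classifies sample \(i\) (and 0 otherwise), \(c_j\) is a user-chosen cost coefficient (e.g., rule length), and \(v_i\) is a slack variable that penalizes misclassified samples. Below is a minimal sketch under these assumptions; the function and variable names are illustrative, not taken from the paper.

```python
# Sketch of the rule-weighting LP and a column generation pricing check,
# assuming a binary coverage matrix A (A[i, j] = 1 if rule j correctly
# classifies sample i) and rule costs c. Illustrative only.
import numpy as np
from scipy.optimize import linprog

def solve_restricted_master(A, c, lam=1.0):
    """min c^T w + lam * 1^T v  s.t.  A w + v >= 1,  w >= 0, v >= 0."""
    n, k = A.shape
    obj = np.concatenate([c, lam * np.ones(n)])
    # linprog expects A_ub x <= b_ub, so negate the >= coverage rows.
    A_ub = -np.hstack([A, np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    w, v = res.x[:k], res.x[k:]
    duals = -res.ineqlin.marginals  # duals of the original >= constraints
    return w, v, duals

def reduced_cost(a_new, c_new, duals):
    """A candidate rule (column a_new with cost c_new) can improve the
    restricted master LP only if its reduced cost is negative."""
    return c_new - duals @ a_new
```

In this reading, candidate columns either come from rules harvested from a trained tree/rule ensemble (in the spirit of the first algorithm) or are generated on the fly by a pricing subproblem (in the spirit of the second); rules with negative reduced cost are added and the master LP is re-solved until no such rule remains.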
Original language: English
Publisher: arXiv
Publication status: Published - 21 Apr 2021

Publication series

Series: arXiv preprint arXiv:2104.10751
