Extracting Rules from Neural Networks by Pruning and Hidden-Unit Splitting
- 1 January 1997
- journal article
- Published by MIT Press in Neural Computation
- Vol. 9 (1), 205-225
- https://doi.org/10.1162/neco.1997.9.1.205
Abstract
An algorithm for extracting rules from a standard three-layer feedforward neural network is proposed. The trained network is first pruned, not only to remove redundant connections but, more importantly, to detect the relevant inputs. The algorithm then generates rules from the pruned network by considering only a small number of activation values at the hidden units. If the number of inputs connected to a hidden unit is sufficiently small, rules that describe how each of its activation values is obtained can be readily generated. Otherwise, the hidden unit is split and treated as a set of output units, with each output unit corresponding to one activation value. A hidden layer is inserted and a new subnetwork is formed, trained, and pruned. This process is repeated until every hidden unit in the network has a relatively small number of input units connected to it. Examples of how the proposed algorithm works are shown using real-world data from molecular biology and signal processing. Our results show that for these complex problems, the algorithm can extract reasonably compact rule sets with high predictive accuracy.
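The rule-generation step for a hidden unit with few remaining inputs can be sketched as follows: enumerate all binary input combinations, compute the unit's activation, and snap each activation to one of a small set of discrete levels; each group of input combinations sharing a discrete level then forms one rule. This is a minimal illustrative sketch, not the authors' implementation; the weights, bias, and activation levels below are hypothetical.

```python
import itertools
import math

def discretize(value, levels):
    # Snap an activation to the nearest of a small set of representative values.
    return min(levels, key=lambda v: abs(v - value))

def rules_for_hidden_unit(weights, bias, levels):
    """Enumerate all binary input combinations for a pruned hidden unit
    and group them by their discretized tanh activation.
    Returns {discrete_activation: [input tuples]}; each group is a rule."""
    rules = {}
    for inputs in itertools.product([0, 1], repeat=len(weights)):
        act = math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)
        rules.setdefault(discretize(act, levels), []).append(inputs)
    return rules

# Hypothetical pruned unit with three surviving inputs and three activation levels.
rules = rules_for_hidden_unit([2.0, -1.5, 0.8], -0.4, levels=[-1.0, 0.0, 1.0])
for level, combos in sorted(rules.items()):
    print(f"activation ~ {level}: {combos}")
```

Enumerating input combinations is only feasible when the pruned unit has few inputs, which is exactly why the paper splits heavily connected hidden units into a new subnetwork before extracting rules.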