Connectionistic models of Boolean category representation

Abstract
Several distinct connectionistic/neural representations capable of computing arbitrary Boolean functions are described and discussed in terms of possible tradeoffs among time, space, and expressive clarity. It is suggested that the ability of a threshold logic unit (TLU) to represent prototypical groupings offers significant advantages for representing real-world categories. Upper and lower bounds on the number of nodes needed for Boolean completeness are demonstrated. The necessary number of nodes is shown to increase exponentially with the number of input features, with the exact rate of increase depending on the representation scheme. In addition, in non-recurrent networks, connection weights are shown to grow exponentially as the number of nodes is reduced linearly below approximately 2^d, where d is the number of input features. This result suggests that optimal memory efficiency may require unacceptable learning time. Finally, two possible extensions for handling non-Boolean values are considered.
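
The construction alluded to in the abstract can be illustrated concretely. The following sketch (written in Python purely for illustration; the language, function names, and encoding are assumptions, not part of the paper) builds a two-layer network of threshold logic units that computes an arbitrary Boolean function of d features by devoting one hidden TLU to each input pattern on which the function is true. This corresponds to the roughly 2^d-node, small-weight regime against which the node/weight tradeoff described above can be read.

    # Illustrative sketch only: a threshold logic unit (TLU) and a two-layer
    # TLU network that computes an arbitrary Boolean function of d inputs by
    # dedicating one hidden node to each input pattern on which the function
    # is true (roughly 2^d nodes in the worst case, all weights of size 1).
    from itertools import product

    def tlu(weights, threshold, inputs):
        """Single TLU: outputs 1 iff the weighted sum of its Boolean inputs
        meets or exceeds the threshold."""
        return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

    def boolean_net(truth_table):
        """Build a two-layer TLU network for an arbitrary Boolean function
        given as a truth table mapping each d-bit input tuple to 0 or 1."""
        # One hidden TLU per true input pattern: weight +1 on features that
        # are 1, -1 on features that are 0, threshold equal to the number of
        # 1-features, so the unit fires on exactly that pattern and no other.
        hidden = []
        for pattern, value in truth_table.items():
            if value == 1:
                weights = [1 if bit == 1 else -1 for bit in pattern]
                hidden.append((weights, sum(pattern)))

        def net(inputs):
            # The output TLU is an OR over the hidden units.
            hidden_outputs = [tlu(w, t, inputs) for w, t in hidden]
            if not hidden_outputs:
                return 0  # function is identically false
            return tlu([1] * len(hidden_outputs), 1, hidden_outputs)

        return net

    # Example: 3-input parity, a function no single TLU can compute.
    d = 3
    parity_table = {bits: sum(bits) % 2 for bits in product((0, 1), repeat=d)}
    parity_net = boolean_net(parity_table)
    assert all(parity_net(bits) == parity_table[bits]
               for bits in product((0, 1), repeat=d))

Run as a script, the final assertion checks the constructed network against the full truth table of 3-input parity. Reducing the hidden layer below this one-node-per-pattern scheme is where, per the abstract, the required connection weights begin to grow exponentially.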