Abstract
Summary form only given. Neural models of computation are defined in terms of large numbers of interconnected neuron-like units. These models have been implemented on various parallel processors, typically with relatively coarse-grained parallelism at the level of neurons or groups of neurons. The authors present a novel algorithm that exploits parallelism at the synaptic level on fine-grained, mesh-connected systolic arrays. The resulting system performs extremely well, sustaining 300 million connections per second during generalized delta rule learning for a multilayered neural network.
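For context, the generalized delta rule named in the abstract is the standard backpropagation weight update, in which each synapse performs a multiply-accumulate per pattern; "connections per second" counts these per-synapse operations, which are what the authors' systolic-array algorithm parallelizes. The following is a minimal sketch of one such update for a two-layer network; the network sizes, learning rate, and variable names are illustrative assumptions, not details from the paper, and the sequential NumPy form stands in for the fine-grained array computation.

```python
# Minimal sketch of one generalized delta rule (backpropagation) step.
# All dimensions and names are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output weights
eta = 0.5                                        # learning rate (assumed)

x = rng.normal(size=n_in)       # one input pattern
t = np.array([0.0, 1.0])        # its target output

# Forward pass: every weight contributes one multiply-accumulate,
# i.e. one "connection" in the connections-per-second metric.
h = sigmoid(W1 @ x)
y = sigmoid(W2 @ h)

# Backward pass: delta terms per the generalized delta rule,
# using the sigmoid derivative f'(net) = f(net) * (1 - f(net)).
delta_out = (t - y) * y * (1.0 - y)              # output-layer deltas
delta_hid = (W2.T @ delta_out) * h * (1.0 - h)   # hidden-layer deltas

# Synapse-level updates: delta_w[i, j] = eta * delta_i * activation_j.
# These independent per-weight updates are what map naturally onto a
# mesh-connected systolic array, one multiply-accumulate per cell.
W2 += eta * np.outer(delta_out, h)
W1 += eta * np.outer(delta_hid, x)
```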
