Abstract
Artificial neural network algorithms give adequate or even excellent results on many computational problems. Such algorithms can be embedded in special-purpose hardware for efficient implementation. Within a particular hardware class, an algorithm can be implemented either as an analogue neural network or as a digital representation of the same problem. The speed, area and required precision of the two forms of hardware for representing the same problem are discussed for a hardware model intermediate between VLSI circuitry and biological neurons. It is usually true that the digital representation computes faster, requires more devices and resources, and requires less precision of manufacture. An exception to this rule occurs when the device physics generates a function which is explicitly needed in the algorithm. Major advances in analogue neural net hardware will require the exploitation of device physics available at the level of materials and transistors.
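
The trade-off summarised above can be made concrete with a small numerical sketch. The model below is purely illustrative and not the hardware model analysed in the paper: a single neuron computes a saturating nonlinearity of a weighted sum, the "analogue" version gets the nonlinearity for free from an assumed tanh-like device characteristic but suffers fabrication mismatch in its weights (precision of manufacture), while the "digital" version is limited only by the number of bits spent on each weight (more devices buy more precision). The mismatch level and bit width are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single neuron y = f(sum_j w_j x_j), f a saturating nonlinearity.
w = rng.normal(size=16)   # nominal weights
x = rng.normal(size=16)   # input vector

def analogue_neuron(w, x, mismatch=0.05):
    """Analogue sketch: the tanh-like transfer function is assumed to arise
    directly from device physics, but each weight (e.g. a conductance) carries
    fabrication mismatch, so manufacturing precision limits accuracy."""
    w_actual = w * (1 + mismatch * rng.normal(size=w.shape))
    return np.tanh(w_actual @ x)

def digital_neuron(w, x, bits=8):
    """Digital sketch: weights are exact up to quantisation, so spending more
    bits (more devices and area) buys precision; the nonlinearity must be
    computed explicitly rather than supplied by the device physics."""
    scale = np.max(np.abs(w))
    levels = 2 ** (bits - 1) - 1
    q = np.round(w / scale * levels) / levels * scale
    return np.tanh(q @ x)

exact = np.tanh(w @ x)
print("exact      :", exact)
print("analogue   :", analogue_neuron(w, x))   # error from component mismatch
print("digital 8b :", digital_neuron(w, x))    # error from quantisation only
```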
