Abstract
Most models of memory proposed so far use symmetric synapses. We show that this assumption is not necessary for a neural network to exhibit memory. We present an analytical derivation of memory capacities which does not appeal to the replica technique, relying instead on a more transparent and straightforward mean-field approximation. The memorization efficiency depends on four learning parameters which, where applicable, can be related to data provided by experiments carried out on real synapses. Finally, we show that the learning rules observed so far are fully compatible with these memorization capacities.
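The abstract only summarizes the approach, but the flavor of a mean-field capacity calculation for an asymmetric network can be conveyed with a standard toy case. The sketch below is not this paper's derivation: it iterates the well-known overlap map m_{t+1} = erf(m_t / sqrt(2 alpha)) for a highly diluted network with asymmetric synapses (the Derrida-Gardner-Zippelius limit), whose nonzero retrieval fixed point disappears at the critical load alpha_c = 2/pi. All function names and parameter values are illustrative.

```python
import math

def overlap_map(m: float, alpha: float) -> float:
    """One mean-field update of the retrieval overlap m for a highly
    diluted asymmetric network at load alpha (stored patterns per
    connection): m_{t+1} = erf(m_t / sqrt(2 * alpha))."""
    return math.erf(m / math.sqrt(2.0 * alpha))

def final_overlap(alpha: float, m0: float = 1.0, steps: int = 5000) -> float:
    """Iterate the map from an initial overlap m0 until (near) convergence."""
    m = m0
    for _ in range(steps):
        m = overlap_map(m, alpha)
    return m

# Scan the storage load: retrieval survives (nonzero fixed point) up to
# a critical load, analytically alpha_c = 2/pi ~ 0.637 for this model.
alpha = 0.50
while final_overlap(alpha) > 1e-2:  # nonzero overlap => retrieval works
    alpha += 0.001

print(f"numerical capacity ~ {alpha:.3f}  (analytic 2/pi = {2 / math.pi:.3f})")
```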