Preprocessing of vectors before storage on associative memory

 

          The use of a chaotic transformation makes it possible to store patterns that lie very close together, and to preprocess binary images quickly, without loss of information, before storing them in an associative memory.

           Let Xl be the original vector and Xk the same vector modified by a chaotic transformation.

cos a = (Xl . Xk) / ( ||Xl|| . ||Xk|| )

           The denominator does not change (the transformation preserves the norm of the vectors); only the scalar product in the numerator is modified.

           We apply the transformation to a binary (+1/-1) vector of 120 components representing a figure. The following graph shows the evolution of cos a as a function of the parameter b. The binary vector Xl representing the figure "3" is transformed into different vectors Xk. The value b = 1 transforms Xl into an identical vector.
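The exact chaotic map used in the original work is not specified here. As a minimal sketch, assume the transformation is a permutation of the components driven by a logistic-map sequence seeded with the key b (the map, the seeding rule, and the special-casing of b = 1 are all illustrative assumptions). A permutation preserves the norm, so only the scalar product with the original vector changes:

```python
import numpy as np

def chaotic_permutation(x, b, r=3.99):
    """Permute the components of x using a logistic-map sequence
    seeded by the integer key b.  A permutation preserves the norm
    of x but changes its scalar product with the original vector.
    NOTE: this map is an illustrative stand-in, not the original one."""
    if b == 1:                       # modeling choice: b = 1 is the identity
        return x.copy()
    n = len(x)
    s = (b % 97) / 97.0 or 0.5       # map the integer key into (0, 1)
    seq = np.empty(n)
    for i in range(n):
        s = r * s * (1.0 - s)        # logistic map iteration
        seq[i] = s
    order = np.argsort(seq)          # chaotic sequence -> permutation
    return x[order]

rng = np.random.default_rng(0)
xl = rng.choice([-1.0, 1.0], size=120)   # binary +1/-1 vector ("figure")
for b in [1, 2, 3, 5, 7]:                # keys drawn from prime numbers
    xk = chaotic_permutation(xl, b)
    cos_a = xl @ xk / (np.linalg.norm(xl) * np.linalg.norm(xk))
    print(b, round(cos_a, 3))
```

Because the components are only reordered, the denominator of cos a is constant while the numerator varies with the key, as described above.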

 

          We used a traditional Hopfield-type associative memory [DAV89]. The vectors of 120 components (+1/-1) represent alphanumeric characters. They are transformed and stored in the form of an autoassociative matrix:

M = Σ Xk . Xk^T
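The storage step can be sketched as a sum of outer products over the transformed +1/-1 vectors (zeroing the diagonal, as is conventional for Hopfield networks; the function name is illustrative):

```python
import numpy as np

def hopfield_store(patterns):
    """Build the autoassociative matrix M = sum_k Xk . Xk^T
    from a list of +1/-1 vectors (the transformed patterns)."""
    n = patterns[0].size
    M = np.zeros((n, n))
    for xk in patterns:
        M += np.outer(xk, xk)        # accumulate Xk . Xk^T
    np.fill_diagonal(M, 0.0)         # conventionally, no self-connections
    return M
```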

          A different parameter b is associated with each vector. This parameter is the "key" assigned to the image. During recall we do not know, a priori, the key of the stored image; we know only the set of keys used (a set of prime numbers). We therefore present to the network the vectors transformed with every key, and retain the nearest or most quickly extracted vector.
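A hedged sketch of this recall strategy (the names `recall` and `retrieve`, the synchronous update rule, and the overlap criterion for "nearest" are assumptions, not taken from the original): transform the probe with every known key, run the Hopfield dynamics on each candidate, and keep the best result.

```python
import numpy as np

def recall(M, x, steps=20):
    """Synchronous Hopfield update x <- sign(M x) until a fixed point."""
    for _ in range(steps):
        y = np.sign(M @ x)
        y[y == 0] = 1.0              # break ties toward +1
        if np.array_equal(y, x):     # fixed point reached
            break
        x = y
    return x

def retrieve(M, probe, keys, transform):
    """Try every key in the known key set: transform the probe,
    run recall, and keep the (key, pattern) pair whose recalled
    state best overlaps its input -- the 'nearest' vector."""
    best, best_overlap = None, -np.inf
    for b in keys:
        xk = transform(probe, b)     # keyed chaotic transformation
        y = recall(M, xk)
        overlap = float(y @ xk)
        if overlap > best_overlap:
            best, best_overlap = (b, y), overlap
    return best
```

With a disturbed probe, the candidate transformed with the correct key settles onto its stored fixed point, while wrong keys produce poorly matching states.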
         According to Hopfield's model, such networks have a storage capacity of approximately 0.15 n, where n is the number of neurons. Here n = 120, so one could hope to store 18 character images. Since the memorized characters are not orthogonal, only 7 or 8 of them can be stored without the chaotic transformation.
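The capacity estimate is just 0.15 × n, rounded:

```python
n = 120                        # number of neurons (vector components)
capacity = round(0.15 * n)     # Hopfield's ~0.15 n capacity estimate
print(capacity)                # -> 18
```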
         With the chaotic transformation, the number of stored images is doubled without losing the associative properties of recognition of disturbed or truncated images.

 References:

[DAV89]     Eric Davalo, Patrick Naïm, "Le modèle de Hopfield", in Des Réseaux de Neurones, Eyrolles, 1989, pp. 104-112

[HER94]     Jeanny Hérault, Christian Jutten, "Modèles de neurones et de réseaux", in Réseaux neuronaux et traitement du signal, Hermès, 1994, p. 25

 
