Kohonen Self-Organizing Maps

Introduction

Based on articles by Laurene Fausett and T. Kohonen.

Self-organizing neural networks are used to cluster input patterns into groups of similar patterns.  They're called "maps" because they assume a topological structure among their cluster units, effectively mapping weights to input data.  The Kohonen network is probably the best-known example: it's simple, yet it introduces the concepts of self-organization and unsupervised learning clearly.

Each weight vector is representative of a certain class of input.  Input patterns are presented to all cluster units simultaneously.

Self-organizing networks can be either supervised or unsupervised.  Unsupervised learning is a means of modifying the weights of a neural network without specifying a desired output for any input pattern.  The advantage is that the network finds its own solution, which makes it effective at pattern association.  The disadvantage is that other programs or users have to figure out how to interpret the output.

The structure of a self-organizing map consists of m cluster units, arranged in either a one- or two-dimensional array, and input vectors of n signals.


Figure 1.  Example self-organizing network with five cluster units, Yi, and seven input units, Xi.  The five cluster units are arranged in a linear array.

The weight vectors define the clusters.  Each input pattern is compared to every cluster's weight vector and associated with the cluster it matches best.  The comparison is usually based on the minimum squared Euclidean distance.  When the best match is found, the winning cluster unit and its neighboring units have their weights updated.
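As a quick sketch of this matching step (using NumPy; the weight values and the input pattern below are illustrative assumptions, not taken from the text):

import numpy as np

# One row per cluster unit, one column per input signal (illustrative values).
weights = np.array([[0.2, 0.6, 0.5, 0.9],
                    [0.8, 0.4, 0.7, 0.3]])
x = np.array([1.0, 1.0, 0.0, 0.0])        # one input pattern

d = np.sum((weights - x) ** 2, axis=1)    # squared Euclidean distance to each cluster
winner = int(np.argmin(d))                # index of the best-matching cluster
print(d, winner)                          # [1.86 0.98] 1  -> cluster 1 wins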

Weight vectors are arranged in a line or in one of several grid structures.  Neighborhoods near the ends or edges contain fewer units, because the algorithm doesn't wrap around.
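For example, a small helper (the function name here is hypothetical) that returns the radius-R neighborhood of unit j in a linear array of m units, clipped at the ends rather than wrapped:

def neighborhood(j, radius, m):
    # Indices within `radius` of unit j, clipped at the array ends.
    return list(range(max(0, j - radius), min(m - 1, j + radius) + 1))

print(neighborhood(2, 2, 5))   # [0, 1, 2, 3, 4]
print(neighborhood(0, 2, 5))   # [0, 1, 2]  -- fewer units near the end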


Figure 2.  Neighborhoods (R) for a linear array of cluster units: R = 0 in black brackets, R = 1 in red, and R = 2 in blue.


Figure 3.  Neighborhoods (R) for a rectangular matrix of cluster units: R = 0 in black brackets, R = 1 in red, and R = 2 in blue.


Figure 4.  Neighborhoods (R) for a hexagonal matrix of cluster units: R = 0 in black brackets, R = 1 in red, and R = 2 in blue.


The learning rate, α (alpha), is slowly decreased with each epoch.

The size or radius of the neighborhood around a cluster unit can also decrease during the later epochs.
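A sketch of what such schedules might look like; the geometric decay factor and the epochs at which the radius shrinks are illustrative assumptions:

alpha, min_alpha, decay = 0.6, 0.01, 0.96   # illustrative values
radius = 2
epoch = 0
while alpha > min_alpha:
    epoch += 1
    # ... present all input vectors and update weights here ...
    alpha *= decay                    # slowly decrease the learning rate
    if epoch in (30, 60) and radius > 0:
        radius -= 1                   # shrink the neighborhood in later epochs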

The formation of a map occurs in two stages:

  1. the initial formation of the correct order, and
  2. the final convergence

The second stage takes much longer, and usually occurs as the learning rate becomes small.

The initial weights can be random values.
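For instance, matching Figure 1's five cluster units and seven inputs (using NumPy's random generator is just one possible choice):

import numpy as np

# 5 cluster units by 7 input signals, uniform random values in [0, 1).
weights = np.random.default_rng(0).random((5, 7))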

The Algorithm

Let D(j) be the squared Euclidean distance between the current input vector and the weight vector of cluster node j.

set the decay rate

set alpha (the learning rate)

set the minimum alpha

while alpha > minimum alpha
{

    for each input vector x
    {

        for each node j, compute the distance:

            D(j) = Σi (xi - wij)²

        find the index j such that D(j) is a minimum.

        update the weights for the node at index j and its neighbors:

            wij(new) = wij(old) + α[xi - wij(old)]

    }

    reduce alpha

    optionally, reduce the radius of the topological neighborhoods at specified epochs.

}
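Below is a minimal runnable sketch of the whole algorithm in Python with NumPy.  The unit count and parameter values are illustrative assumptions, not prescribed by the pseudocode above, and the four 4-bit training patterns are simply in the spirit of Fausett's textbook example:

import numpy as np

def train_som(data, m, alpha=0.6, min_alpha=0.01, decay=0.96, radius=0):
    """Train a 1-D Kohonen map with m cluster units on the rows of `data`."""
    n = data.shape[1]
    weights = np.random.default_rng(0).random((m, n))   # random initial weights

    while alpha > min_alpha:
        for x in data:
            # D(j): squared Euclidean distance from x to each weight vector
            d = np.sum((weights - x) ** 2, axis=1)
            j = int(np.argmin(d))                       # winning cluster unit

            # update the winner and its neighbors, clipped at the array ends
            lo, hi = max(0, j - radius), min(m - 1, j + radius)
            weights[lo:hi + 1] += alpha * (x - weights[lo:hi + 1])

        alpha *= decay                                  # reduce alpha each epoch

    return weights

patterns = np.array([[1, 1, 0, 0],
                     [0, 0, 0, 1],
                     [1, 0, 0, 0],
                     [0, 0, 1, 1]], dtype=float)
print(train_som(patterns, m=2))   # one converged weight vector per cluster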


Works Cited

Fausett, Laurene.  (1994).  Fundamentals of neural networks: Architectures, algorithms, and applications.  New Jersey: Prentice Hall.  pp. 169-175.

Kohonen, T.  (1989).  Self-organization and associative memory  (3rd ed.).  Berlin: Springer-Verlag.

Kohonen, T.  (1989).  "A self-learning musical grammar, or 'Associative memory of the second kind'."  Proceedings of the International Joint Conference on Neural Networks, Washington, DC, 1: 1-5.
