Learning Vector Quantization

LVQ Example 1

Most of this code was taken from the self-organizing map examples and modified for LVQ. Unlike in those examples, the weights are not initialized to random values, and they are updated according to a different scheme.

This is one of the ways in which LVQ depends heavily on supervised learning: the weights are initially set to match specific patterns, and the known class labels then drive the training through the input nodes. Note both the "initializeWeights()" and "training()" functions.
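As a rough sketch of what that initialization might look like (the names w and mPattern, the dimensions, and the use of plain vectors in place of the article's accessor syntax are all assumptions based on the surrounding text):

#include <vector>

const int NUM_CLUSTERS = 7;  // one cluster per letter: A, B, C, D, E, J, K
const int VECTOR_LEN = 63;   // assumed size of each input pattern

std::vector<std::vector<double>> w(NUM_CLUSTERS, std::vector<double>(VECTOR_LEN));
std::vector<std::vector<double>> mPattern; // training patterns, loaded elsewhere

// Instead of random values, each cluster's weight vector is copied
// directly from one exemplar of the class it is meant to represent
// (A1 for cluster 0, B1 for cluster 1, and so on).
void initializeWeights() {
    for (int cluster = 0; cluster < NUM_CLUSTERS; ++cluster) {
        for (int i = 0; i < VECTOR_LEN; ++i) {
            w[cluster][i] = mPattern[cluster][i]; // assumes A1..K1 come first
        }
    }
}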

Also, you can see how positive and negative reinforcement come into play in UpdateWeights().
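The rule itself is the classic LVQ1 update: the winning cluster's weights are pulled toward the input when its class matches the pattern's label, and pushed away when it does not. A sketch, continuing with the assumed names from above (targetCluster() is a hypothetical helper; here it exploits the fact that the patterns cycle through the seven classes in order):

double Alpha = 0.3; // assumed learning rate; typically decayed over training

// Hypothetical: with patterns ordered A1..K1, A2..K2, A3..K3, the class
// (and therefore the correct cluster) of pattern v is simply v mod 7.
int targetCluster(int v) { return v % NUM_CLUSTERS; }

// DMin is the index of the winning (closest) cluster for this pattern.
void UpdateWeights(int VectorNumber, int DMin) {
    for (int i = 0; i < VECTOR_LEN; ++i) {
        if (DMin == targetCluster(VectorNumber)) {
            // Positive reinforcement: move the winner toward the input.
            w[DMin][i] += Alpha * (mPattern[VectorNumber][i] - w[DMin][i]);
        } else {
            // Negative reinforcement: move the wrong winner away from it.
            w[DMin][i] -= Alpha * (mPattern[VectorNumber][i] - w[DMin][i]);
        }
    }
}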

Example Results 1

I could already guess that with THAT MUCH supervised learning going on, the results would be a little too perfect.

All A's belong to cluster 0, B's to cluster 1, C's to cluster 2, etc.

 

Weights for cluster 0 initialized to pattern A1
Weights for cluster 1 initialized to pattern B1
Weights for cluster 2 initialized to pattern C1
Weights for cluster 3 initialized to pattern D1
Weights for cluster 4 initialized to pattern E1
Weights for cluster 5 initialized to pattern J1
Weights for cluster 6 initialized to pattern K1

Pattern A1 belongs to cluster 0
Pattern B1 belongs to cluster 1
Pattern C1 belongs to cluster 2
Pattern D1 belongs to cluster 3
Pattern E1 belongs to cluster 4
Pattern J1 belongs to cluster 5
Pattern K1 belongs to cluster 6
Pattern A2 belongs to cluster 0
Pattern B2 belongs to cluster 1
Pattern C2 belongs to cluster 2
Pattern D2 belongs to cluster 3
Pattern E2 belongs to cluster 4
Pattern J2 belongs to cluster 5
Pattern K2 belongs to cluster 6
Pattern A3 belongs to cluster 0
Pattern B3 belongs to cluster 1
Pattern C3 belongs to cluster 2
Pattern D3 belongs to cluster 3
Pattern E3 belongs to cluster 4
Pattern J3 belongs to cluster 5
Pattern K3 belongs to cluster 6

All are correct

Example Results 2

You might be wondering if there's any learning going on at all if the weights are already adjusted before the algorithm even starts.

Try this: comment out the following line in the UpdateWeights() function:

"w(DMin, i) -= (Alpha * (mPattern(VectorNumber)(i) - w(DMin, i)));"

 

And notice the difference in the results:

 

Weights for cluster 0 initialized to pattern A1
Weights for cluster 1 initialized to pattern B1
Weights for cluster 2 initialized to pattern C1
Weights for cluster 3 initialized to pattern D1
Weights for cluster 4 initialized to pattern E1
Weights for cluster 5 initialized to pattern J1
Weights for cluster 6 initialized to pattern K1

Pattern A1 belongs to cluster 0
Pattern B1 belongs to cluster 1
Pattern C1 belongs to cluster 2
Pattern D1 belongs to cluster 3
Pattern E1 belongs to cluster 4
Pattern J1 belongs to cluster 5
Pattern K1 belongs to cluster 6
Pattern A2 belongs to cluster 0
Pattern B2 belongs to cluster 1
Pattern C2 belongs to cluster 2
Pattern D2 belongs to cluster 2 - incorrect
Pattern E2 belongs to cluster 1 - incorrect
Pattern J2 belongs to cluster 5
Pattern K2 belongs to cluster 0 - incorrect
Pattern A3 belongs to cluster 0
Pattern B3 belongs to cluster 4 - incorrect
Pattern C3 belongs to cluster 2
Pattern D3 belongs to cluster 3
Pattern E3 belongs to cluster 4
Pattern J3 belongs to cluster 5
Pattern K3 belongs to cluster 6

As you can see, both positive and negative reinforcement play a role in the LVQ learning process.
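Putting the pieces together, the overall training flow might look something like the following sketch (closestCluster(), MAX_EPOCHS, and DECAY_RATE are assumptions, not the article's exact code):

const int MAX_EPOCHS = 100;     // assumed training length
const double DECAY_RATE = 0.96; // assumed per-epoch shrink factor for Alpha

// Winner = cluster whose weight vector is nearest (squared Euclidean
// distance) to pattern v.
int closestCluster(int v) {
    int best = 0;
    double bestDist = 1e300;
    for (int c = 0; c < NUM_CLUSTERS; ++c) {
        double d = 0.0;
        for (int i = 0; i < VECTOR_LEN; ++i) {
            double diff = mPattern[v][i] - w[c][i];
            d += diff * diff;
        }
        if (d < bestDist) { bestDist = d; best = c; }
    }
    return best;
}

void training() {
    for (int epoch = 0; epoch < MAX_EPOCHS; ++epoch) {
        for (int v = 0; v < (int)mPattern.size(); ++v) {
            UpdateWeights(v, closestCluster(v));
        }
        Alpha *= DECAY_RATE; // gradually reduce the learning rate
    }
}

With both branches of UpdateWeights() active, every pass nudges each pattern's winning cluster toward or away from it, which is what produces the perfect classification in the first set of results.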
