Ezequiel López-Rubio, Juan Miguel Ortiz-de-Lazcano-Lobato, María del Carmen Vargas-González, José Miguel López-Rubio
One of the best-known techniques for multidimensional data analysis is Principal Components Analysis (PCA). A number of local PCA neural models have been proposed to partition an input distribution into meaningful clusters. Each neuron of these models uses a certain number of basis vectors to represent the principal directions of a particular cluster. Most of these neural networks are unable to learn the number of basis vectors, which is specified a priori as a fixed parameter; this leads to poor adaptation to the input data. Here we develop a method in which the number of basis vectors of each neuron is learned. We then apply this method to a well-known local PCA neural model. Finally, experimental results are presented in which the original and modified versions of the neural model are compared.
Keywords: neural networks, Principal Components Analysis, dimensionality reduction, multispectral imaging
Citation: Ezequiel López-Rubio, Juan Miguel Ortiz-de-Lazcano-Lobato, María del Carmen Vargas-González, José Miguel López-Rubio: Dynamic Selection of Model Parameters in Principal Components Analysis Neural Networks. In R. López de Mántaras and L. Saitta (eds.): ECAI 2004, Proceedings of the 16th European Conference on Artificial Intelligence, IOS Press, Amsterdam, 2004, pp. 618-622.
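The abstract describes local PCA models in which each neuron keeps a set of basis vectors for the principal directions of its cluster, with the number of basis vectors chosen per neuron rather than fixed a priori. The sketch below is a minimal illustration of that idea, assuming an explained-variance criterion as the selection rule; the paper's actual learning rule is not given in this abstract, and the function name `local_pca_with_adaptive_dimension` and the threshold parameter are illustrative assumptions.

```python
import numpy as np

def local_pca_with_adaptive_dimension(clusters, var_threshold=0.95):
    """For each cluster, estimate principal directions and choose how many
    basis vectors to keep so that the retained variance reaches var_threshold.
    `clusters` is a list of (n_i, d) arrays, one per neuron/cluster.
    (Illustrative selection rule; not the paper's method.)"""
    models = []
    for X in clusters:
        mean = X.mean(axis=0)
        Xc = X - mean
        # Eigendecomposition of the cluster covariance matrix
        cov = Xc.T @ Xc / max(len(X) - 1, 1)
        eigvals, eigvecs = np.linalg.eigh(cov)
        # Sort eigenpairs by decreasing eigenvalue
        order = np.argsort(eigvals)[::-1]
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        # Keep the smallest number of basis vectors whose cumulative
        # explained variance reaches the threshold
        ratios = np.cumsum(eigvals) / eigvals.sum()
        k = int(np.searchsorted(ratios, var_threshold) + 1)
        models.append({"mean": mean, "basis": eigvecs[:, :k], "k": k})
    return models

# Usage: two synthetic clusters with different intrinsic dimensions,
# so each "neuron" ends up with a different number of basis vectors.
rng = np.random.default_rng(0)
cluster_a = rng.normal(size=(200, 1)) @ rng.normal(size=(1, 5))  # ~1-D cluster in 5-D
cluster_b = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 5))  # ~3-D cluster in 5-D
for m in local_pca_with_adaptive_dimension([cluster_a, cluster_b]):
    print("kept", m["k"], "basis vectors")
```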