artificial-intelligence,encog,som,dimensionality-reduction

Dimensionality reduction (compression of information) is reversible in auto-encoders. An auto-encoder is a regular neural network with a bottleneck layer in the middle. You have, for instance, 20 inputs in the first layer, 10 neurons in the middle layer, and again 20 neurons in the last layer. When you train such a network you...
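The 20-10-20 shape described above can be sketched in plain numpy (a minimal illustration, not Encog; the linear units, toy data, learning rate, and iteration count are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples with 20 features that really live on a 10-D subspace,
# so a 20-10-20 auto-encoder can compress them without losing information.
latent = rng.normal(size=(200, 10))
mixing = rng.normal(size=(10, 20)) / np.sqrt(10)
X = latent @ mixing

W1 = rng.normal(scale=0.1, size=(20, 10))   # encoder: 20 inputs -> 10-neuron bottleneck
W2 = rng.normal(scale=0.1, size=(10, 20))   # decoder: 10 -> 20 outputs

def mse():
    return float(np.mean((X @ W1 @ W2 - X) ** 2))

lr = 0.05
before = mse()
for _ in range(2000):
    code = X @ W1            # bottleneck activations = the compressed representation
    recon = code @ W2        # reconstruction of the original 20 inputs
    err = recon - X
    # Gradient descent on the mean squared reconstruction error.
    gW2 = code.T @ err / len(X)
    gW1 = X.T @ (err @ W2.T) / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1
after = mse()
```

Training drives the network to reproduce its input through the bottleneck; afterwards, `X @ W1` is the compressed 10-D code and multiplying by `W2` reverses the compression.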

som,dimensionality-reduction,uci

Ok, so first of all, refer to some previous related questions, which will give you a better understanding of the dimensionality-reduction and visualization properties of the SOM: Plotting the Kohonen map - Understanding the visualization, Interpreting a Self Organizing Map. Second, a simple case to test the properties of...
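To make the dimensionality-reduction role concrete, here is a bare-bones SOM in numpy mapping 3-D points onto a 2-D grid (the 8x8 grid size, decay schedules, and iteration count are arbitrary choices of mine, not anything from the linked questions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 500 points in 3-D (think RGB colours) to be arranged on a 2-D map.
data = rng.random((500, 3))

rows, cols = 8, 8                               # map size is a free choice
weights = rng.random((rows, cols, 3))           # one codebook vector per node
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), axis=-1)

n_iter = 2000
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    # Best matching unit (BMU): node whose codebook vector is closest to x.
    bmu = np.unravel_index(
        np.argmin(np.linalg.norm(weights - x, axis=-1)), (rows, cols))
    # Learning rate and neighbourhood radius both shrink over time.
    lr = 0.5 * (1 - t / n_iter)
    sigma = 3.0 * (1 - t / n_iter) + 0.5
    dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)           # pull BMU and neighbours toward x

# Quantisation error: mean distance from each point to its BMU.
qe = np.linalg.norm(
    data[:, None, None, :] - weights[None], axis=-1
).reshape(len(data), -1).min(axis=1).mean()
```

Because neighbouring nodes are updated together, nearby grid cells end up with similar codebook vectors, which is what makes the 2-D map a faithful visualization of the higher-dimensional data.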

neural-network,deep-learning,dimensionality-reduction,autoencoder

You should take a look at some of the tutorials over at deeplearning.net. They have a Stacked Denoising Autoencoder example with code. All of the tutorials are written in Theano which is a scientific computing library that will generate GPU code for you. Here's an example of a visualization of...
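If you just want the core idea before diving into the Theano tutorial, a denoising auto-encoder reduces to one trick: corrupt the input, then train the network to reconstruct the clean version. A numpy sketch of that step (tied weights, masking noise; the data, sizes, and schedule are my assumptions, not the tutorial's):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data with real structure: 16 visible features driven by 4 latent factors.
latent = rng.normal(size=(256, 4))
X = sigmoid(latent @ rng.normal(size=(4, 16)))

W = rng.normal(scale=0.1, size=(16, 8))     # tied encoder/decoder weights
lr = 0.1

def clean_mse():
    # Reconstruction error on UNcorrupted input.
    return float(np.mean((sigmoid(sigmoid(X @ W) @ W.T) - X) ** 2))

before = clean_mse()
for _ in range(3000):
    # The defining denoising step: zero out ~30% of each input at random...
    noisy = X * (rng.random(X.shape) > 0.3)
    h = sigmoid(noisy @ W)                  # encode the corrupted input
    recon = sigmoid(h @ W.T)                # decode with tied weights
    err = recon - X                         # ...but score against the CLEAN input
    d_out = err * recon * (1 - recon)
    d_h = (d_out @ W) * h * (1 - h)
    # Tied weights collect gradient from both the encoder and decoder sides.
    W -= lr * (noisy.T @ d_h + d_out.T @ h) / len(X)
after = clean_mse()
```

Stacking several of these, each trained on the hidden representation of the previous one, gives the Stacked Denoising Autoencoder from the tutorial.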

matlab,image-processing,dimensionality-reduction

You can directly apply a 2D-PCA. At least it exists, and it should perform better (reduction-wise) than the 1D-PCA. A very highly cited research paper from 2004 on this: Yang, J. et al., 2004. Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition. IEEE Transactions on Pattern Analysis and...
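The method from that paper fits in a few lines: build the image scatter matrix from the image matrices directly (no flattening), then project each image onto its top eigenvectors. A numpy sketch, not the paper's code, and with arbitrary toy sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# 2D-PCA works on image matrices directly, with no flattening into long
# vectors as classic (1-D) PCA requires.
M, m, n = 50, 32, 24                 # 50 toy "images" of 32x24 pixels
A = rng.random((M, m, n))
Abar = A.mean(axis=0)

# Image scatter matrix: average of (A_i - Abar)^T (A_i - Abar), size n x n.
G = np.zeros((n, n))
for Ai in A:
    D = Ai - Abar
    G += D.T @ D
G /= M

# Keep the k eigenvectors of G with the largest eigenvalues.
k = 5
vals, vecs = np.linalg.eigh(G)       # eigh returns ascending eigenvalues
V = vecs[:, ::-1][:, :k]             # top-k projection axes, n x k

# Each image is reduced from m x n to an m x k feature matrix.
Y = A @ V                            # shape (M, m, k)
A_hat = Y @ V.T                      # rank-k approximation of each image
```

Note that the scatter matrix is only n x n (here 24x24) instead of the mn x mn matrix 1D-PCA would need, which is the practical advantage for images.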

matlab,pca,dimensionality-reduction

You should take advantage of the built-in functions in Matlab: use the pca function directly, or even the cov function if you want to compare eigs to pcacov. Now to answer your question: both return the same eigenvectors, but not in the same order. See the following example: >>...
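The same order-and-sign effect can be demonstrated outside Matlab; here is the comparison in numpy (eigen-decomposition of the covariance versus an SVD-based PCA, which is analogous to what pca-style routines do internally; this is my illustration, not the Matlab session from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
Xc = X - X.mean(axis=0)

# Route 1: eigen-decomposition of the covariance matrix.
vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))  # eigh -> ascending order
# Route 2: SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)     # singular values descending

# Same eigenvalues, opposite order:
assert np.allclose(vals[::-1], s ** 2 / (len(X) - 1))
# Same eigenvectors up to order and sign:
for i in range(4):
    assert np.isclose(abs(vecs[:, ::-1][:, i] @ Vt[i]), 1.0)
```

So the principal axes agree; only their ordering (and possibly the sign of each vector) differs between the two routes, exactly as with eigs versus pcacov in Matlab.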