Restricted Boltzmann Machine
Restricted Boltzmann machines (RBMs) are among the earliest neural networks used for unsupervised learning, popularized by Geoffrey Hinton (University of Toronto).
The aim of an RBM is to find patterns in data by reconstructing the inputs using only two layers (the visible layer and the hidden layer). In the forward pass, the RBM translates the visible layer into a set of numbers that encodes the inputs; in the backward pass, it takes that set of numbers and translates it back to the visible layer to reconstruct the inputs.
This code introduces a very simple algorithm based on contrastive divergence training. The details of the method are explained step by step in the comments inside the code.
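For orientation, here is a minimal sketch of one forward/backward pass followed by the CD-1 weight update it implies. All names and sizes (v, W, b, c, eta) are illustrative assumptions, not the variables of the submitted files, which walk through these steps with their own comments.

    % Minimal sketch of one CD-1 training step for a binary RBM.
    % Names (v, W, b, c, eta) are assumptions, not the submission's code.
    nV = 6;  nH = 3;                    % visible / hidden layer sizes
    v  = double(rand(1, nV) > 0.5);     % example binary input vector
    W  = 0.1*randn(nV, nH);             % weight matrix
    b  = zeros(1, nV);                  % visible biases
    c  = zeros(1, nH);                  % hidden biases
    eta  = 0.1;                         % learning rate (assumption)
    sigm = @(x) 1 ./ (1 + exp(-x));     % logistic sigmoid

    % Forward pass: encode the visible layer into hidden activations.
    pH = sigm(v*W + c);                 % P(h = 1 | v)
    h  = double(pH > rand(1, nH));      % sample binary hidden states

    % Backward pass: decode the hidden states back to the visible layer.
    pV      = sigm(h*W' + b);           % P(v = 1 | h)
    vRecon  = double(pV > rand(1, nV)); % reconstructed visible vector
    pHrecon = sigm(vRecon*W + c);       % hidden probabilities for the reconstruction

    % CD-1 update: positive phase minus negative phase.
    W = W + eta*(v'*pH - vRecon'*pHrecon);
    b = b + eta*(v - vRecon);
    c = c + eta*(pH - pHrecon);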
To learn about RBMs you can start from this reference:
[1] G. Hinton, "A Practical Guide to Training Restricted Boltzmann Machines," University of Toronto, 2010.
How to use the code:
https://www.youtube.com/watch?v=uaVfyeE3Jwk&feature=youtu.be
Cite As
BERGHOUT Tarek (2024). Restricted Boltzmann Machine (https://www.mathworks.com/matlabcentral/fileexchange/71212-restricted-boltzmann-machine), MATLAB Central File Exchange. Retrieved .
Platform Compatibility
Windows, macOS, Linux
RBM_new
RBM_new/RBM
Version | Published | Release Notes
---|---|---
3.1.0 | | description
3.0.0 | | description
2.0.0 | | new version
1.5.0 | | references
1.4.0 | | In the previous version we made a mistake: instead of presenting all the samples of the image to the visible neurons, only one sample was used in each Gibbs sampling step; this is now corrected.
1.3.0 | | new descriptive image
1.2.0 | | In the previous version we mistakenly trained the RBM with scalar units in the visible and hidden layers; changing the representation of these units to binary during training gave a marked improvement in accuracy (see the short sketch after this table).
1.0.0 | |
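As mentioned in the 1.2.0 note, the visible and hidden units are represented as binary rather than scalar values during training. A minimal illustration of that binarization step (the 0.5 threshold is an assumption, not necessarily what the submitted files use):

    % Illustrative only: binarizing real-valued inputs before training.
    X    = rand(100, 6);        % example real-valued data (samples x features)
    Xbin = double(X > 0.5);     % binary visible units used during training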