What are the purposes of autoencoders?
Autoencoders are neural networks that learn a compressed representation of the input in order to later reconstruct it, so they can be used for dimensionality reduction. They are composed of an encoder and a decoder (which can be separate neural networks). Dimensionality reduction can help deal with, or at least attenuate, the issues caused by the curse of dimensionality, where data becomes sparse and it is harder to obtain "statistical significance". So, autoencoders (and algorithms like PCA) can be used to mitigate the curse of dimensionality.
Why do we care about dimensionality reduction specifically using autoencoders? Why can't we simply use PCA, if the purpose is dimensionality reduction?
Why do we need to decompress the latent representation of the input if we just want to perform dimensionality reduction, or why do we need the decoder part in an autoencoder? What are the use cases? In general, why do we need to compress the input to later decompress it? Wouldn't it be better to just use the original input (to start with)?
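For concreteness, here is a minimal sketch of the encoder/decoder structure described in the first paragraph, written in PyTorch (the framework, layer sizes and training details are illustrative assumptions, not anything prescribed above):

```python
import torch
import torch.nn as nn

# A tiny autoencoder: 784-dimensional input -> 32-dimensional code -> 784-dimensional reconstruction.
# 784 matches flattened 28x28 images (e.g. MNIST); all sizes here are arbitrary.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
autoencoder = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(256, 784)  # stand-in for a real batch of data

for _ in range(100):  # on real data you would iterate over a DataLoader
    optimizer.zero_grad()
    loss = loss_fn(autoencoder(x), x)  # reconstruction error: how well decode(encode(x)) matches x
    loss.backward()
    optimizer.step()

# For dimensionality reduction, only the encoder is kept after training:
with torch.no_grad():
    code = encoder(x)  # shape (256, 32): the compressed representation
```

Note that the reconstruction target is the input itself, which is what allows the network to be trained without labels; the decoder exists only to define that reconstruction loss.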
machine-learning autoencoders dimensionality-reduction curse-of-dimensionality
asked 6 hours ago by nbro
See also the following question stats.stackexchange.com/q/82416/82135 on CrossValidated SE. – nbro, 5 hours ago
2 Answers
A use case of autoencoders (in particular, of the decoder or generative model of the autoencoder) is to denoise the input. This type of autoencoder, called a denoising autoencoder, takes a partially corrupted input and attempts to reconstruct the corresponding uncorrupted input. There are several applications of this model: for example, if you had a corrupted image, you could potentially recover the uncorrupted one using a denoising autoencoder.
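As a rough sketch of that idea (the noise model, architecture and sizes below are assumptions made for illustration):

```python
import torch
import torch.nn as nn

# Denoising autoencoder: corrupt the input, but reconstruct the *clean* input.
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),  # encoder
    nn.Linear(128, 784),             # decoder
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(256, 784)  # stand-in for a batch of uncorrupted data

for _ in range(100):
    noisy = clean + 0.3 * torch.randn_like(clean)  # partially corrupt the input (Gaussian noise here)
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)            # target is the uncorrupted input
    loss.backward()
    optimizer.step()

# At test time, model(corrupted_image) is an estimate of the clean image.
```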
Autoencoders and PCA are related:
an autoencoder with a single fully-connected hidden layer, a linear activation function and a squared error cost function trains weights that span the same subspace as the one spanned by the principal component loading vectors, although they are not identical to the loading vectors.
For more info, have a look at the paper From Principal Subspaces to Principal Components with Linear Autoencoders (2018) by Elad Plaut. See also this answer, which explains the relation between PCA and autoencoders.
answered 6 hours ago, edited 6 hours ago, by nbro
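The quoted relationship can be checked numerically: the sketch below trains a linear autoencoder with plain gradient descent and compares the subspace spanned by its encoder weights with the top principal subspace via principal angles (the synthetic data, learning rate and iteration count are arbitrary choices, and the angles only approach zero to the extent that training has converged):

```python
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))  # correlated synthetic data
X -= X.mean(axis=0)

k = 3
V = PCA(n_components=k).fit(X).components_  # top-k principal directions, shape (k, 10)

# Linear autoencoder: minimise ||X W_e W_d - X||^2, no biases, no non-linearity.
W_e = 0.1 * rng.normal(size=(10, k))  # encoder weights
W_d = 0.1 * rng.normal(size=(k, 10))  # decoder weights
lr = 1e-3
for _ in range(20000):
    Z = X @ W_e          # codes
    R = Z @ W_d - X      # reconstruction residual
    grad_d = Z.T @ R / len(X)
    grad_e = X.T @ R @ W_d.T / len(X)
    W_d -= lr * grad_d
    W_e -= lr * grad_e

# Principal angles (in degrees) between the encoder's column space and the PCA subspace;
# they should be close to zero once training has converged, even though W_e itself
# is not equal to the loading vectors.
print(np.degrees(subspace_angles(W_e, V.T)))
```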
PCA is a linear method that learns a transformation of the data into a new coordinate system (a change of axes).
Since PCA looks for the directions of maximum variance, the resulting features are often discriminative, BUT there is no guarantee that the direction of maximum variance is also the most discriminative direction.
LDA is a linear method that learns a transformation which finds the direction most relevant for deciding whether a vector belongs to class A or class B.
Both PCA and LDA have non-linear kernel versions that may overcome their linear limitations.
Autoencoders can perform dimensionality reduction with other kinds of loss functions, can be non-linear, and may perform better than PCA and LDA in many cases.
There is probably no single best machine learning algorithm for every task; deep learning and neural networks are sometimes overkill for simple problems, so PCA and LDA are worth trying before other, more complex, dimensionality reduction methods.
answered 32 mins ago by Pedro Henrique Monforte (new contributor)
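As a small illustration of these differences (scikit-learn and the iris dataset are used here purely as a convenient example; the answer itself does not assume any particular library):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)  # 150 samples, 4 features, 3 classes

# PCA: unsupervised, keeps the directions of maximum variance (labels y are ignored).
X_pca = PCA(n_components=2).fit_transform(X)

# LDA: supervised, keeps the directions that best separate the classes
# (at most n_classes - 1 = 2 components here).
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

# Kernel PCA: a non-linear variant of PCA.
X_kpca = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)

print(X_pca.shape, X_lda.shape, X_kpca.shape)  # all (150, 2)
```

An autoencoder with a two-unit bottleneck would play the same role as PCA here, but with a learned, possibly non-linear mapping.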
What does LDA have to do with the question? – nbro, 16 mins ago
LDA can be used for dimensionality reduction. The original algorithm derives only one projection, but you can use it to obtain lower-ranking discriminative directions for more accurate modelling. – Pedro Henrique Monforte, 15 mins ago