What are the purposes of autoencoders?


Autoencoders are neural networks that learn a compressed representation of the input in order to later reconstruct it, so they can be used for dimensionality reduction. They are composed of an encoder and a decoder (which can be separate neural networks). Dimensionality reduction can help deal with or attenuate the problems associated with the curse of dimensionality, where data becomes sparse and it becomes harder to obtain "statistical significance". So autoencoders (and algorithms like PCA) can be used to deal with the curse of dimensionality.



Why do we care about dimensionality reduction specifically using autoencoders? Why can't we simply use PCA, if the purpose is dimensionality reduction?



Why do we need to decompress the latent representation of the input if we just want to perform dimensionality reduction, or why do we need the decoder part in an autoencoder? What are the use cases? In general, why do we need to compress the input to later decompress it? Wouldn't it be better to just use the original input (to start with)?
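
To make the setup concrete, here is a minimal sketch of the encoder/decoder structure described above, assuming a Keras-style API (the layer sizes, data and training settings are arbitrary illustrative choices):

    # Minimal undercomplete autoencoder: encode to a low-dimensional code, then decode.
    # Assumes TensorFlow/Keras is available; sizes and data are placeholders.
    import numpy as np
    from tensorflow.keras import layers, Model

    input_dim, code_dim = 784, 32  # e.g. flattened 28x28 images -> 32-dimensional code

    inputs = layers.Input(shape=(input_dim,))
    code = layers.Dense(code_dim, activation="relu")(inputs)         # encoder
    outputs = layers.Dense(input_dim, activation="sigmoid")(code)    # decoder

    autoencoder = Model(inputs, outputs)
    encoder = Model(inputs, code)  # the encoder can be reused on its own

    autoencoder.compile(optimizer="adam", loss="mse")

    x = np.random.rand(1000, input_dim).astype("float32")      # placeholder data
    autoencoder.fit(x, x, epochs=5, batch_size=64, verbose=0)  # the target is the input itself

    x_reduced = encoder.predict(x)  # the 32-dimensional compressed representation

In this sketch the full model is trained to reproduce its input, and encoder can then be applied on its own to obtain the reduced representation.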










share|improve this question









$endgroup$







  • 1




    $begingroup$
    See also the following question stats.stackexchange.com/q/82416/82135 on CrossValidated SE.
    $endgroup$
    – nbro
    5 hours ago
















2












$begingroup$


Autoencoders are neural networks that learn a compressed representation of the input in order to later reconstruct it, so they can be used for dimensionality reduction. They are composed of an encoder and a decoder (which can be separate neural networks). Dimensionality reduction can be useful in order to deal with or attenuate the issues related to the curse of dimensionality, where data becomes sparse and it is more difficult to obtain "statistical significance". So, autoencoders (and algorithms like PCA) can be used to deal with the curse of dimensionality.



Why do we care about dimensionality reduction specifically using autoencoders? Why can't we simply use PCA, if the purpose is dimensionality reduction?



Why do we need to decompress the latent representation of the input if we just want to perform dimensionality reduction, or why do we need the decoder part in an autoencoder? What are the use cases? In general, why do we need to compress the input to later decompress it? Wouldn't it be better to just use the original input (to start with)?










share|improve this question









$endgroup$







  • 1




    $begingroup$
    See also the following question stats.stackexchange.com/q/82416/82135 on CrossValidated SE.
    $endgroup$
    – nbro
    5 hours ago














2












2








2





$begingroup$


Autoencoders are neural networks that learn a compressed representation of the input in order to later reconstruct it, so they can be used for dimensionality reduction. They are composed of an encoder and a decoder (which can be separate neural networks). Dimensionality reduction can be useful in order to deal with or attenuate the issues related to the curse of dimensionality, where data becomes sparse and it is more difficult to obtain "statistical significance". So, autoencoders (and algorithms like PCA) can be used to deal with the curse of dimensionality.



Why do we care about dimensionality reduction specifically using autoencoders? Why can't we simply use PCA, if the purpose is dimensionality reduction?



Why do we need to decompress the latent representation of the input if we just want to perform dimensionality reduction, or why do we need the decoder part in an autoencoder? What are the use cases? In general, why do we need to compress the input to later decompress it? Wouldn't it be better to just use the original input (to start with)?










share|improve this question









$endgroup$




Autoencoders are neural networks that learn a compressed representation of the input in order to later reconstruct it, so they can be used for dimensionality reduction. They are composed of an encoder and a decoder (which can be separate neural networks). Dimensionality reduction can be useful in order to deal with or attenuate the issues related to the curse of dimensionality, where data becomes sparse and it is more difficult to obtain "statistical significance". So, autoencoders (and algorithms like PCA) can be used to deal with the curse of dimensionality.



Why do we care about dimensionality reduction specifically using autoencoders? Why can't we simply use PCA, if the purpose is dimensionality reduction?



Why do we need to decompress the latent representation of the input if we just want to perform dimensionality reduction, or why do we need the decoder part in an autoencoder? What are the use cases? In general, why do we need to compress the input to later decompress it? Wouldn't it be better to just use the original input (to start with)?







machine-learning autoencoders dimensionality-reduction curse-of-dimensionality






asked 6 hours ago









nbro

• See also the following question stats.stackexchange.com/q/82416/82135 on CrossValidated SE. – nbro, 5 hours ago





2 Answers

A use case of autoencoders (in particular, of the decoder or generative model of the autoencoder) is to denoise the input. This type of autoencoders, called denoising autoencoders, take a partially corrupted input and they attempt to reconstruct the corresponding uncorrupted input. There are several applications of this model. For example, if you had a corrupted image, you could potentially recover the uncorrupted one using a denoising autoencoder.



Autoencoders and PCA are related:




an autoencoder with a single fully-connected hidden layer, a linear activation function and a squared error cost function trains weights that span the same subspace as the one spanned by the principal component loading vectors, but that they are not identical to the loading vectors.




For more info, have a look at the paper From Principal Subspaces to Principal Components with Linear Autoencoders (2018), by Elad Plaut. See also this answer, which also explains the relation between PCA and autoencoders.
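
As a concrete illustration of the denoising setup described above, here is a minimal sketch assuming a Keras-style API (the architecture, noise level and data are arbitrary placeholders); the model is trained to map corrupted inputs back to the clean originals:

    # Minimal denoising autoencoder sketch: corrupt the input, train to recover the clean version.
    # Assumes TensorFlow/Keras is available; sizes, noise level and data are placeholders.
    import numpy as np
    from tensorflow.keras import layers, Model

    input_dim = 784
    x_clean = np.random.rand(1000, input_dim).astype("float32")                  # placeholder "images"
    x_noisy = x_clean + 0.2 * np.random.randn(*x_clean.shape).astype("float32")  # partially corrupted copies
    x_noisy = np.clip(x_noisy, 0.0, 1.0)

    inputs = layers.Input(shape=(input_dim,))
    hidden = layers.Dense(128, activation="relu")(inputs)            # encoder
    outputs = layers.Dense(input_dim, activation="sigmoid")(hidden)  # decoder

    denoiser = Model(inputs, outputs)
    denoiser.compile(optimizer="adam", loss="mse")

    # The input is the corrupted data, the target is the uncorrupted data.
    denoiser.fit(x_noisy, x_clean, epochs=5, batch_size=64, verbose=0)

    x_recovered = denoiser.predict(x_noisy)  # attempt to reconstruct the clean inputs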






answered 6 hours ago (edited 6 hours ago) by nbro
PCA is a linear method: it learns a transformation that re-projects the vectors onto a new set of axes.

Since PCA looks for the directions of maximum variance, its projections usually have high discriminative power, but there is no guarantee that the direction of maximum variance is also the most discriminative direction.

LDA is a linear method that learns a transformation that finds the direction most relevant for deciding whether a vector belongs to class A or class B.

PCA and LDA have non-linear kernel versions that may overcome their linear limitations.

Autoencoders can perform dimensionality reduction with other kinds of loss functions, can be non-linear, and might perform better than PCA and LDA in many cases.

There is probably no single best machine learning algorithm for every task; sometimes deep learning and neural networks are overkill for simple problems, and PCA and LDA might be tried before other, more complex, dimensionality reduction methods.
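
To make the PCA/LDA contrast concrete, here is a small sketch using scikit-learn (assuming it is available): PCA is fit without labels and keeps the directions of maximum variance, while LDA uses the labels to find the most discriminative directions.

    # PCA vs LDA on a labelled toy dataset (scikit-learn assumed available).
    # PCA ignores the labels; LDA uses them to find class-separating directions.
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X, y = load_iris(return_X_y=True)  # 150 samples, 4 features, 3 classes

    X_pca = PCA(n_components=2).fit_transform(X)                            # unsupervised: maximum variance
    X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)  # supervised: maximum class separation

    print(X_pca.shape, X_lda.shape)  # both (150, 2), but the projections differ

Either two-dimensional embedding could then feed a downstream model; a (possibly non-linear) autoencoder or a kernel version of PCA/LDA would be the alternative when a linear projection is too restrictive.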






answered 32 mins ago by Pedro Henrique Monforte (new contributor)
• What does LDA have to do with the question? – nbro, 16 mins ago

• LDA can be used for dimensionality reduction. The original algorithm derives only one projection, but you can use it to get lower-ranking discriminative directions for more accurate modelling. – Pedro Henrique Monforte, 15 mins ago









