Variance of Monte Carlo integration with importance sampling



I am following these lecture slides on Monte Carlo integration with importance sampling. I am implementing a very simple example: $\int_0^1 e^x\,dx$. For the importance-sampling version, I rewrite $\int_0^1 e^x\,dx = \int_0^1 \frac{e^x}{p(x)}\, p(x)\,dx$ where $p(x) = 2.5x^{1.5}$. Then

$$\hat{I} = \frac{1}{N}\sum_{j=1}^{N} \frac{f(x_j)}{p(x_j)},$$

where the $x_j$ are sampled from $p$ (I use the inverse transform method here). For the variance, I have $\sigma_I^2 = \hat{\sigma}_I^2/N$ and

$$\hat{\sigma}_I^2 = \frac{1}{N}\sum_{j=1}^{N} \frac{f(x_j)^2}{p(x_j)^2} - \hat{I}^2.$$

I know I should expect the variance to decrease with importance sampling, but a plot of the variance against $N$ shows that not much happens. Can anyone explain what I'm doing incorrectly? I'm not sure how they are able to achieve such a drastic decrease in variance in the lecture slides.

[plot: empirical variance versus $N$]
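A minimal sketch of this setup (my own Python translation; the variable names and sample size are assumptions, not the question's actual code):

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

N = 100_000

# Proposal p(x) = 2.5 x^1.5 on (0, 1]; its CDF is F(x) = x^2.5,
# so inverse-transform sampling gives x = U^(1/2.5) for U ~ Uniform(0, 1].
xs = [(1.0 - random.random()) ** (1 / 2.5) for _ in range(N)]

# Importance-sampling estimate of I = int_0^1 e^x dx = e - 1.
weights = [math.exp(x) / (2.5 * x ** 1.5) for x in xs]
I_hat = sum(weights) / N

# Naive empirical variance of the estimator, as in the question.
sigma2_hat = sum(w * w for w in weights) / N - I_hat ** 2
var_I = sigma2_hat / N

print(I_hat)   # typically close to e - 1 ~ 1.71828
print(var_I)
```

The point estimate lands near $e-1$, and the empirical variance looks small, which is exactly why the plot in the question is misleading, as the accepted answer explains.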










Tags: monte-carlo, integral, importance-sampling






asked 4 hours ago by user1799323




















          1 Answer
This is a good illustration of the dangers of importance sampling: while
$$\int_0^1 \frac{e^x}{p(x)}\, p(x)\,\mathrm{d}x = \int_0^1 e^x\,\mathrm{d}x = I$$
shows that $\hat{I}_N$ is an unbiased estimator of $I$, the estimator does not have a finite variance, since
$$\int_0^1 \left(\frac{e^x}{p(x)}\right)^2 p(x)\,\mathrm{d}x = \int_0^1 \frac{e^{2x}}{2.5\,x^{1.5}}\,\mathrm{d}x = \infty:$$
the integrand behaves like $x^{-1.5}/2.5$ near zero, so the integral diverges at $x=0$. For instance,

    > x = runif(1e7)^(1/2.5)
    > range(exp(x)/x^1.5)
    [1]     2.718282 83403.685972

shows that the weights can differ widely. I am not surprised by the figures reported in the slides, since

    > mean(exp(x)/x^1.5)/2.5
    [1] 1.717576
    > var(exp(x)/x^1.5)/(2.5)^2/1e7
    [1] 2.070953e-06

but the empirical variance is rarely able to detect infinite-variance importance sampling. (The graph shows that for both the standard Monte Carlo estimate and the importance-sampling version, the empirical standard deviation decreases as $N^{-1/2}$.)
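To see the heavy right tail of the weights numerically, here is a rough Python translation of the R snippet above (a sketch under the same setup; the sample size is reduced and the names are my own):

```python
import math
import random

random.seed(1)  # fixed seed so the run is reproducible

N = 100_000
# Samples from p(x) = 2.5 x^1.5 via inverse transform, with U on (0, 1].
xs = [(1.0 - random.random()) ** (1 / 2.5) for _ in range(N)]

# Unnormalized weights e^x / x^1.5, as in the R code (the constant 2.5
# is divided out afterwards).
w = [math.exp(x) / x ** 1.5 for x in xs]

w_min, w_max = min(w), max(w)
print(w_min, w_max)  # minimum is near e = 2.718..., maximum blows up as x -> 0

est = sum(w) / N / 2.5
print(est)  # still a sensible point estimate of e - 1, despite infinite variance
```

The spread between the smallest and largest weight grows without bound as more samples are drawn, which is the signature of an infinite-variance importance sampler that the empirical variance fails to flag.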






answered 4 hours ago by Xi'an (edited 4 hours ago)



























Thanks for contributing an answer to Cross Validated!