Recursively updating the MLE as new observations stream in


General Question

Say we have iid data $x_1, x_2, \ldots \sim f(x \,|\, \boldsymbol{\theta})$ streaming in. We want to recursively compute the maximum likelihood estimate of $\boldsymbol{\theta}$. That is, having computed
$$\hat{\boldsymbol{\theta}}_{n-1} = \underset{\boldsymbol{\theta} \in \mathbb{R}^p}{\operatorname{argmax}} \prod_{i=1}^{n-1} f(x_i \,|\, \boldsymbol{\theta}),$$
we observe a new $x_n$, and wish to somehow incrementally update our estimate
$$\hat{\boldsymbol{\theta}}_{n-1},\, x_n \;\to\; \hat{\boldsymbol{\theta}}_n$$
without having to start from scratch. Are there generic algorithms for this?

Toy Example

If $x_1, x_2, \ldots \sim N(x \,|\, \mu, 1)$, then
$$\hat{\mu}_{n-1} = \frac{1}{n-1}\sum_{i=1}^{n-1} x_i \quad\text{and}\quad \hat{\mu}_n = \frac{1}{n}\sum_{i=1}^{n} x_i,$$
so
$$\hat{\mu}_n = \frac{1}{n}\left[(n-1)\hat{\mu}_{n-1} + x_n\right].$$
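A minimal sketch of this streaming update in Python (the class and its names are illustrative, not from any library):

```python
class StreamingMeanMLE:
    """Running MLE of mu for N(mu, 1) data, one observation at a time."""

    def __init__(self):
        self.n = 0
        self.mu_hat = 0.0

    def update(self, x_n):
        # hat{mu}_n = [(n - 1) * hat{mu}_{n-1} + x_n] / n,
        # written in the numerically friendlier incremental form
        self.n += 1
        self.mu_hat += (x_n - self.mu_hat) / self.n
        return self.mu_hat


est = StreamingMeanMLE()
for x in [1.2, 0.7, 1.9]:
    print(est.update(x))  # 1.2, 0.95, 1.2666...
```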










Tags: maximum-likelihood, online

asked 2 hours ago by bamts, edited 1 hour ago


  • Awesome question!
    – dlnB
    2 hours ago

  • Don't forget the inverse of this problem: updating the estimator as old observations are deleted.
    – Hong Ooi
    34 mins ago





2 Answers


Answer by Glen_b (score 4; answered 1 hour ago, edited 1 hour ago)

See the concept of sufficiency and, in particular, minimal sufficient statistics. In many cases you need the whole sample to compute the estimate at a given sample size, with no trivial way to update from a sample one size smaller (i.e. there's no convenient general result).



If the distribution is in the exponential family (and in some other cases besides; the uniform is a neat example), there's a low-dimensional sufficient statistic that can in many cases be updated in the manner you seek (i.e. for a number of commonly used distributions there is a fast update).
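As a sketch of such a fast update (the class is mine): for $N(\mu, \sigma^2)$ the sufficient statistic is $(n, \sum_i x_i, \sum_i x_i^2)$, and for $U(0, \theta)$ the running maximum plays the same role.

```python
class StreamingNormalMLE:
    """O(1)-per-observation MLE of (mu, sigma^2) by updating the
    normal family's sufficient statistics (n, sum_x, sum_x2)."""

    def __init__(self):
        self.n, self.sum_x, self.sum_x2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        self.sum_x += x
        self.sum_x2 += x * x
        mu_hat = self.sum_x / self.n
        sigma2_hat = self.sum_x2 / self.n - mu_hat**2  # MLE divides by n, not n - 1
        return mu_hat, sigma2_hat
```

In floating point, accumulating $\sum_i x_i^2$ directly can lose precision on long streams; Welford's online algorithm tracks the same information more stably.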



One example for which I'm not aware of any direct way to either calculate or update the estimate is the location of the Cauchy distribution (e.g. with unit scale, to keep it a simple one-parameter problem). There may be a faster update that I simply haven't noticed - I can't say I've done more than glance at the updating case.



On the other hand, with MLEs that are obtained via numerical optimization methods, the previous estimate would in many cases be a great starting point, since it would typically be very close to the updated estimate; in that sense at least, rapid updating should often be possible. Even this isn't the general case, though -- with multimodal likelihood functions (again, see the Cauchy for an example), a new observation might move the highest mode some distance from the previous one (even if the locations of each of the biggest few modes didn't shift much, which one is highest could well change).
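A sketch of that warm-starting idea for the unit-scale Cauchy location (assumes NumPy and SciPy; note it still keeps all the data, it only speeds up the re-optimization):

```python
import numpy as np
from scipy.optimize import minimize

def cauchy_nll(theta, x):
    # negative log-likelihood of Cauchy(location=theta, scale=1), up to a constant
    return np.sum(np.log1p((x - theta) ** 2))

theta_hat = 0.0
x_seen = []
for x_n in np.random.standard_cauchy(50):
    x_seen.append(x_n)
    # warm start at the previous MLE; usually only a step or two is needed,
    # but a local optimizer can miss a jump to a different (now-highest) mode
    res = minimize(cauchy_nll, x0=[theta_hat], args=(np.asarray(x_seen),))
    theta_hat = res.x.item()
```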






Answer by Cliff AB (score 0; answered 46 mins ago)

    In machine learning, this is referred to as online learning.



As @Glen_b pointed out, there are special cases in which the MLE can be updated without needing to access all the previous data. As he also points out, I don't believe there's a generic solution for updating the MLE exactly.



A fairly generic approach to finding an approximate solution is to use something like stochastic gradient descent: as each observation comes in, we compute the gradient of the log-likelihood with respect to the parameters at this individual observation and move the parameter values a small step in that direction. Under certain conditions, we can show that this converges to a neighborhood of the MLE with high probability; the neighborhood shrinks as we reduce the step size, but more data is then required for convergence. However, these stochastic methods generally require much more fiddling to obtain good performance than, say, closed-form updates.
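A sketch of such an online gradient step (the step size, model, and names are illustrative), again using the unit-scale Cauchy location:

```python
import numpy as np

def nll_grad(theta, x):
    # d/dtheta of -log f(x | theta) for Cauchy(location=theta, scale=1)
    return -2.0 * (x - theta) / (1.0 + (x - theta) ** 2)

theta, step = 0.0, 0.05  # small constant step; a decaying schedule also works
for x_n in np.random.standard_cauchy(100_000):
    theta -= step * nll_grad(theta, x_n)  # one cheap update per observation
# theta now wanders in a small neighborhood of the MLE
# (and of the true location, 0, for this simulated stream)
```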





