In Search of #Bayesian Inference
http://cacm.acm.org/magazines/2015/1/181628-in-search-of-bayesian-inference/fulltext Nice intuition on priors in recovering air-crash wreckage & analyzing mammograms
QT:{{”
In its most basic form, Bayes’ Law is a simple method for updating beliefs in the light of new evidence. Suppose there is some statement A that you initially believe has a probability P(A) of being correct (what Bayesians call the “prior” probability). If a new piece of evidence, B, comes along, then the probability that A is true given that B has happened (what Bayesians call the “posterior” probability) is given by
P(A|B)=P(B|A) P(A) / P(B)
where P(B|A) is the likelihood that B would occur if A is true, and P(B) is the likelihood that B would occur under any circumstances.
Consider an example described in Silver’s book The Signal and the Noise: A woman in her forties has a positive mammogram, and wants to know the probability she has breast cancer. Bayes’ Law says that to answer this question, we need to know three things: the probability that a woman in her forties will have breast cancer (about 1.4%); the probability that if a woman has breast cancer, the mammogram will detect it (about 75%); and the probability that any random woman in her forties will have a positive mammogram (about 11%). Putting these figures together, Bayes’ Law—named after the Reverend Thomas Bayes, whose manuscript on the subject was published posthumously in 1763—says the probability the woman has cancer, given her positive mammogram result, is just under 10%; in other words, about 9 out of 10 such mammogram results are false positives.
In this simple setting, it is clear how to construct the prior, since there is plenty of data available on cancer rates. In such cases, the use of Bayes’ Law is uncontroversial, and essentially a tautology—it simply says the woman’s probability of having cancer, in light of her positive mammogram result, is given by the proportion of positive mammograms that are true positives.

Things get murkier when statisticians use Bayes’ rule to try to reason about one-time events, or other situations in which there is no clear consensus about what the prior probabilities are. For example, large passenger airplanes do not crash into the ocean very often, and when they do, the circumstances vary widely. In such cases, the very notion of prior probability is inherently subjective; it represents our best belief, based on previous experiences, about what is likely to be true in this particular case. If this initial belief is way off, we are likely to get bad inferences.
“}}
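The mammogram arithmetic in the excerpt can be checked directly. A minimal Python sketch plugging the quoted figures (1.4% prior, 75% detection rate, 11% overall positive rate) into Bayes' Law:

```python
# Bayes' Law: P(A|B) = P(B|A) * P(A) / P(B), using the figures quoted above.
p_cancer = 0.014            # P(A): prior that a woman in her forties has breast cancer
p_pos_given_cancer = 0.75   # P(B|A): mammogram detects cancer when it is present
p_pos = 0.11                # P(B): any random woman in her forties tests positive

p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(f"P(cancer | positive mammogram) = {p_cancer_given_pos:.1%}")
# Comes out to roughly 9.5% -- "just under 10%", so about 9 in 10
# positive results are false positives, matching the article.
```

The result also illustrates why the prior dominates here: because the base rate (1.4%) is so low relative to the overall positive rate, even a fairly sensitive test leaves the posterior well under 10%.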