Bayes' theorem
In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule), named after Thomas Bayes, describes the probability of an event, based on prior knowledge of conditions that might be related to the event.[1] For example, if the risk of developing health problems is known to increase with age, Bayes' theorem allows the risk to an individual of a known age to be assessed more accurately by conditioning it relative to their age, rather than assuming that the individual is typical of the population as a whole.
One of the many applications of Bayes' theorem is Bayesian inference, a particular approach to statistical inference. When applied, the probabilities involved in the theorem may have different probability interpretations. With Bayesian probability interpretation, the theorem expresses how a degree of belief, expressed as a probability, should rationally change to account for the availability of related evidence. Bayesian inference is fundamental to Bayesian statistics. It has been considered to be "to the theory of probability what Pythagoras's theorem is to geometry."[2]
By Bayes' law, both the prevalence of a disease in a given population and the error rate of an infectious-disease test must be taken into account to evaluate the meaning of a positive test result correctly and avoid the base-rate fallacy.
Bayes' theorem is named after the Reverend Thomas Bayes (/beɪz/), also a statistician and philosopher. Bayes used conditional probability to provide an algorithm (his Proposition 9) that uses evidence to calculate limits on an unknown parameter. His work was published in 1763 as An Essay towards solving a Problem in the Doctrine of Chances. Bayes studied how to compute a distribution for the probability parameter of a binomial distribution (in modern terminology). On Bayes's death his family transferred his papers to a friend, the minister, philosopher, and mathematician Richard Price.
Over two years, Richard Price significantly edited the unpublished manuscript, before sending it to a friend who read it aloud at the Royal Society on 23 December 1763.[3] Price edited[4] Bayes's major work "An Essay towards solving a Problem in the Doctrine of Chances" (1763), which appeared in Philosophical Transactions,[5] and contains Bayes' theorem. Price wrote an introduction to the paper which provides some of the philosophical basis of Bayesian statistics and chose one of the two solutions offered by Bayes. In 1765, Price was elected a Fellow of the Royal Society in recognition of his work on the legacy of Bayes.[6][7] On 27 April a letter sent to his friend Benjamin Franklin was read out at the Royal Society, and later published, where Price applies this work to population and computing 'life-annuities'.[8]
Independently of Bayes, Pierre-Simon Laplace in 1774, and later in his 1812 Théorie analytique des probabilités, used conditional probability to formulate the relation of an updated posterior probability from a prior probability, given evidence. He reproduced and extended Bayes's results in 1774, apparently unaware of Bayes's work.[note 1][9] The Bayesian interpretation of probability was developed mainly by Laplace.[10]
About 200 years later, Sir Harold Jeffreys put Bayes's algorithm and Laplace's formulation on an axiomatic basis, writing in a 1973 book that Bayes' theorem "is to the theory of probability what the Pythagorean theorem is to geometry".[2]
Stephen Stigler used a Bayesian argument to conclude that Bayes' theorem was discovered by Nicholas Saunderson, a blind English mathematician, some time before Bayes;[11][12] that interpretation, however, has been disputed.[13] Martyn Hooper[14] and Sharon McGrayne[15] have argued that Richard Price's contribution was substantial:
By modern standards, we should refer to the Bayes–Price rule. Price discovered Bayes's work, recognized its importance, corrected it, contributed to the article, and found a use for it. The modern convention of employing Bayes's name alone is unfair but so entrenched that anything else makes little sense.[15]
Bayes' theorem is stated mathematically as the following equation:[16]

P(A | B) = P(B | A) P(A) / P(B)

where A and B are events and P(B) ≠ 0.

- P(A | B) is a conditional probability: the probability of event A occurring given that B is true. It is also called the posterior probability of A given B.
- P(B | A) is also a conditional probability: the probability of event B occurring given that A is true. It can also be interpreted as the likelihood of A given a fixed B because P(B | A) = L(A | B).
- P(A) and P(B) are the probabilities of observing A and B respectively without any given conditions; they are known as the prior probability and the marginal probability.
Proof
For events
Bayes' theorem may be derived from the definition of conditional probability:

P(A | B) = P(A ∩ B) / P(B), if P(B) ≠ 0,

where P(A ∩ B) is the probability of both A and B being true. Similarly,

P(B | A) = P(A ∩ B) / P(A), if P(A) ≠ 0.

Solving for P(A ∩ B) and substituting into the above expression for P(A | B) yields Bayes' theorem:

P(A | B) = P(B | A) P(A) / P(B), if P(B) ≠ 0.
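The derivation above can be checked numerically. The following sketch uses an illustrative 2×2 joint distribution (the probabilities are made up for the check, not taken from the article) and verifies that the conditional probabilities computed from their definitions satisfy Bayes' theorem:

```python
# Numerical check of Bayes' theorem on a small joint distribution.
# The joint probabilities below are illustrative values, not from the article.
p_joint = {("A", "B"): 0.12, ("A", "not B"): 0.18,
           ("not A", "B"): 0.28, ("not A", "not B"): 0.42}

p_A = p_joint[("A", "B")] + p_joint[("A", "not B")]   # marginal P(A)
p_B = p_joint[("A", "B")] + p_joint[("not A", "B")]   # marginal P(B)

p_A_given_B = p_joint[("A", "B")] / p_B   # definition of conditional probability
p_B_given_A = p_joint[("A", "B")] / p_A   # definition of conditional probability

# Bayes' theorem: P(A | B) = P(B | A) P(A) / P(B)
assert abs(p_A_given_B - p_B_given_A * p_A / p_B) < 1e-12
print(round(p_A_given_B, 4))  # 0.3
```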
For continuous random variables
For two continuous random variables X and Y, Bayes' theorem may be analogously derived from the definition of conditional density:

f_{X|Y=y}(x) = f_{X,Y}(x, y) / f_Y(y)

f_{Y|X=x}(y) = f_{X,Y}(x, y) / f_X(x)

Therefore,

f_{X|Y=y}(x) = f_{Y|X=x}(y) f_X(x) / f_Y(y).
General case
Let P_Y^x be the conditional distribution of Y given X = x and let P_X be the distribution of X. The joint distribution is then P_{X,Y}(dx, dy) = P_Y^x(dy) P_X(dx). The conditional distribution P_X^y of X given Y = y is then determined by

P_X^y(A) = E(1_A(X) | Y = y).
Existence and uniqueness of the needed conditional expectation is a consequence of the Radon–Nikodym theorem. This was formulated by Kolmogorov in his famous 1933 book. Kolmogorov underlines the importance of conditional probability by writing "I wish to call attention to ... and especially the theory of conditional probabilities and conditional expectations ..." in the Preface.[17] Bayes' theorem determines the posterior distribution from the prior distribution. Uniqueness requires continuity assumptions.[18][19] Bayes' theorem can be generalized to include improper prior distributions such as the uniform distribution on the real line.[20] Modern Markov chain Monte Carlo methods have boosted the importance of Bayes' theorem, including in cases with improper priors.[21]
Recreational mathematics
Bayes' rule and computing conditional probabilities provide a solution method for a number of popular puzzles, such as the Three Prisoners problem, the Monty Hall problem, the Two Child problem and the Two Envelopes problem.
Drug testing
Suppose a particular test for whether someone has been using cannabis is 90% sensitive, meaning the true positive rate (TPR) = 0.90. The test therefore yields 90% true positive results (correct identification of drug use) for cannabis users.
The test is also 80% specific, meaning true negative rate (TNR) = 0.80. Therefore, the test correctly identifies 80% of non-use for non-users, but also generates 20% false positives, or false positive rate (FPR) = 0.20, for non-users.
Assuming 0.05 prevalence, meaning 5% of people use cannabis, what is the probability that a random person who tests positive is really a cannabis user?
The positive predictive value (PPV) of a test is the proportion of persons who are actually positive out of all those testing positive; it can be calculated from a sample as:
- PPV = True positive / Tested positive
If sensitivity, specificity, and prevalence are known, PPV can be calculated using Bayes' theorem. Let P(User | Positive) mean "the probability that someone is a cannabis user given that they test positive," which is what is meant by PPV. We can write:

P(User | Positive) = P(Positive | User) P(User) / P(Positive)
= P(Positive | User) P(User) / [P(Positive | User) P(User) + P(Positive | Non-user) P(Non-user)]
= (0.90 × 0.05) / (0.90 × 0.05 + 0.20 × 0.95)
= 0.045 / 0.235 ≈ 19%

The fact that P(Positive) = P(Positive | User) P(User) + P(Positive | Non-user) P(Non-user) is a direct application of the Law of Total Probability. In this case, it says that the probability that someone tests positive is the probability that a user tests positive, times the probability of being a user, plus the probability that a non-user tests positive, times the probability of being a non-user. This is true because the classifications user and non-user form a partition of a set, namely the set of people who take the drug test. This combined with the definition of conditional probability results in the above statement.
In other words, even if someone tests positive, the probability that they are a cannabis user is only 19%—this is because in this group, only 5% of people are users, and most positives are false positives coming from the remaining 95%.
If 1,000 people were tested:
- 950 are non-users, and 190 of them give false positives (0.20 × 950)
- 50 are users, and 45 of them give true positives (0.90 × 50)

The 1,000 people thus yield 235 positive tests, of which only 45 are genuine drug users, about 19%. See Figure 1 for an illustration using a frequency box, and note how small the pink area of true positives is compared to the blue area of false positives.
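The calculation above can be sketched directly from the stated sensitivity, specificity, and prevalence, with P(Positive) expanded by the law of total probability:

```python
# Drug-testing example: PPV via Bayes' theorem.
sensitivity = 0.90   # P(Positive | User)
specificity = 0.80   # P(Negative | Non-user)
prevalence = 0.05    # P(User)

# Law of total probability: P(Positive)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' theorem: P(User | Positive)
ppv = sensitivity * prevalence / p_positive

print(round(p_positive, 3))  # 0.235
print(round(ppv, 3))         # 0.191
```

The same numbers fall out of the frequency argument: 45 true positives out of 235 total positives.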
Sensitivity or specificity
The importance of specificity can be seen by showing that even if sensitivity is raised to 100% and specificity remains at 80%, the probability of someone testing positive really being a cannabis user only rises from 19% to 21%, but if the sensitivity is held at 90% and the specificity is increased to 95%, the probability rises to 49%.
| Actual \ Test | Positive | Negative | Total |
|---|---|---|---|
| User | 45 | 5 | 50 |
| Non-user | 190 | 760 | 950 |
| Total | 235 | 765 | 1000 |

90% sensitive, 80% specific, PPV = 45/235 ≈ 19%
| Actual \ Test | Positive | Negative | Total |
|---|---|---|---|
| User | 50 | 0 | 50 |
| Non-user | 190 | 760 | 950 |
| Total | 240 | 760 | 1000 |

100% sensitive, 80% specific, PPV = 50/240 ≈ 21%
| Actual \ Test | Positive | Negative | Total |
|---|---|---|---|
| User | 45 | 5 | 50 |
| Non-user | 47 | 903 | 950 |
| Total | 92 | 908 | 1000 |

90% sensitive, 95% specific, PPV = 45/92 ≈ 49%
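The three scenarios in the tables can be reproduced with a small helper (a sketch; the function name is my own, not from the article):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence            # P(Positive | User) P(User)
    false_pos = (1 - specificity) * (1 - prevalence)  # P(Positive | Non-user) P(Non-user)
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.90, 0.80, 0.05), 2))  # 0.19
print(round(ppv(1.00, 0.80, 0.05), 2))  # 0.21
print(round(ppv(0.90, 0.95, 0.05), 2))  # 0.49
```

Note how improving specificity moves the PPV far more than improving sensitivity, exactly as the tables show.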
Cancer rate
Even if 100% of patients with pancreatic cancer have a certain symptom, someone who has the same symptom does not necessarily have a 100% chance of having pancreatic cancer. Assuming the incidence rate of pancreatic cancer is 1/100000, while 10/99999 healthy individuals have the same symptoms worldwide, the probability of having pancreatic cancer given the symptoms is only 9.1%; the other 90.9% could be "false positives" (that is, falsely said to have cancer; "positive" is a confusing term when, as here, the test gives bad news).
Based on incidence rate, the following table presents the corresponding numbers per 100,000 people.
| Cancer \ Symptom | Yes | No | Total |
|---|---|---|---|
| Yes | 1 | 0 | 1 |
| No | 10 | 99989 | 99999 |
| Total | 11 | 99989 | 100000 |
These numbers can then be used to calculate the probability of having cancer given the symptoms:

P(Cancer | Symptoms) = P(Symptoms | Cancer) P(Cancer) / P(Symptoms) = (1 × 1/100000) / (11/100000) = 1/11 ≈ 9.1%
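As a quick check, the same 1/11 posterior follows from the stated incidence and symptom rates:

```python
# Cancer-rate example: P(Cancer | Symptoms) via Bayes' theorem.
p_cancer = 1 / 100_000                 # incidence rate
p_symptom_given_cancer = 1.0           # 100% of patients have the symptom
p_symptom_given_healthy = 10 / 99_999  # symptom rate among healthy people

# Law of total probability: P(Symptoms)
p_symptom = (p_symptom_given_cancer * p_cancer
             + p_symptom_given_healthy * (1 - p_cancer))

p_cancer_given_symptom = p_symptom_given_cancer * p_cancer / p_symptom
print(round(p_cancer_given_symptom, 3))  # 0.091
```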
Defective item rate
| Machine \ Condition | Defective | Flawless | Total |
|---|---|---|---|
| A | 10 | 190 | 200 |
| B | 9 | 291 | 300 |
| C | 5 | 495 | 500 |
| Total | 24 | 976 | 1000 |
A factory produces items using three machines—A, B, and C—which account for 20%, 30%, and 50% of its output respectively. Of the items produced by machine A, 5% are defective; similarly, 3% of machine B's items and 1% of machine C's are defective. If a randomly selected item is defective, what is the probability it was produced by machine C?
Once again, the answer can be reached without using the formula by applying the conditions to a hypothetical number of cases. For example, if the factory produces 1,000 items, 200 will be produced by Machine A, 300 by Machine B, and 500 by Machine C. Machine A will produce 5% × 200 = 10 defective items, Machine B 3% × 300 = 9, and Machine C 1% × 500 = 5, for a total of 24. Thus, the likelihood that a randomly selected defective item was produced by machine C is 5/24 (~20.83%).
This problem can also be solved using Bayes' theorem: Let Xi denote the event that a randomly chosen item was made by the i th machine (for i = A,B,C). Let Y denote the event that a randomly chosen item is defective. Then, we are given the following information:
If the item was made by the first machine, then the probability that it is defective is 0.05; that is, P(Y | XA) = 0.05. Overall, we have

P(XA) = 0.2, P(XB) = 0.3, P(XC) = 0.5,
P(Y | XA) = 0.05, P(Y | XB) = 0.03, P(Y | XC) = 0.01.
To answer the original question, we first find P(Y). That can be done in the following way:

P(Y) = Σi P(Y | Xi) P(Xi) = 0.05 × 0.2 + 0.03 × 0.3 + 0.01 × 0.5 = 0.024.
Hence, 2.4% of the total output is defective.
We are given that Y has occurred, and we want to calculate the conditional probability of XC. By Bayes' theorem,

P(XC | Y) = P(Y | XC) P(XC) / P(Y) = (0.01 × 0.50) / 0.024 = 5/24.
Given that the item is defective, the probability that it was made by machine C is 5/24. Although machine C produces half of the total output, it produces a much smaller fraction of the defective items. Hence the knowledge that the item selected was defective enables us to replace the prior probability P(XC) = 1/2 by the smaller posterior probability P(XC | Y) = 5/24.
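The whole calculation, prior shares, total defect rate, and posterior, can be sketched in a few lines:

```python
# Defective-item example: which machine produced a defective item?
priors = {"A": 0.20, "B": 0.30, "C": 0.50}       # share of total output, P(Xi)
defect_rate = {"A": 0.05, "B": 0.03, "C": 0.01}  # P(Y | Xi)

# Law of total probability: P(Y), the overall defect rate
p_defective = sum(defect_rate[m] * priors[m] for m in priors)

# Bayes' theorem: P(XC | Y)
posterior_C = defect_rate["C"] * priors["C"] / p_defective

print(round(p_defective, 3))  # 0.024
print(round(posterior_C, 4))  # 0.2083  (= 5/24)
```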
The interpretation of Bayes' rule depends on the interpretation of probability ascribed to the terms. The two predominant interpretations are described below. Figure 2 shows a geometric visualization.
Bayesian interpretation
In the Bayesian (or epistemological) interpretation, probability measures a "degree of belief". Bayes' theorem links the degree of belief in a proposition before and after accounting for evidence. For example, suppose it is believed with 50% certainty that a coin is twice as likely to land heads as tails. If the coin is flipped a number of times and the outcomes observed, that degree of belief will probably rise or fall, but might even remain the same, depending on the results. For proposition A and evidence B,
- P(A), the prior, is the initial degree of belief in A.
- P(A | B), the posterior, is the degree of belief after incorporating the news that B is true.
- The quotient P(B | A) / P(B) represents the support B provides for A.
For more on the application of Bayes' theorem under the Bayesian interpretation of probability, see Bayesian inference.
Frequentist interpretation
In the frequentist interpretation, probability measures a "proportion of outcomes". For example, suppose an experiment is performed many times. P(A) is the proportion of outcomes with property A (the prior) and P(B) is the proportion with property B. P(B | A) is the proportion of outcomes with property B out of outcomes with property A, and P(A | B) is the proportion of those with A out of those with B (the posterior).
The role of Bayes' theorem is best visualized with tree diagrams such as Figure 3. The two diagrams partition the same outcomes by A and B in opposite orders, to obtain the inverse probabilities. Bayes' theorem links the different partitionings.
Example
An entomologist spots what might, due to the pattern on its back, be a rare subspecies of beetle. A full 98% of the members of the rare subspecies have the pattern, so P(Pattern | Rare) = 98%. Only 5% of members of the common subspecies have the pattern. The rare subspecies is 0.1% of the total population. How likely is the beetle having the pattern to be rare: what is P(Rare | Pattern)?
From the extended form of Bayes' theorem (since any beetle is either rare or common),

P(Rare | Pattern) = P(Pattern | Rare) P(Rare) / [P(Pattern | Rare) P(Rare) + P(Pattern | Common) P(Common)]
= (0.98 × 0.001) / (0.98 × 0.001 + 0.05 × 0.999) ≈ 1.9%.
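The extended form used here can be sketched numerically from the stated rates:

```python
# Beetle example: P(Rare | Pattern) via the extended form of Bayes' theorem.
p_rare = 0.001                 # rare subspecies is 0.1% of the population
p_pattern_given_rare = 0.98    # 98% of rare beetles have the pattern
p_pattern_given_common = 0.05  # 5% of common beetles have the pattern

# Denominator: P(Pattern), summed over the rare/common partition
p_pattern = (p_pattern_given_rare * p_rare
             + p_pattern_given_common * (1 - p_rare))

p_rare_given_pattern = p_pattern_given_rare * p_rare / p_pattern
print(round(p_rare_given_pattern, 3))  # 0.019
```

Despite the pattern being strong evidence (98% vs 5%), the tiny 0.1% prior keeps the posterior below 2%.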