Bayes' Theorem
P(A|B) = P(B|A) × P(A) / P(B)
Bayes' theorem provides a way to update the probability of a hypothesis (A) after observing evidence (B). It connects the prior probability P(A) with the likelihood P(B|A) to produce the posterior probability P(A|B). It is foundational in Bayesian statistics, medical diagnosis, and machine learning.
Variables
P(A|B): The updated (posterior) probability of A given that B has occurred
P(B|A): The likelihood of observing B given that A is true
P(A): The initial (prior) probability of A before observing B
P(B): The total probability of B occurring, often computed using the law of total probability
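The variables above map directly into code. Below is a minimal sketch of the theorem as two helper functions; the function names are illustrative, not from any particular library:

```python
def total_probability(likelihood, prior, likelihood_not, prior_not):
    """P(B) for a binary hypothesis, via the law of total probability:
    P(B) = P(B|A)*P(A) + P(B|not A)*P(not A)."""
    return likelihood * prior + likelihood_not * prior_not

def bayes_posterior(prior, likelihood, evidence):
    """Posterior P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / evidence
```

With the disease-screening numbers used later in this page, `total_probability(0.95, 0.01, 0.03, 0.99)` gives 0.0392 and the posterior comes out near 0.242.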
Example Calculation
Scenario
A disease affects 1% of a population. A test detects the disease 95% of the time (sensitivity) and has a 3% false positive rate. If a person tests positive, what is the probability they have the disease?
Given Data
P(Disease) = 0.01 (prevalence)
P(Positive|Disease) = 0.95 (sensitivity)
P(Positive|No Disease) = 0.03 (false positive rate)
Calculation
P(B) = P(Pos|Disease)×P(Disease) + P(Pos|No Disease)×P(No Disease) = 0.95×0.01 + 0.03×0.99 = 0.0095 + 0.0297 = 0.0392
P(Disease|Pos) = (0.95×0.01)/0.0392 ≈ 0.242
Result
P(Disease|Positive) ≈ 0.242, or about 24.2%
Interpretation
Despite the test's 95% sensitivity, a positive result means only about a 24.2% chance that the person actually has the disease. Because the prevalence is low (1%), most positive results are false positives. This illustrates why screening test results must be interpreted carefully.
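The calculation above can be reproduced in a few lines. This is a direct transcription of the worked example, with the given quantities as named variables:

```python
p_disease = 0.01            # prevalence: P(Disease)
sensitivity = 0.95          # P(Positive | Disease)
false_positive_rate = 0.03  # P(Positive | No Disease)

# Law of total probability: P(Positive)
p_positive = sensitivity * p_disease + false_positive_rate * (1 - p_disease)

# Bayes' theorem: P(Disease | Positive)
posterior = sensitivity * p_disease / p_positive
```

Here `p_positive` evaluates to 0.0392 and `posterior` to roughly 0.242, matching the result above.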
When to Use This Formula
- Updating probabilities after new evidence is observed
- Medical diagnostic testing and interpreting sensitivity and specificity
- Spam filtering, classification, and machine learning algorithms
- Any situation requiring the reversal of conditional probabilities
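The spam-filtering use case follows the same pattern as the medical example. Below is a toy sketch with made-up numbers (the prior spam rate and word frequencies are assumptions for illustration only):

```python
# Hypothetical values for a single-word spam filter.
p_spam = 0.4               # assumed prior: fraction of mail that is spam
p_word_given_spam = 0.2    # assumed P("free" appears | spam)
p_word_given_ham = 0.01    # assumed P("free" appears | not spam)

# Law of total probability: P(word appears)
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Bayes' theorem: P(spam | word appears)
p_spam_given_word = p_word_given_spam * p_spam / p_word
```

Even with these modest numbers the posterior jumps to roughly 93%, which is why single strong indicator words are so informative to a filter.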
Common Mistakes
- Confusing P(A|B) with P(B|A), known as the prosecutor's fallacy
- Forgetting to use the law of total probability to compute P(B) in the denominator
- Ignoring the base rate (prior probability), which dramatically affects the result
- Assuming prior probabilities without justification
FAQs
Common questions about this formula
What is the difference between Bayesian and frequentist statistics?
Frequentist statistics treats probability as the long-run frequency of events and parameters as fixed but unknown. Bayesian statistics treats probability as a degree of belief and allows parameters to have probability distributions. Bayes' theorem is the mechanism for updating beliefs with data in the Bayesian framework.
How do I compute P(B) in the denominator?
Use the law of total probability: P(B) = P(B|A)×P(A) + P(B|not A)×P(not A). This sums the probability of B across all possible states of A. In problems with more than two hypotheses, extend the sum over all possible hypotheses.
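The multi-hypothesis extension mentioned above can be sketched as follows. The priors and likelihoods here are arbitrary illustrative values:

```python
# Three competing hypotheses with assumed priors and likelihoods P(B | H_i).
priors = [0.5, 0.3, 0.2]
likelihoods = [0.9, 0.5, 0.1]

# Law of total probability: P(B) = sum over i of P(B|H_i) * P(H_i)
p_b = sum(l * p for l, p in zip(likelihoods, priors))

# Posterior for each hypothesis: P(H_i | B)
posteriors = [l * p / p_b for l, p in zip(likelihoods, priors)]
```

Note that the posteriors always sum to 1, because P(B) acts as a normalizing constant over the full set of hypotheses.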