
Bayes' Theorem

P(A|B) = P(B|A) × P(A) / P(B)

Bayes' theorem provides a way to update the probability of a hypothesis (A) after observing evidence (B). It connects the prior probability P(A) with the likelihood P(B|A) to produce the posterior probability P(A|B). It is foundational in Bayesian statistics, medical diagnosis, and machine learning.
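The update rule can be sketched as a small Python function (the function name and example numbers are illustrative, not from any particular library):

```python
def bayes_posterior(prior: float, likelihood: float, marginal: float) -> float:
    """P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / marginal

# Illustrative numbers: prior P(A) = 0.5, likelihood P(B|A) = 0.8, marginal P(B) = 0.6
posterior = bayes_posterior(prior=0.5, likelihood=0.8, marginal=0.6)
print(posterior)  # 2/3: observing B raised the probability of A from 0.5 to about 0.67
```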

Variables

P(A|B) = Posterior Probability

The updated probability of A given that B has occurred

P(B|A) = Likelihood

The probability of observing B given that A is true

P(A) = Prior Probability

The initial probability of A before observing B

P(B) = Marginal Probability

The total probability of B occurring, often computed using the law of total probability

Example Calculation

Scenario

A disease affects 1% of a population. A test detects the disease 95% of the time (sensitivity) and has a 3% false positive rate. If a person tests positive, what is the probability they have the disease?

Given Data

P(Disease) = 0.01
P(Positive|Disease) = 0.95
P(Positive|No Disease) = 0.03

Calculation

P(Positive) = P(Pos|Disease) × P(Disease) + P(Pos|No Disease) × P(No Disease)
P(Positive) = 0.95 × 0.01 + 0.03 × 0.99 = 0.0095 + 0.0297 = 0.0392
P(Disease|Pos) = (0.95 × 0.01) / 0.0392
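The arithmetic above can be checked with a few lines of Python (variable names are mine):

```python
# Numbers from the scenario above
p_disease = 0.01            # prior: prevalence of the disease
sensitivity = 0.95          # P(Positive | Disease)
false_positive_rate = 0.03  # P(Positive | No Disease)

# Law of total probability: P(Positive)
p_positive = sensitivity * p_disease + false_positive_rate * (1 - p_disease)

# Bayes' theorem: P(Disease | Positive)
p_disease_given_positive = sensitivity * p_disease / p_positive

print(round(p_positive, 4))                # 0.0392
print(round(p_disease_given_positive, 3))  # 0.242
```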

Result

P(Disease|Positive) = 0.242 or about 24.2%

Interpretation

Despite the test's 95% sensitivity (and 97% specificity), a positive result means only about a 24.2% chance that the person actually has the disease. Because the prevalence is low (1%), most positive results come from the much larger healthy group and are false positives. This illustrates why screening test results must be interpreted alongside the base rate.
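To see the base-rate effect directly, one can sweep the prevalence while holding the test's characteristics fixed (a sketch; the chosen prevalence values are arbitrary):

```python
sensitivity = 0.95          # P(Positive | Disease)
false_positive_rate = 0.03  # P(Positive | No Disease)

def posterior(prevalence: float) -> float:
    """P(Disease | Positive) for a given disease prevalence."""
    p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    return sensitivity * prevalence / p_positive

for prevalence in (0.001, 0.01, 0.10, 0.50):
    print(f"prevalence {prevalence:6.1%} -> P(Disease|Positive) = {posterior(prevalence):.1%}")
```

With the same test, the posterior climbs from roughly 3% at 0.1% prevalence to well above 90% at 50% prevalence, which is the base-rate effect described above.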

When to Use This Formula

  • ✓ Updating probabilities after new evidence is observed
  • ✓ Medical diagnostic testing and interpreting sensitivity and specificity
  • ✓ Spam filtering, classification, and machine learning algorithms
  • ✓ Any situation requiring the reversal of conditional probabilities

Common Mistakes

  • ✗ Confusing P(A|B) with P(B|A), known as the prosecutor's fallacy
  • ✗ Forgetting to use the law of total probability to compute P(B) in the denominator
  • ✗ Ignoring the base rate (prior probability), which dramatically affects the result
  • ✗ Assuming prior probabilities without justification

FAQs

Common questions about this formula

How does Bayesian statistics differ from frequentist statistics?

Frequentist statistics treats probability as the long-run frequency of events and parameters as fixed but unknown. Bayesian statistics treats probability as a degree of belief and allows parameters to have probability distributions. Bayes' theorem is the mechanism for updating beliefs with data in the Bayesian framework.

How do I compute the denominator P(B)?

Use the law of total probability: P(B) = P(B|A) × P(A) + P(B|not A) × P(not A). This sums the probability of B across all possible states of A. In problems with more than two hypotheses, extend the sum over all possible hypotheses.
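For more than two hypotheses, the same sum extends naturally. Here is a sketch with three made-up hypotheses and likelihoods:

```python
# Hypothetical priors P(Hi) and likelihoods P(B|Hi) for three hypotheses
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
likelihoods = {"H1": 0.9, "H2": 0.5, "H3": 0.1}

# Law of total probability: P(B) = sum over i of P(B|Hi) * P(Hi)
p_b = sum(likelihoods[h] * priors[h] for h in priors)

# Posterior for each hypothesis via Bayes' theorem
posteriors = {h: likelihoods[h] * priors[h] / p_b for h in priors}

print(round(p_b, 2))  # 0.62
print({h: round(p, 3) for h, p in posteriors.items()})
```

Note that the posteriors always sum to 1, since P(B) normalizes the numerators.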
