ANOVA vs T-Test
Two hypothesis tests for comparing means. The t-test compares means of one or two groups. ANOVA (Analysis of Variance) extends this to three or more groups, controlling the overall Type I error rate.
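As a quick illustration, here is a minimal sketch contrasting the two tests on made-up data, assuming SciPy is available (`stats.ttest_ind` for two groups, `stats.f_oneway` for three or more):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10, 2, 30)  # three toy groups with different true means
b = rng.normal(11, 2, 30)
c = rng.normal(12, 2, 30)

# Two groups: independent-samples t-test
t_stat, t_p = stats.ttest_ind(a, b)

# Three groups: one-way ANOVA tests all means in a single F-test
f_stat, f_p = stats.f_oneway(a, b, c)

print(f"t = {t_stat:.2f} (p = {t_p:.3f})")
print(f"F = {f_stat:.2f} (p = {f_p:.3f})")
```

Both calls return a test statistic and a p-value; the difference is only in how many groups each can handle.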
Comparison Table
| Feature | ANOVA | T-Test |
|---|---|---|
| Number of Groups | Three or more | One or two |
| Test Statistic | F-statistic | t-statistic |
| Null Hypothesis | All group means are equal | Two means are equal |
| Type I Error Control | Controls overall alpha | Controls alpha for one comparison |
| Follow-up Needed | Post-hoc tests to find which groups differ | No follow-up needed |
Key Differences
- ANOVA tests all group means simultaneously in a single test, while the t-test is limited to comparing at most two means at once.
- Running multiple t-tests instead of ANOVA inflates the overall Type I error rate (the multiple comparisons problem).
- ANOVA uses the F-statistic, which is the ratio of between-group variance to within-group variance; the t-test uses the t-statistic.
- A significant ANOVA result tells you at least one group differs but does not tell you which pair, requiring post-hoc tests like Tukey HSD.
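The F-statistic as a ratio of between-group to within-group variance can be computed by hand and checked against SciPy's `f_oneway`. The numbers below are toy data chosen for easy arithmetic:

```python
import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0]),
          np.array([6.0, 7.0, 8.0]),
          np.array([9.0, 10.0, 11.0])]

k = len(groups)                      # number of groups
n = sum(len(g) for g in groups)      # total observations
grand_mean = np.concatenate(groups).mean()

# Between-group mean square: spread of group means around the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within-group mean square: pooled spread inside each group
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n - k)

f_manual = ms_between / ms_within
f_scipy, p = stats.f_oneway(*groups)
print(f"manual F = {f_manual:.2f}, scipy F = {f_scipy:.2f}")
```

For this data the two calculations agree (F works out to 19 here): a large F means the group means vary far more than chance variation within groups would explain.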
When to Use ANOVA
- You are comparing means across three or more groups.
- You want to control the family-wise error rate across all comparisons.
- Your experimental design has one or more categorical factors with multiple levels.
When to Use T-Test
- You are comparing the mean of one sample to a known value (one-sample t-test).
- You are comparing means between exactly two groups (independent or paired).
- Your study design involves only two conditions or time points.
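A short sketch of the one-sample and paired variants, using invented measurements and SciPy's `ttest_1samp` and `ttest_rel`:

```python
import numpy as np
from scipy import stats

# One-sample: does the sample mean differ from a known value (here, 100)?
sample = np.array([102.0, 98.5, 101.2, 99.8, 103.1, 100.4])
t1, p1 = stats.ttest_1samp(sample, popmean=100.0)

# Paired: the same subjects measured at two time points
before = np.array([12.0, 15.0, 11.0, 14.0, 13.0])
after = np.array([13.5, 16.0, 12.0, 15.5, 13.0])
t2, p2 = stats.ttest_rel(before, after)

print(f"one-sample: t = {t1:.2f}, p = {p1:.3f}")
print(f"paired:     t = {t2:.2f}, p = {p2:.3f}")
```

The paired version tests the mean of the per-subject differences, which is why it needs matched observations rather than two independent groups.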
Common Confusions
- Running multiple t-tests instead of ANOVA and not realizing the Type I error rate is inflated.
- Thinking a significant ANOVA tells you which specific groups differ (it only says at least one pair differs).
- Not recognizing that a two-group ANOVA is mathematically equivalent to an independent-samples t-test (F = t-squared).
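The two-group equivalence is easy to verify numerically. This sketch uses toy data and assumes SciPy; `ttest_ind` with its default equal-variance setting matches `f_oneway` exactly:

```python
import numpy as np
from scipy import stats

a = np.array([5.1, 4.9, 6.2, 5.5, 5.8])
b = np.array([6.4, 7.1, 6.8, 7.5, 6.9])

t_stat, t_p = stats.ttest_ind(a, b)   # equal-variance (pooled) t-test
f_stat, f_p = stats.f_oneway(a, b)    # one-way ANOVA on the same two groups

# With exactly two groups, F equals t squared and the p-values coincide
print(f"t^2 = {t_stat ** 2:.4f}, F = {f_stat:.4f}")
print(f"p (t-test) = {t_p:.4f}, p (ANOVA) = {f_p:.4f}")
```

So for two groups the choice is purely conventional; the two procedures reach identical conclusions.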
FAQs
Common questions about this comparison
Why does running multiple t-tests inflate the error rate?
With k groups there are k(k-1)/2 pairwise comparisons. Each t-test at alpha = 0.05 has a 5% chance of a false positive. Running many tests multiplies that risk. For example, with 4 groups there are 6 comparisons and roughly a 26% chance of at least one false positive. ANOVA tests all groups at once, keeping the overall error rate at 0.05.
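The arithmetic above can be reproduced directly. This sketch treats the pairwise tests as independent, which makes 1 - (1 - alpha)^m an approximation to the family-wise error rate:

```python
from math import comb

alpha = 0.05
fwer = {}  # approximate family-wise error rate per group count
for k in (3, 4, 5, 6):
    m = comb(k, 2)                   # pairwise comparisons among k groups
    # chance of at least one false positive across m independent tests
    fwer[k] = 1 - (1 - alpha) ** m
    print(f"{k} groups: {m} t-tests, P(>=1 false positive) = {fwer[k]:.1%}")
```

With 4 groups this gives about 26.5%, matching the figure quoted above, and by 6 groups the risk exceeds 50%.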
What are post-hoc tests and when do I run them?
Post-hoc tests (such as Tukey HSD, Bonferroni, or Scheffé) are pairwise comparison procedures used after a significant ANOVA result. They identify which specific group means differ while controlling the family-wise error rate. You only run them if the overall ANOVA F-test is significant.
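A sketch of the two-step workflow, assuming a recent SciPy that provides `scipy.stats.tukey_hsd`; the group data are invented, with the third group deliberately shifted:

```python
import numpy as np
from scipy import stats

a = np.array([10.1, 9.8, 10.4, 10.0, 9.9])
b = np.array([10.3, 10.0, 9.7, 10.2, 10.1])
c = np.array([12.5, 12.9, 13.1, 12.7, 13.0])  # clearly shifted group

# Step 1: omnibus ANOVA -- is any group mean different?
f_stat, p = stats.f_oneway(a, b, c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4g}")

if p < 0.05:
    # Step 2: Tukey HSD identifies which pairs differ while
    # controlling the family-wise error rate across all 3 comparisons
    res = stats.tukey_hsd(a, b, c)
    print(res.pvalue)  # matrix of pairwise p-values
```

Here the omnibus test is significant, and the pairwise p-values show that c differs from a and b while a and b do not differ from each other.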