# Analysis of Variance (ANOVA)

Analysis of Variance (ANOVA) – comparing 3 or more means:
Used to analyse differences between 3 or more means. It does this by comparing the ‘between groups variability’ with the ‘within groups variability’ – the ratio is the F value. If the ‘between groups variability’ exceeds the ‘within groups variability’ by more than would be expected by chance → at least one of the means must differ from another → use post hoc testing to determine which one. The ‘between groups variability’ is the variation between each group mean and the overall mean of all the groups. The ‘within groups variability’ is the variation between each participant in the study and that participant’s group mean.
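The between/within ratio described above can be sketched in a few lines of Python (a toy example with made-up data; the function name and dataset are mine, standard library only):

```python
def one_way_anova(*groups):
    """Return the F ratio for a one-way ANOVA (pure-Python sketch)."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    group_means = [sum(g) / len(g) for g in groups]
    # Between-groups: variation of each group mean around the grand mean
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # Within-groups: variation of each observation around its group mean
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f = one_way_anova([1, 2, 3], [2, 3, 4], [3, 4, 5])
print(round(f, 3))  # 3.0
```

Here MS between = 6/2 = 3 and MS within = 6/6 = 1, so F = 3 – the between-groups spread is three times what the within-groups noise would predict.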

ANOVA is one of the most commonly used statistical tests – it is not appropriate to use multiple t-tests to look at differences between 3 or more groups:
• Greater chance of making a type 1 error if multiple t-tests are run on samples taken from the same population
• The t-test uses only two samples at a time – if 3 or more samples are used, the t-test does not make use of all the available information about the populations from all the samples.
• Multiple t-tests take more time than a simple ANOVA
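The type 1 error inflation in the first bullet can be made concrete: for m independent tests at alpha = 0.05, the familywise error rate is 1 − (1 − alpha)^m.

```python
# Familywise type 1 error when running m independent tests at alpha = 0.05:
# P(at least one false positive) = 1 - (1 - alpha)^m
alpha = 0.05
for m in (1, 3, 6):
    print(m, round(1 - (1 - alpha) ** m, 3))
# 1 0.05
# 3 0.143
# 6 0.265
```

With 4 groups there are already 6 pairwise t-tests, so the chance of at least one false positive climbs from 5% to roughly 26%.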

Assumptions of ANOVA:
• Samples are drawn from a population that is normally distributed (but ANOVA is a robust test, so this criterion is not that strict)
• Homogeneity of variance – variance of each sample should be nearly equal
• All groups are independent – not correlated in any way
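A minimal screen for the homogeneity-of-variance assumption – this is a rule-of-thumb comparison of largest to smallest sample variance (the threshold of 4 is a common heuristic, my choice here), not a formal test such as Levene's:

```python
from statistics import variance

def variances_roughly_equal(groups, max_ratio=4.0):
    """Crude homogeneity-of-variance screen: ratio of the largest to the
    smallest sample variance (rule of thumb, not a formal test)."""
    vs = [variance(g) for g in groups]
    return max(vs) / min(vs) <= max_ratio

print(variances_roughly_equal([[1, 2, 3, 4], [2, 3, 4, 5], [2, 3, 4, 6]]))  # True
```

If the check fails, a formal test (e.g. Levene's) and, if needed, a correction or a non-parametric alternative would be the usual next step.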

Two-way ANOVA:
Used when there are two independent variables.
Two types of effects:
1) Main effects – the effect of each independent variable can be examined separately
2) Interaction effects – the effect of the independent variables in combination – an interaction is present when the effect of one variable differs across the levels of the other variable.
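An interaction can be seen directly in the cell means of a 2×2 design (the data here are hypothetical, invented for illustration): if the effect of one factor is the same at every level of the other, the "difference of differences" is zero; a non-zero value signals an interaction in the sample means.

```python
# Hypothetical cell means for a 2x2 design: factor 1 = drug vs placebo,
# factor 2 = therapy vs none.
means = {("drug", "therapy"): 20, ("drug", "none"): 10,
         ("placebo", "therapy"): 12, ("placebo", "none"): 11}

drug_effect_with_therapy = means[("drug", "therapy")] - means[("placebo", "therapy")]  # 8
drug_effect_without = means[("drug", "none")] - means[("placebo", "none")]             # -1
interaction = drug_effect_with_therapy - drug_effect_without
print(interaction)  # 9
```

The drug helps a lot with therapy but not without it – the drug's effect depends on the level of the therapy factor, which is exactly what an interaction effect describes.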

Post hoc tests:
• The F from ANOVA does not specify which group is different – it only indicates that a difference exists → use a post hoc test
• Similar to a t-test, but with a correction for alpha (type 1) error
• Tukey’s honestly significant difference (HSD) – compares means of two groups
• Scheffé’s confidence interval (CI) – compares the means of any two groups or combinations of groups
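The simplest alpha correction for multiple comparisons (not one of the two tests above, but the idea behind them) is the Bonferroni adjustment: divide alpha by the number of pairwise comparisons. Tukey's HSD and Scheffé's method are less conservative refinements of the same idea.

```python
# Bonferroni-style alpha correction for all pairwise comparisons.
from math import comb

k = 4                        # number of groups
n_comparisons = comb(k, 2)   # all pairs: 6
alpha = 0.05
print(round(alpha / n_comparisons, 4))  # 0.0083
```

Each pairwise test is then judged against 0.0083 rather than 0.05, which keeps the familywise error rate at roughly 5%.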

Size of Effect:
A significant value of F only indicates that the observed differences are unlikely to have occurred by chance – it says nothing about how large those differences are. Practical significance can be determined by:
• R² – ratio of the variance due to treatment to the total variance – gives the percentage of variance due to the treatment effect
• Omega squared – a less biased, more accurate estimate
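Both effect sizes fall out of the ANOVA sums of squares (the numbers below are illustrative, not from a real dataset; the omega-squared formula shown is the standard one-way version):

```python
# Effect sizes from one-way ANOVA sums of squares (illustrative numbers).
ss_between, ss_within = 6.0, 6.0
df_between, df_within = 2, 6
ss_total = ss_between + ss_within
ms_within = ss_within / df_within

# R^2 (eta squared): proportion of total variance due to treatment
eta_squared = ss_between / ss_total
# Omega squared: corrects eta squared's upward bias in small samples
omega_squared = (ss_between - df_between * ms_within) / (ss_total + ms_within)

print(round(eta_squared, 3), round(omega_squared, 3))  # 0.5 0.308
```

Note that omega squared (0.31) is noticeably smaller than R² (0.50) here – the correction matters most with small samples.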

Analysis of Variance (ANOVA) – repeated measures:
The simple ANOVA (above) assumes that the groups are independent and looks for between-subjects effects. When repeated measures are taken on one group, the observations are dependent → use repeated measures analysis, which looks for within-subject effects.

Produces an F value, used to determine significant differences between means. If significant → use a post hoc test.

Assumption of repeated measures ANOVA – the variances of the several repeated measures should be equal (homogeneity of variance; strictly, the variances of the differences between all pairs of conditions should be equal) – called the assumption of sphericity. The correlations among all trials should also be equal (homogeneity of covariance). If these assumptions do not hold → increased chance of a type 1 error → use corrections – the Greenhouse-Geisser adjustment or the Huynh-Feldt adjustment.
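Both corrections work the same way: an epsilon between 1/(k−1) and 1 is estimated from the covariances of the repeated measures, and the F test's degrees of freedom are multiplied by it, making the test more conservative. A sketch with a made-up epsilon (estimating epsilon itself is beyond these notes):

```python
# Sphericity correction sketch: shrink both df of the F test by epsilon.
epsilon = 0.75          # hypothetical Greenhouse-Geisser epsilon (1.0 = sphericity holds)
df1, df2 = 2, 18        # uncorrected df for 3 conditions x 10 subjects
print(epsilon * df1, epsilon * df2)  # 1.5 13.5
```

The same F value is then compared against an F distribution with the smaller, corrected df, which raises the critical value and offsets the inflated type 1 error risk.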

Analysis of Covariance (ANCOVA):
Used to assess the effect of one or more categorical explanatory variables while controlling for the effects of some other variable – possibly a continuous variable (the covariate).
A combination of ANOVA and linear regression.
Can also be used to test whether groups differed on some variable before treatment.

https://www.technologynetworks.com/informatics/articles/one-way-vs-two-way-anova-definition-differences-assumptions-and-hypotheses-306553?fbclid=IwAR0qDkHFUKIv7dPYRrER7TnKmQA47d7j6d1oncyXOxBv3aoDSMl-jBbZt4Y