# Statistical Inference



Inferential statistics is the decision-making process of estimating a population parameter from a sample.

Steps in statistical significance testing:
1. State research hypothesis
2. Formulate a null hypothesis
3. Decide the statistical significance cut off level
4. Conduct research/collect data
5. Use statistical significance test
6. Reject or fail to reject the null hypothesis
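The steps above can be sketched as a one-sample z-test (a minimal sketch with made-up blood-pressure data; the population standard deviation is assumed known):

```python
from statistics import NormalDist, mean
from math import sqrt

# Hypothetical data: systolic blood pressure in a sample, tested against
# a claimed population mean of 120 mmHg with known sigma = 10 mmHg.
sample = [128, 135, 131, 122, 128, 127, 130, 124, 126, 129]
mu0, sigma, alpha = 120, 10, 0.05   # steps 2-3: null value and cut-off level

# Step 5: one-sample z statistic
z = (mean(sample) - mu0) / (sigma / sqrt(len(sample)))
# Two-sided p-value from the standard normal distribution
p = 2 * (1 - NormalDist().cdf(abs(z)))

# Step 6: reject or fail to reject the null hypothesis
decision = "reject H0" if p < alpha else "fail to reject H0"
print(f"z = {z:.3f}, p = {p:.4f}, {decision}")
```

With this sample the mean is 128, giving z ≈ 2.53 and p ≈ 0.011 < 0.05, so the null hypothesis is rejected.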

Hypothesis testing:
A theory is often expressed as a study hypothesis. The negation of the study hypothesis is the null hypothesis. An acceptable hypothesis is testable and has a sound rational basis.

Research hypothesis – a statement of the expected results.
Null hypothesis – the statistical hypothesis (H0) – usually states that any observed differences between means are due to chance. A null hypothesis can never be “proved”; the purpose of the research/experiment is to disprove it.
Alternative hypothesis (HA) – predicts that the observed difference is not due to chance.

Hypothesis testing is the method of choosing between the null hypothesis and an alternative hypothesis. This is done by choosing the significance level (alpha) of the test, conducting the study, and computing the p-value. If the p-value < alpha, reject the null hypothesis in favour of the alternative; if the p-value ≥ alpha, do not reject the null hypothesis.
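The decision rule in this paragraph can be written as a small helper function (a sketch; the function name and default alpha are illustrative):

```python
def decide(p_value: float, alpha: float = 0.05) -> str:
    """Compare a p-value to the chosen significance level.

    Rejecting only when p < alpha keeps the Type I error rate of the
    procedure at alpha when the null hypothesis is true.
    """
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decide(0.03))   # p < alpha
print(decide(0.20))   # p >= alpha
```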

Type I and type II errors:
In statistical terms, the hypothesis is never proven to be true or false, it is only accepted or rejected.

Type I error – alpha error - reject the null hypothesis when it is in fact true
Type II error – beta error - accept the null hypothesis when it is in fact false

The probability of making a Type I error is the significance level (alpha) of the statistical test.
The probability of making a Type II error is beta; the power of the test (1 − beta) is the probability of correctly rejecting a false null hypothesis, and it increases with sample size.

Causes of Type I error – measurement error; lack of random sample; researcher bias; improper use of a one-tailed test

Causes of Type II error – measurement error; lack of sufficient power; treatment effect not properly applied
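Because power depends on sample size, the Type II error rate can be reduced by collecting more data. A minimal sketch of an approximate power calculation for a two-sided one-sample z-test (the effect size, sigma, and sample sizes below are made-up numbers):

```python
from statistics import NormalDist
from math import sqrt

def power_z_test(delta: float, sigma: float, n: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided one-sample z-test when the true
    mean shift is delta, ignoring the negligible far rejection tail."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, ~1.96 at alpha=0.05
    shift = delta * sqrt(n) / sigma                # standardised true effect
    return NormalDist().cdf(shift - z_crit)        # P(reject H0 | H0 false)

# The same true effect is far more likely to be detected with a larger sample.
for n in (25, 100):
    print(n, round(power_z_test(delta=3, sigma=10, n=n), 3))
```

For a true shift of 3 with sigma = 10, power is only about 0.32 at n = 25 but about 0.85 at n = 100.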

The p-value:
The p-value is the probability of obtaining a difference at least as large as the one observed if the null hypothesis is true. A p < 0.05 does not constitute proof that there is a difference. The p-value is a measure of the strength of the statistical evidence against the null hypothesis – the smaller the p-value, the stronger the evidence against it.
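To illustrate the last point, the two-sided p-value shrinks as the observed standardised difference grows (a sketch using the standard normal distribution; the z-values are arbitrary):

```python
from statistics import NormalDist

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

# A larger observed difference gives a smaller p-value,
# i.e. stronger evidence against the null hypothesis.
for z in (1.0, 2.0, 3.0):
    print(z, round(two_sided_p(z), 4))
```

Here z = 1 gives p ≈ 0.32, z = 2 gives p ≈ 0.046, and z = 3 gives p ≈ 0.003.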