Measurement Reliability Statistics:
Reliability – the extent to which a measurement is free from error; it forms the basis of research data collection
Measurement error – difference between true value and observed value
Systematic error – predictable error that occurs in one direction, i.e. consistently over- or under-estimates the true value
Random error – error due to chance – unpredictable
Regression towards the mean:
Extreme values obtained on one occasion are expected to move closer to (regress towards) the mean on a subsequent occasion, as illustrated in the simulation sketch below.
Test-retest reliability – the extent to which an instrument measures a variable consistently across repeated administrations
Intratester reliability – consistency of data measured by one individual across one or more sessions
Intertester reliability – variation between two or more raters measuring the same group of subjects
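To make the error concepts concrete, here is a minimal simulation sketch (in Python, with invented numbers) of systematic error, random error, and regression towards the mean; none of the values come from the source.

```python
# Illustrative sketch only: invented numbers, not data from the source.
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0                      # hypothetical true score
n = 1000

systematic_error = 5.0                  # constant bias: always overestimates
random_error = rng.normal(0, 3, n)      # unpredictable, averages to ~0

observed = true_value + systematic_error + random_error
print(f"mean observed = {observed.mean():.1f} (true = {true_value})")  # ~105

# Regression towards the mean: subjects with extreme scores on test 1
# tend to score closer to the group mean on test 2.
true_scores = rng.normal(100, 10, n)           # stable subject abilities
test1 = true_scores + rng.normal(0, 5, n)      # each test adds random error
test2 = true_scores + rng.normal(0, 5, n)
extreme = test1 > 115                          # extreme on the first occasion
print(f"test1 mean of extreme group = {test1[extreme].mean():.1f}")
print(f"test2 mean of same group    = {test2[extreme].mean():.1f}")  # closer to 100
```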
Statistical Measures of Reliability (brief worked sketches for several of these follow the list):
1) Intraclass Correlation Coefficients (ICCs):
• values range theoretically from 0.00 to 1.00
• used to assess reliability among two or more raters
2) Percent of agreement:
• measure of how often raters agree on scores – for categorical data
3) Kappa Statistic:
• chance-corrected measure of agreement
4) Cronbach’s Alpha:
• measures the internal consistency of a measuring instrument (e.g. a multi-item scale)
5) Standard Error of Measurement:
• standard deviation of the measurement error
6) Coefficient of variation:
• the standard deviation expressed as a proportion (or percentage) of the mean
7) Method Error:
• measure of the discrepancy between two sets of repeated scores
8) Limits of agreement:
• range within which the differences between two sets of measurements are expected to lie (Bland and Altman); see:
http://www.archives-pmr.org/article/S0003-9993(16)30050-8/abstract
http://www.ncbi.nlm.nih.gov/pubmed/27330520?dopt=Abstract
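Worked sketches of several of the measures above follow, in Python with invented data. First, a minimal sketch of one common ICC form, ICC(2,1) (two-way random effects, absolute agreement, single rater), computed from ANOVA mean squares; the rating matrix is made up for illustration.

```python
# ICC(2,1) sketch using the Shrout & Fleiss mean-squares formulation.
# Rows = subjects, columns = raters; scores are invented.
import numpy as np

scores = np.array([
    [9, 2, 5, 8],
    [6, 1, 3, 2],
    [8, 4, 6, 8],
    [7, 1, 2, 6],
    [10, 5, 6, 9],
    [6, 2, 4, 7],
], dtype=float)
n, k = scores.shape

grand = scores.mean()
ss_total = ((scores - grand) ** 2).sum()
ss_subjects = k * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_raters = n * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_error = ss_total - ss_subjects - ss_raters

bms = ss_subjects / (n - 1)           # between-subjects mean square
jms = ss_raters / (k - 1)             # between-raters mean square
ems = ss_error / ((n - 1) * (k - 1))  # residual mean square

icc_2_1 = (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)
print(f"ICC(2,1) = {icc_2_1:.3f}")
```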
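A sketch of percent agreement and the chance-corrected kappa statistic for two raters classifying the same subjects; the ratings and the category labels are invented.

```python
# Percent agreement and Cohen's kappa for two raters (invented categorical data).
import numpy as np

rater_a = np.array(["normal", "normal", "impaired", "impaired", "normal",
                    "impaired", "normal", "normal", "impaired", "normal"])
rater_b = np.array(["normal", "impaired", "impaired", "impaired", "normal",
                    "normal", "normal", "normal", "impaired", "normal"])

# Percent agreement: proportion of subjects given the same score by both raters
p_observed = np.mean(rater_a == rater_b)

# Agreement expected by chance, from each rater's marginal proportions
categories = np.unique(np.concatenate([rater_a, rater_b]))
p_expected = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"percent agreement = {p_observed:.0%}, kappa = {kappa:.2f}")
```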
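A sketch of Cronbach’s alpha for the internal consistency of a multi-item instrument; the respondent-by-item scores are invented.

```python
# Cronbach's alpha sketch: rows = respondents, columns = items (invented scores).
import numpy as np

items = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 5],
    [3, 2, 3, 3],
], dtype=float)

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```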
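A sketch of the standard error of measurement (SEM) and the coefficient of variation (CV); the scores and the assumed reliability coefficient are invented.

```python
# SEM and CV sketch (invented scores; assumes a reliability coefficient is known).
import numpy as np

scores = np.array([52.0, 47.0, 60.0, 55.0, 49.0, 58.0, 51.0, 62.0])
reliability = 0.85                        # e.g. an ICC from a reliability study

sd = scores.std(ddof=1)
sem = sd * np.sqrt(1 - reliability)       # SD of the measurement error
cv = sd / scores.mean() * 100             # SD as a percentage of the mean

print(f"SEM = {sem:.2f} (same units as the scores)")
print(f"CV  = {cv:.1f}%")
```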
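Finally, a sketch of method error and the Bland-Altman 95% limits of agreement for two sets of repeated scores on the same subjects; the trial data are invented.

```python
# Method error and Bland-Altman limits of agreement (invented repeated trials).
import numpy as np

trial_1 = np.array([21.0, 24.5, 19.8, 30.2, 27.4, 22.1, 25.0, 28.3])
trial_2 = np.array([22.4, 23.9, 20.5, 29.0, 28.8, 21.0, 25.6, 27.1])

diff = trial_1 - trial_2
mean_diff = diff.mean()                   # systematic bias between trials
sd_diff = diff.std(ddof=1)

method_error = sd_diff / np.sqrt(2)       # discrepancy between repeated scores
loa_lower = mean_diff - 1.96 * sd_diff    # 95% limits of agreement
loa_upper = mean_diff + 1.96 * sd_diff

print(f"method error = {method_error:.2f}")
print(f"limits of agreement = {loa_lower:.2f} to {loa_upper:.2f}")
```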