Test–retest reliability evaluates the stability of scores over time by administering the same instrument to the same respondents on two occasions separated by an appropriate interval (long enough to limit memory and practice effects, but short enough that the underlying trait has not genuinely changed). A high correlation between the two sets of scores suggests that the instrument yields consistent results and is relatively free from random fluctuation. This is especially important for traits expected to remain stable over the tested period. Thus, the reliability described in the stem is test–retest reliability.
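In practice, test–retest reliability is usually quantified as the Pearson correlation between the two administrations. A minimal sketch, using made-up scores for five hypothetical respondents (all numbers are illustrative, not from the stem):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for the same five respondents on two occasions.
time1 = [12, 15, 9, 20, 14]
time2 = [13, 14, 10, 19, 15]

print(round(pearson_r(time1, time2), 3))  # prints 0.978
```

A coefficient this close to 1 indicates that respondents kept almost the same relative positions across the two occasions, which is exactly what test–retest reliability measures.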
Option A:
Split-half reliability examines internal consistency by correlating scores on two halves of the same test administered once, rather than across two occasions. It provides information about item homogeneity, not temporal stability.
Option B:
Test–retest reliability focuses on whether individuals maintain similar relative positions on the test across time under similar conditions. If the scores show strong agreement, the instrument is considered stable. Because the stem mentions administering the same test on two occasions, this option is correct.
Option C:
Inter-rater reliability assesses the level of agreement between different raters or observers who score the same performance, which is a different dimension of reliability from repeated testing.
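For categorical judgments, inter-rater agreement is often summarized with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A small sketch with hypothetical pass/fail ratings from two examiners:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters beyond chance."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of eight candidates by two examiners.
r1 = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"]
r2 = ["pass", "pass", "fail", "fail", "fail", "pass", "fail", "pass"]

print(round(cohens_kappa(r1, r2), 3))  # prints 0.75
```

Here the raters agree on 7 of 8 cases (87.5%), but kappa discounts the 50% agreement expected by chance, yielding 0.75.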
Option D:
Internal consistency reliability refers to the degree to which items within a single test measure the same construct, typically quantified by coefficients such as Cronbach's alpha, and does not involve two separate administrations over time.
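Cronbach's alpha can be computed directly from a respondents-by-items score matrix; it increases as the items covary with one another. A minimal sketch with hypothetical dichotomous item scores:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from a respondents x items score matrix."""
    k = len(items[0])  # number of items
    item_vars = [pvariance([row[j] for row in items]) for j in range(k)]
    total_var = pvariance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical item scores (rows = 5 respondents, columns = 6 items).
items = [
    [1, 1, 1, 1, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 0],
]

print(round(cronbach_alpha(items), 3))  # prints 0.74
```

Like split-half reliability, alpha needs only one administration, which is why neither index can substitute for a test–retest design when temporal stability is the question.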