A Type II error, also known as a beta (β) error, occurs when a statistical test fails to reject the null hypothesis even though it is actually false in the population. In practical terms, this means the test has missed a real effect or difference that truly exists. The probability of committing a Type II error, denoted β, depends on factors such as sample size, true effect size, and the chosen significance level. Because the stem describes failing to reject a false null hypothesis, it is defining a Type II error.
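To make this concrete, here is a minimal Python sketch (all parameters are illustrative, not from the question) that estimates β by simulation: data are drawn from a population where the true mean is 0.3, so the null hypothesis H0: μ = 0 is false, and every failure to reject is a Type II error.

```python
import random

random.seed(0)

def z_test_rejects(sample, mu0=0.0, sigma=1.0, z_crit=1.96):
    """Two-sided z-test with known sigma: True if H0 is rejected at alpha = 0.05."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / n ** 0.5)
    return abs(z) > z_crit

# The true mean is 0.3, so H0: mu = 0 is false.
# Failing to reject it is therefore a Type II error.
trials = 5000
misses = sum(
    not z_test_rejects([random.gauss(0.3, 1.0) for _ in range(20)])
    for _ in range(trials)
)
beta_hat = misses / trials
print(f"estimated Type II error rate (beta): {beta_hat:.3f}")
```

With this small sample size (n = 20) and modest effect, the simulated β comes out quite high, illustrating how easily an underpowered test misses a real effect.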
Option A:
A Type I error occurs when a true null hypothesis is incorrectly rejected, leading to a false-positive conclusion. Its probability is controlled by the significance level alpha (α), and it is conceptually different from failing to detect a real effect. Since the stem refers to failing to reject a false null hypothesis, Type I error is not the appropriate choice.
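The contrast can be sketched in a few lines of illustrative Python (parameters are assumed for the example): here the null hypothesis is actually true, so any rejection is a Type I error, and the observed false-positive rate should hover near α = 0.05 by construction.

```python
import random

random.seed(1)

def z_test_rejects(sample, mu0=0.0, sigma=1.0, z_crit=1.96):
    """Two-sided z-test with known sigma: True if H0 is rejected at alpha = 0.05."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / n ** 0.5)
    return abs(z) > z_crit

# H0: mu = 0 is TRUE here, so every rejection is a Type I error (false positive).
trials = 5000
false_positives = sum(
    z_test_rejects([random.gauss(0.0, 1.0) for _ in range(20)])
    for _ in range(trials)
)
alpha_hat = false_positives / trials
print(f"estimated Type I error rate: {alpha_hat:.3f}")
```

Unlike β, this rate does not improve with a larger sample; it is fixed in advance by the researcher's choice of α.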
Option B:
The power of a test is the probability of correctly rejecting a false null hypothesis and equals 1 minus the probability of a Type II error (power = 1 − β). While closely related, power is a desirable property of a test rather than an error itself. Therefore, power of the test does not match the error described in the question.
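The power = 1 − β relationship can be computed exactly for a two-sided z-test with known standard deviation. The sketch below uses the standard normal CDF (via `math.erf`); the effect size, sigma, and sample size are illustrative assumptions, not values from the question.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sided_z(effect, sigma, n, z_crit=1.96):
    """Power of a two-sided z-test: P(reject H0 | true mean shift = effect)."""
    shift = effect / (sigma / sqrt(n))  # effect expressed in standard-error units
    return (1.0 - norm_cdf(z_crit - shift)) + norm_cdf(-z_crit - shift)

power = power_two_sided_z(effect=0.3, sigma=1.0, n=20)
beta = 1.0 - power
print(f"power = {power:.3f}, Type II error beta = {beta:.3f}")
```

Note that the two quantities always sum to exactly 1 for a given alternative: reducing β and raising power are the same goal stated two ways.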
Option C:
A Type II error reflects the situation where the test is not sensitive enough to detect an effect that exists, perhaps due to a small sample size or high variability in the data. Researchers strive to keep this error reasonably low by designing adequately powered studies. Because the stem explicitly mentions failing to reject a false null hypothesis, Type II error is the correct completion.
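The sample-size point can be shown numerically: holding the (assumed, illustrative) effect and variability fixed, increasing n shrinks the standard error, so power rises and β falls.

```python
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sided_z(effect, sigma, n, z_crit=1.96):
    """Power of a two-sided z-test at alpha = 0.05."""
    shift = effect / (sigma / sqrt(n))
    return (1.0 - norm_cdf(z_crit - shift)) + norm_cdf(-z_crit - shift)

# Larger n makes the same true effect easier to detect.
powers = {n: power_two_sided_z(effect=0.3, sigma=1.0, n=n) for n in (20, 50, 100, 200)}
for n, p in powers.items():
    print(f"n={n:>3}: power={p:.3f}, beta={1 - p:.3f}")
```

This is the quantitative basis for "adequately powered" design: a target power (commonly 0.8 or 0.9) is chosen first, and the required sample size is solved for.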
Option D:
Effect size measures the magnitude of a difference or relationship, independent of sample size, and indicates practical significance. It is not an error probability and does not describe the decision-making mistake outlined in the stem. Consequently, effect size cannot be the right answer here.
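One common effect-size measure is Cohen's d, the standardized difference between two group means. The sketch below implements it with a pooled standard deviation; the two data sets are made up purely for illustration.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = (
        (na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2
    ) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical measurements for two groups (illustrative values only).
a = [5.1, 5.4, 4.9, 5.6, 5.2, 5.0]
b = [4.6, 4.8, 4.5, 4.9, 4.7, 4.4]
d = cohens_d(a, b)
print(f"Cohen's d = {d:.2f}")
```

Note that d describes how large the difference is, not how likely the test is to make a mistake, which is why it cannot be the answer here even though it feeds into power calculations.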