QUESTIONS
2) If I have an alpha of .05, what does this tell me?
3) What are the commonly accepted alpha levels in social science research? When would it be acceptable to use a different alpha level?
4) How do we test whether we have met the required level of statistical significance?
5) Power is represented by which of the following?
6) What does the power level tell us?
7) If I have a power of .65 and I want to increase the power to .80, what must I do?
8) What is the difference between Type I error and Type II error? How do I determine which is more important?
9) A research project has resulted in the following information:
10) The printouts for a statistical study have provided the following information:
ANSWERS
1) C; the null hypothesis indicates nothingness, and in this example choice "c" states that there is NO difference between the means.
2) This tells me that 5% of the time (5 times out of 100) I may mistakenly reject the null hypothesis when I should have failed to reject it.
3) .05 and .01 are the commonly accepted levels of significance. I may use a different value, perhaps a higher one (.10), if the research involves something that would not cause serious harm if more mistakes were made (a commonly used example is research dealing with money).
4) If we only have the statistical value we calculated (a t value, Pearson's r, etc.) and we know the alpha level, we can compare the calculated value to the table (critical) value for our statistical test, found in the back of most statistics textbooks. If our calculated value is greater than the table value, we reject the null hypothesis; if it is less than or equal to the table value, we fail to reject the null hypothesis. If we are using SPSS or other statistical software, the printout provides the p value in the column labeled "significance." Here, if the significance value is less than our preset alpha level (.05 or .01, depending on the research), we reject the null hypothesis; if it is greater than our preset alpha level, we fail to reject the null hypothesis.
5) C; power is 1 − Beta.
6) Power refers to the probability that we will find a statistically significant difference if one truly exists.
7) If I have a power of .65, that means my value for Beta is .35 (since power = 1 − Beta). To raise the power to .80, I must reduce Beta to .20, most commonly by increasing the sample size (increasing alpha or studying a larger effect would also raise power).
8) Type I error refers to the probability that we will reject a null hypothesis when we should have failed to reject it; Type II error refers to the probability that we will fail to reject a null hypothesis when we should have rejected it. Both are important, but if the research is such that it is more important not to reject a null hypothesis incorrectly (that is, not to find a statistically significant difference when there is none), then you should focus on reducing the Type I error rate.
9) The calculated r is greater than the table value, so we reject the null hypothesis.
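The concepts behind alpha, power, and Type I error can be made concrete with a short simulation. The sketch below is my own illustration, not part of the study guide; the sample size (30 per group), the effect size (0.8 standard deviations), and the approximate two-tailed critical value of 2.0 are assumptions chosen for demonstration. When the null hypothesis is true, the fraction of rejections estimates the Type I error rate (roughly alpha, .05); when a real difference exists, the fraction of rejections estimates power.

```python
# Minimal simulation sketch: alpha as the long-run Type I error rate,
# and power as the chance of detecting a difference that truly exists.
import random
import statistics

def two_sample_t(xs, ys):
    """Welch's t statistic for two independent samples."""
    nx, ny = len(xs), len(ys)
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    return (statistics.mean(xs) - statistics.mean(ys)) / ((vx / nx + vy / ny) ** 0.5)

def rejection_rate(true_diff, n=30, trials=2000, crit=2.0, seed=1):
    """Fraction of simulated studies whose |t| exceeds the critical value.

    crit = 2.0 approximates the two-tailed .05 table value for these
    sample sizes (an assumption for this sketch).
    """
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]          # group 1
        ys = [rng.gauss(true_diff, 1.0) for _ in range(n)]    # group 2
        if abs(two_sample_t(xs, ys)) > crit:
            rejections += 1
    return rejections / trials

# No real difference: every rejection is a Type I error, so the rate
# should land near alpha (.05).
print("Estimated Type I error rate:", rejection_rate(true_diff=0.0))

# A real difference of 0.8 SD: the rejection rate estimates power.
print("Estimated power:            ", rejection_rate(true_diff=0.8))
```

Rerunning the power line with a larger `n` shows answer 7 in action: increasing the sample size pushes the estimated power upward while the Type I error rate stays pinned near alpha.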