Readings:

- William M. K. Trochim, “Analysis” (through “Randomized Block Analysis”), *Research Methods Knowledge Base* (October 20, 2006 Edition), <http://www.socialresearchmethods.net>
- Potential confusion in the section on “Threats to Conclusion Validity”. In it, the two errors are:

- conclude that there is no relationship when in fact there is (you missed the relationship or didn’t see it)
- conclude that there is a relationship when in fact there is not (you’re seeing things that aren’t there!)

You might get confused because number *1* above is Type II error, and number *2* is Type I error.

- “Effect size” is often measured in standard deviations, e.g. 1/2 standard deviation is about 8 IQ points.
- The section on “Data Preparation” mentions item reversals. Reversal items are a way of checking whether people are paying attention, or whether they are responding in a “response set” manner. This is an example of an instructional manipulation check.
- Tables 1 and 2 in the section on “Descriptive Statistics” use unequal bucket ranges for the independent variable. This can lead to misleading data presentation.

- “Big Data”, *Wikipedia*, version 18:03, 26 April 2013
- “Big data” just refers to the amount of data. It does not always imply the use of techniques or concepts different from those used with smaller amounts of data. For example, the Preis et al. study noted in this article, showing that countries which generate more searches for the next year than for the past year tend to have higher GDPs, applies very simple between-groups statistical inference.
- A large sample is not guaranteed to be unbiased. See “Critiques of Big Data execution” in the article.

- OPTIONAL: Gary Marcus, “Steamrolled By Big Data”, *The New Yorker*, April 3, 2013

Slides: http://www.stanford.edu/class/symsys130/SymSys130-5-13-2013.ppt.pdf. Notes:

*An example of power analysis computation* (adapted from Arthur M. Glenberg, *Learning from Data: An Introduction to Statistical Reasoning*, HBJ, 1988): Suppose a researcher believes that a given population P’ (say, one that has been given specific tutoring in taking IQ tests) scores higher on an IQ test than the general population P. Average IQ is 100 in P, so the null hypothesis is H0: AvgIQ(P’) = 100, and the alternative is the one-sided hypothesis H1: AvgIQ(P’) > 100. The researcher designs a study to test H0 versus H1. If the significance level alpha = 0.05, the number of test-takers in our sample from P’ is 64, and the standard deviation of the IQ test is 16, then we can compute the power of the test once the researcher specifies the minimum effect size s/he wants to be able to detect. Suppose they choose an effect size of 0.25 standard deviations, or 4 IQ points, i.e. the alternative hypothesis is now H1: AvgIQ(P’) = 104. We can then compute the power of giving an IQ test to 64 people drawn at random from P’.

Power = Probability(Rejecting H0 | H1 is true). Since IQ scores are generally assumed to be normally distributed around their mean in a population, we can compute this probability by looking up, in a standard normal table, the area under the curve above a threshold value determined by alpha. Since it is a one-sided test with alpha = 0.05, the table tells us that we will reject H0 if z > 1.65 [Can you see how we found this?], where z = [SampleMean(P’) − Mean(P)]/StandardError(P’). In this example, SampleMean(P’) is the average of the scores received by test-takers in our sample from P’; Mean(P) = 100, by assumption, since P is the general population; and StandardError(P’) is the estimated standard deviation of SampleMean(P’), calculated by dividing the standard deviation of the distribution (= 16) by the square root of the sample size (= 64), so it is 16/Sqrt(64) = 16/8 = 2.

With z = 1.65, we can solve for the cutoff value of SampleMean(P’) above which we reject H0: we will reject H0 if SampleMean(P’) > 100 + 2(1.65) = 103.3. Applying this to our H1 assumption that AvgIQ(P’) = 104, we compute the area above 103.3 under the normal distribution of sample means from that population, given a standard deviation of 16 and a sample size of 64. The z-score for 103.3 in this distribution is (103.3 − 104)/[16/Sqrt(64)] = −0.7/2 = −0.35. From the normal distribution table, the probability that we will get a sample mean > 103.3 given H1 is therefore 0.1368 + 0.5 = 0.6368 [again, see if you can tell where this comes from in the table], and this is our power. So we have a little less than a 64% chance of rejecting H0 given our revised H1 (AvgIQ(P’) = 104). A common heuristic is that power should be above 0.8, so this test is somewhat underpowered for the effect size we are aiming to detect.
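The calculation above can be sketched in a few lines of Python, using only the standard library (the variable names are ours, not Glenberg’s; the exact critical value from `inv_cdf` is ≈ 1.645 rather than the table’s rounded 1.65, so the results differ slightly from the hand computation):

```python
from math import sqrt
from statistics import NormalDist

mu0 = 100.0   # mean under H0 (general population P)
mu1 = 104.0   # mean under H1 (effect size 0.25 sd = 4 IQ points)
sd = 16.0     # standard deviation of the IQ test
n = 64        # sample size drawn from P'
alpha = 0.05  # one-sided significance level

se = sd / sqrt(n)                         # standard error = 16/8 = 2
z_crit = NormalDist().inv_cdf(1 - alpha)  # one-sided critical z, about 1.645
cutoff = mu0 + z_crit * se                # reject H0 if sample mean exceeds this

# Power = P(sample mean > cutoff | H1), with sample means ~ Normal(mu1, se)
power = 1 - NormalDist(mu1, se).cdf(cutoff)

print(f"cutoff = {cutoff:.2f}, power = {power:.4f}")
```

This reproduces the hand computation: the cutoff comes out near 103.3 and the power near 0.64, below the 0.8 heuristic, confirming that the test is underpowered for a 4-point effect.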
