
QUESTION


Interpreting Quantitative Results

One common mistake new researchers make in reporting quantitative research study results is to assume that the data speaks for itself. Data by itself doesn't mean anything, even highly analyzed data. To have meaning, data must be interpreted in light of the theory that served as the foundation of the study, because the only purpose of the data you collect is to test this theory.

Oh, and correlation is not causality, but hardly anybody actually believes that. If two variables are highly correlated, there must be a reason, right? There must be some link. This unusual discussion will help you engage with the preceding statements in a dramatic way.

Complete the following:

1. Go to http://www.tylervigen.com/spurious-correlations and choose one of the correlations displayed on the site.
2. Create a cause-effect chain (essentially, an ad hoc theory) to explain why the two variables must be connected to each other.
3. Briefly define a quantitative experiment to test one link of the causal chain you have created.
4. List the analytical technique that you would use to test this link.

For example: The correlation between "Total revenue generated by arcades" and "Computer science doctorates awarded in the US" is 98.5% (r = 0.985065). (Yes, really. 98.5%. That high. And by the way, since this one has been used, you can't use it.) Clearly, highly trained computer scientists (as measured by the number of doctorates) create superior computer games (more fun and challenging). Because the games are more fun, they draw more people to the arcades; because they are more challenging, people spend more money trying to beat them.

Let's just test the first link: better computer scientists create better computer games, where "better" for computer scientists means more highly trained, and "better" for games means more fun and more challenging. To test this, I would create a questionnaire for a random selection of arcade-goers, asking them to rate each of the games in the arcade on "level of fun" and "level of challenge," likely using a 5-point Likert scale (with a sample of about 200 to start). I would then go to the companies that created each of the games on the survey and find out how many PhD computer scientists they employed on their game development teams. I may be able to analyze the data using multiple linear regression, with "number of computer scientists" as the dependent variable and "fun" and "challenge" as the independent variables. I suspect, though, that computer scientist PhDs could be a little scarce on game development teams, so I would be prepared to divide the teams into two groups ("have a PhD CS on the team" and "don't have a PhD CS on the team") and use logistic regression to analyze the data instead. Since I'm really trying to prove a negative here, I would also do a follow-on analysis to determine the actual power of the test and be prepared to increase the sample size to increase the power. That way, I could assert that the connection between the independent and dependent variables is "no larger than" a certain number, with 95% confidence.

And, yeah: there's very little chance that this post hoc theory holds any water at all, which is why we insist that you establish your theory before you define your quantitative research plan.
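To make the example more concrete, here is a minimal sketch of what that analysis might look like in Python. It uses simulated stand-in data, and the variable names ("fun", "challenge", "n_phd_cs") and the effect size used in the power check are illustrative assumptions, not part of the assignment or of any real survey.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
n_games = 40  # assumed number of arcade games rated in the survey

# Simulated mean 5-point Likert ratings per game and a simulated count of
# PhD computer scientists on each game's development team.
df = pd.DataFrame({
    "fun": rng.uniform(1, 5, n_games),
    "challenge": rng.uniform(1, 5, n_games),
    "n_phd_cs": rng.poisson(0.5, n_games),
})

# Multiple linear regression: number of PhD computer scientists regressed
# on the "fun" and "challenge" ratings, as described in the example.
X = sm.add_constant(df[["fun", "challenge"]])
ols_fit = sm.OLS(df["n_phd_cs"], X).fit()
print(ols_fit.summary())

# Fallback if PhD counts turn out to be sparse: collapse to a binary
# indicator ("at least one PhD CS on the team") and use logistic regression.
df["has_phd_cs"] = (df["n_phd_cs"] > 0).astype(int)
logit_fit = sm.Logit(df["has_phd_cs"], X).fit()
print(logit_fit.summary())

# Rough follow-on power check, treated here as a two-group comparison of
# mean ratings (teams with vs. without a PhD CS). The effect size of 0.3 is
# an assumed "smallest effect we care about," not an estimate from data.
needed_per_group = TTestIndPower().solve_power(
    effect_size=0.3, alpha=0.05, power=0.8
)
print(f"Approx. games needed per group: {needed_per_group:.0f}")

With real survey data you would replace the simulated DataFrame with the collected ratings and team counts; the rest of the workflow (fit, inspect coefficients and confidence intervals, then check power) stays the same.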
