7860-U3D2: Identify and describe key concepts in the research. Describe and evaluate the data collection method or methods. SEE DETAILS BELOW

Unit 3 [D] INTRODUCTION

QUANTITATIVE TOOLS

Constructs

While substantial phenomena exist in the physical world and can therefore be measured directly, insubstantial phenomena exist in a symbolic or abstract form and cannot be measured directly. Insubstantial phenomena are frequently called constructs. You can think of a construct as a scientific concept or an abstraction designed to explain a natural phenomenon (Kerlinger, 1986). Constructs allow us to talk about abstract ideas. The following are examples of constructs:

• Intelligence.

• Self-esteem.

• Depression.

• Job satisfaction.

• Anxiety.

Constructs are used to understand, organize, and study the world we live in.

Operational Definitions

As abstractions, constructs are inherently difficult to measure. Because we cannot touch them, we have to develop alternative ways to gather information about them. When we try to measure constructs, or what Leedy and Ormrod (2013) call insubstantial measurements, we try to bridge the gap between what we can observe and the constructs, or abstractions, that a theory proposes.

This process of obtaining systematic data on abstract ideas is called operationalizing the constructs, or creating operational definitions for the constructs. Operational definitions allow us to devise methods, procedures, and instruments that enable the quantification of constructs.

For example, let us say we are interested in studying the relationship between intelligence and school achievement. Both intelligence and school achievement are constructs, or scientific ideas we have created to help us describe and explain natural phenomena (in this case, behaviors).

Our theory suggests the existence of a relationship between these two constructs. In order to test our theory, we must obtain measurable data or information on intelligence and school achievement.

Figure: The Tools of Research. The graphic depicts the relationship between intelligence and school achievement. The left portion of the graphic implies that intelligence is a theoretical construct that is measured by the WISC intelligence test and teacher assessments. The right portion of the graphic implies that school achievement is a theoretical construct that is measured by the Iowa Test of Basic Skills and student grade point averages, both of which are observable phenomena.

How do we measure abstractions? We define observable phenomena from which we can obtain data. In the example, we might decide to measure intelligence using an intelligence test. In selecting an operational definition, we must recognize its limitations. We assume that intelligence tests measure certain aspects of intelligence, but clearly these tests do not tap the entirety of the construct of intelligence. In fact, critics of intelligence tests would argue that they measure very minor aspects of intelligence. We can improve our chances of tapping the intelligence construct by using more than one operational definition. Can you think of the reasons why?
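As a rough illustration of using more than one operational definition, the minimal Python sketch below combines two hypothetical measures of intelligence, a WISC-style test score and a teacher rating, into a single composite variable. The column names and values are invented for illustration only and do not come from any particular study.

```python
# Minimal sketch: combining two hypothetical operational definitions of
# "intelligence" (a WISC-style test score and a teacher rating) into one
# composite variable. Column names and data are illustrative only.
import pandas as pd

data = pd.DataFrame({
    "wisc_score":     [95, 110, 102, 123, 88],    # hypothetical test scores
    "teacher_rating": [3.2, 4.1, 3.8, 4.7, 2.9],  # hypothetical 1-5 ratings
})

# Standardize each measure so they share a common scale (z-scores),
# then average them to form a single composite "intelligence" variable.
z = (data - data.mean()) / data.std(ddof=0)
data["intelligence_composite"] = z.mean(axis=1)

print(data)
```

Combining standardized measures in this way is one simple strategy; the broader point is that each added operational definition taps a somewhat different facet of the construct, which is why multiple definitions tend to capture it better than any single one.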

The Importance of Selecting Good Operational Definitions

Selecting good operational definitions is critical to supporting the construct validity of our research. Researchers quite frequently use only one operational definition per construct. The validity of this common practice rests on the ability of the operational definition to capture the essence (or at least the most relevant aspects) of the construct. If that definition does not tap the construct we are interested in (for example, using head circumference to measure intelligence), our results will be meaningless. Fortunately, such obviously poor operational definitions are rare. Operational definitions that generate some controversy (for example, using the results of standardized achievement tests to measure school performance) are far more common.

Variables

In the foregoing discussion, we identified how we can measure constructs. Measuring constructs turns them into variables in our research. A variable is something that can vary or, in other words, take on at least two different values. This idea is critical to understanding and identifying variables. For example, gender cannot be a variable in a study when all the participants are women.

We should note that not all variables are constructs. Leedy and Ormrod's (2013) distinction between substantial and insubstantial measurements highlights the point that some variables used in research are not constructs.

Substantial measurements, such as the physical attributes of a person (hair color, ring size, head circumference, or height) or of a thing (such as a car model), can be used as variables, but they are not abstract constructs. You will read more about variables in the chapters on quantitative and qualitative research.

Hypothesis Testing and Data Analysis

How are hypotheses tested? Traditional scientific hypothesis testing is associated with quantitative research designs. The essence of hypothesis testing is straightforward and involves some variation of the following steps:

1. State the research hypothesis and the null hypothesis.

2. Select a significance level (for example, 0.05 or 0.01).

3. Select a test statistic. Based on the level of significance and other information, such as degrees of freedom and directional or nondirectional testing, determine the critical value of the test statistic (for example, F, r, t) and the corresponding decision rule.

4. Collect your data. Applying the statistical test to the data, compute an observed value of your chosen test statistic.

5. Compare the observed value of the test statistic (from step 4) with the preset critical value (from step 3).

6. Make the appropriate decision: either reject or fail to reject the null hypothesis. Specifically, if the obtained value is greater than the critical value, reject the null hypothesis. If the obtained value is less than the critical value of the test statistic, fail to reject the null hypothesis. A brief worked sketch of these steps appears below.
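The following is a minimal sketch of these steps, assuming a two-tailed independent-samples t test on two small, invented groups of scores. It is meant only to show how the observed statistic, the critical value, and the decision rule fit together, not to model any particular study.

```python
# Minimal sketch of the hypothesis-testing steps above, assuming a
# two-tailed independent-samples t test. The data values are illustrative.
import numpy as np
from scipy import stats

# Step 1: H0: the group means are equal; H1: they differ.
group_a = np.array([12.1, 14.3, 13.5, 15.0, 12.8, 14.6])
group_b = np.array([10.2, 11.8, 12.5, 10.9, 11.4, 12.0])

# Step 2: select a significance level.
alpha = 0.05

# Step 3: choose the test statistic (t) and find its critical value
# for a two-tailed test with n1 + n2 - 2 degrees of freedom.
df = len(group_a) + len(group_b) - 2
critical_value = stats.t.ppf(1 - alpha / 2, df)

# Step 4: compute the observed value of the test statistic from the data.
t_observed, p_value = stats.ttest_ind(group_a, group_b)

# Steps 5 and 6: compare the observed value with the critical value and decide.
if abs(t_observed) > critical_value:
    decision = "reject the null hypothesis"
else:
    decision = "fail to reject the null hypothesis"

print(f"t observed = {t_observed:.3f}, critical value = {critical_value:.3f}, "
      f"p = {p_value:.4f} -> {decision}")
```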

In other words, researchers use statistics to compute an obtained value of their test statistic, which they compare with a previously established critical value. The critical value is based on the alpha they selected before they began their research. Some of the terms used in traditional hypothesis testing may be unfamiliar to you, so review the following definitions (a short numeric illustration follows the list):

• Alpha: The risk you are willing to accept that your statistical analysis indicates a statistically significant finding when that finding is not real; that is, it occurred due to chance or some other factor. In other words, alpha is the probability of making a Type I error; it is the preset threshold against which the p level is compared.

• p level: The probability of obtaining results as extreme as (or more extreme than) yours by chance if the null hypothesis is true.

• Statistical test: The method you use to statistically analyze your data, such as a t-test or Pearson's correlation.

• Test statistic: The ruler you use to test your hypothesis. It is the obtained value from the result of your statistical test, such as the obtained t value or r.

• Critical value: The value of your test statistic at which you can reject the null hypothesis.
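To see how these terms relate numerically, the short sketch below looks up two-tailed critical t values for two common alphas and computes the p level for a hypothetical obtained t value. The degrees of freedom and obtained value are arbitrary illustrations, not recommendations.

```python
# Minimal sketch relating the terms above: for a two-tailed t test, the
# critical value depends on alpha and the degrees of freedom, and the p level
# of an obtained statistic is the tail probability beyond it.
from scipy import stats

df = 20  # hypothetical degrees of freedom

for alpha in (0.05, 0.01):
    critical = stats.t.ppf(1 - alpha / 2, df)
    print(f"alpha = {alpha}: two-tailed critical t({df}) = {critical:.3f}")

# p level for a hypothetical obtained t value of 2.30 (two-tailed).
t_obtained = 2.30
p_level = 2 * stats.t.sf(abs(t_obtained), df)
print(f"obtained t = {t_obtained}: p = {p_level:.4f}")
```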

Types of Errors in Hypothesis Testing

Hypothesis testing is based on probability. Regardless of the statistical test or test statistic used, the researcher is testing the null hypothesis against the laws of probability. Using probability, you will always have four possible outcomes (a simulation sketch of these outcomes follows the list):

• Error 1: You were wrong. You rejected the null hypothesis when the null was actually true. You concluded there was a relationship or difference when the observed differences actually occurred purely by chance. A Type I Error occurs when you incorrectly reject the null hypothesis. Alpha (α) is the probability of this happening to you.

• Error 2: You were wrong. You retained, or did not reject, the null when it was actually false. You were not able to find the relationship or difference that really exists. A Type II Error occurs when you incorrectly accept the null hypothesis. The probability of this happening to you is known as beta (β).

• Correct Decision 1: You were right. You rejected the null when it was indeed false. You successfully found the difference or relationship that really exists. Power is the probability of this happening.

• Correct Decision 2: You were right. You retained the null when it was true. No difference or relationship exists, and you found none.
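One way to make these four outcomes concrete is a small Monte Carlo simulation: when the null hypothesis is actually true, the proportion of simulated studies that reject it estimates the Type I error rate; when a real effect exists, that proportion estimates power, and one minus it estimates beta. The sketch below assumes a two-group t test with an invented effect size and sample size.

```python
# Minimal sketch: a Monte Carlo illustration of Type I error and power for a
# two-group t test. The effect size, sample size, and number of simulations
# are illustrative assumptions, not recommendations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n_per_group, n_sims = 0.05, 20, 5_000

def rejection_rate(true_effect):
    """Proportion of simulated studies in which H0 is rejected."""
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(true_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            rejections += 1
    return rejections / n_sims

# With no true effect, the rejection rate estimates the Type I error rate
# and should be close to alpha.
print("Estimated Type I error rate:", rejection_rate(true_effect=0.0))

# With a true medium effect (d = 0.5), the rejection rate estimates power;
# 1 minus this value estimates the Type II error rate (beta).
print("Estimated power (d = 0.5):", rejection_rate(true_effect=0.5))
```

With only 20 participants per group and a medium effect, the estimated power comes out well below the conventional 0.80 target, which is exactly the kind of underpowered design discussed next.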

Thus, we try to guard against two types of error when testing hypotheses. A Type I Error is defined as a false positive, or rejecting the null hypothesis when it should have been accepted. A Type II Error is defined as a false negative, or failing to reject the null hypothesis when it should have been rejected (Williams, 1992). Statistical power refers to the probability of finding relationships or differences that indeed exist. Much research in the social sciences is underpowered, meaning that the designs are not sensitive enough to detect relationships that may indeed exist (Cohen, 1988). See Lipsey's 1997 chapter, "Design Sensitivity: Statistical Power for Applied Experimental Research," in Handbook of Social Science Research for recommendations on how to increase the statistical power of research designs.

The following is a paraphrased excerpt from Percy and Kostere's (2008) Qualitative Research Approaches in Psychology.

QUALITATIVE TOOLS

Level of Analysis

Level of analysis specifies the type of phenomena being investigated, according to the following hierarchy:

• Intra-psychic phenomena.

• Individual whole-person phenomena.

• Inter-personal phenomena.

• Family phenomena (including couples or dyads in committed relationships).

• Small-group phenomena (work group, team).

• Organizational phenomena (large corporation, large church, government agency).

• Social-cultural phenomena (society as a whole, a culture or subculture).

Key Phenomena

Key phenomena in a qualitative study correspond to the variables in a quantitative study. They might be specific phenomena, cases, factors, or simply focal elements that are the focus of the study. They are the primary constructs under investigation, and they should be defined in much the same way that we define quantitative constructs. These definitions should be consistent with similar constructs used in previous research, whether qualitative or quantitative, and consistent with the meaning of the terms in the research question.

Types of Data

Qualitative data is not collected from tests and measures like quantitative data, but consists of verbal and behavioral output as recorded in interviews, documents, videotapes or photographs, journals, notes of observations, and so on. The most common type of qualitative data is verbal data from interviews.

Role of the Researcher

In qualitative research, the researcher is a tool of the research. The researcher has to use his or her own knowledge, training, and experience to collect the data in an efficient and sufficient manner, to bring the data together, and to make it into something meaningful and useful.

Data Preparation

Qualitative data must be physically prepared. For example, if in-depth interviews will generate audiotapes, those must be transcribed. If videotapes are used, they too must be transcribed, or perhaps behaviors must be coded. Transcripts must be organized physically (electronic copies, paper copies). In some approaches, such as grounded theory, this phase (preparing and organizing the data) is already part of the data analysis procedures.

In other approaches, such as case study or survey, the various kinds of data must first be organized and prepared so that they are usable.

Data Analysis

Depending on the approach used, qualitative data will be analyzed in different ways. Much as there are different statistical tests for different quantitative approaches, the different qualitative approaches demand different data analysis techniques. Some examples include data coding, data reduction, and thematic analysis.

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Cozby, P. C. (1993). Methods in behavioral research (5th ed.). Mountain View, CA: Mayfield.

Kerlinger, F. N. (1986). Foundations of behavioral research (3rd ed.). New York: Holt, Rinehart, and Winston.

Leedy, P. D., & Ormrod, J. E. (2013). Practical research: Planning and design (10th ed.). Upper Saddle River, NJ: Pearson.

Lipsey, M. W. (1997). Design sensitivity: Statistical power for applied experimental research. In L. Bickman & D. Rog (Eds.), Handbook of social science research (pp. 39–68). Thousand Oaks, CA: Sage.

Percy, W. H., & Kostere, K. (2008). Qualitative research approaches in psychology. Minneapolis, MN:

Williams, F. (1992). Reasoning with statistics: How to read quantitative research (4th ed.). Orlando, FL: Harcourt Brace Jovanovich.

OBJECTIVES

To successfully complete this learning unit, you will be expected to:

1. Identify variables in a research study.

2. Delineate quantitative instruments used to measure variables.

3. Explain the importance of operational definitions to scientific merit.

4. Evaluate the data collection method or methods.

[u03s1] Unit 3 Study 1

STUDIES

Readings

Read the introduction to Unit 3, "The Tools of Research." This will provide basic explanations and examples of the key components of quantitative and qualitative research.

Use your Leedy and Ormrod text to complete the following:

• Read Chapter 1, "The Nature and Tools of Research," beginning with page 7 at the heading "Tools for Research," through page 25. This reading covers some of the tangible tools researchers use, such as libraries and computers, as well as "cognitive tools," such as critical thinking and logic.

• Read Chapter 8, "Analyzing Quantitative Data," pages 211–250. This chapter reviews the types of quantitative data, descriptive statistics, and inferential statistics.

Use Trochim's Research Methods Knowledge Base Web site to read the following pages, which contain more information on qualitative phenomena, data collection, and analysis:

• Qualitative Measures.

• The Qualitative Debate.

• Qualitative Data.

• Qualitative Methods.

Also on the Research Methods Knowledge Base Web site, read the following pages for further information on quantitative variables, levels of measurement, hypotheses, and hypothesis testing. Additionally, you will learn about the relationship between qualitative and quantitative data, and other cognitive tools.

• Variables.

• Levels of Measurement.

• Hypotheses.

• Types of Data.

• Deduction and Induction.

• Inferential Statistics. There are several links to specific kinds of statistical tests on this page that you might find useful for understanding the data analyses reported in various quantitative research articles.

PSY Learners Additional Required Reading

In addition to the other required study activities for this unit, PSY learners are also required to complete the following:

• Read Percy, Kostere, and Kostere's 2015 document, Qualitative Research Approaches in Psychology.

This document provides an overview of qualitative methods and the major qualitative approaches.

Data collection and data analyses are covered under each approach. You may find it helpful to refer to this document throughout this course.

Optional Program-Specific Content

Some programs have opted to provide program-specific content designed to help you better understand how the subject matter in this study is incorporated into your particular field of study. Check below to see if your program has any suggested readings for you.

SOBT Learners

• Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. This reading provides the foundation for using G*Power to determine sample size for quantitative studies.

• Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analysis. Behavior Research Methods, 41(4), 1149–1160. This reading provides the foundation for using G*Power to determine sample size for correlation and regression analysis studies.

• Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–550. This article provides an overall process and structure for case study research. It is probably the most cited case study methods paper in business and management research.

[u03s2] Unit 3 Study 2

PROJECT PREPARATION

Resources

Research Topic and Methodology Form.

Research Topic and Methodology Scoring Guide.

In preparation for the Unit 4 assignment due next week, make sure during this unit that you have thoroughly read and understand the approved research study you selected for the Unit 2 assignment. Be prepared to complete the Unit 4 assignment by identifying and understanding the research topic, research problem, research question, and basic methodology. You may also find it beneficial to view the Research Topic and Methodology Form that you will use to complete the Unit 4 assignment. Also, view the assignment description and scoring guide to learn how you will be evaluated.

[u03d1] Unit 3 Discussion 1

QUANTITATIVE TOOLS

Resources

Discussion Participation Scoring Guide.

APA Style and Format.

Persistent Links and DOIs.

Make sure the quantitative article that you selected in Unit 1 will allow you to thoroughly address all of the points required for this discussion. Using the information from this week's readings, complete the following:

• Identify the instrument or instruments used to quantify the data, the level of measurement for each instrument, and the statistics used to analyze the data.

• Identify and describe the constructs, variables, and operational definitions included in the research. Do not just list terms. Include a description of how the researcher defined these.

• Describe the cognitive tool used to interpret the data. Possibilities include deductive logic, inductive reasoning, scientific method, or critical thinking.

• Discuss the usefulness of the operational definitions for the constructs in this study. How could they have been defined differently? Were the operational definitions sufficient to allow the researcher to answer the research question? Make sure to justify your answer.

• Explain the importance of operational definitions to scientific merit.

• List the persistent link for the article in your response. Refer to the Persistent Links and DOIs guide, linked in Resources, to learn how to locate this information in the library databases.

• Cite all sources in APA style and provide an APA-formatted reference list at the end of your post.

Response Guidelines

After reviewing the discussion postings, choose one peer to respond to. For your response:

• Follow the persistent link to the article being discussed.

• Using the language of research, explain how you agree or disagree with your peer's evaluation, offering your own suggestions for improving the research design.

[u03d2] Unit 3 Discussion 2

QUALITATIVE TOOLS

Resources

Discussion Participation Scoring Guide.

APA Style and Format.

Persistent Links and DOIs.

Make sure the qualitative article that you selected in Unit 1 will allow you to thoroughly address all of the points required for this discussion. Using the information from this week's readings, complete the following:

• Identify and describe key concepts in the research.

• Describe and evaluate the data collection method or methods. Was the data collection method appropriate to allow the researcher to answer the research question? Why or why not?

• Describe and evaluate the data analysis procedure or procedures. Was the data analysis procedure appropriate to allow the researcher to answer the research question?

• Explain the importance of appropriate data collection and data analyses procedures to scientific merit.

• List the persistent link for the article in your response. Refer to the Persistent Links and DOIs guide, linked in Resources, to learn how to locate this information in the library database.

• Cite all sources in APA style and provide an APA-formatted reference list at the end of your post.

Response Guidelines

After reviewing the discussion postings, choose one peer to respond to. For your response:

• Follow the persistent link to the article being discussed.

• Using the language of research, explain how you agree or disagree with your peer's evaluation, offering your own suggestions for improving the research design.