Example 8.1 A Survey Method Section

An example follows of a survey method section that illustrates many of the steps just mentioned. This excerpt (used with permission) comes from a journal article reporting a study of factors affecting student attrition in one small liberal arts college (Bean & Creswell, 1980, pp. 321–322).

Methodology

The site of this study was a small (enrollment 1,000), religious, coeducational, liberal arts college in a Midwestern city with a population of 175,000 people. [Authors identified the research site and population.]

The dropout rate the previous year was 25%. Dropout rates tend to be highest among freshmen and sophomores, so an attempt was made to reach as many freshmen and sophomores as possible by distribution of the questionnaire through classes. Research on attrition indicates that males and females drop out of college for different reasons (Bean, 1978, in press; Spady, 1971). Therefore, only women were analyzed in this study.

During April 1979, 169 women returned questionnaires. A homogeneous sample of 135 women who were 25 years old or younger, unmarried, full-time U.S. citizens, and Caucasian was selected for this analysis to exclude some possible confounding variables (Kerlinger, 1973).
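The following sketch (an illustration added here, not part of the Bean and Creswell excerpt) shows how inclusion criteria of this kind can be applied to respondent records to form a homogeneous analytic subsample; the pandas column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical respondent records; column names are illustrative, not from the study.
respondents = pd.DataFrame({
    "age": [19, 23, 27, 20],
    "married": [False, False, True, False],
    "full_time": [True, True, True, False],
    "us_citizen": [True, True, True, True],
    "race": ["Caucasian", "Caucasian", "Caucasian", "Caucasian"],
})

# Apply the stated inclusion criteria to obtain the homogeneous subsample.
analytic_sample = respondents[
    (respondents["age"] <= 25)
    & (~respondents["married"])
    & respondents["full_time"]
    & respondents["us_citizen"]
    & (respondents["race"] == "Caucasian")
]
print(len(analytic_sample))  # number of cases retained for analysis
```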

Of these women, 71 were freshmen, 55 were sophomores, and 9 were juniors. Of the students, 95% were between the ages of 18 and 21. This sample is biased toward higher-ability students as indicated by scores on the ACT test. [Authors presented descriptive information about the sample.]

Data were collected by means of a questionnaire containing 116 items. The majority of these were Likert-like items based on a scale from “a very small extent” to “a very great extent.” Other questions asked for factual information, such as ACT scores, high school grades, and parents' educational level. All information used in this analysis was derived from questionnaire data. This questionnaire had been developed and tested at three other institutions before its use at this college. [Authors discussed the instrument.]
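As an added illustration (not part of the original excerpt), Likert-type responses such as these are typically converted to numeric codes before analysis. The intermediate labels and the five-point coding below are assumed purely for demonstration; the excerpt does not report the actual number of scale points.

```python
# Assumed five-point coding for demonstration only; only the endpoint labels
# ("a very small extent", "a very great extent") appear in the excerpt.
LIKERT_CODES = {
    "a very small extent": 1,
    "a small extent": 2,
    "a moderate extent": 3,
    "a great extent": 4,
    "a very great extent": 5,
}

def code_response(label: str) -> int:
    """Convert a verbal Likert-type response into its numeric code."""
    return LIKERT_CODES[label.strip().lower()]

print(code_response("A very great extent"))  # -> 5
```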

Concurrent and convergent validity (Campbell & Fiske, 1959) of these measures was established through factor analysis, and was found to be at an adequate level. Reliability of the factors was established through the coefficient alpha. The constructs were represented by 25 measures—multiple items combined on the basis of factor analysis to make indices—and 27 measures were single item indicators. [Validity and reliability were addressed.]
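As an added illustration (not part of the original excerpt), coefficient alpha can be computed directly from an item-score matrix once items have been grouped into an index. The sketch below uses toy data and the standard formula, not the study's actual items.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an n_respondents x n_items matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the index
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering 3 items coded 1-5.
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 1],
])
print(round(cronbach_alpha(scores), 2))
```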

Multiple regression and path analysis (Heise, 1969; Kerlinger & Pedhazur, 1973) were used to analyze the data. In the causal model …, intent to leave was regressed on all variables which preceded it in the causal sequence. Intervening variables significantly related to intent to leave were then regressed on organizational variables, personal variables, environmental variables, and background variables. [Data analysis steps were presented.]
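As a final added illustration (not part of the original excerpt), each step in a path analysis of this kind reduces to an ordinary least squares regression of a dependent variable on the variables that precede it in the causal model. The predictors, sample values, and coefficients below are simulated for demonstration and do not reproduce the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the blocks of predictors named above
# (background, environmental, personal, and organizational variables).
n = 135                                    # analytic sample size from the excerpt
X = rng.normal(size=(n, 4))                # four illustrative predictors
intent_to_leave = X @ np.array([0.5, -0.3, 0.2, 0.0]) + rng.normal(scale=1.0, size=n)

# Ordinary least squares: regress intent to leave on the variables preceding it.
X_design = np.column_stack([np.ones(n), X])            # add an intercept
coefs, *_ = np.linalg.lstsq(X_design, intent_to_leave, rcond=None)
print(coefs)  # intercept followed by the four slope estimates (path coefficients)
```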

Table 8.1 A Checklist of Questions for Designing a Survey Method

__________ Is the purpose of a survey design stated?

__________ Are the reasons for choosing the design mentioned?

__________ Is the nature of the survey (cross-sectional vs. longitudinal) identified?

__________ Are the population and its size mentioned?

__________ Will the population be stratified? If so, how?

__________ How many people will be in the sample? On what basis was this size chosen?

__________ What will be the procedure for sampling these individuals (e.g., random, nonrandom)?

__________ What instrument will be used in the survey? Who developed the instrument?

__________ What are the content areas addressed in the survey? The scales?

__________ What procedure will be used to pilot or field-test the survey?

__________ What is the timeline for administering the survey?

__________ What are the variables in the study?

__________ How do these variables cross-reference with the research questions and items on the survey?

What specific steps will be taken in data analysis to do the following:

(a) _______ Analyze returns?

(b) _______ Check for response bias?

(c) _______ Conduct a descriptive analysis?

(d) _______ Collapse items into scales?

(e) _______ Check for reliability of scales?

(f) _______ Run inferential statistics to answer the research questions or assess practical implications of the results?

__________ How will the results be interpreted?