Assignment 2: A Brief Literature Review

Evaluation of Evaluation Studies Using Qualitative Research Methods in the Social Work Literature (1990-2003): Evidence That Constitutes a Wake-Up Call

Daniel T. L. Shek, Vera M. Y. Tang, and X. Y. Han
The Chinese University of Hong Kong

Objective: This study examines the quality of evaluation studies using qualitative research methods in the social work literature in terms of a number of criteria commonly adopted in the field of qualitative research. Method: Using qualitative and evaluation as search terms, relevant qualitative evaluation studies from 1990 to 2003 indexed by Social Work Abstracts were examined, and their quality was evaluated. Results: The review shows that the quality of published evaluation studies using qualitative research methods in the social work field is not high and that many of the reviewed studies are not sensitive to the following issues: philosophical base of the study, auditability, bias, truth value, consistency, and critical interpretations of the data. Conclusions: Social workers using findings arising from published evaluation studies using qualitative research methods in social work should be cautious, and social workers conducting qualitative evaluation studies should be sensitive to the issue of quality. Adequate training for social workers on qualitative evaluation should also be carried out.

Keywords: qualitative research; evaluation; social work literature; evaluative criteria; criteriology

Is social work intervention effective? A review of the social work literature shows that the answer to this question has changed with time. In the 1970s, several reviews of quantitative social work evaluation studies suggested that social work intervention was not effective (Fischer, 1973; Segal, 1972). However, with the growth of quantitative outcome studies showing that social work intervention programs were effective (Reid & Hanrahan, 1982; Rubin, 1985), this gloomy picture changed in the 1980s. In addition, there were attempts to develop guidelines governing the quality of quantitative evaluation studies. Thyer (1989) outlined a series of first principles governing social work practice research, and Thyer (1991) further proposed guidelines for evaluating social work outcome research reports. With the publication of new social work journals, such as Research on Social Work Practice, quantitative studies documenting the effectiveness of social work have gradually accumulated.

Although there is a growing effort to evaluate the effectiveness of social work intervention via quantitative methods, there has also been a growing literature on qualitative studies in the social work context. A review of Social Work Abstracts in June 2004 showed that although there were 513 publications when the search term quantitative was used, there were 1,338 publications when the search term qualitative was used. In a review of social work research dissertations and theses, Dellgran and Hojer (2001) found that of the 89 Ph.D. theses covering the years 1979 to 1998, half of them were qualitative studies, 14% were quantitative studies, and 36% were mixed-method studies.

With the growing number of qualitative evaluation studies in the social work literature, one important question that should be asked is whether the qualitative evaluation studies paint an optimistic picture of social work intervention as effective. In response to the growing emphasis on qualitative research in the social work profession, Thyer (1989) argued that “the advocates of qualitative research are urged to provide the profession with similar positive examples of research on the outcomes of social work practice, and to develop explicit guidelines for the conduct of qualitative studies” (p. 309). Nevertheless, before we can claim that qualitative evaluation studies in the social work literature give support to the effectiveness of social work intervention, we have to ask a more fundamental question: Are qualitative evaluation studies in the social work context of sufficient quality that we can draw meaningful conclusions about the effectiveness of social work intervention? Unfortunately, a review of the social work literature shows that no study has been conducted to evaluate the quality of existing evaluation studies using qualitative research methods in social work. Against this background, this study was carried out to examine this question.

Authors’ Note: This work was supported by the Research Grants Council of the Hong Kong Special Administrative Region (Grant CUHK4293/03H) and the Wofoo Foundation. The authors wish to thank the anonymous reviewers who provided stimulating and constructive comments. Correspondence may be addressed to Daniel T. L. Shek, Social Welfare Practice and Research Centre, Department of Social Work, The Chinese University of Hong Kong, Shatin, Hong Kong; e-mail: [email protected].

When one intends to evaluate the quality of evaluation studies using qualitative research methods in social work, one fundamental issue that must be faced is whether there are criteria that can be used to evaluate the related studies (Lather, 1986; Lincoln & Guba, 2000; Seale, 2002). Basically, there are two different views on this issue. On one hand, those who adhere to strong versions of social constructionism (Schwandt, 2000) and hard-core postmodernism pursuing anarchy in knowledge claims (Bloland, 1995) maintain that it is not possible and, in fact, not necessary to develop any valid criteria. For example, Schwandt (1996) suggested we should say “farewell to criteriology,” which means that social science researchers should abandon “the pursuit of autonomous, indisputable criteria for distinguishing legitimate from not so legitimate social scientific knowledge” (p. 70). Although such radical social constructionist and postmodern thoughts are thought provoking, there are alternative views arguing that they are “fashionable nonsense” (Sokal & Bricmont, 1998) and questions about whether such relativistic views are consistent with social work values (Atherton & Bolland, 2002; Rubin, 2000). Most important of all, in an era that emphasizes accountability, it would be very difficult for the public and members of the social work profession to make sense of and accept the claim that there is no way to differentiate good and bad social work practice via qualitative research methods.

On the other hand, there are views suggesting that there is a need and it is possible to develop criteria to examine the quality of qualitative studies (Huberman & Miles, 1994). However, a review of the literature shows that there are different versions of the criteria that could be used to evaluate qualitative studies. In a comprehensive summary of the criteria in different paradigms, Patton (2002) clearly pointed out that researchers with different worldviews used different criteria to evaluate qualitative studies. In the traditional positivistic paradigm, the criteria adopted include objectivity of the inquirer, validity of data, systematic rigor of fieldwork procedures, triangulation, reliability of coding and patterns of analyses, correspondence of findings to reality, external validity, and strength of evidence supporting the causal hypotheses. In the constructivist paradigm, researchers used the following criteria to examine qualitative research: acknowledgment of subjectivity, trustworthiness, authenticity, triangulation, reflexivity of the researcher, praxis, particularity, and degree of deep understanding. For those who emphasize artistic and evocative principles, criteria including creativity, aesthetic quality, interpretive vitality, degree of stimulation, expression of distinct voices, and feelings of true, authentic, or real are adopted. Finally, criteria in terms of enhancement of consciousness about injustice, identification of the nature and sources of inequalities and injustices, representation of the perspective of the less powerful, degree of collaboration between the researchers and the researched, and degree of empowerment for the researched are used by researchers adhering to the critical theory perspective.

With specific reference to the postpositivistic or realist standpoint (Seale, 1999), different criteria have been put forward to examine the issue of quality in qualitative studies. In an early attempt to address the issue of quality of qualitative studies, LeCompte and Goetz (1982) proposed four criteria to evaluate the quality of qualitative research. These include internal reliability (i.e., whether different researchers within the same study agree with each other), external reliability (i.e., whether independent researchers would identify the same things in the same or similar setting), internal validity (i.e., whether the observations can really reflect the reality), and external validity (i.e., whether the phenomena identified could be applicable across groups). Triangulation (i.e., whether findings based on different methods, data, and researchers converge) is another criterion commonly used to examine the validity of claims emerging from mixed-method studies involving quantitative and qualitative methods (Tschudi, 1989).

Alternatively, evaluative criteria based on the constructivist paradigm have been proposed. Guba and Lincoln (1981) suggested several criteria that could be used, including credibility (i.e., whether there are faithful descriptions or interpretations of human experiences), fittingness (i.e., whether a study’s findings can fit into contexts outside the study and whether readers view the findings as meaningful and applicable to their own experience), auditability (i.e., whether the details of the study are described in sufficient detail so that one can follow the decision trails of the researchers), and confirmability (i.e., neutrality).

With particular focus on auditability, Sandelowski (1986) suggested different ways to enhance the auditability of a qualitative study. These suggestions include clear description, explanation, and justification in the following areas: (a) rationales for the study, (b) researchers’ views on the subject matter, (c) purposes of the study, (d) how the participants are engaged, (e) mutual influences between the researchers and participants, and (f) details of the data collection, data analyses, and transformation.

Adopting a critical realist position, Huberman and Miles (1994) proposed five criteria that attempt to integrate and reflect both the postpositivistic and constructivist standpoints: (a) objectivity and confirmability: the degree of neutrality of research findings and the relative influences of researcher biases; (b) reliability, dependability, and auditability: whether the process of the study is consistent and reasonably stable with time and across researchers and methods; (c) internal validity, credibility, and authenticity: the extent to which the findings represent an authentic picture of the reality; (d) external validity, transferability, and fittingness: the extent to which the findings can be applicable to contexts beyond the original study; and (e) usage, application, and action orientation: whether the findings enhance levels of understanding of the participants and promote their actions to improve their state.

In the social work literature (particularly in the North American context), several social work researchers have discussed the issue of quality in qualitative research. Rodwell and Woody (1994) discussed different techniques for achieving authenticity, including fairness (e.g., obtaining full, informed consent from the participants), ontological authenticity (e.g., maintaining audit trails), educative authenticity (e.g., appreciating alternative views), catalytic authenticity (e.g., assessing the resultant action), and tactical authenticity (e.g., treating all participants as equal partners). Drisko (1997) proposed several criteria for implementing qualitative research and evaluating qualitative reports in social work. These include specification of the philosophical framework, goals and audience, and methodology (Criteria 1, 2, and 3); identification of biases (Criterion 4); maintenance of social work ethics (Criterion 5); and assurance of consistency of conclusions with study philosophy and data. Padgett (1998) proposed six strategies that can be used to enhance the rigor of a qualitative study. These include prolonged engagement, triangulation, peer debriefing and support, member checking, negative case analysis, and audit trail.

Anastas (2004) suggested that several dimensions could be used to evaluate the quality of qualitative studies, including clarity of the research questions, identification of the epistemological framework, effective use of theory and prior knowledge, adequate handling of ethical issues, documentation of all aspects of the methods of the study, assurance of trustworthiness of the data, and effective communication of findings. Apart from these isolated attempts, “thorough and detailed evaluative criteria for qualitative research are rare in social work” (Drisko, 1997, p. 185).

Based on the preceding discussion and a thorough review of the literature, the following criteria were adopted in this study to evaluate evaluation studies using qualitative research methods in social work. First, because there are many different branches in qualitative research and because the philosophical bases differ among the different brands of qualitative research (Denzin & Lincoln, 2000; Patton, 1990), it is meaningless to simply tell the readers that a qualitative study has been conducted or that qualitative methods have been used. Thus, there is a need for the researcher to clearly point out whether the study is based on a general qualitative orientation (where general elements of qualitative studies, such as holistic emphasis and empathic neutrality, are incorporated in the study; Patton, 1990) or a specific qualitative orientation (where a specific approach such as phenomenology, grounded theory, critical theory, or ethnography has been used). In other words, there is a need for qualitative researchers to clearly spell out the philosophical base of the study (Criterion 1; Drisko, 1997) because qualitative approaches with different philosophical bases call for different methodological approaches. In the case of mixed-method studies, the researchers should also specify the underlying philosophy for mixing different research methods, such as pragmatism, which assumes that mixing methods at the methodological level is possible, or social constructionism, which argues that there is not any one form of superior understanding.

Based on the principle of auditability, it is important for qualitative researchers to clearly document the details of the participants, including the number and nature of the participants and the related justifications. According to Patton (1990), there are different ways of sampling participants in qualitative research (e.g., intensity sampling, deviant case sampling, typical case sampling, maximum variation sampling), and the rationales for using these methods differ for different studies. To enable other researchers to have a better understanding of the study and to compare their findings with the reported findings, there is a need to justify the number and nature of the participants (Criterion 2). Similarly, based on the principle of auditability, it is important for qualitative researchers to clearly describe the procedures in detail (Criterion 3; Anastas, 2004; Drisko, 1997).

Another consideration is related to the notion of critical reflexivity (Lincoln & Guba, 2000; Ryan, 1998). According to social constructionists, because bias in social science research cannot be eliminated, it is important for the researchers to be conscious of their biases and preoccupations. Steier (1991) argued that “researchers have wanted to say something about the ‘subjects’ (for example, social groups) that they are studying . . . they may now realize that in doing so they are saying something about themselves” (p. 2). Rosenau (1992) also pointed out the importance of acknowledging the “impossibility of setting aside all normative values” and that “researchers must make them explicit in the hope that this will alert readers to their existence” (p. 114). Therefore, whether the researchers have clearly spelled out their biases and ideological preoccupations is an important point to be considered (Criterion 4; Drisko, 1997). Furthermore, because the intense interaction between the researchers and informants in qualitative research may generate bias and because the subjective biases of the researchers may be unacknowledged (Huberman & Miles, 1994), it is important for the researchers to clearly discuss how bias could possibly be minimized. For those who do not believe that bias can be eliminated (e.g., postmodern researchers), the related arguments should be outlined (Criterion 5; Drisko, 1997).

Two hallmarks of quality in postpositivist research are reliability and validity (Patton, 2002). Based on the principle of dependability, it is important to know whether the coding and interpretations of the researchers are consistent. Basically, one could ask whether intrarater reliability (stability of the interpretations of the same researcher with time) and interrater reliability (stability of the interpretations across researchers) have been assessed, and one could ask for the related levels of agreement (Criterion 6). Based on the principle of triangulation, it is important to know whether multiple researchers and multiple methods have been employed and the outcome of the triangulated findings (Criterion 7).

Although the hallmark of validity is not explicitly stated in the constructivist literature, there is a strong emphasis on the communal nature of knowledge claims. Lincoln (1998) suggested that the community be used as an arbitrator of quality. Rizzo, Corsaro, and Bates (1992) suggested that there are two ways to check the quality of a qualitative study. These include peer checking (i.e., invitation of peers who are not researchers to check the quality of the study) and member checking (i.e., asking the participants to check the quality of the study). The essence of these techniques is the involvement of others in helping the researcher arrive at a truthful understanding of the reality (Padgett, 1998). Therefore, checking by members of the community, including peer checking and member checking, forms another criterion for judging qualitative evaluation studies (Criterion 8).

Similarly, although constructivists do not commonly emphasize the hallmark of reliability, the principle of auditability is upheld. According to Sandelowski (1986), auditability of a qualitative study is high if the researcher “leaves a clear decision trail concerning the study from its beginning to the end . . . auditability means that any reader or another researcher can follow the progression of events in the study and understand their logic” (p. 34). Based on this principle, it is important to ask whether the researchers are conscious of the importance of the audit trail (Huberman & Miles, 1994), in which the data, perspectives, decisions, and situations are clearly documented (Criterion 9). According to several social work researchers (Anastas, 2004; Drisko, 1997; Padgett, 1998), it is important for qualitative researchers to provide adequate raw data so that the readers can assess the reasonableness of the interpretations advanced by the researchers. Of course, one major difficulty associated with this suggestion is the space limitation imposed by journal editors, which makes it difficult for qualitative researchers to report the audit trails in detail. However, it can be argued that it is still important to know whether the researcher is conscious of the importance of the audit trail, whether an audit trail has been kept, and whether the mechanism is described (though not necessarily in detail) by the researcher.

The final three criteria are related to the notion of critical interpretations and analyses of qualitative data. Consistent with the spirit of logical thinking in postpositivistic thoughts and the notion of critical reflexivity of the social constructionists, whether researchers have clearly outlined alternative explanations for the findings should be examined (Criterion 10). In other words, whether the researcher has considered all possible interpretations of the findings is a source of concern. Because qualitative data may not be so uniform and because there are always cases that do not conform to the patterns observed (Padgett, 1998), whether researchers have properly described and explained the negative cases forms another dimension for the evaluation (Criterion 11). According to Huberman and Miles (1994), description and explanation of negative cases can enhance the quality of a qualitative study. Finally, consistent with the spirit of critical rationalism (Gambrill, 1999; Gomory, 2000a, 2000b) and critical reflexivity (Lincoln & Guba, 2000; Ryan, 1998), whether the researchers are conscious of the limitations of the study and document them in a clear and honest manner is another area that should be examined (Criterion 12). For example, researchers should clearly answer the question of whether the sample employed is adequate to generate useful data to address the study questions.

Consistent with the spirit of critical reflexivity, several special features of the study are highlighted. First, the notion of subtle realism (Hammersley, 1992) is shared by the researchers, in which it is maintained that social phenomena exist in the objective world while the constructed nature of research is also acknowledged. Seale (1999) argued that “subtle realism provides a pragmatic philosophical rationale for researchers locating their practice with a constructively self-critical research community” (p. 31). Second, multiple criteria from both postpositivistic and constructivist paradigms were used to examine the studies under review. Although the former position accepts critical realism and the use of multiple research methods, with the belief that quantitative methods are superior to qualitative methods (Denzin & Lincoln, 2000), the latter also accepts the use of multiple research methods but with the belief that no method is superior to the other.

Although there are views suggesting that generalized quality criteria are not useful, Seale (1999) argued that “the quality of research is not automatically determined by the imposition of generalized quality criteria, but such schemes can help sensitize researchers to the issues that a particular project may need to address” (p. 50).

Third, to enable other researchers conducting similar studies to compare their findings with the present findings, the articles under review and the review procedures are clearly described. Fourth, to deal with the personal preoccupation that qualitative studies in the social work context are of low quality, multiple researchers were used, and independent assessment of the studies was carried out. Fifth, to enhance the rigor of the evaluation, interrater reliability measures were computed, and views of the independent researchers were triangulated. Sixth, qualitative (e.g., presence vs. absence of certain attributes, such as consideration of alternative explanations) and quantitative (e.g., interrater reliability) indicators were used. Finally, limitations of the study are explicitly acknowledged in this article.

METHOD

Based on Social Work Abstracts, the search terms qualitative and evaluation were used to identify qualitative evaluation studies indexed in the database from 1990 to 2003. As of December 31, 2003, 75 studies were identified. After excluding articles published in non-social-work journals (Exclusion Criterion 1) and articles not directly reporting evaluation studies of social work intervention programs (Exclusion Criterion 2), 27 articles were retained in the final evaluation process. Although the study by Elks and Kirkhart (1993) did not deal with qualitative evaluation of a social work intervention program, social workers’ evaluation practice in a general context was examined in the study. As such, it was also included in the present analysis. Among these 28 studies, 20 were mixed-method studies, in which both quantitative and qualitative research methods were used (see Table 1). Because the focus of this article is to examine the quality of social work evaluation studies using qualitative research methods, focus was put on the qualitative components of the mixed-method studies under review. Using the first study (Vera, 1990) as an example, although single-case design, group comparison, and qualitative data and analyses were employed, the review was focused on the qualitative component (including design, participants, data collection, data analyses, and data interpretation) of the study.
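The selection arithmetic described above can be summarized in a minimal sketch. This is only an illustrative reconstruction of the counts reported in the text; the variable names are ours, and the split of excluded articles between the two exclusion criteria is not reported in the article, so it is not modeled here.

```python
# Illustrative reconstruction of the study-selection funnel described in the text.
# The breakdown of exclusions between Exclusion Criteria 1 and 2 is not reported.

identified = 75          # hits for "qualitative" and "evaluation", 1990-2003
retained = 27            # articles remaining after applying both exclusion criteria
special_inclusion = 1    # Elks and Kirkhart (1993), included despite Exclusion Criterion 2

reviewed = retained + special_inclusion
mixed_method = 20
pure_qualitative = reviewed - mixed_method

assert reviewed == 28
print(f"excluded: {identified - retained}, reviewed: {reviewed}, "
      f"mixed-method: {mixed_method}, purely qualitative: {pure_qualitative}")
```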

Based on the preceding discussion, 12 criteria were used to evaluate these 28 qualitative evaluation studies: explicit statement of the philosophical base of the study (Criterion 1); justifications for the number and nature of the participants of the study (Criterion 2); detailed description of the data collection procedures (Criterion 3); discussion of the biases and preoccupations of the researchers (Criterion 4); description of the steps taken to guard against biases, or arguments that biases should not or could not be eliminated (Criterion 5); inclusion of measures of reliability, such as interrater reliability and intrarater reliability (Criterion 6); inclusion of measures of triangulation in terms of researchers and data types (Criterion 7); inclusion of peer-checking and member-checking procedures (Criterion 8); consciousness of the importance and development of audit trails (Criterion 9); consideration of alternative explanations for the observed findings (Criterion 10); inclusion of explanations for negative evidence (Criterion 11); and clear statement of the limitations of the study (Criterion 12).
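For readers who wish to apply the same checklist, the coding scheme amounts to a yes/no rating of each study on each of the 12 criteria, followed by per-criterion tallies across studies. The sketch below is only an illustration of that bookkeeping in a Python representation of our own devising; the example ratings are hypothetical and are not data from Table 1 or Table 2.

```python
from dataclasses import dataclass
from typing import Dict, List

# The 12 criteria as enumerated in the text (short labels).
CRITERIA = {
    1: "philosophical base stated",
    2: "number and nature of participants justified",
    3: "data collection procedures described in detail",
    4: "researcher biases and preoccupations discussed",
    5: "steps to guard against bias (or arguments why not) stated",
    6: "reliability measures (interrater/intrarater) included",
    7: "triangulation (researchers/data types) included",
    8: "peer checking and member checking included",
    9: "audit trail consciousness and development",
    10: "alternative explanations considered",
    11: "negative evidence explained",
    12: "limitations clearly stated",
}

@dataclass
class StudyRating:
    study: str
    ratings: Dict[int, bool]  # criterion number -> met (True) / not met (False)

def tally(studies: List[StudyRating]) -> Dict[int, int]:
    """Count, for each criterion, how many studies met it."""
    return {c: sum(s.ratings.get(c, False) for s in studies) for c in CRITERIA}

# Hypothetical example record (not an actual study from the review):
example = StudyRating("Study X (1999)", {c: c in {3, 7, 11} for c in CRITERIA})
print(tally([example]))
```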

TABLE 1: Evaluation of Qualitative Evaluation Studies Under Review in Terms of Criteria 1 to 5

Criterion 1 = statement of the philosophical base of the study; Criterion 2 = justification for the number and nature of the participants; Criterion 3 = data collection procedure given in detail; Criterion 4 = bias and preoccupation clearly stated; Criterion 5 = steps to guard against biases explicitly stated.

No. | Author (Year) | Topic | Approach Adopted | Method and Analysis | C1 | C2: Number | C2: Nature | C3 | C4 | C5
1 | Vera (1990) | Adjustment to divorce after joining a 10-week group program | General | Mixed | No | No | Yes | No | No | No
2 | Ringma & Brown (1991) | Disabled people’s opinions on existing services | Specific | Pure qualitative | No | No | No | No | Yes | Yes
3 | Stout (1991) | Model for educating social work students and professionals about male controls and violence against women | General | Mixed | No | No | No | No | No | No
4 | Elks & Kirkhart (1993) | Social workers’ evaluation of their practice | Specific | Pure qualitative | Yes | No | Yes | Yes | No | No
5 | Rehr & Epstein (1993) | Mount Sinai Leadership Exchange program | General | Pure qualitative | No | No | No | Yes | No | No
6 | Davis, Ray, & Sayles (1995) | Structured outdoor project for high-risk rural youth | General | Mixed | No | No | Yes | No | No | No
7 | Derezotes (1995) | Innovative recreational gang diversion program | General | Pure qualitative | Yes | No | Yes | No | No | Yes
8 | Illback & Kalafat (1995) | School-based integrated service program | Specific | Mixed | No | Yes | Yes (not for the follow-up study) | No | No | No
9 | Salcido & Cota (1995) | Cross-cultural training program for child welfare workers | Specific | Mixed | Yes | Yes | Yes | Yes | No | No
10 | Erera (1997) | Empathy Training Program and emotionally oriented empathy training program for helping professions | General | Mixed | Yes | No | No | Yes | Yes | Yes
11 | Whipple, Grettenberger, & Flynn (1997) | Outreach research course in a social work doctoral program | General | Mixed | No | No | No | No | No | No
12 | Mowbray & Bybee (1998) | Outreach or linkage model of providing services for homeless and mentally ill persons | General | Mixed | No | No | No | No | No | No
13 | Heath (1998) | Pilot project of mediated adoptions | General | Mixed | No | Yes (Sample 1) | Yes | Yes | No | No
14 | Bronstein & Kelly (1998) | School-linked service programs for immigrant students and their families | General | Mixed | Yes | Yes | Yes | Yes | No | Yes
15 | Sun (1998) | Continuing education program for school counselors | General | Mixed | No | No | Yes | No | No | No
16 | Walsh-Bowers & Basso (1999) | Two creative drama programs for early adolescents | General | Mixed | No | No | Yes | No | No | No
17 | Batchelor, Gould, & Wright (1999) | Resources for families in two local authority housing estates | General | Mixed | No | No | Yes | Yes | No | No
18 | Cigno & Gore (1999) | Services of a multiagency children’s center for disabled children and their families | General | Mixed | No | No | Yes | Yes | No | Yes
19 | Ligon, Markward, & Yegidis (1999) | Distance learning and standard classroom courses of a graduate school of social work | General | Mixed | No | No | No | No | No | No
20 | Rivard, Johnsen, Morrissey, & Starrett (1999) | Interagency service for children with seriously disturbed emotions | General | Mixed | No | Yes | Yes | No | No | No
21 | Goicoechea-Balbona, Barnaby, Ellis, & Foxworth (2000) | Social services for women with HIV and AIDS | General | Pure qualitative | No | Yes | Yes | Yes | Yes | Yes
22 | Netting & Williams (2000) | Nine demonstration projects of primary care for elderly people | General | Pure qualitative | Yes | No | No | Yes | No | Yes
23 | De Anda (2001) | Mentor program for at-risk high school youth | General | Pure qualitative | No | No | No | No | No | No
24 | Düvell & Jordan (2001) | Social workers’ comments on working in the asylum teams | General | Pure qualitative | No | No | Yes | Yes | No | No
25 | Gellis (2001) | Mental health services of three immigrant caregiver groups | Specific | Mixed | Yes | No | Yes | Yes | No | Yes
26 | Platt (2001) | Pilot initial assessment process for children in need | General | Mixed | No | No | No | No | No | No
27 | Hill, Dillane, Bannister, & Scott (2002) | Intensive project for families facing eviction | General | Mixed | No | No | No | No | No | No
28 | Gardner (2003) | Two community-based projects using a critical reflection framework | Specific | Mixed | Yes | No | Yes | Yes | Yes | Yes
Interrater reliability | | | | | 0.96 | 1.00 | 0.96 | 0.96 | 0.96 | 0.82

There are several steps in the evaluation process. In the first step, three authors evaluated the studies in terms of the criteria outlined above in an independent manner.

After the second author (a registered social worker studying in a master of philosophy program) and the third author (a Ph.D. candidate) consolidated their evaluations, they submitted their evaluation results to the first author for comparison and checking. Interrater reliability measures were then computed to assess the degree of agreement among the authors in terms of the proposed criteria. According to Huberman and Miles (1994), interrater agreement is expressed as the number of agreements over the sum of the number of agreements and disagreements. Areas that were not agreed on among the authors were eventually resolved via discussion and consensus between the first author and the second author.
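Written out, the agreement index is simply the proportion of rated items on which the raters concur. As a worked illustration (our numbers, not a figure reported for any particular criterion): if the raters disagreed on 1 of the 28 studies for a given criterion, agreement would be 27/28, which rounds to the 0.96 that appears most often in the reliability rows of Tables 1 and 2.

```latex
% Interrater agreement as defined by Huberman and Miles (1994),
% followed by an illustrative computation for one criterion rated
% across the 28 reviewed studies with a single disagreement.
\[
  \text{agreement} \;=\; \frac{N_{\text{agree}}}{N_{\text{agree}} + N_{\text{disagree}}},
  \qquad
  \frac{27}{27 + 1} \;\approx\; 0.96 .
\]
```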

RESULTS

Generally speaking, reliability analyses showed that the agreement between the first author and the second and third authors was high. This suggests that the assessment was consistent across evaluators in this study (see Tables 1 and 2).

With reference to Criterion 1, results showed that most of the studies adopted a general qualitative approach (n = 22), and explicit discussion of the philosophical bases of the studies was mentioned in only a few studies (n = 8). Regarding description of the justifications for the number and nature of the participants of the study (Criterion 2), results showed that very few reviewed studies provided justifications for the number of participants, and only approximately half of the studies included justifications for the nature of the participants. Concerning the question of whether there was a detailed description of the data collection procedures (Criterion 3), it was considered adequate in only 13 studies. These findings suggest that the details included in the social work evaluation studies under review were not detailed enough to permit replication or comparisons by other researchers.

As far as the issue of bias is concerned, explicit discussion of the biases and preoccupations of the researchers was found in only four studies (Criterion 4). Similarly, explicit statements on the steps taken to guard against bias and preoccupation, or arguments that biases could not be reduced (Criterion 5), could be found in only nine studies.

With reference to the hallmarks of quality in postpositivistic research, results showed that only three studies had examined interrater reliability and that none of the studies had examined intrarater reliability (Criterion 6). In studies involving multiple researchers, triangulation of findings in terms of researchers (Criterion 7) was present in only six studies. In contrast, triangulation in terms of data types (Criterion 7) was found in most of the studies, probably because most of the studies under review were mixed-method studies. Two criteria more closely associated with the constructivist paradigm, involvement of others (Criterion 8) and auditability (Criterion 9), were also applied. The findings show that peer checking and member checking (Criterion 8) were seldom carried out. Finally, results show that the researchers were not very conscious of the importance of auditability (Criterion 9). Among the 28 studies under review, strong emphases on audit trails could be found in only 5 studies. In addition, the terms auditability and audit trail were seldom mentioned in the studies under review. In sum, using criteria based on either the postpositivistic paradigm or the constructivist paradigm, the present review showed that the rigor of the social work evaluation studies under review was not high.

To what extent were the qualitative evaluation studies under review critical? Results showed that 13 and 18 studies considered the issues of alternative explanations (Criterion 10) and negative evidence (Criterion 11), respectively. Finally, the review showed that researchers of evaluation studies using qualitative research methods in the social work context were not very critical because the limitations of the studies (Criterion 12) were explicitly discussed in only 8 studies.

DISCUSSION AND APPLICATIONS FOR SOCIAL WORK RESEARCH AND PRACTICE

Several observations can be highlighted from the present review. First, the review shows that most of the qualitative evaluation studies in social work were conducted within a general qualitative orientation and that very few studies adopted a specific qualitative approach in conducting the study. Because the term qualitative has different meanings for different qualitative researchers, failure to describe the philosophical base of one’s study is a fundamental flaw. For example, a qualitative study in a critical realist orientation would be very different from a qualitative study based on a radical social constructivist orientation. In particular, because most of the studies under review are mixed-method studies, there is a need for the researchers to specify the philosophical base for mixing quantitative and qualitative research methods.

TABLE 2: Evaluation of the Qualitative Evaluation Studies Under Review in Terms of Criteria 6 to 12

Criterion 6 = reliability (interrater, intrarater); Criterion 7 = triangulation (researcher, data); Criterion 8 = peer checking and member checking; Criterion 9 = consciousness of the importance of the audit trail; Criterion 10 = alternative hypotheses and interpretations considered; Criterion 11 = negative evidence accounted for; Criterion 12 = limitations clearly stated. See Table 1 for study information by number.

Study | C6: Interrater | C6: Intrarater | C7: Researcher | C7: Data | C8: Peer Checking | C8: Member Checking | C9 | C10 | C11 | C12
1 | No | No | NA | Yes | No | No | No | No | No | No
2 | No | No | No | No | No | No | No | Yes | No | No
3 | No | No | NA | No | No | No | No | No | Yes | No
4 | No | No | No | No | No | No | No | Yes | Yes | Yes
5 | No | No | No | Yes | No | No | No | No | No | No
6 | No | No | No | Yes | No | No | No | Yes | Yes | No
7 | Yes | No | Yes | No | No | No | Yes | No | Yes | Yes
8 | No | No | No | Yes | No | No | No | Yes | Yes | No
9 | No | No | No | Yes | No | No | Yes | Yes | Yes | Yes
10 | No | No | NA | Yes | No | No | No | Yes | Yes | Yes
11 | No | No | No | Yes | No | No | No | No | No | No
12 | No | No | No | Yes | No | No | No | Yes | Yes | No
13 | No | No | No | Yes | No | No | No | No | Yes | Yes
14 | No | No | Yes | Yes | No | No | No | No | No | No
15 | No | No | No | No | No | No | No | Yes | Yes | No
16 | No | No | No | Yes | No | No | No | No | Yes | No
17 | No | No | No | Yes | No | No | No | No | No | No
18 | No | No | No | Yes | No | No | No | Yes | Yes | No
19 | No | No | No | No | No | No | No | No | Yes | No
20 | No | No | No | Yes | No | No | No | No | Yes | No
21 | No | No | Yes | Yes | No | No | No | Yes | No | No
22 | No | No | Yes | No | Yes | Yes | Yes | Yes | No | No
23 | No | No | NA | Yes | No | No | No | No | Yes | Yes
24 | No | No | No | Yes | No | No | No | Yes | Yes | No
25 | Yes | No | Yes | Yes | No | No | Yes | No | No | No
26 | No | No | NA | No | No | No | No | No | No | Yes
27 | No | No | No | Yes | No | No | No | No | Yes | No
28 | Yes | No | Yes | Yes | No | No | Yes | Yes | Yes | Yes
Interrater reliability | 0.96 | 0.96 | 1.00 | 1.00 | 0.96 | 1.00 | 0.96 | 0.96 | 0.86 | 0.96

The second observation is related to the principle of auditability, and three points can be highlighted based on Criteria 2, 3, and 10. The first point is that justifications for the number and nature of participants were generally not clear in the studies under review. Although qualitative studies do not generally aim at generalizability, a clear description of the number and nature of the participants and the related justifications is important for researchers planning to do the study again. In addition, in qualitative approaches such as the grounded theory approach, documentation of the sampling procedures via memos and justifications for the choice of the participants in terms of theoretical sampling are important.

The second point is that the details of the data collection procedures were generally not presented clearly. One striking example can be seen in the work of Salcido and Cota (1995). Although the authors claimed to use grounded theory in analyzing answers to open-ended questions (p. 44), details of the grounded theory procedures, such as coding, memoing, and theoretical sampling, are missing from the article. Finally, the review showed that the principle of auditability was not strongly emphasized in the studies under review. In fact, very few researchers mentioned the terms auditability or audit trail in their studies, and the related process (not the details of the decisions) was not clearly described. According to LeCompte and Goetz (1982), without clear documentation of the process of the study, the external reliability of the study would be seriously undermined.

The third observation concerns the biases and preoccupations of the researchers (Criteria 4 and 5). According to many qualitative researchers, a basic feature of qualitative study is that researchers should be sensitive to their own ideological biases and preoccupations. For example, Janesick (1998) stated that qualitative research requires “description of the researcher’s own biases and ideological preference” (p. 42). Unfortunately, although honest reflection and explicit discussion of one’s biases and preoccupations are highly valued characteristics in qualitative research, these features were not reflected in the studies under review. Similarly, explicit discussions of the steps by which biases can be dealt with (or arguments for why they should not be dealt with) were not commonly found in the studies under review. These observations suggest that social workers conducting studies using qualitative research methods are not very sensitive to the issue of bias. Obviously, insensitivity to this issue will contribute to insensitivity in data interpretation.

The fourth observation is related to the truth value and consistency of the observed findings. Using the positivistic notions of reliability and validity, results show that reliability measures were seldom computed. In his discussion of the postmodern challenge, Rubin (2000) raised the following challenge for qualitative researchers:

Why should we eschew the use of research methods (such as blind raters, etc.) intended to minimize the extent to which our biases influence our findings just because we can’t reach perfection in being completely objective and value free ourselves? (pp. 12-13)

Unfortunately, the present review suggests that social work evaluators adopting qualitative methods do not appear to be very enthusiastic about the issue of reliability.

Furthermore, it is questionable whether social workers have a correct concept of reliability. For example, Salcido and Cota (1995) reported that because their study “was a descriptive evaluation with no specific hypotheses, no interrater reliability tests were conducted” (p. 44). It is difficult to understand this claim because it is perfectly legitimate and technically viable to compute interrater reliability estimates for a descriptive evaluation with no specific hypotheses. Concerning the issue of triangulation, the review showed that triangulation across researchers in studies with multiple researchers was not very satisfactory. In short, the rigor of the qualitative evaluation studies under review, in terms of the traditional hallmarks of reliability and validity, is not satisfactory. Neglect of these aspects can be regarded as significant, particularly given that most of the studies under review are mixed-method studies, which are commonly conducted within the postpositivistic framework.

What about the issues of truth value and consistency when constructivist criteria were used? Again, the findings are not very positive. Concerning peer checking, it seems that social work researchers are not familiar with this technique, and it was not commonly mentioned in the studies under review. In addition, it is disturbing to find that the informants were seldom invited to check the interpretations of the researchers. The present findings suggest that social work researchers value their interpretations more than those of the participants.

The final observation that can be highlighted from the present analysis is that the analyses and interpretations of the data in the reviewed studies did not appear to be critical; few researchers mentioned alternative explanations and negative cases, and limitations of the studies were explicitly acknowledged in only a low proportion of the studies. These features of the reviewed studies suggest that social work researchers are not sensitive to the notion of critical reflexivity treasured by constructivist researchers.

In short, the conclusion that can be drawn from this analysis is that the quality of the qualitative evaluation studies under review is not satisfactory, and it appears that social work researchers are not very sensitive to the issue of quality. Historically speaking, qualitative researchers have criticized positivism as naïve realism. Ironically, the present review suggests that the limitations of the existing qualitative evaluation studies, including the lack of sensitivity to the philosophical position of the study, auditability, bias, truth value, consistency, and critical interpretations, would probably lead to another form of naïve realism.

However, there are several points that should be taken into account when considering the above conclusions. First, the proposal of 12 criteria should not be misinterpreted as a requirement that all of them be fulfilled in a single study. As mentioned above, different criteria based on the postpositivistic (e.g., reliability and triangulation) and constructivist (e.g., reflexivity) positions were employed. As such, the relevance of some of the criteria depends to some extent on the philosophical orientation of the study. For example, interrater reliability may make little sense to social constructionist researchers, whereas the related information would be regarded as important by a qualitative researcher adopting a subtle realist position. In fact, the most important point that should be highlighted is that qualitative researchers should conduct qualitative studies in accordance with the standards upheld by the research paradigm that has been adopted. As we have seen, regardless of whether postpositivistic or constructivist criteria are used, the quality of the studies under review does not appear to be high.

The second point is that the present findings do not logically imply that the quality of qualitative social work evaluation research is poorer than that of quantitative social work research. In fact, methodological problems in quantitative social work research are not uncommon (Proctor, 1996). For example, quantitative social work researchers usually do not place strong emphasis on alternative explanations and limitations of their studies. In the same vein, the present findings do not logically imply that the quality of qualitative research in social work is lower than that in other disciplines. In fact, the issue of quality of qualitative studies also exists in other disciplines, such as nursing (Sandelowski, 1986) and education (Huberman & Miles, 1994; LeCompte & Goetz, 1982).

The final point is that, assuming the observations highlighted in the present review are a reflection of the quality of evaluation studies using qualitative research methods in social work, one should ask why this is the case. There are four possible factors that might contribute to the present situation. First, one should be aware of the difficulties involved in implementing strategies that attempt to enhance the quality of qualitative studies (Padgett, 1998). For example, the time and cost involved in peer checking and member checking are enormous, and they may deter qualitative social work researchers from engaging in the related strategies. Second, probably because of the lack of consensus and the fact that journals rarely provide guidelines and standards for reporting qualitative research findings, qualitative researchers may face special difficulties when they attempt to consider the issue of quality. Third, space constraints in academic journals are another difficulty that may force qualitative researchers to report their findings in an inadequate manner. Finally, inadequate social work training in qualitative social work research may contribute to the observed findings.

There are several limitations of this review. First, because the studies reviewed were based on Social Work Abstracts, the conclusions of the study may not be generalizable to other studies that are not captured by Social Work Abstracts. Of course, social work researchers are encouraged to publish good-quality qualitative evaluation studies in those journals that are indexed by Social Work Abstracts. The second limitation is that the criteria proposed are based on the belief that it is possible (though in an imperfect manner) to differentiate good and bad qualitative social work evaluation studies. Those who adopt a postmodern stance would definitely object to some of the criteria proposed. Nevertheless, it must be stressed that to counteract such arguments, criteria reflecting both postpositivistic and constructivist standpoints were used in this study. Third, only triangulation by researchers and interrater reliability were used to enhance the rigor of the study. It would definitely be more illuminating if other procedures, such as peer checking, could be included in future studies. Fourth, some of the criteria mentioned by other researchers (e.g., ethical issues and the appropriateness of qualitative methodology to the research questions being asked; Anastas, 2004; Drisko, 1997) were not addressed in the present review because of space constraints. In view of their importance in qualitative social work research, this obviously demands an additional study. Finally, the scope of the present review may be constrained by the search terms used (i.e., qualitative and evaluation). In fact, even fewer studies were identified when the search term qualitative evaluation was used. Thus, it is strongly recommended that the indexing system of Social Work Abstracts be reviewed to better capture social work evaluation studies using qualitative research methods.

There are several implications of the present findings for social work research and practice. First, consistent with the spirit of critical rationalism (Gambrill, 1999; Gomory, 2000a, 2000b) and critical reflexivity (Steier, 1991), there is a need to critically assess the quality of qualitative evaluation research in the social work context. Fundamentally, social workers have to critically ask whether the findings emerging from qualitative social work evaluation studies are trustworthy. Based on the ideas of Gambrill (1999), social workers should draw evidence-based rather than authority-based conclusions from qualitative evaluation studies. In addition, social workers should understand that uncritical acceptance of qualitative evaluation studies of low quality would reinforce the implementation of low-quality qualitative evaluation research in the long run. In particular, if journal editors and editorial board members accept qualitative evaluation research with defects and if social work practitioners and students consume such knowledge, this would create a vicious cycle that would jeopardize the quality of evaluation studies using qualitative research methods in the long run.

The second implication of the present findings is that there is an obvious need to enhance the quality of qualitative evaluation studies in the social work context, and several guidelines are suggested. First, because the term qualitative means different things to different qualitative researchers, the philosophical base of a qualitative study should be clearly described (Guideline 1). Second, social work researchers should be sensitive to the auditability of a study; this includes a clear description of the procedures and rationales for recruiting the participants and of the data collection procedures (Guideline 2). Third, social work researchers should be sensitive to the issue of bias, and discussion of the steps by which bias would be (or would not be) dealt with should be included (Guideline 3). Fourth, social work researchers should be sensitive to issues of truth value (e.g., triangulation, peer checking, and member checking) and consistency, such as reliability and audit trails (Guideline 4). Fifth, social work researchers should adopt a critical attitude in which alternative explanations and negative cases are properly considered and in which limitations of the study are explicitly stated (Guideline 5).

The third implication of the present findings relates to Thyer’s (1989) challenge that qualitative researchers are urged to “develop explicit guidelines for the conduct of qualitative studies” (p. 309) and to the remark of Guba and Lincoln (1994) that “the issue of quality criteria in constructivism is . . . not well resolved, and further critique is needed” (p. 114): more discussion of the issue of criteriology in qualitative social work research, and development of the related criteria, is needed. A review of the literature shows that although there are heated debates on the issue of criteria in the qualitative literature in other disciplines, very little discussion has taken place in the context of social work.

There are two points that should be noted as far as the issue of criteria is concerned. First, there are views suggesting that it is indeed possible to formulate criteria to evaluate qualitative studies. For example, Hammersley (1990) argued that “we do not need to abandon the concept of truth as correspondence to an independent reality. We can retain this concept of truth by adopting a more subtle realism” (p. 61). In fact, Hammersley (1992) attempted to reformulate postpositivist and constructivist criteria in terms of concepts of truth (e.g., different levels of evidence) and relevance (Seale, 2002). Second, even for those who adopt a relativist view, the issue of quality is still present, although in a subtle form. For example, although Smith and Deemer (2000) maintained that “relativism must be accepted” (p. 878), they also argued that “relativism need not and must not be seen in terms of ‘anything goes’” (p. 878) and that it is plausible to devise “a list of features that we think, or more or less agree at any given time and place, characterize good versus bad inquiry” (p. 894). Perhaps it is time for social workers to work on such a list and contribute to the discussion on criteriology in qualitative research.

The fourth implication is that the present findings suggest a need for further research to understand the issue of quality in qualitative evaluation studies in social work. As mentioned above, further work should be conducted to examine the studies in terms of social work ethics (Anastas, 2004; Drisko, 1997). Furthermore, it is important to understand how journal editors and editorial board members evaluate qualitative evaluation studies.

The final implication of the present study concerns social work education. The present study suggests that there is a need to strengthen the qualitative research training of social workers. This suggestion is important, particularly in view of the fact that the number of qualitative evaluation studies has increased sharply in recent years but that “the profession continues to suffer a dearth of sufficient, relevant, and quality research” (Proctor, 1996, p. 366). If the argument of Weissman (1981) that “the types of problems that social workers have to deal with, whether they are working with groups or individuals, often require knowledge that, to date, can only be developed through qualitative methods” (p. 63) is taken, it is obvious that qualitative research training for social workers should be strengthened. Unfortunately, as pointed out by Burnette (1998), probably because of time constraints, most social work students have limited knowledge of the complex nature of qualitative research. It is suggested that social work researchers be trained to possess the following competencies: (a) a clear understanding of different types of qualitative studies and the differences between those adopting a general qualitative orientation and those adopting a specific qualitative orientation, (b) sensitivity to the importance of auditability, (c) high reflexivity regarding one’s biases and philosophical orientations and knowledge of the steps by which biases could be reduced (or arguments that biases cannot and should not be reduced), (d) understanding of the relevance of truth value (e.g., triangulation, peer checking, member checking) and consistency (e.g., reliability and audit trails) to qualitative research, and (e) ability to critically analyze and interpret qualitative data. To produce social work graduates with the above characteristics, there is a need for social work training institutions to review the existing curricula so that social work researchers who are more sensitive to the issue of quality in evaluation studies using qualitative research methods will be trained.

REFERENCES

Anastas, J. W. (2004). Quality in qualitative evaluation: Issues and possible answers. Research on Social Work Practice, 14, 57-65.

Atherton, C. R., & Bolland, K. A. (2002). Postmodernism: A dangerous illusion for social work. International Social Work, 45, 421-433.

Batchelor, J., Gould, N., & Wright, J. (1999). Family centers: A focus for the children in need debate. Child and Family Social Work, 4, 197-208.

Bloland, H. G. (1995). Postmodernism and higher education. Journal of Higher Education, 66, 521-559.

Bronstein, L. R., & Kelly, T. B. (1998). A multidimensional approach to evaluating school-linked services: A school of social work and county public school partnership. Social Work in Education, 20, 152-164.

Burnette, D. (1998). Teaching qualitative research: A compendium of model syllabi. Alexandria, VA: Council on Social Work Education.

Cigno, K., & Gore, J. (1999). A seamless service: Meeting the needs of children with disabilities through a multi-agency approach. Child and Family Social Work, 4, 325-335.

Davis, D., Ray, J., & Sayles, C. (1995). Ropes course training for youth in a rural setting: “At first I thought it was going to be boring. . . .” Child and Adolescent Social Work Journal, 12, 445-463.

De Anda, D. (2001). A qualitative evaluation of a mentor program for at-risk youth: The participants’ perspective. Child and Adolescent Social Work Journal, 18, 97-117.

Dellgran, S., & Hojer, P. (2001). Mainstream is contextual: Swedish social work research dissertations and theses. Social Work Research, 25, 243-252.

Denzin, N. K., & Lincoln, Y. S. (Eds.). (2000). Handbook of qualitative research. Thousand Oaks, CA: Sage.

Derezotes, D. (1995). Evaluation of the Late Nite Basketball Project. Child and Adolescent Social Work Journal, 12, 33-50.

Drisko, J. W. (1997). Strengthening qualitative studies and reports: Standards to promote academic integrity. Journal of Social Work Education, 33, 185-197.

Düvell, F., & Jordan, B. (2001). “How low can you go?” Dilemmas of social work with asylum seekers in London. Journal of Social Work Research and Evaluation, 2, 189-205.

Elks, M. A., & Kirkhart, K. E. (1993). Evaluating effectiveness from the practitioner perspective. Social Work, 38, 554-563.

Erera, P. I. (1997). Empathy training for helping professionals: Model and evaluation. Journal of Social Work Education, 33, 245-260.

Fischer, J. (1973). Is casework effective? Social Work, 18, 5-20.

Gambrill, E. (1999). Evidence-based practice: An alternative to authority-based practice. Families in Society, 80, 341-350.

Gardner, F. (2003). Critical reflection in community-based evaluation. Qualitative Social Work, 2, 197-212.

Gellis, Z. D. (2001). Using a participatory research approach to mobilize immigrant minority family caregivers. Journal of Social Work Research, 2, 267-282.

Goicoechea-Balbona, A., Barnaby, C., Ellis, I., & Foxworth, V. (2000). AIDS: The development of a gender appropriate research intervention. Social Work in Health Care, 30, 19-37.

Gomory, T. (2000a). Critical realism (Gomory’s blurry theory) or positivism (Thyer’s theoretical myopia): Which is the prescription for social work research? Journal of Social Work Education, 37, 67-78.

Gomory, T. (2000b). A fallibilistic response to Thyer’s theory of theory-free empirical research in social work practice. Journal of Social Work Education, 37, 26-50.

Guba, E. G., & Lincoln, Y. S. (1981). Effective evaluation. San Francisco: Jossey-Bass.

Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 105-117). Thousand Oaks, CA: Sage.

Hammersley, M. (1990). Reading ethnographic research: A critical guide. London: Longman.

Hammersley, M. (1992). What’s wrong with ethnography: Methodological explorations. London: Routledge.

Heath, D. T. (1998). Qualitative analysis of private mediation: Bene- fits for families in public child welfare agencies.Children and Youth Services Review,20, 605-627.

Hill, M., Dillane, J., Bannister, J., & Scott, S. (2002). Everybody needs good neighbours: An evaluation of an intensive project for families facing eviction.Child and Family Social Work,7, 79-89.

Huberman, A. M., & Miles, M. B. (1994).Qualitative data analysis:

An expanded sourcebook. Thousand Oaks, CA: Sage.

Illback, R. J., & Kalafat, J. (1995). Initial evaluation of a school-based integrated service program: Kentucky Family Resource and Youth Services Centers.Special Services in the Schools,10, 139-163.

Janesick, V. J. (1998). The dance of qualitative research design. In N. K. Denzin & Y. S. Lincoln (Eds.),Strategies of qualitative inquiry(pp. 35-55). Thousand Oaks, CA: Sage.

Lather, P. (1986). Issues of validity in openly ideological research:

Between a rock and a soft place.Interchange,17, 63-84.

LeCompte, M. D., & Goetz, J. P. (1982). Problems of reliability and validity in ethnographic research.Review of Educational Research,52, 31-60.

Ligon, J., Markward, M. J., & Yegidis, B. L. (1999). Comparing stu- dent evaluations of distance learning and standard classroom courses in graduate social work education.Journal of Teaching in Social Work,19, 21-29. Shek et al. / EVALUATION STUDIES USING QUALITATIVE RESEARCH METHODS 193 Lincoln, Y. S. (1998). From understanding to action: New imperatives, new criteria, new methods for interpretive researchers.Theory and Research in Social Education,26, 12-29.

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, con - tradictions and emerging confluences. In N. K. Denzin & Y. S. Lin - coln (Eds.),Handbook of qualitative research(pp. 163-188).

Thousand Oaks, CA: Sage.

Mowbray, C. T., & Bybee, D. (1998). The importance of context in understanding homelessness and mental illness: Lessons learned from a research demonstration project.Research on Social Work Practice,8, 172-199.

Netting, F. E., & Williams, F. G. (2000). Expanding the boundaries of primary care for elderly people.Health and Social Work,25, 233- 242.

Padgett, D. K. (1998).Qualitative methods in social work research:

Challenge and rewards. Thousand Oaks, CA: Sage.

Patton, M. Q. (1990).Qualitative evaluation and research methods.

Newbury Park, CA: Sage.

Patton, M. Q. (2002). Two decades of developments in qualitative inquiry.Qualitative Social Work,1, 261-283.

Platt. D. (2001). Refocusing children’s services: Evaluation of an ini - tial assessment process.Child and Family Social Work,6, 139-148.

Proctor, E. K. (1996). Research and research training in social work:

Climate, connections, and competencies.Research on Social Work Practice,6, 366-378.

Rehr, H., & Epstein, I. (1993). Evaluating the Mount Sinai Leadership Enhancement Program: A developmental perspective.Social Work in Health Care,18, 79-99.

Reid, W. J., & Hanrahan, P. (1982). Recent evaluations of social work:

Grounds for optimism.Social Work,27, 328-340.

Ringma, C., & Brown, C. (1991). Hermeneutics and the social sci- ences: An evaluation of the function of hermeneutics in a consumer disability study.Journal of Sociology and Social Welfare,18, 57- 73.

Rivard, J. C., Johnsen, M. C., Morrissey, J. P., & Starrett, B. E. (1999).

The dynamics of interagency collaboration: How linkages develop for child welfare and juvenile justice sectors in a system of care demonstration.Journal of Social Service Research,25, 61-82.

Rizzo, T. A., Corsaro, W. A., & Bates, J. E. (1992). Ethnographic methods and interpretive analysis: Expanding the methodological options of psychologists.Developmental Review,12, 101-123.

Rodwell, M. K., & Woody, D. (1994). Constructive evaluation: The policy/practice context. In E. Sherman & W. J. Reid (Eds.),Quali- tative research in social work(pp. 315-327). New York: Columbia University Press.

Rosenau, P. M. (1992).Post-modernism and the social sciences insights, inroads and intrusions. Princeton, NJ: Princeton Univer- sity Press.

Rubin, A. (1985). Practice effectiveness: More grounds for optimism.

Social Work,30, 469-476.

Rubin, A. (2000). Editorial: Social work research at the turn of the mil- lennium: Progress and challenges.Research on Social Work Prac- tice,10, 9-14.

Ryan, K. (1998). Advantages and challenges of using inclusive evalua- tion approaches in evaluation practice.American Journal of Evalu- ation,19, 101-122.Salcido, R. M., & Cota, V. (1995). Cross-cultural training for child welfare workers when working with Mexican-American clients.

Journal of Continuing Social Work Education,6, 39-46.

Sandelowski, M. (1986). The problem of vigor in qualitative research.

Advances in Nursing Science,8, 27-37.

Schwandt, T. A. (1996). Farewell to criteriology.Qualitative Inquiry, 2, 58-72.

Schwandt, T. A. (2000). Three epistemological stances for qualitative inquiry: Interpretivism, hermeneutics, and social constructionism.

In N. K. Denzin & Y. S. Lincoln (Eds.),Handbook of qualitative research(pp. 189-213). Thousand Oaks, CA: Sage.

Seale, C. (1999).The quality of qualitative research. Thousand Oaks, CA: Sage.

Seale, C. (2002). Quality issues in qualitative inquiry.Qualitative Social Work,1, 97-110.

Segal, S. P. (1972). Research on the outcomes of social work interven - tions: A review of the literature.Journal of Health and Social Behavior,13, 3-17.

Smith, J. K., & Deemer, D. K. (2000). The problem of criteria in the age of relativism. In N. K. Denzin & Y. S. Lincoln (Eds.),Hand - book of qualitative research(pp. 877-896). Thousand Oaks, CA; Sage.

Sokal, A., & Bricmont, J. (1998).Fashionable nonsense: Postmodern intellectuals’ sense of science. New York: Picador USA.

Steier, F. (Ed.). (1991).Research and reflexivity. London: Sage.

Stout, K. D. (1991). A continuum of male controls and violence against women: A teaching model.Journal of Social Work Educa- tion,27, 305-319.

Sun, A. (1998). Organizing an international continuing education pro- gram: A preliminary study based on participants’ perspectives.

Journal of Continuing Social Work Education,6, 23-29.

Thyer, B. A. (1989). First principles of practice research.British Jour- nal of Social Work,19, 309-323.

Thyer, B. A. (1991). Guidelines for evaluating outcome studies on social work practice.Research on Social Work Practice,1, 76-91.

Tschudi, F. (1989). Do qualitative and quantitative methods require different approaches to validity? In S. Kvale (Ed.),Issues of valid- ity in qualitative research(pp. 109-134). Lund, Sweden:

Studentlitteratur.

Vera, M. I. (1990). Effects of divorce groups on individual adjustment:

A multiple methodology approach.Social Work Research and Abstracts,26, 11-20.

Walsh-Bowers, R., & Basso, R. (1999). Improving early adolescents’ peer relations through classroom creative drama: An integrated approach.Social Work in Education,21, 23-32.

Weissman, H. H. (1981). Teaching qualitative research methods. In S.

Briar, H. Weissman, & A. Rubin (Eds.),Research utilization in social work education(pp. 59-65). New York: Council on Social Work Education.

Whipple, E. E., Grettenberger, S. E., & Flynn, M. S. (1997). Doctoral research education in a social context: Development and imple- mentation of a model outreach course.Journal of Teaching in Social Work,14, 3-26. 194 RESEARCH ON SOCIAL WORK PRACTICE