
EDITORIAL

Thoughts on Social Work Knowledge Development Activities within a Quantitative Framework

Andrew Grogan-Kaylor and Jorge Delva

As a result of many years of serving as reviewers for numerous journals from multiple professions and disciplines, and through our own experience as researchers and authors, we offer the following thoughts on conducting research within a quantitative framework. We hope these ideas can further strengthen knowledge development activities by social work researchers who rely on quantitative methods. The focus on quantitative methods is largely because this is the work we do, and hence we believe we are in a strong position to offer ideas that can serve to strengthen this line of work. Colleagues who can make suggestions along the lines of qualitative research will be sought out to speak on these issues in a subsequent editorial.

In this editorial, we make a two-stage argument.

First, we argue that quantitative research in social work must increase in statistical sophistication if social work research is to reach robust and widely read conclusions about the social problems and issues that social workers care about.

Second, we argue that technical sophistication is not enough. Social workers and social work researchers must think carefully about the research questions, methodologies, and conclusions that underlie social work research.

For many research questions, simple univariate statistics such as means, medians, standard deviations, and percentages, or bivariate statistics such as correlations, t tests, and chi-squares, are often not sufficient to address the research question of interest. Such univariate and bivariate statistics provide critical pieces of information about the study sample and about basic relationships among variables, but they are only a first step in the process of uncovering more complex relationships. In the case of bivariate statistics alone, for example, these estimates do not afford the ability to control for the effects of other variables that might affect, or account for, the relationships of interest. For example, in a study of the relationship of a particular kind of parenting with children's behavior problems, it would be important to control for other variables. Without such statistical controls, it would be possible that any observed bivariate relationship could be attributed to an unobserved third factor. As an illustration, in a recent article, Grogan-Kaylor (2004) used regression methods to demonstrate that a relationship between parental use of corporal punishment and children's antisocial behavior persisted even when factors such as children's age or initial levels of antisocial behavior were accounted for. Through the use of more sophisticated statistical techniques, the author was able to account for a number of factors that are sometimes suggested as explanations for the observed relationship between parental use of corporal punishment and higher levels of children's behavior problems and to provide stronger evidence that parental use of corporal punishment has undesirable effects on children's behavior. The particular regression models were fixed-effect regression models, an extension of ordinary least squares regression that is able to account for both observed and some unobserved variables.
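To make the idea of statistical control concrete, the following is a minimal sketch, not the published analysis; the file and variable names are hypothetical, and it assumes pandas and statsmodels are installed. It contrasts a bivariate model with a model that adjusts for two plausible third factors.

```python
# A minimal illustration (not the authors' actual analysis) of how adding
# statistical controls changes the question a model answers. Variable and
# file names are hypothetical; assumes pandas and statsmodels are installed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("parenting_study.csv")  # hypothetical data file

# Bivariate model: association between corporal punishment and antisocial behavior
bivariate = smf.ols("antisocial ~ corporal_punishment", data=df).fit()

# Multivariate model: the same association, adjusting for child age and
# baseline antisocial behavior, so a third factor is less likely to explain it
adjusted = smf.ols(
    "antisocial ~ corporal_punishment + child_age + baseline_antisocial",
    data=df,
).fit()

print(bivariate.params["corporal_punishment"])
print(adjusted.params["corporal_punishment"])
```

Comparing the two coefficients shows how much of the bivariate association survives adjustment for the controls.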

We acknowledge that the use of more sophisticated statistical methods that can rule out alternative explanations in social work research is hindered by several factors. First of all, authors may not always have the necessary expertise to carry out the appropriate statistical analyses. Graduate programs in social work at both the master's and doctoral levels are encouraged to provide social work students with at least some training in multivariate methods. At the same time, we recognize that it may often be beneficial for social workers to collaborate with researchers such as statisticians, among others, who have the necessary complementary expertise.

In fact, this is yet another example of the importance of engaging in multidisciplinary or interdisciplinary collaborations.

It creates a win-win situation for all disciplines involved.

A second limitation on the use of multivariate methods in social work research is that such methods are best used with larger study samples. For example, a common if somewhat limited rule of thumb suggests that between 10 and 20 additional cases are needed for every additional variable that is added to a model.
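As a small worked illustration of that rule of thumb, the sketch below uses a hypothetical number of predictors; it simply turns the stated 10-to-20-cases-per-variable guideline into a suggested sample size range.

```python
# A small sketch of the 10-to-20-cases-per-predictor rule of thumb mentioned
# above. The number of predictors is hypothetical.
num_predictors = 8

lower_bound = 10 * num_predictors  # optimistic end of the rule of thumb
upper_bound = 20 * num_predictors  # conservative end

print(f"Suggested sample size: roughly {lower_bound} to {upper_bound} cases")
```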

Although we believe that the cost of collecting larger study samples is offset by the more rigorous conclusions that can be produced with multivariate research, we recognize that larger sample sizes may be more difficult and expensive to produce.

As the importance of more sophisticated quantitative research becomes clearer to the social work community, more sophisticated methods are entering the journal literature. With that in mind, we offer the reminder that good statistics can provide a foundation for good thinking, but good statistics cannot replace good thinking.

Complicated statistics bereft of solid conceptualization and research design do little to advance the social work literature.

In that spirit, we offer four recommendations to authors, to journal reviewers, and to ourselves as scholars in this field that we believe can substantially strengthen a researcher's work.

First, it is important to have a solid understanding of the literature and of the competing or complementary conceptual models.

By "solid understand- ing," we mean a critical understanding of the strengths and weaknesses of the theoretical models that guide prior research and of all the aspects of the research methods of prior studies upon which a subsequent study may build.

For example, we submit that it is insufficient to write a literature review in which findings of prior research are presented without a critical analysis of at least one or more of the following components: the adequacy of the study's sampling strategy, sample size, contact and response rates, conceptualization and operationalization of measures, research design (for example, posttest only, pre- and posttest, experimental), and analytic strategies, among others.

We think it is more informative for readers to know, for example, that even when a cross-sectional study finds a significant association between variables, no definitive statements can be made about the temporal and causal directions of that association, or that, despite being significant, the magnitude of the association, commonly known as the effect size, is so small that its substantive importance is suspect.

Another example of information that readers may find useful when reading a literature review is that a low participation rate, or distinct differences in characteristics between those who participated and those who did not, may limit the generalizability of the findings to the sample itself. Given the realities of journal page limitations and the large number of areas in which studies can be criticized, we suggest that authors who elect to highlight one or two of the most salient strengths or weaknesses of the studies they review significantly strengthen the quality of their manuscripts.

Second, regardless of statistical sophistication, the interpretation of a study's findings is bound by the research methods used in the study. For example, if a study's participation rates are low or the sample is cross-sectional or nonexperimental and the researchers use structural equation modeling to analyze associations among variables, the implications of the findings could be less significant than the findings of a study that uses simpler statistics, such as multivariate analyses of variance, and a more rigorous design, such as a longitudinal, quasi-experimental, or experimental design. Through this example, we imply three points about being critical about one's work: (1) researchers should not trust one statistic over another but should remind themselves that any analytic approach is bound by the context of the research method; (2) there is a need for researchers to use more rigorous research designs in their knowledge development activities; and (3) if it is not feasible to use a more rigorous research design, one that can provide more confidence in the findings, then it is important for researchers to critically describe their findings within the context of these limitations. Of course, if the limitations are well highlighted, there is a greater risk that journal reviewers will completely agree with the author and decide that the manuscript should not be published.

We believe that this is a risk authors should take because honest and critical analysis better contributes to the process of knowledge development. However, if authors are not persuaded by this argument, knowledge that reviewers are likely to raise concerns about the study limitations should be a strong motivator. In addition, as the methodological rigor of social work literature improves, we believe that social work research will become more frequently cited by other disciplines, increasing the positive effects of social work research.

Third, it is not enough to focus on statistical significance. Authors should also pay attention to the size of the effects under consideration. Statistical significance, as measured by the ubiquitous "p value," indicates how likely a result at least as extreme as the one observed would be if chance alone were operating. Generally, we only see results with small p values as scientifically meaningful. At the same time, it is worth repeating the statistical truth that a result might be statistically significant yet represent a relationship whose size is not meaningful. Researchers would do well to pay as much attention to effect sizes as to statistical significance. Several procedures for commenting on the size of statistical effects exist. In our opinion, the most basic procedure is to think carefully about the units in which two variables are measured. For example, if an income variable is measured in dollars and a mental health variable is measured in the frequency of certain beliefs, then thinking carefully about the units in which each variable is measured may lead to important conclusions about the amount of change in the mental health variable that is likely to be associated with a particular magnitude of change in income.

Imagine two sets of results, both of which are statistically significant. Imagine further that in the first set of results changes in income are associated with large changes in mental health, whereas in the second set of results, the magnitude of the association is small. In each case, the results would be statistically differentiable from chance results, but the substantive implications for policy, practice, and intervention might be quite different. Although more sophisticated procedures for assessing effect sizes can be used, we argue that at the most basic level we should at least strive toward more careful thinking.
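The following sketch is our own illustration, not drawn from any real study: it simulates two large samples in which both income slopes are expected to be statistically significant, but only one implies a meaningful change in the outcome for a realistic change in income. It assumes numpy and statsmodels are installed, and all numbers are invented for illustration.

```python
# An illustrative simulation (not real data) of the distinction drawn above:
# both associations are statistically significant, but their effect sizes differ.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(50_000, 15_000, n)  # hypothetical income in dollars

# Outcome A: a large association; Outcome B: a tiny but still detectable one
outcome_large = 0.0005 * income + rng.normal(0, 5, n)
outcome_small = 0.00003 * income + rng.normal(0, 5, n)

for label, outcome in [("large effect", outcome_large), ("small effect", outcome_small)]:
    model = sm.OLS(outcome, sm.add_constant(income)).fit()
    slope, p = model.params[1], model.pvalues[1]
    # Change in the outcome associated with a $10,000 difference in income
    print(f"{label}: p = {p:.3g}, change per $10,000 = {slope * 10_000:.2f} units")
```

Expressing each slope as the change associated with a $10,000 difference in income is one simple way of thinking in the units of the variables, as recommended above.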

Fourth, the use of multivariate analyses requires careful consideration in the choice of a model that conforms to the major features of the data. A plethora of multivariate models exists.

The ordinary least squares regression models that are usually the beginning (and too often the end) of coursework in multivariate models are not appropriate for every situation. For example, ordinary least squares regression models make the assumption that dependent variables are continuous and that every step of that continuum is equivalent. This may be plausible when trying to predict income, when every additional unit of income is an extra dollar. However, in other cases in which the dependent variable may have a range of values, it may be less plausible that every step of the continuum is equivalent. For example, in a study of depressive symptoms, participants might be asked whether they never (coded as 0), sometimes (coded as 1), or frequently (coded as 2) experience a particular symptom. Such a study would afford an outcome with a range of values (that is, 0, 1, or 2), but many might find it implausible to assume that the distance between "never" and "sometimes" is the same as the distance from "sometimes" to "frequently." In such cases, a more appropriate choice would be an ordinal regression model that captures the ordered nature of the outcome of interest but that recognizes that the distances between the levels of the outcome may be nonequivalent. In still other cases, outcomes may be clearly noncontinuous. For example, social workers are likely to be interested in many outcomes that are categorical in nature. At the simplest level, such categorical outcomes are binary in nature. For example, "Did the respondent smoke tobacco or not?" or "Did the respondent become homeless or not?" Several methods for correctly analyzing such binary outcomes exist. Categorical outcomes can also be multinomial when outcomes can be classified into several groups, but there is no distinct ordering of those outcomes. For example, respondents might choose to become affiliated with any one of a number of religious organizations, and there is no way to rank or order religions in terms of intrinsic value.

For such situations, more complicated multinomial regression models exist. This discussion has not exhausted the range of possible types of outcomes. For example, space does not permit us to engage in a discussion of count outcomes or censored outcomes.

Our point, however, is that accurate specification of outcomes of interest to social workers may require use of advanced statistical models.
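As one illustration of matching the model to the outcome type, the sketch below uses hypothetical survey variables and assumes pandas and statsmodels are installed: an ordered logit is fit to the never/sometimes/frequently symptom item, and a binary logit to a smoking indicator.

```python
# A hedged sketch of matching the model to the outcome type discussed above.
# Variable and file names are hypothetical; assumes pandas and statsmodels.
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey.csv")  # hypothetical survey data

# Ordinal outcome: symptom frequency coded 0 = never, 1 = sometimes, 2 = frequently.
# An ordered logit respects the ordering without assuming equal spacing.
symptom = df["symptom_frequency"].astype(
    pd.CategoricalDtype(categories=[0, 1, 2], ordered=True)
)
ordinal_fit = OrderedModel(symptom, df[["age", "income"]], distr="logit").fit(method="bfgs")

# Binary outcome: did the respondent smoke tobacco (1) or not (0)?
binary_fit = sm.Logit(df["smoked"], sm.add_constant(df[["age", "income"]])).fit()

print(ordinal_fit.summary())
print(binary_fit.summary())
```

The ordered model estimates thresholds between the response categories rather than assuming that the step from "never" to "sometimes" equals the step from "sometimes" to "frequently."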

Dependent variables are not the only statistical constructs subject to a wealth of possible specifications.

Modeling requires careful thinking about both the independent and the dependent variables.

One situation in which this commonly occurs, and which is likely to be of interest to social workers, is when data are nested or clustered. For example, families and children may be clustered inside communities, schools, health care centers, hospitals, organizations, and so forth.

As Raudenbush and Bryk (2002) have pointed out, failure to account for this clustering or nesting may lead us to mistakenly infer statistically significant results.

For example, in a study of the effects of neighborhood social climate on outcomes for children and youths, failure to account for the fact that two residents live in the same neighborhood might overstate the importance of some effects. One advantage of the use of so-called multilevel models is that they lead to more modest, but more accurate, conclusions with data that are clustered by some unit (for example, neighborhoods, schools, organizations).
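A minimal sketch of such a multilevel (mixed-effects) model appears below; the file and variable names are hypothetical, and it assumes pandas and statsmodels are available. The random intercept for neighborhood is what accounts for the clustering.

```python
# A hedged sketch of a two-level random-intercept model for clustered data.
# Variable and file names are hypothetical; assumes pandas and statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("neighborhood_study.csv")  # hypothetical clustered data

# Children (level 1) are nested within neighborhoods (level 2); the random
# intercept allows outcomes to vary across neighborhoods, so standard errors
# are not understated by treating clustered observations as independent.
model = smf.mixedlm(
    "child_outcome ~ social_climate + family_income",
    data=df,
    groups=df["neighborhood_id"],
)
result = model.fit()
print(result.summary())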

Another situation that may call for more advanced statistical expertise is when the research question, the design, and the data lend themselves to the modeling of unobserved variables, as in the findings of the study by Grogan-Kaylor (2004) described earlier.

Note that the common ordinary least squares regression model, which is often the starting point for coursework in multivariate methods, can only account for observed variables that are entered into the model.

In the Grogan-Kaylor study, one explanation for an observed relationship between parental use of corporal punishment and increased children's behavior problems might be unobserved aspects of context, such as a higher level of neighborhood violence that contributed both to parental use of corporal punishment and to children's behavior problems. If this factor had not been observed and not entered into a model, the results of the model could have been biased as a result.

In the Grogan-Kaylor (2004) example, the fixed-effects regression was able to control for a number of such unobserved variables, strengthening the conclusions provided.

As with many of the other statistical questions we have discussed, there is a rich and growing literature on the modeling of unobserved factors, including the techniques of fixed-effects regression (Wooldridge, 2002) and propensity scores (Morgan & Harding, 2006).
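For readers curious what a fixed-effects specification can look like in practice, the sketch below is a hypothetical illustration, not the published analysis; the panel data file and variable names are invented. It estimates child fixed effects by including an indicator for each child, so that stable unobserved characteristics of children and their contexts are absorbed.

```python
# A hedged sketch of a fixed-effects (least squares dummy variable) model for
# panel data. Names are hypothetical; this is not the published analysis.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("child_panel.csv")  # hypothetical repeated observations per child

# Including C(child_id) adds an intercept for every child, absorbing all
# time-invariant characteristics of the child and his or her context
# (observed or not), so the corporal punishment coefficient is identified
# from within-child change over time.
fe_model = smf.ols(
    "antisocial ~ corporal_punishment + child_age + C(child_id)",
    data=df,
).fit()

print(fe_model.params["corporal_punishment"])
```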

In this editorial, we offer a number of thoughts on approaches that we think could help knowledge development activities of social work researchers.

We recognize that we have been inconsistent in our own work in following these ideas.

Therefore, we offer these thoughts in the spirit of guidelines to which we aspire.

We are certain that if we and our colleagues follow these guidelines, the quality of all of our knowledge development activities will increase.

REFERENCES

Grogan-Kaylor, A. (2004). The effect of corporal punishment on antisocial behavior in children. Social Work Research, 28, 153-162.

Morgan, S. L., & Harding, D. J. (2006). Matching estimators of causal effects: Prospects and pitfalls in theory and practice. Sociological Methods and Research, 35, 3-60.

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods. Thousand Oaks, CA: Sage Publications.

Wooldridge, J. M. (2002). Econometric analysis of cross section and panel data. Cambridge, MA: MIT Press.

Andrew Grogan-Kaylor, PhD, is associate professor, and Jorge Delva, PhD, is professor, School of Social Work, University of Michigan, Ann Arbor, MI 48109; e-mail: [email protected] and [email protected].
