
14

Student Progress and Outcomes

A New Age of Accountability

Around twenty-five years ago, after decades of ineffective requests from a small group of analysts and psychometricians, the accrediting associations began insisting that colleges define and document student and institutional outcomes. Until then, researchers and commentators had made little headway in convincing educators in any postsecondary sector to specify in measurable terms just what they were trying to teach and how much success they were having; the importunacy hardly penetrated the world of educational practice. But gradually the state legislatures too began making demands for evidence not only of student attainment but also of college contributions to the broader community.

Then the U.S. Department of Education entered the conversation, even to the point of suggesting a national collegiate assessment system. The major philanthropic organizations jumped on board as well, putting millions toward the measurement and improvement of student progress and success. Now, outcomes assessment is a serious consideration on every campus.

This chapter considers measures of student progress — including retention, credit accumulation, progression through developmental sequences, and success in gateway courses — and measures of student outcomes, primarily transfer and degree and certificate completion, as well as other, more broadly defined measures of success after program completion. Also examined are occupational outcomes and their benefits, both to individuals and to the public at large. The chapter then discusses the growing accountability movement, including outcomes assessment, recent national efforts to evaluate student progress and outcomes at community colleges, and problems and possibilities in assessment.

Measures of Student Progress

The most widespread measure of student progress is a college's ability to retain students from year to year or from semester to semester.

The National Center for Higher Education Management Systems (NCHEMS) aggregates and reports Integrated Postsecondary Education Data System (IPEDS) data on the percentage of first-time college freshmen returning for a second year. In 2010, 53 percent of community college students nationwide were retained to their sophomore year. Full-timers demonstrated higher retention rates (60 percent) than their part-time counterparts (41 percent), and rates varied widely among the states, from 24 percent in Alaska (the only state with community college retention rates lower than 40 percent) to 66 percent in South Dakota (NCHEMS, 2010).

Fall-to-next-term retention rates are typically higher than the more traditional fall-to-fall rates, as are retention rates among degree- or certificate-seeking students (not surprisingly, as students attending for skills upgrading or personal interest may have reached their goals in the first year).

Traditionally, community colleges (or agencies on behalf of the colleges) have defined retention as the percentage of students who remain enrolled from one fall term to the next. However, this methodology ignores the fact that one-third of community college students transfer to or take courses at other institutions, as well as the fact that some short-term occupational certificates can be completed in a year or less. A more accurate gauge of student persistence involves subtracting transfers and completers from the denominator; limiting both the denominator and the numerator to degree- or certificate-seeking students may similarly improve the relevance of the statistic. Arizona's community colleges calculate retention in this manner. In 2012 the ten districts in that state reported a 77 percent fall-to-fall retention rate, a number substantially higher than the 48 percent reported by NCHEMS (Arizona Community Colleges, 2012).
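
A minimal sketch of this denominator adjustment, assuming hypothetical cohort records; the field names and figures are invented for illustration and are not drawn from NCHEMS, IPEDS, or Arizona reporting:

```python
# Fall-to-fall retention, raw versus adjusted. Invented records; each entry
# is one student in the entering fall cohort.

def raw_retention(cohort):
    """Share of the entering cohort enrolled the following fall."""
    return sum(s["enrolled_next_fall"] for s in cohort) / len(cohort)

def adjusted_retention(cohort):
    """Treat completers and transfers as resolved outcomes rather than
    losses: drop them from the denominator before computing retention."""
    at_risk = [s for s in cohort if not (s["completed"] or s["transferred"])]
    if not at_risk:
        return 1.0
    return sum(s["enrolled_next_fall"] for s in at_risk) / len(at_risk)

cohort = [
    {"enrolled_next_fall": True,  "completed": False, "transferred": False},
    {"enrolled_next_fall": False, "completed": True,  "transferred": False},
    {"enrolled_next_fall": False, "completed": False, "transferred": True},
    {"enrolled_next_fall": False, "completed": False, "transferred": False},
]

print(f"raw retention:      {raw_retention(cohort):.0%}")       # 25%
print(f"adjusted retention: {adjusted_retention(cohort):.0%}")  # 50%
```

The same cohort yields a much higher rate under the adjusted definition, which is the direction of the Arizona-versus-NCHEMS gap noted above.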

Many community college advocates have recently argued that more nuanced measures of student progress are at least as important as traditional indicators such as retention and graduation rates. Intermediate or progress indicators can be assessed over shorter periods (one or two years, as opposed to four or six for outcomes measures), and meaningful information about where students are or are not succeeding can be used to improve programs and services, ideally while those students are still enrolled. Student progress measures have been criticized, however, for putting pressure on faculty and administrators to game the system by, for example, passing students who might otherwise deserve to fail or pushing inadequately prepared students into college-level courses. Course pass rates are particularly susceptible to these pressures, especially if they are tied to an instructor's evaluations or an institution's funding.

Nonetheless, student progress metrics are increasingly included in state and national accountability efforts. One such measure relates to the idea that student progress and success may be affected by when and whether students reach certain momentum points. A 2009 study of California community college students showed significantly higher retention and completion rates among full-time students who completed a college-level math or English course within two years, as well as among those who earned at least twenty credits in their first year (Moore, Shulock, and Offenstein, 2009). Based on evidence such as this, the American Association of Community Colleges' national Voluntary Framework of Accountability (VFA) initiative has incorporated an expanded credit threshold measure: the percent of full-time learners completing forty-two credits and the percent of part-time learners completing twenty-four credits within two years. Among Arizona's community colleges, 45 percent of full-time credential-seeking students beginning in 2009 had attained the two-year credit threshold, as had 51 percent of part-time credential-seekers (Arizona Community Colleges, 2012).
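
A sketch of the credit threshold calculation, assuming invented student records; only the forty-two- and twenty-four-credit cutoffs come from the VFA description above:

```python
# VFA-style two-year credit threshold: 42 credits for full-time students,
# 24 for part-time. Student records are hypothetical.

THRESHOLDS = {"full-time": 42, "part-time": 24}

def credit_threshold_rate(cohort, status):
    """Share of students with the given enrollment status who reached
    the applicable credit threshold within two years."""
    group = [s for s in cohort if s["status"] == status]
    if not group:
        return 0.0
    met = sum(s["credits_in_two_years"] >= THRESHOLDS[status] for s in group)
    return met / len(group)

cohort = [
    {"status": "full-time", "credits_in_two_years": 48},
    {"status": "full-time", "credits_in_two_years": 30},
    {"status": "part-time", "credits_in_two_years": 27},
    {"status": "part-time", "credits_in_two_years": 12},
]

for status in THRESHOLDS:
    print(f"{status}: {credit_threshold_rate(cohort, status):.0%}")
```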

Success in gateway courses — defined as first college-level courses in English and math — has also been shown to improve student retention and completion, as these courses are typically required for a degree or certificate. Accordingly, the National Community College Benchmark Project (NCCBP) tracks success rates in gateway courses such as English Composition I and II, Speech, and College Algebra. According to the NCCBP (2012), national success rates in these courses range from 61 percent (in Algebra) to 77 percent (in Speech). However, many students — especially those who require remediation — never enroll in gateway courses; fewer than half of the students who first enrolled in a Virginia community college in 2004 completed a college-level English course within four years, and just over one-quarter completed a college-level math class (Roksa et al., 2009). Data such as these have spurred the VFA and other national accountability initiatives to routinely collect data on the percent of developmental students who successfully complete college-level courses in English and math within six years.

Many colleges also track students' success in and progress through developmental sequences. A 2010 study of California community colleges showed that just over 50 percent of students passed the first developmental math class in which they enrolled; another 25 percent withdrew from the course, and the rest (25 percent) failed it. Pass rates in developmental writing courses were only slightly higher (Perry, 2010). An examination of developmental success rates among colleges affiliated with Achieving the Dream (a nongovernmental initiative spawned in 2004 by the Lumina Foundation) showed that 15 percent of students referred to developmental education completed the entire sequence; an additional 40 percent completed some of it, and 45 percent did not complete any developmental requirements (Achieving the Dream, 2008). Whether in developmental or college-level programs, student progress indicators may be more likely than traditional outcomes measures to foster institutional change. Nonetheless, most state and national accountability efforts still put more weight on outcomes such as transfer and graduation rates.

Transfer Rates

Transfer rates have long been a primary measure of institutional effectiveness, although the definitions used to calculate the rates have varied, often dramatically. In 1989, the Center for the Study of Community Colleges (CSCC) began collecting data on transfer, using the following definition: "all students entering the community college in a given year who have no prior college experience, and who complete at least twelve college credit units within four years of entry, divided into the number of that group who take one or more classes at an in-state, public university within four years." By collecting data from individual colleges and state agencies, CSCC published national transfer rates for students entering in each year, beginning in 1984. Table 14.1 displays those findings.

Table 14.1. National Transfer Assembly Transfer Rates, 1989–2001

Year of Entry | Received 12+ Credits within Four Years (%) | Transferred within Four Years (%) | Colleges in Sample
1984 | 50.5 | 23.7 | 48
1985 | 46.7 | 23.6 | 114
1986 | 46.7 | 23.4 | 155
1987 | 46.9 | 22.6 | 366
1988 | 45.5 | 22.1 | 395
1989 | 44.3 | 21.5 | 416
1990 | 47.1 | 21.8 | 417
1991 | 47.3 | 22.1 | 424
1993 | 50.7 | 23.4 | 345
1995 | 52.5 | 25.2 | 538

Source: Szelenyi, 2002.

Figures for the 1995 entrants were corroborated by the National Center for Education Statistics (NCES), which found over 22 percent of the students entering two-year institutions transferring within three years (Bailey, Jenkins, and Leinbach, 2005). Different rates obtain when students transferring to private or out-of-state institutions are included. The transfer rates certainly would be further inflated if more than four years were allowed before tabulating the transfers. Data from the National Student Clearinghouse (2012b) show that 20 percent of first-time community college students transfer to a four-year institution within five years. Constricting the denominator also affects the rates: Doyle (2006) used data from the Beginning Postsecondary Students Longitudinal Survey to show that 66 percent of the students who declared bachelor's degree intent at entry had transferred within six years. Townsend (2002) found a transfer rate of 63 percent among associate in arts degree recipients. Since community college matriculants arguably are potential transfers until they either show up at a university or die, the transfer rate calculations can never be fully reflective of student performance.
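
The CSCC definition quoted above reduces to a straightforward cohort calculation. The sketch below encodes it with invented records; the field names are assumptions, not CSCC's:

```python
# CSCC transfer rate: the denominator is first-time students earning 12+
# college credits within four years of entry; the numerator is the subset
# taking one or more classes at an in-state public university within
# four years.

def cscc_transfer_rate(cohort):
    eligible = [s for s in cohort
                if s["first_time"] and s["credits_within_4yr"] >= 12]
    if not eligible:
        return 0.0
    transfers = sum(s["univ_class_within_4yr"] for s in eligible)
    return transfers / len(eligible)

cohort = [
    {"first_time": True,  "credits_within_4yr": 24, "univ_class_within_4yr": True},
    {"first_time": True,  "credits_within_4yr": 15, "univ_class_within_4yr": False},
    {"first_time": True,  "credits_within_4yr": 6,  "univ_class_within_4yr": False},  # excluded
    {"first_time": False, "credits_within_4yr": 30, "univ_class_within_4yr": True},   # excluded
]

print(f"{cscc_transfer_rate(cohort):.0%}")  # 50%: one transfer among two eligible students
```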

The transfer rates can be modified by adding in different types of information. For example, how many entering students aspire to further education? The way that the question is asked is key. A compilation of data from several NCES studies regarding purposes for attending found 37 percent of the students beginning in community colleges citing transfer intentions (Hoachlander, Sikora, and Horn, 2003). But subsequent NCES reports found that in response to the question “What is the highest level of education you ever expect to complete?,” 71 percent in 2000 and over 75 percent in 2004 indicated a bachelor’s degree or higher (Bradburn and Hurst, 2001; Horn, Nevill, and Griffith, 2006).

And when Hagedorn and Maxwell (2002) asked a smaller sample, "If there were no obstacles, what is the highest degree you would like to obtain in your life?", 88 percent aspired to a bachelor's or beyond.

Obviously, the way students' intent to transfer is calculated is reflected in the overall percentage of students reaching that goal. This has led many states and agencies — including CSCC — to determine transfer intent by students' actual course-taking patterns, as opposed to statements of intent alone. California's community colleges calculate their transfer rate as the "percentage of first-time students with a minimum of 12 credits earned who attempted transfer-level math or English during enrollment who transferred to a baccalaureate granting institution within six years" (California Community Colleges, 2012b, p. 13). Under these parameters, 42 percent of students beginning college in 2004–05 and meeting the cohort definition had transferred by 2010–11.
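
Under the California definition just quoted, only the eligibility filter and the time window change; the records and field names below are again hypothetical:

```python
# California-style transfer rate: first-time students with 12+ credits who
# attempted transfer-level math or English, transferring within six years.

def ca_transfer_rate(cohort):
    eligible = [s for s in cohort
                if s["first_time"]
                and s["credits_earned"] >= 12
                and s["attempted_transfer_level_math_or_english"]]
    if not eligible:
        return 0.0
    return sum(s["transferred_within_6yr"] for s in eligible) / len(eligible)

cohort = [
    {"first_time": True, "credits_earned": 20,
     "attempted_transfer_level_math_or_english": True,
     "transferred_within_6yr": True},
    {"first_time": True, "credits_earned": 20,
     "attempted_transfer_level_math_or_english": True,
     "transferred_within_6yr": False},
    {"first_time": True, "credits_earned": 20,
     "attempted_transfer_level_math_or_english": False,  # excluded from cohort
     "transferred_within_6yr": False},
]

print(f"{ca_transfer_rate(cohort):.0%}")  # 50%
```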

Interestingly, although the transfer rate in most of the states with comprehensive college systems clusters around the 25 percent national mark, the range between states is from 11 to 40 percent. Some of the reasons for this wide interstate disparity are obviously related to the structure of higher education within a state. Where the two-year colleges are organized as branch campuses of the state university, the transfer rates are high; where they function as technical institutes that emphasize trade and industry programs, the transfer rates are low. Deviations from the norm appear also in states where transfer to independent universities is a prominent feature of the higher education system or policies related to enrollment have been effected. For example, state-mandated limitations on college growth eventually elevate the transfer rate because the community colleges tend to react to enrollment caps by cutting the programs that attract adult, part-time students, that is, those least likely to transfer. Transfer rates among colleges in the same state similarly show wide variations, undoubtedly because of local conditions, community demographics, college proximity to a university campus, and employment or economic conditions in the district (Cohen and Brawer, 1996). Nonetheless, for the analyst seeking evidence of the role of the community college in assisting people toward the baccalaureate, the data that have been collected uniformly across the states are indispensable.

Success after Transfer

One way of measuring students' success after transfer involves retrospective analyses of the transcripts of baccalaureate recipients to see how many transferred credits from community colleges. A recent study from the National Student Clearinghouse (2012c) illustrates the community colleges' contribution to baccalaureate degree production nicely, showing that 45 percent of all students who completed a bachelor's degree in 2010–11 had previously enrolled at a two-year college. These rates varied greatly among the states: those with large community college systems, such as Texas, reported that as many as 78 percent of bachelor's degree recipients had transferred community college credits; in Alaska, New Hampshire, and Delaware, that proportion was much lower, around 20 percent.

Numerous studies have been made of the students who transfer to baccalaureate-granting institutions. Once the students arrive, they eventually seem to do as well as the native juniors, although they may take longer overall to obtain the bachelor's degree. Other consistent findings are that students who transfer with greater numbers of credits do better than those with fewer, especially if they have received an associate degree, and that continuous enrollment, even part time, increases the probability of degree completion (Adelman, 2007, p. xxi). The National Student Clearinghouse (2012d) recently reported that 79 percent of transfer students who had earned an associate degree obtained the baccalaureate within four years, compared with 55 percent of students who transferred without an associate degree.

The phenomenon of transfer shock, the first-term decline in grade-point average that has been observed for decades, is still apparent, evidenced by studies done in Virginia (Tidewater Community College, 2005), Iowa (Breja, 2006), Hawaii (University of Hawaii, 2005), North Carolina (Glass and Harrington, 2002), and Maryland (Filipp, 2004). The reasons that students transferring to universities may have difficulties are not fully understood. Possibly the native students were tied into an informal network that advised them on which professors and courses were most likely to yield favorable results. Students transferring to research universities have said the competitive environment differed markedly from the cooperative relationships they had enjoyed at their community colleges (Chang, 2006; Townsend and Wilson, 2006). Transfers may have satisfactorily completed their distribution requirements at the community colleges but could not do as well when they entered the specialized courses at the universities. Community colleges may have been passing students who would have failed or dropped out of the freshman and sophomore classes in the senior institutions.

And as a group, the community college students must still contend with the circumstances that drove their initial enrollment at a two-year college: lower academic ability; less money for college; children and family responsibilities; and the need to work while enrolled. All these variables probably operate to some degree and tend to confound the reasons for posttransfer dropout and failure.

Forty years ago Astin (1977) said that, for those who begin at a community college, "even after controlling for the student's social background [and] ability and motivation at college entrance, the chances of persisting to the baccalaureate degree are substantially reduced" (p. 234). He found several factors leading to the attainment of a degree: residence on campus; a high degree of interaction with the peer group; the presence of good students on the campus; and full-time-student status, factors rarely found in community colleges. But the situation is not as bleak as some commentators have made it. Pascarella and Terenzini's (2005) meta-analysis showed that initial attendance at a two-year institution reduces the likelihood of bachelor's degree completion by 15 to 20 percent. However, two-year college students "who make the transfer are as likely as four-year matriculants to persist overall (76 percent versus 78 percent)" (p. 376). Their most telling comment was: "Net of other relevant variables, former community college students were as likely as their four-year counterparts to graduate from a baccalaureate degree-granting institution, to aspire to attend graduate school, and to enroll in graduate school" (p. 377, italics added).

Furthermore, students who began at community colleges typically transferred to four-year institutions that have students with considerably higher SAT scores. Put another way, "students are able to attend more selective four-year institutions if they first attend community colleges" (p. 495). This benefit is largest for community college students who come from poor families, are of low ability, or performed poorly in high school.

Degree and Certificate Completion

Like transfer rates, graduation rates have long been used to assess community college effectiveness. However, recent efforts to increase college completion — President Barack Obama's goal to produce an additional five million community college graduates by 2020, the Lumina Foundation's efforts to increase the proportion of Americans with high-quality degrees and credentials to 60 percent by 2025, and the Bill & Melinda Gates Foundation's investments in college completion initiatives, to name the biggies — have prompted new scrutiny of these statistics and strategies to improve them. Few community colleges have escaped pressure to improve graduation rates, especially among populations that have been traditionally less likely to complete: part-timers, low-income students, underrepresented minorities, adult students, and those first in their families to attend college.

As with transfer rates, the way graduation rates are calculated leads to vastly different conclusions about how well the colleges are performing in this area. An analysis of data from the BPS showed that by spring 2006 just under 10 percent of community college students beginning in fall 2003 had completed an associate degree and another 5 percent had earned a certificate (NCES, Digest, 2009). According to these figures, the national three-year community college graduation rate was 14 percent. A subsequent NCES report showed a 20 percent three-year attainment rate among students beginning in 2007 (NCES, Digest, 2011).

Degree and certificate completion rates rise when credentials earned at institutions other than the one where the student began are taken into account. Data from the BPS show that after six years, 34 percent of students who began in a community college in 2003 had earned a postsecondary credential: 9 percent had earned an associate degree, 14 percent a certificate, and 12 percent a bachelor's degree. National Student Clearinghouse (2012a) data examining students entering in 2006 showed similar results: within six years, 24 percent of students beginning in community colleges had attained a degree or certificate from their original institution; an additional 12 percent had earned a credential from another two- or four-year school. As Table 14.2 and these figures illustrate, national community college degree and certificate attainment rates can vary from 14 to 36 percent depending on the definitions and databases used. Extending the time period to ten years after initial entry might further increase completion rates, especially among students who transferred (without a degree) to a four-year institution, since six years after their initial entry roughly 20 percent of students are still taking classes.
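
How the rate widens when credentials earned anywhere are counted can be shown in a few lines; the cohort below is invented, not BPS or Clearinghouse data:

```python
# Attainment at the first institution versus at any institution.

def attainment_rates(cohort):
    n = len(cohort)
    at_first = sum(s["credential_from_first"] for s in cohort) / n
    anywhere = sum(s["credential_from_first"] or s["credential_elsewhere"]
                   for s in cohort) / n
    return at_first, anywhere

cohort = [
    {"credential_from_first": True,  "credential_elsewhere": False},
    {"credential_from_first": False, "credential_elsewhere": True},   # swirler
    {"credential_from_first": False, "credential_elsewhere": False},
    {"credential_from_first": False, "credential_elsewhere": False},
]

at_first, anywhere = attainment_rates(cohort)
print(f"at first institution: {at_first:.0%}")  # 25%
print(f"at any institution:   {anywhere:.0%}")  # 50%
```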

Critics of the colleges typically elect to use rates calculated after three or four years to bolster charges that the institutions are failing to graduate the vast majority of their students, but in reality these rates do little except reflect high numbers of part-time and swirling students at community colleges (after three years, almost half of all students are still enrolled). Indeed, a National Postsecondary Student Aid Study did a cross-sectional survey of all students enrolled during an academic year and found that "only 22% … enrolled full-time and for the full academic year" (Bailey, Crosta, and Jenkins, 2007, p. 2).

Table 14.2. Degree and Certificate Attainment Rates among Students Beginning in Community Colleges after Three and Six Years

Attainment after 3 years
  BPS, 2003 entering cohort: at first institution, any degree or certificate 14.3 percent (associate degree 9.7, certificate 4.6); still enrolled at any institution, 46.3 percent.
  NCES, Digest, 2007 degree-seeking cohort: at first institution, any degree or certificate 20.4 percent.

Attainment after 6 years
  NCES, Digest, 2003 entering cohort: at any institution, any degree or certificate 34.4 percent (associate degree 8.5, certificate 14.4, bachelor's degree 11.6); still enrolled at any institution, 19.6 percent.
  National Student Clearinghouse, 2006 entering cohort*: at first institution, any degree or certificate 23.9 percent; at any institution, 36.3 percent; still enrolled at any institution, 20.1 percent.

*Note: National Student Clearinghouse data reflect only the first degree or certificate earned; bachelor's degrees subsequently earned by students already holding associate degrees, for example, are not included here.

Sources: NCES, 2009; NCES, Digest, 2011; National Student Clearinghouse, 2012a.

College leaders and other advocates, not surprisingly, argue that the six-year attainment rates are more accurate gauges of community college effectiveness; some even suggest that degrees earned elsewhere should count as a success as long as the student attaining the credential earned a minimum number of credits at their college along the way. Indeed, the renewed national focus on completion as well as the emergence of the VFA and other initiatives that seek to define outcomes measures that are more appropriate to the community college setting have led to a broader view of what should qualify as a community college success.

A Broader View of Completion

The rationale for taking a broader view of community college completion is captured in the following three scenarios:

1. A recent high school graduate attends a community college for a few terms and then transfers to an out-of-state (or private, or for-profit) institution. She successfully transfers and earns a bachelor's degree, but because the transfer is not captured in any statewide or national data system, she is considered a failure.

2. An adult student enrolls full time at a community college to earn a degree or certificate. However, his financial situation changes and he must begin to work forty hours per week, taking classes whenever he can fit them in. As a result, he stops and starts frequently and swirls among colleges in the area to take courses at times and places that fit his schedule. He completes his associate degree but takes more than four years to do so. He is considered a failure by all of the community colleges he attended.

3. A student enrolls in a community college with the intent of upgrading her skills and possibly earning an occupational certificate or degree to qualify for a promotion. She earns more than thirty credits, is promoted in the process, and sees no need to return for further training. She achieved her goal but is considered a dropout.

Until recently, all three of these students would have been considered failures in the eyes of IPEDS and other national reporting systems. However, the first student could now be counted as an official transfer if the college utilized the National Student Clearinghouse's StudentTracker service; the second would be considered a completer in national surveys tracking outcomes for six years (he would be considered a success by all of the institutions he attended if they used StudentTracker to trace lateral transfers of credit); and the third would qualify as a success under the VFA's aggregate measure of all successful community college outcomes. The aggregate measure is particularly useful as it recognizes the diversity of education and training goals — as well as attendance patterns — among community college students. Through this measure the VFA has broadened the definition of a successful community college outcome to: earning a degree or certificate; transferring to another two- or four-year college or university; continued enrollment; or leaving the institution after earning thirty or more credits. In Arizona, 77 percent of credential-seeking students entering a community college in 2005 achieved a successful outcome within six years, a number parallel to six-year graduation rates at public universities (Arizona Community Colleges, 2012).
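
A hypothetical encoding of the VFA aggregate measure; the field names are assumptions, and only the four outcome categories and the thirty-credit cutoff come from the description above:

```python
# A student counts as a success on earning a degree or certificate,
# transferring, remaining enrolled, or leaving with thirty or more credits.

def vfa_success(s):
    return (s["earned_credential"]
            or s["transferred"]
            or s["still_enrolled"]
            or (s["left"] and s["credits"] >= 30))

cohort = [
    {"earned_credential": True,  "transferred": False, "still_enrolled": False, "left": True,  "credits": 60},
    {"earned_credential": False, "transferred": True,  "still_enrolled": False, "left": True,  "credits": 24},
    {"earned_credential": False, "transferred": False, "still_enrolled": True,  "left": False, "credits": 18},
    {"earned_credential": False, "transferred": False, "still_enrolled": False, "left": True,  "credits": 33},
    {"earned_credential": False, "transferred": False, "still_enrolled": False, "left": True,  "credits": 9},
]

rate = sum(map(vfa_success, cohort)) / len(cohort)
print(f"aggregate success rate: {rate:.0%}")  # 80%: only the last student counts as a loss
```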

However, although tools are now available for community colleges to report successful outcomes more broadly, there remains some pressure (mostly from outsiders) to stick with traditional calculations of degree and certificate attainment. After a national U.S. Department of Education advisory committee recommended in 2012 that community college graduation rates be expanded to include the large number of students who transfer to other schools after having completed thirty or more credits, and that the time period used to calculate outcomes be expanded from three to four years (modest changes compared with the VFA's aggregate measure), critics pounced, arguing that community colleges should be more focused on improving their graduation rates than "trying to … reconstruct them" (Marcus, 2012, n.p.). Indeed, changing the way community colleges count completers will not make a difference in the proportion of adults holding degrees or certificates, but that fair point pales in light of the reality that community college purse strings are increasingly being tied to the institutions' ability to serve their students effectively. Those who would choose not to count as many successful outcomes as possible are effectively denying that community college attendance yields individual and societal benefits for any reason other than attaining a degree or certificate — a demonstrably false position. Yet the pressure to report degree completion over all else remains and, while more colleges will likely collect and report data on broader outcomes measures in the future, convincing legislators to look at anything other than a completion rate may be an uphill struggle.

Goal Attainment

Goal attainment takes many forms, with transfer and job getting leading the list. As confirmed in the studies of dropouts, most students seem to attain at least their short-term goals. Students usually have more than one reason for attending, and the importance of one or another may shift over time. Several colleges have begun tracking students' intentions to better serve their students and, ultimately, to more accurately assess the institutions' ability to help students attain their goals. Pima Community College (Arizona) requires all students registering for classes at the beginning of each semester to fill out a short, online survey asking their reasons for attending the college. Options include pursuing a degree or certificate, job skills, personal interest, transfer to a four-year institution, and university student taking courses at Pima. Students may change their intention each semester, allowing the college to track goals over time and tailor services accordingly. Ultimately, survey data will be used to determine the percentage of students who achieved their stated goals (Arizona Community Colleges, 2011).

Basing completion data on student intentions will likely grow in popularity as colleges move toward outcomes assessment that is more meaningful to their population than traditional university outcomes such as retention and graduation rates. It also recognizes that completion of a degree or certificate is not always the best course of action for students. Indeed, the students who attend for only a short time and then transfer or go to work without receiving a degree or certificate may be the pragmatic ones. The associate degree itself has had little value in the marketplace, a fact acknowledged by the American Association of Community and Junior Colleges, which organized a short-lived Associate Degree Preferred campaign (Parnell, 1985) to encourage students to obtain degrees and employers to give preference to those who have them.

The proponents of program completion policies must continually battle not only the students' and employers' perceptions but also the universities that readily accept transfers without associate degrees and the colleges' own managers, who may want to maintain the institutions as passive environments providing ad hoc studies for anyone at any time. Still, there is an earnings premium of 13 percent for females aged twenty-five to thirty-four with some college experience or an associate degree relative to those in that age group who hold high school diplomas only. The earnings differential for males is slightly higher, at 15 percent (College Board, 2010). This premium is largely a result of the credentials (lab technician, paralegal, medical assistant) that many graduates possess.

Occupational Outcomes

Career programs are established with the intention of preparing students for employment and serving industries by supplying them with trained workers. The college staff presumably initiate programs by perusing employment trends in the local area and surveying employers. Program coordinators are appointed, and advisory committees composed of trade and employer representatives are established. Funds are often secured through priorities established by state and federal agencies. The entire process suggests rational program planning. Nonetheless, questions have been raised about the appropriateness of certain programs and whether the matriculants are well served, and much research on program effects has been mandated by state and federal funding agencies.

Most students in occupational programs seem satisfied with the training they receive. Follow-up studies routinely find 80 to 90 percent of the program graduates saying that they were helped and that they would recommend the program to others. Among the students who do not complete a program, a sizable number usually indicate that they dropped out because they received all the training they needed in the courses they took, not because they were dissatisfied with the program. Employers seem satisfied as well. National studies routinely find high percentages of employers approving of the colleges' quality of training, responsiveness to employer needs, and cost of training. Surveys of major employers of recent graduates in Tennessee received "overwhelming positive results, ranging from satisfied to very satisfied" (Tennessee Higher Education Commission, 2003, p. 3), but the employers appear to be "less satisfied with communication skills … than with other skills" (p. 3).

Several states also collect data on licensure pass rates among occupational program completers. Among Texas technical and community college students who sat for licensure or certification exams in 2007–08, 91 percent passed. Pass rates ranged from 83 percent in industrial services to 97 percent in legal and protective services (Texas Higher Education Coordinating Board, 2009). In North Carolina in 2008–09, 86 percent of community college students taking the exams passed (North Carolina Community College System, 2010). A total of 87 percent of Massachusetts graduates from community college nursing programs passed the registered nurse exam in 2008, a rate that mirrors the national pass rate (Massachusetts Department of Higher Education, 2010).

Several statewide data sets showing the number of students who obtain employment are available. Among Wisconsin Technical College graduates, 76 percent of those employed one year later were in jobs related to their training (Wisconsin Technical College System, 2006). Of Texas technical program completers in 2009, 79 percent were employed within six months of graduating (Texas Higher Education Coordinating Board, 2011). The numbers of employed graduates elsewhere were 81 percent in Washington; 58 percent in Oregon; 78 percent in Connecticut; 70 percent in Wyoming; and 85 percent in Florida (Washington State Board for Community and Technical Colleges, 2006; Oregon Performance Reporting Information System, 2010; Connecticut Department of Higher Education, 2006; Wyoming Community College Commission, 2004; Florida Department of Education, 2007). But the reliability of these types of data is dubious. The employer satisfaction reports invariably rest on response rates from a minority of those surveyed. The employment studies may or may not include students continuing their education, those working in fields unrelated to the programs in which they were enrolled, their initial intentions, whether they received a degree or certificate, students in prison, those for whom data are missing, and those working out of state.

Furthermore, the data on program success must be interpreted in the light of the programs' features and the students enrolled. The number who are already employed and enter vocational programs only to get additional skills must be factored in. Students who leave before completing the programs and enter employment in the field for which they are prepared should be considered program successes; these job-outs account for as many as 75 percent of the students in some programs. Students who graduate but do not obtain employment because they have entered related baccalaureate programs should not be counted among the unemployed. And it is misleading to place all occupational programs in the same category, because there are high- and low-status programs. Also, there are programs preparing people for areas of high demand and those in areas where the market is not as distinct. Much depends also on the time that has elapsed since the students were enrolled; the ordinary drift of careers suggests that fewer students will be employed in jobs related to their program several years after they have left college.

Some critics of occupational education are concerned that the programs do little to equalize status and salaries among types of jobs. They view with alarm the high dropout rates without realizing that program completion is an institutional artifact. To the student who seeks a job in the field, completing the program becomes irrelevant as soon as a job is available. The categories graduate and dropout lose much of their force when viewed in this light. This phenomenon is not peculiar to community colleges: students in liberal arts programs at universities, for example, frequently enter professions unrelated to their undergraduate majors in Russian studies, art history, anthropology, and the like. If one merely surveys the occupational program graduates who are working in that area, or places graduates in one category and dropouts or job-outs in another, the true services rendered by those programs may be lost.

Questions about the value of occupational education are far too complex to be answered with simplistic data on job entry, licensure pass rates, and first salary earned. What is the value of a program when an enrollee hears about an available job, obtains it, and leaves after two weeks? In that case, the program has served as an employment agency of sorts. What is the value of a sequence in which a person who already has a job spends a few weeks learning some new skills and then receives a better job in the same company? There the courses have served as a step on a career ladder. What of the person who enrolls to sharpen skills and gain confidence to apply for a job and ends up doing essentially the same work but for a different company? And what of the students who enter occupational programs but then transfer to other programs in the same or a different college?

A curriculum is a conduit through which people move to prepare themselves to do or be something other than what they were. Yet for some people, the curriculum has served an essential purpose if it but allows them to matriculate and be put in touch with those who know where jobs may be obtained. Indeed, half or more of the students who obtain jobs learn of them through their program contacts, in many cases where, as Rosenfeld notes, “close ties between faculty and employers and informal labor market information networks make traditional college placement services superfluous” (1998, p. 18). At the other extreme are the students who go all the way through the curriculum and learn the skills but either fail to obtain jobs in the field for which they were trained or, having attained them, find them unsatisfying.

Success may be measured in many ways. Bailey, Alfonso, Scott, and Leinbach (2004) analyzed data from three national longitudinal studies and found the completion or transfer rates for occupational entrants to be between 7 and 11 percent lower than for students in liberal arts programs, even after controlling for their background characteristics and enrollment patterns "that are identified with lower levels of completion in college. One reason could be that many of them are seeking specific skills rather than degrees" (2004, p. 4). Other studies of both graduates and nongraduates of occupational programs have shown that although most enrolled to obtain job-entry skills, many sought advancement in jobs they already held.

Around 60 percent of the respondents to a survey of career program completers in a Kansas community college gave "prepare to enter job market" or "to change careers" as their reason for enrolling, whereas 13 percent had sought to improve their skills in jobs they already held (Conklin, 2000). The figures on job skill upgrading are notably steady; 11 percent of the students in vocational programs in California in 1979 had enrolled to improve skills for their current job (Hunter and Sheldon, 1980), and in 1999, 13 percent of Wisconsin Technical College graduates so indicated (Wisconsin Technical College System Board, 1999). Such data often fall between the planks when program follow-up studies or comparative wage studies are made.

Another important finding in studies of graduates and current enrollees in occupational programs is the sizable number who plan and eventually do transfer to four-year colleges. In national data compiled by CSCC in 1986, roughly 25 percent of students enrolled in vocational curricula said that they intended to transfer (Palmer, 1987a). A more recent study put that figure at 15 percent (Bailey and others, 2003). Regardless of their intentions when they enrolled, anywhere from 5 to 30 percent of occupational program graduates transfer immediately to baccalaureate institutions. Yet a transfer percentage double that of the national norm is seen in programs that are directly linked to the baccalaureate, as in teacher education and several health fields. Although the term occupational program includes several types of activities, from noncredit certificate and associate in applied science to associate in science, the relationship between these programs and further education is well established. Indeed, the overall transferability of the occupational courses suggests that "except for trade and industry courses, the concept of 'terminal education' should be laid to rest" (Cohen and Ignash, 1994, p. 29).

Because occupational education has several purposes, the measures of success that can be applied to it vary. It prepares people for specific jobs. How much do business and industry gain when their workers are trained at public expense? It assists the disadvantaged and people with disabilities to become self-sufficient. How much is that worth to society? It aids economic development. How much does a locality or region gain thereby? It enhances individual income generation and career mobility. What value has been added, person by person? Indicators of success and, indirectly, legislation and funding depend on which purpose is being reviewed.

Benefits to the Individual

Analysts of occupational education often consider the economic benefits to the enrollees. Because of the difficulty in disaggregating vocational and liberal arts curricula and the motives and subsequent earnings of the students who complete the programs or leave without a degree or certificate, the analyses usually relate cost of attendance and incremental earnings for matriculants overall as well as for occupational program graduates. Sanchez and Laanan (1998) calculated that students under age twenty-five who were working during their last year in college earned twice as much three years after receiving a degree or certificate as they had while still enrolled. Those aged twenty-five or older showed lesser percentage gains, primarily because their earnings while they were in college were substantially higher.

College Board (2010) data showed that the median salary for associate degree holders is $6,800 higher than the median salary for high school graduates (median earnings for those with some college but no degree are $4,900 higher than the figure for high school graduates). Put another way, “The earnings benefit to the average college graduate is high enough for graduates to recoup both the cost of full tuition and fees and earnings forgone during the college years in a relatively short period of time” (p. 7). Larger returns are enjoyed by students in health and technical fields, while courses in sales, basic education, and the liberal arts show relatively negligible earnings increments.

Many of the economic benefits studies consider cost of attendance as well as earnings increments. Because community college tuition is low, most of the cost to the students is in the form of forgone earnings, what the students could be making on a job if they were not spending the time in school. This is difficult to estimate precisely because most students work while attending.

The fact that possession of a degree is essential before one can obtain a job in certain fields (as in nursing) comes into play as well. For this reason, associate degree and certificate holders display considerably greater rates of return. Other variables include the different rates of employment and wages in different fields and the earnings displayed soon after leaving college compared with those enjoyed one or several years later. Nearly all analysts conclude that community college attendance yields a net benefit. They differ only on the amount of gain.
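
A back-of-the-envelope payback calculation in the spirit of these analyses, using the $6,800 earnings premium cited above; the tuition and forgone-earnings figures are assumptions for illustration, not from the source:

```python
# Rough payback period for an associate degree.

annual_premium = 6_800         # median earnings gain over high school graduates (College Board, 2010)
tuition_and_fees = 2 * 3_000   # assumed: two years of community college tuition and fees
forgone_earnings = 2 * 10_000  # assumed: earnings given up while enrolled

total_cost = tuition_and_fees + forgone_earnings
print(f"years to recoup: {total_cost / annual_premium:.1f}")  # about 3.8 years
```

Changing the forgone-earnings assumption (most students work while attending) moves the result substantially, which is one reason analysts agree on a net benefit but differ on the amount of gain.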

Benefits to the Public

Numerous publications have applauded the benefits of higher education to the public at large, which include higher tax revenues, lower demands on social support programs, better trained workers, lower incarceration rates, and higher levels of civic participation. A 2012 publication from a California advocacy group argued that for every $1 the state invests in higher education it will receive a net return of $4.50. Furthermore, even those students who enter but fail to complete college provide a return on investment of $2.40 to the state (Campaign for College Opportunity, 2012).

Studies linking college attendance with economic benefits to the community have similarly been reported for several decades; most are wildly distorted. Littlefield, a Long Beach City College (California) instructor, provided an example. Citing evidence that people who have been to college earn considerably more than people who have not been to college, Littlefield (1982) claimed that the community was saving an additional $18,000 per student per year, the difference between keeping a person in college or in jail. For these data to be credible one has to assume that all the students would be in prison if they were not attending college.

The most prolific set of reports dates from 1999, when the Association of Community College Trustees contracted with a group to develop a generic tool for documenting college benefits. The resultant template was based on four types of benefits: college expenditures on salaries; the higher earnings accruing to former students; social benefits such as reduced expenditures for prisons, welfare, and medical care resulting from the more healthful life style exhibited by alumni; and overall return to the public relative to their support of the colleges. Over five hundred studies resulted, some relating to all community colleges in a state and others to individual institutions. According to their analyses, the fifty-eight colleges in North Carolina account for $1.4 billion in the state's economy, $13.3 billion in alumni earnings, and $184.1 million per year in social benefits, while the state's taxpayers "see a real money 'book' return of 16.8%" on their annual investments in the colleges (Christophersen and Robison, 2004b, p. 1). Comparable figures for Connecticut's twelve colleges show $24 million per year in social benefits and a return to taxpayers of 18.3 percent (Christophersen and Robison, 2004a). And the twelve Oklahoma colleges save the state more than $38 million per year in "improved health and reduced welfare, unemployment, and crime," while the taxpayers receive a 14.9 percent return (Christophersen and Robison, 2003, p. 3). The same group used that formula to report data on numerous individual colleges. Their review of Grand Rapids Community College (Michigan) indicated contributions of close to $1 billion in higher earnings; reduced medical costs (employer savings from reduced absenteeism and reduced smoking and alcohol abuse among alumni); and reduced incarceration rates among students (Christophersen and Robison, 2006). Similar accounts report contributions of Ivy Tech Community College (Indiana) as more than $700 million and Housatonic Community College (Connecticut), $60 million (Dembicki, 2007).

The economic impact studies stand apart from studies of college effects on student access, learning, and progress through the higher education system. Such studies are rarely conducted routinely unless mandated by state or accreditation agencies, a recent phenomenon, whereas those on economic impact have long been used as marketing and public relations tools. Schuyler (1997) summarized the methodology, models employed, and findings of nineteen economic impact studies conducted by colleges over a twenty-year period and concluded that most were "geared toward policy makers, incorporating factors that exemplify the worthiness and value of the community college, and written so as to highlight the positive outcomes" (p. 76). In sum, studies of student learning as related to cost are rarely seen, possibly because of the difficulty in combining these variables; concepts of efficiency and learning may be ultimately incompatible.

Accountability

A half-century ago, the editor of the Junior College Journal cautioned the colleges about their mythmaking and marketing enthusiasm. He predicted presciently that "increased sales resistance will bring increased demands for proof — sound proof — to support claims concerning the quality of the [junior college] product" (Reynolds, 1957, p. 1; cited in Meier, 2008). Among the myths for which he found scant evidence were assertions that junior college instruction is superior to lower-division instruction at the universities, that smaller classes lead to greater personal attention and improved student outcomes, and that the local institution provides greater attention to local educational needs. And he warned that the ever-insistent demands on the public for more money would force the colleges "to translate their folklore into well-authenticated principles" (p. 2).

Accountability has grown since the 1960s as higher education expanded and as trust in and respect for all social institutions declined. The term refers to the responsibility of campus and system administrators to provide reports of their stewardship of public funds and "policy-relevant statistics produced regularly to support overall planning and monitoring at the national, state, or system level" (Leveille, 2006, p. 8). Performance accountability has three forms: performance funding, budgeting, and reporting. Performance funding connects state funding tightly to institutional performance. Performance budgeting means that the links between performance and funds allocated are more contingent. Performance reporting involves little or no connection between performance and funding.

Nearly all the states have some form of performance accountability. As of 2003, all but four used performance reporting; twenty-one had performance budgeting; and fifteen had performance funding (Burke and Minassians, 2003). All these systems are based on the same types of measures: remediation success; transfer to four-year institutions; graduation rate; and job placement rate.

Performance accountability has had varied effects. Where it has been tied to funding, the amount that each college receives has been too small a portion of the college budget to induce the managers to make radical changes. Also, all but a few states that introduced it in the 1980s subsequently abandoned it, although a few have reintroduced the notion in recent years. Some practitioners have publicized interinstitutional comparisons, especially when an indicator makes their college look better than others. But most comparisons are tenuous at best and essentially useless between colleges in different states because the measures of developmental success and job placement rates usually differ.

Overall, accountability's main positive effects have been to focus institutional managers' attention on outcomes and state priorities and to increase their research capabilities. A 2006 study of accountability systems in eight states showed that all had moved away from ranking individual colleges, instead placing emphasis on contextualizing accountability measures through benchmarking and peer-group comparisons (Institute for Higher Education Policy, 2006). However, the report also argued that there remained "significant problems with these accountability systems … The frequent disconnect we found between performance indicators and statewide goals, together with a lack of clarity about the appropriate audiences for accountability reports, means that cash-strapped community colleges may be asked to collect data that are not really being used in an effective way to drive state policy" (p. 17).

A basic misunderstanding pervades the constant commentary regarding accountability, which Adelman (2010) discussed at length. As he put it, students enter colleges and pay for services that may or may not produce results. The institution does not pledge that students will gain specific learning, graduate, or get a job. "Nor does the institution pledge to public funding authorities that it will produce X number of graduates, Y dollars of economic benefits, or Z volume of specified community services, or be subject to litigation if it fails to reach these benchmarks" (n.p.). It pledges services, which is a unilateral contract: "No obvious party to accept the offer, no obvious rewards for provision, and no obvious sanction if the provision falls short of promise" (n.p.). Furthermore, when students attend two or three schools, the relationship between institutional actions and student outcomes attenuates — which college should be held accountable?

A similar anomaly surrounds performance funding, which relies on annual reports that public colleges submit to state authorities. Many performance-based funding contracts include hold-harmless clauses, which protect institutions from losing funding if they fall short of a benchmark or two. Furthermore, if performance data are incomplete or dubious, no trial or potential dismissal of the chief financial officer is in the offing. Funding on the basis of course or degree completions may incorporate stiffer penalties for the colleges, but in doing so it "holds the institution responsible for the behavior of its students, thus clouding the locus of both obligation and responsibility" (Adelman, 2010, n.p.). In sum, while we assume that the term accountability relates to ways institutions are being held responsible for their own behaviors, used in the context of student outcomes and performance funding, it is little more than a synonym for transparency, which can be satisfied by providing the types of data that accrediting agencies and state auditors have been demanding for decades.

California has begun publishing data on the demographic profile of students and the course sections offered in each of the 112 community colleges, along with the percentage of credential-seekers who remain enrolled for three consecutive terms, and those earning at least thirty transferrable units. Other data include the percentage transferring or earning a degree or certificate and the percent of developmental students who complete college-level courses. Despite college leaders' assertions that these student success scorecards "will make the state's community colleges the most accountable and transparent in the nation" (Rivera, Apr. 9, 2013, p. A1), they are a prime example of transparency without accountability. There is no indication that any college will be rewarded or punished for its place on the roster.

Other negative effects of accountability have been in the nature of what is called gaming the system. To raise their graduation and retention rates, some colleges have weakened their academic standards by reducing course requirements or by presenting faculty with directives that they submit detailed reports on each student who has dropped their courses (Dougherty and Hong, 2006, p. 74). A social science principle called Campbell's Law holds that the more a social indicator is used for social decision making, the more subject it will be to corruption pressures, thus distorting the social processes it was intended to monitor (Campbell, 1975). Nonetheless, calls for greater accountability — from the federal government, from state legislatures, and in particular from philanthropic organizations putting large sums of money toward efforts that they believe will increase completion and student success — have made assessment of student progress and outcomes standard practice on community college campuses.

Outcomes Assessment

Community college outcomes refer in the main to licensure-exam pass rates, employment status, transfer rates, and graduation rates. They are related to accountability when they are used to assess institutional effectiveness against preset standards. The North Carolina Community College System (2010) presents one example of such a statewide program. It includes eight indicators of student success: progress of basic skills students; licensure pass rates; performance after transfer; developmental course success rates; success rates after developmental education; student satisfaction; student retention, graduation, and transfer; and client satisfaction with customized training. Data regarding each of these eight indicators are displayed for each college, each year, along with notes showing the number of colleges meeting the standard. Any college not meeting a standard is required to submit an action plan for improving performance to the state board. In 2010, all but one of the indicators were met by at least forty-seven of the fifty-eight colleges, and three indicators were met by all colleges in the system. While the methodologies used to calculate many of these indicators are sound, one remains particularly tenuous: an indicator asking how many students "indicate that the quality of the college programs and services meet or exceed their expectations" (p. 26) is based on responses to a survey administered by each college. In 2010 the percentage of students so indicating averaged 96 percent among the colleges, but this finding was based on a minuscule survey response rate of 10 percent.
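The fragility of that figure is easy to demonstrate with a bounds calculation. The sketch below is an illustration, not part of the original study; the number of students surveyed is a hypothetical round figure, and only the 10 percent response rate and the 96 percent satisfaction share are taken from the passage above.

```python
# A minimal sketch of worst- and best-case bounds on satisfaction
# when only 10 percent of those surveyed respond. The total surveyed
# is hypothetical; the two rates come from the North Carolina example.
surveyed = 10_000
respondents = int(surveyed * 0.10)     # 10 percent response rate
satisfied = int(respondents * 0.96)    # 96 percent of respondents satisfied
nonrespondents = surveyed - respondents

low = satisfied / surveyed                      # every nonrespondent dissatisfied
high = (satisfied + nonrespondents) / surveyed  # every nonrespondent satisfied
print(f"true satisfaction lies somewhere between {low:.1%} and {high:.1%}")
# -> true satisfaction lies somewhere between 9.6% and 99.6%
```

Nothing in the survey itself narrows that interval; only a higher response rate or a follow-up study of nonrespondents could.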

As this example illustrates, the methods that colleges use to verify attainment of performance indicators have frequently not met standards of academic research. But why should they? If state support depends on successful attainment, then political realities demand that all colleges qualify on practically all criteria. In brief, much accountability research varies depending on the audience for whom it is intended. For much of the history of community colleges, spokespersons would respond to questions of college value with anecdotes, lauding the progress of individual students: "Do students succeed at our college? Let me tell you about Mildred, a single parent, marginally literate when she came to us. We mentored and encouraged her, and now she's a pre-med student at the state university." Such stories were superseded by studies using larger numbers but having no greater validity. The researchers who publish in academic journals must adhere to accepted standards of research in the social sciences; if their reports are for internal college staff, they must provide information on programs within that institution and recognize that faculty prefer using their own instruments or other classroom-based assignments to test particular outcomes.

But an audience unknowledgeable about data validity has no problem applauding the Littlefield (1982) study summarized earlier in this chapter. The similarly designed Christophersen and Robison (2003, 2004a, 2004b, 2006) economic impact studies would gain credibility if they did parallel studies of communities that do not have colleges within their borders or used data on the same communities before the college opened. But their audience is not the researchers concerned with quasi-experimental design, control groups, and validity. It is the people satisfied with brief, easily understood and, ideally, favorable information. Not incidentally, the colleges that commission such reports invariably issue press releases sharing the good news (Dembicki, 2007). Few, if any, of the media carrying the stories mention the obvious fact that since the colleges draw most of their funds from outside their service area, whereas most of their beneficiaries reside within the locality, a net cash benefit to the community is guaranteed a priori.

Similar economic benefits would result if, instead of a college, the examined entity was a sanitarium, hospital, prison, or asylum. (In the early years of public higher education, communities competing for state colleges were often mollified by legislatures' awarding such institutions to them instead.)

National Efforts

In recent years major philanthropic organizations have funded attempts to standardize and improve rigor in community college outcomes assessment and to collect data uniformly across states and the nation. Attempts have also been made to tailor student progress and outcomes assessment to community college missions and student bodies instead of simply modifying accountability indicators more appropriate in university settings (the traditional source of community college outcomes measures). One of the most influential efforts is the American Association of Community Colleges' Voluntary Framework of Accountability (VFA) initiative, which uses cohorts of entering and credential-seeking students to assess student progress and outcomes after two and six years. The VFA incorporates many of the progress indicators noted at the beginning of this chapter, including retention and attainment of credit thresholds, as well as employment outcomes, transfer rates, degree and certificate attainment rates, and success in adult basic education. The VFA's most important contribution to outcomes assessment, however, is its aggregate measure of the percentage of students achieving successful community college outcomes.

Another national accountability initiative is funded by several major philanthropic organizations and enjoys the support of many state governors. Complete College America tracks postsecondary completion outcomes for both two- and four-year institutions, focusing in particular on the total number of degrees and certificates awarded, graduation rates, time and credits to degree, transfer rates, and progress through developmental education. Unlike the VFA, however, Complete College America’s time frame (four years) and reliance on traditional outcomes measures (graduation rates, time to degree) do little to acknowledge or reward community colleges for successfully serving students whose attendance patterns or reasons for enrolling may not conform to the initiative’s relatively narrow definition of a successful outcome.

Other ways of tracking community college outcomes on a national scale have emerged as well. The National Student Clearinghouse's (NSC) StudentTracker tool provides a near-census method of tracking student enrollment and transcript data (the database includes over 96 percent of the students in U.S. public and private colleges and universities). While the NSC has long been used by individual institutions and employers to verify degrees or analyze students' transfer destinations, in 2010 it established a research center, which has since published several reports on student mobility, transfer, and completion. The NSC's ability to follow students across state lines and through public, private, and for-profit institutions has enabled a much more robust way to determine transfer and completion rates. Not surprisingly, these rates are considerably higher than those captured by the U.S. Department of Education.

Problems in Assessment

Despite national accountability initiatives such as those previously mentioned, the vast majority of assessments of student learning, progress, and outcomes are conducted on college campuses, by institutional researchers or consultants, ostensibly for the purpose of improving institutional programs or student success rates. And indeed, the accelerated drive for accountability has yielded some strange fruit:

Consultant: You need to set up an automated process- and outcomes-tracking data system.

President: Would the tables and graphs be useful?

Consultant: Of course. You can refer to them with confidence when you present the decisions you have already made based on institutional politics.

The overarching problem of assessment is that most decision making is based on other variables. Attempts to rigorously assess the contemporary colleges confront the realities of the institutions. On one hand, teaching and learning are open-ended. We can always do better: graduation rates can be higher; the students can be more satisfied. On the other hand, the professional staff have only limited incentives to improve what they do. Their welfare does not depend on their institution's performance as measured by student outcomes. They do not get paid more when the students learn more, and no one is dismissed if they learn less. Therefore, major gaps appear between goals (however derived), the research that is supposed to measure attainment of the goals, and the extent to which the institution will change according to the findings. Each is a separate event. No externally generated mandate for institutional accountability can create a hyper-rational, tightly coupled system.

This disjuncture is revealed in the types of goals that practitioners prefer to set. Because such goals cannot penetrate the boundaries separating the different roles of the educators, they usually are stated in a way that does not lend itself to straightforward measurement. Admonishing the faculty to set measurable objectives attacks their unwillingness to put forth targets for which they can be held accountable. State-level pressure for outcomes data confronts administrators' need for positive findings exclusively. Because institutional support depends on image, not data, the quest for valid information may be self-defeating.

Therefore, the goals that intrainstitutional committees pronounce usually relate to process (the computer lab will upgrade its equipment; the college will offer more classes in the evening) and only occasionally to outcomes (our graduation rates will increase by 5 percent; the percentage of area employers who report they are satisfied with our students' job performance will average 80 across all programs). Process goals are acceptable because they suggest that the staff are trying harder. Product goals are suspect because too many uncontrollable variables may act to diminish the results, and failure to achieve the objective may generate untoward criticism. This is the main reason that college leaders remain suspicious of, if not antagonistic to, performance indicators.

In intramural research, the people studying the phenomenon are included in the complexity being studied. This characteristic sets intrinsic educational research apart from research in other fields. It is the Heisenberg effect squared; examining a phenomenon changes it, and when the analysts are themselves the object of examination the paradigms of traditional research are hopelessly distorted. Few practitioners dare to organize studies that have the potential of making them look ineffective; they fear being compared with other institutions or losing credibility. And because the colleges and their clients have multiple purposes, they know that no single outcomes measure captures the institution's complexities. Since we cannot account for all the subtleties in everything we do, some say, why measure anything and risk being misunderstood? Ewell (1987) discussed many such problems, showing that often no one on campus knows what assessment is for or what its consequences will be. Further, Burke and Minassians noted that most of the people responsible for policy have little or no familiarity with the performance reports, which thus "have only minimal effect in improving policymaking on campus" (2003, p. 62).

Another complication in accountability measurement is in assigning responsibilities for student success (or lack thereof). The directives demanding program review if certain outcomes are not attained bump into this problem. Whose fault is it: the colleges' or the students'? Always an issue, it has become even more unwieldy as many students transfer among institutions several times before completing a degree. When a student matriculates at one college, enrolls concurrently at a second institution in the next term, abandons both of those and takes classes at a third institution in a subsequent term, and eventually completes a degree at yet a fourth institution, which one is most accountable for the student's progress?

The premise that institutional effects can be separated from the students' tendencies is flawed. Some residential colleges may have been able to demonstrate their value to students who enrolled and stayed for four years straight. But such cohorts are in a minority among students in bachelor's degree–granting institutions and practically nonexistent in the community colleges. Stop in, stop out, take classes here or there, amass credits, get a degree eventually; where and how did learning occur? Assessing students at entry and at graduation, the traditional way of estimating the cognitive or affective change, loses its power in an institution where relatively few students graduate; too few of the entrants are there to take both halves of the measurement.

Another accountability-related problem is that assessments of institutional productivity have often led to the untoward practice of ranking the colleges, which can have the effect of misleading the public and generating antagonism on the part of practitioners. One example is available from California, where in 2000 the state chancellor's office released a list of colleges in order of their transfer rates (Hom, 2000). Spokespersons from the colleges toward the bottom of the list reacted angrily, saying that the calculation did not consider their types of students, emphases, or other programs (Weiss, 2000). College administrators have long been suspicious of the uses to which any types of reported data are put. "If the data are specific and clear, and if they demonstrate differences among universities and colleges, bitter experience teaches that the regulators will praise those whose indicators are high and condemn those whose are low, without paying the slightest attention to purpose, organization, circumstance, or mission of the institutions involved" (Lombardi, 2006, p. 17).

These types of reactions and criticisms are certain to appear every time reports are released showing how colleges rank on any variable. No matter how carefully the data are collected and analyzed, the practice of ranking has the inherent flaw of being a zero-sum game; someone has to be on the bottom. And what is the difference between a college with a transfer rate of 36.41 percent and one with a rate of 36.29 percent? The problem is magnified when state funds are allocated or withheld on the basis of a college’s position on a list of jobs attained or licensure-pass rates exhibited by its students. Public display of rankings and funding decisions based on them tend to feed the unease that most practitioners have long felt about outcomes studies. Even so, the states’ and the federal government’s pressure for comparative outcomes data has been increasing and shows no sign of subsiding.
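That rhetorical question has a standard statistical answer. The sketch below is an illustration, not drawn from the text; the cohort sizes are hypothetical, and only the two transfer rates come from the passage above.

```python
# A minimal sketch of a two-proportion z statistic applied to the
# transfer rates quoted above, with hypothetical cohort sizes.
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

z = two_proportion_z(0.3641, 2000, 0.3629, 2000)
print(f"z = {z:.2f}")  # about 0.08, far below the 1.96 needed for p < .05
```

With cohorts of two thousand students each, the gap between the two colleges is less than a tenth of its own standard error: pure noise, yet a ranking would still place one college above the other.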

Assessment of student learning — in individual courses as well as in programmatic sequences — can be particularly challenging. Many reasons that assessment of student learning has not been widely adopted have been advanced, including the uncertain feasibility of measuring important outcomes, the limited time or money available to implement a testing program, the suspicion that the faculty will teach primarily what the test will measure, the risk of outsiders' misusing the information gained, and the students' unwillingness to cooperate in a process that has no relevance to them. But overriding all these objections is the dilemma of research in the social sciences in general. For at least a century, social scientists have sought the predictability and absolute conclusions that they thought marked mathematics and the physical sciences. But that quest leads only to trivial findings or mathematical modeling that few people within the colleges or among their support groups can follow. Banta (2007) issued a warning about measuring learning outcomes, saying that calls for collecting data that would allow for interstate comparison of student learning are highly questionable. First, standardized test scores are highly correlated with entering student ability and reflect individual differences among students more accurately than they do differences in the quality of education offered at different institutions. "For nearly fifty years, measurement scholars have warned against pursuing the blind alley of value added assessment … We see no virtue in attempting to compare institutions, since by design they are pursuing diverse missions" (p. 2).

Hoos (1979) too was skeptical of quantitative approaches to assessing the effectiveness of social institutions and pointed out how seeking only what can be measured leads to ignoring all else. And as Biesta and Burbules paraphrased from John Dewey: "The idea of 'improving' education practice in any direct way through educational research should be abandoned … Educational problems are always unique and for that reason always require unique responses, tailored as best as possible to the idiosyncrasies of the actual, unique situation" (2003, p. 81). In other words, all findings in educational research are tentative, equivocal, and derivative, and dressing them up with statistics gives only the illusion of precision. McKeachie (1963) put it most succinctly when he concluded that, at bottom, research on teaching can demonstrate only that in college A, on day B, instructor C used method D to teach concept E to F set of students. Change any of those variables and the findings shift.

Possibilities in Assessment

Why should the staff members in any college measure the learning attained by their students or any other institutional outcome? Such measurement in the abstract is an exercise not likely to gain much staff support. Most of a college's funds are calculated according to formulas fixed in a political arena, not assigned on the basis of results attained. Appeals to professionalism are of little use because the staff perceive information on student learning gathered by outsiders as irrelevant. Attempts to feed student learning data back to instructors so that classroom practices can be improved usually prove ineffectual because few instructors will accept data about their students from anyone else.

Testing individuals, which is familiar to all educators, is done to motivate students, set goals, design and modify media, and estimate learning attained through various interventions. Using the same process to assess group attainment distorts it, especially when it becomes the basis for judging the worth of a program. Because testing individuals to determine their progress is so familiar, it militates against educators using methods that are considerably more reliable and valid for estimating group progress. Multiple-matrix sampling, a technique known for decades, serves as an example. The National Assessment of Educational Progress uses it to estimate and publicize how much nine-year-olds, thirteen-year-olds, and seventeen-year-olds know. A sample of items is given to a sample of students in a sample of schools, with the items, students, and schools sampled anew at each biennial iteration.
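The mechanics of multiple-matrix sampling fit in a short simulation. The sketch below is illustrative only; the item bank, its difficulty values, and the sample sizes are invented, but the structure (a sample of items given to a sample of students) follows the description above.

```python
# A minimal sketch of multiple-matrix sampling: no student sees the
# whole item bank, yet the group-level result can still be estimated.
import random

random.seed(1)
ITEM_BANK = [random.uniform(0.3, 0.9) for _ in range(100)]  # P(correct) per item
STUDENTS = 500                                              # students sampled
ITEMS_EACH = 10                                             # short booklet each

correct = attempts = 0
for _ in range(STUDENTS):
    booklet = random.sample(range(len(ITEM_BANK)), ITEMS_EACH)
    for item in booklet:
        attempts += 1
        correct += random.random() < ITEM_BANK[item]  # simulate one response

print(f"estimated group proficiency: {correct / attempts:.3f}")
print(f"true item-bank mean:         {sum(ITEM_BANK) / len(ITEM_BANK):.3f}")
```

Because each estimate rests on a fresh sample of items and students, no individual score exists to be misused; only the group statistic is reported.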

A longitudinal study can be initiated, with a cohort of students who are entering for the first time as its subjects. However, this procedure, favored by many higher education researchers, stems from a view of college as a place where students develop over years of attendance, clearly not the community college norm. If it is to be used, it works best where a percentage of students is sampled. Each term these students can be asked about their aspirations and course-taking patterns. Different forms of the placement exam or other measures can be used to test the students at entry and at various points along the way. When a small group is sampled, follow-up becomes much more feasible (although national databases such as that maintained by the National Student Clearinghouse allow for much larger groups of students to be easily tracked over time). Community college–based researchers such as Rasor and Barr (1998) have detailed straightforward ways of sampling, yet the practice has not been widely adopted.
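A panel-sampling routine of the kind described above reduces to a few lines. The sketch below is a hypothetical outline, not Rasor and Barr's actual procedure; the cohort size, sampling rate, and the commented follow-up helpers are all invented.

```python
# A minimal sketch of longitudinal panel sampling: draw a fixed share
# of each entering cohort and follow only those students term by term.
import random

random.seed(7)
entering_cohort = [f"student_{i}" for i in range(3000)]  # first-time entrants
SAMPLE_RATE = 0.05                                       # follow 5 percent

panel = random.sample(entering_cohort, int(len(entering_cohort) * SAMPLE_RATE))
print(f"tracking {len(panel)} of {len(entering_cohort)} entrants")

# Each term, only the panel is surveyed and retested, e.g.:
# for term in ("fall", "spring", "fall_2"):
#     for student in panel:
#         record(student, term, ask_aspirations(student), retest(student))
# (record, ask_aspirations, and retest are hypothetical helpers)
```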

An alternative form of outcomes assessment is based on a cross-sectional model, where content measures are included along with items asking about student satisfaction, course-taking behavior, use of support services, and other information about intrainstitutional concerns. An item bank can be developed, with items categorized by skills, such as critical thinking, reading, and writing; by content, such as history, chemistry, and mathematics; and by response type, including multiple and free response. The items can be as specific or as general as desired. Tests can be constructed and administered to students in classes, and certain demographic information can be solicited at the same time. After the tests have been taken, groups of students can be classified according to aspiration, number of units taken, prior school experience, or any other measure that seems of interest. This model was used as the basis for the General Academic Assessment and the General Academic Learning Experience, both described in Chapter Nine.
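One way to picture such an item bank is as a set of tagged records from which tests are assembled on demand. The sketch below is a hypothetical illustration, not the design of the General Academic Assessment; every item, tag, and function name is invented.

```python
# A minimal sketch of an item bank categorized by skill, content,
# and response type, with a helper that assembles a test from it.
from dataclasses import dataclass
import random

@dataclass
class Item:
    text: str
    skill: str          # e.g., "critical thinking", "reading", "writing"
    content: str        # e.g., "history", "chemistry", "mathematics"
    response_type: str  # e.g., "multiple", "free"

BANK = [
    Item("Interpret the passage ...", "reading", "history", "multiple"),
    Item("Explain the reaction ...", "critical thinking", "chemistry", "free"),
    Item("Solve for x ...", "critical thinking", "mathematics", "multiple"),
    # a working bank would hold hundreds of tagged items
]

def build_test(bank, skill=None, content=None, n=20):
    """Draw up to n items matching the requested skill and/or content tags."""
    pool = [i for i in bank
            if (skill is None or i.skill == skill)
            and (content is None or i.content == content)]
    return random.sample(pool, min(n, len(pool)))

test = build_test(BANK, skill="critical thinking", n=2)
```

Demographic fields gathered at administration time can then be joined to the responses to classify students by aspiration, units taken, or prior schooling, as the passage describes.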

The longitudinal model works best in a college where students matriculate with the intention of participating in programs organized sequentially and where the college's processes are designed to ensure that they do. The cross-sectional model skirts the problem of student retention and the difficulty of follow-up because it generates new cohorts each time it is administered. The level of knowledge displayed by the students collectively, first at entrance, then after they have completed a certain number of units, and at graduation, can be compared. Any available demographic information can be used to make further differentiations.

One of the basic problems in assessing learning outcomes is that the gold standard for assessing group progress — random assignment — is rarely undertaken in practice (Levin and Calcagno, 2007). In the case of developmental education, for example, it would demand assessing students at entry, determining which of them need remedial studies, and then randomly assigning a portion of that group to the college's developmental education activities and a different set of the group to the regular collegiate program. This would mitigate the problem defined by Bailey and Alfonso (2005), who said that the issue of how and why students enter remedial programs in the first place is rarely examined. If students could be placed on the basis of random assignment, then evidence of the consequences of various interventions could be brought forward. Otherwise, the question of which students participated in which learning activities and for what reasons always remains open.
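Carried out literally, the design reduces to a coin flip over the flagged group. The sketch below is an illustrative outline; the student identifiers, group sizes, and outcome comparison are hypothetical.

```python
# A minimal sketch of random assignment for evaluating developmental
# education: students flagged as needing remediation are split at
# random, so later outcome gaps can be credited to the program itself.
import random

random.seed(42)
flagged = [f"student_{i}" for i in range(200)]  # assessed as needing remediation
random.shuffle(flagged)

midpoint = len(flagged) // 2
developmental = flagged[:midpoint]  # assigned to the developmental program
regular = flagged[midpoint:]        # assigned to the regular collegiate program

# Later, compare any outcome of interest (gateway-course completion,
# degree attainment) across the two groups; because assignment was
# random, the difference estimates the program's effect.
print(len(developmental), len(regular))
```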

In one of the few studies using random assignment, two New Orleans–area colleges offered $1,000-per-year scholarships, in addition to any other financial aid for which students qualified, to low-income parents. The program was evaluated using a random assignment research design; that is, participants were randomly assigned to two groups, one receiving the scholarships and a control group that received only whatever aid was available to all students. The group receiving the scholarships was more likely to enroll full time, pass more courses, and return for successive semesters (Brock and Richburg-Hayes, 2006). Jaschik (2008) reported a similar study at Kingsborough Community College (New York), where full-time students were assigned at random to learning communities or to a control group. Those in the learning communities took and passed more courses, earned more credit, and showed greater increases in English test scores.

Each area of the college deserves its own set of measurements. For example, an area of inquiry dating to the behavioral objectives movement pioneered by Ralph Tyler in the 1950s was promoted subsequently by Cohen (1969) and Angelo and Cross (1993). Each instructor is to set objectives and assess the effects of different techniques, using the findings as the basis for instructional modification. There is no comparing one instructor's outcomes with another's and no attempting to relate the outcomes to other college purposes. With intrinsically designed goals, each classroom is its own object of study, each instructor a researcher.

Extending this concept of the self-contained study to other areas is just as feasible. Only a few easily understood principles of research need control the process; for example, where surveys are employed, population sampling and nonrespondent bias checks are basic. Most important is that no comparisons be sought between institutions or between programs in the same institution. Each community college's main missions of transfer, job entry, career upgrading, literacy and general education development, and personal satisfaction can be assessed separately and regularly, with results communicated routinely. The measurements could yield periodic reports arrayed as follows.

• Transfer: X percent of the students who entered our college with no prior college experience six years ago completed at least twelve credits here. Of those, Y percent have transferred to an in-state, public university. We anticipate an increase to a Z percent transfer rate within the next two years because of our emphasis on recruiting full-time students and because we have recently concluded new articulation agreements for three of our basic programs with our major receiving university.

• Job entry: X percent of the students who enter with no prior experience in the field and who complete three or more courses in one of our office skills or sales training programs obtain full-time positions in their field of study within one year, and four times that many are working part time. This suggests that our clerical and sales programs serve predominantly as a route to part-time employment and hence can be modified to address that clientele more directly. During the coming year, we will organize job placement and training sessions in Y sites to accommodate these job seekers.

• Developmental education: We administered basic-skills tests to all our first-time students and directed X percent of them to our integrated developmental program. Y percent of those who enrolled completed the program and matriculated in collegiate studies. Z percent of that group obtained associate degrees or transferred to universities within six years. We plan on building closer links between our developmental and collegiate programs with the intention of increasing that proportion to Z plus 15 percent within five years.
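Reports in this format could be produced mechanically from a student-level cohort file. The sketch below is a hypothetical illustration of the Transfer report's arithmetic; the twelve-credit threshold comes from the template above, but the records are invented.

```python
# A minimal sketch computing the Transfer report's X and Y figures
# from an invented six-year entering cohort.
cohort = [
    # (student_id, credits_earned_here, transferred_to_public_university)
    ("s01", 24, True),
    ("s02",  6, False),
    ("s03", 45, True),
    ("s04", 15, False),
]

entrants = len(cohort)
completed_twelve = [s for s in cohort if s[1] >= 12]
transferred = [s for s in completed_twelve if s[2]]

x = len(completed_twelve) / entrants          # X: completed at least 12 credits
y = len(transferred) / len(completed_twelve)  # Y: of those, transferred
print(f"X = {x:.0%}, Y = {y:.0%}")
```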

These examples display how the definitions and methodologies are revealed in the report along with the study's purposes, projections, and actions to be taken based on its findings. This process would also yield the data that state and federal agencies are determined to acquire. Unless each college controls its own research agenda — a scenario not likely to occur since it goes against the grain of the current push for standard accountability measurements — a mentality of compliance may develop in response to these external demands, and research in and about the colleges may not reach the potential it deserves. Although college leaders rarely use assessment information in making decisions about program maintenance or support, they could well abandon what have been their traditional reactions to it: "The data are flawed, because they didn't consider … "; "We may fall short there but look what we are doing over here"; and the ultimate, one-size-fits-all rationalization, "If we had more money we could correct the problem."

Issues

The community college has been criticized for its failure to move sizable proportions of its matriculants to the baccalaureate. But these effects are not uniform. Why do different students go through at different rates? How do institutional and personal factors interact to affect progress?

Will the colleges of their own volition produce routine data on productivity (degrees awarded, licensure, and job-getting rates) as well as data on student learning outcomes (knowledge gained in humanities, science, math, social science, English usage)?

Institutional decision making and extramural support take assessment findings into account only minimally. Many reasons may be cited for this:

• Imprecisions in measurement in the social sciences;

• Assigning responsibility for findings (e.g., who is at fault when students drop out or fail to learn);

• Open-ended goals in education (the results can always be better);

• Institutional or program ranking is a zero-sum exercise, and support may be accorded or jeopardized thereby;

• Random assignment to courses and programs is rarely feasible;

• Setting specific objectives limits the educators' propensity for grading and sorting students;

• Fear that individual practitioners would be penalized for displaying poor results (the controversy over including student test scores in K–12 teacher evaluations provides a case in point).

What effects have the moves toward greater accountability and nationally standardized indicators of student progress and success had on these historic tendencies?

It is gratifying to conclude with Pascarella and Terenzini's (2005) view of the educational and occupational outcomes associated with two- versus four-year enrollment:

Studies published since 1990 indicate that in certain outcome areas, community college students derive benefits equal to or even greater than those realized by similar students at four-year colleges or universities. After one year of college, for example, and after adjusting for precollege ability, motivation, and other confounding influences, community college students gain to about the same degree as similar students at four-year institutions on measures of reading comprehension, math skills, and critical thinking skills. After two years, the two groups also showed increases of about the same degree in their science reasoning and writing skills. Moreover, although community college students in general reap these benefits, the gains are greatest among students of color, older students, and less affluent students — in other words, those most likely to attend a community college rather than a four-year institution in the first place. And for similar individuals of equal educational attainment, initial attendance at a two-year college appears to impose no significant penalty on earnings. (p. 639)
