
USING SIX SIGMA FOR PERFORMANCE IMPROVEMENT IN BUSINESS CURRICULUM: A CASE STUDY

Anil Kukreja, Joe M. Ricks Jr., and Jean A. Meyer

During the last few decades, a number of quality improvement methodologies have been used by organizations. This article provides a brief review of the quality improvement literature related to academia and a case study using Six Sigma methodology to analyze students’ performance in a standardized examination. We found Six Sigma to be an effective tool for curriculum improvement and team building. The challenges and benefits of using Six Sigma are discussed.

DURING THE PAST FEW DECADES, a number of quality improvement methodologies have been used extensively by organizations to improve products and services. Techniques like continuous quality improvement (CQI) and total quality management (TQM) have been used to provide modest, incremental improvements, whereas techniques like reengineering and Six Sigma have been used for making drastic changes to existing processes.

Quality improvement in higher education involves examining the needs and expectations of various stakeholders (e.g., students, faculty, accrediting agencies). In addition, it requires the evaluation of the effectiveness of academic and academic support programs based on careful documentation and data collection. Temponi (2005) suggests that a number of factors drive the need for CQI in higher education: employers’ desire for expanded skill sets at the entry level, increased competition for talent, changes in student composition, demands from accrediting bodies, and the complexities of a constantly changing business environment. She also notes a number of concerns among academic stakeholders regarding CQI: the perception of the student as the customer may lead to students’ having too much influence; professors’ lack of full control over course content may lead to violations of academic freedom; satisfying inexperienced students’ needs and wants may not be appropriate; implementation of CQI may affect the primary mission of teaching and research; and students may not be ready for the challenge of a collaborative effort to improve the learning experience.

This article presents a case study using a Six Sigma process improvement methodology, DMAIC (define, measure, analyze, improve, and control), to analyze students’ academic performance in the accounting section of the Educational Testing Service (ETS) major field examination in business, administered by the business division at a liberal arts university in the southern United States. We view this as an initial investigation in using DMAIC to assess and improve curriculum. The literature contains a number of cases employing continuous improvement strategies in institutions of higher education in nonacademic areas (Hogg & Hogg, 1995; Hargrove & Burge, 2002). Stevenson and Mergen (2006) discuss the desirability of including Six Sigma thinking in an undergraduate business curriculum, whereas Weinstein, Castellano, Petrick, and Vokurka (2008) describe an approach used to direct MBA students through Six Sigma process improvement projects. However, the literature contains no DMAIC case that assesses, evaluates, and improves curriculum design and delivery.

To start the process of filling this void, we present a case study as a model for assessing and improving curriculum. We believe case methodology is appropriate because of its three advantages in initial investigations: it has a long tradition in the academic business literature, individuals and programs may derive value from case studies, and it is flexible enough to allow greater depth in initial investigations (Ricks, Williams, & Weeks, 2008). We demonstrate how a Six Sigma process can be used to execute the two primary purposes of CQI in an academic environment: examining the needs and expectations of stakeholders and evaluating the effectiveness of academic programs. In addition, we demonstrate how this process can address a number of the concerns held by academic stakeholders regarding CQI initiatives. This article has four parts. It provides a brief review of the quality improvement literature with a focus on academia, sets out the context for taking on the project, explains the process and tools in detail and how they address the concerns of academic stakeholders, and provides results and offers conclusions containing advantages and disadvantages of using Six Sigma for curriculum improvement.

LITERATURE REVIEW

In the context of higher education, Chizmar (1994) defines TQM as the collaborative and holistic application of the idea of the industrial TQM model to teaching and learning. As the name suggests, the focus of TQM is on managing overall quality. Deming (1986) defines quality as “satisfying the needs of the consumer, present and future” (p. 5). Juran (1989) suggests quality is “fitness for use” (p. 15). Definitions of quality focus on customers since they are the key to any TQM implementation.

The concept of quality improvement entered higher education as a direct result of criticism about the quality of higher education during the late 1980s and was well established on some level in most universities by the 1990s. A number of articles related to TQM in higher education were published in the May–June 1993 issue of Change, and entire issues of the journals Higher Education (Vol. 25, No. 3, 1993) and Total Quality Management (Vol. 7, No. 2, 1996) were devoted to this topic. Quality improvement initiatives encouraged colleges and universities to place a greater emphasis on constituent input and satisfaction, in essence defining excellence in more service-oriented terms.

TQM methodology used in industry focuses on customer satisfaction through an integrated framework that examines the relationships between various systemwide elements and makes data-driven decisions to reduce errors and waste in processes. According to Hogg and Hogg (1995), TQM can help universities find solutions because it stresses continuous improvement of processes and products by listening to the stakeholders: students, faculty, employers, and others. Wild (1995) reported, “There is now a growing realization that TQM may also provide us with guidelines for transforming the way that we do business, the way that we do our own jobs. And for a significant proportion of us, a large part of that business is the teaching of students” (p. 50).

Hogg and Hogg (1995) provide examples of the implementation of CQI for nonacademic as well as academic improvements in colleges and universities. However, they note that the academic side of CQI in higher education faces more resistance to its implementation: faculty individualism, academic freedom, and treating students as customers, among others, pose unique challenges. They state that if top leadership is not committed, then it may not be advisable for a particular university to try CQI. However, even in the absence of such support, faculty can lead change in specific processes in colleges, departments, individual teaching methods, and personal quality. Stark, Lowther, Ryan, and Genthon (1988) note that the majority of faculty members frequently adjust their individual courses to stay up to date and improve their effectiveness.

Quality assurance is a cyclical process that involves measuring, judging, and improving. Dolmans, Wolfhagen, and Scherpbier (2003) conclude that for quality assurance to be successful, the evaluation activities should be carried out in a systematic, structural fashion and should be an integral part of an institution’s work pattern. They write, “In most institutions, quality assurance activities cover most aspects and involve most stakeholders. Furthermore, quality assurance activities take place at regular intervals and with proper frequency. However, all too often responsibilities are not clearly defined, quality assurance is not adequately integrated into regular work patterns, lack coherence, and as a result do not lead to continuous improvement of the educational quality” (p. 216).

An important aspect of most business education programs is accreditation. The two main accrediting agencies for business programs are the Association to Advance Collegiate Schools of Business (AACSB) and the Association of Collegiate Business Schools and Programs (ACBSP), both of which stress quality assurance as part of the accreditation assessment. In 2001, as a strategic response to competition within accreditation bodies, AACSB revised its accreditation standards and transformed its reaffirmation framework from a traditional outcome-based process to an accreditation process based on goals related to quality assurance, an imperative for continuous improvement, and stakeholder relationship management (Miles, Hazeldine, & Munilla, 2004). As a result of these changes in accreditation, quality improvement has moved from individual courses taught by individual faculty to whole programs. The accreditation standards and criteria used by ACBSP draw heavily from the Malcolm Baldrige National Quality Award Performance Excellence in Education Criteria (Association of Collegiate Business Schools and Programs, 2008).

To meet accreditation standards, business programs face mandates to assess student outcomes, use the results of outcomes assessment to improve academic programs, and do so as part of a continuous improvement program. Many faculty members continue to resist these efforts to promote a systems approach, which requires programs to assess outcomes in the light of explicit goals and objectives and then to initiate improvements. Wergin (1999) reports that academic departments have engaged in a great deal of evaluation activity, but the cumulative impact has not led to constructive change in departmental planning practices or a stronger culture of collective responsibility. Most departments and faculty failed to see the relevance of program evaluation and assessment to the work that they did.

Another process improvement methodology that has been used extensively in industry is Six Sigma. In the Financial Times (Tomkins, 1997), GE explained the Six Sigma quality initiative as “a program aimed at the near elimination of defects from every product, process, and transaction” (p. 29). This continuous improvement concept was introduced at and popularized by Motorola in 1987 in its quest to reduce defects in manufactured electronics products. The basic concept of Six Sigma is a disciplined, quantitative approach for improvements, based on defined metrics, in manufacturing, service, or financial processes. Hahn, Hill, Hoerl, and Zinkgraf (1999) describe the Six Sigma initiative and its impact on major corporations like AlliedSignal, GE, Motorola, and Polaroid. They explain the DMAIC process and the major elements of Six Sigma implementation. They also note that the initial emphasis of Six Sigma was in manufacturing, but it is now being applied in key areas beyond manufacturing and beyond what would traditionally be considered quality. Some of the areas where Six Sigma methodology is being applied are voice of the customer, value chain analysis, customer satisfaction, and financial and banking services.

Cherry and Seshadri (2000) give an example of successful implementation of Six Sigma in the radiology department of the Commonwealth Health Corporation and report an annual savings of $1.65 million. Benedetto (2003) provides yet another example of Six Sigma implementation in a health care setting. He concludes that the DMAIC technique appears to be as applicable to services as it is to manufacturing; however, it is much more difficult to acquire data for a service process than for a manufacturing process. The concepts and tools employed in Six Sigma methodology are flexible enough that they can be used to address many different problem areas.

The application of Six Sigma methodology in higher education has been limited. In this case, we use the Six Sigma approach to analyze the performance of our students on a standardized test and use the results to develop strategies to improve that performance. To date, we have not found any articles in the literature that employ Six Sigma methodology to address curriculum evaluation and improvement issues.

CONTEXT FOR THE CASE

As part of the graduation requirement, students in the Division of Business have to take the ETS major field examination in business. The division then uses the results of the examination to compare the performance of its students to national benchmarks. The ETS examination tests students in nine areas: accounting, economics, management, quantitative business analysis, finance, marketing, legal and social environment, information systems, and international issues. At the time this project was undertaken, business students were required to take a minimum of twelve credit hours in accounting, whereas in the majority of business schools the accounting requirement is nine credit hours. Therefore, the faculty hoped that the performance of business majors on the accounting section of the ETS examination would be better than that of their counterparts. The ETS results did not support this assumption: the scores of our students on the accounting section were below those of their peers at other business schools.


Thus, the Division of Business, as part of its continuous improvement strategy, undertook a Six Sigma project to analyze the performance of students in the accounting section of the ETS major field examination in business. The performance of students in the accounting section of the ETS exam is shown in Figure 1, which sets out the mean percentage correct for the business division students in the last two years as well as the national mean over that period.

Figure 1 shows that the mean percentage correct score of our business students was significantly lower than the national mean. This trend in performance prompted the business division faculty to take it up as a Six Sigma project, which would help in analyzing the data and determining possible strategies to address the situation.

The idea of using the Six Sigma methodology for this situation came about as a result of the business division’s partnership with 3M Company, a Six Sigma organization. In 2003, 3M sponsored a two-day workshop on Six Sigma concepts for its partner schools in the Frontline Sales Initiative, and two faculty members from the Division of Business attended the workshop. After the workshop, the faculty members were convinced that Six Sigma concepts could also be applied to educational processes, and the decision was made to apply Six Sigma methodology to analyze students’ performance in the accounting section of the ETS major field examination in business. We named it the Accounting Curriculum Project. Although we selected accounting, the process could readily be applied to any curriculum.

The DMAIC Six Sigma improvement model was used for the project (see Pande, Neuman, & Cavanagh, 2000, and Stamatis, 2004, for complete descriptions of the DMAIC model). Because the DMAIC methodology is generally applied to manufacturing or service aspects of business organizations, adjustments had to be made to apply the methodology to an educational delivery process.

FIGURE 1. OUR STUDENTS’ MEAN SCORE VERSUS NATIONAL MEAN IN ACCOUNTING SECTION OF THE ETS EXAMINATION

THE DMAIC PROCESS

The Define Phase

A project team was constituted to work on the project. Snee (2003) notes that team involvement in model formulation, data collection, interpretation of results, and implementation of process changes is important to the effective use of the Six Sigma tools. Because of the discipline involved in the Six Sigma process, the project and organizational leaders are identified by martial arts terms: Black Belts and Green Belts, for example. He mentions that the results of creating a process map, cause-and-effect (CE) matrix, or failure mode and effects analysis (FMEA) are most useful when these tools are used by a Black Belt or Green Belt and a project team. Zinkgraf (2006) provides a complete description of the infrastructure needed for successful implementation of Six Sigma.

For this project, the team included faculty members from various disciplines as well as students. The makeup of the team was one of the control factors for addressing some of the concerns of academic stakeholders. We chose students who had high cumulative grade point averages or held leadership positions in student organizations; only juniors and seniors were chosen. This selection process ensured that the students’ viewpoint was considered and that the students chosen were mature enough to provide deliberative and thoughtful input to the process. In addition, all faculty members who taught the affected courses were part of the team to ensure that any potential academic freedom infringement issues were addressed. The lead faculty member for accounting was identified as the process owner. The team had this makeup:

  • Project leader: Faculty member experienced in Six Sigma methodology.

  • Project champion: Chair, Division of Business.

  • Green Belts: Two accounting faculty members who teach the principles of accounting courses.

  • Process owner: Lead faculty in accounting area.

  • Team member: Finance faculty.

  • Team member: Marketing major (senior standing).

  • Team member: Accounting major (senior standing).

  • Team member: Management major (junior standing).

The first assignment for the team was to define the project. All the templates required for a Six Sigma project were provided by 3M Company; they proved to be extremely useful and helped us stay focused. Since the ETS examination in business is common to all business majors, the questions in the accounting section are generally based on the principles of accounting and managerial accounting courses. Thus, the scope of the project was defined as “the delivery process (when, what, where, and how) of courses related to principles and managerial accounting concepts, specifically Principles of Accounting 1, Principles of Accounting 2, Managerial Accounting 1, Managerial Accounting 2, Elementary Cost Accounting, Governmental Accounting, and Corporate Finance.” Properly setting boundaries, or scoping, is vital to the success of Six Sigma programs and projects. Projects with too broad a scope, or with expanding scope, are often called “world peace” or “boiling the ocean” projects; they are often unrealistic and frustrate team members (Lynch, Bertolino, & Cloutier, 2003).

The next assignment was to develop the baseline, entitlement, gap, and goal for this project. For purposes of comparison, the ETS provides comparative data that show the mean percentage correct scores and the corresponding percentiles. For example, the average score of 41.4 corresponds to the 27th percentile: 27% of all students who took this test received a score equal to or below the mean score of our students. The baseline refers to the current state of affairs, and we can see from Figure 1 that our students’ performance was at the 27th percentile (corresponding to a 41.4 mean percentage correct score) in the accounting section of the ETS major field exam in business in 2003 and 2004. Entitlement is the best possible state for the process. According to the Six Sigma process used at 3M, entitlement can be determined in one of three ways: by identifying the best performance achieved in a particular process based on its historical performance, by using industry benchmarks, or by a theoretical assessment of where the process owners think the best possible performance should be. We used the theoretical approach to identify entitlement for this project. Because our students take at least three more hours of accounting than students in most other business school programs, the accounting faculty felt that they should perform in the top quartile in this section of the exam. Therefore, the team identified a theoretical entitlement performance at the 75th percentile. The corresponding mean percentage correct score at the 75th percentile is 49.

The gap is the difference between the baseline (the 27th percentile, or a 41.4 mean percentage correct score) and entitlement (the 75th percentile, or a 49 mean percentage correct score). Entitlement goal setting differs from traditional goal setting in that it goes far beyond incremental advancement by seeking to take large steps in closing the gap between baseline performance and entitlement performance. The goal was then set to the 63rd percentile; that is, once the process improvement strategies are implemented, our students should score at or above the 63rd percentile in the accounting section, which is equivalent to a mean percentage correct score of 46.5. The 63rd percentile represents closing 75% of the baseline-to-entitlement gap. Six Sigma should be used for projects where large gains in performance are desired; for smaller or more incremental gains, a less time-consuming process would likely be more appropriate.
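Expressed as a formula (our restatement of the numbers above, in percentile units):

goal = baseline + 0.75 × (entitlement − baseline) = 27 + 0.75 × (75 − 27) = 63rd percentile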

For the Define phase, we used the problem worksheet in Exhibit 1 to address all of the relevant questions regarding problem definition and the appropriateness of the project for the Six Sigma methodology. Tools like this help the team stay focused on the issues at hand and make sure that all relevant issues are addressed prior to moving into the measurement phase.

The Measure Phase

During the Measure phase of the project, we found the process maps and CE matrix to be the most appropriate tools for our project. Marrelli (2005) mentions that process maps document the sequence of actions involved in a process as well as its key inputs and outputs. Kubiak (2007) notes that process maps address the key foundational concept of Six Sigma, Y = f(X): outputs are a function of inputs. The CE matrix prioritizes key inputs for action based on how they affect the critical outcomes (outputs) identified by the team. This matrix is not only a prioritization tool; it also works as a means to ensure that all viewpoints are considered and conflicts are addressed (Snee, 2003; Mena, Aspinall, Bernon, Templar, & Whicker, 2004).

Both a high-level and a low-level process map were used on this project. The high-level process map (the 50,000-foot view) describes the process in one step and documents the main inputs and outputs in the overall process. It helps the team confirm the scope of the project. Identifying the key inputs and outputs from the high-level view of the process prevents specification error, which happens when a key input is left out of the scope of the project or a key output is not addressed in the CE matrix (see Figure 2). The low-level process map (see Figure 3) states the specific steps necessary to execute the process in the high-level map and identifies the inputs and outputs for each step. Inputs to the process are then categorized as controllable or uncontrollable, represented as c or u in the process map. Controllable inputs are variables that can be changed at the direction of the process owner to see the effect on output variables. Uncontrollable inputs are variables that have an impact on the output variables but are difficult or impossible to control. According to Kubiak (2007), classifying input variables helps practitioners focus on inputs that are controllable and guides practitioners away from spending time and energy on those that are not.
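As an illustration (our own sketch, not part of the 3M toolkit we used), the tagging can be captured in a few lines of Python; the input names and the uncontrollable tag follow examples discussed later in this article:

```python
# Inputs to the "evaluate learning" step, tagged "c" (controllable)
# or "u" (uncontrollable), as in the low-level process map.
evaluate_learning_inputs = {
    "Exams": "c",
    "Projects/cases": "c",
    "Test design": "c",
    "Learning application": "u",  # we can influence, but not control, student learning
}

# Only the controllable inputs are carried forward into the CE matrix.
controllable = [name for name, tag in evaluate_learning_inputs.items() if tag == "c"]
print(controllable)  # ['Exams', 'Projects/cases', 'Test design']
```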


EXHIBIT 1

PROBLEM STATEMENT WORKSHEET

Process name
What is the process that you are trying to improve?

Delivery of courses related to principles and managerial accounting concepts for business department majors.

Process definition
How would you define the process?

The delivery process (when, what, where, how) of accounting courses taken by business majors, specifically, Principles of Accounting 1, Principles of Accounting 2, Managerial Accounting 1, Managerial Accounting 2, Elementary Cost Accounting, Governmental Accounting, and Corporate Finance.

Process scope
What are the boundaries that you would put around the process?

Courses related to principles and managerial accounting concepts for business majors.

Customer identification
Who are the customers of the process?

Student business majors and the Department of Business.

Problem statement
What are the problems associated with the process?

Learning assessment tool (ETS) score fell below the 25th percentile in 2003.

Is it specific?
How do these problems impact the business?

Prevents the department from meeting its learning outcomes goals.

Is it measurable?
What could we measure to demonstrate that the problems are improved?

ETS scores.

Is it achievable?
What are the targets for the measurement?

Increase ETS score to 63rd percentile. Baseline is 27th percentile, and entitlement is 75th. The 63rd represents closing 75% of the gap.

Is it relevant?
If these problems are improved, how will our customer benefit?

Students will be better prepared for opportunities after graduation, and the department will meet its learning outcome goals.

Is it timely?
Can improvements be made in 4 to 6 months?

Savings estimate
If cost, growth, or cash: describe the potential of the project and the method of calculation.


The process map leads to the CE matrix, which emphasizes the importance of understanding critical outcomes; these are usually in the form of customer or stakeholder requirements. This tool requires identifying critical outcomes from the process map. Each key output is also rated on a scale from 1 to 10, with 1 being the least important and 10 being an extremely important outcome. For our project, the outcomes shown in Table 1 were chosen as key outcomes.

Identifying the critical outcomes is one of the most important elements in Six Sigma. As the process moves forward, only the parts of the process that affect the identified critical outcomes will be assessed. If the team leaves out a critical outcome, then all parts of the process that affect that outcome will be overlooked. And if outcomes that are not critical are identified, then the team will waste time and energy on parts of the process that will not make much of a difference. A larger problem could be that an important input to the process could fall out of the analysis because of its relation to outcomes that are not so critical but are included in the CE matrix.

FIGURE 2. HIGH-LEVEL PROCESS MAP

FIGURE 3. LOW-LEVEL PROCESS MAP

TABLE 1

KEY OUTCOMES

KEY OUTPUT                                          RATING
ETS results                                           10
Proficiency in business-related work environment      10
Student evaluation of learning                         5


Once the critical outputs are identified and rated, then each step in the process and all of the controllable inputs for each step are listed in the CE matrix (see Exhibit 2). Each controllable input is given a score based on the degree to which it correlates with critical outcomes. We used correlation scores of 0, 1, 3, and 9:

0: No correlation.

1: The process input only remotely affects the key outputs.

3: The process input has a moderate effect on the key outputs.

9: The process input has a direct and strong effect on the key outputs.

EXHIBIT 2

CAUSE-AND-EFFECT MATRIX

Rating of importance to customer: ETS results = 10; proficiency in business-related work environments = 10; student evaluation of learning = 5. Correlation scores: 9 = strong, 3 = moderate, 1 = remote, 0 = none.

PROCESS STEP / PROCESS INPUT              ETS   PROFICIENCY   STUDENT EVAL.   TOTAL

Assign Faculty
  Faculty hours available                  0         0              0             0
  Faculty interest                         3         3              9           105
  Previous experience                      1         3              3            55
  Courses offered                          9         9              0           180
  Number of preparations                   3         3              1            65
  Number of sections                       0         0              0             0

Select Textbook and Resources
  Course goals                             9         9              3           195

Course Design
  Textbook selected                        3         3              3            75
  Software selected                        0         9              3           105
  Course description                       0         0              0             0
  Faculty handbook guidelines              0         0              0             0
  Course goals                             9         9              3           195
  Instructor experience with course        1         3              3            55

Teach Material
  Syllabus                                 3         3              3            75
  Faculty preparation                      9         9              9           225
  Student preparation                      9         9              9           225
  Teaching style                           3         3              3            75
  Faculty training                         9         9              3           195
  Software                                 1         9              9           145
  Computer availability                    0         3              3            45
  Supplemental material                    3         3              3            75
  Teaching (theory)                        9         9              9           225
  Teaching (application)                   9         9              9           225

Evaluate Learning
  Knowledge of software application        0         9              9           135
  Exams                                    3         3              3            75
  Projects/cases                           3         9              3           135
  Homework                                 3         3              3            75
  Test design                              9         3              3           135

Totals                                  1020      1410            530

These ratings were used to force separation on the importance of the inputs. This was one of the more time-consuming steps in the project, as there was spirited debate on the degree to which an input affected an outcome. The discussions were more intense when there was disagreement over whether an input had a moderate or a strong effect. The forced separation made the team discuss the issues thoroughly and come to a consensus on how to rate each input. The team then assigned correlation scores for each input to each outcome. The scores were multiplied by the importance rating for each critical output and summed across the outputs. The higher the summed score, the more influence the input had on the process based on the identified critical outcomes. The team selected 105 as the cutoff score for inputs to move forward in the process, as this score seemed to be a natural breaking point. Based on this analysis, the inputs in Table 2 were selected as influential and required further analysis.
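Because the CE matrix arithmetic is just a weighted sum, it can be scripted. The following sketch (ours; the team actually worked in 3M’s spreadsheet templates, so the code and names here are purely illustrative) reproduces the scores for three inputs from Exhibit 2:

```python
# Importance ratings for the key outputs (from Table 1).
importance = {"ETS": 10, "Proficiency": 10, "Student evaluation": 5}

# Correlation scores (0, 1, 3, or 9) for a few process inputs (from Exhibit 2).
correlations = {
    "Faculty interest": {"ETS": 3, "Proficiency": 3, "Student evaluation": 9},
    "Courses offered":  {"ETS": 9, "Proficiency": 9, "Student evaluation": 0},
    "Course goals":     {"ETS": 9, "Proficiency": 9, "Student evaluation": 3},
}

CUTOFF = 105  # inputs scoring at or above this moved forward in the process

for name, scores in correlations.items():
    total = sum(scores[output] * weight for output, weight in importance.items())
    print(f"{name}: {total} ({'keep' if total >= CUTOFF else 'drop'})")
# Faculty interest: 105 (keep)
# Courses offered: 180 (keep)
# Course goals: 195 (keep)
```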

The Analyze Phase

The purpose of the Analyze phase of the DMAIC model is to reduce the process input variables to a manageable number and determine the high-risk inputs. To accomplish this, we used a failure mode and effects analysis (FMEA). Reid (2005) describes FMEA as a tool to list possible failure modes of a product, service, or process and to provide a rating so improvement efforts can focus on the most important features or characteristics. He provides an example of the application of FMEA in the health care sector. Another example of application of FMEA in the service sector is its use at Compuware Corp., where project teams use FMEA to ensure that the service designs and processes meet desired outcomes and can be implemented effectively (McCain, 2006).

The output from the CE matrix serves as an input to the FMEA. The FMEA helps analyze the root causes of failure by examining each of the influential inputs identified by the matrix and determining the failure modes, or what specifically could cause that particular process input to fail. For example, in our project, the CE matrix identified faculty interest as an influential input. A faculty member assigned to teach a course he or she is qualified to teach but has little or no interest in is a possible failure mode for the faculty interest process input.

To complete the FMEA, we had to develop FMEA rankings for particular process failures. The rankings are based on the severity of the effect of the failure, the likelihood of occurrence of the process failure, and the ability to detect the failure. In the example where the failure mode is assigning a faculty member to teach a course in which he or she has little to no interest, we have to rate how severe the effect would be on each of our critical outcomes, what the potential causes of the failure mode are and the likelihood that each cause would occur, and what controls are in place and how likely it is that the controls could detect the cause of the failure mode. We used a ratings scale of 1 to 7, with a rating of 7 given to an input that has a major effect on most students’ performance, has a very high likelihood of occurrence, and cannot be detected. A rating of 1 is given to an input that has no effect on most students’ performance, has a remote likelihood of occurrence, and will likely be detected. Each rating on the 1 to 7 scale was given a specific operational meaning (see Table 3).

TABLE 2

INFLUENTIAL INPUTS REQUIRING FURTHER ANALYSIS

PROCESS STEP                    KEY INPUTS
Assign faculty                  Faculty interest; courses offered
Select textbook and resources   Course goals
Design course                   Software selected; course goals
Teach material                  Faculty preparation; student preparation; faculty training; software used; teaching (theory); teaching (application)
Evaluate learning               Knowledge of software application; projects or cases used; test design

TABLE 3

EXPLANATIONS OF FMEA RATINGS

RATING   SEVERITY OF EFFECT                          LIKELIHOOD OF OCCURRENCE                  ABILITY TO DETECT
7        Major effect on most students’ results      Very high: failure is almost inevitable   Cannot detect
6        Major effect on some students’ results      High: repeated failures                   Remote chance of detection
5        Moderate effect on most students’ results   Moderate: occasional failures             Low chance of detection
4        Moderate effect on some students’ results   Moderate: occasional failures             Moderate chance of detection
3        Minor effect on most students’ results      Low: relatively few failures              Moderately high chance of detection
2        Minor effect on some students’ results      Low: relatively few failures              High chance of detection
1        No effect                                   Remote: failure is unlikely               Almost certain detection


Based on these ratings, a risk-priority number (RPN) is assigned to each input during the FMEA by multiplying the ratings, so an input rated a 7 on severity, a 7 on occurrence, and a 7 on (lack of) ability to detect would have an RPN of 343. Inputs with high RPNs are critical to the process and should become the basis for improvement strategies.
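Since the RPN is simply the product of the three ratings, the calculation is easy to verify; this short sketch (ours, for illustration only) uses the ratings from the first row of Exhibit 3:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk-priority number: the product of the three 1-to-7 FMEA ratings.

    Detection is scaled so that 7 means the failure cannot be detected
    and 1 means detection is almost certain (see Table 3).
    """
    if not all(1 <= r <= 7 for r in (severity, occurrence, detection)):
        raise ValueError("each FMEA rating must be between 1 and 7")
    return severity * occurrence * detection

# From Exhibit 3: course goals inconsistent with outcome objectives,
# effect on ETS performance (SEV 5, OCC 6, DET 6).
print(rpn(5, 6, 6))  # 180
print(rpn(7, 7, 7))  # 343, the maximum possible RPN
```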

Our team conducted an FMEA on the fourteen inputs in the five process steps shown in Exhibit 3. Multiple causes for each failure mode were identified and rated for their effects on each critical outcome. Our experience suggests that FMEA is a time-consuming process and requires a committed team, so the potential outcomes should be worth the time the team has to invest. Exhibit 3 shows a part of the FMEA containing the key potential failures with high RPNs identified in our project.

The Improve and Control Phase

The FMEA helps focus the team’s attention on two types of process failures that need to be addressed. First, the FMEA identifies the few important inputs, based on high RPNs, that have the highest impact on the process outputs. The failure modes for these process inputs require strategic thought and planning by the team because they would have the largest impact on process improvement. The second type of failure may or may not have high RPNs, but solutions to these process failures are obvious and easy to implement; they are often called “just do its.” For example, in the FMEA, course goals were identified as a key input in the “select textbook and resources” process step, and course goals being inconsistent with outcome objectives was identified as the failure mode. The potential cause of this failure was a lack of faculty awareness of outcomes. Correcting the cause of this failure simply requires making all faculty members aware of the outcomes and asking that they use these outcomes as guidelines when developing course objectives, an example of a “just do it.”

To address the strategic issues, the team identified three recommendations to improve the process:

  • Review syllabi and textbooks for Principles of Accounting, Managerial Accounting, and Corporate Finance.

  • Require a minimum grade of C in Principles of Accounting courses.

  • Develop a formalized advising policy that will require at least one meeting per semester between the advisor and the advisee and the use of an advising checklist.

The committee to review syllabi and textbooks would serve as a control function for the failure modes of several key inputs, such as course goals, teaching (theory), teaching (application), and knowledge of software applications. The committee’s charge is to conduct periodic syllabi reviews and assess course goals to ensure that they are consistent with key outcome objectives. The committee is also charged with assessing the degree to which course content is equivalent when more than one faculty member teaches the same course. A simple solution would be to have a common syllabus; however, this may infringe on academic freedom. The team had to come up with a solution that addressed the potential failure caused by lack of consistency across sections with different instructors but allowed enough flexibility for academic freedom. The committee’s charge also includes assessing syllabi for multiple methods of instructional delivery and departmental standards for software integration in the classroom.

The next two recommendations, requiring a minimum grade of C in the Principles of Accounting courses and developing a formalized advising policy, address the student preparation input but deal with separate failure causes. The minimum grade requirement is designed to address the performance of students in the course. The advising policy is designed to concentrate on student priorities. Proper implementation of these three recommendations should result in improvement in critical outcomes.

Control plans exist to ensure that the process consistently meets customer requirements. The documentation of the entire process is also part of the control phase. For this project, we used the RACI (responsible, accountable, consultation, and informed) matrix, which identifies, for each task, the person responsible, the person accountable, the due date, the persons to be used as consultants, and those who need to be informed. Table 4 shows the RACI matrix for this project.

RESULTS

Figure 4 shows the performance of business students in the accounting section of the ETS major field examination in business from academic year 2002–2003 to 2006–2007. The first two years of data were used in the Six Sigma project; the last three years show the results of implementing the project’s recommendations. In the year immediately following the Six Sigma project, the mean percentage correct score of business students in the accounting section of the ETS exam was 47.3, higher than the goal of 46.5. The mean percentage correct scores for the next two years were 42.7 and 44.6, an improvement over the scores prior to completing the project; however, we were not able to sustain the gains achieved in the first year.

We have identified three possible explanations for this variability in students’ performance.


EXHIBIT 3

PARTIAL FAILURE MODE AND EFFECTS ANALYSIS

Column key: SEV = how severe the effect is on student performance (1 to 7); OCC = how often the cause or failure mode occurs (1 to 7); DET = how well current controls can detect the cause or failure mode (7 = cannot detect); RPN = risk-priority number (SEV x OCC x DET). Each failure mode is rated against the same three potential failure effects: poor performance on the ETS, poor student evaluations of learning, and poor on-the-job performance.

Process step: Select Textbook and Resources. Key input: course goals.
Potential failure mode: course goals may be inconsistent with outcome objectives.
Potential cause: faculty awareness of outcome objectives (OCC 6). Current control: textbook (DET 6).
  Poor performance on ETS: SEV 5, RPN 180
  Poor student evaluations of learning: SEV 1, RPN 36
  Poor on-the-job performance: SEV 5, RPN 180

Process step: Teach Material. Key input: student preparation.
Potential failure mode: inadequate preparation.
Potential cause: performance in Accounting 1010 and 1020 (OCC 6). Current control: grades (DET 7).
  Poor performance on ETS: SEV 7, RPN 294
  Poor student evaluations of learning: SEV 5, RPN 210
  Poor on-the-job performance: SEV 7, RPN 294
Potential cause: student priorities (OCC 7). Current control: none listed (DET 6).
  Poor performance on ETS: SEV 7, RPN 294
  Poor student evaluations of learning: SEV 5, RPN 210
  Poor on-the-job performance: SEV 7, RPN 294

Process step: Teach Material. Key input: teaching (theory).
Potential failure mode: theory presented is not necessarily consistent.
Potential cause: multiple faculty teach the same course (OCC 7). Current control: syllabus (DET 6).
  Poor performance on ETS: SEV 6, RPN 252
  Poor student evaluations of learning: SEV 1, RPN 42
  Poor on-the-job performance: SEV 3, RPN 126

Process step: Teach Material. Key input: teaching (application).
Potential failure mode: students do not learn applications.
Potential cause: various learning styles of students (OCC 7). Current control: syllabus (DET 6).
  Poor performance on ETS: SEV 6, RPN 252
  Poor student evaluations of learning: SEV 6, RPN 252
  Poor on-the-job performance: SEV 6, RPN 252
Potential cause: method of instructional delivery (OCC 6). Current control: none (DET 7).
  Poor performance on ETS: SEV 6, RPN 252
  Poor student evaluations of learning: SEV 6, RPN 252
  Poor on-the-job performance: SEV 6, RPN 252

Process step: Evaluate Learning. Key input: knowledge of software application.
Potential failure mode: application software is not integrated across the curriculum.
Potential cause: no department standard (OCC 7). Current control: none (DET 7).
  Poor performance on ETS: SEV 2, RPN 98
  Poor student evaluations of learning: SEV 5, RPN 245
  Poor on-the-job performance: SEV 5, RPN 245

TABLE 4

RACI MATRIX

1. Committee syllabus and textbook review, 1010/1020
   Responsible: Green Belts. Accountable: Green Belt. Due date: Dec. 04.
   Consultation: chair, project lead, and process owner. Informed: division faculty.

2. Committee syllabus and textbook review, 2030/2040
   Responsible: Green Belt and process owner. Accountable: Green Belt. Due date: Dec. 04.
   Consultation: chair, project lead, and Green Belt. Informed: division faculty.

3. Committee syllabus and textbook review, Fin 3050
   Responsible: selected faculty and process owner. Accountable: Green Belt. Due date: May 05.
   Consultation: chair, project lead, and Green Belt. Informed: division faculty.

4. Recommend to the department to require a C in 1010 and 1020
   Responsible: selected faculty and process owner. Accountable: project lead. Due date: Sept. 04.
   Consultation: division faculty. Informed: division faculty.

5. Develop a formalized advising policy (requiring at least one meeting per semester and use of the advising checklist)
   Responsible: chair, project lead, and select faculty. Accountable: chair. Due date: Sept. 04.

FIGURE 4. STUDENTS’ MEAN SCORES IN ACCOUNTING SECTION, 2002–2007

First, the great deal of attention given to our accounting curriculum during the Six Sigma process may have somewhat influenced the results of the ETS exam in the first year after the project concluded. However, we observed a similar pattern in seven of the eight sections of the ETS exam, which suggests that other variables likely caused at least part of the decline in scores over the next two years. The second possible explanation is the effect of Hurricane Katrina, which had a significant impact on our operations. The ETS scores immediately following the Six Sigma project were achieved the semester prior to Katrina, and due to faculty turnover and other Katrina-related issues, we were not able to complete all of the tasks identified in the RACI matrix in the years following the hurricane. The third possibility is that other unidentified or unidentifiable variables not related to the storm, such as student motivation or individual test preparation, caused this variability in students’ performance. The fact that students scored above the baseline in all three years following the project indicates that the Six Sigma project was successful.

CONCLUSIONS AND FURTHER RESEARCH

The faculty and students who were part of the team found the Six Sigma methodology to be beneficial and a great learning experience. Everyone felt they had a better understanding of the specific problem and of the curriculum delivery process in general. Through our experience with this project, we have identified four critical advantages of using Six Sigma in education. First, it worked: although the curriculum delivery process has many nonstandardized variables, we are confident that the project is at least partly responsible for the increased scores over the baseline. Second, the Six Sigma methodology provides a common language for cross-discipline teams. Third, the tools used in Six Sigma create quantifiable priorities. The tools forced the team to put individual opinions in numerical terms, the source of some intense debate. However, when one of our individual pet peeves fell out of the analysis based on the numbers, it was accepted, and the process continued to move forward. The final major advantage of the Six Sigma methodology is that it was a superb team-building process.

We also identified two major disadvantages of using the Six Sigma methodology for curriculum development. First, the data collection cycle is too long. For example, the ETS data are collected once a year because the exam is given annually to graduating seniors, whereas in businesses, production and service data can be collected daily.


The second is the great deal of time necessary to complete the project. In meeting time alone, our project required at least one hour a week for a little over four months. This does not include the preparation time and the work done between the meetings. Due to the number of variables and the complexity of curriculum issues, the time commitment is a structural part of using Six Sigma in this context. However, we think that the long data cycle issue can be addressed through further research. This leads us to the first of three areas for potential research: identifying assessment data and techniques that provide faster feedback. We think that if Six Sigma is used more frequently in academia, it may result in finding more creative ways for data collection that are reliable and valid, and have shorter cycle times.

Our experience suggests that Six Sigma implementation is a time-consuming process and may cause faculty to spend significant time on activities other than teaching and research. The academic evaluation system in institutions of higher education is primarily based on teaching and research (Temponi, 2005). In addition, as Hogg and Hogg (1995) suggested, top-level administration support will be necessary to secure resources for wide-scale use of Six Sigma across academic units. Therefore, a second area of potential research is developing a model that addresses issues of faculty time and other resources needed for successful implementation of Six Sigma in a higher education setting.

The third area would be developing new tools, or modifying existing ones, specifically for curriculum issues. For example, in this project, inputs to the process were categorized as controllable or uncontrollable in the process map. In future Six Sigma projects, we will have three categories of inputs in the process map: controllable, uncontrollable, and influenceable. There were inputs over which we felt we did not have control but that we could influence. For example, in our process map, we defined learning application as an uncontrollable input in the evaluate learning step. Due to the numerous variables that affect student learning, we do not have complete control over it; however, we can certainly influence it through various course evaluation methods.

In this article, we have presented an application of Six Sigma methodology to curriculum delivery, design, and assessment issues and its impact on student performance. We show that this methodology is suitable for addressing curriculum-related issues, but more research and application of Six Sigma are needed in higher education settings to address some of the challenges raised earlier and to make it a more efficient and effective process.

We thank 3M Company for providing training on Six Sigma methodology and allowing us to use the process tools and templates. We also thank Janet Gillespie and John Mitchell for their feedback on earlier drafts of this article.

References

Association of Collegiate Business Schools and Programs. (2008, May). ACBSP standards and criteria for demonstrating excellence in baccalaureate/graduate degree schools and programs. Kansas City: Author. Retrieved June 15, 2008, from http://www.acbsp.org/index.php?mo=cms&op=ld&fid=81.

Benedetto, A.R. (2003). Adapting manufacturing-based Six Sigma methodology to the service environment of a radiology film library. Journal of Healthcare Management, 48(4), 263–280.

Cherry, J., & Seshadri, S. (2000). Six Sigma: Using statistics to reduce process variability and costs in radiology. Radiology Management, 11, 42–45.

Chizmar, J.F. (1994). Total quality management (TQM) of teaching and learning. Journal of Economic Education, 25(2), 179–190.

Deming, W.E. (1986). Out of the crisis: Quality, productivity, and competitive position. Cambridge, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Study.

Dolmans, D.H.J.M., Wolfhagen, H.A.P., & Scherpbier, A.J.J.A. (2003). From quality assurance to total quality management: How can quality assurance result in continuous improvement in health professions education? Education for Health, 16(2), 210–217.

Hahn, G.J., Hill, W.J., Hoerl, R.W., & Zinkgraf, S.A. (1999). The impact of Six Sigma improvement—A glimpse into the future of statistics. American Statistician, 53(3), 208–215.

Hargrove, S.K., & Burge, L. (2002, November). Developing a Six Sigma methodology for improving retention in engineering education. In ASEE/IEEE Frontiers in Education Conference (Vol. 3, pp. S3C-20–S3C-24).

Hogg, R.V., & Hogg, M.C. (1995). Continuous quality improvement in higher education. International Statistical Review, 63(1), 35–48.

Juran, J.M. (1989). Juran on leadership for quality: An executive handbook. New York: Free Press.

Kubiak, T.M. (2007). Reviving the process map. Quality Progress, 40(5), 59–63.

Lynch, D.P., Bertolino, S., & Cloutier, E. (2003). How to scope DMAIC projects: The importance of the right objective cannot be overestimated. Quality Progress, 36(1), 37–41.

Marrelli, A. (2005). The performance technologist toolbox: Process mapping. Performance Improvement, 44(5), 40–44.

McCain, C. (2006). Using an FMEA in a service setting. Quality Progress, 39(9), 24–29.

Mena, C., Aspinall, R., Bernon, M., Templar, S., & Whicker, L. (2004). Gaining visibility of supply-chain cost. Logistics and Transport Focus, 6(7), 54–56.

Miles, M.P., Hazeldine, M.F., & Munilla, L.S. (2004). The 2003 AACSB accreditation standards and implications for business faculty: A short note. Journal of Education for Business, 80(1), 29–34.

Pande, P., Neuman, R., & Cavanagh, R. (2000). The Six Sigma way: How GE, Motorola, and other top companies are honing their performance. New York: McGraw-Hill.

Reid, R.D. (2005). FMEA: Something old, something new. Quality Progress, 38(5), 90–93.

Ricks, J.M., Williams, J.A., & Weeks, W.A. (2008). Sales trainer roles, competencies, skills, and behaviors: A case study. Industrial Marketing Management, 37(5), 593–609.

Snee, R.D. (2003). Eight essential tools. Quality Progress, 36(12), 86–88.

Stamatis, D.H. (2004). Six Sigma fundamentals: A complete guide to the system method and tools. New York: Productivity Press.

Stark, J.S., Lowther, M.A., Ryan, M.P., & Genthon, M. (1988). Faculty reflect on course planning. Research in Higher Education, 29(3), 219–240.

Stevenson, W.J., & Mergen, E. (2006). Teaching Six Sigma concepts in a business school curriculum. Total Quality Management, 17(6), 751–756.


Temponi, C. (2005). Continuous improvement framework: Implications for academia. Quality Assurance in Education, 13(1), 17–35.

Tomkins, R. (1997, October 10). GE beats expectations with 13% rise. Financial Times, Companies and Finance: The Americas, p. 29.

Weinstein, L.B., Castellano, J., Petrick, J., & Vokurka, R.J. (2008). Integrating Six Sigma concepts in an MBA quality management class. Journal of Education for Business, 83(4), 233–238.

Wergin, J.F. (1999). Assessment of programs and units: Program review and specialized accreditation––Architecture for change. Presented at the 1998 AAHE Assessment Conference, Washington, DC.

Wild, C.J. (1995). Continuous improvement of teaching: A case study in a large statistics course. International Statistical Review, 63(1), 49–68.

Zinkgraf, S.A. (2006). Six Sigma—The first 90 days. Upper Saddle River, NJ: Pearson Education.


ANIL KUKREJA, MBA, PhD, serves as Capital One Endowed Professor and chair of the Division of Business at Xavier University of Louisiana in New Orleans. His PhD is in management science. His research interests are in the areas of productivity and quality, stochastic inventory modeling, and simulation modeling. His work has been published in top journals such as Management Science and Computers and Operations Research. He has also presented his work at several national and international conferences. He may be reached at [email protected].

JOE M. RICKS JR., PhD, is an associate professor of marketing in the Division of Business at Xavier University of Louisiana and director of the Xavier University Sales Leadership Institute. He has published in Industrial Marketing Management, Journal of Consumer Marketing, Journal of Business Ethics, Journal of Business Research, and Journal of Vocational Behavior. He has also been a faculty intern at 3M Company, a marketing intern coordinator for McIlhenny (Tabasco) Company, and a visiting professor at Young & Rubicam. He may be reached at [email protected].

JEAN A. MEYER, CPA, MBA, PhD, is a visiting assistant professor of accounting for the Joseph A. Butt S. J. School of Business at Loyola University of New Orleans. She has over 20 years of experience in public accounting and the health care industry. She has held various positions during her career, including regional controller of a hospital chain and chief financial officer of a psychiatric hospital. Her research interests include accounting education and continuing professional education. She may be reached at [email protected].