5-1 Report: Change Management Model Overview

The leadership of the Singaporean-headquartered software solutions organization is concerned about issues arising from communication and coordination challenges.

Book Title: Leading Organizational Change

Authors: LAURIE LEWIS, JAMES M. KOUZES, BARRY Z. POSNER

Southern New Hampshire University


Chapter 4 (Lewis): Outcomes of Change Processes

There are only two tragedies in life: one is not getting what one wants, and the other is getting it

Oscar Wilde

Success is getting what you want; happiness is wanting what you get

Ingrid Bergman

We live immersed in narrative, recounting and reassessing the meaning of our past actions, anticipating the outcome of our future projects, situating ourselves at the intersection of several stories not yet completed

Peter Brooks

Organizational scholars have been trying to describe the outcomes of innovation and change processes for a very long time. Before we take up the discussion of how outcomes of change are assessed, let us first turn to the broader topic of how one assesses organizational outcomes in general. Certainly it is important to attend to the goals and purpose of any organizational strategy in terms of what has been accomplished, what has not, and what is left to do. Further, it is important to assess the degree and quality of accomplishments, as well as to make necessary and useful adjustments to goals as the initiative unfolds.

The Importance of Goals

Goals are important for a few reasons. First, it is critical for communicators who propose and promote a change to make a case for it. (We return to this issue in Chapter 7.) Those whose cooperation in change is necessary must come to believe that its purpose makes sense. To gain cooperation, the change should be viewed as necessary (or at least advantageous) and appropriate to the intended purpose. One of the major pitfalls of change is the inability of leaders to "sell" a vision for the change to those who are responsible for pulling it off operationally. Since change always involves effort, that effort usually needs justification. For some audiences, minimal justification may be necessary; for other audiences or circumstances, this is a major undertaking. That is especially true when there is a good deal of pain involved in the change (e.g. lay-offs, ending something of long-held value). If implementers are not able to articulate goals for a change and provide a sense of purpose that other stakeholders can buy into, they are already starting on a path that is likely to run into resistance – and probably for good reason.

A second reason why goals are so important during change is that they provide an organization, and its stakeholders, with a metric for assessing distance traveled and direction of movement. Although goals can shift and be remade, they still provide us with markers as to where we started and where we were headed, at least at one time. Like trail markers made by hikers in the woods, they provide data points marking a path. That direction might be altered, but the hikers are better off when they know where they've been and the trajectory they have been on. If nothing else, this prevents hikers, and organizations, from mistakenly traveling in circles.

A third reason why goals are important in organizations is that they provide a sense of legitimacy in portraying the organization as rational. Decision-making is supposed to be rational: based on an aim or direction; on information analysis; and on logical reasoning. External and internal stakeholders will often judge the soundness of an organization's decision-making in part on its ability to chart a path targeting a pre-specified goal. Organizations that are unable or unwilling to specify such end points are likely to be considered illegitimate, irrational, or even criminal/unethical. Powerful external stakeholders such as boards of directors, contractual partners, investors, governmental oversight agencies, and the like often demand that organizations offer some sense of goals and purpose in order to achieve legal, financial, and institutional legitimacy. So, there are good reasons for organizations to create goals and to assess them. Although that may seem a rather easy and straightforward task, in fact it is quite complex.

Organizational research that attempts to assess outcomes of organizing processes will often measure effectiveness – the accomplishment of desired results – or efficiency – the accomplishment of desired results with the fewest possible expended resources. For either type of outcome, there are numerous potential problems in pinning down useful assessments. I will highlight here five common problems with assessing organizational outcomes:

1. Knowing when to assess

2. Determining from whose perspective to make assessments

3. Developing methods to assess some types of outcomes

4. Correctly attributing causes and effects

5. Potential costs associated with doing genuine assessment.
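Before turning to these problems, the distinction between the two outcome types can be made concrete with a minimal sketch in Python. The figures and function definitions below are invented purely for illustration; they are not drawn from the research discussed in this chapter.

```python
# A minimal, illustrative sketch of effectiveness vs. efficiency.
# Effectiveness: how much of the desired result was accomplished.
# Efficiency: that accomplishment relative to the resources expended.

def effectiveness(achieved: float, target: float) -> float:
    """Fraction of the desired result actually accomplished."""
    return achieved / target

def efficiency(achieved: float, resources_used: float) -> float:
    """Results accomplished per unit of expended resources."""
    return achieved / resources_used

# Two hypothetical change programs pursuing the same target outcome.
target = 100.0                          # e.g. defects to be eliminated
program_a = {"achieved": 90.0, "cost": 50.0}
program_b = {"achieved": 90.0, "cost": 30.0}

for name, p in (("Program A", program_a), ("Program B", program_b)):
    print(name,
          f"effectiveness={effectiveness(p['achieved'], target):.0%}",
          f"efficiency={efficiency(p['achieved'], p['cost']):.2f} per unit cost")
# Both programs are equally effective (90%); B is more efficient (3.00 vs. 1.80).
```

As the sketch shows, two efforts can be indistinguishable by one standard and clearly ranked by the other, which is why the five problems below complicate both kinds of assessment.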

Assessing Organizational Outcomes

Let us start with a simple question about assessment of outcomes of any organizational endeavor: When should assessment of outcomes take place? A concrete example in a more familiar context might aid this discussion. I live in New Jersey and it does snow here. Once in a while we get one of those snows that comes down all day. You wake up and find an inch or two on the ground. Then, you need to decide whether to shovel the driveway or wait and let a few more inches fall before you start shoveling. The longer you wait, the bigger the job, but the chances improve that you'll have to shovel fewer times. If you shovel at two in the afternoon and the snow stops, you have really good results – cleared driveway! If you wait another two hours and another 2 in. have fallen, your earlier results are completely erased and now you must reshovel. Of course, if you wait until 6 in. have fallen and some freezing rain falls on top of that, you may find you cannot shovel the snow at all – bad results. So, the question is at what point during the day do you assess your outcomes? If you assess your outcomes at 2:05 p.m., you'd be very pleased. Later that afternoon, you might be very displeased, and even later that night after an inch of freezing rain, you may not only judge your shoveling a failure, but your future prospects may look dim as well!

Timing of Assessing Outcomes

One lesson from the snow-shoveling example is that the timing of assessment of any outcome can play a major role in how we judge what we have accomplished. This is as true of organizational dynamics as it is of snow removal. Many of the examples and cases discussed in this book thus far provide further evidence of the principle that achieved results change over time. The Spellings Commission would likely have judged the impact of its Report as a failure at the outset, since the immediate negative reaction of so many stakeholders was so strong. However, over the months that stakeholders discussed, debated, and reconsidered actions relative to the Report, results became more favorable. The higher education community began to work towards accomplishing some of the goals highlighted by the Report. The Commission started to see the fruits of its labor.

Assessing Outcomes from Multiple Perspectives

Not only is it hard to know when to assess outcomes, it is hard to know from what perspective to assess outcomes. To return to our snow example, when my kids were younger they would assess outcomes of snow in terms of (i) whether enough falls to get school canceled (good result), and (ii) whether enough snow falls to afford good sledding (also a good result). My husband and I are more concerned with how fast we can have our driveway cleared and how many times we have to shovel. The local snow removal service likely judges outcomes in terms of how much revenue the snowfall produces. From their perspective, a few moderate days of snowfall are superior to one day accumulating the same number of inches. If they can charge for two visits to clear driveways, they are better off and can probably do their job more efficiently on each day, so they don't have to pay workers for overtime.

In the case of organizational operations, many different stakeholder perspectives may be relevant, and those different stakeholders may use very different metrics to assess organizations. As we discussed in Chapter 3, stakeholders not only identify with different roles and groups, they also occupy very different positions with regard to different organizational products and by-products. This can make stakeholders' assessments quite complex. If we consider the example of the Spellings Commission, we can predict that university and college faculty, higher education business officers, journalists, state governments, parents of college-age children, communities that support state-funded universities, employers of college graduates, and many other stakeholders will demand different things from higher education and thus view the outcomes in very different ways. As Pfarrer, DeCelles, Smith, and Taylor (2008) argue, "an organization may be able to satisfy the demands of certain stakeholder groups only at the expense of others" (p. 732).

Further, as we discussed in the last chapter, even individual stakeholders might have more than one perspective on a specific organizational outcome. As a university professor, I may view some of the Spellings Commission recommendations as potentially insulting towards higher education. I might perceive a threat of a federalization that might restrict academic freedoms in higher educational institutions. However, as a parent of children who will be applying to universities very soon, I may have a very different read on some of the recommendations of the Commission, and may see some merit in the indictments of higher education and the urgent need for reform. I might even recognize that higher education is not making enough progress on some of these issues. How I assess the attempts of higher education to address concerns raised in the Commission's Report will vary depending on what “hat” I have on.

One way for organizations to resolve the problem of multiple stakeholder perspectives in assessing organizational outcomes is to adopt a purely managerial viewpoint. In doing so, management and/or shareholder perspectives become paramount. In for-profit organizations, a bottom-line consideration may become most important. For nonprofit organizations, the accomplishment of a central mission may be highlighted. For government agencies, accomplishing politically expedient goals that ensure the re-election of officials may be most sought. For any organization, survival can be an ultimate measure of success or outcome. However, due to equifinality – the principle that there are multiple paths to the same end – that can be a very ambiguous standard to use in assessment. Many different paths could permit an organization to survive. Some may be "better" paths by some other standard (e.g. the most ethical path to survival; the path that preserves the most stakeholders' demands; the path that highlights only shareholders' preferences), but all share the ultimate standard of survival. For example, in the current political environment, threatened trade wars and protectionist stances by the Trump administration have caused many state governments, industries, and corporations to consider strategies to survive. These varied strategies may include creating alternative markets for goods and products; lobbying administration officials to avoid trade tariffs; and/or riding out the trade disputes.

Difficulty of Metrics of Success

A third problem in measuring organizational outcomes concerns the difficulty of measuring some outcomes at all. Organizations are interested in many levels and types of outcomes. Some important activities of organizations concern things that are very hard to measure, such as how the public perceives the brand of the organization; whether employees/members have internalized important values of the organization; the degree to which an organizational philosophy is being lived out in practice; and how customers and clients benefit from an organization's operations. Although probably all organizations have some goals that are difficult to measure directly, nonprofit organizations often grapple with this problem (DiMaggio 1988; Kanter and Summers 1987). For many nonprofit organizations, bottom-line or easily quantifiable metrics do not capture their lofty missions. The mission of the Girl Scouts of America is a good example: "Girl Scouting builds girls of courage, confidence, and character, who make the world a better place." The Girl Scout organization has several more specific goals: discovering fun, friendship, and the power of girls together; developing girls' full individual potential; relating to others with increasing understanding, skill, and respect; developing values to guide their actions and provide the foundation for sound decision-making; and contributing to the improvement of society. These are not easily measured, to say the least.

Our Homeless Net case provides another example of an intractable mission (see Case Box 4.1). The homeless service providers have a shared mission to end homelessness. That is, their aim is that everyone in the United States will have an adequate, safe, affordable home. Although that may sound like a simple mission, the debate arises more over the manner in which the service providers operate. Some providers focus on what they do well and what they are rewarded for doing by funders – providing basic needs to persons who are homeless. Others argue that the focus ought to be eliminating the needs of these persons by resolving core causes of homelessness: lack of affordable housing; lack of medical care; and lack of a living wage. Similar debates are raised for other problematic situations. Should one put all donated monies towards finding a cure for cancer, or should a portion of the money go to the care of those suffering from cancer? These issues are challenging, not only in the sense that assessing such a large, complex mission is difficult, but also in the sense that not all stakeholders may agree on what priorities ought to prevail along the path to the larger goal. Large, complex missions often need to be addressed in small steps and small bites.

In order to encourage supporters of their organizations, nonprofits must demonstrate that they are able to accomplish their stated goals. With missions that are very difficult to measure, this has become a major challenge for organizations (Ospina, Diaz, and O'Sullivan 2002). Sawhill and Williamson (2001) use the example of a nature conservancy's struggle to develop metrics to assess its goal achievement. Although the conservancy had a clear philosophical mission, "to preserve the diversity of plants and animals around the world by protecting habitats," it used very quantifiable measures to assess outcomes that did not really get at the core mission: the number of dollars collected towards its cause and the number of acres it owned. Doubtless these metrics had some appeal to specific stakeholders. However, they did not really capture the mission of the organization. "The conservancy's goal, after all, isn't to buy land or raise money, it is to preserve the diversity of life on earth" (Sawhill and Williamson 2001, p. 101). Because the extinction of species continued to spiral higher every year after the organization's founding, from that perspective the organization was a failure. The Homeless Net organizations could come to a similar conclusion, in that nationwide homelessness continues to increase despite their efforts. Overall, homeless service providers are losing the battle to prevent homelessness and can much more readily document "success" in terms of numbers of clients served. Ironically, that is not their true goal. In fact, the lower the need for their services, the closer they are to achieving their real stated goal!

Attribution Errors

Another potential problem in assessing organizational outcomes concerns errors in attributing causes and effects. An attribution error occurs when an observer attributes the cause of an observation incorrectly. When I hear a loud crash downstairs and assume that my children have broken something, only to discover later that one of my cats caused the noise, I have made an attribution error. Cause-and-effect attributions are made all the time in organizations. Decision-makers who are striving to improve service, increase employee or customer loyalty, speed up production processes, or decrease defects perform what amount to experiments in cause-and-effect relationships. Managers will introduce an intervention of some sort, observe whether outcomes improve by some standard, and then draw conclusions about whether what they did "worked" or not. Although we all perform these sorts of experiments in our lives in many contexts all the time, they are fundamentally flawed as true experiments because they are not controlled. A true experiment, in which we can really assess cause-and-effect relationships, requires important procedures such as random assignment to conditions; controlled conditions (so nothing else varies except for the one thing we are studying); and strict objective measures over time. Usually, in the natural "experiments" that organizations conduct daily to figure out what is working and what is not, these conditions are not met, and thus it is far easier to make errors in attributing cause-and-effect relationships.

For example, if an organization's production line introduces a new method of assembling part of the product at a particular workstation, the manager would likely want to know if the new method was (i) effective, and (ii) better than the previous method. The manager would have to first define "effective" and "better" and would likely already have a standard metric to measure that, such as speed of production, number of quality defects, and/or number of times the line had to be stopped for adjustments to be made. The manager would have to assess the relative costs of this new method (e.g. if it involved more employees at that station, costs would be higher) and thus, what level of improvement would justify the additional costs. Once a target and metrics are in place, the experiment would commence. At some appointed time (and as noted earlier in this chapter, that can sometimes be hard to determine), the metric would be assessed. If the metric showed improvement over previous measures, the manager might conclude that the new method was indeed better. If the metric was equal to or less than previous measures, then the manager might conclude that the new method (i) was flawed, (ii) was not worth the additional cost, or (iii) needed more time to show evidence of success. However, what if, during the week the manager conducted her experiment, her three best workers were all out with the flu? Or, what if the new method appeared to raise the production quality and speed on this line, but production quality and speed went up on all lines in that week? The manager would have a hard time discerning whether the rise in production on the experimental line was due to the new method or just a fluke of productivity in the whole plant that week.
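The plant-wide fluke just described can be made concrete with a small, hypothetical calculation. The following Python sketch uses invented numbers and an informal comparison against the unchanged lines as "controls"; it illustrates the attribution problem rather than prescribing an evaluation method.

```python
# Hypothetical weekly output (units/hour) for four production lines.
# Week 1: the old method everywhere. Week 2: only line A adopts the new
# method, but a plant-wide surge lifts every line that week.
week1 = {"A": 100.0, "B": 98.0, "C": 101.0, "D": 99.0}
week2 = {"A": 112.0, "B": 106.0, "C": 109.0, "D": 107.0}

# Naive before/after comparison on the experimental line alone:
naive_gain = week2["A"] - week1["A"]
print(f"Naive estimate of the new method's effect: +{naive_gain:.1f}")

# Using the unchanged lines as controls isolates the plant-wide surge:
controls = ["B", "C", "D"]
surge = sum(week2[line] - week1[line] for line in controls) / len(controls)
adjusted_gain = naive_gain - surge
print(f"Average change on control lines (the confound): +{surge:.1f}")
print(f"Adjusted estimate of the method's effect: +{adjusted_gain:.1f}")
# The naive reading (+12.0) overstates the method's contribution (+4.0)
# because +8.0 of the gain occurred on every line that week.
```

Even this adjustment falls short of a true experiment, of course; it merely shows how easily a naive before/after comparison misattributes a plant-wide effect to the intervention.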

Now, this line manager might eventually be able to gather enough data to eliminate other possible explanations of the new production method's success or failure; but things can get much more complicated in the larger contexts of organizations. It can be much more difficult to sort out cause and effect in cases where organizations are trying to influence the image of their brand; increase knowledge that potential customers have of their products; increase sales with a particular marketing campaign, or other large outcomes of organizational practice that are nearly impossible to study in a controlled way. Many factors could account for the effects of brand name awareness, product awareness, and sales. Whether one specific action taken by an organization is responsible for these outcomes, or some portion of them, can be very difficult to determine.

Documenting Failure

A final problem in assessing organizational outcomes concerns the costs of documenting failures and the perceived need to make accurate measurements. Given the political context of organizations that we discussed in Chapter 1, it may be that at times organizational leaders do not truly wish to assess organizational outcomes accurately. Often data, even assessment data, can serve symbolic functions in an organization. As Feldman and March (1981) argue, the mere fact that data is present during decision-making can provide legitimacy to the outcome. Data and information can often be used as a symbol of due diligence. This can be true even when the data does not properly support the decisions that are made. So, for example, if the plant manager in the earlier example has a vested interest in the new line methodology, and has essentially already committed herself to promoting the new methodology across the organization, the data collected about its relative benefits over other methods may be moot. The collection of data may be for symbolic reasons only – to be able to claim that she "studied" the new method. In this case, assessment of outcomes of this new production method is made politically, not in terms of objective metrics.

In some cases, documenting complete success might imply the end of the organization. Think of organizations whose mission is to eradicate disease. If that eradication is achieved, the purpose of the organization is fulfilled – complete success equals organizational death! The March of Dimes (see Highlight Box 4.1) actually faced such a crisis when Jonas Salk developed the vaccine for polio. Following that discovery, since its primary mission had been accomplished, the March of Dimes changed its focus to the prevention of birth defects; ironically, it had a very difficult time finding financial support. It was only through quick adaptation that it was able to avoid financial crisis. Documenting success in this case was not beneficial.

Assessing Change Outcomes

As discussed in Chapter 3, we can think of outcomes of change implementation in terms of the observable system – what we can notice through participation and observation – and results – what is achieved plus other consequences that arise. As noted in Chapter 1, Rogers (1983) proposed the concept of "routinization" – when the innovation/change has become incorporated into the regular activities of an organization and is no longer considered a separate new idea – as a descriptor for the "observable system." Others have used terms like "refreezing" (Lewin 1951) and "institutionalization" (Goodman, Bazerman, and Conlon 1980) to refer to the point where the change outcome is known and the process of change is complete.

Other scholarship has focused more on descriptors for the nature of outcomes of change. Scholarship as early as the 1970s acknowledged that implementation outcomes should not be assessed merely in binary terms (e.g. adopted/not adopted) as in diffusion research, but described in terms of some degree of use or partial adoption measure (Calsyn, Tornatzky, and Dittmar 1977; Hall and Loucks 1977). Also, terms like “reinvention” (Rice and Rogers 1980; Rogers 1988), “adaptation” (Glaser and Backer 1977; Leonard‐Barton 1988), “modification” (Lewis and Seibold 1993), and “appropriation” (DeSanctis and Poole 1994) have been used to describe how the original idea of a change sometimes morphs during implementation.

For some scholars, this is viewed as a very positive outcome because it demonstrates that users alter the change to fit their own needs and goals. For example, Barrett and Stephens (2017) examined how electronic health records (EHR) could be implemented with more or less tolerance for individualized appropriation. Barrett and Stephens draw upon Adaptive Structuration Theory (AST) to account for the means by which employees confront EHR implementation. As employees attempt to incorporate a mandated technology into their work practices, they engage in "change appropriation practices," wherein they modify a technology's built-in features in order to accomplish their work tasks. DeSanctis and Poole (1994) conceptualized two means of appropriating technology, faithful and unfaithful:

Faithful Appropriation – Using the technology in a way that complies with its spirit – or the general understanding of how the technology “ought” to be used according to its original designer

Unfaithful Appropriation – Using the technology in a way that is inconsistent with or violates the spirit of the technology and the intentions of the designer

Examples of unfaithful appropriation of EHR technology were reported by Saleem et al. (2011). They describe nurses' practice of printing paper forms for each patient at the start of the workday, on which they wrote down clinical reminders, prescription doses, and blood work that was needed. Nurses would then return at the end of a shift to input this information into the EHR database. This practice fell outside what designers of EHR systems intended because it delayed real-time updates in the system for each patient.

Fidelity and Uniformity

In earlier work with my colleague David Seibold, we introduced the terms "fidelity" and "uniformity" to describe different dimensions of change outcomes. Fidelity describes the degree of departure from the intended design of the change, similar to DeSanctis and Poole's notion of faithfulness. Uniformity describes the range of use of the change across adopting unit(s) or stakeholder groups. These two dimensions can be combined to describe how a change is "modified" in use in an organization. They can also be used to describe how, and to what degree, implementers intend for users to adapt change programs in use. We can imagine high and low degrees of desired "fidelity" and "uniformity":

High–High Case. Implementation efforts that aim to produce a high degree of fidelity (match to a specific vision for use/participation) and a high degree of uniformity (all stakeholders using or participating in similar ways) suggest an implementation that is focused on producing a single model and enforcing or cajoling that specific model to be followed by all participants. The implementation of a sexual harassment policy might be a good example of this sort of effort. The desired observable system outcome is that everyone follows the same rules and guidelines for creating an environment that is non‐discriminatory; non‐threatening; and void of sexually charged language, displays or other inappropriate behaviors. It would not be a desirable outcome to have much, if any, variation in how stakeholders or various stakeholder groups applied such a new policy.

Low–Low Case. Implementation of new software for data management might be a very different case. In such an effort, the implementers may encourage experimentation and different possible applications of the software across stakeholders. Perhaps accounting would use the data management system for keeping track of payments; the human resource department may use it to track applications; and the production unit would use it in some aspect of product quality control. In such a case, neither fidelity nor uniformity may be important in order to consider the implementation successful. Simple experimentation by different departments may be the only goal.

High–Low Case. The case of high fidelity and low uniformity may occur where there is a specific vision by decision‐makers and implementers for each different stakeholder group/unit for use or participation in the change. So, although differences across stakeholders are tolerated (in fact, desired), those differences are prescribed by the implementers in advance and not left to experimentation. An example of this might be the introduction of a new design tool wherein various work groups are asked to experiment with a specific application of the tool within their own group. Thus, all work groups would be doing very different things with the tool, but they would do so as prescribed by the organization.

Low–High Case. The low fidelity and high uniformity situation might involve a context in which the organization has many possibilities for making use of a new innovation but needs to ensure that all stakeholders are ultimately participating in similar ways. This might involve a round of discussion and brainstorming in ways that the organization might make use of a new strategy, resource, or tool. Then, once the best use has been decided, it is implemented with the goal of uniform use/participation across all stakeholders. A new payroll system for keeping track of employee hours would be an example of this. There might be many ways to report work hours and keep track of vacation and sick leave; but it might be important that all units in an organization do it the same way so as not to create chaos in the payroll department. There could be some joint discussion about different alternatives (high fidelity to some designer's plan need not be mandated), but ultimately, everyone would need to do it the same way (high uniformity).
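Taken together, the four cases form a simple two-by-two scheme. Purely as an illustrative sketch – the labels and examples below paraphrase the cases above, and the encoding itself is hypothetical rather than anything proposed in the underlying research – the scheme could be written as:

```python
from enum import Enum

class Level(Enum):
    LOW = "low"
    HIGH = "high"

# The four intended-outcome archetypes, keyed by the degree of fidelity
# and uniformity that implementers desire. Labels and examples paraphrase
# the four cases described in the text.
CASES = {
    (Level.HIGH, Level.HIGH): "single enforced model (e.g. a sexual harassment policy)",
    (Level.LOW, Level.LOW): "open experimentation (e.g. new data-management software)",
    (Level.HIGH, Level.LOW): "prescribed variation per unit (e.g. a new design tool)",
    (Level.LOW, Level.HIGH): "negotiate, then standardize (e.g. a new payroll system)",
}

def intended_outcome(fidelity: Level, uniformity: Level) -> str:
    """Return the archetype matching a desired fidelity/uniformity pairing."""
    return CASES[(fidelity, uniformity)]

# The Kelco line technician program discussed below was intended as a
# high-fidelity, high-uniformity implementation:
print(intended_outcome(Level.HIGH, Level.HIGH))
```

The value of the scheme lies less in the labels than in forcing implementers to state, in advance, which combination they actually intend.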

The fidelity and uniformity concepts provide us with language to describe both the intentions of implementers (e.g. how much fidelity and uniformity is desired at the outset) and the ways in which the observable system exists at any point after introduction of a change. While I was still in graduate school, I was part of a team that studied a large food manufacturing plant in the Midwestern United States. "Kelco" (Lewis and Seibold 1993) introduced a line technician program that involved new roles for a set of employees. Those who were placed in the line tech role had to learn either new technical or new administrative skills, since they had been recruited from either purely mechanic or purely production jobs. When our evaluation team came on the scene, we observed much variation in how the role of line technician was operating across shifts and across the plant. The role had been unclearly defined, and support in training line technicians in new skills was poor. This left the new line techs to invent their own roles based on their best guess as to what they should be doing or what they wanted the role to be.

Some line techs, who formerly had been production workers, focused their efforts on the administrative portion of the job and ignored the mechanical repair aspects. Others, who formerly had been mechanics, focused their efforts on the repair work and ignored many of the administrative parts of the job. As one interviewee put it, "they went with their strengths" – focusing their efforts on those portions of the new job at which they already had competence. The result of these attempts to self-socialize into the new role was extremely low fidelity and low uniformity in the practice of the line technician program at Kelco. Because the implementers' intentions were to have high fidelity and high uniformity, this was a very undesirable outcome for the organization.

Organizational Goals

An important note about goals needs to be considered here. Goals in organizations seem like very rational and fixed constructions that (i) are known at the outset of change, (ii) are stable across the change process, and (iii) can be assessed at any given point. After all, that is the very definition of a goal – a guidepost by which we measure performance over time. However, organizational goals are likely much more fluid than this rational depiction suggests. First, goals are likely to be held differently by different stakeholders because they hold different stakes. Further, some goals may be hidden at times. Goals shift over time and through the process of enactment we discussed in Chapter 1. Individuals and collectives are able to rewrite history and convince themselves that the real goals (the ones now held) were present even at the start. Think of the classic example of the New Year's resolution that many of us make each year. Or the goals we make about our grade performance at the start of a term. We start out saying "I'll lose x amount of weight" or "I'll get all As." But as the difficulty of reaching those goals (often ones we've set repeatedly) becomes apparent and various unpredicted barriers get in our way, we revise our goals ("I really meant to lose y amount of weight," or "I'll stay on a healthy diet regardless of the weight I lose," or "I'm actually happy to get at least a B average this term"). We can sometimes convince ourselves, as well as others, that the previous goals never existed and measure performance against the revised goal.

In organizations it is perhaps even easier to experience shifting goals over time due to the complex nature of setting goals, assessing them, and reporting on them to stakeholders. How we frame a goal in terms of language like "improvement," "effectiveness," "more," "less," and "successful," to name only a few, can become contested in organizations. Being fairly poor at nailing down measurable goals and holding themselves publicly accountable for specific measurable results is a bad habit of organizations, and perhaps a defensive strategy.

Authenticity

In my 2007 model (see Figure 3.1), I introduced a third way to describe the observable system – authenticity. Authenticity concerns the sincerity of stakeholders' compliance with implementers' expectations for their behavior. As we discussed in Chapter 2, inauthenticity arises when stakeholders suppress genuine emotions and "fake" their approval, liking, and/or enthusiasm for a change. Surabhi Sahay (2017) found that a lack of trust in the sincerity of implementers' "authentic" input solicitation led to a lack of authentic input being provided. One possible result of this cycle is that implementers may observe a desired level of uniformity and fidelity in stakeholder participation in change, but still not readily detect inauthenticity in stakeholders' responses. In other words, when they hear and see what appear to be enthusiastic or at least compliant responses, they assume that the change is well accepted. In cases where implementation involves mandated participation, and where input by stakeholders is neither invited nor tolerated, some stakeholders' compliance may involve faking enthusiasm, support, and/or approval.

As already discussed, change often invokes the politics of organizational contexts, where display of genuine feelings and assessments of change may be risky. Feelings of disappointment, fear, frustration, anxiety, and even rage among stakeholders may need to be suppressed to avoid incurring some political cost. Such suppression can lead to increases in stress (Grandey 2003), burnout (Schmisseur unpublished; Tracy 2000), emotional exhaustion (Schmisseur 2005), and depressed mood (Erickson and Wharton 1997) for individual stakeholders.

Also, the consequences of inauthenticity may be high for organizations and implementation efforts. The indirect costs of this outcome may include change burnout (exhaustion of an individual's capacity or willingness to continue to participate in change programs; Lewis 2006); lack of vigilance in reporting and working to resolve problems in change implementation; and unit and organizational turnover. Harris and Ogbonna (2002) provide a good example of ritualistic cooperation (a form of inauthenticity) with a culture change in a UK hotel chain. The managers instituted an annual review of goals and progress as part of an effort to shift the culture. The front-line interpretation of this practice illustrates inauthenticity and the undesirable organizational outcomes of this review:

Every three months or so we get a pack – yeah. Yeah – “this is how we're supposed to act this month.” Yeah, yeah – this is now your philosophy for life! Oh, when we say “for life” we actually mean until we change our minds next year! It's like a ceremonial event – we troop into the room, get lectured on what the company wants and troop back out again. It's just one of those things we do. After a few years you don't even question it anymore! (p. 38)

Clearly, these front‐line workers were not internalizing the cultural shift that implementers desired. They approached the goal setting meeting as a mere symbolic event that was to be endured but not truly embraced as an important activity. In another example, the same authors describe how organizational members fake their transformation of attitudes and values in pure performance for their supervisors:

Oh God! We've got rules for everything – how to greet, how to act, how to smile, when to smile, what to say – it's all bollocks! I mean you just do what you want but obey the rules when the boss has got his beady little eye on you! (p. 44)

These examples illustrate how inauthenticity can potentially harm the organization's efforts at accomplishing results. If the outcomes achieved are merely “for show” and not true change, the likelihood of achieving the desired results that are expected to arise from the change effort is low. Further, as you can hear in the comments of these front‐line workers, cynicism, annoyance, and misrepresentation become commonplace.

Scholars have examined individual stakeholder outcomes related to change in terms of attitudes like "willingness" (Miller, Johnson, and Grau 1994) and "liking of change" (Lewis and Seibold 1996), and individuals' abilities to cope and their general well-being (Noblet, McWilliams, and Rodwell 2006; Rafferty and Griffin 2006; Robinson and Griffiths 2005). This approach has generally focused on the degree to which stakeholders' (generally employees') reactions to change become precursors to managerial goals being met or to resistance. Examination of outcomes related to stakeholders' alteration of roles, status, skills, job security, hours worked, internalization of the change philosophy, and similar outcomes is far rarer in the literature. Further, it is extremely rare in the literature to see the examination of outcomes for stakeholders other than employees.

Assessing Results of Change

In the 2007 model, I discussed the idea of “results” as a separable concept from observable system outcomes. Results concern not just what the change program looks like in practice but whether it accomplishes implementers' preconceived goals. Results also concern the material conditions created by the change, as well as unintended consequences.

In terms of goals, research on change outcomes tends to use concepts of "success" and "failure." As we observed earlier in this chapter, those are highly debatable terms depending on how, when, and by whose standard outcomes are assessed. Results are similarly subject to these effects. On an organizational level, it is not only a problem that assessment of results is difficult to do; it is also something that may not be done often. Doyle, Claydon, and Buchanan's (2000) survey of a group of UK managers suggests that systematic, formal evaluation of change outcomes is rare. In fact, in their study 67% agreed that the "change process cannot be evaluated effectively because there are too many overlapping initiatives running at one time" – an indicator of the problem noted earlier in making accurate attributions of cause and effect relationships. Additionally, this study found that the learning process resulting from organizational development activity was neither systematic nor effective. Fifty-four percent of respondents agreed "we don't have the luxury of time to pause and reflect on what we've done in change" and 53% agreed "we tend to repeat mistakes in implementing change because there was no time to learn from what happened in the past" (p. S64). In a study with an international sample of implementers (Lewis 1999), I found a similar result: very few respondents reported using formal evaluation to assess change programs.

For the most part, researchers have asked organizational leaders, implementers, and decision-makers to assess results of change initiatives. When other stakeholders are asked for an evaluation of success, it is usually in terms of the organization's original goals. Very little exploration of individuals' goals or perspectives on results has been done in change scholarship. Further, little is done to effectively measure whether "original goals" are widely shared, understood, and recalled.

One excellent place to begin examining the results that change has for different stakeholders would be to focus on the material conditions that are produced or altered through organizational change. Material conditions include things like employee pay and benefits; community members' experienced levels of noise and air pollution; and the speed or efficiency of service to an organization's clients. These sorts of results change the day-to-day reality for stakeholders. They are quite separable from organizational results, which usually have to do with survival, economic well-being, competitive advantage, and the like.

Unintended consequences are yet another way to describe results for both organizations and for individual stakeholders. Harris and Ogbonna (2002) describe the term as "used to imply unforeseen or unpredicted results to an action (often negative in nature)" (p. 34). Jian (2007) defines these as "consequences that would not have taken place if a social actor had acted differently but that are not what the actor had intended to happen" (p. 6). Examples of unintended consequences include negative results for employees such as lowered job or organizational satisfaction; lowered trust in the organization; organizational turnover; and stress.

Some researchers have studied cynicism as an unintended consequence of organizational change. Reichers, Wanous, and Austin (1997) suggest that failed change programs and inadequate sharing of information about intended change can lead to cynicism, which in turn can lead to lowered commitment, satisfaction, and motivation. Further, Doyle et al. (2000) found that “constant change … seems to have fostered self‐interest, fatigue, burnout, and cynicism, to have damaged relationships, and to have reduced organizational commitment and loyalty” (p. S65).

Our case study of Ingredients Inc. (see Case Box 4.2) provides an excellent example of how change burnout can be problematic. As we learned in Chapter 1, Ingredients Inc. experienced at least nine large to moderate changes over a 12-month period. The employees were reeling from a series of new announcements of, in some cases, unexpected changes. They received many different messages about how extensive the changes would be. Laster (2008) found in this case that employees who had more foreknowledge of changes to come fared better. In another study of multiple changes, Grunberg et al. (2008) found that the changes created anxiety and uncertainty and produced deterioration in many of employees' attitudes concerning their work and the organization. However, these authors also found that most employees rebounded in their attitudes over time.

Unintended consequences can be understood from the perspective of managers' intentions or the intentions of other actors and stakeholders involved in a change effort. A union strike that results in the unintended shutdown of a plant would count here just as much as the stress caused by managers' decisions to increase production requirements. The more stakeholders acting in a situation, the more likely unintended consequences are to occur.

Causes for Implementation Failures and Successes

Change scholars have investigated many reasons for the failure or success of change programs in organizations. In a survey of consultants, researchers, and managers, Covin and Kilmann (1990) found that eight themes emerged as having the most impact on results. (See Table 4.1 for a summary of their study and also of themes in studies by Bikson and Gutek 1994; Ellis 1992; Fairhurst, Green, and Courtright 1995; and Miller et al. 1994.) They include issues related to managers' behaviors and communication; general communication; participation practices; vision; and expectations. Other research has called attention to these and other themes, such as willingness or readiness for change; impacts of uncertainty and stress caused by change; and politics. Less attention has been paid to factors that tend to encourage success, aside from a good deal of research on the effects of involvement and participation. What does exist about the achievement of successful results has been dominated by a focus on a few specific types of change. In our review of the literature (Lewis and Seibold 1998), David Seibold and I found that of the 18 works examining practice advice for success in change, 11 addressed the implementation of new technologies, and 7 addressed manufacturing technologies. It will be hard to develop general theories of success and failure of change until more studies are conducted using broader samples.

In a study (Lewis 2000) with an international sample of 76 implementers in for-profit, nonprofit, and governmental sectors implementing a broad range of different types of changes (e.g. management programs, reorganizations, technologies, customer programs, mergers, quality programs, recruitment programs, and reward programs), I found that implementers tended to identify potential problems that might cause failure in terms of fear or anxiety among staff; negative attitudes; politics; limited resources; and lack of enthusiastic support. The problems that implementers identified as having the largest impact on perceived success of change programs were those concerning the functioning of the implementation team and the overall cooperation of stakeholders. However, they also considered these kinds of problems to be the least anticipated and the least encountered.

A large body of work on important contingencies that impact the failure or success of planned change concerns resistance to change. We will return to this topic in Chapter 6, but for now will note some of the general trends in research on resistance. Whether resistance is a predictor of success or failure is debated in the literature. While that may sound counterintuitive, Markus (1983) points out a reasonable explanation: "[resistance can be functional] by preventing the installation of systems whose use might have on-going negative consequences" (p. 433). Essentially, resistance can be the manifestation of a correct judgment that the change is a bad idea for the organization. Piderit (2000) and Dent and Goldberg (1999) have also suggested that the notion of "resistance to change" as a vilified concept – the ultimate archenemy of implementers – be retired. Piderit's complaints about how the concept has been used in the literature include:

Largely positive intentions of “resistors” are generally ignored in the research

Dichotomizing responses to change as “for” or “against” oversimplifies the potential attitudinal responses that are possible

The “resistance” term has lost a clear, core meaning in the multiple ways it has been used in the literature.

Piderit points out through her review that resistance has typically been viewed as something less powerful stakeholders do to slow down or sabotage a change effort. She argues that the language of resistance tends to favor implementers viewing those who oppose change as obstacles to success.

Two major concerns arise from this standard treatment of resistance. First, thinking of resistance as an obstacle to “right thinking” blinds the implementer to potentially useful observations of flaws in the change initiative. That is, if implementers are not open to the possibility that the change program might be flawed, and they treat any negative commentary as mere resistance, it is easy to dismiss it without consideration. Efforts are directed at fixing the resistors rather than reconsidering or altering the change initiative itself. Piderit quotes Krantz (1999, p. 42) on the use of the concept of resistance in organizations as “a not‐so‐disguised way of blaming the less powerful for unsatisfactory results of change efforts.” Piderit refers to this tendency as the fundamental attribution error (discussed above as the wrongful attribution of cause and effect). Evidence that practitioners are guided towards such an error abounds in the practice‐oriented literature. Popular press books on change implementation often have sections or chapters devoted to resistance to change. Many present their advice about strategies and specific tactics in terms of their ability to reduce or forestall resistance (Lewis et al. 2006).

A second concern about resistance as it is typically used concerns cueing behavior. Implementers who assume resistance in stakeholders may actually promote that response. Think of the last time someone close to you started a sentence with "I know you are going to hate this, but …" It usually sets up an automatic thinking process in which you search for the worst interpretation of what you hear next. A similar reaction may be triggered when implementers introduce change efforts with warnings of how "none of us will like this, but …" or "this will be a challenge for some of you." Zorn, Page, and Cheney (2000) provide an example of this in their study of a cultural transformation: Ken, the manager, introduced change by telling his subordinates they should be "scared, frightened, and excited of the changes that are about to happen" (p. 530).

In considering resistance in change initiatives, we can be certain that a good deal of noncompliant behavior is considered problematic by implementers and that, from their perspectives, much of it is believed to be born of ignorance, fear, stubbornness, or some political motive. For some more enlightened implementers, signs of resistance may be signals that the change has flaws or needs adjustment so that it can be used in a successful way. Such implementers might treat those who raise objections or concerns about the change as loyal, committed, and/or ethical stakeholders who have the organization and its stakeholders uppermost in mind. In either case, the resistance and the reactions to that resistance by implementers and other stakeholders certainly have large impacts on change initiatives.

Another research area pointing to important sources of explanation for success and failure concerns managers' expectations and influence. King (1974) conducted a field study in which managers' expectations for success or failure of an organizational change effort were manipulated (some being led to believe success was more likely; some being led to believe success was less likely). Findings revealed that the results were related more to the managers' expectations than to qualities of the change program itself. Other research has pointed to the importance of managers' cues and encouragement in the behavioral and attitudinal responses of stakeholders (Isabella 1990; Leonard‐Barton and Deschamps 1988).

Conclusion

This chapter has focused our attention on various ways in which organizations perceive, enact, and assess outcomes and results. We have seen that assessment of outcomes is a very important strategic activity that aids in planning, hindsight learning, and course-correction. However, we have also noted that assessment can be a very difficult task. Assessment of outcomes in organizations in general can be fraught with challenges: understanding the moving target of goals; the politicization of goal assessment; errors of attribution; and the problems associated with the timing and means of assessing outcomes. In the context of organizational change, different stakeholders can come to widely different assessments of outcomes. Also, the material conditions of stakeholders of change can vary widely.

Change outcomes have been treated in different ways in the literature, including a focus on "original goals" as a baseline for comparison (e.g. fidelity); a focus on the degree of implementation or outcome achievement; measurement of the similarity of adoption of change across users (e.g. uniformity); and the genuine "buy-in" of key stakeholders (e.g. authenticity). Further, scholars have measured unintended consequences of change programs that can sometimes create additional challenges and burdens for both organizations and individual stakeholders, even when overall intended outcomes and results are achieved.

A number of factors have been identified in the change literature as having predictive value in explaining and accounting for failure and success. Most of the research has explored predictors of failure, with a special focus on “resistance.” In Chapter 6 we return to the topic of resistance in some detail.

This review and discussion of outcomes of change processes has called attention to the importance of multiple perspectives in defining and assessing “what has happened” as a result of initiating change. Stakeholders and implementers have various perspectives on those outcomes; may certainly differ on how and when to assess them; and may even be self‐conflicted as to assessing them at any given point. Understanding that “calling” a success or failure in a change process is a highly social activity is an important first step in grappling with the challenges of measuring outcomes – a topic we return to in Chapter 8.

In the 2007 model (Figure 3.1), stakeholder concerns, assessments of each other, and stakeholder interactions are driven by the communication strategies enacted by implementers and stakeholders. As discussed in Chapter 2, implementers and stakeholders make use of different strategies to disseminate information and solicit input. In Chapter 5 we will discuss a more nuanced view of these general strategic approaches, among others.