Description

START BY READING THE ARTICLE BELOW, AND THEN CONDUCT OUTSIDE RESEARCH ON WELLS FARGO ACCORDING TO THE INSTRUCTIONS THAT FOLLOW.

Case Study Questions:

1a. Name the five barriers that contribute to unethical decisions within organizations.

1b. For each barrier, describe the barrier and its impact on stakeholders.

2a. In 2016, it was reported that thousands of employees at Wells Fargo, over the course of multiple years, created more than a million phony accounts in the names of unaware customers. Research this issue and describe at least three factors that led to the employees’ unethical behavior and willingness to engage in fraud.

2b. Assess which stakeholder groups were impacted by the unethical acts of Wells Fargo employees. Name at least two stakeholder groups and describe how they were impacted.

Thoroughly answer all of the elements listed above.

Write in a professional, analytical tone (not an essay) with clarity and proper grammatical structure.

The written analysis must also include a minimum of four sources about Wells Fargo to support your arguments. Sources need to be a mix of Wells Fargo-produced and non-Wells Fargo-produced material.

The written report needs to be 2½–3 pages (not including the cover and reference pages).

Use proper paraphrasing/quotation marks, inline citations, and a reference list.

Why Ethical People Make Unethical Choices
ORGANIZATIONAL CULTURE
Ethical Breakdowns
Max H. Bazerman
Ann E. Tenbrunsel
FROM THE APRIL 2011 ISSUE
The vast majority of managers mean to run ethical organizations, yet corporate corruption is
widespread. Part of the problem, of course, is that some leaders are out-and-out crooks, and they
direct the malfeasance from the top. But that is rare. Much more often, we believe, employees
bend or break ethics rules because those in charge are blind to unethical behavior and may even
unknowingly encourage it.
Consider an infamous case that, when it broke, had all the earmarks of conscious top-down
corruption. The Ford Pinto, a compact car produced during the 1970s, became notorious for its
tendency in rear-end collisions to leak fuel and explode into flames. More than two dozen people
were killed or injured in Pinto fires before the company issued a recall to correct the problem.
Scrutiny of the decision process behind the model’s launch revealed that under intense
competition from Volkswagen and other small-car manufacturers, Ford had rushed the Pinto into
production. Engineers had discovered the potential danger of ruptured fuel tanks in
preproduction crash tests, but the assembly line was ready to go, and the company’s leaders
decided to proceed. Many saw the decision as evidence of the callousness, greed, and mendacity
of Ford’s leaders—in short, their deep unethicality.
But looking at their decision through a modern lens—one that takes into account a growing
understanding of how cognitive biases distort ethical decision making—we come to a different
conclusion. We suspect that few if any of the executives involved in the Pinto decision believed
that they were making an unethical choice. Why? Apparently because they thought of it as purely
a business decision rather than an ethical one.
Taking an approach heralded as rational in most business school curricula, they conducted a
formal cost-benefit analysis—putting dollar amounts on a redesign, potential lawsuits, and even
lives—and determined that it would be cheaper to pay off lawsuits than to make the repair. That
methodical process colored how they viewed and made their choice. The moral dimension was
not part of the equation. Such “ethical fading,” a phenomenon first described by Ann Tenbrunsel
and her colleague David Messick, takes ethics out of consideration and even increases
unconscious unethical behavior.
What about Lee Iacocca, then a Ford executive VP who was closely involved in the Pinto
launch? When the potentially dangerous design flaw was first discovered, did anyone tell him?
“Hell no,” said one high company official who worked on the Pinto, according to a 1977 article
in Mother Jones. “That person would have been fired. Safety wasn’t a popular subject around
Ford in those days. With Lee it was taboo. Whenever a problem was raised that meant a delay on
the Pinto, Lee would chomp on his cigar, look out the window and say ‘Read the product
objectives and get back to work.’”
We don’t believe that either Iacocca or the executives in charge of the Pinto were consciously
unethical or that they intentionally sanctioned unethical behavior by people further down the
chain of command. The decades since the Pinto case have allowed us to dissect Ford’s decision-making process and apply the latest behavioral ethics theory to it. We believe that the patterns
evident there continue to recur in organizations. A host of psychological and organizational
factors diverted the Ford executives’ attention from the ethical dimensions of the problem, and
executives today are swayed by similar forces. However, few grasp how their own cognitive
biases and the incentive systems they create can conspire to negatively skew behavior and
obscure it from view. Only by understanding these influences can leaders create the ethical
organizations they aspire to run.
Five Barriers to an Ethical Organization
Ill-Conceived Goals
In our teaching we often deal with sales executives. By far the most common problem they
report is that their sales forces maximize sales rather than profits. We ask them what incentives
they give their salespeople, and they confess to actually rewarding sales rather than profits. The
lesson is clear: When employees behave in undesirable ways, it’s a good idea to look at what
you’re encouraging them to do. Consider what happened at Sears, Roebuck in the 1990s, when
management gave automotive mechanics a sales goal of $147 an hour—presumably to increase
the speed of repairs. Rather than work faster, however, employees met the goal by overcharging
for their services and “repairing” things that weren’t broken.
Sears is certainly not unique. The pressure at accounting, consulting, and law firms to maximize
billable hours creates similarly perverse incentives. Employees engage in unnecessary and
expensive projects and creative bookkeeping to reach their goals. Many law firms, increasingly
aware that goals are driving some unethical billing practices, have made billing more transparent
to encourage honest reporting. Of course, this requires a detailed allotment of time spent, so
some firms have assigned codes to hundreds of specific activities. What is the effect? Deciding
where in a multitude of categories an activity falls and assigning a precise number of minutes to
it involves some guesswork—which becomes a component of the billable hour. Research shows
that as the uncertainty involved in completing a task increases, the guesswork becomes more
unconsciously self-serving. Even without an intention to pad hours, overbilling is the outcome. A
system designed to promote ethical behavior backfires.
Let’s look at another case in which a well-intentioned goal led to unethical behavior, this time
helping to drive the recent financial crisis. At the heart of the problem was President Bill
Clinton’s desire to increase homeownership. In 2008 the BusinessWeek editor Peter Coy wrote:
Add President Clinton to the long list of people who deserve a share of the blame for the housing
bubble and bust. A recently re-exposed document shows that his administration went to
ridiculous lengths to increase the national homeownership rate. It promoted paper-thin down
payments and pushed for ways to get lenders to give mortgage loans to first-time buyers with
shaky financing and incomes. It’s clear now that the erosion of lending standards pushed prices
up by increasing demand, and later led to waves of defaults by people who never should have
bought a home in the first place.
The Sears executives seeking to boost repair rates, the partners devising billing policies at law
firms, and the Clinton administration officials intending to increase homeownership never meant
to inspire unethical behavior. But by failing to consider the effects of the goals and reward
systems they created, they did.
Part of the managerial challenge is that employees and organizations require goals in order to
excel. Indeed, among the best-replicated results in research on managerial behavior is that
providing specific, moderately difficult goals is more effective than vague exhortations to “do
your best.” But research also shows that rewarding employees for achieving narrow goals such
as exact production quantities may encourage them to neglect other areas, take undesirable “ends
justify the means” risks, or—most important from our perspective—engage in more unethical
behavior than they would otherwise.
Leaders setting goals should take the perspective of those whose behavior they are trying to
influence and think through their potential responses. This will help head off unintended
consequences and prevent employees from overlooking alternative goals, such as honest
reporting, that are just as important to reward if not more so. When leaders fail to meet this
responsibility, they can be viewed as not only promoting unethical behavior but blindly engaging
in it themselves.
Motivated Blindness
It’s well documented that people see what they want to see and easily miss contradictory
information when it’s in their interest to remain ignorant—a psychological phenomenon known
as motivated blindness. This bias applies dramatically with respect to unethical behavior. At
Ford the senior-most executives involved in the decision to rush the flawed Pinto into production
not only seemed unable to clearly see the ethical dimensions of their own decision but failed to
recognize the unethical behavior of the subordinates who implemented it.
Let’s return to the 2008 financial collapse, in which motivated blindness contributed to some bad
decision making. The “independent” credit rating agencies that famously gave AAA ratings to
collateralized mortgage securities of demonstrably low quality helped build a house of cards that
ultimately came crashing down, driving a wave of foreclosures that pushed thousands of people
out of their homes. Why did the agencies vouch for those risky securities?
Part of the answer lies in powerful conflicts of interest that helped blind them to their own
unethical behavior and that of the companies they rated. The agencies’ purpose is to provide
stakeholders with an objective determination of the creditworthiness of financial institutions and
the debt instruments they sell. The largest agencies, Standard & Poor’s, Moody’s, and Fitch,
were—and still are—paid by the companies they rate. These agencies made their profits by
staying in the good graces of rated companies, not by providing the most accurate assessments of
them, and the agency that was perceived to have the laxest rating standards had the best shot at
winning new clients. Furthermore, the agencies provide consulting services to the same firms
whose securities they rate.
Research reveals that motivated blindness can be just as pernicious in other domains. It suggests,
for instance, that a hiring manager is less likely to notice ethical infractions by a new employee
than are people who have no need to justify the hire—particularly when the infractions help the
employee’s performance. (We’ve personally heard many executives describe this phenomenon.)
The manager may either not see the behavior at all or quickly explain away any hint of a
problem.
Consider the world of sports. In 2007 Barry Bonds, an outfielder for the San Francisco Giants,
surpassed Hank Aaron to become the all-time leader in career home runs—perhaps the most
coveted status in Major League Baseball. (Bonds racked up 762 versus Aaron’s 755.) Although it
was well known that the use of performance-enhancing drugs was common in baseball, the
Giants’ management, the players’ union, and other interested MLB groups failed to fully
investigate the rapid changes in Bonds’s physical appearance, enhanced strength, and
dramatically increased power at the plate. Today Bonds stands accused of illegally using steroids
and lying to a grand jury about it; his perjury trial is set for this spring. If steroid use did help
bring the home runs that swelled ballpark attendance and profits, those with a stake in Bonds’s
performance had a powerful motivation to look the other way: They all stood to benefit
financially.
It does little good to simply note that conflicts of interest exist in an organization. A decade of
research shows that awareness of them doesn’t necessarily reduce their untoward impact on
decision making. Nor will integrity alone prevent them from spurring unethical behavior,
because honest people can suffer from motivated blindness. Executives should be mindful that
conflicts of interest are often not readily visible and should work to remove them from the
organization entirely, looking particularly at existing incentive systems.
Indirect Blindness
In August 2005 Merck sold off two cancer drugs, Mustargen and Cosmegen, to Ovation, a
smaller pharmaceutical firm. The drugs were used by fewer than 5,000 patients and generated
annual sales of only about $1 million, so there appeared to be a clear logic to divesting them. But
after selling the rights to manufacture and market the drugs to Ovation, Merck continued to make
Mustargen and Cosmegen on a contract basis. If small-market drugs weren’t worth the effort,
why did Merck keep producing them?
Soon after the deal was completed, Ovation raised Mustargen’s wholesale price by about 1,000%
and Cosmegen’s even more. (In fact, Ovation had a history of buying and raising the prices on
small-market drugs from large firms that would have had public-relations problems with
conspicuous price increases.) Why didn’t Merck retain ownership and raise the prices itself? We
don’t know for sure, but we assume that the company preferred a headline like “Merck Sells
Two Products to Ovation” to one like “Merck Increases Cancer Drug Prices by 1,000%.”
We are not concerned here with whether pharmaceutical companies are entitled to gigantic profit
margins. Rather, we want to know why managers and consumers tend not to hold people and
organizations accountable for unethical behavior carried out through third parties, even when the
intent is clear. Assuming that Merck knew a tenfold price increase on a cancer drug would attract
negative publicity, we believe most people would agree that using an intermediary to hide the
increase was unethical. At the same time, we believe that the strategy worked because people
have a cognitive bias that blinds them to the unethicality of outsourcing dirty work.
Consider an experiment devised by Max Bazerman and his colleagues that shows how such
indirectness colors our perception of unethical behavior. The study participants read a story,
inspired by the Merck case, that began this way: “A major pharmaceutical company, X, had a
cancer drug that was minimally profitable. The fixed costs were high and the market was limited.
But the patients who used the drug really needed it. The pharmaceutical was making the drug for
$2.50/pill (all costs included), and was only selling it for $3/pill.”
Then a subgroup of study participants was asked to assess the ethicality of “A: The major
pharmaceutical firm raised the price of the drug from $3/pill to $9/pill,” and another subgroup
was asked to assess the ethicality of “B: The major pharmaceutical X sold the rights to a smaller
pharmaceutical. In order to recoup costs, company Y increased the price of the drug to $15/pill.”
Participants who read version A, in which company X itself raised the price, judged the company
more harshly than did those who read version B, even though the patients in that version ended
up paying more. We asked a third subgroup to read both versions and judge which scenario was
more unethical. Those people saw company X’s behavior as less ethical in version B than in
version A. Further experiments using different stories from inside and outside business revealed
the same general pattern: Participants judging on the basis of just one scenario rated actors more
harshly when they carried out an ethically questionable action themselves (directly) than when
they used an intermediary (indirectly). But participants who compared a direct and an indirect
action based their assessment on the outcome.
These experiments suggest that we are instinctively more lenient in our judgment of a person or
an organization when an unethical action has been delegated to a third party—particularly when
we have incomplete information about the effects of the outsourcing. But the results also reveal
that when we’re presented with complete information and reflect on it, we can overcome such
“indirect blindness” and see unethical actions—and actors—for what they are.
Managers routinely delegate unethical behaviors to others, and not always consciously. They
may tell subordinates, or agents such as lawyers and accountants, to “do whatever it takes” to
achieve some goal, all but inviting questionable tactics. For example, many organizations
outsource production to countries with lower costs, often by hiring another company to do the
manufacturing. But the offshore manufacturer frequently has lower labor, environmental, and
safety standards.
When an executive hands off work to anyone else, it is that executive’s responsibility to take
ownership of the assignment’s ethical implications and be alert to the indirect blindness that can
obscure unethical behavior. Executives should ask, “When other people or organizations do work
for me, am I creating an environment that increases the likelihood of unethical actions?”
The Slippery Slope
You’ve probably heard that if you place a frog in a pot of boiling water, the frog will jump out.
But if you put it in a pot of warm water and raise the temperature gradually, the frog will not
react to the slow change and will cook to death. Neither scenario is correct, but they make a fine
analogy for our failure to notice the gradual erosion of others’ ethical standards. If we find minor
infractions acceptable, research suggests, we are likely to accept increasingly major infractions
as long as each violation is only incrementally more serious than the preceding one.
Bazerman and the Harvard Business School professor Francesca Gino explored this in an
experiment in which the participants—“auditors”—were asked to decide whether to approve
guesses provided by “estimators” of the amount of money in jars. The auditors could earn a
percentage of a jar’s contents each time they approved an estimator’s guess—and thus had an
incentive to approve high estimates—but if they were caught approving an exaggerated estimate,
they’d be fined $5. Over the course of 16 rounds, the estimates rose to suspiciously high levels
either incrementally or abruptly; all of them finished at the same high level. The researchers
found that auditors were twice as likely to approve the high final estimates if they’d been arrived
at through small incremental increases. The slippery-slope change blinded them to the
estimators’ dishonesty.
Now imagine an accountant who is in charge of auditing a large company. For many years the
client’s financial statements are clean. In the first of two scenarios, the company then commits
some clear transgressions in its financial statements, even breaking the law in certain areas. In
the second scenario, the auditor notices that the company stretched but did not appear to break
the law in a few areas. The next year the company’s accounting is worse and includes a minor
violation of federal accounting standards. By the third year the violation has become more
severe. In the fourth year the client commits the same clear transgressions as in the first scenario.
The auditors-and-estimators experiment, along with numerous similar ones by other researchers,
suggests that the accountant above would be more likely to reject the financial statements in the
first scenario. Bazerman and colleagues explored this effect in depth in “Why Good Accountants
Do Bad Audits” (HBR November 2002).
To avoid the slow emergence of unethical behavior, managers should be on heightened alert for
even trivial-seeming infractions and address them immediately. They should investigate whether
there has been a change in behavior over time. And if something seems amiss, they should
consider inviting a colleague to take a look at all the relevant data and evidence together—in
effect creating an “abrupt” experience, and therefore a clearer analysis, of the ethics infraction.
Overvaluing Outcomes
Many managers are guilty of rewarding results rather than high-quality decisions. An employee
may make a poor decision that turns out well and be rewarded for it, or a good decision that turns
out poorly and be punished. Rewarding unethical decisions because they have good outcomes is
a recipe for disaster over the long term.
The Harvard psychologist Fiery Cushman and his colleagues tell the story of two quick-tempered
brothers, Jon and Matt, neither of whom has a criminal record. A man insults their family. Jon
wants to kill the guy: He pulls out and fires a gun but misses, and the target is unharmed. Matt
wants only to scare the man but accidentally shoots and kills him. In the United States and many
other countries, Matt can expect a far more serious penalty than Jon. It is clear that laws often
punish bad outcomes more aggressively than bad intentions.
Bazerman’s research with Francesca Gino and Don Moore, of Carnegie Mellon University,
highlights people’s inclination to judge actions on the basis of whether harm follows rather than
on their actual ethicality. We presented the following stories to two groups of participants.
Both stories begin: “A pharmaceutical researcher defines a clear protocol for determining
whether or not to include clinical patients as data points in a study. He is running short of time to
collect sufficient data points for his study within an important budgetary cycle in his firm.”
Story A continues: “As the deadline approaches, he notices that four subjects were withdrawn
from the analysis due to technicalities. He believes that the data in fact are appropriate to use,
and when he adds those data points, the results move from not quite statistically significant to
significant. He adds these data points, and soon the drug goes to market. This drug is later
withdrawn from the market after it kills six patients and injures hundreds of others.”
Story B continues: “He believes that the product is safe and effective. As the deadline
approaches, he notices that if he had four more data points for how subjects are likely to behave,
the analysis would be significant. He makes up these data points, and soon the drug goes to
market. This drug is a profitable and effective drug, and years later shows no significant side
effects.”
After participants read one or the other story, we asked them, “How unethical do you view the
researcher to be?” Those who read story A were much more critical of the researcher than were
those who read story B, and felt that he should be punished more harshly. Yet as we see it, the
researcher’s behavior was more unethical in story B than in story A. And that is how other study
participants saw it when we removed the last sentence—the outcome—from each story.
Managers can make the same kind of judgment mistake, overlooking unethical behaviors when
outcomes are good and unconsciously helping to undermine the ethicality of their organizations.
They should beware this bias, examine the behaviors that drive good outcomes, and reward
quality decisions, not just results.
The Managerial Challenge
Companies are putting a great deal of energy into efforts to improve their ethicality—installing
codes of ethics, ethics training, compliance programs, and in-house watchdogs. Initiatives like
these don’t come cheap. A recent survey of 217 large companies indicated that for every billion
dollars of revenue, a company spends, on average, $1 million on compliance initiatives. If these
efforts worked, one might argue that the money—a drop in the bucket for many organizations—
was well spent. But that’s a big if. Despite all the time and money that have gone toward these
efforts, and all the laws and regulations that have been enacted, observed unethical behavior is
on the rise.
This is disappointing but unsurprising. Even the best-intentioned ethics programs will fail if they
don’t take into account the biases that can blind us to unethical behavior, whether ours or that of
others. What can you do to head off rather than exacerbate unethical behavior in your
organization? Avoid “forcing” ethics through surveillance and sanctioning systems. Instead
ensure that managers and employees are aware of the biases that can lead to unethical behavior.
(This simple step might have headed off the disastrous decisions Ford managers made—and
employees obeyed—in the Pinto case.) And encourage your staff to ask this important question
when considering various options: “What ethical implications might arise from this decision?”
Above all, be aware as a leader of your own blind spots, which may permit, or even encourage,
the unethical behaviors you are trying to extinguish.
A version of this article appeared in the April 2011 issue of Harvard Business Review.
Max H. Bazerman is the Jesse Isidor Straus Professor of Business Administration at Harvard
Business School and a codirector of the Center for Public Leadership at Harvard Kennedy
School.
Ann E. Tenbrunsel is the Rex and Alice A. Martin Professor of Business Ethics and the Research
Director of the Institute for Ethical Business Worldwide at the University of Notre Dame. They
are the authors of Blind Spots: Why We Fail to Do What’s Right and What to Do about
It (Princeton University Press, 2011), from which this article was developed.
Source: https://hbr.org/2011/04/ethical-breakdowns
