Big fines paid by businesses that break the law provide no incentive for companies to change the cultures that lead to illegal activity. Even large fines and bad publicity are often viewed as a cost of doing business rather than a deterrent. And a common federal tactic, so-called deferred prosecution, is in effect a get-out-of-jail-free card for executives.
Virtually every 21st-century business scandal is reducible to a morality tale: a technology that allows us to do things we couldn’t do before, coupled with major institutional failures enabled by corporate leaders’ acts of omission and commission.
Consider the major, self-inflicted crises at Wells Fargo, where two million accounts were opened without customers’ knowledge; at Volkswagen, where emissions data were falsified; or at News Corp, where editors illegally hacked cell phones to publish private information.
These are different from the kind of product-safety scandals we grew accustomed to in the 20th century. And yet most business schools and leadership development programs still focus on those. Consider the Columbia and Challenger space shuttle disasters. These are still two of the most popular case studies taught in business schools, and because of them, we believe we know why organizations self-inflict crises. Countless executives and MBAs have studied the key lessons, learning about individual and institutional biases that warp our world views. They learn that the absence of psychological safety keeps team members from disagreeing with dominant opinions. They learn that organizational failures result from rigid reporting lines, “one right way” problem-solving, cultures that shoot – or specify unreasonable standards for – the messenger, and restrictive communications protocols. These lessons are valuable, but incomplete for today’s world.
Although the public might like to see accused executives wind up behind bars, they rarely do, because the U.S. Department of Justice finds it easier to prosecute corporations than the people who run them.
What incentive would work to change corporate behavior? The threat of prison. Executives accused of white-collar crimes fear prison, and they fear it mightily. They would pay any amount of money, do anything, to avoid going to prison. So prison does have a major deterrent effect.
Until the late 1990s, the United States, along with most of the developed world, prosecuted individuals — not corporations — in cases of white-collar crime. That had been the prevailing view of the Department of Justice for decades and it made a good deal of sense. Corporations are not robots that go out and commit crimes by themselves.
The practice changed for a variety of reasons, but the driving factor was cost. Building a criminal case against a high-level executive is a lengthy and complex process. It can take several years of patiently flipping low-level members of an organization to compile enough evidence to prosecute the people at the top. There’s always a danger that the low-level types will lie to save their necks, and there’s no guarantee that the work will result in a prosecution, let alone a conviction.
But prosecuting a corporation is faster and cheaper. Corporations can’t be in a state of perpetual war with the government. The stakes are too high. You know in advance that sooner or later they will come to terms and it will never go to trial. Plus there’s a bonus for ambitious prosecutors: If packaged right, prosecution of corporations can be politically appealing.
As prosecutors changed strategy, they began to use deferred prosecution. As a 2015 article in the New York Review of Books recounted, deferred prosecution came into vogue in the 1930s as a way to help juvenile offenders. Prosecutors could defer prosecution of a juvenile if the young offender agreed to enter a rehabilitation program. Offenders who completed the program would not be charged.
By the late 1990s, prosecutors agreed to defer corporate prosecution if the business agreed to pay a fine and to adopt various measures designed to rehabilitate the company’s culture.
Deferred prosecutions averaged 35 a year between 2007 and 2012, the last year for which data are available. Crimes for which prosecution was deferred included felony violations of the securities laws, banking laws, antitrust laws, anti-money-laundering laws, food and drug laws, foreign corrupt practices laws, and numerous provisions of the general federal criminal code.
Deferred prosecution of white-collar crime has been used for some 20 years, more than enough time to see if it has served as a deterrent to crime or encouraged positive changes in corporate culture. It hasn’t.
More than half of the people who committed serious fraud offenses in the last few years were recidivists. That figure suggests that the practice of going after companies but not individuals has not changed the corporate culture in which most white-collar crimes are committed. But even if deferred prosecution were effective, it would not excuse failing to pursue the individuals, who are, after all, the ones who committed the crimes.
Digital technologies today enable individual employees to do much more than they could before. Mid-tier executives, to whom far more decision-making power has devolved than 25 years ago, drive this work. The reasons vary by organization, but they are often rooted in the cultures that the ease and openness of information sharing have spawned. These executives lead teams in which globally dispersed people from multiple organizations collaborate on critical tasks.
But in most companies, despite the free flow of exchanges, these executives still lack information they need, can’t communicate with team members in real time, or can’t foresee the implications of key decisions. Inevitably, one of them pulls the trigger when something goes wrong – whether it is an inability to design to needed standards at Volkswagen or the opening of unauthorized accounts at Wells Fargo. They are blamed because they can easily be blamed. More than 5,000 midlevel or junior people were fired at Wells Fargo after the truth came out.
Have we rethought how we work in a digital age when work increasingly requires large doses of unseen discretionary effort? Have we redesigned processes and structures to surface problems before these become crises? Have we allowed the free flow of key information to distributed decision makers? Have we created collaborative, learning-focused cultures? In most companies, we have not.
When a crisis unfolds, we are now quick to say, as General Duane Deal said of the Columbia disaster, that “the institution allowed it.” And yet we have been too hesitant to add the necessary phrase: “and top leaders enabled it.” The motive force behind institutional failure is leadership failure. The failure may be unintended, but that doesn’t exculpate individuals who spend their adult lives seeking the power and prestige of top positions.
Top leaders are enabling the current failures in two ways. First, though they speak of ecosystems and a VUCA world, they fail to rationally consider the implications of these realities for the day-to-day jobs of their mid-tier executives. They make the mistake of thinking 20th-century human organizations can thrive amidst 21st-century technology. They don’t even recognize that the slate of questions posed above is relevant, even critically important. Second, they don’t consider at a human level how their stated strategic intents shape the acceptable ethical boundaries for those who must turn those intents into reality.
In the highly interconnected digital world, it is very hard to rationally consider the many factors that affect any event. The difficulties are magnified when the factors change unpredictably and with great speed, giving rise to precious few “one right answer” situations and many “no good answer” ones. Given the archaic structures and processes, and without repeated, clear guidance on “what we don’t ever risk,” is it any surprise that decisions about ambiguous options subsequently turn out to be ethically compromised?
While an editor “pulled the trigger” to illegally hack the mobile phone of a kidnapped child, Rupert Murdoch enabled the decision. He didn’t set ethical boundaries in a scoop-focused media market, and he hired executives who didn’t set policies and procedures to preclude such acts. Indeed, he rehired an executive cleared of criminal wrongdoing, signaling that her ethical and managerial failures didn’t matter. While mid-tier executives and engineers pulled the trigger to design Volkswagen engines that responded falsely to emissions tests, Ferdinand Piëch’s and Martin Winterkorn’s win-at-all-costs performance demands and the absence of appropriate procedural safeguards enabled – even encouraged – them to do so. At Wells Fargo, a culture and a warped incentive system created by top executives enabled malfeasance. That didn’t stop CEO John Stumpf from blaming employees who didn’t get it right, or CFO John Shrewsberry from blaming underperformers. Stumpf was forced out, but since neither Mr. Shrewsberry nor Chief Administrative Officer Hope Hardison was, the seeds for future crises have been left undisturbed.
Avoiding further self-inflicted crises – and the human damage they cause – will require more attention to both institutional norms and ethical leadership. That responsibility ultimately lies at the very top. When they hire CEOs, boards of directors must make ethics the deal-breaking criterion. CEOs and their direct reports must rethink not just how to compete using digital technology but, more importantly, how work should be done in a world mediated by digital technology.