Lessons from The Brain Center at Whipple's
Technology can be a barrier to ethics. We need leaders to consider the humanities just as deeply as the sciences as we develop new technology.
“I think technology really increased human ability. But technology cannot produce compassion.” — Dalai Lama
What is it about technology that makes us think we’re inoculated from the errors, pains, and foibles of what it means to be human?
I’m a big fan of the old Twilight Zone series, and I was recently watching it on Hulu. One of the episodes in the final season is called “The Brain Center at Whipple’s,” and while it was made in the mid-1960s, it is remarkably timeless.
The episode opens with Mr. Whipple previewing a film with his Chief Engineer: an update that Whipple is making to the board of directors. He assesses the W.V. Whipple Manufacturing Corporation by the numbers: 283,000 personnel, 13 plants, and 34,827 people in one plant.
He says, “At Whipple's, we only take forward steps,” and proceeds to introduce the X109B14 Automatic Assembly Machine, proudly announcing that it will replace 61,000 jobs and 73 machines, and save the company $4 million in employee insurance, welfare, hospitalization, and profit-sharing. He adds that the entire company will be automated within six months, run from a so-called “Brain Center” filled with similar machines.
He says all of this without emotion or regret, much to the chagrin of Hanley, the Chief Engineer, who does not approve of “a lot of men out of work.” He cautions Whipple about taking men's livelihood and reason for being away from them, and about Whipple's lack of goodwill and compassion through his “heartless manipulation of man and metals.”
Whipple’s response is one rooted in numbers: “I am here to provide efficiency. That is my only concern.” And in Whipple’s mind, efficiency only comes from machines.
The episode ends with the board removing Whipple from his job, due to overexertion and the inability to make sound judgments. Cut to his office, where a robot is now handling all of his duties.
Rod Serling reminds us that this is “the historical battle between flesh and steel. Between the brain of man and the product of man's brain.”
“Educating the mind without educating the heart is no education at all.” — Attributed to Aristotle
Ethics, Efficiency, and Empathy
This isn’t the first example of humans grappling with technology. The term “robot” was first introduced in the 1920s in the play “Rossum’s Universal Robots” by Czech playwright Karel Čapek. The Czech word robota means “forced labor.”
And robots were certainly a central theme in Fritz Lang’s 1927 silent film Metropolis, about the conflict between man and machine and lives lost in the process. The film ended with an intertitle that read: “The mediator between the head and the hands must be the heart.”
In Westworld on HBO, we witness android hosts at an American Old West theme park that are programmed with narratives and interact with each other and guests. Meanwhile, guests can do whatever they want to the robot hosts, including murder and rape.
What is it about technology that makes us distance ourselves from the laws and customs that have been established for thousands of years?
Part of the answer lies in the drive within Mr. Whipple, who was only concerned with efficiency. Throughout the history of technology, tools, however crude they were, have allowed humans to be more efficient. Stone-tipped spears were more effective at hunting. The wheel allowed us to move large objects more easily. The machine gun and chemical weapons made warfare much more deadly.
In every case, technology removed us one step or more from the task at hand: we didn't risk being maimed by a wild animal during a hunt, construction became less back-breaking, and we didn't have to kill with our bare hands.
But that distance created a barrier of sorts. Because we aren't viscerally connected to the pain being inflicted, we’re less likely to feel the empathy that Whipple's chief engineer noted was missing.
Fast forward to the present day. With artificial intelligence, mounds of data, the Internet of Things, and autonomous everything, we ostensibly have more technology at our disposal than ever before. As a result, we're experiencing a crisis of ethics that seems to take on a daily drumbeat:
Elizabeth Holmes ran a massive fraud scheme at Theranos, keeping people in the dark about the true nature of the blood test she supposedly invented. The lie was perpetuated to the point that Walgreens signed a major deal with the company, and investors, blinded by their own greed, watched their paper worth balloon. Even a board member like George Shultz chose to protect his investment rather than believe his own grandson, the whistleblower behind the downfall.
The Securities and Exchange Commission sued the former CEO of Volkswagen for the “massive fraud” behind the diesel emissions scandal. With a simple bit of software, Volkswagen made emissions appear cleaner during testing while leaving vehicles as dirty as ever, even as drivers believed they were buying cleaner cars.
And Facebook. Oh, Facebook. They’ve used half-hearted apologies as their strategy for too long. The Cambridge Analytica scandal brought even more scrutiny, as did their inability to curtail the spread of the Christchurch shooting video. Facebook has made it clear that they continue to worship at the altar of the golden calf rather than try to protect users. As Scott Galloway said on the Pivot podcast, “Have any two individuals done more damage while making more money than Mark Zuckerberg and Sheryl Sandberg?”
In every one of these instances, technology is at the core. But here’s the thing: the technology isn’t to blame.
In each of those cases, a human made the decision to say, “I have a choice between right and wrong. And I’m choosing wrong.”
The technology just made it easier for them to accept what they thought: that they weren’t causing direct harm.
I don’t think it’s any coincidence that three books of this sort were released within weeks of each other:
Shiv Singh and Rohini Luthra’s book Savvy: Navigating Fake Companies, Fake Leaders, and Fake News in the Post-Trust Era addresses this head-on by looking at how technology has isolated us and engendered a dearth of trust in the marketplace.
Amy Webb’s latest book The Big Nine: How the Tech Titans & Their Thinking Machines Could Warp Humanity deals with the unintended consequences that we're dealing with, following the implementation of vast amounts of online technology and artificial intelligence.
And Kate O’Neill’s Tech Humanist: How You Can Make Technology Better for Business and Better for Humans gets to the core of what's needed: better-informed decisions based on how people are affected.
Is it any wonder that we’re trying to wrap our heads around this lack of humanity in technology right now? The engineers and programmers who have contributed to this phenomenon are brilliant technologists in their own right.
But any undertaking that affects such a mass of humanity, whether it’s the 283,000 employees of the Whipple Manufacturing Corporation or the 2.3 billion people who use Facebook’s services, needs a leader who understands how humanity may be impacted by decisions.
Here at Timeless & Timely, we like to consider timeless wisdom: putting things in perspective and reflecting on how similar struggles were met in the past. It’s born of my own degree in classics and a collection of Great Books of the Western World.
And that’s what’s missing in a technology-heavy culture: the humanities, and with them a fundamental understanding of how human beings will react to the technologies introduced to them.
To study humanities is to understand humans, as flawed as we are. From head to heart. And we need technology that accounts for both.
Thanks, and I’ll see you on the internet.