Stories with a Moral Blueprint – part 7 of 8

Truth in Storytelling

When I wrote the first edition of The Story Factor twenty years ago, I began with the idea that people don’t want more information. They want faith in you and your positive intentions. I never suspected that two decades later we’d be discussing an explosion of stories that intentionally undermine this faith. Without the conventional power of weaponry to achieve their goals, certain groups have learned to harness the twin powers of stories and technology. These weaponized stories persuade neighbors to attack each other as enemies. They undermine trust and often trick us into acting against our own best interests. Now that technology amplifies the frequency and reach of malicious storytelling, its power to destroy social cohesion is more evident than ever. These results call on all storytellers to double-check that the stories we tell do not erode our ability to balance competing needs.

Big T Truths make stories come alive.

The good news is that stories only get better when a teller digs deep to reveal capital “T” Truths that engage, describe, and explore real-life ethical issues. I knew a retired preacher, now dead, who complained to me that writing the history of his church only produced a flat, boring story. I innocently suggested, “Then you aren’t telling the whole truth, because in my experience telling the whole truth makes every story more interesting.” He responded, “You mean I should tell about the S&M affair I had with a female preacher from the next town over?” Stunned, all I could say was, “Well, it already sounds more interesting.” It’s an extreme example, and no, his affair didn’t make it into the church’s history. But I share this story in the hope it will help you remember that withholding truths to control a narrative is a recipe for flat stories. Interesting stories take on ethical dilemmas and paradoxical truths, and they lift our gaze from the transactional to the transcendent. Sanitized stories are boring.

Digital storytellers now seem to be rediscovering this fertile ground after initial attempts to mechanize, accelerate, and simplify storytelling produced lackluster results. One approach I admire provides a personal Bitmoji avatar that illustrates real-life paradox with image-based stories we can drop on top of photos or video to add more meaning. For instance, one avatar shakes off fear before walking across hot coals, then celebrates for a very short while before his or her feet burst into flames. The “damned if you do, damned if you don’t” dilemma is a paradox we have all experienced. Drop this Bitmoji sequence onto a wedding or graduation photo on Snapchat, and the reminder that joy and suffering are usually a package deal can make us feel more connected.

Chatbots, on the other hand, have trouble with paradox and universal dilemmas. From the get-go, I’m wary of the ethical implications of designing a bot with the precise goal of tricking humans into believing a machine is also human. Even more worrisome is the idea that this trick is often achieved by teaching humans to think so much like machines that we can’t tell the difference. Chatbots tend to simplify paradox into binary options that discourage any inquiry that might undermine the bot’s programmed goals. Natural language processing can obviously be automated. But how could we possibly develop a machine that won’t oversimplify moral paradoxes? It’s not surprising that an early attempt, Microsoft’s Twitter chatbot Tay (“thinking about you”), quickly learned to maximize response rates with racist, sexist, and Nazi-sympathizing posts.

From the beginning of time, humans have recorded organic wisdom with stories to guide real-life personal choices. Today it seems that a search for organic wisdom, or even a road less traveled, is blocked by ever-growing arrays of algorithmic chutes and ladders designed to lead us down only the roads that are profitable to the road builders. (As opposed to the original Snakes and Ladders game from the second century BC, which illustrated karma by representing good deeds as ladders and evil deeds as back-sliding snakes.) This algorithmic distortion means you can search the internet for advice on treating any chronic disease, and the solutions you find are typically controlled by whichever group profits most from your misfortune.

It’s no accident that creative people increasingly protect a certain amount of screen-free time to experience nature and cultivate transcendent perspectives through meditation and ritual. These people don’t hate technology. They simply see solid advantages to ensuring their brains can still sense transcendent truths as well as invent rational equations. Ursula Le Guin, whose stories mined real-life conundrums, once said:

“Commodified fantasy takes no risks: it invents nothing, but imitates and trivializes. It proceeds by depriving the old stories of their intellectual and ethical complexity…Profoundly disturbing moral choices are sanitized, made cute, made safe. The passionately conceived ideas of the great storytellers are copied, stereotyped, reduced to toys, molded in bright colored plastic, advertised, sold, broken, junked, replaceable interchangeable.”

Ancient Arabic tales of genies (jinn) feature tricksters with supernatural powers who grant extravagant wishes purely to teach a character to be careful what he wishes for. Now that we have manufactured supernatural technologies that reach exponentially larger audiences and grant extravagant wishes at the press of a button, we are relearning the hard way to be careful what we wish for.

Certainly there are those who will argue that business is not responsible for keeping moral stories alive. But one thing I know deep in my bones from more than twenty years of teaching stories in the business environment is that facing a moral conflict and taking a stand builds trust much faster than sidestepping these issues. In order to stand out, your stories need to show what you stand for.

It is emotion, not numbers, that keeps us engaged, drives us to protect the weak, funds philanthropy, and fuels our search for justice, equity, and meaning. Just because it is impossible to quantify the long-term payoffs of moral actions doesn’t mean they aren’t worth the investment. And just because you want to make a profit doesn’t mean you need to engage in the dark arts to do so.

Tomorrow: Magic School for Storytellers

Excerpt from Chapter 12, 3rd ed. of The Story Factor (2019)


3 thoughts on “Stories with a Moral Blueprint – part 7 of 8”

  1. Annette –

    I used to think that all tools were amoral and that only the person who wields a tool can impart meaning to its use. I think that is still true, but there is more to it than that.

    I think you are hitting on something McLuhan talked about long ago: “The medium IS the message.” When technology changes the way people see the world around them, it affects their behavior. So even though a technology is an amoral tool, it affects society in deep ways regardless of whether its wielder’s intent is good or malicious. It IS the message. And it is up to each of us to choose to use it well or poorly.

    It is good that you shine a light both on those who use it maliciously and on the undeniable fact that the existence of technology itself has an impact on storytelling, one that requires careful use of the tool to avoid unintended consequences.

    I believe that the law of unintended consequences is as immutable as the law of gravity and the laws of electromagnetism. We would all do well to pay more attention to it. A lot of what you describe, beyond malicious intent, is the work of the law of unintended consequences. It is worthy of attention.

    Steve

    1. Then I think you will like today’s post. I’m not sure that tools embedded with utilitarian logic should be considered amoral. I say “should” because the unintended consequences are easy to predict at this point. The Boolean logic that underpins the way computers make decisions cannot self-correct when that logic promotes decisions that exploit resources (previously known as humans) to maximize gain. The algorithms in machines are teaching people to think like machines. Just like someone with ultimate privilege, these machines don’t have enough experience with suffering and pain, so they can’t and don’t make moral decisions. Pain is just not on their radar. At this point not enough people are overriding machine thinking with moral reasoning, so treating certain kinds of logic as leading to immoral decisions is a caution warranted by evidence.

  2. Annette –

    I think you nailed it in today’s post. Just because you can, doesn’t mean you should.

    I believe the great failing in over-reliance on computer-like thinking, AI, or whatever you call it is not a failing of the tool. It is a failing of the programmer who limited its map (Peck’s internal view of the universe and your place in it) and provided rules to be followed regardless of unintended consequences. It is a mirror of our own arrogance and short-sightedness that we conveniently refuse to see. It’s not a coincidence that so many of the great Greek tragedies were about the hubris of their heroes.

    Or as Walt Kelly so aptly stated (as he paraphrased Commodore Perry after the Battle of Lake Erie), “We have met the enemy, and they are us (ours).”

    Our imperfections manifest in our choices (Dumbledore), and stories are among those choices.

    Your message of caution, with its call to examine the potential impacts of our actions beyond ourselves, is as profound as the lessons of Aeschylus and Euripides and Pythagoras (to throw in a random math genius).

    Steve
