I’ve been interested in chatbots for a while, for a slew of reasons.
When people form emotional relationships with intelligent AI, it raises loads of philosophical and moral questions. Can one-sided relationships be fulfilling? How can they be abused? If text conjures voice and photo/video conjures human presence, are chatbots just extensions of these technologies? Does it mean anything that we form one-sided connections to those things, too?
As you can see, it gets thorny. And it’s not just theoretical.
One story from last year reported that a 14-year-old in Florida sadly took his own life after fostering an intimate relationship with a Character.ai chatbot modeled after Game of Thrones’ Daenerys Targaryen. This may be a fringe case, but the message is clear: these early iterations of chatbots signal Black Mirror futures, particularly for young and vulnerable populations.
But do things need to be so bleak? A recent Aeon article titled “Chatbots of the Dead” outlines a different vision for the potential uses of chatbots.
Authors Amy Kurzweil and Daniel Story argue that our thinking about chatbots has become overly dominated by gloomy sci-fi conceptions. One common fear is that we’ll become unhealthily attached to chatbots (Spike Jonze’s Her); another is that they’ll somehow develop a maniacal will to harm us (Alex Garland’s Ex Machina). Clearly those scenarios are overblown.
The authors interrogate one important implicit assumption behind our fear of unhealthy emotional attachment to AI: that chatbots are meant to function as stand-ins for lost loved ones. We expect chatbots to be designed to emulate humans as realistically as possible, and the authors argue that this expectation leaves us with a dilemma:
...either chatbots cannot live up to this standard, and therefore they are deceptive and defective, or they can live up to this standard, and therefore they will confuse, isolate or exploit those who use them.
In other words, a bad chatbot will be a horrible ersatz companion for people. Boo. And a good chatbot will deceive and exploit us to no end. Also boo.
But can we move past this dilemma? Can we reject this pernicious kind of realism in chatbot design? Kurzweil and Story think so.
The authors offer interactive theater as an example. In interactive theater, there is acting and convincing deception, but those elements only serve the art.
The authors write:
…the potential of these technologies would be better understood if they were thought of more like artworks – or, rather, like theatrical performances. Engaging with a chatbot is a lot like attending a participatory theatre performance. In these performances, audience members play active roles in the story.
In essence, we would build a chatbot using someone’s data, meaning all the textual materials that person produced in their lifetime: letters, online chats, published writing. That newly forged character would then be like an improv actor who has studied the person’s background and now performs in their voice.
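Just to make that concrete, here’s a minimal sketch of what assembling such a character might look like. Everything in it is my own invention rather than anything Kurzweil and Story specify: the folder of letters, the persona framing, and the `complete` stub standing in for whatever language model you’d actually call.

```python
# A rough sketch of a chatbot built from someone's own words.
# The model call is a placeholder; swap in whichever LLM API you prefer.

from pathlib import Path


def load_corpus(folder: str) -> str:
    """Concatenate the person's letters, chats, and published writing."""
    texts = [p.read_text(encoding="utf-8") for p in sorted(Path(folder).glob("*.txt"))]
    return "\n\n".join(texts)


def build_persona_prompt(name: str, corpus: str, max_chars: int = 8000) -> str:
    """Frame the model as an improv actor who has studied this person."""
    return (
        f"You are an improv performer portraying {name}. "
        f"Study the excerpts below and answer in their voice, "
        f"staying in character but never claiming to literally be them.\n\n"
        f"Excerpts from {name}'s own writing:\n{corpus[:max_chars]}"
    )


def complete(system_prompt: str, user_message: str) -> str:
    """Placeholder for a call to a chat model with a system prompt and a user turn."""
    return "(model response goes here)"


if __name__ == "__main__":
    corpus = load_corpus("grandma_letters/")  # hypothetical folder of .txt files
    persona = build_persona_prompt("Grandma Rose", corpus)
    print(complete(persona, "What did you make for Sunday dinners?"))
```

The one deliberate choice here is the instruction to stay in character without ever claiming to literally be the person, which is the performative framing the authors are after: an actor, not a replacement.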
A Civil War soldier at a historical reenactment would be one character mold for this. Another would be an Elvis impersonator. And, importantly, we might move away from factual and historical accuracy as the reigning objective for designing these characters.
After all, it would be silly to come up to an Elvis impersonator on Hollywood Boulevard and say, “Hey, you! You’re not actually Elvis! Why are you trying to trick people?!”
That’s because we understand the impersonator in a performative context. And his performance is meant to provide a certain experience: passing entertainment for pedestrians and, for a small tip, a cute selfie to memorialize your trip to the Walk of Fame.
And maybe we can criticize that impersonator for not delivering well on those offerings, say if he twists his face into a grimace in the selfie. “Hey, that’s a horrible photo!” you might say. “Give me a better face!”
Likewise, when a biopic depicts a historical figure, we might have aesthetic objections. We might complain that the story is too sappy or romanticized, or that it glosses over the person’s harsh temperament in real life. We might complain that it leaves out a side of that person we think is essential to their story. Implicit in those objections is an acceptance that films prioritize effective storytelling over perfectly accurate storytelling. The medium is not a perfect reflection of reality but an art form that moves us or instructs us in some meaningful way.
Filmmakers know they can’t possibly capture the full scope of someone’s life in two hours. What biopics do instead is find particular angles on the story that resonate with audiences at a given moment.
For example, in an era of tech optimism, a Steve Jobs biopic might focus on his heroic achievements pioneering Apple computers and the iPhone, but in an era of tech backlash, his biopic might focus on his abusive and sociopathic tendencies as a boss and romantic partner. The point is that filmmakers make creative decisions when depicting characters, and to criticize those decisions on points of realism is just missing the game.
Similarly, we needn’t focus on maximizing accuracy and realism for chatbots. Instead, chatbot creation might be an exercise in personal and collective imagination, and, like artistic products, chatbots should enable new kinds of exploration and conversation.
Kurzweil and Story drive these possibilities home:
For example, a chatbot could be designed to speak as a spiritual medium channelling the deceased from a spiritual realm in order to emphasise the separation of death and impart a sense of mysticism to the imaginative experience…Chatbots of public figures might inspire stylised voices of the deceased, eg, caricatures that, like Elvis impersonators, magnify idiosyncratic qualities for the purpose of parody, satire or celebration…You might create a bot trained on your childhood journals, so you can converse with your past self. An abusive parent might be represented at their most rageful to help process childhood trauma in a therapy session. An anxious person might be represented in a way that allows users to read their worried thoughts to promote empathy and understanding. A chatbot might be trained on all available family records to speak in the voice of your ancestors. Another might be trained on all the known writing from a particular ancient village. Yet another might be trained on all the works of a prolific lost writer: Plato, James Baldwin, Ursula K Le Guin, or a group of writers from a particular point in history: the Beats, the Transcendentalists, the members of OuLiPo. The possibilities are endless. This technology can help us appreciate the composite nature of the characters we imagine in fictional worlds and the different purposes remembrance might play in our lives.
In other words, building a chatbot would be a process, and a product, of creative remembrance, joining the ranks of media we don’t question today, like the film biopic. We can acknowledge that there are a million ways to write a biography or make a biopic from the same giant set of data about a person. The same rings true for chatbots.
And yes, there might be a new interactivity to chatbots that makes this an imperfect analogy, but the core of the argument still stands.
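To put the many-biopics-from-one-archive idea in chatbot terms, here’s a toy illustration. The excerpt and both framings below are invented; the point is only that the framing, like a biopic’s angle, is a creative decision rather than a fidelity contest.

```python
# Toy illustration: one corpus, two creative framings of the same person.

EXCERPTS = "Dear June, the garden froze again, but the tomatoes will forgive us..."

FRAMINGS = {
    "faithful archivist": (
        "Answer only with things supported by the excerpts, "
        "and say plainly when you don't know."
    ),
    "stylized caricature": (
        "Exaggerate the writer's quirks for affectionate parody, "
        "the way an Elvis impersonator plays up the jumpsuit and the drawl."
    ),
}


def persona_prompt(framing: str) -> str:
    """Combine a creative framing with the same underlying source material."""
    return f"{FRAMINGS[framing]}\n\nSource excerpts:\n{EXCERPTS}"


for name in FRAMINGS:
    print(f"--- {name} ---")
    print(persona_prompt(name))
    print()
```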
To make another analogy, the first experiments in photography were made on chemical plates, and their goal was to represent reality—a view of rooftops outside a window, or, in film, a hurtling train entering a station.
And then we discovered that the camera could be used to bring fantasies and fiction to life — and even that the camera might have been BETTER suited for fantasy and fiction.
Likewise, ChatGPT is cool, and we can keep imagining that it’s getting closer to replacing our best friend or our therapist or a lover, but ultimately it shouldn’t be any of those things. It should be a piece of art. And maybe it’s just me, but I prefer to have my art and my realism served separately.
The idea that ChatGPT *is* art reminds me of the view that the "internet personality" is art. I like it. https://peterclarke.substack.com/p/is-internet-personality-a-new-form