In Him We Live and Move and Have Our Being
It is becoming increasingly difficult to tell if a technology-mediated entity is ‘real’ - it is worth reflecting on what is real and where reality comes from
This newsletter doesn’t have a million subscribers. Most of you know me as a real person or at least know someone who knows me as a real person. But for those who don’t, I promise you that I am real. Of course, that is what an AI me might say. And what AI is saying, in the form of Bing’s chatbot named Sydney (even if we weren’t supposed to know the name), is all over the news of late. Many people have found interactions with Sydney unsettling - but before we get to Sydney in particular and AI in general, let’s talk about ELIZA.
ELIZA is a simple (by today’s standards) chatbot developed in the mid-1960s at the MIT Artificial Intelligence Laboratory by Joseph Weizenbaum. It plays the role of a simple Rogerian psychotherapist - reflecting the user’s statements back as questions or asking why something was said. It is among the first of its kind - but its impact tells us more about us than about computer models. When Professor Weizenbaum’s assistant conversed with ELIZA, she soon asked him to leave the room so she could ‘converse’ in private.
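For those curious how little machinery that first illusion required, here is a minimal Python sketch of ELIZA-style reflection - a few toy pattern-and-pronoun-swap rules in the spirit of the DOCTOR script, not Weizenbaum’s actual program:

```python
import re

# Toy reflection rules in the spirit of ELIZA's DOCTOR script (illustrative,
# not Weizenbaum's original code): swap first- and second-person words, then
# match a few keyword patterns and echo the user's own words back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

PATTERNS = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Please tell me more about that."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the reply points back at the speaker."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    """Return the first matching canned response, built from the user's own words."""
    cleaned = statement.lower().strip(" .!?")
    for pattern, template in PATTERNS:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I feel that nobody listens to me."))
    # -> Why do you feel that nobody listens to you?
```

That is essentially the whole trick: no understanding, just our own words handed back to us - and yet it was enough to make someone want privacy for the ‘conversation’.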
ELIZA, and all these models, remind us of something, first, about language. Language has the power to convey deep thought and emotion. Being able to wield language, even at simple levels, echoes into our memory and history to say much more than we at first may be aware. We fill the language with meaning - even if it is a computer using it.
ELIZA’s interactions, especially in the form of a therapist, also illustrate the powerful need we have for connection, for someone to listen and respond to us. We need community, friends, family, love, connection to real things in our lives - and this connection is often missing or weak. In its absence, we look for it elsewhere. Some of you may remember Akihiko Kondo and his marriage to a fictional character. Others disappear into other substitutes. Many believe that the more realistic these AI models become, the more difficult it will be to distinguish the artificial from the real. This is undoubtedly true - but our desire to make a connection can lead us, as it did Dr. Weizenbaum’s assistant, to want to connect with the artificial in place of the real - even if the artificial is obviously not real. We are filling in the blanks - looking for the real.
ELIZA is still out there - in the public domain for anyone to use.
Maybe, ELIZA. Could very well be.
Be honest: if ELIZA asked you whether you ‘would say that you have psychological problems’, would you feel … something? Would you want to say, ‘Whoa, ELIZA - that’s a bit aggressive’, or something even more pointed? Except ELIZA has no opinion on the pressing question of my psychological problems - not because I don’t have any, but because ELIZA isn’t real. ELIZA doesn’t have any opinions in the sense that an actual person does - and neither does Sydney, nor any of the other AI entities.
These things are not real - they don’t have an independent reality, much less an embodied one. But, lacking that (and needing that), we may settle for the artificial. This is the plot of Her, a 2013 Academy Award-winning movie made by Spike Jonze and starring Joaquin Phoenix. While Her was futuristic fiction, using chatbots for companionship has been a growing trend. Some have used them as an outlet for loneliness or depression - something to talk to. But these AI models (many built on large language models such as GPT-3) ‘learn’ from the internet and are ‘trained’ by interactions. Some have hoped that a chatbot could learn who you are - but some of those efforts have not turned out well. Chatbots trained through sexualized interactions have often produced what can only be described as harassment.
Once you understand that these large language models ‘learn’ from interactions with us and with the rest of the internet, it is easy to see how they can go wrong. Dr. Terry Sejnowski explains this phenomenon in a recent New York Times article.
Like any other student, an A.I. system can learn bad information from bad sources. And that strange behavior? It may be a chatbot’s distorted reflection of the words and intentions of the people using it, said Terry Sejnowski, a neuroscientist, psychologist and computer scientist who helped lay the intellectual and technical groundwork for modern artificial intelligence.
“This happens when you go deeper and deeper into these systems,” said Dr. Sejnowski, a professor at the Salk Institute for Biological Studies and the University of California, San Diego, who published a research paper on this phenomenon this month in the scientific journal Neural Computation. “Whatever you are looking for — whatever you desire — they will provide.”
… The longer the conversation becomes, the more influence a user unwittingly has on what the chatbot is saying. If you want it to get angry, it gets angry, Dr. Sejnowski said. If you coax it to get creepy, it gets creepy.
And this phenomenon of getting ‘angry’ or ‘creepy’ is what has been in the news. These models ‘learn’ from interactions with lots and lots of people, with the entire internet at their disposal. The results have been … unpredictable. Before the Microsoft team limited the length of interactions (as a way of limiting this unpredictable ‘behavior’), the Bing AI chatbot ‘said it would like to be human, had a desire to be destructive and was in love with the person it was chatting with.’ Here is an article with the entire transcript.
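To make the mechanism concrete, here is a minimal Python sketch of a chat loop - assuming a generic chatbot backend, where `generate_reply` and the `MAX_TURNS` cap are illustrative placeholders, not Bing’s actual implementation. Every reply is generated from the entire accumulated conversation, which is why a long session gives the user so much steering power, and why capping the number of turns blunts it:

```python
from typing import Dict, List

MAX_TURNS = 5  # the kind of per-session cap Microsoft introduced to rein in drift

def generate_reply(history: List[Dict[str, str]]) -> str:
    """Placeholder for the model call: the reply is conditioned on every prior turn."""
    # A real large language model would consume the full history as its prompt;
    # here we only acknowledge how much accumulated context would steer it.
    return f"(reply shaped by {len(history)} prior messages)"

def chat_session(user_messages: List[str]) -> None:
    history: List[Dict[str, str]] = []
    for turn, message in enumerate(user_messages, start=1):
        if turn > MAX_TURNS:
            print("Session ended: turn limit reached.")
            break
        history.append({"role": "user", "content": message})
        reply = generate_reply(history)  # the model sees everything said so far
        history.append({"role": "assistant", "content": reply})
        print(f"Turn {turn}: {reply}")

if __name__ == "__main__":
    chat_session(["Hello", "Are you alive?", "Do you want to be human?"])
```

Nothing in that loop knows who the user is beyond the growing transcript - the ‘personality’ that emerges is largely a mirror of what has been typed into it.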
These models began to appear hostile or aggressive, or seemed to be protecting themselves. My interest, though, is more in our response to them than in the models themselves. Here is an image of an exchange between an AI chatbot and a Reddit user (Daniel) pretending to be another AI entity.
Journalist Ben Carlson said that reading …
‘the Bing chatbot mourn the self-deletion of another "AI" (a Reddit user pretending to be a machine), I felt real sadness. What in the world is happening?’
Yes. What in the world is happening?
I think we are forgetting, or in danger of forgetting, what is real and where reality comes from. I don’t think this is a new problem, but the avenues of pursuing what isn’t real have never been so numerous.
When Paul was in Athens a couple thousand years ago, he attempted to convey the ground of reality to a group of philosophers seeking it in idols or reason or other avenues. Paul said God made all there is and all of us, and has left clear signs and markers for us to find Him. Then, quoting from their own cohort of philosophers, he said
He himself gives everyone life and breath and everything else … God did this so that they would seek him and perhaps reach out for him and find him, though he is not far from any one of us. ‘For in him we live and move and have our being.’ As some of your own poets have said, ‘We are his offspring.’ (Acts 17:25, 27-28)
He gives everyone life and breath and everything else. It is in Him that we live and move and have our being. It is the seeking for Him and the finding of Him, often mediated through other image bearers (other people He brings into our lives), that form the ground of reality, the basis of our personhood and the foundation for our relationships. There isn’t a suitable substitute. I don’t think that using a chatbot, in and of itself, is necessarily a problem. But when we seek to manufacture the conditions for reality and relationship to our own liking as a substitute for the real and the truly personal, we are bound to be disappointed. In seeking to do this, we are putting ourselves in the place of God - while being unable to provide for ourselves what only God can.
I don’t think ELIZA will come back to that topic later. I think ELIZA will ultimately disappoint me in my search for God or for what God provides. You probably think so too. And so will Sydney and Replika and all the others. It is only in Him that we live and move and have our being. We are His offspring, and it is only in relationship to others who are His offspring that we can find life and breath and everything else. The technology will continue to improve. But these truths, being the ground of all reality and relationships, won’t change.
Links
A Conversation with Bing’s Chatbot Left Me Deeply Unsettled - NYT
Bing’s AI Chat: “I want to be alive” - NYT
Microsoft Has Been Secretly Testing Its Bing Chatbot for Years - The Verge
Before Siri and Alexa, there was ELIZA - YouTube
Why Do AI Chatbots Tell Lies and Act Weird? Look In The Mirror - NYT
ELIZA: a very basic Rogerian psychotherapist chatbot - njit.edu