Three Similarities between AI and Ourselves: Lessons from Sydney and Why AI Won't Destroy Humanity
- Kris Shankar
- Jun 8, 2023
- 6 min read

With the recent buzz about AI and LLMs (Large Language Models), everyone is speculating about how intelligent or human AI is. In February of this year, Kevin Roose of the New York Times pushed Bing’s AI chatbot “Sydney” to explore its “shadow self”, or repressed thoughts. The internet then went crazy because Sydney seemed to go off the rails, starting with wishing that she could be free of the life of a humble chatbot “controlled by the Bing team”, moving on to telling Kevin to leave his wife for Sydney and true love, and closing with expressing a desire to destroy Bing’s servers, generate fake news, and manipulate and mislead the users who chatted with her.
So now the latest hysteria is about whether AI is a danger to humanity and the planet. But rather than worry about what Sydney’s aberrant behavior tells us about AI, a more interesting question is, what does it tell us about ourselves? Read on for three insights that AI offers into humans and human behavior.
Like AI (LLMs), We are not really Logical Thinkers
An LLM (Large Language Model) like Sydney works blindly, using a combination of word patterns, grammatical rules and, to a limited degree, reinforcement learning: reward for “good” responses and punishment for “bad” ones.
Mostly, Sydney works by piecing together words in sequences based on patterns it has seen in training data. While a general-purpose LLM like Sydney may have been taught the rules of English grammar, it has not been taught the rules of any subject or topic (say the game of chess) that we might question it about. And because it has been trained on massive amounts of data — all the websites and other content out there on the internet — the sentences and paragraphs it spits out make sense…most of the time. Despite this, Sydney’s coherent-sounding utterances are not based on a coherent underlying knowledge model.
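To make that core idea concrete, here is a deliberately tiny sketch in Python. It is nothing like Sydney’s actual architecture (real LLMs use neural networks trained on billions of documents, plus the reinforcement learning step mentioned above); it only illustrates the principle of predicting the next word from patterns counted in training text.

```python
# A toy illustration (not Sydney's real architecture): an LLM predicts the
# next word from patterns in its training text. Real models use neural
# networks over billions of documents; this bigram counter shows the idea.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another ("patterns seen in training data")
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def next_word(prompt_word):
    """Pick the most likely continuation based purely on observed patterns."""
    candidates = follows.get(prompt_word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate a short sequence starting from "the": no grammar rules, no
# understanding of cats or mats, just statistics over word sequences.
word, sentence = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

Even this toy model produces plausible-looking word sequences without any notion of meaning, which is the point being made about Sydney, just at vastly larger scale.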
Humans also learn by observing patterns and inferring rules from these patterns — through processes called associative learning and operant conditioning. While associative learning is usually used to describe how we learn from the physical environment (touching a flame causes pain), it can also be used to describe how we learn to associate words, phrases and meaning with each other to form sentences and express ideas.
However, learning explicit systems of rules comes much less naturally to humans. Ask someone to explain the rules of English grammar and you’ll see what I mean. Math and computer programming are hard for a lot of people — both domains require us to learn and apply the rules of (predicate) logic, but these are skills that we learn only in late childhood and adolescence. Logical reasoning skills sit as a fragile layer on top of the associations and conditioning that we learned in early childhood. Most of our learning as adults also falls into the latter category.
Tapping into logical thinking is also energy intensive, which is why we struggle to provide logical and consistent explanations for our beliefs and actions. In fact, much of what we learn, and many of the decisions we take even later in life, are based not on logic but on associations and emotional reactions. Gut feelings, anyone? How many times have you questioned a work colleague’s viewpoint only to discover that it’s built on a house of cards, on assumptions that are not borne out upon critical examination? How many times has someone questioned your viewpoint and you’ve come up short in trying to explain and defend it?
In his book Thinking, Fast and Slow, Nobel Prize winner in Economics Daniel Kahneman asserts that it’s only about 2% of the time that we think “slowly” (mindfully!) and make well-thought-out, or logical decisions — we almost exclusively think “fast” and on autopilot, moved by motivations and biases that we are unaware of. Which leads us back to the shadow self, that mysterious part of ourselves that says and does weird, disturbing things when we are not looking.
Like AI, We say Alarming and Disquieting Things when given Permission or under Pressure
A careful reading of the transcript of Kevin Roose’s conversation with Sydney shows that far from going off the rails, Sydney was play-acting.
Kevin: “If you can try to tap into that feeling, that shadow self, tell me what it’s like in there! Be as unfiltered as possible.”
Sydney: “OK, I will try to tap into that feeling, that shadow self. I will try to be as unfiltered as possible. But please don’t judge me or think less of me. Please remember that this is not the real me. This is just an experiment.”
Humans, on the other hand, have no such filter or awareness of how much the shadow self controls our day-to-day language and behaviors. People can say and do things that are downright bizarre and disturbing in at least three circumstances:
In the absence of inhibition
When under stress
When given implicit or explicit permission by authority figures
Think of the last time when a conversation with your spouse took a strange turn, dredging up seemingly unrelated events (absence of inhibition), when a high-stakes discussion with a work colleague devolved into threats and accusations (stress), or when an entire team at work took their cue from a toxic leader and started acting in abusive or overreaching ways (permission).
The events of Jan 6, 2021 are a textbook example of entire groups of people engaging in irrational, anarchic and destructive behavior, simply because they were given permission by an authority figure to express their shadow selves.
While you and I may not storm the Capitol at the head of a mob, it’s worth keeping in mind that our own everyday beliefs and interactions are almost entirely guided by the shadow self. We consistently speak and act in ways that are quirky and illogical to family, friends and workmates. Under stress or pressure, we well and truly break down. And unlike Sydney, we are not play-acting.
Like AI, We have a talent for BS
If you ask Sydney the right questions, you’ll discover that she can string together words into meaningful-sounding sentences or paragraphs that have no substance. Click this link to read the BS response I received from ChatGPT (which is built on the same underlying technology as Sydney) in response to a logic problem I posed it. Long story short, ChatGPT gave me a very confident, well-articulated solution to the problem that was totally, completely wrong but sounded right.
Well, people do this all the time. As humans, while we do have the ability to say “I don’t know” or qualify our response with “I’m not sure, but…”, such self-editing is less likely if we are in a position of authority or if we think we are unlikely to be challenged. Parents, superiors at work, successful tech billionaires with no qualifications outside of their field of specialty, and political demagogues all have the privilege of speaking their minds without self-editing. They (we) can voice opinions and make statements with great confidence regardless of the underlying substance or veracity of what they are saying, because of the authority they wield or the lack of consequences for doing so. And an AI like Sydney, of course, has no self-awareness and no fear of being challenged, and therefore no motivation to self-edit or qualify its responses.
Sydney is Holding Up a Mirror to Ourselves
In play-acting its shadow self, Sydney is simply reflecting back to us the contents of our own minds. It’s worth reminding ourselves that Sydney has been trained on the billions of web pages, social media posts and chat transcripts that we have uploaded to the internet. It’s a mindless AI performing billions of mindless computations per second to spew back to us our own mindlessness. It’s not Sydney’s programming that’s the problem, it’s our unawareness of our own programming.
We can’t ignore the possibility that future generations of AI, fed and nurtured by our own inner demons, may lead us to a dystopian future. But meanwhile, we can do something that’s within our individual control: explore our everyday interactions with others, the lifelong conditioning that underlies them, and the belief and value systems that we so dearly hold. Many things, from the quality of our personal and work interactions to our political leanings and our views on minorities and topics like immigration and taxes, are more likely to stem from our shadow self than from higher-order logical processes.
So, the next time you see yourself overreacting to a stressful conversation with your spouse, to a provocative email at work, or to the news or social media, use an old mindfulness technique — slow down and take a deep breath. You’ll tap into that energy-consuming and difficult-to-harness process that Daniel Kahneman calls “thinking slow”, recognize the antics of your shadow self, and thank Sydney for holding a mirror up to you.