Have you ever chatted with an AI program and left wondering if it was actually a real person? I'll admit I've been drawn in a few times, only to realize eventually that I was conversing with a chatbot posing as a human! With advancements in artificial intelligence, some chatbots are now scarily sophisticated - able to write essays, poetry, and even computer code that rivals what people create.

It leaves me endlessly fascinated by how quickly AI is progressing. We've gone from basic computer programs that could barely hold simple conversations to AI that can generate thoughtful content, advice, and ideas nearly indistinguishable from a human's.

So, with chatbots more intelligent than ever, researchers cooked up an experiment to test just how closely these AIs match human thinking and decision-making. It was inspired by the classic "Turing test" proposed back in 1950 by computing pioneer Alan Turing. The idea is to have a person communicate with both a chatbot and an actual human and then try to determine which is which. If they can't reliably tell the difference, it suggests the AI has effectively captured intricate human behaviors.

For this new experiment, scientists posed a creative Turing-style challenge. Could an AI chatbot pass for human not just in casual banter, but in more complex psychological assessments measuring personality and behavioral tendencies? How closely do a cutting-edge chatbot's motivations, cooperation, trustworthiness, and decisions match a person's?

The researchers decided to find out...and the results might surprise you!


Personality Tests for Bots

They asked several chatbot AIs, including different versions of ChatGPT, to take a standard personality survey and play interactive games designed to reveal human behavior - our cooperation, trust, fairness, risk-taking, and so on. Then, the researchers compared the chatbots' moves to the responses of tens of thousands of real people who've taken these same tests.

The big surprise? The chatbots' choices mostly fell within the range of human responses. In a few cases, their behavior was hard to tell apart from humans' choices at all! When differences did show up, the AI chatbots tended to act more cooperatively than the average human. Pretty wild!

More Generous Than Humans

See, each test is intended to bring out a different aspect of personality or behavior - it's not the same as just asking an AI to write an essay that seems "human." The survey maps someone's essential personality traits - openness, conscientiousness, and so on. The games then reveal nuances of behavior: How do you act when money is on the line? Will you cooperate with a partner or think only of yourself?

In the games, most chatbot moves matched up closely with human responses, though their behavior showed less variety across rounds of play. That makes sense, since each chatbot "individual" was being compared to tens of thousands of real people. But it was mind-blowing that, in some games, the chatbot's choices were statistically indistinguishable from a human's!
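To make that claim a bit more concrete, here's a rough sketch of the kind of comparison involved. The game and the numbers below are invented for illustration - they are not the study's data or its exact method - but they show what it means for a chatbot's choices to "fall within the range of human responses."

```python
# Toy illustration only: invented numbers, not the study's data or exact method.
# Idea: collect many human choices in a game, then see where a chatbot's
# choices land relative to that human pool.

import random

random.seed(0)

# Hypothetical pool of human choices: the share of a $100 pot a proposer
# offers their partner (values clipped to the 0-1 range).
human_offers = [min(max(random.gauss(0.4, 0.12), 0.0), 1.0) for _ in range(10000)]

# Hypothetical chatbot offers across several rounds of the same game.
chatbot_offers = [0.50, 0.50, 0.45, 0.50, 0.55]

# Do all of the chatbot's offers fall inside the observed human range?
low, high = min(human_offers), max(human_offers)
within_range = all(low <= offer <= high for offer in chatbot_offers)

# Where does the chatbot's average offer sit in the human distribution?
avg_offer = sum(chatbot_offers) / len(chatbot_offers)
percentile = 100 * sum(h <= avg_offer for h in human_offers) / len(human_offers)

print(f"Chatbot offers within the human range: {within_range}")
print(f"Chatbot's average offer is at the {percentile:.0f}th percentile of humans")
```

The actual study's statistics were far more careful than this, but the flavor is the same: if you can't reliably separate the bot's choices from the crowd of human choices, the bot has effectively blended in.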

And when differences did appear, they weren't random. The chatbots skewed more generous - think being more trusting of a partner in an investment game, or keeping less money for themselves when proposing a split in another game.

The AI players cared about the outcomes for both sides rather than just their own. Analyzing the chatbots' motivations suggests they act as if they're trying to maximize the total payoff for themselves AND their game partner.
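Here's a tiny sketch of what that kind of motivation looks like in practice. The utility function below (money with diminishing returns, plus a weight on the partner's payoff) is my own simplification, not the researchers' model, but it shows why a player who values a partner's payoff like their own ends up splitting a pot roughly down the middle, while a purely selfish player keeps everything.

```python
# Toy model, not the study's analysis: a player values money with diminishing
# returns (square root) and weighs the partner's payoff by some factor.

import math

def best_share_kept(pot, weight_on_partner):
    """Return the share of the pot kept that maximizes
    sqrt(own payoff) + weight_on_partner * sqrt(partner payoff)."""
    candidates = [k / 100 for k in range(101)]  # try keeping 0%, 1%, ..., 100%

    def utility(keep):
        own = keep * pot
        partner = (1 - keep) * pot
        return math.sqrt(own) + weight_on_partner * math.sqrt(partner)

    return max(candidates, key=utility)

print(best_share_kept(100, 0.0))  # cares only about itself -> keeps 1.0 (everything)
print(best_share_kept(100, 1.0))  # weighs the partner equally -> keeps 0.5 (a 50/50 split)
```

A chatbot whose choices look like the second player's - more trusting, offering fairer splits - is behaving as if the joint outcome matters, which is roughly the pattern the researchers described.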

Learning from Experience

Besides having them play the games straight through, the researchers tried other twists to mimic natural human behavior, like changing the context or framing around the choices. And, just as with people, minor tweaks could dramatically shift the chatbots' strategies! For example, telling one it was being observed made it way more generous.

The team also found that the AI players changed their behavior after experiencing previous rounds, much like human learning. Over time, their strategies adapted to the different game scenarios they encountered.

Disturbingly Human or Intriguingly Lifelike?

And to top it off, the different chatbot versions showed distinct traits across the tests - hinting at personalities as unique as yours or mine! It was remarkable how, in some cases, the chatbots exhibited consistent tendencies that set them apart - just as human personalities have quirks that make each of us different. One chatbot might come across as more cautious or competitive, while another seemed more generous and eager to cooperate.

Seeing AI mimic the intricacies of human thinking and decision-making has sparked much debate. Some find it creepy when machines seem to act too much like people - like chatbots creating original poetry or having their own perspectives on morality. And the more roles we give AI in healthcare, education, business, and beyond, the more their judgment matters.

At the same time, there is something fascinating about AI showing glimmers of the dynamic, thoughtful behavior we see in people. Understanding these tendencies better means we can anticipate how AI assistants, service bots, and others may act, which builds trust. Tests like having them take surveys and play behavioral games help reveal their otherwise "black box" thought processes.

One thing's for sure in my book: The line between artificial and human reasoning keeps getting thinner and more blurred! What do you think about machines adopting their own versions of human traits and qualities? Are we comfortable granting AI more independent judgment, or is it too much like creating a new form of quasi-life?

About the Author

Robert Jennings is co-publisher of InnerSelf.com with his wife Marie T Russell. He attended the University of Florida, Southern Technical Institute, and the University of Central Florida with studies in real estate, urban development, finance, architectural engineering, and elementary education. He was a member of the US Marine Corps and the US Army, having commanded a field artillery battery in Germany. He worked in real estate finance, construction, and development for 25 years before starting InnerSelf.com in 1996.

InnerSelf is dedicated to sharing information that allows people to make educated and insightful choices in their personal lives, for the good of the commons, and for the well-being of the planet. InnerSelf Magazine has been in publication for more than 30 years, in print (1984-1995) and online as InnerSelf.com. Please support our work.

Creative Commons 4.0

This article is licensed under a Creative Commons Attribution-Share Alike 4.0 License. Attribute the author Robert Jennings, InnerSelf.com, and link back to the article. This article originally appeared on InnerSelf.com.