How Do People Make Decisions In Groups?

Using a mathematical framework with roots in artificial intelligence and robotics, researchers have uncovered the process for how people make decisions in groups.

The researchers also found that they could predict a person’s choice more accurately than more traditional descriptive methods could.

In large groups of essentially anonymous members, people make choices based on a model of the “mind of the group” and an evolving simulation of how a choice will affect that theorized mind, the study finds.

“Our results are particularly interesting in light of the increasing role of social media in dictating how humans behave as members of particular groups,” says senior author Rajesh Rao, a professor in the University of Washington’s Paul G. Allen School of Computer Science & Engineering and co-director of the Center for Neurotechnology.

Our actions and the group

“In online forums and social media groups, the combined actions of anonymous group members can influence your next action, and conversely, your own action can change the future behavior of the entire group,” Rao says.

The researchers wanted to find out what mechanisms are at play in settings like these.


In the paper, they explain that human behavior relies on predictions of future states of the environment—a best guess at what might happen—and the degree of uncertainty about that environment increases “drastically” in social settings. To predict what might happen when another human is involved, a person makes a model of the other’s mind, called a theory of mind, and then uses that model to simulate how one’s own actions will affect that other “mind.”

While this strategy works well for one-on-one interactions, modeling each individual mind in a large group is far harder. The new research suggests that humans instead create an average model of a “mind” representative of the group, even when the identities of the others are not known.
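
One way to picture that averaged “group mind” is as a single running estimate of how likely a typical member is to cooperate, updated after every observed action. The short sketch below illustrates the idea under assumptions of our own; the class name, the Beta-distribution bookkeeping, and the numbers are illustrative and are not taken from the study itself.

```python
# Illustrative sketch: the "average group member" modeled as a single Beta-
# distributed belief about how likely that member is to cooperate. The class
# name and the Beta bookkeeping are assumptions for illustration only.
class AverageGroupMind:
    def __init__(self, prior_coop=1.0, prior_defect=1.0):
        # Beta(prior_coop, prior_defect) prior over the cooperation rate.
        self.coop = prior_coop
        self.defect = prior_defect

    def observe(self, others_actions):
        """Update the belief from one round of observed actions (True = cooperated)."""
        cooperated = sum(others_actions)
        self.coop += cooperated
        self.defect += len(others_actions) - cooperated

    def expected_cooperation(self):
        """Current best guess at how likely a typical member is to cooperate."""
        return self.coop / (self.coop + self.defect)

mind = AverageGroupMind()
mind.observe([True, False, True, True])       # one round: 3 of 4 others cooperated
print(round(mind.expected_cooperation(), 2))  # ~0.67
```

The point of collapsing everyone into one representative member is that the belief stays the same size no matter how large, or how anonymous, the group is.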

To investigate the complexities that arise in group decision-making, the researchers focused on the “volunteer’s dilemma task,” wherein a few individuals endure some costs to benefit the whole group. Examples of the task include guarding duty, blood donation, and stepping forward to stop an act of violence in a public place, they explain in the paper.

Predicting decisions

To mimic this situation and study both behavioral and brain responses, the researchers put subjects in an MRI scanner, one by one, and had them play a game. In the game, known as a public goods game, each subject’s contribution to a communal pot of money influences the others and determines what everyone in the group gets back. A subject can choose to contribute a dollar or to “free-ride,” that is, to keep the dollar and hope that others contribute enough to the pot for everyone to receive the reward.

If the total contributions exceed a predetermined amount, everyone gets two dollars back. The subjects played dozens of rounds with others they never met. Unbeknownst to the subjects, the other players were actually simulated by a computer that mimicked previous human players.
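
To make the game’s mechanics concrete, here is a minimal sketch of the payoff rule for one round as described above. The group size, contribution threshold, and function names are illustrative assumptions rather than details reported by the researchers.

```python
def public_goods_round(contributions, threshold, reward=2.0, cost=1.0):
    """One round of the thresholded public goods game described above.

    contributions: list of booleans, True if that player put a dollar in the pot.
    threshold: number of contributions needed for everyone to get the reward.
    Returns each player's net payoff for the round.
    """
    pot_reached = sum(contributions) >= threshold
    payoffs = []
    for contributed in contributions:
        payoff = (reward if pot_reached else 0.0) - (cost if contributed else 0.0)
        payoffs.append(payoff)
    return payoffs

# Example: a 5-player group that needs at least 3 contributions.
# When enough others contribute, a free-rider nets 2.0 while a contributor nets 1.0.
print(public_goods_round([True, True, True, False, False], threshold=3))
# -> [1.0, 1.0, 1.0, 2.0, 2.0]
```

Written this way, the dilemma is easy to see: free-riding always pays more in a single round, provided enough other people still contribute.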

“We can almost get a glimpse into a human mind and analyze its underlying computational mechanism for making collective decisions,” says lead author Koosha Khalvati, a doctoral student in the Allen School. “When interacting with a large number of people, we found that humans try to predict future group interactions based on a model of an average group member’s intention. Importantly, they also know that their own actions can influence the group. For example, they are aware that even though they are anonymous to others, their selfish behavior would decrease collaboration in the group in future interactions and possibly bring undesired outcomes.”

In their study, the researchers assigned mathematical variables to these actions and built their own computer models to predict what decisions a person might make during play. They found that their model predicts human behavior significantly better than reinforcement learning models, in which a player learns to contribute based only on whether the previous round paid out, regardless of the other players, and better than more traditional descriptive approaches.
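
The paper’s actual model is a full probabilistic planner, but the contrast the researchers draw can be caricatured in a few lines: one agent chooses by simulating how its own action would shift the group’s future cooperation, while a model-free reinforcement learner only nudges an action’s value toward whatever it paid last time. Everything below, including the exaggerated “influence” parameter, is an illustrative assumption of our own, not the study’s model.

```python
def simulate_choice(expected_coop, n_others, threshold, influence=0.2,
                    reward=2.0, cost=1.0):
    """Choose contribute (True) or free-ride (False) by simulating the group.

    expected_coop: current estimate of the average member's cooperation rate.
    influence:     assumed (and deliberately exaggerated) amount by which one's
                   own action nudges the group's future cooperation up or down.
    """
    values = {}
    for my_action, shift in ((True, influence), (False, -influence)):
        future_coop = min(1.0, max(0.0, expected_coop + shift))
        expected_contribs = n_others * future_coop + (1 if my_action else 0)
        # Crude chance that the pot reaches the threshold in the simulated future.
        p_success = min(1.0, expected_contribs / threshold)
        values[my_action] = p_success * reward - (cost if my_action else 0.0)
    return max(values, key=values.get)

def reinforcement_update(value, last_payoff, learning_rate=0.1):
    """Model-free baseline: move an action's value toward its last payoff,
    ignoring what the other players did or might do next."""
    return value + learning_rate * (last_payoff - value)

# Example: with a moderately cooperative group, the simulating agent contributes,
# because free-riding would drag expected contributions below the threshold.
print(simulate_choice(expected_coop=0.5, n_others=4, threshold=3))  # True
```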

Given that the model provides a quantitative explanation for human behavior, Rao wonders if it may be useful when building machines that interact with humans.

“In scenarios where a machine or software is interacting with large groups of people, our results may hold some lessons for AI,” he says. “A machine that simulates the ‘mind of a group’ and simulates how its actions affect the group may lead to a more human-friendly AI whose behavior is better aligned with the values of humans.”

The results appear in Science Advances.

About the Authors

Senior author: Rajesh Rao, a professor in the University of Washington’s Paul G. Allen School of Computer Science & Engineering and co-director of the Center for Neurotechnology. Lead author: Koosha Khalvati, a doctoral student in the Allen School.

Additional coauthors are from UC Davis; New York University; and the Institut des Sciences Cognitives Marc Jeannerod. The National Institute of Mental Health, the National Science Foundation, and the Templeton World Charity Foundation funded the work.
