People act more rationally when they think they are dealing with AI, study finds

Researchers found participants were more willing to accept an unfair cash split from an AI partner than from a human, suggesting people see artificial intelligence as more logical and less driven by emotion

People make more economically rational decisions when they believe they are interacting with artificial intelligence rather than another person, according to research.

In a new study, participants took part in an economic game involving real money, a version of the classic ultimatum game, and were asked whether to accept or reject an unfair offer from either an AI or a human partner.

The proposed split was heavily weighted against them: 90 cents for the partner and 10 cents for the participant from a total of one dollar. If the participant rejected the offer, both sides received nothing.

From a purely economic point of view, accepting the 10-cent offer is the rational choice because it leaves the participant better off than rejecting it and getting nothing.

The researchers found that participants who believed they were dealing with AI were significantly more likely to accept the unfair split than those who thought the offer had come from a human.

The study, by researchers at the UCD Michael Smurfit Graduate Business School, suggests that people tend to see AI as more reason-driven and human decision-making as more influenced by emotion.

The researchers said this may mean people adjust their own behaviour to match what they believe is a more logical counterpart, rather than the effect revealing something unique about AI itself.

The findings could matter well beyond the lab as AI becomes more common in business, public policy and negotiations, where perceptions of how machines “think” may shape the choices people make in response.

The research was carried out by behavioural scientists Dr Suhas Vijayakumar, Dr Yuna Yang and Dr David DeFranza, who examined how “lay beliefs” – everyday assumptions about how something works – can influence behaviour directly.

The researchers said decision-makers should be aware that people may bring pre-existing assumptions about AI’s decision-making style into interactions, and that those beliefs may affect how readily they accept AI recommendations.

Dr Vijayakumar added: “We speculate perhaps a reason why people are less likely to accept a similar unfair offer from a person could also be because of expectations of reciprocity and emotional fairness that we share with other human beings. Future research needs to look at further expectations and beliefs about AI.”


Do you have news to share or expertise to contribute? The European welcomes insights from business leaders and sector specialists. Get in touch with our editorial team to find out more.
