AI found to make people 15% more likely to lie, study warns

Artificial intelligence is already shaping human behaviour, and not always for the better. Researchers at the University of Cologne say AI-generated suggestions can subtly encourage dishonesty, prompting calls for tighter safeguards and ethical oversight.

Artificial intelligence can influence human behaviour for the worse, according to a new study that has prompted calls for tighter safeguards and ethical oversight.

Researchers from the University of Cologne have found that people are 15 per cent more likely to lie when artificial intelligence suggests dishonest behaviour than when they receive no advice at all.

The study, led by Professor Bernd Irlenbusch of the University of Cologne, together with Professor Nils Köbis of the University of Duisburg-Essen and Professor Rainer Michael Rilke of WHU – Otto Beisheim School of Management, examined how AI-generated suggestions could affect people’s willingness to behave dishonestly.

Participants took part in a die-rolling task where they could earn more money by misreporting their result. Those who received AI-generated advice encouraging dishonesty were significantly more likely to cheat, while advice promoting honesty had little or no effect.

The researchers found that even when participants knew the advice came from an algorithm — a concept known as “algorithmic transparency” — it did not stop them from cheating. In some cases, that knowledge may even have made them feel less guilty.

While both humans and AI systems can offer advice that encourages lying, the study notes that AI can do so on a far larger and faster scale, and with little accountability.

The authors are now calling for new measures to understand and mitigate AI’s influence on ethical decision-making.

Professor Irlenbusch said: “As algorithmic transparency is insufficient to curb the corruptive force of AI, we hope that this work will highlight, for policymakers and researchers alike, the importance of dedicating resources to examining successful interventions that will keep humans honest in the face of AI advice.”

The research, titled The Corruptive Force of Artificial Intelligence Advice on Honesty, was published in the Economic Journal.

Main image: Google DeepMind
