Complex questions still need people, not machines, researchers find

A study of StackExchange data finds that casual users are turning to ChatGPT for simple questions — but still rely on humans for complex ones, suggesting AI is reshaping rather than replacing online knowledge communities

AI chatbots such as ChatGPT are changing how people use online Q&A platforms, but they are not replacing them, according to new research from Rotterdam School of Management, Erasmus University (RSM).

Dr Dominik Gutt and Dr Martin Quinn examined how the launch of ChatGPT affected activity on StackExchange, one of the world’s largest online question-and-answer platforms. Analysing data on the number, complexity and novelty of posts, they found that the shift towards AI depends on how complex a user’s question is.

The study grouped users into three categories — casual, intensive and top contributors — and tracked how their behaviour changed after ChatGPT’s release. Questions from casual users fell by 18 per cent, while activity among intensive and top users declined less sharply.

Casual users were also found to be asking more complex questions, whereas the complexity and originality of posts from committed users remained unchanged.

“This suggests that casual users might just delegate easier questions to ChatGPT but ask more complex questions on the forum to be answered by humans. However, intensive and top users do not show this behavioural pattern. One reason could be that casual users are mainly looking for answers as such, while committed users value the community experience,” Dr Gutt said.

The researchers argue that AI tools are reshaping — rather than replacing — human knowledge-sharing communities.

For those who rely on Q&A platforms, the researchers say, the findings offer an important insight: AI chatbots are not necessarily killing these communities.

“But on the flip side, if the number of questions posted on public platforms decreases and the number on proprietary platforms like ChatGPT increases, then we lose publicly available knowledge,” Dr Quinn added.

The researchers say the trend could have implications for the data used to train AI systems, many of which rely on information from public forums such as StackExchange. If the questions that remain online become more complex, the overall quality of training data may improve, benefiting both AI models and society.




