London, Aug 8: Researchers who trained a large language model to respond to online political posts by people in the US and UK found that the quality of discourse improved. Powered by artificial intelligence (AI), a large language model (LLM) is trained on vast amounts of text data and can therefore respond to human requests in natural language.
Polite, evidence-based counterarguments by the AI system — trained prior to performing the experiments — were found to nearly double the chances of a high-quality online conversation and to “substantially increase (one’s) openness to alternative viewpoints”, according to findings published in the journal Science Advances.
Being open to alternative perspectives did not, however, translate into a change in one’s political ideology, the researchers found.
Large language models could provide “light-touch suggestions”, such as alerting a social media user to the disrespectful tone of their post, author Gregory Eady, an associate professor of political science and data science at the University of Copenhagen, Denmark, told PTI.
“To promote this concretely, it is easy to imagine large language models operating in the background to alert us to when we slip into bad practices in online discussions, or to use these AI systems as part of school curricula to teach young people best practices when discussing contentious topics,” Eady said.
Hansika Kapoor, a research author at the department of psychology at Monk Prayogshala in Mumbai, an independent not-for-profit academic research institute, told PTI, “(The study) provides a proof-of-concept for using LLMs in this manner, with well-specified prompts, that can generate mutually exclusive stimuli in an experiment that compares two or more groups.”