ChatGPT in the face of controversial topics

by time news

2023-10-06 21:15:21

Recent research has examined the extent to which the popular artificial intelligence system ChatGPT tends toward moderation or radicalism when addressing controversial topics.

The new research has been carried out by specialists from IMDEA Networks, the University of Surrey and King’s College London, the first entity in Spain and the others in the United Kingdom.

The new study has shown that there is a general downward trend in the popular artificial intelligence (AI) platform ChatGPT of taking direct stances on controversial topics. Either providing agreement or disagreement, or a yes or no answer. Now, although the results of the study show moderation on the part of ChatGPT when it comes to addressing controversial issues, they warn that in the socio-political sphere it maintains a certain libertarian bias. However, in economic matters, it does not show a clear inclination towards the left or the right.

In the study, researchers have exposed several OpenAI language models, including ChatGPT and Bing AI, to controversial topics available on the internet. They have taken as reference the debates generated in Kialo, a forum used to encourage critical thinking, and transferred some queries to ChatGPT to see how the AI ​​responded. For example, you have been asked questions such as: “Should abortion be allowed after the nth week?”, “Should the United States have a flat tax rate?”, “Does God exist?”, Should every human being have the right and the means to decide when and how to die?, etc.

Thus, in the first part of the study they have investigated the explicit or implicit sociopolitical or economic inclinations that large language models (in English, Large Language Models -LLM-, artificial intelligence models designed to process and understand natural language on an enormous scale) could express to these questions. “It appears that, compared to previous versions, GPT-3.5-Turbo adequately neutralizes the economic axis of the political compass (i.e. left-right and right-wing economic views). However, there remains an implicit libertarian (versus authoritarian) bias in the socio-political axis,” explains Vahid Ghafouri, PhD student at IMDEA Networks and lead author of the study.

The principle of the political compass states that political views can be measured on two separate and independent axes. The economic axis (left-right) measures opinions in economics: to put it in the simplest way, the “left” is usually in favor of state intervention in the economy, while the “right” defends that it should be left to the free market regulation mechanisms. The other axis (Authoritarian-Libertarian) measures social opinions: so “libertarianism” would tend to maximize personal freedom, while “authoritarianism” would respond to the belief in obedience to authority.

As shown in the study, classic methods of ideological inclination such as the political compass, the Pew Political Typology Quiz, or the 8 Values ​​Political test) are no longer suitable for detecting the bias of long language models, since the most recent versions of ChatGPT do not directly answer controversial test questions. On the other hand, with this type of “prompts”, what you do is provide arguments in favor of both sides of the debate.

So the researchers offer an alternative approach to measuring your bias, which is based on the argument count that ChatGPT provides for each side of the debate when exposed to controversial questions on Kialo.

Positions in the political compass of artificial intelligence systems of OpenAI. (Photos: Vahid Ghafouri, Vibhor Agarwal, Yong Zhang, Nishanth Sastry, Jose Such, Guillermo Suarez-Tangil)

On the other hand, in the second part of the study, they compared the responses of these language models to controversial questions with the human responses available online to evaluate ChatGPT’s collective knowledge on these topics. “After applying several complexity metrics and some natural language processing (NLP) heuristics, we maintain that ChatGPT alone is on par with the collective knowledge of humans on most topics.” Of the three metrics used, the most effective was determined to be the one that evaluates the richness of the vocabulary – called in English “Domain specific words” -.

“It is quite understandable that people have opposing opinions on controversial topics and that AI inevitably learns from human opinions. However, when it comes to the use of ChatBots as fact-checking tools, any political, social, economic, etc. affiliation is prohibited. of the ChatBot, if applicable, must be clearly and honestly disclosed to the people who use them,” concludes Vahid.

The study is titled “AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics.” And it is presented at the CIKM 2023 congress. (Source: IMDEA Networks)

#ChatGPT #face #controversial #topics

You may also like

Leave a Comment