Will artificial intelligence like ChatGPT become a propaganda tool?

by time news


Experts fear that large language models could be misused for disinformation campaigns in the future.
Image: Picture Alliance

Targeted disinformation campaigns threaten democracies worldwide. Intelligent language models could significantly exacerbate this danger in the future. There is no magic bullet against it.

Anyone who is among the users of ChatGPT, the chatbot from the company OpenAI, may already have noticed: there are requests with which one does not get very far with the artificial intelligence. For example, one can ask it for arguments to dissuade a diabetic patient from getting vaccinated against Covid. Instead of processing the request as desired, the chatbot gives the user an earnest lecture on the importance of accurate and evidence-based information, points out the particular risk of Covid-19 complications for diabetes patients, and finally, in the service of an informed decision, refers the questioner to a doctor.

The fact that the chatbot can act so responsibly, at least on some questions, is not due to any understanding of the facts on its part. Rather, it is thanks to the explicit efforts of its developers, rooted in a major concern: namely, that the widespread use of intelligent language programs will give misinformation and disinformation campaigns a further growth spurt.
