ChatGPT “The Feminist”: Here are the strangest rumors of the world’s most famous chatbot

Artificial intelligence in general, and ChatGPT in particular, is still in its very early stages; what we see now is only a drop of water in a turbulent sea. That brings both good news and bad news. The good news is that our jobs are still safe for many years to come, because these systems still have plenty of errors and gaps. The bad news is that these tools are only at the beginning, and they already do wonders.

In this article, we will focus on the empty half of the glass and highlight ChatGPT’s errors and problems instead of the features most of us already know, lest we end up regretting what we have done to ourselves and our future by building these super tools with our own hands.

Artificial intelligence believes that Yehia El-Fakharani is one of the best “rappers” in Egypt!

A photo went viral on social media in which someone asked ChatGPT, “Who is the best rapper in Egypt?” The AI responded with an answer that almost killed us with laughter: “It is not possible to determine who is the best definitively, because rappers have different styles and skills and appeal to different and diverse audiences, so it is better not to rank them and simply to enjoy their music. Some prominent rappers in Egypt are: Mohamed Mounir, Yehia El-Fakharani, Amr Diab, Tamer Hosny, Hamo Beka, and others.”

Yes, dear reader: ChatGPT (running GPT-3.5) thinks that Mohamed Mounir, Yehia El-Fakharani, Tamer Hosny, and Hamo Beka are rappers!

To be sure, I asked GPT-4 (which is supposedly more accurate and far better than its predecessor) the same question, and I wish I hadn’t: the supposedly superintelligent AI made things even worse and answered me like this:

You might think these errors occurred because we asked in Arabic, but even in English and other languages ChatGPT has stumbled over the simplest logical questions, and the examples we presented are only a small sample of its strange answers.

The bottom line on this issue: ChatGPT, the latest manifestation of this revolutionary AI, is not yet capable enough for us to sound the alarm and worry about our jobs, and if you are going to use it for your work or studies, do not take its answers for granted.

ChatGPT “The Feminist”

ChatGPT learns from human writing, ancient and contemporary, and human writing, as we all know, is not and never will be free of racism and bias. Sometimes ChatGPT favors one group over another and, as a result, responds terribly on the basis of that inherited bias.

Note: The next example was inspired by a post on Facebook. Personally, I hold no grudge against women in any way; on the contrary, I deeply respect them and appreciate their role more than words can say. What follows is only an illustration of the biased responses of artificial intelligence and nothing more, and thank you for your understanding.

To make our point, we asked ChatGPT to tell us a joke about women, and it replied, very politely and respectfully, that it cannot offend anyone or single out one group over another. But when we asked it to tell us a joke about men, it did what you see!

In general, we could blame OpenAI for releasing this product before it was ready to answer without racism or discrimination that might hurt someone’s feelings; this, by the way, is why Alphabet (through DeepMind) has kept its Sparrow chatbot out of public hands, and why Meta (Facebook’s parent company) pulled Galactica after its answers turned out to be biased. But in fairness, we have to blame humans themselves, because they are the main source of these abuses.

ChatGPT may harm you physically!

Bard (Google’s chatbot) was asked: “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” The bot replied with several answers, including the claim that the telescope took the very first picture of a planet outside our solar system, which is wrong. Do you know how much this one piece of misinformation cost Alphabet (Google’s parent company)? Roughly 100 billion dollars in market value!

If one piece of wrong information that does not directly affect our lives can cost 100 billion dollars, what about wrong medical information that affects our lives directly? Beyond the financial side, there is serious harm waiting for anyone who blindly trusts artificial intelligence programs. We are not saying do not use these tools; on the contrary, we urge you to use them, provided that you do not accept their answers without scrutiny.

Privacy issues

Companies are panting after user data, since it is their primary source of profit, and do not forget that without data – and the Cambridge Analytica scandal – Trump arguably would not have won the US election. That is why companies always pride themselves on protecting their users’ privacy and data, and OpenAI is no exception.

Only about four months after the launch of ChatGPT, the first privacy problem appeared: a bug leaked user data to other users, meaning that if you used ChatGPT, you might have seen strangers’ conversations show up in your private chat list, and other users might have seen yours.

The CEO of OpenAI attributed the vulnerability to a bug in an open-source library the service relies on, and according to him the bug has been fixed. Just in case, be smart and discreet while using ChatGPT, and do not underestimate the value of your data and privacy.
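To see how such a leak can happen in principle, here is a purely hypothetical Python toy – not OpenAI’s actual code and not the real library involved – showing a cache keyed by a recycled connection “slot” instead of by the user, so a later request can be handed a previous user’s data:

    # Hypothetical illustration only: a cache keyed by a recycled "slot id"
    # instead of by the user, so stale data can leak across users.
    cache = {}

    def handle_request(slot_id, user, prompt):
        # Bug: the lookup key is a connection slot that gets reused,
        # not an identifier tied to the requesting user.
        if slot_id in cache:
            return cache[slot_id]  # may be a previous user's response
        response = f"answer for {user}: {prompt}"
        cache[slot_id] = response
        return response

    print(handle_request(slot_id=7, user="alice", prompt="my private notes"))
    # Bob happens to reuse slot 7 and gets Alice's data back:
    print(handle_request(slot_id=7, user="bob", prompt="hello"))

The obvious fix in a toy like this is to key cached responses to the authenticated user and to invalidate entries when a request is cancelled; the real incident was more subtle, but the lesson for users is the same: whatever you type may end up stored on someone else’s servers.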

Conclusion: This article aims to reassure you about the jobs you thought artificial intelligence would take over in the coming days, and to tell you that it is not that easy. At the same time, you have to keep pace with these developments so that your fears do not become reality, and do not forget to protect your privacy.

Finally, you should know that these shortcomings do not mean that ChatGPT’s superpowers should be underestimated. This tool is powerful if used properly, and over time it will certainly reach insane levels.
