What if you could ask an artificial intelligence to explain how a quantum computer works, how to redecorate your home, or how to handle your relationship problems?
That experience is now possible with the impressive ChatGPT, a conversational tool developed by the company OpenAI. A million users tried it in just a few days, OpenAI chief executive Sam Altman announced on Monday, December 5.
Designed to mimic human conversation, it is able to provide natural-sounding answers to complex questions almost instantly.
It is based on a machine-learning technique called "reinforcement learning from human feedback" (RLHF). In other words, after the model was trained on large amounts of text, humans helped it improve by rating and guiding its answers.
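The feedback loop described above can be sketched in a few lines of Python. This is a deliberately toy illustration of the idea, not OpenAI's actual system: real RLHF trains large neural networks, whereas here a hypothetical "reward model" simply memorises human rankings and the "policy" is nudged toward preferred answers.

```python
import random

# Step 1: the model proposes candidate answers to a prompt.
candidates = ["answer A", "answer B", "answer C"]

# Step 2: human raters rank the candidates (hypothetical scores;
# higher means more preferred).
human_ranking = {"answer A": 1, "answer B": 2, "answer C": 0}

# Step 3: a "reward model" is fit to those rankings. Here it just
# looks them up, standing in for a trained neural network.
def reward_model(answer):
    return human_ranking.get(answer, 0)

# Step 4: the policy is updated to favour high-reward answers.
# We approximate this by sampling proportionally to reward.
def improved_policy(options):
    weights = [reward_model(c) + 1 for c in options]  # +1 avoids zero weight
    return random.choices(options, weights=weights, k=1)[0]

best = max(candidates, key=reward_model)
print(best)  # the human-preferred answer wins after feedback
```

The point of the sketch is the division of labour: humans only express preferences between outputs, and the system turns those preferences into a reward signal that steers future generations.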
Users, who have posted numerous interactions with the machine on social media, are impressed by ChatGPT’s detailed responses, as well as by its ability to recall earlier parts of a conversation and to acknowledge its mistakes.
The many conversations shared on Twitter demonstrate the software’s versatility: it can expound on politics and philosophy or invent poems on all kinds of subjects. Some have even predicted that the tool will replace teachers, or make the exercises given to students obsolete, since they can now be completed in two clicks with its help.
Fluent and understandable writing
Many media outlets have also tried having their articles written by ChatGPT, amazed by its ability to write fluidly and understandably. It can even produce complex tutorials, such as an introduction to coding in Python, as one user demonstrated.
Um… I just had like a 20 minute conversation with ChatGPT about the history of modern physics. If I had this shit as a tutor during high school and college…. OMG.
I think we can basically re-invent the concept of education at scale. College as we know it will cease to exist.
— Peter Wang (@pwang) December 4, 2022
Not content with giving precise answers to the questions posed to it, the artificial intelligence is also capable of giving nuanced ones. Asked “Is the right wrong about nuclear power?”, ChatGPT pointed out, for example, that the term “the right” is too vague to serve as a basis for serious reflection.
Bias and dangerous information
The tool differs from Google in that it sorts information for the user, who has even less work to do than with a search engine.
However, ChatGPT’s intelligence has its limits. Faced with certain questions, the tool gives incorrect or even absurd answers. On Twitter, the application notably told one user that “world peace is far from being a desirable ideal”.
It is also accused of having biases built into its algorithm that discriminate against certain groups of people, and of giving out dangerous information, such as instructions for building explosive devices, if the question is phrased the right way.