OpenAI wants ChatGPT technology to moderate what is said on the internet

by time news

2023-08-17 15:38:33

OpenAI, the company behind the ChatGPT conversational bot, has presented a content moderation system based on GPT-4, the same technology that powers its chatbot. The system is intended to moderate online traffic and filter "toxic and harmful" material from the internet, in order to "relieve the mental burden" of the human moderators who perform this role.

In an announcement, the company run by Sam Altman stressed that moderating content on digital platforms is "crucial in maintaining the health" of those media. It pointed to the "meticulous effort, sensitivity and deep understanding of context" that online content moderation requires, as well as the need for "rapid adaptation" to new use cases in this field.

OpenAI also remarked that, because of this complexity, moderation is a "slow and challenging" process for the people dedicated to reviewing content and filtering out harmful or inappropriate material.

Against this background, the company has presented a content moderation system that uses its own GPT-4 technology to filter online content and detect "toxic and harmful" material on digital platforms.

As OpenAI detailed in a statement on its blog, the system uses the company's most powerful AI technology to help moderate online traffic in accordance with the specific policies of the platform where it is deployed. In fact, any user with access to the OpenAI API can implement this system and build their own AI-assisted moderation process.
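OpenAI's announcement does not include code, but the API-based flow it describes amounts to pairing a written policy with each piece of content in a single request. The sketch below illustrates that idea; the policy text, rule labels, and function name are invented for this example, and the payload mirrors the general shape of a chat-completion request rather than any exact official schema.

```python
import json

# Hypothetical policy text with made-up rule labels, for illustration only.
POLICY = """\
Classify the user content against these rules:
- H1: direct threats of violence
- S1: harassment or hateful abuse
If no rule applies, answer "allowed".
Answer with the single label only."""

def build_moderation_request(content: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-style request that pairs the moderation policy
    (as the system message) with the content to be classified."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
        # Deterministic output helps keep labels consistent across runs.
        "temperature": 0,
    }

payload = build_moderation_request("Hello, how do I bake bread?")
print(json.dumps(payload, indent=2))
```

Sending this payload to the model would return a single label, which the platform can then map onto its own enforcement actions.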

The system is designed to "relieve the mental load of a large number of human moderators", who can rely on GPT-4 to filter content. In addition, as the company explained, this technology allows for "more consistent" labeling of online content, since LLMs (large language models) are sensitive to differences in wording and can adapt more quickly to policy updates to deliver a "consistent content experience". On top of this, it offers a "faster feedback loop" for refining the moderation policies in use.

To use the system, the desired moderation rules are first given to GPT-4 as a written policy. OpenAI then tests the moderation system against a sample of problematic content, judged under those pre-established rules.

The decisions made by the AI must then be reviewed by human moderators; where they find erroneous judgments, the AI's decision can be corrected and the policy refined so that the model moderates more precisely. "We can repeat the steps until we are satisfied with the quality of the policy," OpenAI explained. This procedure reduces the content policy development process "from months to hours."
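The review loop described above can be sketched as: classify each sample, compare against a human "gold" label, and surface disagreements so the policy wording can be revised. Everything below is invented for illustration; the keyword-based `classify` stub merely stands in for the GPT-4 call that OpenAI's actual process uses.

```python
def classify(text: str, policy: dict) -> str:
    """Stub standing in for a GPT-4 call: label text by keyword rules.
    policy maps a label to a list of trigger keywords."""
    for label, keywords in policy.items():
        if any(k in text.lower() for k in keywords):
            return label
    return "allowed"

def find_disagreements(samples, policy):
    """Return (text, model_label, human_label) for every sample where
    the model's label differs from the human reviewer's label."""
    return [
        (text, classify(text, policy), gold)
        for text, gold in samples
        if classify(text, policy) != gold
    ]

# Toy policy and labeled sample set, invented for the sketch.
policy = {"harassment": ["idiot"], "violence": ["attack"]}
samples = [
    ("you are an idiot", "harassment"),
    ("the heart attack risk rises with age", "allowed"),  # false positive
]

for text, got, want in find_disagreements(samples, policy):
    print(f"mismatch: {text!r} -> model={got}, human={want}")
```

Each mismatch points at a policy clause that is too broad or too narrow; editing the policy and re-running the loop is the "repeat until satisfied" step OpenAI describes.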

Despite all this, OpenAI has pointed out that, for now, the system has some limitations. For example, it referred to possible "unwanted biases" that may have been introduced into the model during training.

“As we continue to refine and develop this method, we remain committed to transparency and will continue to share our learnings and progress with the community,” OpenAI has stated in this regard.

