OpenAI (ChatGPT) paves the way for the “military” use of artificial intelligence

by time news

2024-01-15 22:25:00

ChatGPT’s parent company, OpenAI, quietly changed its usage rules earlier this month, opening the door to the defense sector. The change is cause for concern, although the company wants to be reassuring.

Until recently, the words “military and war” appeared on the list of prohibited uses of OpenAI’s artificial intelligence, alongside crimes against children, harassment and illegal activities. According to The Intercept, they have been gone since January 10.

As Presse-Citron reported, the American company did not deny the change in language, but offered an explanation: “There are use cases in national security that correspond to our mission. For example, we already work with DARPA to spur the creation of new cybersecurity tools to secure the open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under the heading ‘military’ in our previous policies. So the goal of our policy update is to provide clarity and enable these discussions to take place.”

The company also specified that there are limits to these new uses: “Our policy does not allow our tools to be used to harm people, develop weapons, monitor communications, or harm others or destroy property.”

That said, as a TechCrunch article shows through two screenshots, “everything has been rewritten.” Where the previous version of the rules laid out a very clear list of prohibitions, the new one is a collection of rather vague and flexible instructions. Come what may!
