How dangerous is AI in the hands of criminals?

by time news

2023-08-29 12:52:43

San Francisco. The huge success of chatbots like ChatGPT is now encouraging cybercriminals to build new kinds of digital intrusion tools. WormGPT and FraudGPT are the names of two malicious programs based on artificial intelligence (AI) that criminals want to use to attack companies and private individuals. And they have the potential to let hackers scale up their fraud operations.

Mirko Ross, CEO of the cybersecurity company Asvin, already sees a trend: dangerous cyber weapons are increasingly being offered for rent. “This lowers the barrier for ‘less talented’ attackers to carry out cyberattacks,” Ross told Handelsblatt.

Fraud attempts, including phishing attacks, are already widespread. Criminals often try to get personal information such as passwords or credit card details using fake emails or websites. This is where AI systems could be particularly helpful.

Steven Stone, who heads the threat analysis unit at cybersecurity specialist Rubrik, warns that AI is capable of massively expanding the scope of hacker attacks. Fraudulent e-mails no longer have to be translated manually or adapted to each recipient.

“Content research and language skills have historically limited attackers,” Stone said. “AI can overcome this hurdle at high speed.” The entire field of application for artificial intelligence is still in its infancy, for both attackers and defenders.

The quality of phishing attacks has increased with the AI boom

With ChatGPT, the start-up OpenAI has developed an AI model that can interpret and produce human language. The technology behind it is so-called large language models.

Fed the right data, they can answer almost any question and write texts on demand. In everyday office life, they now help numerous companies draft emails and descriptive texts. Cybercriminals want to exploit this ability and have developed their own systems modeled on ChatGPT.

There are many assumptions about the extent to which AI is already being used in cyberattacks, but no concrete figures, because it is hardly possible to tell whether a fraudulent email was written by an AI or a human.

>> Read also: How new cyber attacks with AI threaten the banks

“However, our employees and I sense that the quality of phishing attacks has increased,” Ross affirms. Emails and messenger messages from such attack campaigns are now mostly written without errors: “It is therefore becoming increasingly difficult to recognize fraudulent emails solely from errors in text structure and grammar.”
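The filtering heuristic Ross describes as failing can be sketched in a few lines. This is an illustration only, not any vendor's actual filter; the word list and threshold are invented for the example:

```python
# Illustrative only: a naive filter that flags mail by counting common
# misspellings -- the weak signal that no longer works once AI-written
# phishing mail arrives largely error-free.
COMMON_MISSPELLINGS = {"recieve", "acount", "verfy", "pasword"}

def looks_suspicious(text: str, threshold: int = 2) -> bool:
    """Flag a message if it contains at least `threshold` known misspellings."""
    words = (w.strip(".,!?") for w in text.lower().split())
    hits = sum(1 for w in words if w in COMMON_MISSPELLINGS)
    return hits >= threshold

clumsy = "Please verfy your acount or we recieve no pasword"
polished = "Please verify your account details before Friday."
print(looks_suspicious(clumsy))    # True -- flagged by spelling errors
print(looks_suspicious(polished))  # False -- passes, despite being phishing
```

The second message is just as fraudulent as the first, but a spelling-based heuristic has nothing left to catch.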

What the AI models WormGPT and FraudGPT are said to be capable of

Several AI systems for cyberattacks are already being offered on hidden parts of the internet known as the dark web and deep web. The WormGPT developers claim, for example, that their tool is purpose-built for attacks on companies and achieves quality similar to ChatGPT.

Security researcher Daniel Kelley from cybersecurity firm Slashnext tested the system. “WormGPT crafted an email that was not only remarkably compelling but also strategically smart,” Kelley wrote in his analysis. In the test, the tool formulated an email urging an employee to pay a fake bill.

“The quality is similar to ChatGPT, but WormGPT has no ethical boundaries or restrictions,” Kelley concluded. Although scam emails have been around for many years, Kelley fears a noticeable increase: “This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of inexperienced cybercriminals.”

The developers of FraudGPT claim that their tool offers many more functions. Language models can not only write texts, but also computer code. FraudGPT aims to use this ability to specifically develop malware.

Virus scanners work in part by recognizing and blocking known malicious programs. FraudGPT claims to be able to write completely new software on command that can no longer be detected this way. Security researcher Rakesh Krishnan from the company Netenrich spoke of a great danger from the system.
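The detection model that freshly generated malware would sidestep can be sketched as a simple signature lookup. This is a minimal illustration under stated assumptions, not how any real scanner is implemented; the hashes in the blocklist are invented:

```python
# Illustrative only: signature-based detection compares a file's hash
# against a blocklist of known samples. A newly generated variant has a
# hash no blocklist has seen, so it slips through.
import hashlib

KNOWN_MALWARE_HASHES = {
    "275876e34cf609db118f3d84b799a790d5f01f97",  # hypothetical sample A
    "b3f5ad353b2bf07e3a3a9d9dcbbf1bb9e2845e1a",  # hypothetical sample B
}

def is_known_malware(payload: bytes) -> bool:
    """Return True only if the payload's hash matches a known signature."""
    digest = hashlib.sha1(payload).hexdigest()
    return digest in KNOWN_MALWARE_HASHES

# A never-before-seen payload produces an unlisted hash -- no match.
print(is_known_malware(b"freshly generated variant"))  # False
```

Modern scanners layer heuristics and behavioral analysis on top of signatures precisely because of this gap, which is why the "undetectable" claim is contested by the experts quoted below.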

Other cyber experts warn against exaggerating the tools. “My team has analyzed the tools and they really don’t live up to the hype that’s been surrounding them,” said Michael Sikorski, who heads up Unit 42 at cybersecurity firm Palo Alto Networks. “We see no evidence that cybercriminals are successfully using these tools.”

WormGPT and FraudGPT: How menacing are the new AI tools?

Cybersecurity experts suspect that publicly available language models underpin both tools. While OpenAI keeps its own models secret, other companies such as Facebook parent Meta and the London start-up Stability AI have released their language models. And while ChatGPT refuses to answer questions about how to hijack a company’s computer system or build cyber weapons, the open models are more forthcoming.

>> Read also: How the boss of a medium-sized company survived a cyber attack

But even models that are meant to be secure, such as ChatGPT, can be used for criminal purposes. On the darknet, hackers discuss tricks for circumventing the security mechanisms.

It doesn’t take specially trained hacking tools to help criminals today, Sikorski warned: “I believe that these tools in their current state have the same level of risk as all other AI tools on the market.” Hackers could just as easily fall back on established models and trick them into serving criminal purposes.

Artificial intelligence could also be used to defend against hacker attacks. “AI can be relentless; it’s a bit like the Terminator. It won’t stop,” says Sikorski. In this way, AI can also help make computer systems more secure and ward off attacks.

“We’re used to arms races with attackers,” said Gary Steele of San Francisco-based cloud company Splunk. Criminals are not the only ones upgrading their tactics with artificial intelligence.

More: AI chatbots like ChatGPT allow criminals to personalize their scams for millions of victims

