Deepfakes reach the voice: cybercriminals now have another tool for scams

by time news

2023-12-26 11:23:49

The Beatles have delighted fans around the world with the release of a new song: Artificial Intelligence (AI) made it possible to rescue an old recording and improve its sound quality. The same technology that can create masterpieces, however, can also be put to darker purposes, producing fake voices and images that can be used to carry out scams.

Fortunately, these deepfakes are not yet well developed and are still easily detectable, but they are already being used for fraud, and the problem will only get worse as the technology advances and the results improve significantly. OpenAI, the company that developed ChatGPT, has shown an application that generates voices from text. The system produces audio very similar to a real human voice: a new weapon for cybercriminals. The application lets users select between different voices before the text is spoken, which underlines how quickly this technology is evolving, and what that means for security.
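To give a sense of how simple voice generation has become, here is a minimal sketch using OpenAI's text-to-speech endpoint from its Python SDK as publicly documented at the time of writing; the model name, voice, input text, and output file are illustrative choices, not details taken from this article.

```python
# Minimal sketch: generating spoken audio from text with OpenAI's
# text-to-speech API (Python SDK). "tts-1" and "alloy" are among the
# publicly documented model/voice options; the input text is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.audio.speech.create(
    model="tts-1",    # standard-quality text-to-speech model
    voice="alloy",    # one of several selectable voices
    input="Hello, this is a demonstration of synthetic speech.",
)

# Save the generated audio to disk
response.stream_to_file("speech.mp3")
```

Legitimate uses aside, these few lines show why synthetic voices are now within reach of anyone with an API key, with no programming expertise required.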

There is currently no system that can produce deepfake voices indistinguishable from human speech, but more and more tools of this kind have been launched in recent months. Until recently, creating deepfakes required programming knowledge; now everything is simpler, and in the short term we will see models that are even easier to use and deliver high-quality results.

Fraud based on Artificial Intelligence is still rare, but there have been some notable cases. In mid-October 2023, Tim Draper, billionaire and founder of several venture capital funds, warned on social networks that AI is becoming increasingly intelligent and that a fake version of his voice was being used for fraud. Posing as Draper and trading on his reputation as an investor, the scammers asked victims to send cryptocurrency.

How can we protect ourselves from deepfakes?

For now, society does not perceive deepfakes as a threat because there have been few cases of malicious use. For the same reason, security solutions against them have been slow to appear.

The best protection is to listen carefully to what the person trying to reach us by voice is saying. If the audio is of poor quality, contains strange noises, or has a robotic edge, it should never be trusted.

Another way to detect a deepfake is to ask unexpected questions. Asking the caller about their favorite color, for example, can throw off an attacker, since it is not something victims usually ask. The scammer will try to answer as quickly as possible to avoid detection, but there will almost certainly be a delay that makes it clear someone is trying to deceive us.

The safest option is to install a trusted security solution on your devices: one that helps combat deepfakes, blocks malicious websites and unwanted downloads, and continuously scans the files on your devices.

“The main advice right now is not to obsess over these types of threats and continually look for deepfake voices where there are none. It is unlikely that current technology can create a voice that cannot be recognized as artificial. However, we must be aware of the threats that loom and prepare for the future. Deepfakes will soon be a new reality,” says Dmitry Anikin, senior data scientist at Kaspersky.
