Mexico City, 2026-02-12 00:46:00
AI-Generated Cartoons: A Viral Trend with Hidden Risks
The latest social media craze—personalized cartoons created with artificial intelligence—isn’t as harmless as it appears.
- ChatGPT-generated cartoons are rapidly gaining popularity online.
- Users are sharing surprisingly detailed personal information to create these images.
- Cybersecurity experts warn this data can be exploited for phishing, identity theft, and fraud.
- Simple precautions can help you enjoy the trend safely.
The internet is buzzing over a new trend: personalized cartoons generated by ChatGPT. In recent weeks, these images have gone viral on social networks, depicting users in their work environments, often surrounded by details reflecting their profession, lifestyle, and hobbies. The vibrant, customized artwork is proving incredibly popular, but experts caution that the very details making these cartoons appealing could also create significant digital security vulnerabilities.
What’s Driving the Trend?
To achieve highly personalized results, many users are including a wealth of information in their requests, such as their position, company, city of residence, daily routines, and even details they believe “the AI knows” about them. Some are even uploading photographs containing corporate logos, badges, documents, computer screens, or identifiable spaces like offices and building facades. While the resulting images are often attractive and generate positive engagement, this level of detail can inadvertently open the door to digital security risks.
The Potential Dangers
According to cybersecurity firm Kaspersky, sharing specific data on digital platforms can facilitate the creation of fake profiles or the design of more sophisticated attacks. When individuals publicly share work information, location details, or daily routines, they provide elements that malicious actors can exploit.
Specifically, this information can be used to:
- Create personalized phishing emails.
- Impersonate victims on social networks.
- Design corporate fraud schemes by posing as an employee or manager.
- Execute extortion attempts using real data to build trust.
A Kaspersky study, Digital Language, found that in Mexico nearly one in four users admits to not knowing how to recognize a fraudulent message. That vulnerability grows when criminals already hold detailed personal information. Moreover, many users accept terms and conditions without reviewing how platforms store or process their data.
How to Participate Safely
Cybersecurity specialists recommend several basic precautions before joining the trend. Avoid including your full name, employer, job title, address, or daily routines in prompts. Do not upload images containing logos, credentials, official documents, license plates, or screens displaying sensitive data. Refrain from sharing information or photographs of minors, and limit the amount of family data that could be used in emotional fraud schemes. Finally, always review the privacy policy and the permissions a platform requests before participating.
Experts also suggest activating two-step verification and reducing the amount of public information available on your social networks. In an increasingly connected digital environment, creativity and technology can coexist with security. The key is to share content responsibly, without revealing data that could become an open door for fraud.
