AI Video Realism: The Future & Risks

Since 2022, we’ve been using the prompt “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting” to test AI image generators like Midjourney. It’s time to bring that barbarian to life.

A muscular barbarian man holding an axe, standing next to a CRT television set. He looks at the TV, then to the camera and literally says, “You’ve been looking for this for years: a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting. Got that, Benj?”


The video above represents significant technical progress in AI media synthesis over the course of only three years. We’ve gone from a blurry, colorful still-image barbarian to a photorealistic man who talks to us in 720p high definition, with audio. Notably, there’s no reason to believe that technical capability in AI generation will slow down from here.

Horror film: A scared woman in a Victorian outfit running through a forest, dolly shot, being chased by a man in a peanut costume screaming, “Wait! You forgot your wallet!”


Trailer for The Haunted Basketball Train: a Tim Burton film where 1990s basketball star is stuck at the end of a haunted passenger train with basketball court cars, and the only way to survive is to make it to the engine by beating different ghosts at basketball in every car


ASMR video of a muscular barbarian man whispering slowly into a microphone, “You love CRTs, don’t you? That’s OK. It’s OK to love CRT televisions and barbarians.”


1980s PBS show about a man with a beard talking about how his Apple II computer can “connect to the world through a series of tubes”


A 1980s fitness video with models in leotards wearing werewolf masks


A female therapist looking at the camera, zoom call. She says, “Oh my lord, look at that Atari 800 you have behind you! I can’t believe how nice it is!”


With this technology, one can easily imagine a virtual world of AI personalities designed to flatter people. This is a fairly innocent example about a vintage computer, but you can extrapolate, making the fake person talk about any topic at all. There are limits due to Google’s content filters, but given what we’ve seen in the past, an uncensored version of a similarly capable AI video generator seems very likely to appear in the future.

AI Video Generation: The Barbarian and Beyond – A Conversation with Deepfake Expert Dr. Anya Sharma

Time.news: Dr. Sharma, thank you for joining us. The recent advancements in AI video generation, illustrated so vividly by the “muscular barbarian” example, are quite striking. What’s your take on this rapid progress?

Dr. Anya Sharma: Thanks for having me. What we’re seeing isn’t just linear improvement; it’s exponential. The jump in quality, realism, and control in just three years is honestly breathtaking. That “barbarian” video, engaging directly with the viewer, showcases a level of sophistication that was science fiction not long ago. It highlights the power of AI media synthesis.

Time.news: The original prompt, once used to generate a still image of a barbarian, has now come to life as seamless AI video. What does this mean for the future of entertainment and media?

Dr. Anya Sharma: The entertainment industry faces a monumental shift. Think personalized content creation, interactive narratives, and the ability to resurrect deceased actors for new roles. Independent filmmakers, advertisers, and educators now have access to powerful tools to bring their imagination to life without huge budgets. We may see the rise of highly customized stories that give viewers a more tailored cinematic experience. However, for creators, it also means navigating new copyright and intellectual property landscapes.

Time.news: The article also showcased examples like “The Haunted Basketball Train” trailer and the 1980s PBS show simulation. How easily can these AI-generated videos blur the lines between reality and fiction?

Dr. Anya Sharma: That’s one of the biggest concerns. The increasing realism poses a significant challenge to our perception of truth. The “Haunted Basketball Train” is whimsical, but imagine using this technology to create convincing but entirely fabricated news reports or political endorsements. We need to develop media literacy skills to critically assess content, constantly questioning its origins and authenticity. The key is understanding how easily such manipulation can occur.

Time.news: The piece also touches upon the potential for AI-generated content designed to flatter individuals. How significant is this and what are the threats?

Dr. Anya Sharma: While the example of the “female therapist praising an Atari 800” seems harmless, it exposes a darker potential. Imagine AI personalities designed solely to manipulate, deceive, or exploit individuals with fake reviews or comments. This opens the door to sophisticated scams, online harassment, and the erosion of trust in online interactions. Be careful about what you read in comment sections, because the commenter may not be another person at all but an auto-generated bot.

Time.news: Google, like many tech companies, has filters in place. But the article mentions the potential for “uncensored” versions of these AI video generators. How real is this concern?

Dr. Anya Sharma: It’s a very real concern. While responsible AI development is crucial, the open-source nature of many AI models means that these technologies can fall into the wrong hands. An uncensored AI video generation platform could be used to create malicious deepfakes, propagate misinformation, or generate harmful content without any safeguards. The global focus should be on proactively creating international frameworks and regulations that guide the ethical use of generative AI.

Time.news: What kind of regulatory framework could be an effective solution in this field?

Dr. Anya Sharma: Some possible solutions include digital video watermarks, AI-driven detection tools to identify deepfakes, and media literacy programs that empower individuals to critically assess online content. Most importantly, we need transparency from AI developers and clear labeling of AI-generated content. Legislation should take a proactive stance without stifling innovation.
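To make the labeling idea concrete, here is a minimal sketch of how a publisher might cryptographically bind an “AI-generated” label to a video file. This is not the C2PA Content Credentials standard or any production watermarking scheme; the file paths and signing key are hypothetical, and real provenance systems embed signed metadata in the media itself and rely on public-key infrastructure rather than a shared secret.

```python
# Minimal conceptual sketch of verifiable labeling for AI-generated media.
# Assumptions: a publisher holds a secret signing key and distributes a
# provenance string alongside the video. Not a real watermarking standard.
import hashlib
import hmac

SECRET_KEY = b"publisher-held-signing-key"  # hypothetical; real systems use PKI


def label_ai_generated(video_path: str) -> str:
    """Produce a provenance tag binding the label 'ai-generated' to the file's bytes."""
    with open(video_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    tag = hmac.new(SECRET_KEY, f"ai-generated:{digest}".encode(), hashlib.sha256).hexdigest()
    return f"ai-generated:{digest}:{tag}"


def verify_label(video_path: str, provenance: str) -> bool:
    """Check that the label matches the file and was issued with the signing key."""
    label, digest, tag = provenance.split(":")
    with open(video_path, "rb") as f:
        current = hashlib.sha256(f.read()).hexdigest()
    expected = hmac.new(SECRET_KEY, f"{label}:{digest}".encode(), hashlib.sha256).hexdigest()
    return current == digest and hmac.compare_digest(tag, expected)
```

The point of the sketch is that a label is only useful if it is verifiable: altering either the video or the label invalidates the tag, which is what makes clear labeling more than an honor system.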

Time.news: What practical advice do you have for our readers, especially considering the rapid advancement of AI video technology?

Dr. Anya Sharma: Be skeptical of everything you see online, especially videos. Verify information from multiple trusted sources. Educate yourself about deepfakes and synthetic media. Support organizations that promote media literacy and combat misinformation. Demand transparency from social media platforms and tech companies. We all have a role to play in navigating this new reality responsibly.

Time.news: Dr. Sharma, thank you for sharing your insights and for helping our readers understand the complexities of AI video generation.
