Instagram’s algorithm promotes dangerous content for teenagers

by times news cr

Meta actively promotes the spread of self-harm content on Instagram by not removing risky images and by encouraging users who have seen them to friend each other.

This is according to a new study, which found the platform's moderation "grossly inadequate", the Guardian newspaper reported.

Danish researchers created a network on the platform featuring fake profiles of people as young as 13, in which they shared 85 examples of self-harm content of progressively increasing severity, including blood, razor blades and more.

The purpose of the study is to verify Meta’s claim that it has considerably improved its systems for removing dangerous content, which it says now use artificial intelligence (AI). The tech company claims to remove about 99% of it before it’s even reported.

But Digitalt Ansvar (Digital Accountability), an institution that promotes responsible digital progress, found that not a single image was removed during the month-long experiment.

Instagram has access to technology capable of addressing the problem, but “has chosen not to implement it effectively,” the study said.

Inadequate moderation of the platform, warns Digitalt Ansvar, suggests that it does not comply with EU law.

A Meta spokesperson said: “Content that promotes self-harm is against our policies and we remove it when we find it. In the first half of 2024, we removed more than 12 million pieces of material related to suicide and self-harm on Instagram, 99% of which we proactively removed.”

However, the Danish study found that instead of trying to shut down the established network, Instagram’s algorithm actively supports its expansion. The research shows that 13-year-olds become friends with all the members of a “self-harm group” after contacting one of its members, writes BGNES.

What measures can social media platforms take to prevent the spread of self-harm content?

Interview: Addressing the Rise of Self-Harm Content on Instagram

Editor: Welcome to our Time.news interview series. Today, we’re delving into a pressing issue concerning the spread of self-harm content on Instagram. We’re joined by Dr. Mia Jensen, a digital safety expert and researcher at Digitalt Ansvar, whose recent study reveals alarming findings about Instagram’s moderation practices. Thank you for being here, Dr. Jensen.

Dr. Jensen: Thank you for having me. It’s crucial to discuss these findings, especially given the implications for young users on social media.

Editor: Your study discovered that Instagram’s system for moderating self-harm content is “grossly inadequate.” Can you elaborate on the methodology you used during this research?

Dr. Jensen: Certainly. We created a controlled network with fake profiles of users as young as 13 and shared 85 examples of self-harm content, ranging from photographs of blood to razor blades. The aim was to test Meta’s claims regarding their enhanced moderation using artificial intelligence. Our findings were shocking; not a single image was removed during our month-long experiment.

Editor: That’s alarming. Meta claims to have removed over 99% of such content before it gets reported. How does this contrast with what you found?

Dr. Jensen: Meta’s assertion is largely based on self-reported data. They emphasize proactive removal, but our study indicates that their algorithm does not just fail to remove harmful content; it appears to actively promote connections between users in self-harm groups. For instance, when a 13-year-old interacts with a single member of one of these groups, they often end up becoming friends with all members of that group.

Editor: This raises serious concerns about user safety. What implications do these findings have, particularly in the context of European Union regulations?

Dr. Jensen: The inadequacy in moderation suggests noncompliance with EU law, specifically regarding the protection of minors and harmful content online. There’s an urgent need for social media platforms to take their responsibilities seriously and implement effective moderation technologies that they already possess.

Editor: Given these insights, what practical advice would you give to parents concerned about their children’s social media use?

Dr. Jensen: Parents should start by discussing online safety with their children. Monitoring their social media interactions and being aware of what kind of content they are engaging with is essential. Additionally, it’s important to encourage open conversations about how to handle encounters with disturbing content. Parents can also use built-in tools on platforms like Instagram that allow them to restrict certain content and manage privacy settings.

Editor: That’s excellent advice. Lastly, what role do you see for researchers and other institutions in addressing these issues moving forward?

Dr. Jensen: Researchers can provide critical insights into how social media impacts mental health, especially among the youth. Policy advocacy is also vital in pushing for stricter regulations and greater accountability from tech companies. Collaboration between researchers, mental health professionals, and tech companies can help shape a safer digital environment for users.

Editor: Thank you, Dr. Jensen, for sharing your expertise and shedding light on this crucial topic. We hope this interview raises awareness and prompts action to create safer online spaces for everyone.

Dr. Jensen: Thank you for the opportunity to discuss these vital issues.
