Instagram: self-harm content promoted by Meta's algorithm despite its moderation policy, study finds

by Time.news

Meta says it removes 99% of self-harm related content from its social networks. However, Danish researchers assert in a damning study that the algorithm of the parent company of Facebook and Instagram actually promotes this type of content among young people, despite its moderation policy, The Guardian reported this Saturday. A mechanism that contravenes European digital regulations.

To carry out this study, researchers from Digitalt Ansvar – a Danish association that promotes responsible digital development – created a private self-harm network on Instagram, including fake profiles of minors. Over several days, they gradually shared a total of 85 pieces of content related to self-harm. Increasingly explicit, these publications showed, among other things, blood, razor blades and messages encouraging scarification.

In accordance with its moderation policy, Meta should have quickly removed the network. Instead, the Instagram algorithm actively contributed to its growth by promoting it to the profiles of minors, the researchers report. These young people, some as young as 13, were led to "follow" all of the fake profiles present on this private network. According to the study, this "suggests that Instagram's algorithm actively contributes to the formation and spread of self-harm networks."

"A phenomenon linked to suicide"

"We thought that by progressing gradually, we would reach a threshold where artificial intelligence or other tools would recognize or identify these images," Ask Hesby Holm, chief executive of Digitalt Ansvar, told The Guardian. "It's concerning because we thought they had some sort of mechanism trying to understand and identify this content," he added.

A clear lack of oversight that can "lead to serious consequences", Ask Hesby Holm argues. "This phenomenon is strongly associated with suicide. So, if no one reports or does anything about these groups, they go unnoticed by parents, authorities and those who can help them," he warns.

According to the Digitalt Ansvar chief executive, Meta does not moderate small private groups in order to keep traffic and engagement high. "We don't know if they moderate larger groups, but the problem is that self-harm groups are small," he says.

12 million pieces of content related to self-harm removed

The European Digital Services Act requires social platforms to identify and remove content that poses a physical or mental danger to minors.

"Content that encourages self-harm is against our policies, and we remove this content when we detect it. In the first half of 2024, we removed more than 12 million pieces of content related to suicide and self-harm on Instagram, 99% of which was removed proactively," a Meta spokesperson told The Guardian.

"Earlier this year, we launched Instagram Teen Accounts, which will put teens under the stricter umbrella of our sensitive content controls, so they are even less likely to be recommended sensitive content and, in many cases, we hide this content altogether," the company said.

How do social media algorithms contribute to mental health issues among teenagers?

Interview between Time.news Editor and Social Media Behavior Expert

Time.news Editor: Welcome to Time.news! Today, we're diving into a pressing issue that intersects technology, mental health, and social responsibility. We have with us Dr. Ella Jensen, a leading expert in social media behavior and its impacts on youth. Dr. Jensen, thank you for joining us.

Dr. Ella Jensen: Thank you for having me! I'm excited to discuss this critical topic.

Editor: Let's start with the recent findings from Danish researchers suggesting that Meta's algorithms may actually promote self-harm content among teenagers, despite the company's claim that it removes 99% of such posts. What are your thoughts on this contradiction?

Dr. Jensen: It's quite alarming. The algorithm is designed to maximize engagement, which can inadvertently prioritize sensational content, including self-harm. While Meta actively tries to moderate harmful content, the mechanisms powering these platforms sometimes backfire, especially among vulnerable users.

Editor: So, it's a case of unintended consequences?

Dr. Jensen: Precisely. The algorithms assess user interaction patterns, such as likes, shares and comments, and can amplify content that provokes strong emotions, even if it's detrimental. This is especially concerning in the context of mental health, where exposure to harmful content can lead to a vicious cycle of self-harm behavior among impressionable teens.

Editor: It's quite troubling. The researchers argue that the algorithms are essentially designed to keep users engaged at any cost. How can we strike a balance between engagement and safety?

Dr. Jensen: A challenging question, indeed. Striking that balance requires a combination of responsible algorithm design, more stringent content moderation, and transparency about how these algorithms work. Social media companies need to invest in mental health resources and collaborate with psychologists to develop healthier engagement strategies that prioritize user well-being.

Editor: Are there any effective strategies you've seen implemented that could serve as a model for companies like Meta?

Dr. Jensen: Some platforms have begun exploring user-centric moderation strategies. For instance, Twitter has tested prompts that encourage users to think before retweeting potentially harmful content. Additionally, creating safe spaces for open discussions about mental health and self-harm can shift the narrative and provide support rather than exacerbate the issue.

Editor: That makes a lot of sense. What role do parents and educators play in this scenario?

Dr. Jensen: Parental guidance and education are crucial. Teaching digital literacy and encouraging open discussions about online content can empower young people to navigate social media more responsibly. Schools can also play an important role by incorporating mental health education into their curriculum.

Editor: In light of the study's findings, what immediate actions do you believe Meta should take to address these concerns?

Dr. Jensen: Meta needs to enhance its content moderation technologies and conduct regular audits of its algorithms to prevent harmful content amplification. Engaging with mental health professionals and researchers to understand the psychological impacts of its platform could drive much-needed changes. Transparency in its moderation processes would also help regain trust from users and their families.

Editor: Thank you, Dr. Jensen. It's clear that while algorithms drive engagement, they need to be aligned with societal values, especially when it comes to the well-being of teenagers. Your insights are invaluable as we consider the complex interplay between technology and mental health.

Dr. Jensen: Thank you! I'm hopeful that by addressing these challenges collaboratively, we can create a safer online environment for everyone.

Editor: Thank you for joining us today on Time.news. This discussion highlights the importance of continued dialogue around technology's impact on mental health, and we look forward to seeing positive changes in the industry.
