For weeks, internet users have been having their pictures edited by Lensa – now the app is facing criticism over copyright, data protection and sexism.
Montage: Salvatore Saba/Berliner Zeitung
All you have to do is download an app and upload a few dozen photos of yourself: after about half an hour, it spits out up to 50 pictures created by artificial intelligence. At the end of last year this was a major hype on social media, which was flooded with such “avatar photos”. The best-known example is the Lensa AI app, which, according to the statistics platform Statista, was downloaded around six million times between November and December 2022.
What was long considered cool and funny is now increasingly being questioned: the AI technology behind the Lensa app carries risks – from the sexualization that some users report to a privacy policy that is questionable in places and, as so often, accepted unread. Artists also criticize the app, which often renders images in the style of a specific artist; they accuse its operators of violating copyright.
What is Lensa and who developed this app?
Lensa is a photo editing app that converts selfies into avatars, i.e. digital stand-ins whose appearance is based on that of the real person. The software can also be used to edit photos. The service is free only for the first seven days; after that, users pay 49.99 euros for an annual subscription. Nevertheless, the app quickly climbed to the top of the iPhone and Android store charts.
Even though Lensa has only been the subject of much discussion for a few months, the app is not new: Prisma Labs developed it back in 2018. The company is based in Silicon Valley and was founded in 2016 by five Russian software developers: Alexey Moiseenkov, Oleg Poyaganov, Ilya Frolov, Andrey Usoltsev and Aram Hardy. In 2018, Moiseenkov stepped down as CEO of Prisma Labs and left the company, which Usoltsev has headed since.
Why do users feel sexualized by the app?
It is now being said more and more often that female users in particular are being sexualized by the Lensa app. “Out of 100 avatars that were generated, 16 were shirtless and another 14 showed me in sheer dresses and in erotic poses,” wrote AI expert Melissa Heikkilä, who tried the app, in an article for the online magazine MIT Technology Review. “Funnily enough, I got more realistic portraits of myself after telling the app I was a man.”
Lensa said big boobs is all you’re good for pic.twitter.com/2tRIh7XhrM
— Lexi ⁷ (@lexinlindsey) December 9, 2022
Who reads the privacy policy?
There is also the question of what actually happens to the photos that are uploaded to the app for editing. The answer is in the privacy policy – which, experience shows, is rarely read in full. Until December 15, a look at that policy made one thing clear: yes, Lensa could also use the uploaded photos to further train the artificial intelligence behind the app.
The privacy policy initially stated that the photos would not be used for anything other than filters and effects. However, it listed a few exceptions, such as “the training of neural network algorithms” or use “to optimize and monitor the functionality of Lensa”. The provisions were then changed in mid-December – probably in part because of the criticism they had drawn. Today the policy reads: “We do not use your data to train artificial intelligence.”
What happened to the AI Act?
For years, the so-called AI Act – a regulation intended to protect users in various ways with regard to AI technologies – has been discussed in Germany and at EU level. However, lawmakers are struggling even with the basic question of how to define artificial intelligence. And with offerings such as Lensa, or the chatbot ChatGPT, which generates texts on its own, technological progress often moves so fast that it can hardly be regulated in time.
If you’ve recently been playing around with the Lensa App to make AI art “magic avatars” please know that these images are created with stolen art through the Stable Diffusion model. pic.twitter.com/VGrrECYVn5
— meg rae (@megraeart) December 2, 2022
How does the technology work?
And how does the technology in the Lensa app work? Lensa uses a technology known as Stable Diffusion (SD) – a deep-learning text-to-image generator, so to speak. Such software is mainly used to generate detailed images from textual descriptions, but it can also create cartoon-like avatars. For this to work, Stable Diffusion was trained on image-caption pairs from LAION-5B, a publicly available dataset derived from Common Crawl data scraped from the internet.
Put more simply, this means: the software is fed vast numbers of images automatically scraped from the internet – regardless of their origin. “Stable Diffusion managed to circumvent copyright on thousands of images by allegedly developing the technology for non-profit purposes,” artist Megan Rae Schroeder commented in a December tweet that went viral. “It has been claimed that SD is ‘ethical and legal,’ when the companies using this open-source model are actually making a profit.”
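For readers curious about the mechanics: “diffusion” models like Stable Diffusion learn to reverse a process in which an image is gradually drowned in random noise. The toy sketch below (plain Python with NumPy, not Prisma Labs’ actual code; the schedule numbers are purely illustrative) shows that forward noising step – the part the trained network learns to undo, guided by a text prompt.

```python
import numpy as np

# Forward "diffusion": an image is mixed with Gaussian noise over T steps.
# The trained network's job is to reverse this, step by step.
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # illustrative noise schedule
alphas_bar = np.cumprod(1.0 - betas)     # cumulative fraction of signal left

def noisy_image(x0, t, rng):
    """Sample the image after t noising steps: a weighted mix of
    the original image x0 and pure Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = np.ones((4, 4))                     # tiny stand-in for an image
early = noisy_image(x0, 10, rng)         # still almost entirely signal
late = noisy_image(x0, T - 1, rng)       # almost pure noise

# Early in the process most of the image survives; by the end
# virtually nothing of the original remains.
print(float(np.sqrt(alphas_bar[10])), float(np.sqrt(alphas_bar[T - 1])))
```

Generating an avatar runs this in reverse: starting from pure noise, the network removes a little noise at each step, steering the result toward the text description – or, in Lensa’s case, toward the uploaded selfies.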