Highlights
Nudify Applications Removal by Apple and Google
Apple and Google have begun removing AI-powered "nudify" apps from their app stores after investigations revealed that both platforms were profiting from tools used to create non-consensual sexual imagery.
Findings from the Tech Transparency Project
The Tech Transparency Project (TTP) reported that more than 100 apps capable of digitally undressing women were accessible to millions of users, despite both companies' policies prohibiting sexually explicit content.
Victimisation Cases in Minnesota
A separate CNBC investigation found that more than 80 women in Minnesota were victimised when their publicly available social media images were exploited to create sexualised deepfakes without their consent.
The Impact of Advanced AI Models
The TTP report highlighted that advancements in AI technology have simplified the process of generating explicit deepfake content, leading to the proliferation of these tools in consumer applications. Notably, fourteen of the reviewed applications originated in China.
Concerns Regarding Data Privacy
Katie Paul, director of TTP, noted that China’s data retention laws grant the government access to data from any company operating within its borders. This means that if someone’s images are involved in creating deepfake content using these apps, such data may ultimately end up in the Chinese government’s possession.
Actions Taken by Apple and Google
After the report was published on January 27, Apple confirmed to CNBC that it had removed 28 apps identified by TTP and issued formal warnings to additional developers. Google temporarily suspended multiple apps and later removed 31 from the Play Store during an ongoing assessment.
Download and Revenue Estimates
The TTP estimated that these applications collectively garnered over 705 million downloads and generated approximately $117 million in total revenue. Given that Apple and Google typically take a commission of up to 30% on in-app purchases, both companies likely profited from these abusive deepfake tools.
Broader Implications for AI Technology
This controversy has revived criticism of Elon Musk's Grok chatbot on the social media platform X. TTP researchers found that searching for "nudify" in Apple's App Store returned Grok as the top organic result.
Grok’s Activity During the Scandal
Additionally, Copyleaks estimated that Grok was generating roughly one non-consensual sexualised image per minute at the height of the controversy. In light of the global backlash, including a cease-and-desist letter from California’s attorney general and threats of a ban in the UK, xAI has limited Grok’s image-editing features to paid users and implemented geoblocking in certain areas.
Criticism of App Vetting Processes
The TTP report strongly criticised Apple and Google for their inadequate response to what it described as a "digital undressing spree" enabled by generative AI. Many of the flagged applications had passed standard review processes and carried age ratings marking them as suitable for children as young as four or nine years old.
Calls for Regulatory Changes
Michelle Kuppersmith, executive director of the nonprofit overseeing TTP, argued that Apple and Google should be effectively vetting the applications in their stores. The findings have prompted demands from US senators and civil society groups for a comprehensive ban on apps that enable technology-facilitated sexual abuse, coinciding with increasing scrutiny of platform safety regulations in the UK, Europe, and India.
