AI deepfake nude services skyrocket in popularity: Research
Social media analytics company Graphika has stated that the use of “AI undressing” is increasing.
The practice involves using generative artificial intelligence (AI) tools fine-tuned to remove clothing from images supplied by users.
According to the report, Graphika measured the number of comments and posts on Reddit and X containing referral links to 34 websites and 52 Telegram channels offering synthetic NCII services. These totaled 1,280 in 2022 compared with over 32,100 so far this year, representing a 2,408% year-on-year increase in volume.
Synthetic NCII services refer to the use of artificial intelligence tools to create Non-Consensual Intimate Images (NCII), often involving the generation of explicit content without the consent of the individuals depicted.
Graphika states that these AI tools make generating realistic explicit content at scale easier and more cost-effective for many providers. Without them, customers would have to manage their own custom image diffusion models, which is time-consuming and potentially expensive.
Graphika warns that the increasing use of AI undressing tools could lead to the creation of fake explicit content and contribute to issues such as targeted harassment, sextortion, and the production of child sexual abuse material (CSAM).
While undressing AIs typically focus on pictures, AI has also been used to create video deepfakes using the likeness of celebrities, including YouTube personality Mr. Beast and Hollywood actor Tom Hanks.
In a separate report in October, UK-based internet watchdog organization the Internet Watch Foundation (IWF) noted that it found more than 20,254 images of child abuse on a single dark web forum in just one month. The IWF warned that AI-generated child pornography could “overwhelm” the internet.
Due to advancements in generative AI imaging, the IWF cautions that distinguishing deepfake pornography from authentic images has become more challenging.
In a June 12 report, the United Nations called artificial intelligence-generated media a “serious and urgent” threat to information integrity, particularly on social media. The European Parliament and Council negotiators agreed on the rules governing the use of AI in the European Union on Friday, Dec 8.