#ai-deepfakes — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #ai-deepfakes, aggregated by home.social.
-
AI Nudification: The 55% Stat Parents Can’t Ignore
Originally published on May 15th, 2026 at 07:00 am
How AI Nudification Became the New Adolescent Normal
From Virtual Fitting Rooms to Digital Danger
Generative AI (GenAI) was supposed to be our creative co-pilot. We didn’t see AI Nudification coming.
We marveled at its ability to turn text into art and embraced “virtual try-on” applications that allowed us to see how clothing might fit using nothing more than a smartphone camera. But as a tech ethicist, I’ve watched this innovation take a dark, predatory turn. While the underlying technology, specifically “inpainting,” is legitimate, its application in adolescent circles has reached a terrifying tipping point.
We are no longer talking about a few “tech-savvy” outliers; we are witnessing the mass-normalization of AI-generated Child Sexual Exploitation Material (CSEM) among teenagers. This isn’t just the next stage of digital growing pains. It’s a fundamental shift in how the first generation of “AI adolescents” navigates consent, identity, and digital harm.
Takeaway 1: The “Scaling Gap” and the New AI Nudification Normal
For years, educators and parents tracked the steady rise of traditional “sexting.” Historical meta-analyses placed adolescent creation and receipt of self-generated sexual imagery at roughly 14.8% and 27.4%, respectively. The latest data reveal a staggering “scaling gap” that should alarm every stakeholder in digital safety.
Today, GenAI has effectively quadrupled the creation rate and doubled the receipt rate. According to a nationally representative survey of 13-to-17-year-olds:
- 55.3% of adolescents have used AI “nudification” tools to create sexualized images of themselves.
- 54.4% have received these images.
What was once a niche behavior has become a majority experience. This isn’t just a technological update to sexting; it is a total normalization of CSEM production as a routine part of adolescent sexual exploration.
Are you an LPC in need of continuing education? Dr. Weeks has a course on this material and many other unique and interesting topics.
In the course, “The Prevalence of Youth-Produced Image-Based Sexual Abuse,” Dr. Weeks teaches how child digital safety is undergoing a paradigm shift and how changes in image-based sexual abuse (IBSA) require adaptation, and proposes a framework for conceptualizing IBSA.
Takeaway 2: Nudification vs. Creation – The Personal Toll of Inpainting
It is vital to understand the technical nuance that makes this trend so invasive. There is a massive difference between general text-to-image GenAI (which creates an image from a prompt) and “nudification” tools. These tools utilize a technique called inpainting, which modifies a pre-existing, real photo.
The survey found that usage of these specific nudification tools is significantly higher than that of general text-to-image creation tools. This is precisely why the victimization is so direct: it requires the likeness of a real person. As the study notes, these tools are designed to:
“…visualize what individuals might look like without clothing.”
By using a real individual as a “basis image,” the technology allows for the digital removal of clothing, turning a casual school photo into CSEM in seconds. The distinction between a “fake” image and a “real” person is erased, leading to a profound degree of direct victimization.
Are you exploring your trauma? Do you feel your childhood experiences were detrimental to your current mental or physical health? Utilize this free, validated, self-report questionnaire to find out.
Take the Adverse Childhood Experience (ACE) Questionnaire
Takeaway 3: The High Cost of Non-Consensual “Deepfakes”
The most heartbreaking aspect of this shift is the erosion of consent. The data highlights a crisis of victimization: 36.3% of participants reported having a non-consensual image of themselves created, and 33.2% had such an image shared without their permission.
Victims describe a visceral sense of “powerlessness” and “dehumanization.” When your likeness can be hijacked and sexualized without your involvement, it leads to a state of constant hypervigilance. Crucially, these statistics represent a lower bound of the crisis. Because the study only measured peer-to-peer actions, it does not account for images created by adults exploiting minors or images of children under the age of 13. If those variables were included, the scale of victimization would likely skyrocket.
Takeaway 4: The Gender and Age Myths Around AI Nudification Debunked
We often fall into the trap of thinking digital crises are limited to specific subcultures or older teens. The data tells a different story. The usage of AI nudification tools is remarkably uniform across all demographics: race, region, and sexual orientation showed no statistically significant differences in prevalence. This is a universal adolescent issue.
While male participants showed higher rates of regular (frequent) creation and distribution, the most startling finding was the age breakdown. There was no statistically significant difference in usage between 13-year-olds and 17-year-olds. This destroys the myth that we can wait until high school to talk about AI safety. To be effective, digital literacy and intervention must begin before age 13, as younger adolescents are already engaging with these tools at the same rates as their older peers.
Learn why it’s important for everyone, especially teens, to be able to control their online experiences. Dick Pic Culture: How Do Teenage Girls Navigate It?
Takeaway 5: A Legal and Ethical Gray Zone
We must call these images what they are: CSEM. Under federal law (18 U.S. Code § 1466A), the production and distribution of pornographic GenAI images of minors is illegal, regardless of whether the image is “real.”
This puts policymakers in an ethical bind.
We are currently seeing thousands of adolescents technically committing federal crimes as part of “exploratory” peer behavior. Ethicists and lawmakers are now forced to debate whether we need legal “carve-outs” for consensual, same-age peer interactions, or if the permanent digital harm of these images necessitates strict criminal enforcement. Meanwhile, “gray market” apps continue to bypass app store controls, providing easy access to nudification tools without any meaningful age verification.
Conclusion: A Call for Proactive Digital Literacy
The window for intervention is narrow but still open. Because much of the current usage is reported as “exploratory” rather than “habitual,” we have a brief opportunity to steer this generation toward a more ethical digital future. However, our response cannot be reactive. We need multimodal education that doesn’t just teach “online safety” but addresses the profound ethical weight of AI tools and the lifelong impact of non-consensual sharing.
Final Thought: As we enter an era where a child’s likeness can be permanently decoupled from their consent in a matter of clicks, we must ask: Are our legal and educational frameworks fundamentally incompatible with this new reality, or are we simply too slow to protect the first generation of AI adolescents?
Are you a professional looking to stay up-to-date with the latest information on sex addiction, trauma, and mental health news and research? Or maybe you’re looking for continuing education courses? Then you should stay up-to-date with all of Dr. Jen’s work through her practice’s newsletter!
Are you looking for more reputable, data-backed information on sexual addiction? The Mitigation Aide Research Archive is an excellent source for executive summaries of research studies.
#AdolescentDigitalSafety #AIDeepfakes #AIEthics #AINudification #CSEM #DeepfakeAbuse #DigitalConsent #DigitalLiteracy #GenerativeAI #NonConsensualImages #OnlineSafetyForParents #ParentEducation #TeenSexting #TeenTechnologyRisks #YouthOnlineSafety -
The Rise of #DeclinePorn: How #AIdeepfakes are Weaponising #UrbanDecay
-
Regulators are zeroing in on AI deepfakes, but the quieter threat lives in everyday chatbots, wearables, and subtle algorithmic nudges. How will Meta, Google and others shape the rules? Dive into the hidden risks shaping our digital lives. #AIDeepfakes #ConversationalAgents #AlgorithmicPersuasion #TechRegulation
🔗 https://aidailypost.com/news/regulators-focus-ai-deepfakes-while-everyday-whispers-pose-unseen-risk
-
Samsung’s new ticket campaign is being called a ‘train wreck’ as AI deepfakes flood brand ads. What does this mean for authenticity, creative control, and the unclear limits of AI‑generated content? Dive into the clash of marketing tech and ethics in our latest industry analysis. #AIDeepfakes #SamsungAds #AIgeneratedContent #MarketingTech
🔗 https://aidailypost.com/news/ai-deepfakes-dubbed-train-wreck-samsung-sells-tickets-ai-limits
-
New research shows that simple transparency warnings don’t stop the spread of AI‑deepfakes. Even with labels, synthetic videos still mislead, raising questions about content authenticity and the limits of C2PA standards. What does this mean for open‑source tools fighting misinformation? Dive into the findings. #AIDeepfakes #TransparencyWarnings #SyntheticContent #C2PA
🔗 https://aidailypost.com/news/study-finds-transparency-warnings-fail-curb-ai-deepfake-harm
-
Please Sir May I have Some More! We knew this was coming.
The EU opens a formal investigation into X and its AI chatbot Grok over the posting of large volumes of nonconsensual sexualized deepfake images. Related: A separate ongoing investigation has been expanded to cover X’s recommendation systems and the switch to using Grok to choose which posts users see. https://apnews.com/article/elon-musk-x-grok-ai-deepfakes-sexual-c1a3039e5aaeb4dd517d995b8b301537 #SocialMedia #X #Grok #EU #Investigation #AI #ChatBot #DeepFakes #AIDeepFakes #DSA #DigitalServicesAct
-
Grok offensive: How an AI ‘undressing’ tool sparked bans and investigations https://english.mathrubhumi.com/technology/x-grok-ai-deepfake-restrictions-btnmaasx?utm_source=dlvr.it&utm_medium=mastodon #Grok #XPlatform #AIDeepfakes #OnlineSafety #TechRegulation
-
Woman felt 'dehumanised' after Musk's #Grok AI used to digitally remove her clothes
It has been criticised for allowing users to generate photos and videos with nudity and sexualised content; it was previously accused of making a sexually explicit clip of Taylor Swift. Clare McGlynn, a law professor at #DurhamUniversity, said X or #Grok "could prevent these forms of abuse if they wanted to", adding they "appear to enjoy impunity".
#AIdeepfakes
Laura Cress
Technology reporter -
AI Deepfakes Target Physicists to Push Claim That Comet Is an Alien Spacecraft
https://atlas.whatip.xyz/post.php?slug=ai-deepfakes-targets-physicists-to-push-claim-that-comet-is-an-alien-spacecraft
"Unveiling the Cosmic Deception: AI Deepfakes Targeting Physicists in Alien Comet Saga"
#space #astronomy #aideepfakes #comet #michiokaku -
Global crackdowns target crypto scams and AI deepfake fraud - Vietnamese police arrest crypto mining scammers, Springfield warns about... - https://cointelegraph.com/news/global-crackdown-crypto-scams-ai-deepfake-fraud #springfieldcryptoatm #hongkongcybercrime #cryptominingfraud #cryptoscams #aideepfakes
-
The first is that all generative AI engines have to be trained on pre-existing data before they can start “creating” stuff.
Read more 👉 https://bit.ly/3Nyr1sr
#AiImages #AiDeepfakes #AiGeneratedArt #GenerativeAI #GenerativeAiUsage
-
The writing on the yellow hat is far too unfocused, which is a good indicator of generative AI usage.
Read more 👉 https://bit.ly/3Nyr1sr
#AiImages #AiDeepfakes #AiGeneratedArt #GenerativeAI #GenerativeAiUsage
-
That doesn’t guarantee that it’s generative AI, but it’s a good indicator.
Read more 👉 https://bit.ly/3Nyr1sr
#AiImages #AiDeepfakes #AiGeneratedArt #GenerativeAI #GenerativeAiUsage
-
The objective answer could be that it’s a generative AI image.
Read more 👉 https://bit.ly/3Nyr1sr
#AiImages #AiDeepfakes #AiGeneratedArt #GenerativeAI #GenerativeAiUsage
-
Jake Tapper at CNN has a very enlightening piece about the risks of deep-fake images and videos created with generative AI.
Read more 👉 https://bit.ly/3Nyr1sr
#AiImages #AiDeepfakes #AiGeneratedArt #GenerativeAI #GenerativeAiUsage
-
Debunking Generative AI Images: What You Need To Know
▸ https://bit.ly/3Nyr1sr
#AiImages #AiDeepfakes #AiGeneratedArt #GenerativeAI #GenerativeAiUsage
-
AI deepfake attacks will extend beyond videos and audio: Security firms - AI-powered deepfake scams are rising, targeting crypto wallets. Experts ... - https://cointelegraph.com/news/ai-deepfake-scams-threaten-crypto-wallets-2024 #facialrecognition #cryptosecurity #walletsecurity #cybersecurity #deepfakescams #digitalassets #web3security #aideepfakes #cryptoscams #cryptofraud #aiscams
-
AI deepfake crypto scammers ramp up operations in Q2: Gen Digital - Gen Digital’s Siggi Stefnisson warned users that scammers are adept at u... - https://cointelegraph.com/news/ai-deepfake-crypto-scams-2024 #vitalikbuterin #michaelsaylor #youtubehacks #aideepfakes #cryptoscams #cryptocore #gendigital #elonmusk #spacex
-
Harmful “nudify” websites used Google, Apple, and Discord sign-on systems - https://arstechnica.com/?p=2046575 #aideepfakes #syndication #google #policy #apple #ai
-
Google won’t downrank top deepfake porn sites unless victims mass report - Today, Google an... - https://arstechnica.com/?p=2040372 #non-consensualintimateimagery #explicitdeepfakes #deepfakeporn #googlesearch #aideepfakes #aiseximages #google #policy
-
New bill suggests thwarting AI copycats, deepfakes with watermarks - A bipartisan group of senators has introduced a new bill aimed at tackli... - https://cointelegraph.com/news/new-bill-suggests-standardized-watermark-content-combat-ai-deepfakes #aigeneratedcontent #ailegalproblems #aiwatermark #aideepfakes #aicopyright #deepfakes #aiprivacy #aidata #ailaw
-
Court ordered penalties for 15 teens who created naked AI images of classmates - A Sp... - https://arstechnica.com/?p=2036279 #childsexualabusematerials #artificialintelligence #aiimagegenerators #aichildseximages #faceswapping #fakeaiimages #generativeai #aideepfakes #nudityapps #policy #csam #ai
-
AI trained on photos from kids’ entire childhood without their consent - Photos of Brazilian kids—... - https://arstechnica.com/?p=2030521 #artificialintelligence #childrensonlineprivacy #onlinechildsafety #generativeai #aideepfakes #aidataset #laion-5b #policy #openai #laion #ai
-
Children’s photos are being ‘illegally used to train AI’ - Personal photos of Brazilian children are being used without their knowledge or co... - https://readwrite.com/childrens-photos-are-being-illegally-used-to-train-ai/ #aideepfakes #stabilityai #security #laion #news #ai
-
Many will argue that the essence of podcasting—its human touch, the nuances of storytelling, and the emotional connection it fosters—cannot ever be replicated by machines.
Read more 👉 https://lttr.ai/AS8Ww
#VoiceCloning #ArtificialIntelligence #AiDeepfakes #Podcastlife #podcasting #SyntheticVoiceTechnology #podcastcommunity #newpodcastalert #ReachMillions