#digitalliteracy — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #digitalliteracy, aggregated by home.social.
-
AI Nudification: The 55% Stat Parents Can’t Ignore
Originally published on May 15th, 2026 at 07:00 am
How AI Nudification Became the New Adolescent Normal
From Virtual Fitting Rooms to Digital Danger
Generative AI (GenAI) was supposed to be our creative co-pilot. We didn’t see AI Nudification coming.
We marveled at its ability to turn text into art and embraced “virtual try-on” applications that allowed us to see how clothing might fit using nothing more than a smartphone camera. But as a tech ethicist, I’ve watched this innovation take a dark, predatory turn. While the underlying technology, specifically “inpainting,” is legitimate, its application in adolescent circles has reached a terrifying tipping point.
We are no longer talking about a few “tech-savvy” outliers; we are witnessing the mass-normalization of AI-generated Child Sexual Exploitation Material (CSEM) among teenagers. This isn’t just the next stage of digital growing pains. It’s a fundamental shift in how the first generation of “AI adolescents” navigates consent, identity, and digital harm.
Takeaway 1: The “Scaling Gap” and the New AI Nudification Normal
For years, educators and parents tracked the steady rise of traditional “sexting.” Historical meta-analyses placed adolescent creation and receipt of self-generated sexual imagery at roughly 14.8% and 27.4%, respectively. This latest data reveals a staggering “scaling gap” that should alarm every stakeholder in digital safety.
Today, GenAI has nearly quadrupled the creation rate and doubled the receipt rate. According to a nationally representative survey of 13-to-17-year-olds:
- 55.3% of adolescents have used AI “nudification” tools to create sexualized images of themselves.
- 54.4% have received these images.
What was once a niche behavior has become a majority experience. This isn’t just a technological update to sexting; it is a total normalization of CSEM production as a routine part of adolescent sexual exploration.
Are you an LPC in need of continuing education? Dr. Weeks has a course on this material and many other unique and interesting topics.
In the course “The Prevalence of Youth-Produced Image-Based Sexual Abuse,” Dr. Weeks explains the paradigm shift underway in child digital safety, shows how changes in image-based sexual abuse demand adaptation, and proposes a framework for conceptualizing IBSA.
Takeaway 2: Nudification vs. Creation – The Personal Toll of Inpainting
It is vital to understand the technical nuance that makes this trend so invasive. There is a massive difference between general text-to-image GenAI (which creates an image from a prompt) and “nudification” tools. These tools utilize a technique called inpainting, which modifies a pre-existing, real photo.
The survey found that usage of these specific nudification tools is significantly higher than traditional AI content creation. This is precisely why the victimization is so direct: it requires the likeness of a real person. As the study notes, these tools are designed to:
“…visualize what individuals might look like without clothing.”
By using a real individual as a “basis image,” the technology allows for the digital removal of clothing, turning a casual school photo into CSEM in seconds. The distinction between a “fake” image and a “real” person is erased, leading to a profound degree of direct victimization.
Are you exploring your trauma? Do you feel your childhood experiences were detrimental to your current mental or physical health? Use this free, validated self-report questionnaire to find out.
Take the Adverse Childhood Experience (ACE) Questionnaire
Takeaway 3: The High Cost of Non-Consensual “Deepfakes”
The most heartbreaking aspect of this shift is the erosion of consent. The data highlights a crisis of victimization: 36.3% of participants reported that a non-consensual image of them had been created, and 33.2% reported that such an image had been shared without their permission.
Victims describe a visceral sense of “powerlessness” and “dehumanization.” When your likeness can be hijacked and sexualized without your involvement, it leads to a state of constant hypervigilance. Crucially, these statistics represent a lower bound of the crisis. Because the study only measured peer-to-peer actions, it does not account for images created by adults exploiting minors or images of children under the age of 13. If those variables were included, the scale of victimization would likely skyrocket.
Takeaway 4: The Gender and Age Myths Around AI Nudification Debunked
We often fall into the trap of thinking digital crises are limited to specific subcultures or older teens. The data tells a different story. The usage of AI nudification tools is remarkably uniform across all demographics: race, region, and sexual orientation showed no statistically significant differences in prevalence. This is a universal adolescent issue.
While male participants showed higher rates of regular (frequent) creation and distribution, the most startling finding was the age breakdown. There was no statistically significant difference in usage between 13-year-olds and 17-year-olds. This destroys the myth that we can wait until high school to talk about AI safety. To be effective, digital literacy and intervention must begin before age 13, as younger adolescents are already engaging with these tools at the same rates as their older peers.
Learn why it’s important for everyone, especially teens, to be able to control their online experiences: Dick Pic Culture: How Do Teenage Girls Navigate It?
Takeaway 5: A Legal and Ethical Gray Zone
We must call these images what they are: CSEM. Under federal law (18 U.S. Code § 1466A), the production and distribution of pornographic GenAI images of minors is illegal, regardless of whether the image is “real.”
This puts policymakers in an ethical bind.
We are currently seeing thousands of adolescents technically committing federal crimes as part of “exploratory” peer behavior. Ethicists and lawmakers are now forced to debate whether we need legal “carve-outs” for consensual, same-age peer interactions, or if the permanent digital harm of these images necessitates strict criminal enforcement. Meanwhile, “gray market” apps continue to bypass app store controls, providing easy access to nudification tools without any meaningful age verification.
Conclusion: A Call for Proactive Digital Literacy
The window for intervention is narrow but still open. Because much of the current usage is reported as “exploratory” rather than “habitual,” we have a brief opportunity to steer this generation toward a more ethical digital future. However, our response cannot be reactive. We need multimodal education that doesn’t just teach “online safety” but addresses the profound ethical weight of AI tools and the lifelong impact of non-consensual sharing.
Final Thought: As we enter an era where a child’s likeness can be permanently decoupled from their consent in a matter of clicks, we must ask: Are our legal and educational frameworks fundamentally incompatible with this new reality, or are we simply too slow to protect the first generation of AI adolescents?
Are you a professional looking to stay up-to-date with the latest information on sex addiction, trauma, and mental health news and research? Or maybe you’re looking for continuing education courses? Then follow all of Dr. Jen’s work through her practice’s newsletter!
Are you looking for more reputable, data-backed information on sexual addiction? The Mitigation Aide Research Archive is an excellent source for executive summaries of research studies.
#AdolescentDigitalSafety #AIDeepfakes #AIEthics #AINudification #CSEM #DeepfakeAbuse #DigitalConsent #DigitalLiteracy #GenerativeAI #NonConsensualImages #OnlineSafetyForParents #ParentEducation #TeenSexting #TeenTechnologyRisks #YouthOnlineSafety
“…visualize what individuals might look like without clothing.”
By using a real individual as a “basis image,” the technology allows for the digital removal of clothing, turning a casual school photo into CSEM in seconds. The distinction between a “fake” image and a “real” person is erased, leading to a profound degree of direct victimization.
Are you exploring your trauma? Do you feel your childhood experiences were detrimental to your current mental or physical health? Utilize this free, validated, self-report questionnaire to find out.
Take the Adverse Childhood Experience (ACE) Questionnaire
Takeaway 3: The High Cost of Non-Consensual “Deepfakes”
The most heartbreaking aspect of this shift is the erosion of consent. The data highlights a crisis of victimization: 36.3% of participants reported having a non-consensual image of them created, and 33.2% had such an image shared without their permission.
Victims describe a visceral sense of “powerlessness” and “dehumanization.” When your likeness can be hijacked and sexualized without your involvement, it leads to a state of constant hypervigilance. Crucially, these statistics represent a lower bound of the crisis. Because the study only measured peer-to-peer actions, it does not account for images created by adults exploiting minors or images of children under the age of 13. If those variables were included, the scale of victimization would likely skyrocket.
Takeaway 4: The Gender and Age Myths Around AI Nudification Debunked
We often fall into the trap of thinking digital crises are limited to specific subcultures or older teens. The data tells a different story. The usage of AI nudification tools is remarkably uniform across all demographics: race, region, and sexual orientation showed no statistically significant differences in prevalence. This is a universal adolescent issue.
While male participants showed higher rates of regular (frequent) creation and distribution, the most startling finding was the age breakdown. There was no statistically significant difference in usage between 13-year-olds and 17-year-olds. This destroys the myth that we can wait until high school to talk about AI safety. To be effective, digital literacy and intervention must begin before age 13, as younger adolescents are already engaging with these tools at the same rates as their older peers.
Learn why it’s important for everyone, especially teens, to be able to control their online experiences. Dick Pic Culture: How do Teenage Girls Navigate it?
Takeaway 5: A Legal and Ethical Gray Zone
We must call these images what they are: CSEM. Under federal law (18 U.S. Code § 1466A), the production and distribution of pornographic GenAI images of minors is illegal, regardless of whether the image is “real.”
This puts policymakers in an ethical bind.
We are currently seeing thousands of adolescents technically committing federal crimes as part of “exploratory” peer behavior. Ethicists and lawmakers are now forced to debate whether we need legal “carve-outs” for consensual, same-age peer interactions, or if the permanent digital harm of these images necessitates strict criminal enforcement. Meanwhile, “gray market” apps continue to bypass app store controls, providing easy access to nudification tools without any meaningful age verification.
Conclusion: A Call for Proactive Digital Literacy
The window for intervention is narrow but still open. Because much of the current usage is reported as “exploratory” rather than “habitual,” we have a brief opportunity to steer this generation toward a more ethical digital future. However, our response cannot be reactive. We need multimodal education that doesn’t just teach “online safety” but addresses the profound ethical weight of AI tools and the lifelong impact of non-consensual sharing.
Final Thought: As we enter an era where a child’s likeness can be permanently decoupled from their consent in a matter of clicks, we must ask: Are our legal and educational frameworks fundamentally incompatible with this new reality, or are we simply too slow to protect the first generation of AI adolescents?
Are you a professional looking to stay up-to-date with the latest information on sex addiction, trauma, and mental health news and research? Or maybe you’re looking for continuing education courses? Then follow all of Dr. Jen’s work through her practice’s newsletter!
Are you looking for more reputable, data-backed information on sexual addiction? The Mitigation Aide Research Archive is an excellent source for executive summaries of research studies.
#AdolescentDigitalSafety #AIDeepfakes #AIEthics #AINudification #CSEM #DeepfakeAbuse #DigitalConsent #DigitalLiteracy #GenerativeAI #NonConsensualImages #OnlineSafetyForParents #ParentEducation #TeenSexting #TeenTechnologyRisks #YouthOnlineSafety -
Maintenance begins at creation, so why are we not creating better?
by @beet_keeper
The beats are the same. You work for government or academia (let’s face it, that’s probably where 90% of the work is): you have a deliverable; you save it; you print to PDF; you store it on an institutional repository with some metadata (or Zenodo, OSF, or an equivalent); and it’s done.
There’s a small chance that it’s FAIR (Findable, Accessible, Interoperable, Reusable), right? It has metadata that can be discovered by an audience looking for it and can be indexed by search engines. The data is potentially accessible if published correctly. But PDFs are not particularly interoperable or easily converted, and they aren’t really designed for reuse, even if tools like Apache Tika help ease the burden of extracting artifacts. It’s just a PDF, so why are we even talking about FAIR? There begins a story…
The beats are the same. Yet we work in digital preservation; our backgrounds are in GLAM or software. Why do we want to shoot ourselves in the foot? Why are we not using our skills to create better?
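The point about creating for maintenance can be made concrete with a small sketch: alongside the PDF, emit a machine-readable metadata record at creation time. This is an illustrative example only; the field names below are hypothetical, loosely modeled on common repository deposit schemas (Zenodo, DataCite), so check the target repository’s actual schema before relying on them.

```python
import json

# Hypothetical FAIR-style metadata record for a deliverable.
# Field names are illustrative, loosely modeled on repository
# deposit schemas (e.g. Zenodo/DataCite) -- not a real schema.
record = {
    "title": "Quarterly deliverable report",
    "creators": [{"name": "Doe, Jane", "affiliation": "Example University"}],
    "publication_date": "2026-05-01",
    "description": "Report produced for the Example project.",
    "keywords": ["digital preservation", "FAIR"],
    "license": "CC-BY-4.0",
    # Publish a reusable source format alongside the print-to-PDF copy.
    "formats": ["application/pdf", "text/markdown"],
}

# Serialising the record as JSON makes the deliverable Findable and
# Interoperable by machines, even if the payload itself is "just a PDF".
metadata_json = json.dumps(record, indent=2, sort_keys=True)
print(metadata_json)
```

Writing this sidecar record at creation time, rather than reconstructing it at deposit time, is the “maintenance begins at creation” habit in miniature.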
#Archives #BetterPoster #ContinuumModel #createToMaintain #digipres #DigitalArchiving #DigitalContunuity #digitalLiteracy #DigitalPreservation #FAIR #FileFormats #GLAM #informationRecordsMangagement #NationalDigitalStewardshipAlliance #NDSA #OpenAccess #OpenData #PDF #RDM #ResearchDataLifecycle #RIM -
Spuren im Netz (Traces on the Net): How my #Daten are collected online and what I can do about it
I’m giving a workshop on this topic for the Career Service of @tudresden as part of the Study Smart project, which aims to promote academic success and ease the transition into the job market by teaching, among other things, fundamental digital and transformative societal skills.
👪 TU Dresden students
🕖 4:00–6:00 p.m.
📍 Fritz-Förster-Bau, TU Dresden
💰 free of charge
A few places are still available! Registration (students only, via Opal):
https://tu-dresden.de/studium/im-studium/career-service/semesterprogramm
#FutureSkills #DigitalLiteracy #Dresden #Medienkompetenz #Tracking #Datenschutz
-
Texas A&M: Digital use linked to higher employment and earnings for women, global study finds. “Women who use digital technologies are more likely to be employed and tend to earn more than those who do not, according to a new World Bank study co-authored by Dr. Raymond Robertson of Texas A&M University. The analysis also finds the link between digital adoption and employment is stronger for […]
https://rbfirehose.com/2026/05/09/texas-am-digital-use-linked-to-higher-employment-and-earnings-for-women-global-study-finds/ -
🇷🇴 For the Romanian film community: if you’re searching for “seriale indiene subtitrate în română 2026” (Indian series with Romanian subtitles, 2026), the platform https://www.indianul.com/ offers a clean, up-to-date experience.
What it offers:
• New series: Jagriti, Vasudha, Miley Jab Hum Tum
• Bollywood films with Shah Rukh Khan, Salman Khan
• Professional Romanian subtitles (not machine-generated!)
→ Frequent updates = a freshness signal for the algorithms.
#IndianCinema #RomanianSubtitles #OpenWeb #DigitalLiteracy #SEOFriendly
-
Modern Ghana: The Gambia, ECOWAS launch West Africa’s first strategic centre to combat misinformation, disinformation. “The Government of The Gambia, in collaboration with the Economic Community of West African States (ECOWAS) Commission, has launched the National Misinformation and Disinformation Response Centre, the first of its kind in West Africa.”
https://rbfirehose.com/2026/04/30/modern-ghana-the-gambia-ecowas-launch-west-africas-first-strategic-centre-to-combat-misinformation-disinformation/ -
Coding #LLM's are receiving a lot of attention now, a marked change from the mass market #chatGPT era of "AI". This is poetic justice of sorts.
The general public has limited #digitalliteracy and agency. Technical communities can better "dogfood" their own creations and now they are doing it.
So while attitudes toward coding LLMs vary from hyperventilating euphoria to apocalyptically tinted rejection, it seems the verdict on their utility, tradeoffs, adverse effects, etc. will come much sooner.
-
Aw #Italy I love you but on the tech and privacy side of things, you still have a looooooong way to go. #BigTech is so dominant over here.
I just booked a table for three at a local restaurant from their website, and the next screen read: "Thank you for booking with us. Keep an eye on your phone, we will send you a Whatsapp message to confirm." They're assuming everyone has a WhatsApp account 😭
Meanwhile, the email address I gave them was a freshly created alias that included their name in the address 😏
I know people who run the local administration and they have FABULOUS cultural programs... maybe I'll propose a talk about how to set oneself free from Big Tech... I'll be here for 4 weeks in July...
-
The new digital divide: How literacy is shaping AI adoption across Europe
https://notd.io/notes/6317872065019904_4_1777149145442/the%20new%20digital%20divide:%20how%20literacy%20is%20shaping%20ai%20adoption%20across%C2%A0europe
#ai #artificialintelligence #chatbots #generativetools #aitools #literacy #digitalliteracy -
Why Swedish Schools Are Cutting Back on Digital Learning and Bringing Back Textbooks
Photo: SergeyNivens/Depositphotos
Sweden’s eagerness to embrace digital learning may have been to its detriment. Like the rest of…
#Sweden #Sverige #SE #Europe #Europa #EU #debate #Digitallearning #digitalliteracy #digitaltools #learning #nyheter #Rules #school #Students #sweden #technology #textbooks
https://www.europesays.com/2942794/ -
Why Swedish Schools Are Cutting Back on Digital Learning and Bringing Back Textbooks https://www.byteseu.com/1961901/ #debate #DigitalLearning #DigitalLiteracy #DigitalTools #learning #rules #School #students #Sweden #Technology #textbooks
-
If you are teaching your students critical thinking and digital literacy skills about AI, what resources are you using to teach with? We'd love to hear from K16 educators in classrooms and homeschooling how you're tackling this and what you recommend.
#Education #K12 #K16 #Homeschooling #HigherEd #DigitalLiteracy
-
This #ITVNews article https://itv.com/news/2026-04-20/mobile-phones-to-be-banned-across-schools-in-england-under-new-plans is a pretty typical reaction when it comes to things people don’t like or agree with.
Ban it!
That’s often the response: we don’t like it, we don’t agree with it, we don’t fully understand how to manage it, and therefore it should be banned.
How about, instead of banning phones in schools, we actually move with the times and start educating our children on how to use modern tech safely and responsibly?
-
The Association of International Schools in India joins Safer Internet India coalition as institutional partner
#TycoonWorld #TAISI #SaferInternetIndia #OnlineSafety #DigitalLiteracy #CyberSafety #InternetSafety #SafeInternet #EducationIndia #InternationalSchools #SchoolEducation #EdTechIndia #DigitalAwareness #CyberAwareness #StudentSafety #ParentingInDigitalAge #TeacherTraining #CyberbullyingAwareness -
[Working paper] The Daimon of the Interface: an (Alien) Phenomenological Approach to Writing Technology
On February 20, 2026, I presented a paper at the Future of Writing symposium 2026, whose main theme was “adaptability”. The symposium was organized by Mark Marino and Z.D. Dochterman and was presented by The Dornsife Writing Program at the University of Southern California, the Institute on Ethics & Trust in Computing, the Viterbi Engineering in Society program, the Ahmanson Lab, the Electronic Literature Organization, and the Humanities and Critical Code Studies Lab.
I do not actually teach writing, but digital writing is the focus of my academic research, and I teach digital literacy workshops (with the Socialini collective). These two experiences led me to present a phenomenological approach to teaching writing, a method indebted to C.I.R.C.E.‘s “hacker pedagogy” and their interface analysis, which I apply to writing interfaces. The title of the paper is The Daimon of the Interface: an (Alien) Phenomenological Approach to Writing Technology.
The overall goal of this approach is to help students develop a deep awareness of the interconnection and interdependence of writing and thinking, and of the influence that the tools we use have on our cognitive and writing processes.
Since my academic career is likely coming to an end soon, I decided it was better to publish it as a working paper on my Zenodo profile rather than go through the long, exhausting process of developing it fully, submitting it to an academic journal, and completing the whole review loop. I know this is a shortcut, but I also think the paper contains some interesting ideas, and the method I propose could be of use to teachers. So, in a spirit of openness and sharing, I prefer to put it out into the world. Of course, I’ll be more than happy to receive comments, criticism, and feedback: if you want, get in touch!
#CIRCE #digitalHumanities #digitalLiteracy #digitalWriting #FutureOfWriting #SocialiniIt #teaching https://wp.me/pa8vBQ-u7