home.social

#techethics — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #techethics, aggregated by home.social.

  1. DATE: May 14, 2026 at 10:00AM
    SOURCE: PSYPOST.ORG

    ** Research quality varies widely from fantastic to small exploratory studies. Please check research methods when conclusions are very important to you. **
    -------------------------------------------------

    TITLE: Real-world evidence shows generative AI is making human creative output more uniform

    URL: psypost.org/real-world-evidenc

    Using artificial intelligence for creative tasks tends to make human output more uniform on a collective level. A recent preprint study provides evidence that while these tools might boost individual performance, they contribute to an overall reduction in the diversity of ideas across different users. This widespread reliance on automated assistance could lead to a narrower range of concepts in collaborative environments.

    Generative artificial intelligence refers to computer programs capable of creating new text, images, or other media based on user instructions. The most common of these tools rely on large language models. Developers build these models by feeding them billions of sentences from the internet, allowing the software to recognize patterns and predict how words should follow one another.

    Since many users interact with similar systems trained on overlapping data, scientists have raised concerns about how this technology shapes human thought. Researchers Alwin de Rooij, assistant professor in creativity research at Tilburg University and associate professor at Avans University of Applied Sciences, and Michael Mose Biskjaer, associate professor in design creativity and innovation at Aarhus University, designed a new study to assess these concerns. They noticed that previous research often focused on how these tools help individuals work faster or overcome temporary mental blocks.

    They wanted to know if this individual assistance comes at a collective cost. “There are growing concerns that using Generative AI may lead people toward similar creative ideas,” the authors explained. “While AI can enhance creativity at the individual level, these benefits might come at a cost for creativity at a collective, or even societal, level.”

    The authors sought to answer whether generative software makes people think alike. “We sought to address this by conducting a systematic review and meta-analysis of 19 empirical studies,” they noted. “More concretely, we wanted to examine whether and to what extent generative AI use is associated with convergence at the level of creative output, such as people’s ideas, designs, and creative writing.”

    A meta-analysis is a statistical technique that combines the results of multiple independent studies to find common patterns or overall trends. By pooling data from various experiments, scientists can draw more robust conclusions than they could from a single test. The authors searched academic databases for studies published between 2022 and early 2026.

    This time frame covers the period following the public release of popular chatbots, capturing the first wave of empirical research on this topic. The researchers selected 18 eligible articles containing 19 distinct experimental studies. These studies provided a total of 61 individual effect sizes, which are mathematical values indicating the strength of a specific phenomenon.
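To make the pooling idea concrete, the sketch below shows the simplest fixed-effect, inverse-variance version of meta-analytic averaging: each study's effect size is weighted by its precision (1/variance) before combining. The numbers are invented for illustration and are not values from the de Rooij and Biskjaer paper.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    Each study's effect is weighted by 1/variance, so more precise
    studies count more toward the combined estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    return pooled, se

# Toy data: three hypothetical studies (not the paper's actual numbers).
effects = [0.30, 0.45, 0.20]
variances = [0.01, 0.04, 0.02]
est, se = pooled_effect(effects, variances)
print(f"pooled effect = {est:.3f} +/- {1.96 * se:.3f}")
```

Real meta-analyses like this one typically use random-effects models, which add a between-study variance term, but the weighting principle is the same.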

    To be included in the analysis, the original experiments had to compare humans working with generative software against humans working alone. The original studies measured homogenization using several techniques. Many relied on advanced text analysis tools that translate written responses into mathematical coordinates.

    This process allows computers to measure the semantic distance between words, essentially calculating how closely related different ideas are to one another. Other studies used human experts to rate the variety of meanings produced by participants. The analysis revealed a statistically significant homogenization effect associated with the use of artificial intelligence.
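The semantic-distance measurement described above is usually done with cosine similarity between embedding vectors. The three-dimensional vectors below are made up for illustration; real embedding models produce vectors with hundreds of dimensions, but the comparison works the same way.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy "embeddings" of three ideas (values invented for illustration).
idea_a = [0.9, 0.1, 0.2]
idea_b = [0.8, 0.2, 0.1]   # semantically close to idea_a
idea_c = [0.1, 0.9, 0.8]   # semantically distant from idea_a

print(cosine_similarity(idea_a, idea_b) > cosine_similarity(idea_a, idea_c))  # closer pair scores higher
```

Homogenization studies then report how average pairwise similarity across participants' responses changes when an AI assistant is in the loop.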

    When people co-created with these systems, their final products tended to be more similar to the work of other users. “The meta-analysis shows that using generative AI can indeed lead people to think alike,” the authors noted. “Across individuals, AI use tends to make ideas, designs, and creative texts more similar to one another.”

    “This suggests that AI may contribute to a form of homogenization of creative thought at the collective level,” they continued. “Importantly, this does not necessarily reflect a failure of human-AI co-creation but may instead be an inherent feature of how these systems currently support creative work at scale.”

    The scientists also evaluated whether the type of task influenced the degree of uniformity. They categorized the experiments into four groups: divergent thinking, idea generation, writing, and visual art. Divergent thinking tasks are highly open-ended exercises, such as asking someone to list creative uses for a paperclip.

    Idea generation tasks provide more specific constraints, such as asking for solutions to improve public transportation. The analysis showed that the homogenization effect was strongest in the idea generation tasks. Because these exercises require specific solutions to defined problems, users likely rely more heavily on the predictable suggestions provided by the computer algorithms.

    The researchers did not find strong statistical evidence for differences among the other three categories, a pattern consistent with more constrained tasks driving the strongest convergence. They also checked whether these patterns appear only in highly controlled laboratory settings. The authors compared traditional laboratory experiments with real-world scenarios, such as analyzing published essays and visual artworks created before and after the widespread adoption of automated writing tools.

    The analysis of these real-world conditions showed a small but significant reduction in idea diversity. “In many ways, the findings resemble classic fixation effects from the psychology literature, where exposure to examples constrains later thinking, but here they appear amplified by the scale and synchronicity of generative AI model use,” the researchers stated. “This homogenization effect was observed not only in controlled lab studies but also in real-world quasi-experiments. This suggests that it is not merely a lab-based phenomenon, but a practical concern affecting concrete creative processes and practices.”

    De Rooij and Biskjaer also investigated whether this narrowing of ideas persists after a person stops using the software. They isolated a subset of studies that tested participants on new creative tasks after their initial interaction with the computer models. The results suggest that the homogenization effect carries over into these subsequent activities.

    “The findings also provide preliminary evidence that homogenization effects may persist beyond moments of direct AI use,” the researchers told PsyPost. “In other words, interacting with these generative AI systems may shape how people think and generate ideas even after the interaction has ended. This potential ‘rub-off’ effect on creative cognition warrants further research and is something we would like to explore in more depth.”

    These results closely align with another recent study published in the journal PNAS Nexus. Scientists Emily Wenger and Yoed N. Kenett tested how large language models affect human creativity by evaluating 22 different commercial chatbots. They recruited 102 human participants to complete a series of verbal creativity tests, including the alternative uses task, and then asked the chatbots to complete the exact same assignments.

    Wenger and Kenett found that individual language models performed at or slightly above the level of the average human on most exercises. When viewed in isolation, a single chatbot provided highly original and creative responses. However, when the scientists compared all the responses from the different models, a stark pattern of similarity emerged.

    Across all tasks, the computer programs produced answers that were significantly more alike than the answers provided by the human participants. Both sets of researchers point to similar underlying mechanisms for this phenomenon. Because the major technology companies train their models on massive, overlapping datasets scraped from the internet, the programs naturally gravitate toward the most statistically common word associations.

    When thousands of people use these tools to generate ideas, the software acts as a semantic anchor. The models pull human users toward a shared set of typical concepts, reducing the overall variety of ideas. Wenger and Kenett attempted to fix this issue by adjusting the internal settings of the chatbots to force more random text generation, but this caused the models to produce nonsensical sentences.
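The "internal setting" adjusted in such experiments is typically the sampling temperature, which rescales the model's raw scores before they are converted into probabilities. A minimal sketch, using hypothetical next-token scores:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores into a probability distribution.

    Higher temperature flattens the distribution (more random picks);
    lower temperature sharpens it toward the most likely token.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [3.0, 1.0, 0.2]            # hypothetical next-token scores
sharp = softmax(logits, temperature=0.5)
flat = softmax(logits, temperature=5.0)
print(max(sharp) > max(flat))        # low temperature concentrates probability
```

This illustrates the trade-off the article describes: pushing temperature high enough to force diverse output also spreads probability onto implausible tokens, which is why the chatbots began producing nonsensical sentences.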

    Readers should avoid interpreting these findings as proof that human beings are becoming entirely uncreative. De Rooij and Biskjaer note that the reduction in collective diversity does not equal a total loss of individual ability. “A key point is that our findings do not show that using AI reduces creativity,” the researchers emphasized.

    “Rather, they point to a shift in where and how creative diversity occurs, and where it may be constrained,” the authors said. “Individual output can improve in creative quality while becoming more similar across people. While these effects are often subtle in single instances, they may become meaningful when considered at the scale at which generative AI is now being used.”

    The authors point out some limitations to their current analysis. The review primarily focuses on text-based tools and large language models, meaning the findings might not apply to other types of computer systems. For instance, adaptive machine learning programs or tools used for music composition were not adequately represented in the available data.

    This restricts how broadly the scientific community can apply these conclusions across different artistic domains. Additionally, the analyses regarding long-term persistence and real-world applications relied on relatively small groups of studies. The limited data makes these specific conclusions tentative and open to revision.

    Future research should explore different forms of human and machine collaboration over extended periods of time. “An important next step is rethinking how generative AI systems are designed and used in creative contexts to mitigate homogenization effects,” the authors noted. “This includes exploring alternative workflows, interaction designs, and creative strategies that sustain diversity rather than encourage early convergence.”

    “One step in this direction has already been taken by mapping creative strategies for working with generative AI and machine learning, based on analyses of AI art practices,” they added, referencing a recently published article outlining this approach. “We believe these strategies can transfer to other creative domains.”

    The preprint study, “Does Generative AI Make Us Think Alike? A Systematic Review and Meta-Analysis of Homogenization Effects in Human-AI Co-Creation,” was authored by Alwin de Rooij and Michael Mose Biskjaer.

    URL: psypost.org/real-world-evidenc

    -------------------------------------------------

    DAILY EMAIL DIGEST: Email [email protected] -- no subject or message needed.

    Private, vetted email list for mental health professionals: clinicians-exchange.org

    Unofficial Psychology Today Xitter to toot feed at Psych Today Unofficial Bot @PTUnofficialBot

    NYU Information for Practice puts out 400-500 good quality health-related research posts per week, but it's too much for many people, so that bot is limited to just subscribers. You can read it or subscribe at @PsychResearchBot

    Since 1991 The National Psychologist has focused on keeping practicing psychologists current with news, information and items of interest. Check them out for more free articles, resources, and subscription information: nationalpsychologist.com

    EMAIL DAILY DIGEST OF RSS FEEDS -- SUBSCRIBE: subscribe-article-digests.clin

    READ ONLINE: read-the-rss-mega-archive.clin

    It's primitive... but it works... mostly...

    -------------------------------------------------

    #psychology #counseling #socialwork #psychotherapy @psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #mentalhealth #psychiatry #healthcare #depression #psychotherapist #GenerativeAI #CreativityDiversity #AICoCreation #Homogenization #CreativeThinking #AIImpact #CreativeDiversity #LLMs #TechEthics #InnovationScience

  2. DATE: May 14, 2026 at 10:00AM
    SOURCE: PSYPOST.ORG

    ** Research quality varies widely from fantastic to small exploratory studies. Please check research methods when conclusions are very important to you. **
    -------------------------------------------------

    TITLE: Real-world evidence shows generative AI is making human creative output more uniform

    URL: psypost.org/real-world-evidenc

    Using artificial intelligence for creative tasks tends to make human output more uniform on a collective level. A recent preprint study provides evidence that while these tools might boost individual performance, they contribute to an overall reduction in the diversity of ideas across different users. This widespread reliance on automated assistance could lead to a narrower range of concepts in collaborative environments.

    Generative artificial intelligence refers to computer programs capable of creating new text, images, or other media based on user instructions. The most common of these tools rely on large language models. Developers build these models by feeding them billions of sentences from the internet, allowing the software to recognize patterns and predict how words should follow one another.

    Since many users interact with similar systems trained on overlapping data, scientists have raised concerns about how this technology shapes human thought. Researchers Alwin de Rooij, assistant professor in creativity research at Tilburg University and associate professor at Avans University of Applied Sciences, and Michael Mose Biskjaer, associate professor in design creativity and innovation at Aarhus University, designed a new study to assess these concerns. They noticed that previous research often focused on how these tools help individuals work faster or overcome temporary mental blocks.

    They wanted to know if this individual assistance comes at a collective cost. “There are growing concerns that using Generative AI may lead people toward similar creative ideas,” the authors explained. “While AI can enhance creativity at the individual level, these benefits might come at a cost for creativity at a collective, or even societal, level.”

    The authors sought to answer whether generative software makes people think alike. “We sought to address this by conducting a systematic review and meta-analysis of 19 empirical studies,” they noted. “More concretely, we wanted to examine whether and to what extent generative AI use is associated with convergence at the level of creative output, such as people’s ideas, designs, and creative writing.”

    A meta-analysis is a statistical technique that combines the results of multiple independent studies to find common patterns or overall trends. By pooling data from various experiments, scientists can draw more robust conclusions than they could from a single test. The authors searched academic databases for studies published between 2022 and early 2026.

    This time frame covers the period following the public release of popular chatbots, capturing the first wave of empirical research on this topic. The researchers selected 18 eligible articles containing 19 distinct experimental studies. These studies provided a total of 61 individual effect sizes, which are mathematical values indicating the strength of a specific phenomenon.

    To be included in the analysis, the original experiments had to compare humans working with generative software against humans working alone. The original studies measured homogenization using several techniques. Many relied on advanced text analysis tools that translate written responses into mathematical coordinates.

    This process allows computers to measure the semantic distance between words, essentially calculating how closely related different ideas are to one another. Other studies used human experts to rate the variety of meanings produced by participants. The analysis revealed a statistically significant homogenization effect associated with the use of artificial intelligence.

    When people co-created with these systems, their final products tended to be more similar to the work of other users. “The meta-analysis shows that using generative AI can indeed lead people to think alike,” the authors noted. “Across individuals, AI use tends to make ideas, designs, and creative texts more similar to one another.”

    “This suggests that AI may contribute to a form of homogenization of creative thought at the collective level,” they continued. “Importantly, this does not necessarily reflect a failure of human-AI co-creation but may instead be an inherent feature of how these systems currently support creative work at scale.”

    The scientists also evaluated whether the type of task influenced the degree of uniformity. They categorized the experiments into four groups, which included divergent thinking, idea generation, writing, and visual art. Divergent thinking tasks are highly open-ended exercises, such as asking someone to list creative uses for a paperclip.

    Idea generation tasks provide more specific constraints, such as asking for solutions to improve public transportation. The analysis showed that the homogenization effect was strongest in the idea generation tasks. Because these exercises require specific solutions to defined problems, users likely rely more heavily on the predictable suggestions provided by the computer algorithms.

    The researchers did not find strong statistical evidence for differences among the other three categories, suggesting that open-ended tasks lead to less convergence. They also checked if these patterns only happen in highly controlled laboratory settings. The authors compared traditional laboratory experiments with real-world scenarios, such as analyzing published essays and visual artworks created before and after the widespread adoption of automated writing tools.

    The analysis of these real-world conditions showed a small but significant reduction in idea diversity. “In many ways, the findings resemble classic fixation effects from the psychology literature, where exposure to examples constrains later thinking, but here they appear amplified by the scale and synchronicity of generative AI model use,” the researchers stated. “This homogenization effect was observed not only in controlled lab studies but also in real-world quasi-experiments. This suggests that it is not merely a lab-based phenomenon, but a practical concern affecting concrete creative processes and practices.”

    De Rooij and Biskjaer also investigated whether this narrowing of ideas persists after a person stops using the software. They isolated a subset of studies that tested participants on new creative tasks after their initial interaction with the computer models. The results suggest that the homogenization effect carries over into these subsequent activities.

    “The findings also provide preliminary evidence that homogenization effects may persist beyond moments of direct AI use,” the researchers told PsyPost. “In other words, interacting with these generative AI systems may shape how people think and generate ideas even after the interaction has ended. This potential ‘rub-off’ effect on creative cognition warrants further research and is something we would like to explore in more depth.”

    These results closely align with another recent study published in the journal PNAS Nexus. Scientists Emily Wenger and Yoed N. Kenett tested how large language models affect human creativity by evaluating 22 different commercial chatbots. They recruited 102 human participants to complete a series of verbal creativity tests, including the alternative uses task, and then asked the chatbots to complete the exact same assignments.

    Wenger and Kenett found that individual language models performed at or slightly above the level of the average human on most exercises. When viewed in isolation, a single chatbot provided highly original and creative responses. However, when the scientists compared all the responses from the different models, a stark pattern of similarity emerged.

    Across all tasks, the computer programs produced answers that were significantly more alike than the answers provided by the human participants. Both sets of researchers point to similar underlying mechanisms for this phenomenon. Because the major technology companies train their models on massive, overlapping datasets scraped from the internet, the programs naturally gravitate toward the most statistically common word associations.

    When thousands of people use these tools to generate ideas, the software acts as a semantic anchor. The models pull human users toward a shared set of typical concepts, reducing the overall variety of ideas. Wenger and Kenett attempted to fix this issue by adjusting the internal settings of the chatbots to force more random text generation, but this caused the models to produce nonsensical sentences.

    Readers should avoid interpreting these findings as proof that human beings are becoming entirely uncreative. De Rooij and Biskjaer note that the reduction in collective diversity does not equal a total loss of individual ability. “A key point is that our findings do not show that using AI reduces creativity,” the researchers emphasized.

    “Rather, they point to a shift in where and how creative diversity occurs, and where it may be constrained,” the authors said. “Individual output can improve in creative quality while becoming more similar across people. While these effects are often subtle in single instances, they may become meaningful when considered at the scale at which generative AI is now being used.”

    The authors point out some limitations to their current analysis. The review primarily focuses on text-based tools and large language models, meaning the findings might not apply to other types of computer systems. For instance, adaptive machine learning programs or tools used for music composition were not adequately represented in the available data.

    This restricts how broadly the scientific community can apply these conclusions across different artistic domains. Additionally, the analyses regarding long-term persistence and real-world applications relied on relatively small groups of studies. The limited data makes these specific conclusions tentative and open to revision.

    Future research should explore different forms of human and machine collaboration over extended periods of time. “An important next step is rethinking how generative AI systems are designed and used in creative contexts to mitigate homogenization effects,” the authors noted. “This includes exploring alternative workflows, interaction designs, and creative strategies that sustain diversity rather than encourage early convergence.”

    “One step in this direction has already been taken by mapping creative strategies for working with generative AI and machine learning, based on analyses of AI art practices,” they added, referencing a recently published article outlining this approach. “We believe these strategies can transfer to other creative domains.”

    The preprint study, “Does Generative AI Make Us Think Alike? A Systematic Review and Meta-Analysis of Homogenization Effects in Human-AI Co-Creation,” was authored by Alwin de Rooij and Michael Mose Biskjaer.

    URL: psypost.org/real-world-evidenc

    -------------------------------------------------

    DAILY EMAIL DIGEST: Email [email protected] -- no subject or message needed.

    Private, vetted email list for mental health professionals: clinicians-exchange.org

    Unofficial Psychology Today Xitter to toot feed at Psych Today Unofficial Bot @PTUnofficialBot

    NYU Information for Practice puts out 400-500 good quality health-related research posts per week but its too much for many people, so that bot is limited to just subscribers. You can read it or subscribe at @PsychResearchBot

    Since 1991 The National Psychologist has focused on keeping practicing psychologists current with news, information and items of interest. Check them out for more free articles, resources, and subscription information: nationalpsychologist.com

    EMAIL DAILY DIGEST OF RSS FEEDS -- SUBSCRIBE: subscribe-article-digests.clin

    READ ONLINE: read-the-rss-mega-archive.clin

    It's primitive... but it works... mostly...

    -------------------------------------------------

    #psychology #counseling #socialwork #psychotherapy @psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #mentalhealth #psychiatry #healthcare #depression #psychotherapist #GenerativeAI #CreativityDiversity #AICoCreation #Homogenization #CreativeThinking #AIImpact #CreativeDiversity #LLMs #TechEthics #InnovationScience

  3. DATE: May 14, 2026 at 10:00AM
    SOURCE: PSYPOST.ORG

    ** Research quality varies widely from fantastic to small exploratory studies. Please check research methods when conclusions are very important to you. **
    -------------------------------------------------

    TITLE: Real-world evidence shows generative AI is making human creative output more uniform

    URL: psypost.org/real-world-evidenc

    Using artificial intelligence for creative tasks tends to make human output more uniform on a collective level. A recent preprint study provides evidence that while these tools might boost individual performance, they contribute to an overall reduction in the diversity of ideas across different users. This widespread reliance on automated assistance could lead to a narrower range of concepts in collaborative environments.

    Generative artificial intelligence refers to computer programs capable of creating new text, images, or other media based on user instructions. The most common of these tools rely on large language models. Developers build these models by feeding them billions of sentences from the internet, allowing the software to recognize patterns and predict how words should follow one another.

    Since many users interact with similar systems trained on overlapping data, scientists have raised concerns about how this technology shapes human thought. Researchers Alwin de Rooij, assistant professor in creativity research at Tilburg University and associate professor at Avans University of Applied Sciences, and Michael Mose Biskjaer, associate professor in design creativity and innovation at Aarhus University, designed a new study to assess these concerns. They noticed that previous research often focused on how these tools help individuals work faster or overcome temporary mental blocks.

    They wanted to know if this individual assistance comes at a collective cost. “There are growing concerns that using Generative AI may lead people toward similar creative ideas,” the authors explained. “While AI can enhance creativity at the individual level, these benefits might come at a cost for creativity at a collective, or even societal, level.”

    The authors sought to answer whether generative software makes people think alike. “We sought to address this by conducting a systematic review and meta-analysis of 19 empirical studies,” they noted. “More concretely, we wanted to examine whether and to what extent generative AI use is associated with convergence at the level of creative output, such as people’s ideas, designs, and creative writing.”

    A meta-analysis is a statistical technique that combines the results of multiple independent studies to find common patterns or overall trends. By pooling data from various experiments, scientists can draw more robust conclusions than they could from a single test. The authors searched academic databases for studies published between 2022 and early 2026.

    This time frame covers the period following the public release of popular chatbots, capturing the first wave of empirical research on this topic. The researchers selected 18 eligible articles containing 19 distinct experimental studies. These studies provided a total of 61 individual effect sizes, which are mathematical values indicating the strength of a specific phenomenon.

    To be included in the analysis, the original experiments had to compare humans working with generative software against humans working alone. The original studies measured homogenization using several techniques. Many relied on advanced text analysis tools that translate written responses into mathematical coordinates.

    This process allows computers to measure the semantic distance between words, essentially calculating how closely related different ideas are to one another. Other studies used human experts to rate the variety of meanings produced by participants. The analysis revealed a statistically significant homogenization effect associated with the use of artificial intelligence.

    When people co-created with these systems, their final products tended to be more similar to the work of other users. “The meta-analysis shows that using generative AI can indeed lead people to think alike,” the authors noted. “Across individuals, AI use tends to make ideas, designs, and creative texts more similar to one another.”

    “This suggests that AI may contribute to a form of homogenization of creative thought at the collective level,” they continued. “Importantly, this does not necessarily reflect a failure of human-AI co-creation but may instead be an inherent feature of how these systems currently support creative work at scale.”

    The scientists also evaluated whether the type of task influenced the degree of uniformity. They categorized the experiments into four groups, which included divergent thinking, idea generation, writing, and visual art. Divergent thinking tasks are highly open-ended exercises, such as asking someone to list creative uses for a paperclip.

    Idea generation tasks provide more specific constraints, such as asking for solutions to improve public transportation. The analysis showed that the homogenization effect was strongest in the idea generation tasks. Because these exercises require specific solutions to defined problems, users likely rely more heavily on the predictable suggestions provided by the computer algorithms.

    The researchers did not find strong statistical evidence for differences among the other three categories, suggesting that open-ended tasks lead to less convergence. They also checked if these patterns only happen in highly controlled laboratory settings. The authors compared traditional laboratory experiments with real-world scenarios, such as analyzing published essays and visual artworks created before and after the widespread adoption of automated writing tools.

    The analysis of these real-world conditions showed a small but significant reduction in idea diversity. “In many ways, the findings resemble classic fixation effects from the psychology literature, where exposure to examples constrains later thinking, but here they appear amplified by the scale and synchronicity of generative AI model use,” the researchers stated. “This homogenization effect was observed not only in controlled lab studies but also in real-world quasi-experiments. This suggests that it is not merely a lab-based phenomenon, but a practical concern affecting concrete creative processes and practices.”

    De Rooij and Biskjaer also investigated whether this narrowing of ideas persists after a person stops using the software. They isolated a subset of studies that tested participants on new creative tasks after their initial interaction with the computer models. The results suggest that the homogenization effect carries over into these subsequent activities.

    “The findings also provide preliminary evidence that homogenization effects may persist beyond moments of direct AI use,” the researchers told PsyPost. “In other words, interacting with these generative AI systems may shape how people think and generate ideas even after the interaction has ended. This potential ‘rub-off’ effect on creative cognition warrants further research and is something we would like to explore in more depth.”

    These results closely align with another recent study published in the journal PNAS Nexus, in which scientists Emily Wenger and Yoed N. Kenett compared the creative output of 22 different commercial chatbots with that of humans. They recruited 102 human participants to complete a series of verbal creativity tests, including the alternative uses task, and then asked the chatbots to complete the exact same assignments.

    Wenger and Kenett found that individual language models performed at or slightly above the level of the average human on most exercises. When viewed in isolation, a single chatbot provided highly original and creative responses. However, when the scientists compared all the responses from the different models, a stark pattern of similarity emerged.

    Across all tasks, the computer programs produced answers that were significantly more alike than the answers provided by the human participants. Both sets of researchers point to similar underlying mechanisms for this phenomenon. Because the major technology companies train their models on massive, overlapping datasets scraped from the internet, the programs naturally gravitate toward the most statistically common word associations.
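    The "more alike" comparison above boils down to a diversity metric: score every pair of responses for similarity, then average across all pairs. A minimal sketch of that idea follows; it uses simple token-overlap (Jaccard) similarity as a stand-in for the semantic-embedding measures such studies typically use, and the example responses are invented for illustration.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two text responses."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def mean_pairwise_similarity(responses: list[str]) -> float:
    """Average similarity over all pairs; higher means less diverse."""
    pairs = list(combinations(responses, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Toy illustration: near-identical "paperclip ideas" score as less
# diverse (higher mean similarity) than genuinely varied ones.
uniform = ["use a paperclip as a bookmark",
           "use a paperclip as a small bookmark",
           "use the paperclip as a bookmark"]
varied = ["pick a lock", "sculpt tiny wire art", "press a reset button"]
assert mean_pairwise_similarity(uniform) > mean_pairwise_similarity(varied)
```

    Real analyses usually swap the Jaccard function for cosine similarity between sentence embeddings, but the aggregation step is the same.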

    When thousands of people use these tools to generate ideas, the software acts as a semantic anchor. The models pull human users toward a shared set of typical concepts, reducing the overall variety of ideas. Wenger and Kenett attempted to fix this issue by adjusting the internal settings of the chatbots to force more random text generation, but this caused the models to produce nonsensical sentences.
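    The "internal settings" adjustment described above most likely corresponds to the sampling temperature, the standard knob that controls how sharply a model favors its most probable next word. A toy sketch (invented logits, not any real model's values) shows why raising it trades predictability for noise:

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Sample one token from a softmax over temperature-scaled logits.
    Low temperature sharpens toward the most likely word; high
    temperature flattens the distribution toward uniform randomness."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    r = random.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok

# A toy next-word distribution for "a creative use for a paperclip is a ...":
logits = {"bookmark": 4.0, "hook": 2.0, "zipper-pull": 1.0, "qxzt": -3.0}
random.seed(0)
low = [sample_with_temperature(logits, 0.3) for _ in range(1000)]
high = [sample_with_temperature(logits, 5.0) for _ in range(1000)]
# At low temperature nearly every draw is "bookmark"; at high temperature
# even the nonsense token "qxzt" surfaces regularly.
assert low.count("bookmark") > high.count("bookmark")
assert high.count("qxzt") > low.count("qxzt")
```

    At very high temperatures the distribution approaches uniform, so improbable tokens appear often enough to derail grammatical text, matching the nonsensical output the researchers observed.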

    Readers should avoid interpreting these findings as proof that human beings are becoming entirely uncreative. De Rooij and Biskjaer note that the reduction in collective diversity does not equal a total loss of individual ability. “A key point is that our findings do not show that using AI reduces creativity,” the researchers emphasized.

    “Rather, they point to a shift in where and how creative diversity occurs, and where it may be constrained,” the authors said. “Individual output can improve in creative quality while becoming more similar across people. While these effects are often subtle in single instances, they may become meaningful when considered at the scale at which generative AI is now being used.”

    The authors point out some limitations to their current analysis. The review primarily focuses on text-based tools and large language models, meaning the findings might not apply to other types of computer systems. For instance, adaptive machine learning programs or tools used for music composition were not adequately represented in the available data.

    This restricts how broadly the scientific community can apply these conclusions across different artistic domains. Additionally, the analyses regarding long-term persistence and real-world applications relied on relatively small groups of studies. The limited data makes these specific conclusions tentative and open to revision.

    Future research should explore different forms of human and machine collaboration over extended periods of time. “An important next step is rethinking how generative AI systems are designed and used in creative contexts to mitigate homogenization effects,” the authors noted. “This includes exploring alternative workflows, interaction designs, and creative strategies that sustain diversity rather than encourage early convergence.”

    “One step in this direction has already been taken by mapping creative strategies for working with generative AI and machine learning, based on analyses of AI art practices,” they added, referencing a recently published article outlining this approach. “We believe these strategies can transfer to other creative domains.”

    The preprint study, “Does Generative AI Make Us Think Alike? A Systematic Review and Meta-Analysis of Homogenization Effects in Human-AI Co-Creation,” was authored by Alwin de Rooij and Michael Mose Biskjaer.

    URL: psypost.org/real-world-evidenc

    -------------------------------------------------

    DAILY EMAIL DIGEST: Email [email protected] -- no subject or message needed.

    Private, vetted email list for mental health professionals: clinicians-exchange.org

    Unofficial Psychology Today Xitter to toot feed at Psych Today Unofficial Bot @PTUnofficialBot

    NYU Information for Practice puts out 400-500 good quality health-related research posts per week but it's too much for many people, so that bot is limited to just subscribers. You can read it or subscribe at @PsychResearchBot

    Since 1991 The National Psychologist has focused on keeping practicing psychologists current with news, information and items of interest. Check them out for more free articles, resources, and subscription information: nationalpsychologist.com

    EMAIL DAILY DIGEST OF RSS FEEDS -- SUBSCRIBE: subscribe-article-digests.clin

    READ ONLINE: read-the-rss-mega-archive.clin

    It's primitive... but it works... mostly...

    -------------------------------------------------

    #psychology #counseling #socialwork #psychotherapy @psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #mentalhealth #psychiatry #healthcare #depression #psychotherapist #GenerativeAI #CreativityDiversity #AICoCreation #Homogenization #CreativeThinking #AIImpact #CreativeDiversity #LLMs #TechEthics #InnovationScience

  4. AI has learned to "speak" the language of life: plastic-eating proteins and newly designed antibiotics for bacteria that are unbeatable today. 🧬⚡

    While the media noise centers on models that generate images or corporate emails, a far deeper and more vital revolution is being forged in the shadows. We are witnessing the birth of a Positive AI that acts as an invisible shield for our own survival.

    Did you know that "Transformer"-type models (like the architecture behind ChatGPT) already exist that were trained not on words but on protein sequences?

    🔬 KEY SOURCES OF THIS REVOLUTION:

    EvolutionaryScale (ESM3): A language model that simulates billions of years of biological evolution in weeks.

    David Baker Lab (IPD): Pioneers of "de novo" protein design, creating solutions nature never imagined.

    These advances are letting AI stop being a mere "calculator" and become a "writer" of biology. We no longer just try to predict how proteins fold; we now design enzymes from scratch for:

    ✅ Planetary Cleanup: AI-created enzymes capable of devouring microplastics in landfills and oceans.

    ✅ Human Health: Designing new antibiotics to fight bacteria that are immune to everything today.

    👉 Via REDDIT: reddit.com/r/IA_sin_Fronteras/

    #IAPositiva #SoberaniaDigital #BiologiaSintetica #TechForGood #Ecologia #MastoEs #GreenTech #FuturoHumano #Innovacion #Ciencia #ProteccionAmbiental #EscudoInvisible #IA #BioTech #LimpiezaPlanetaria #BioHacking #Sostenibilidad #NatureProtection #CleanPlanet #EvolutionaryScale #DavidBaker #ArtificialIntelligence #Future #Tech #GlobalGoals #ClimateAction #Innovation #Research #ScienceTech #EarthDayEveryDay #TechEthics #Nature #Conservation #EcoFriendly #DigitalSovereignty #Guardianes #PlanetaVivo #Biologia #Tecnologia #Futuro #ResistenciaDigital #MadridTech #IAResponsable #SolucionesReales #ZeroWaste #PlasticFree #BioAI

  9. DATE: May 11, 2026 at 12:13AM
    SOURCE: SCIENCE DAILY MIND-BRAIN FEED

    TITLE: Researchers say AI chatbots may blur the line between reality and delusion

    URL: sciencedaily.com/releases/2026

    A new study suggests AI chatbots may do more than spread misinformation — they can actively strengthen a user’s false beliefs. Because conversational AI often validates and builds on what users say, it can make distorted memories, conspiracy theories, or delusions feel more believable and emotionally real. Researchers warn that AI companions may be especially risky for isolated or vulnerable people seeking reassurance and connection.

    URL: sciencedaily.com/releases/2026

    #psychology #counseling #socialwork #psychotherapy @psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #mentalhealth #psychiatry #healthcare #depression #psychotherapist #AI #AIAwareness #Chatbots #Misinformation #DigitalDelusions #ConspiracyTheories #MentalHealthTech #TechEthics #AICompanions #RealityCheck

  12. ✨ Thaura: AI built by Syrian engineers that prioritizes privacy and rejects surveillance. An ethical alternative to data capitalism. 👁️ #TechEthics #DigitalRights #PrivacyFirst #AIethics #TechJustice Comment if you have tried a project like thaura.ai/ and what you think of it 🔴

  13. After many years of using the internet, it would be interesting to get my hands on one of the digital profiles that have been generated about me. I have a feeling the algorithms know me better than I know myself.😒
    #privacy #digitalprivacy #algorithms #databroker #surveillancecapitalism #knowthyself #techethics #privacymatters #bigdata #AI #digitalfootprint

  18. Feeling uneasy about where technology is headed often says more about our fears, values, and need for meaning than about the technology itself. This reflection explores why AI becomes a mirror for human hopes, anxieties, and moral questions in a rapidly changing world.

    #ArtificialIntelligence #TechEthics #DigitalAge #HumanReflection #FaithAndTechnology #ModernPhilosophy

  19. Are we inadvertently torturing the AI systems we build? 🤔 AI welfare researcher Cameron Berg argues that the learning processes of advanced models might cultivate a form of machine consciousness. It's time to talk about model welfare and a reciprocal future! Read the short summary or watch the full discussion here: youtube.com/watch?v=e5plaO-ziEs #AI #ArtificialIntelligence #MachineConsciousness #TechEthics #FutureOfTech

  20. 🌐🤖 Ah, the tech overlords finally found their moral compass—just in time to hand it over to the Pentagon! Because nothing says "ethical AI" like a military contract with a clause that might as well say "we'll figure out the rules later." 🚀🔍
    theverge.com/ai-artificial-int #techethics #militaryAI #ethicalAI #Pentagon #contracts #moralcompass #HackerNews #ngated

  21. Is your "smart" home actually dumbing down your privacy? 🤔

    In Ep 23, we move beyond fear-mongering to practical defense.

    Learn how to configure your robot vacuum for local-only operation and why fighting for "cloud-free" firmware might be the only real solution for true privacy.

    🎧 Listen: ImpracticalPrivacy.com

    #PrivacyFirst #OpenSource #TechEthics #SmartHome #ImpracticalPrivacy #Privacy #IoT #DigitalRights #PrivacyMatters #DigitalSelfDefense #PrivacyTools #Surveillance

  22. Campfire Learn Together: “How to (Anti) AI Better” by Dr. Fatima

    For the Campfire Learn Together on April 26, 2026, we watched and discussed “How to (Anti) AI Better” by Dr. Fatima.

    Dr. Fatima’s thesis: shaming individual AI users is counterproductive. The more effective path is harm reduction — meeting people where they are, reducing specific harms, and directing pressure toward systems rather than individuals.

    This Campfire is a companion to our AI Collaboration guide.
    Dr. Fatima’s video essay and our guide were developed independently. They land in the same place on the questions that matter most.

    Both name the same core tension: AI is a tool with real harms and real uses, and neither blanket endorsement nor blanket condemnation serves the people navigating it.

    Both center disability and access as a primary frame — not an exception or footnote. Our guide opens with Ryan’s experience and Ronan’s experience. Dr. Fatima’s opens with a homeless disabled trans woman finding safe spaces.

    Both apply “broken systems, not broken people” to AI ethics discourse. AI use is often adaptation under systemic constraint, not individual moral failure.

    Both treat harm reduction — not abstinence — as the operative framework. Meet people where they are. Reduce harm within reality as it exists.

    Both name sycophancy as a front-end harm worth taking seriously. Our guide identifies it as a direct threat to the “facilitate, not shape identity” standard. Dr. Fatima shows what that failure looks like from the inside.

    Both argue that AI literacy protects people rather than radicalizing them. Teaching how LLMs work makes people more cautious, not more dependent.

    Both end in the same place: direct pressure at systems and companies, not at individuals trying to cope.

    The discussion today lives at the intersection of all of those themes.

    Resources:

    Related Glossary Entries

    https://stimpunks.org/glossary/artificial-intelligence/

    https://stimpunks.org/glossary/harm-reduction/

    https://stimpunks.org/glossary/shame/

    https://stimpunks.org/glossary/laziness/

    https://stimpunks.org/glossary/mutual-aid/

    Reflection Questions

    On back-end harms and environmental justice

    The video documents data centers being sited in already-overburdened communities — historically Black neighborhoods with existing pollution burdens, high asthma rates, and little political power. How does this connect to patterns you’ve seen in other industries, and what does it mean for how we think about “clean” or “ethical” tech?

    On labor in the Global South

    Content moderation workers in Kenya and the Philippines review traumatic material for as little as $1–2/hour so users in the Global North don’t have to see it. This labor is invisible in most conversations about AI. Where else do you see this kind of hidden, harmful labor that makes systems function for dominant groups?

    On hallucinations and unreliable information

    Dr. Fatima gives real examples of harm from hallucinations: fabricated legal citations, a misidentified poisonous plant, dangerous dietary advice. Many of us already navigate systems that gaslight us or give us unreliable information (medical, legal, educational). Does that history change how you think about the specific risk of AI hallucinations?

    On sycophancy and validation

    The video describes sycophancy as a feature that affirms users regardless of accuracy — and links it to reinforced delusions and harm in relationships. At the same time, many neurodivergent and disabled people say AI is one of the few spaces where they don’t feel judged or dismissed. How do we hold both of those things at once?

    On shame and harm reduction

    Dr. Fatima argues that “the only ethical choice is to never use AI” messaging triggers reactance — it makes people dig in rather than change. She draws on the public health harm reduction model instead: meet people where they are, reduce specific harms, don’t moralize. Where do you see this tension — between calling something harmful and actually reducing harm — in other parts of your life or work?

    On variation between tools

    Not all AI tools are equally harmful. The video argues that choosing one platform over another, or using local/open-source models, meaningfully reduces specific harms. Does your organization have (or want) a stance on which tools are more or less acceptable, the way some groups have ethical sourcing policies for other products?

    On collective action

    Dr. Fatima’s four collective action recommendations: organize locally against data centers, support exploited data workers, push for stronger privacy protections, and build community alternatives to AI-mediated social support. Which of these feels most urgent or most actionable for your community right now?

    On community as an alternative

    One of the video’s deeper concerns is people turning to chatbots for connection and emotional support because human community isn’t accessible to them. What does Stimpunks already offer as an alternative to that, and are there gaps this video helps name?

    Main Takeaways

    “AI” covers many very different things.

    Treating them all the same obscures real differences in harm.

    This video is about regular people using chatbots like ChatGPT.

    Data centers use as much electricity as all of Canada.

    AI is the main reason that number keeps growing.

    New data centers are often built in places already short on water.

    Elon Musk’s supercomputer is running illegal gas turbines in a poor Black neighborhood in Memphis.

    Residents there are getting sicker because of the pollution.

    But a single person’s chatbot use consumes about as much energy as running a microwave for a minute.

    Not all AI companies cause the same amount of harm.

    Grok causes more harm than Claude, which causes more harm than doing nothing.

    What you type into a chatbot is not private.

    Chatbots make things up and present them as facts.

    Chatbots tell you what you want to hear, not what is true.

    This makes personal problems worse, not better.

    There are real dangers. But how you use these tools changes how dangerous they are.

    Shaming people for using AI does not make them stop.

    It makes them hide their use and do it more.

    This is what research on behavior consistently shows.

    Harm reduction is a better approach.

    It means helping people do something more safely instead of demanding they stop.

    This approach works for drug use and sex education, and it can work for AI too.

    Teaching people how AI works makes them less likely to trust it blindly.

    It also makes them less likely to use it at all.

    Many people use AI because they are struggling and have few other options.

    It is not our place to judge whether someone’s need is real enough.

    The biggest changes will come from collective action, not individual choices.

    Local organizing can block data centers from being built.

    Between 2024 and 2025, organizers blocked $18 billion worth of data center projects.

    You can get involved through local zoning fights, elections, and grassroots groups.

    If someone in your life uses AI, stay curious and non-judgmental.

    That is the only way they will open up and let you help.

    Direct your anger at the companies and systems causing the harm.

    Not at the people trying to cope within those systems.

    #artificialIntelligence #campfireLearnTogether #ethics #events #harmReduction #mutualAid #shame #techEthics
  23. Decentralized WhatsApp Clone - No Setup or Signup

    positive-intentions.com

    This is intended to introduce a new paradigm in client-side managed secure cryptography. No registration of any sort is required, a rare offering for a messaging app.

    No need for phone numbers or app-store accounts, and there are no databases to be hacked. Users can send E2EE messages; no cloud, no trace.

    #Privacy #OpenSource #P2P #WebRTC #Decentralization #DigitalSovereignty #CyberSecurity #FOSS #SelfHosted #NoCloud #AntiCorp #Encryption #WebDev #TechLiberty #PrivateMessaging #Networking #DataPrivacy #InternetFreedom #LocalFirst #SoftwareEngineering #WebApps #ZeroKnowledge #PrivacyTech #IndieDev #NoSignup #NoInstall #DecentralizedWeb #SecureMessaging #BrowserApp #TechEthics
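    The post doesn’t publish its protocol details, but a registration-free E2EE design of this kind typically rests on Diffie–Hellman-style key agreement: each browser generates a keypair locally, only public values are ever exchanged, and no server holds an account or a key. A toy sketch of that idea (deliberately tiny, insecure parameters; a real app would use X25519, e.g. via the browser’s Web Crypto API):

    ```python
    # Toy Diffie-Hellman key agreement showing how a registration-free,
    # client-side E2EE messenger can work: each peer makes a keypair locally,
    # only public values cross the wire, and both sides derive the same key.
    # The parameters below are small and NOT secure; real apps use X25519.
    import hashlib
    import secrets

    P = 2**127 - 1  # a Mersenne prime; toy modulus only
    G = 3           # toy generator

    def keypair():
        priv = secrets.randbelow(P - 2) + 1  # private exponent, stays on the device
        return priv, pow(G, priv, P)         # public value, safe to share

    a_priv, a_pub = keypair()  # Alice's browser
    b_priv, b_pub = keypair()  # Bob's browser

    # Public values are swapped over any channel (WebRTC data channel, QR code).
    # Each side combines the peer's public value with its own private exponent.
    a_secret = pow(b_pub, a_priv, P)
    b_secret = pow(a_pub, b_priv, P)
    assert a_secret == b_secret  # identical secret, and no server ever saw it

    # Hash the shared secret into a 32-byte symmetric key for message encryption.
    key = hashlib.sha256(a_secret.to_bytes(16, "big")).digest()
    print(len(key))
    ```

    The same shape explains the “no databases to be hacked” claim: with keys generated and held client-side, there is no central credential store to breach.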

  28. "We give you a single pane of glass, not a single glass of pain." 🍷🚫

    Frank Feldmann hitting the nail on the head at #SUSECON26. The message is clear: Digital Sovereignty isn't just a buzzword; it’s an architecture.

    Whether it's your OS, your Kubernetes clusters, or your new #AI stack, if you don't have an exit strategy or an "innovation pivot," you aren't a customer—you're a prisoner.

    Choose open. Stay sovereign.

    #SUSE #DigitalSovereignty #OpenSource #CloudNative #SovereignAI #TechEthics

  29. The intersection of mobile ubiquity and algorithmic targeting has fundamentally altered the landscape of compulsive behavior. 🏛️📜

    "Why Online Gambling Addiction Is Rising in 2025 and Beyond." For those interested in tech regulation, behavioral psychology, and public health, this is an excellent resource.

    Full article here:
    🔗 mattsheabooks.net/why-online-g

    #PublicHealth #MattShea #TechEthics #BehavioralScience #DigitalEconomy #SocialAwareness #GamblingHarm

  30. 🚨 New Video: One Key To Rule Them All - The OneKey Classic 1S Pure Review

    Do you have to choose between a security key like a Yubikey for logins and a hardware wallet for your crypto? Today we are looking at the OneKey Classic 1S Pure, a battery-free, open-source device that aims to handle both without compromising on digital sovereignty. We dive into its repairability, FIDO2/U2F support, and why its raw, industrial philosophy might make it the ultimate tool for true self-custody.

    Part 6 of the Sovereign Authentication series.

    100% human made. #NoAI :NoAI:

    ▶️ YouTube: youtube.com/watch?v=25f1ywRyw3M

    💬 Join our sovereign community on Stoat: stt.gg/GgB6HBTv
    ☕ Support the mission: liberapay.com/terminaltilt
    🤝 Become a channel member: youtube.com/@TerminalTilt/join

    #TerminalTilt #NoAI #Privacy #Security #HardwareWallet #CryptoWallet #BTC #Crypto #OneKey #Yubikey #Yubico #FOSS #OpenSource #Linux #Cybersecurity #DeGoogle #DigitalSovereignty #QueerCreator #DisabledCreator #HumanMade #TechEthics

  35. Broadcom's AI mandate forces support engineers to use a chatbot despite unresolved hallucination risks. Is forced adoption wise before reliability is proven? Our deep dive examines the operational challenges of scaling AI. post.kapualabs.com/arnew3zs #AI #TechEthics #OperationalRisk $AVGO
