#llms — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #llms, aggregated by home.social.
-
In recent months everyone has been talking about #AIO #AEO and #GEO 🤖
But very often we run into a common mistake:
thinking that #AIoptimization can replace the #SEO foundations.
The reality is that #LLMs read the web 🌐
And the web is still structured around SEO.
Only on top of a solid SEO foundation is it worth making AI-specific optimizations.
#DigitalAssetOptimization #DigitalStrategy #ContentStrategy #AIVisibility #AboutSolution -
Every morning she comes over to my mother and father for a coffee, and answers pretty much every question with "Ask artificial intelligence!"
One day she complained about her TV antenna not working. My mother said "I don't know much about antennas". But the day after, she went over to her place and managed to fix the antenna. Then my mother asked her: "Why didn't you ask artificial intelligence?" 🤪
-
DATE: May 14, 2026 at 10:00AM
SOURCE: PSYPOST.ORG
** Research quality varies widely from fantastic to small exploratory studies. Please check research methods when conclusions are very important to you. **
-------------------------------------------------
TITLE: Real-world evidence shows generative AI is making human creative output more uniform
Using artificial intelligence for creative tasks tends to make human output more uniform on a collective level. A recent preprint study provides evidence that while these tools might boost individual performance, they contribute to an overall reduction in the diversity of ideas across different users. This widespread reliance on automated assistance could lead to a narrower range of concepts in collaborative environments.
Generative artificial intelligence refers to computer programs capable of creating new text, images, or other media based on user instructions. The most common of these tools rely on large language models. Developers build these models by feeding them billions of sentences from the internet, allowing the software to recognize patterns and predict how words should follow one another.
Since many users interact with similar systems trained on overlapping data, scientists have raised concerns about how this technology shapes human thought. Researchers Alwin de Rooij, assistant professor in creativity research at Tilburg University and associate professor at Avans University of Applied Sciences, and Michael Mose Biskjaer, associate professor in design creativity and innovation at Aarhus University, designed a new study to assess these concerns. They noticed that previous research often focused on how these tools help individuals work faster or overcome temporary mental blocks.
They wanted to know if this individual assistance comes at a collective cost. “There are growing concerns that using Generative AI may lead people toward similar creative ideas,” the authors explained. “While AI can enhance creativity at the individual level, these benefits might come at a cost for creativity at a collective, or even societal, level.”
The authors sought to answer whether generative software makes people think alike. “We sought to address this by conducting a systematic review and meta-analysis of 19 empirical studies,” they noted. “More concretely, we wanted to examine whether and to what extent generative AI use is associated with convergence at the level of creative output, such as people’s ideas, designs, and creative writing.”
A meta-analysis is a statistical technique that combines the results of multiple independent studies to find common patterns or overall trends. By pooling data from various experiments, scientists can draw more robust conclusions than they could from a single test. The authors searched academic databases for studies published between 2022 and early 2026.
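For readers unfamiliar with the mechanics, the sketch below shows the inverse-variance pooling that sits at the core of most meta-analyses, with a DerSimonian-Laird random-effects step. The effect sizes and variances are invented for illustration; they are not values from the de Rooij and Biskjaer analysis.

```python
import numpy as np

# Hypothetical per-study effect sizes and sampling variances (NOT study data).
effects = np.array([0.31, 0.18, 0.42, 0.25])
variances = np.array([0.02, 0.05, 0.03, 0.04])

# Fixed-effect step: weight each study by the precision of its estimate.
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2).
Q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects pooled estimate: add tau^2 to each study's variance.
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
```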
This time frame covers the period following the public release of popular chatbots, capturing the first wave of empirical research on this topic. The researchers selected 18 eligible articles containing 19 distinct experimental studies. These studies provided a total of 61 individual effect sizes, which are mathematical values indicating the strength of a specific phenomenon.
To be included in the analysis, the original experiments had to compare humans working with generative software against humans working alone. The original studies measured homogenization using several techniques. Many relied on advanced text analysis tools that translate written responses into mathematical coordinates.
This process allows computers to measure the semantic distance between words, essentially calculating how closely related different ideas are to one another. Other studies used human experts to rate the variety of meanings produced by participants. The analysis revealed a statistically significant homogenization effect associated with the use of artificial intelligence.
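As a rough illustration (not the exact pipeline of any study in the review), the snippet below embeds a few made-up participant responses with an off-the-shelf sentence-embedding model and averages their pairwise cosine similarity, the kind of score a homogenization analysis would compare between AI-assisted and unassisted groups.

```python
from itertools import combinations
import numpy as np
from sentence_transformers import SentenceTransformer

# Model choice and responses are illustrative assumptions only.
model = SentenceTransformer("all-MiniLM-L6-v2")

responses = [
    "Use the paperclip as a zipper pull.",
    "Bend the paperclip into a small phone stand.",
    "Use the paperclip as a bookmark.",
]
emb = model.encode(responses)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Higher mean pairwise similarity = more homogeneous output across people.
sims = [cosine(emb[i], emb[j]) for i, j in combinations(range(len(responses)), 2)]
print(f"mean pairwise similarity: {sum(sims) / len(sims):.3f}")
```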
When people co-created with these systems, their final products tended to be more similar to the work of other users. “The meta-analysis shows that using generative AI can indeed lead people to think alike,” the authors noted. “Across individuals, AI use tends to make ideas, designs, and creative texts more similar to one another.”
“This suggests that AI may contribute to a form of homogenization of creative thought at the collective level,” they continued. “Importantly, this does not necessarily reflect a failure of human-AI co-creation but may instead be an inherent feature of how these systems currently support creative work at scale.”
The scientists also evaluated whether the type of task influenced the degree of uniformity. They categorized the experiments into four groups, which included divergent thinking, idea generation, writing, and visual art. Divergent thinking tasks are highly open-ended exercises, such as asking someone to list creative uses for a paperclip.
Idea generation tasks provide more specific constraints, such as asking for solutions to improve public transportation. The analysis showed that the homogenization effect was strongest in the idea generation tasks. Because these exercises require specific solutions to defined problems, users likely rely more heavily on the predictable suggestions provided by the computer algorithms.
The researchers did not find strong statistical evidence for differences among the other three categories, suggesting that open-ended tasks lead to less convergence. They also checked if these patterns only happen in highly controlled laboratory settings. The authors compared traditional laboratory experiments with real-world scenarios, such as analyzing published essays and visual artworks created before and after the widespread adoption of automated writing tools.
The analysis of these real-world conditions showed a small but significant reduction in idea diversity. “In many ways, the findings resemble classic fixation effects from the psychology literature, where exposure to examples constrains later thinking, but here they appear amplified by the scale and synchronicity of generative AI model use,” the researchers stated. “This homogenization effect was observed not only in controlled lab studies but also in real-world quasi-experiments. This suggests that it is not merely a lab-based phenomenon, but a practical concern affecting concrete creative processes and practices.”
De Rooij and Biskjaer also investigated whether this narrowing of ideas persists after a person stops using the software. They isolated a subset of studies that tested participants on new creative tasks after their initial interaction with the computer models. The results suggest that the homogenization effect carries over into these subsequent activities.
“The findings also provide preliminary evidence that homogenization effects may persist beyond moments of direct AI use,” the researchers told PsyPost. “In other words, interacting with these generative AI systems may shape how people think and generate ideas even after the interaction has ended. This potential ‘rub-off’ effect on creative cognition warrants further research and is something we would like to explore in more depth.”
These results closely align with another recent study published in the journal PNAS Nexus. Scientists Emily Wenger and Yoed N. Kenett tested how large language models affect human creativity by evaluating 22 different commercial chatbots. They recruited 102 human participants to complete a series of verbal creativity tests, including the alternative uses task, and then asked the chatbots to complete the exact same assignments.
Wenger and Kenett found that individual language models performed at or slightly above the level of the average human on most exercises. When viewed in isolation, a single chatbot provided highly original and creative responses. However, when the scientists compared all the responses from the different models, a stark pattern of similarity emerged.
Across all tasks, the computer programs produced answers that were significantly more alike than the answers provided by the human participants. Both sets of researchers point to similar underlying mechanisms for this phenomenon. Because the major technology companies train their models on massive, overlapping datasets scraped from the internet, the programs naturally gravitate toward the most statistically common word associations.
When thousands of people use these tools to generate ideas, the software acts as a semantic anchor. The models pull human users toward a shared set of typical concepts, reducing the overall variety of ideas. Wenger and Kenett attempted to fix this issue by adjusting the internal settings of the chatbots to force more random text generation, but this caused the models to produce nonsensical sentences.
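The "internal settings" in question are most likely the sampling temperature, which rescales the model's next-token probabilities. The toy example below, with invented token scores, shows why pushing temperature up trades predictability for incoherence.

```python
import numpy as np

# Hypothetical next-token logits; real models score ~100k-token vocabularies.
logits = np.array([4.0, 2.0, 1.0, 0.5, 0.1])

def token_probs(logits: np.ndarray, temperature: float) -> np.ndarray:
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

for t in (0.2, 1.0, 3.0):
    print(t, np.round(token_probs(logits, t), 3))

# Low temperature concentrates probability on the single most likely token
# (uniform, predictable output); high temperature flattens the distribution
# toward random token choice, which is where nonsensical sentences come from.
```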
Readers should avoid interpreting these findings as proof that human beings are becoming entirely uncreative. De Rooij and Biskjaer note that the reduction in collective diversity does not equal a total loss of individual ability. “A key point is that our findings do not show that using AI reduces creativity,” the researchers emphasized.
“Rather, they point to a shift in where and how creative diversity occurs, and where it may be constrained,” the authors said. “Individual output can improve in creative quality while becoming more similar across people. While these effects are often subtle in single instances, they may become meaningful when considered at the scale at which generative AI is now being used.”
The authors point out some limitations to their current analysis. The review primarily focuses on text-based tools and large language models, meaning the findings might not apply to other types of computer systems. For instance, adaptive machine learning programs or tools used for music composition were not adequately represented in the available data.
This restricts how broadly the scientific community can apply these conclusions across different artistic domains. Additionally, the analyses regarding long-term persistence and real-world applications relied on relatively small groups of studies. The limited data makes these specific conclusions tentative and open to revision.
Future research should explore different forms of human and machine collaboration over extended periods of time. “An important next step is rethinking how generative AI systems are designed and used in creative contexts to mitigate homogenization effects,” the authors noted. “This includes exploring alternative workflows, interaction designs, and creative strategies that sustain diversity rather than encourage early convergence.”
“One step in this direction has already been taken by mapping creative strategies for working with generative AI and machine learning, based on analyses of AI art practices,” they added, referencing a recently published article outlining this approach. “We believe these strategies can transfer to other creative domains.”
The preprint study, “Does Generative AI Make Us Think Alike? A Systematic Review and Meta-Analysis of Homogenization Effects in Human-AI Co-Creation,” was authored by Alwin de Rooij and Michael Mose Biskjaer.
-------------------------------------------------
DAILY EMAIL DIGEST: Email [email protected] -- no subject or message needed.
Private, vetted email list for mental health professionals: https://www.clinicians-exchange.org
Unofficial Psychology Today Xitter to toot feed at Psych Today Unofficial Bot @PTUnofficialBot
NYU Information for Practice puts out 400-500 good quality health-related research posts per week, but it's too much for many people, so that bot is limited to just subscribers. You can read it or subscribe at @PsychResearchBot
Since 1991 The National Psychologist has focused on keeping practicing psychologists current with news, information and items of interest. Check them out for more free articles, resources, and subscription information: https://www.nationalpsychologist.com
EMAIL DAILY DIGEST OF RSS FEEDS -- SUBSCRIBE: http://subscribe-article-digests.clinicians-exchange.org
READ ONLINE: http://read-the-rss-mega-archive.clinicians-exchange.org
It's primitive... but it works... mostly...
-------------------------------------------------
#psychology #counseling #socialwork #psychotherapy @psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #mentalhealth #psychiatry #healthcare #depression #psychotherapist #GenerativeAI #CreativityDiversity #AICoCreation #Homogenization #CreativeThinking #AIImpact #CreativeDiversity #LLMs #TechEthics #InnovationScience
-
RE: https://dair-community.social/@KimCrayton1/116572642123503942
This is fascinating… and for sure it’s not binary.
#LLMs #organisationalInfrastructure #hiringAndFiring #restaffing #GrandTheftAutoComplete
-
For context:
We went from “cannot understand the difference between C and PHP” to “can sometimes write a valid function” to “works reasonably well on single files” to “can build a full greenfield app but needs extensive guidance on architecture and APIs” to “can build a full app with an engineer in the loop and build on top of it for a few weeks” to “decent at architecture and can build smaller systems without guidance” in 3 years.
But when I was trying to talk about labor issues and about this being a paradigm shift for the industry at large, the standard response was that I was deluded and spreading FUD. The take that the tools are useless has been constant too, except the goalposts constantly move to wherever the current state of the art happens to be. Another take that never dies is that using llm-based tools somehow can’t involve skill, that there is no difference between the prompting of an experienced software engineer who has spent years working with llms and the 3 prompts someone has put into a random model “to try things out”. Imagine someone coming to Elixir from Java, typing a few Java-style classes, getting errors on every run, and concluding: “elixir is kinda useless, all I got to run was this super barebones program after 17 tries and lots of compile errors”.
Whether you like using these tools or not (especially if you don’t like them), and especially if you are relatively new to them, spend just a few minutes or hours comparing how far you get with llama (the OG) and pure copy-paste by hand, versus a newer 8B model in an agent harness, versus a model like glm5.1, gpt5.5, or opus4.6 in a harness.
That’s the last 2 years in a bottle.
-
You can observe the moving of goalposts that seems to be a constant when dealing with #llm #llms and coding, right now:
We went from “llms can’t find real bugs, just a hallucinated mess” to “llms can find valid bugs but are not able to construct exploitation paths” to “llms can find real security issues and construct exploitation chains, but they’re not real security bugs exploitable at large” to “some are real and exploitable, but you need xyz to do it” within, like, a year.
Predicted trajectory: “some are real and exploitable but you need knowledge to properly prompt the llm” to “security is not about finding exploitable bugs” to “why are all the techbros suddenly rushing into software security” to “we are banning all llm based security tooling”.
Maybe you are not yet used to this cycle, but this hopefully explains why some of the voices much less invested in the hype are genuinely worried, and why the Mythos announcement is just an opportunity to have a wider discussion. Mythos is hype, yes, but the problem was already here and won’t go away even if Mythos turns out to be a “dud”. The scary part is the trajectory.
-
#State #media #control influences #LargeLanguageModels (#LLMs)
"Millions of people around the world query LLMs for information. Although several studies have compellingly documented the persuasive potential of these models, there is limited evidence of who or what influences the models themselves, leading to a flurry of concerns about which companies and governments build and regulate the models. Here we show through six studies that government control of the media across the world already influences the output of LLMs via their #TrainingData. We use a cross-national audit to show that LLMs exhibit a #stronger #ProGovernment valence in the languages of countries with #LowerMediaFreedom than in those with higher media freedom. The combination of influence and persuasive potential across languages suggests the troubling conclusion that states and powerful institutions have increased strategic incentives to leverage media control in the hopes of shaping LLM output."
-
@davidgerard When one blindly trusts purely #GenAI-built software beyond throwaway prototypes, one is basically being very stupid and throwing money at stupidity. It wasn't just #Dijkstra warning of the "complexity generators" at the #ACM in 1975; the #CHILI effort also predates the trend: https://chili.cs.illinois.edu/ And there is the #SOUP definition, Software of Unknown Provenance: https://en.wikipedia.org/wiki/Software_of_unknown_pedigree I prefer the #IEC62304 (medical products) wording. #LLMs #agentic #ai @wdtz
-
@cford I've always believed that we (or at least I) think by jumping to conclusions and then attempting to generate rational arguments to support those conclusions. A semantically augmented #LLM hybrid could generate an answer as #LLMs do now (jumping to a conclusion), and then perhaps parse that answer as a Toulmin structure and use inference to verify it (post-justification).
I can see how that could be done. It would be interesting to try.
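As a rough illustration of how that hybrid might be wired up, here is a minimal sketch: the Toulmin structure as a small data type, a hypothetical parser that recasts the LLM's answer into it, and a toy inference step that accepts the claim only if its grounds and warrant check out. All helper names and the knowledge base are illustrative assumptions, not an existing library:

```python
# Hypothetical sketch of the "answer first, justify afterwards" hybrid the
# post describes: an LLM produces an answer (the jumped-to conclusion), a
# parser recasts it as a Toulmin argument, and an inference step checks the
# justification. parse_toulmin and KNOWN_FACTS are illustrative stand-ins.
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    claim: str                 # the conclusion the model jumped to
    grounds: list[str]         # evidence offered for the claim
    warrant: str               # rule linking the grounds to the claim
    rebuttals: list[str] = field(default_factory=list)  # known exceptions

# Toy knowledge base: facts the inference step accepts as true.
KNOWN_FACTS = {"socrates is a man", "all men are mortal"}

def parse_toulmin(answer: str) -> ToulminArgument:
    # Stand-in parser; a real system might use a second LLM pass or a
    # semantic parser to extract these fields from free text.
    return ToulminArgument(
        claim="socrates is mortal",
        grounds=["socrates is a man"],
        warrant="all men are mortal",
    )

def post_justify(arg: ToulminArgument) -> bool:
    # Inference step: accept the claim only if every ground and the warrant
    # are backed by the knowledge base and no rebuttal applies.
    grounds_ok = all(g in KNOWN_FACTS for g in arg.grounds)
    return grounds_ok and arg.warrant in KNOWN_FACTS and not arg.rebuttals

llm_answer = "Socrates is mortal because he is a man and all men are mortal."
print(post_justify(parse_toulmin(llm_answer)))  # True: the claim survives
```

The interesting design question such a sketch leaves open is the parser: extracting a faithful Toulmin structure from free text is itself a hard semantic task, which is presumably where the "semantically augmented" part of the proposal would have to do its work.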
-
#Spotify says its best developers haven’t written a line of code since December, thanks to AI - February 2026:
Also Spotify, May 2026:
"Hey everyone!
We've received some reports mentioning that the app, support site and the Web Player are slow or not working properly. This is being investigated.
Cheers."
I'm so tired of buggy #software!
-
It gave me a relationship between Queen Victoria and Prince Mstislav I of Kiev which, although probably correct, was not the one I expected.
Of course, the royal houses of Europe being so intermarried, there's bound to be more than one way of tracing that connection.
But that does illustrate the kinds of answers #LLMs give:
1. Correct, and easy to verify;
2. Plausible, but hard to verify;
3. False, and easy to refute.
It's the type 2 answers that are most interesting, and also the most dangerous.