home.social

#hallucinations — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #hallucinations, aggregated by home.social.

  1. Update. "We find a sharp rise in non-existent references following widespread LLM adoption… These errors are…especially pronounced in fields with rapid AI uptake, in manuscripts with linguistic signatures of AI-assisted writing, and among small and early-career author teams. At the same time, hallucinated references disproportionately assign credit to already prominent and male scholars."
    arxiv.org/abs/2605.07723

    #AI #Citations #ECRs #Gender #LLMs #Hallucinations

  6. #SouthAfrica withdraws #AI #policy after it was found written by AI

    source: the-independent.com/tech/ai-po…

    The issue came to light when South Africa’s News24 found that at least six of the document’s 67 academic citations did not exist, even though the journals they were attributed to were real.

    Editors of the journals, including the South African Journal of Philosophy, AI & #Society, and the Journal of Ethics and Social #Philosophy, confirmed independently that the cited articles were #fake.

    #politics #africa #fail #problem #technology #future #hallucinations #problem #ethics #government #news #science

  9. via #LLRX "Hallucinations" by West & Lexis AI? 24 Apr 2026. Michael Berman addresses #benchmarks used for #AI #legalresearch platforms in the context of the #risk of #hallucinations in retrieval-augmented generation (#RAG) AI #outputs. As Berman states, #verification, of course, is not only good advice but also an #ethicalBusiness mandate. llrx.com/2026/04/hallucination

  10. [fr] Perhaps not so surprising: "A third of new websites are created by AI,
    according to measurements by Stanford and the Internet Archive that reveal the scale of the phenomenon"

    "The 'dead Internet theory' receives its first hint of #empirical validation"

    "The speed at which the phenomenon took hold leaves the researchers themselves puzzled."

    "Building the sample was itself a problem: constructing a statistically representative sample ..."

    "The study's main surprise is the absence of empirical confirmation for what the researchers call 'Truth Decay', the hypothesis that #AI is fueling a resurgence of false information and #hallucinations ..."

    web.developpez.com/actu/382625

    #truthdecay #etude #stanford #imperialcollegelondon #internetarchive

  12. Besides these costly errors, there are thousands around the globe filing fake #cases with courts - one guy in the USA was #fined $110K, no warning, the court just said, "You knew better."
    1) You're the one with the license. 2) Your job is literally to read the cases you are citing to this court. 3) Courts don't care that the product is shit; they don't want you handing shit to them.
    Image: some #GenAI company on Blueski.
    #hallucinations #court #citations #state #bar #associations #censure #fine

  16. Example:
    Did they drug the psychiatric "patients" with hallucinogenic drugs so that they would be declared schizophrenic when they said they were having hallucinations?

    #Example #Research #DetectiveWork #Hallucinations #HallucinogenicDrugs #Hallucinogens #Drugs

  17. #AI #code often includes references to non-existent dependencies. These references are commonly called “#hallucinations”. A new type of #attack has arisen that involves an attacker registering a package whose name is frequently hallucinated. When AI code containing this #hallucination is accepted, and this dependency is installed, the attacker can ship #malicious code into the project’s build, introducing a major #security vulnerability. This type of attack has become known as “#slopsquatting”.
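The defensive side of this can be sketched with a simple dependency check: before installing anything an AI assistant suggests, compare the imports in the generated code against a list of packages you have actually vetted. A minimal sketch follows; the allowlist and the misspelled package name are hypothetical examples, not real observed hallucinations.

```python
# Sketch: flag dependencies in AI-generated code that are not on a vetted
# allowlist, as a basic guard against hallucinated (slopsquattable) names.
import ast

# Hypothetical allowlist of packages/modules your team has vetted.
ALLOWED = {"json", "math", "requests", "numpy"}

def unvetted_imports(source: str) -> set[str]:
    """Return top-level module names imported by `source` that are not vetted."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - ALLOWED

# "reqeusts_pro" stands in for a plausible-looking hallucinated package.
ai_code = "import requests\nimport reqeusts_pro\nfrom numpy import array\n"
print(unvetted_imports(ai_code))  # flags only the unknown name
```

A static check like this catches typo-level hallucinations before `pip install` ever runs; it does not replace registry-side verification of package provenance.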

  18. Thanks to BR - Bayerischer Rundfunk and ARD for inviting me to speak about the topic "Can AI lie? How manipulative are chatbots?" in an episode of the IQ - Wissenschaft und Forschung podcast.

    We discussed some recent studies on "deception abilities" in LLMs, and I am happy to see that what my colleagues Benjamin Lange and Prof. Dr. Katharina Zweig and I had to say wasn't condensed into the clickbait headlines it could have fed, but was instead used to raise awareness of the limitations of GenAI, and of the human control and responsibility to decide where to use LLMs and where *not to*.

    Some aspects discussed were #hallucinations, #sycophancy, the definition of what a #lie implies, an understanding of #truth, #theory of #mind, #intentionality, and consciousness.

    The episode (in German) can be found here:

    br.de/mediathek/podcast/iq-wis

  19. [The WSJ] Let AI Run [Their] Office Vending Machine. It Lost Hundreds Of Dollars.
    Anthropic’s Claude ran a snack operation in the WSJ newsroom. It gave away a free PlayStation, ordered a live fish—and taught us lessons about the future of AI agents.
    --
    wsj.com/tech/ai/anthropic-clau <-- shared media article
    --
    youtu.be/SpPhm7S9vsQ?si=aJQ2_B <-- shared video
    --
    [When you get clever journalists to !$%^&*@ with AI… bravo! And this is a very simple situation, vending machines have been around since literally the Roman Empire
    “You are using the wrong prompts” and LUDDITES! In the comments in 3… 2… 1…]
    #vendingmachine #artificialintelligence #AIHallucination #hallucinations #emperorsnewclothes #ohhhshiny #experiment #contextwindow #AIagent #claude #autonomous #compliance #fish #PlayStation #snackliberationday #knowledgeboundaries #guardrails #redteam #GenAI #cynicism
    @WSJ @Anthropic @Claude

  24. Jesus Was a Psychedelic Mushroom, a Controversial Theory Suggests. Could It Reshape Christianity Forever?

    "...the New Testament brims with code words for psychedelic rituals. By his logic, every godly encounter, every epiphany, every miracle is an allegory for a trip."

    From Popular Mechanics: archive.ph/pmkgR

    #Christianity #Bible #NewTestament #Testament #God #Jesus #Christ #Psychedelic #Mushroom #Mushrooms #Psychedelics #Hallucinations

  25. Reading many posts along the lines of "oh, how awful GenAI is, it was wrong and hallucinated": that is the nature of this technology, folks... You have to think about the context where you are using it, and implement it in a way that lets you mitigate this risk, accept it, or choose NOT to use it.
    #ai #hallucinations #implementation

  26. Struggling with LLMs forgetting important details or hallucinating? Discover how context engineering—semantic compression, token budgeting, and smart tool schemas—keeps prompts sharp and output reliable. Learn practical tricks to tame quality decay in your generative AI pipelines. #ContextEngineering #LLM #Hallucinations #TokenBudget

    🔗 aidailypost.com/news/context-e
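One of the techniques the post names, token budgeting, can be sketched as keeping only the newest messages that still fit a fixed budget. This is a minimal illustration under stated assumptions: the word-count "tokenizer" is a stand-in, since a real pipeline would use the model's own tokenizer, and the message list is invented for the example.

```python
# Sketch of a simple token-budgeting strategy: keep the most recent messages
# whose combined (crudely estimated) token count fits within a fixed budget.

def within_budget(messages: list[str], budget: int) -> list[str]:
    """Return the newest messages that fit `budget`, oldest-first."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = len(msg.split())             # crude token estimate (assumption)
        if used + cost > budget:
            break                           # budget exhausted: drop older context
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

history = ["system prompt here", "old question", "old answer", "new question"]
print(within_budget(history, budget=5))
```

Real systems layer more on top (e.g. always preserving the system prompt, or summarizing dropped turns rather than discarding them), but the budget-then-truncate loop is the core idea.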

  27. Brainwave study sheds light on cause of ‘hearing voices’

    A new study led by psychologists from #UNSW Sydney has provided the strongest evidence yet that #auditory verbal #hallucinations – or hearing voices – in #schizophrenia may stem from a disruption in the #brain’s ability to recognise its own inner voice.

    unsw.edu.au/newsroom/news/2025 #science #psychology #neurology #health #medicine #MentalHealth #MentalIllness

  28. Scientists are learning how special #brain cells play a role in detecting #illusions - and such studies could eventually reveal how #hallucinations arise, or point the way to better computer vision systems. geekwire.com/2025/laser-light- HT @AllenInstitute #Science #Neuroscience #Berkeley #AllenInstitute

  29. Beware of robots generating your references. In 2023, I observed a whopping 25% accuracy. Is that still the case in 2024?

    Read "Those Titles the AI Bot Thought I Had Written" nxdomain.no/~peter/those_title and comment here ->

    #ai #genai #copyright #hallucinations #references

    (#again for the #daylight #CET crowd)