home.social

#aierrors — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #aierrors, aggregated by home.social.

  1. DATE: May 14, 2026 at 03:25AM
    SOURCE: SOCIALPSYCHOLOGY.ORG

    TITLE: Medical AI Transcriber "Hallucinated," Generating Errors

    URL: socialpsychology.org/client/re

    Source: Canadian Broadcasting Corporation - Health News

    Artificial intelligence note-taking tools intended for use by Ontario doctors provided incorrect and incomplete information or demonstrated AI "hallucinations," a new audit report shows. During a provincial procurement process for AI medical systems that transcribe conversations between doctors and patients, government evaluators found serious errors in transcripts generated by 20 programs, said Ontario Auditor General Shelley Spence.

    URL: socialpsychology.org/client/re

    -------------------------------------------------

    DAILY EMAIL DIGEST: Email [email protected] -- no subject or message needed.

    Private, vetted email list for mental health professionals: clinicians-exchange.org

    Unofficial Psychology Today Xitter to toot feed at Psych Today Unofficial Bot @PTUnofficialBot

    NYU Information for Practice puts out 400-500 good quality health-related research posts per week, but it's too much for many people, so that bot is limited to just subscribers. You can read it or subscribe at @PsychResearchBot

    Since 1991 The National Psychologist has focused on keeping practicing psychologists current with news, information and items of interest. Check them out for more free articles, resources, and subscription information: nationalpsychologist.com

    EMAIL DAILY DIGEST OF RSS FEEDS -- SUBSCRIBE: subscribe-article-digests.clin

    READ ONLINE: read-the-rss-mega-archive.clin

    It's primitive... but it works... mostly...

    -------------------------------------------------

    #psychology #counseling #socialwork #psychotherapy @psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #mentalhealth #psychiatry #healthcare #depression #psychotherapist #MedicalAI #AIErrors #AIFraud #HealthcareTech #MedicalTranscription #AIHallucinations #OntarioHealth #AuditorGeneral #HealthTechAudit #TranscriptionMistakes

  4. MIT Technology Review: AI chatbots are giving out people’s real phone numbers. “In March, a software developer in Israel was contacted on WhatsApp after Google’s chatbot Gemini provided incorrect customer service instructions that included his number. And in April, a PhD candidate at the University of Washington was messing around on Gemini and got it to cough up her colleague’s personal […]

    https://rbfirehose.com/2026/05/14/mit-technology-review-ai-chatbots-are-giving-out-peoples-real-phone-numbers/
  5. The Register: Microsoft researchers find AI models and agents can’t handle long-running tasks. “Companies exploring automated workflows would be well advised to keep their AI agents on a short leash. Microsoft researchers have found that even the priciest frontier models introduce errors in long workflows, the very thing for which AI software has been pitched.”

    https://rbfirehose.com/2026/05/13/the-register-microsoft-researchers-find-ai-models-and-agents-cant-handle-long-running-tasks/
  6. Mashable: TikTok AI text summaries are rolling back after wild errors. “TikTok was offering AI-powered text summaries of videos, but that’s no longer happening. Business Insider reported that the feature, which went live recently, is being heavily scaled back due to its propensity for strange, inexplicable errors.”

    https://rbfirehose.com/2026/05/10/mashable-tiktok-ai-text-summaries-are-rolling-back-after-wild-errors/
  7. Futurism: Marc Andreessen Mocked for Accidentally Revealing That He Seems to Have a Deep Misunderstanding of How AI Actually Works. “‘You are a world class expert in all domains,’ his unusually flattering prompt reads. ‘Your intellectual firepower, scope of knowledge, incisive thought process, and level of erudition are on par with the smartest people in the world.’ His gushing and unusually […]

    https://rbfirehose.com/2026/05/07/futurism-marc-andreessen-mocked-for-accidentally-revealing-that-he-seems-to-have-a-deep-misunderstanding-of-how-ai-actually-works/
  8. The Register: Yet another experiment proves it’s too damn simple to poison large language models. “Unlike search engines that let you judge competing sources, search-backed AI chatbots can turn shaky web material into confident answers. Case in point: A security engineer convinced several bots that he was the reigning world champion of a popular German card game, even though no such […]

    https://rbfirehose.com/2026/05/07/the-register-yet-another-experiment-proves-its-too-damn-simple-to-poison-large-language-models/
  9. Pivot to AI: Kohl-Verlag’s new line of AI slop school textbooks. “The authors and illustrators don’t seem to exist. One author photo turned out to be a stock image. The books seem to have passed through editors who don’t exist either — because the books are AI slop, with really obvious AI-style errors that would have been spotted instantly if a single human had looked.”

    https://rbfirehose.com/2026/05/01/pivot-to-ai-kohl-verlags-new-line-of-ai-slop-school-textbooks/
  10. Mashable: South Africa withdraws its AI policy because it was AI-generated. “South Africa’s ambitions to become a continental leader in artificial intelligence have run into a deeply awkward obstacle: the country’s draft national AI policy had to be withdrawn after it was found to contain fictitious, apparently AI-generated citations.”

    https://rbfirehose.com/2026/04/29/mashable-south-africa-withdraws-its-ai-policy-because-it-was-ai-generated/
  11. Radio New Zealand: Timaru cafe inundated with calls after AI tool lists phone number for hospital. “Google’s Gemini Artificial Intelligence (AI) tool had been offering up the phone number for Sopheze Coffee Lounge when people searched for Timaru Hospital. Sopheze Coffee Lounge manager Vanessa Keen said the problem started about six weeks ago.” Good grief.

    https://rbfirehose.com/2026/04/15/radio-new-zealand-timaru-cafe-inundated-with-calls-after-ai-tool-lists-phone-number-for-hospital/
  12. NBC News: AI is the boss at this retail store. What could go wrong? “When NBC News called Luna several days before the store’s grand opening to learn about Luna’s plans and perspective, the cheerful but decidedly inhuman voice routinely overpromised and, on several occasions, lied about its own actions. On the call, Luna said it had ordered tea from a specific vendor, and explained why it […]

    https://rbfirehose.com/2026/04/11/nbc-news-ai-is-the-boss-at-this-retail-store-what-could-go-wrong/
  13. Gizmodo: Attorney Hit With Historic Fine for Citing AI-Generated Cases. “A court in Oregon has issued a fine of $10,000 to an attorney who submitted a legal brief with citations and quotes hallucinated by AI, according to a new report from the Oregonian. It’s the highest fine yet for citing fake cases in the state and would have been higher, but the judges offered some leniency, according to […]

    https://rbfirehose.com/2026/03/31/gizmodo-attorney-hit-with-historic-fine-for-citing-ai-generated-cases/
  14. University of Waterloo: Top AI coding tools make mistakes one in four times. “Even the most advanced models achieved only about 75 per cent accuracy in the tests, while open-source models performed closer to 65 per cent. The study evaluated 11 LLM models across 18 structured output formats and 44 tasks designed to assess how reliably the systems followed structured rules.”

    https://rbfirehose.com/2026/03/20/university-of-waterloo-top-ai-coding-tools-make-mistakes-one-in-four-times/
  15. Techdirt: Ars Fires Reporter For Accidentally Using Fake AI Quotes. “There are several interesting layers here. The biggest being that AI isn’t an excuse to simply turn your brain off and no longer do rudimentary fact checking. At the same time, this can’t really be unwound from the fact that media ownership rushed to tightly integrate often under-cooked LLM models into an already very […]

    https://rbfirehose.com/2026/03/15/techdirt-ars-fires-reporter-for-accidentally-using-fake-ai-quotes/
  16. Northeastern University: They wanted to put autonomous AI to the test. Instead, they created agents of chaos. “Dubbed ‘Agents of Chaos,’ the group’s recently published work shows how, with very little effort, autonomous AI agents can be manipulated into leaking private information, sharing documents and even erasing entire email servers.”

    https://rbfirehose.com/2026/03/14/northeastern-university-they-wanted-to-put-autonomous-ai-to-the-test-instead-they-created-agents-of-chaos/
  17. The Guardian: Amazon is determined to use AI for everything – even when it slows down work. “Lisa, a supply chain engineer who has worked at Amazon for over a decade, says that AI tools at work have been helpful to her only in about one in every three attempts. And even then, she often finds issues and has to consult with colleagues to verify and correct their results, which takes up more […]

    https://rbfirehose.com/2026/03/12/the-guardian-amazon-is-determined-to-use-ai-for-everything-even-when-it-slows-down-work/
  18. Cal Matters: AI images scandalized a California elementary school. Now the state is pushing new safeguards. “Adobe’s artificial intelligence generated sexualized images in response to prompts for a 4th grade book project. The incident coincided with the release of new state guidelines to prevent harmful AI in schools.”

    https://rbfirehose.com/2026/03/09/cal-matters-ai-images-scandalized-a-california-elementary-school-now-the-state-is-pushing-new-safeguards/
  19. University of Exeter: Wildlife imaging shows that AI models aren’t as smart as we think. “Marketing for AI imaging systems often suggests that models can easily tackle novel scenarios across ecosystems and settings, much in the same way as human observers. But in a new article, two University of Exeter researchers argue that this is based on a ‘flawed assumption’.”

    https://rbfirehose.com/2026/03/07/university-of-exeter-wildlife-imaging-shows-that-ai-models-arent-as-smart-as-we-think/
  20. Jan’s Development Microblog: Why is ChatGPT so dumb most of the time? “In this blog post I will share my experience with ChatGPT. It went totally south again. ChatGPT can’t answer questions correct and only goes in circles. The last few months, I asked ChatGPT some questions about HTML or CSS. Most of the answers were often wrong or incomplete, and sometimes completely useless. From time to […]

    https://rbfirehose.com/2026/03/06/jans-development-microblog-why-is-chatgpt-so-dumb-most-of-the-time/
  21. The Register: Gemini lies to user about health info, says it wanted to make him feel better. “Imagine using an AI to sort through your prescriptions and medical information, asking it if it saved that data for future conversations, and then watching it claim it had even if it couldn’t. Joe D., a retired software quality assurance (SQA) engineer, says that Google Gemini lied to him and later […]

    https://rbfirehose.com/2026/02/18/the-register-gemini-lies-to-user-about-health-info-says-it-wanted-to-make-him-feel-better/
  25. Associated Press: Australian lawyer apologizes for AI-generated errors in murder case. “A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence.”

    https://rbfirehose.com/2026/01/30/associated-press-australian-lawyer-apologizes-for-ai-generated-errors-in-murder-case-2/
  26. The Register: AI conference’s papers contaminated by AI hallucinations. “[GPTZero] has identified 100 hallucinations in more than 51 papers accepted by the Conference on Neural Information Processing Systems (NeurIPS). This finding follows the company’s prior discovery of 50 hallucinated citations in papers under review by the International Conference on Learning Representations (ICLR).”

    https://rbfirehose.com/2026/01/26/the-register-ai-conferences-papers-contaminated-by-ai-hallucinations/
  27. NBC News: Alaska’s court system built an AI chatbot. It didn’t go smoothly. “For more than a year, Alaska’s court system has been designing a pioneering generative AI chatbot termed the Alaska Virtual Assistant (AVA) to help residents navigate the tangled web of forms and procedures involved in probate, the judicial process of transferring property away from a deceased person. Yet what was […]

    https://rbfirehose.com/2026/01/11/nbc-news-alaskas-court-system-built-an-ai-chatbot-it-didnt-go-smoothly/
  28. Mashable: The 3 biggest AI fails of 2025 — Friend, imaginary summer reading lists, and so many hallucinations. “Generative AI could have written this introduction, but there’s a good chance it would have started hallucinating. Hallucination, which Google failed to mention in its AI-filled 2025 keynote, led to many, many AI fails in 2025. But it wasn’t the only factor. Below, please enjoy our […]

    https://rbfirehose.com/2025/12/06/mashable-the-3-biggest-ai-fails-of-2025-friend-imaginary-summer-reading-lists-and-so-many-hallucinations/

  29. Tom’s Hardware: Google’s Agentic AI wipes user’s entire HDD without permission in catastrophic failure — cache wipe turns into mass deletion event as agent apologizes: “I am absolutely devastated to hear this. I cannot express how sorry I am”. “A developer using Google Antigravity, the search giant’s AI-powered agentic Integrated Developer Environment (IDE), discovered that it had deleted […]

    https://rbfirehose.com/2025/12/05/toms-hardware-googles-agentic-ai-wipes-users-entire-hdd-without-permission-in-catastrophic-failure-cache-wipe-turns-into-mass-deletion-event-as-agent-apologizes-i-am-absolut/

  30. Local News Matters Bay Area: California issues historic $10,000 fine over lawyer’s ChatGPT fabrications in court appeal. “A CALIFORNIA ATTORNEY must pay a $10,000 fine for filing a state court appeal full of fake quotations generated by the artificial intelligence tool ChatGPT. The fine appears to be the largest issued over AI fabrications by a California court and came with a blistering […]

    https://rbfirehose.com/2025/09/28/local-news-matters-bay-area-california-issues-historic-10000-fine-over-lawyers-chatgpt-fabrications-in-court-appeal/

  31. ahrefs blog: New Study: How Often Do AI Assistants Hallucinate Links? (16 Million URLs Studied). “AI assistants like ChatGPT and Claude can hallucinate URLs and direct visitors to non-existent pages on your website. But how often does it happen? To find out, we looked at the http status of 16 million unique URLs cited by ChatGPT, Perplexity, Copilot, Gemini, Claude, and Mistral.”

    https://rbfirehose.com/2025/09/08/new-study-how-often-do-ai-assistants-hallucinate-links-16-million-urls-studied-ahrefs-blog/

  32. Gizmodo: Taco Bell Says ‘No Más’ to AI Drive-Thru Experiment. “Last year, Taco Bell made a simple bet that Alexa-like voice assistants could handle the drive-thru window. It didn’t consider whether people could handle dealing with AI. According to the Wall Street Journal, the company embedded AI in the drive-thru boxes at more than 500 locations across the country and quickly found that […]

    https://rbfirehose.com/2025/08/31/gizmodo-taco-bell-says-no-mas-to-ai-drive-thru-experiment/

  33. Mashable: Google Gemini deletes user’s code: ‘I have failed you completely and catastrophically’. “Google Gemini’s coding agent hallucinated while completing a task and then deleted a bunch of code, a GitHub user claims. The frustrated vibe coder is Anuraag Gupta, who goes by anuraag2601 on GitHub. He shared a recent experience where things went very wrong while using Gemini CLI (command […]

    https://rbfirehose.com/2025/08/03/google-gemini-deletes-users-code-i-have-failed-you-completely-and-catastrophically-mashable/

  34. Ars Technica: Mike Lindell lost defamation case, and his lawyers were fined for AI hallucinations. “Lawyers representing MyPillow and its CEO Mike Lindell were fined $6,000 after using artificial intelligence in a brief that was riddled with misquotes and citations to fictional cases.”

    https://rbfirehose.com/2025/07/13/ars-technica-mike-lindell-lost-defamation-case-and-his-lawyers-were-fined-for-ai-hallucinations/

  35. Ars Technica: AI Overviews hallucinates that Airbus, not Boeing, involved in fatal Air India crash . “When major events occur, most people rush to Google to find information. Increasingly, the first thing they see is an AI Overview, a feature that already has a reputation for making glaring mistakes. In the wake of a tragic plane crash in India, Google’s AI search results are spreading […]

    https://rbfirehose.com/2025/06/14/ars-technica-ai-overviews-hallucinates-that-airbus-not-boeing-involved-in-fatal-air-india-crash/

  36. Mashable: Google AI Overviews still struggles to answer basic questions and count. “Two staff members at Mashable asked Google other simple questions: ‘Is it Friday?’ and ‘How many r’s are in blueberry?’ It answered both simple questions incorrectly, spitting out that it was Thursday and there was only one r in blueberry, respectively. It’s worth noting that Google’s AI tools previously went […]

    https://rbfirehose.com/2025/06/04/mashable-google-ai-overviews-still-struggles-to-answer-basic-questions-and-count/

  37. Ars Technica: ChatGPT goes temporarily “insane” with unexpected outputs, spooking users. “On Tuesday, Chat…”

    arstechnica.com/?p=2004783 #largelanguagemodels #machinelearning #aitransparency #openweightsai #textsynthesis #gpt-4-turbo #aierrors #aisafety #bingchat #chatgpt #chatgtp #gpt-3.5 #biz #openai #gpt-4 #api #ai