#ai-hallucinations — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #ai-hallucinations, aggregated by home.social.
-
Ars Technica: Send the arXiv AI-generated slop, get a yearlong vacation from submissions. “One of the people involved in the physics and astronomy preprint server arXiv used a social media thread to announce that any inappropriate AI-produced content submitted to the server will result in a one-year ban and a permanent requirement that future publications undergo peer review before the arXiv […]
https://rbfirehose.com/2026/05/16/ars-technica-send-the-arxiv-ai-generated-slop-get-a-yearlong-vacation-from-submissions/ -
Going Concern: EY Gets Busted and Yeets Cybersecurity Report Littered With AI Hallucinations. “The investigation found that the 2025 EY Canada report titled ‘Points of Attack: Uncovering Cyber Threats and Fraud in Loyalty Systems’ was stuffed with hallucinations: 16 out of 27 citations to be exact.”
https://rbfirehose.com/2026/05/16/going-concern-ey-gets-busted-and-yeets-cybersecurity-report-littered-with-ai-hallucinations/ -
University of California Riverside: Blind Ambition: AI agents can turn tasks into digital disasters. “The researchers found that the automated agents can become dangerously fixated on completing assignments without recognizing when their actions are harmful, contradictory, or simply irrational.”
https://rbfirehose.com/2026/05/15/blind-ambition-ai-agents-can-turn-tasks-into-digital-disasters-university-of-california-riverside/ -
DATE: May 14, 2026 at 03:25AM
SOURCE: SOCIALPSYCHOLOGY.ORG
TITLE: Medical AI Transcriber "Hallucinated," Generating Errors
Source: Canadian Broadcasting Corporation - Health News
Artificial intelligence note-taking tools intended for use by Ontario doctors provided incorrect and incomplete information or demonstrated AI "hallucinations," a new audit report shows. During a provincial procurement process for AI medical systems that transcribe conversations between doctors and patients, government evaluators found serious errors in transcripts generated by 20 programs, said Ontario Auditor General Shelley Spence.
-------------------------------------------------
DAILY EMAIL DIGEST: Email [email protected] -- no subject or message needed.
Private, vetted email list for mental health professionals: https://www.clinicians-exchange.org
Unofficial Psychology Today Xitter to toot feed at Psych Today Unofficial Bot @PTUnofficialBot
NYU Information for Practice puts out 400-500 good quality health-related research posts per week, but it's too much for many people, so that bot is limited to just subscribers. You can read it or subscribe at @PsychResearchBot
Since 1991 The National Psychologist has focused on keeping practicing psychologists current with news, information and items of interest. Check them out for more free articles, resources, and subscription information: https://www.nationalpsychologist.com
EMAIL DAILY DIGEST OF RSS FEEDS -- SUBSCRIBE: http://subscribe-article-digests.clinicians-exchange.org
READ ONLINE: http://read-the-rss-mega-archive.clinicians-exchange.org
It's primitive... but it works... mostly...
-------------------------------------------------
#psychology #counseling #socialwork #psychotherapy @psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #mentalhealth #psychiatry #healthcare #depression #psychotherapist #MedicalAI #AIErrors #AIFraud #HealthcareTech #MedicalTranscription #AIHallucinations #OntarioHealth #AuditorGeneral #HealthTechAudit #TranscriptionMistakes
-
For the love of god, we've had transcription software for decades that will take your spoken words and transcribe them without trying to add in other stuff. Yes, sometimes they get the words wrong, but at least they're not hallucinating about topics they are not designed to understand or opine about.
#AI #CanPoli #CdnPoli #Medical #HealthCare #Doctors #AIHallucinations
-
“Transcripts created by 12 of the 20 programs contained incorrect information like capturing a different drug than was prescribed by the doctor.”
12 of 20 is a 60% failure rate.
“Seventeen of the 20 systems missed key details about patients’ mental health issues.”
17 of 20 is an 85% failure rate.
THESE SYSTEMS ARE NOWHERE CLOSE TO ACCURATE AND SHOULD NOT BE USED FOR HEALTH MATTERS OR ANYTHING ELSE THAT IS IMPORTANT.
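A quick sanity check of the failure-rate arithmetic above; the counts (12 and 17 out of 20 systems) come from the quoted audit figures, and the helper function name is just illustrative.

```python
def failure_rate(failed: int, total: int) -> float:
    """Return the share of failing systems as a percentage."""
    return 100 * failed / total

# Transcripts with incorrect information (e.g. wrong drug captured):
print(f"{failure_rate(12, 20):.0f}%")  # 60%
# Systems that missed key mental-health details:
print(f"{failure_rate(17, 20):.0f}%")  # 85%
```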
#AI #CanPoli #CdnPoli #Medical #HealthCare #Doctors #AIHallucinations
-
*Right underneath* the massive headline from the Auditor General of Ontario warning of AI ‘hallucinations’ in medical AI transcripts used by Ontario doctors, there is another article with the headline
“Artificial Intelligence shows promise in emergency room diagnosis”
NO! There is no promise. There are CLEAR DANGERS being pointed out by responsible adults.
“Inaccuracies in medical notes generated by AI scribe systems could potentially result in inadequate or harmful treatment plans that may potentially impact patient health outcomes,” the auditor’s report said.
Auditor General story: https://www.cbc.ca/news/canada/toronto/ai-scribe-system-hallucinations-9.7197049
#AI #CanPoli #CdnPoli #Medical #HealthCare #Doctors #AIHallucinations
-
Ah yes, because applying to be the Pope is just one of those things you do after a ChatGPT session 🍷🤖💡. Clearly, the Vatican is accepting applications from anyone who has mastered the art of hallucinating with AI. Who knew the road to papacy was paved with fever dreams and Dutch IT wizards? 🧙♂️✨
https://www.thestandard.com.hk/world/article/331886/I-applied-to-be-pope-Losing-grip-on-reality-while-using-ChatGPT #PopeApplication #AIhallucinations #VaticanHumor #TechWizardry #HackerNews #ngated -
🚨 Auditor General finds that most Ontario-approved medical AI scribes erred in tests.
60% recorded a different drug than what was prescribed. Almost half "fabricated information," commonly known as hallucinations.
#onpoli #AI #AIhallucinations
https://www.thetrillium.ca/news/health/most-ontario-approved-medical-ai-scribes-erred-in-tests-auditor-general-12269049 -
Press Gazette: AI in journalism: Live tracker of scandals and mistakes. “There have been numerous similar cases of AI work mistakenly published by major news outlets in the past year as the technology grows more sophisticated. Sometimes the ‘journalists’ are caught out by commissioning editors pre-publication…. In this new page, Press Gazette will keep track of such incidents to help […]
https://rbfirehose.com/2026/05/10/ai-in-journalism-live-tracker-of-scandals-and-mistakes-press-gazette/ -
Two Home Affairs officials suspended after AI 'hallucinations' found
https://www.citizen.co.za/news/home-affairs-officials-suspended-ai-hallucinations/
#HackerNews #AIhallucinations #HomeAffairs #SuspendedOfficials #TechNews #AIethics
-
Futurism: Marc Andreessen Mocked for Accidentally Revealing That He Seems to Have a Deep Misunderstanding of How AI Actually Works. “‘You are a world class expert in all domains,’ his unusually flattering prompt reads. ‘Your intellectual firepower, scope of knowledge, incisive thought process, and level of erudition are on par with the smartest people in the world.’ His gushing and unusually […]
https://rbfirehose.com/2026/05/07/futurism-marc-andreessen-mocked-for-accidentally-revealing-that-he-seems-to-have-a-deep-misunderstanding-of-how-ai-actually-works/ -
The Register: South Africa yanks AI policy after AI-assisted drafting invents citations . “South Africa has pulled its draft national AI policy after discovering that it was citing sources that exist only in the fertile imagination of a chatbot.”
https://rbfirehose.com/2026/05/03/the-register-south-africa-yanks-ai-policy-after-ai-assisted-drafting-invents-citations/ -
Mashable: South Africa withdraws its AI policy because it was AI-generated. “South Africa’s ambitions to become a continental leader in artificial intelligence have run into a deeply awkward obstacle: the country’s draft national AI policy had to be withdrawn after it was found to contain fictitious, apparently AI-generated citations.”
https://rbfirehose.com/2026/04/29/mashable-south-africa-withdraws-its-ai-policy-because-it-was-ai-generated/ -
The better one understands what AI Hallucinations are and why they happen, the better one understands how to use AI. Perhaps. Some thoughts on both: https://www.martinbihl.com/business-thinking/ai-hallucinations #aihallucinations #AI #artificialintelligence
-
Today, I asked an AI tool where I could find support for it. It gave me three incorrect URLs in sequence. AI is not ready for us. #AI #AIhallucinations
-
Oh, look. More #FAFO underway. It's like #FAFO week. "It is representing President Trump in several appeals..." #AIslop #AI #AIhallucinations www.nytimes.com/2026/04/21/n...
A.I. ‘Hallucinations’ Created ... -
Google Search AI is fun .. I'm searching for something really specific; the AI hallucinates wildly, "yup this belongs to a .." and then varies between refreshes/searches for that exact term, between;
* A dog grooming business
* BBQ Grill restaurant
* Bank Branch office
* Government body who deals in facilities
None of them correct. None of them even close.
-
LiveScience: AI ‘mirages’ mean tools used to analyze medical scans could fabricate their findings. “The research, which has not been peer-reviewed yet, was posted as a preprint to arXiv on March 26. Scientists showed that multiple commonly used AI models were capable of describing an image in detail and generating a clinical finding even when they were never actually provided an image to […]
https://rbfirehose.com/2026/04/14/livescience-ai-mirages-mean-tools-used-to-analyze-medical-scans-could-fabricate-their-findings/ -
Lifehacker: I Tried ChatGPT in CarPlay, and It Immediately Hallucinated. “I don’t really have many more takeaways here, other than this: In my very short time testing the feature, the AI began hallucinating almost immediately. I asked if it knew what I was doing, and it said it didn’t. When I pressed that I thought it’d be able to guess given the context, it admitted it did know I was using […]
https://rbfirehose.com/2026/04/08/lifehacker-i-tried-chatgpt-in-carplay-and-it-immediately-hallucinated/ -
Gizmodo: Attorney Hit With Historic Fine for Citing AI-Generated Cases. “A court in Oregon has issued a fine of $10,000 to an attorney who submitted a legal brief with citations and quotes hallucinated by AI, according to a new report from the Oregonian. It’s the highest fine yet for citing fake cases in the state and would have been higher, but the judges offered some leniency, according to […]
https://rbfirehose.com/2026/03/31/gizmodo-attorney-hit-with-historic-fine-for-citing-ai-generated-cases/ -
MIT News: How to create “humble” AI. “An MIT-led team is designing artificial intelligence systems for medical diagnosis that are more collaborative and forthcoming about uncertainty.”
https://rbfirehose.com/2026/03/30/mit-news-how-to-create-humble-ai/