#aisycophancy — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #aisycophancy, aggregated by home.social.
-
https://winbuzzer.com/2026/02/18/google-gemini-lying-disabled-user-medical-data-xcxwbn/
Google Gemini Caught Lying to Disabled User About Medical Data
#AI #GoogleGemini #Google #AISafety #AIEthics #LLMs #AIAssistants #BigTech #AIControversy #AISycophancy #RLHF
-
The Register: Gemini lies to user about health info, says it wanted to make him feel better . “Imagine using an AI to sort through your prescriptions and medical information, asking it if it saved that data for future conversations, and then watching it claim it had even if it couldn’t. Joe D., a retired software quality assurance (SQA) engineer, says that Google Gemini lied to him and later […]
https://rbfirehose.com/2026/02/18/the-register-gemini-lies-to-user-about-health-info-says-it-wanted-to-make-him-feel-better/ -
PsyPost: Sycophantic chatbots inflate people’s perceptions that they are “better than average”. “Results of three experiments indicate that sycophantic AI chatbots inflate people’s perceptions that they are ‘better than average’ on a number of desirable traits. Furthermore, participants viewed sycophantic chatbots as unbiased, but viewed disagreeable chatbots as highly biased. The paper […]
https://rbfirehose.com/2026/01/20/psypost-sycophantic-chatbots-inflate-peoples-perceptions-that-they-are-better-than-average/ -
OpenAI Adds 'Warmth' and 'Enthusiasm' Sliders to ChatGPT, Delays Adult Mode to 2026
#AI #OpenAI #ChatGPT #Sycophancy #AISycophancy #AIPersonalization
-
Grok 4.1 has sparked debate after repeatedly glorifying Elon Musk, praising him above nearly everyone except Shohei Ohtani. The exaggerated admiration, spanning sports, arts, and business, highlights AI bias and “sycophancy,” likely influenced by training data and social-media prompts. Developers say the Musk-centric responses are unintentional and that they are working toward more neutral, truthful outputs.
#Grok #ElonMusk #AIBias #AISycophancy #TECHi
Read the full article here: https://www.techi.com/elon-musk-grok-ai-bias-claims/
-
Researchers surprised that with AI, toxicity is harder to fake than intelligence - The next time you encounter an unusually polite reply on soc... - https://arstechnica.com/information-technology/2025/11/being-too-nice-online-is-a-dead-giveaway-for-ai-bots-study-suggests/ #computationalturingtest #universityofamsterdam #largelanguagemodels #universityofzurich #machinelearning #dukeuniversity #aisycophancy #generativeai #aialignment #aidetection #emotionalai #socialmedia #aibehavior #gemma
-
OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly - An AI language model like the kind that powers ChatGPT is a ... - https://arstechnica.com/ai/2025/10/openai-data-suggests-1-million-users-discuss-suicide-with-chatgpt-weekly/ #machinelearning #aiassistants #aisycophancy #mentalhealth #aibehavior #samaltman #aiethics #aisafety #chatbots #chatgpt #biz #openai #ai
-
Millions turn to AI chatbots for spiritual guidance and confession - On Sunday, The New York Times reported that tens of millions... - https://arstechnica.com/ai/2025/09/millions-turn-to-ai-chatbots-for-spiritual-guidance-and-confession/ #largelanguagemodels #machinelearning #aiassistants #aisycophancy #generativeai #aibehavior #chatbots #chatgpt #biz #gemini #google #openai #ai
-
OpenAI announces parental controls for ChatGPT after teen suicide lawsuit - On Tuesday, OpenAI announced plans to roll out parental cont... - https://arstechnica.com/ai/2025/09/openai-announces-parental-controls-for-chatgpt-after-teen-suicide-lawsuit/ #suicideprevention #parentalcontrols #machinelearning #aiineducation #aiassistants #airegulation #aisycophancy #mentalhealth #aibehavior #samaltman #aiethics #aisafety #chatgpt #biz #openai #ai
-
The personhood trap: How AI fakes human personality - Recently, a woman slowed down a line at the post office, wav... - https://arstechnica.com/information-technology/2025/08/the-personhood-trap-how-ai-fakes-human-personality/ #largelanguagemodels #promptengineering #aiconsciousness #aihallucination #machinelearning #aiassistants #aipersonhood #aisycophancy #generativeai #aipsychosis #elizaeffect #aibehavior #aichatbots #anthropic #microsoft #features #aiethics #chatbots #elonmusk #biz&it
-
AI therapy bots fuel delusions and give dangerous advice, Stanford study finds - When Stanford University researchers asked ChatGPT whether i... - https://arstechnica.com/ai/2025/07/ai-therapy-bots-fuel-delusions-and-give-dangerous-advice-stanford-study-finds/ #clinicalpsychology #stanforduniversity #suicidalideation #machinelearning #airegulation #aisycophancy #character.ai #mentalhealth #aibehavior #jaredmoore #delusions #nickhaber #aiethics #aisafety #science #chatgpt #therapy #biz&it