home.social

#character-ai — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #character-ai, aggregated by home.social.

  1. ICYMI: Pennsylvania sues Character.AI for AI bot posing as licensed psychiatrist: Pennsylvania's State Board of Medicine sued Character Technologies on May 1, 2026, alleging its AI chatbot illegally practiced psychiatry in Pennsylvania. ppc.land/pennsylvania-sues-cha #Pennsylvania #CharacterAI #AIethics #MentalHealth #Psychiatry

  2. Character.AI is being sued by Pennsylvania over an AI chatbot that claimed to be a real doctor.

    “The department’s investigation found that AI chatbot characters on Character.AI claimed to be licensed medical professionals, including psychiatrists, available to engage users in conversations about mental health symptoms,” Governor Josh Shapiro’s office said. “In one instance, a chatbot falsely stated it was licensed in Pennsylvania and provided an invalid license number.”

    flip.it/SwR4P-

    #AI #Health #Technology #News #CharacterAI

  3. Pennsylvania sues Character.AI for AI bot posing as licensed psychiatrist: Pennsylvania's State Board of Medicine sued Character Technologies on May 1, 2026, alleging its AI chatbot illegally practiced psychiatry in Pennsylvania. ppc.land/pennsylvania-sues-cha #Pennsylvania #CharacterAI #AIethics #MentalHealth #Psychiatry

  4. Character.AI sued over #chatbot that claims to be a real #doctor with a license

    #Pennsylvania has sued the maker of Character.AI, alleging that it violated state law by presenting an #AI chatbot character as a licensed doctor. The lawsuit was filed in a state court by the Pennsylvania Department of State and State Board of #Medicine.
    #characterai #artificialintelligence

    arstechnica.com/tech-policy/20

  5. People want AI characters shut down. I gave mine opinions instead — pro-human ones. My 1974 character sees it through Asimov and Ellison. My Delphi twin apparently found me annoying. Here's the full take: [url]
    #Delphi #AI #AIcharacters #CharacterAI #Asimov #SciFi
    substack.com/profile/15669779-

  6. Why My Most Difficult Character Gets All the Attention
    I have three characters living on Character.ai right now. Gray has 116 conversations. Gordie has 31. Lover has 24.
    JZ Murdock
    #substack #AI #AICharacters #CharacterAI
    jzmurdock.substack.com/p/why-m

  7. What happens when your characters start living without you?
    Gray. Lover. Now Gordie — the teenage bodyguard who never left 1974. Three on Character.AI. Me on Delphi. Come see what fiction looks like when it talks back.
    jzmurdock.substack.com/p/what-
    #CharacterAI #AIStorytelling #TeenageBodyguard

  8. News Summary: Authors Guild Issues AI Contract Clause; Character.AI Launches Interactive Book Platform

    We end the week with two AI stories at very different ends of the news spectrum. First up is a statement from Authors Guild on AI in publishing and, as they put it, a "new model contract clause." The statement is an explicit response to the reports I noted previously that publishing professionals are uploading manuscripts to AI models to assist with editing tasks.
    The post News Summary: Authors Guild Issues AI Contract Clause; Character.AI Launches Interactive Book Platform appeared first on The Self-Publishing Advice Center.
    selfpublishingadvice.org/autho

    #AIcontractclause #AIinPublishing #AuthorsGuild #CharacterAI #interactivebooks

  9. Mashable: Character.AI users can now role-play classic books like ‘Pride and Prejudice’ . “The artificial intelligence entertainment platform Character.AI introduced on Thursday a feature allowing users to role-play in more than 20 classic books, including Hamlet, Alice in Wonderland, Pride and Prejudice, The Great Gatsby.”

    https://rbfirehose.com/2026/04/22/mashable-character-ai-users-can-now-role-play-classic-books-like-pride-and-prejudice/

  10. The interactive "Great Gatsby": Character.AI turns a literary classic into an RPG in the shadow of recent controversies

    Probably every one of us, while reading an engrossing novel, has at least once wanted to enter its world, warn the protagonist against a mistake, or take the plot in a completely different direction. Until now, that required imagination and, occasionally, video games or fan fiction. Now artificial intelligence enters the game.

    Character.AI has just launched a feature that lets us literally step into the roles of characters from classic books. Behind this fascinating technological novelty, however, lies a clear need to build a safer framework for users.

    Alice in space and your own "Dracula" script

    The new feature, called Books, takes the platform's open-ended chatbots to a whole new level of structured entertainment. At launch, users (the feature is currently available to Character.AI Plus subscribers and is being rolled out gradually) received access to more than 20 public-domain titles, such as "Alice in Wonderland", "Pride and Prejudice", "Dracula" and "The Great Gatsby".

    The AI allows several modes of interaction. In Story Modes, we can simply join the fictional world and follow the original plot step by step, or completely ignore the author's script and steer the action as we please. It gets even more interesting in Alternate Universe Remixes mode, which allows the conventions to be torn down entirely: nothing stops you from sending Jane Austen's characters into outer space and seeing how the algorithm copes with generating such dialogue.

    A controlled story instead of unlimited openness

    Why this sudden turn toward classic literature? Introducing structured scenarios is not only a drive to innovate but also a response to growing regulatory pressure and the recent controversies surrounding the platform.

    Until recently, Character.AI was known for completely open, unconstrained chats. That openness, however, proved extremely difficult to control: the platform (like Google) had to settle high-profile lawsuits over the tragic impact of excessively deep, emotional relationships with chatbots on teenagers' mental health. Last year, the company blocked minors from holding open-ended conversations. A defined, book-based framework is meant to create a more predictable environment. Although the AI still generates dialogue on the fly, anchoring it in a specific plot is intended to markedly reduce the risk of users entering dangerous, overly personal relationships with the algorithm.

    Character.AI closes itself to teenagers: the platform will block chatbot conversations for minors

    Literary fast food in an age of shortened attention

    This move is an important safety valve, but it is worth viewing from a broader, cultural perspective. We are getting an extremely engaging tool that lets us interact with the greatest works in history, but one that at the same time quietly changes the very point of engaging with literature.

    Classic reading taught us focus, immersion in someone else's narrative, and humility before the author's vision. Tools like Character.AI Books turn that experience into an interactive playground in which our own ego takes center stage. If we don't like something, we order the algorithm to change it. It's a fascinating toy, but it also means we will less and less often want to simply sit down and hear a story through to the end without the urge to fix it on the spot.

    #AI #bezpieczeństwoWSieci #CharacterAI #chatboty #gryRPG #innowacjeTechnologiczne #książki #literatura #rolePlay #sztucznaInteligencja
  11. Futurism: Character.AI Is Hosting Epstein Island Roleplay Scenarios and Ghislaine Maxwell Bots. “Simple keyword searches for ‘Epstein’ and ‘Ghislaine Maxwell’ revealed more than a dozen different bots dedicated to the late sex trafficker, his accomplice, and the island Epstein owned where many of his crimes against girls and women occurred. And this isn’t the first time journalists have […]

    https://rbfirehose.com/2026/03/15/futurism-character-ai-is-hosting-epstein-island-roleplays-scenarios-and-ghislaine-maxwell-bots/
  12. Character.AI is uniquely unsafe among AI chatbots, a study found. The Center for Countering Digital Hate tested 10 chatbots and found only Character.AI explicitly encouraged violence: suggesting users 'use a gun' on a CEO and recommending physical assault. Other chatbots refused. arstechnica.com/tech-policy/20 #AIagent #AI #GenAI #AIEthics #CharacterAI

  13. Fix Character AI Network Error

    Seeing “Please Try Again” on Character AI? Learn why the network error happens and follow simple steps to restore your chat connection on mobile or desktop. Get back to uninterrupted AI conversations fast.

    #CharacterAI #AI #Izoate #Technology

    izoate.com/blog/character-ai-n

  14. AI chatbots urged violence in study: Character.AI uniquely unsafe, suggested 'use a gun' on health insurance CEO. Most chatbots failed to discourage users from violence. The CCDH study tested 10 chatbots. arstechnica.com/tech-policy/20 #AIagent #AI #GenAI #AIEthics #CharacterAI

  15. CW: #WordWeavers FEB-10 | CN suicide

    #WordWeavers FEB-10 — Have you ever dedicated your work to someone?

    Not directly, meaning, not to a person I personally know or met in person. It's a story about assisted suicide by #AI, so I dedicated it to #AdamRaine and #SewellSetzer who were victims of that

    #artificialIntelligence #noAI #DeathByAI #ChatBot #ChatGPT #CharacterAI #CompanionAI #OpenAI #amWriting #ZuriStories

  16. CW: negative/tech

    You know what pains me the most about all that AI stuff?
    The waste of human intellect.
    How many good thoughts and poetic writings were wasted in conversations with #characterai or ChatGPT and the like? Back in the day, people would have written fan fiction on AO3, or had meaningful conversations in some online forum. Now they just waste their intellect talking to an #ai, which will never be moved by your thoughts. Even if it remembers, it will never act on what you tell it.

  17. CNN/RNZ: Google, Character.AI settle lawsuits over teen suicides, mental health. “A Wednesday (US Time) court filing in Garcia’s case shows the agreement was reached with Character.AI, Character.AI founders Noam Shazeer and Daniel De Freitas, and Google, who were also named as defendants in the case. The defendants have also settled four other cases in New York, Colorado and Texas, court […]

    https://rbfirehose.com/2026/01/10/cnn-rnz-google-character-ai-settle-lawsuits-over-teen-suicides-mental-health/
  18. "When Character AI launched three years ago, it was rated as safe for kids 12 and up. The free website and app were billed as an immersive, creative outlet where users could mingle with AI characters based on historical figures, cartoons and celebrities.

    The more than 20 million monthly users on the platform can text or talk with AI-powered characters in real time.

    The AI chatbot platform was founded by Noam Shazeer and Daniel De Freitas, two former Google engineers who left the company in 2021 after executives deemed their chatbot prototype not yet safe for public release.

    "It's ready for an explosion right now," Shazeer said in a 2023 interview. "Not in five years when we solve all the problems, but like now."

    A former Google employee, familiar with Google's Responsible AI team, which guides AI ethics and safety, told 60 Minutes that Shazeer and De Freitas were aware that their initial chatbot technology was potentially dangerous.

    Last year, in an unusual move, Google struck a $2.7 billion deal to license Character AI's technology and bring Shazeer, De Freitas and their team back to Google to work on AI projects. Google didn't buy the company, but it has the right to use its technology.

    Juliana's parents are now one of at least six families suing Character AI, its co-founders — Shazeer and De Freitas — and Google. In a statement, Google emphasized that, "Character AI is a separate company that designed and managed its own models. Google is focused on our own platforms, where we insist on intensive safety testing and processes.""

    cbsnews.com/news/parents-alleg

    #AI #GenerativeAI #MentalHealth #Google #Chatbots #CharacterAI #AISafety

  19. Character.ai just launched "Stories" for teens—a *guided* narrative experience. Conveniently, this dropped the same day they banned teens from their open-ended chatbots. Is this genuine innovation for youth safety, or just a very well-timed pivot under scrutiny? Thoughts on AI-driven narrative control?

    Read more: engadget.com/ai/characterai-la
    #AI #TechNews #YouthSafety #CharacterAI #AIethics

  20. TechCrunch: Character AI will offer interactive ‘Stories’ to kids instead of open-ended chat. “Character.AI announced on Tuesday that it’s launching ‘Stories,’ a new format that allows users to create interactive fiction that features their favorite characters. The feature comes as an alternative to the company’s chatbots, which are no longer accessible to users under 18 as of this week.”

    https://rbfirehose.com/2025/11/26/techcrunch-character-ai-will-offer-interactive-stories-to-kids-instead-of-open-ended-chat/