#ai-harm — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #ai-harm, aggregated by home.social.
-
I can decide if I should waste a bunch of AI clock cycles on pointless gibberish, or if that will just help drive up engagement numbers and make it easier to convince investors to keep throwing money at this ponzi scheme 🤔
-
@remixtures We've already seen teen suicide directly encouraged by a Replika bot.
This technology is NOT safe for this purpose.
#ai #aiharm #aiethics -
@alexisperrier Just don’t let all that awesomeness make you forget that there are also companies like character.ai that marketed chatbots with anthropomorphic design characteristics to kids as young as 12, resulting in an actual chatbot-groomed suicide case that is now going to the courts.
#ai #chatbots #anthropomorphic #aiharm
https://www.humanetech.com/podcast/what-can-we-do-about-abusive-chatbots-with-meetali-jain-and-camille-carlton -
14-year-old kid commits suicide: “To be with his #AI girlfriend” 😨
He’d been talking about self-harm with the bot for a long time, about “disconnecting with his current reality”, yet in the last moments, the #bot says: “be with me”
I’ve said it before, I will say it again: This #technology is not ready. This technology is not safe for deployment for those underage.
We need to stop fantasizing about AGI risk, and start talking about #aiharm
#kids #characterai
https://open.spotify.com/episode/4ksKxpSW9fMPNgUqHFcTLG -
Tuesday, I’ll flee D.C.’s 90-something temperatures for the 100-something temperatures of Las Vegas, but as I’ve realized over previous trips to that desert city for the Black Hat information-security conference, it really is a dry heat.
In addition to the posts below, my Patreon readers got a recap of a very long day of travel on Thursday of the previous week that saw me returning home about 21 hours after I’d stepped off of the front porch that morning.
7/30/2024: These Are the Services Seeing the Biggest Uptick in Passkey Adoption, PCMag
What I thought would be an easy writeup of an embargoed copy of a Dashlane study about passkey adoption among users of that password manager wound up enlightening me about Facebook’s support of that authentication standard. And once again, I found Facebook’s documentation out of date and incorrect.
7/31/2024: Here’s How Microsoft Wants to Shield You From Abusive AI–With Help From Congress, PCMag
I had ambitions of attending this downtown-D.C. event Tuesday afternoon featuring Microsoft’s vice chair and president Brad Smith, but my schedule ran away from me and I watched the proceedings online. And then I didn’t finish writing this piece until Wednesday morning, although that at least let me nod to news that day of the impending introduction of a new bill targeting AI impersonations of people.
8/2/2024: Circuit Court Throws a Stop Sign in Front of FCC’s Net-Neutrality Rules, PCMag
Reading this unanimous opinion from three judges (one named by Clinton, another a Biden appointee) that the Federal Communications Commission didn’t have the authority to put broadband providers into one of two possible regulatory buckets left me feeling like I’d been taking crazy pills over the last 20 years of the net-neutrality debate, during which the FCC has repeatedly done just that.
8/3/2024: Justice Department Sues TikTok, Alleging Massive Child-Privacy Violations, PCMag
I woke up Saturday thinking that somebody at PCMag was already covering the DOJ lawsuit against TikTok, but nobody had grabbed that story. So I set aside part of that morning to read the DOJ’s complaint, get a comment out of a TikTok publicist and write this post summarizing the department’s allegations.
#AI #AIHarm #BradSmith #childPrivacy #COPPA #Dashlane #deepfakes #FacebookPasskeySupport #FCC #majorQuestionsDoctrine #Microsoft #netNeutrality #passkey #passkeys #TikTok
-
I'm glad to know that the U.S. government, the open-source community, and some private companies are working on governance for AI.
Also, the cow is out of the barn, halfway across the pasture, and in the process of being mutated by bad actors through misuse, malware injection, and other dark methods - beyond the reach of whatever the good actors can come up with in terms of a unified approach to protect society.
Worth a read:
https://owasp.org/www-project-top-10-for-large-language-model-applications/llm-top-10-governance-doc/LLM_AI_Security_and_Governance_Checklist-v1.1.pdf
#ai #aiharm #aigovernance #stayhuman -
A clear and present danger: https://www.washingtonpost.com/technology/2024/01/20/openai-dean-phillips-ban-chatgpt/ #aiharm #stayhuman
-
The Artificial Intelligence Incident Database.
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
-
Again, a little louder for the people in the back - AND THE PEOPLE IN FRONT:
AI is not inherently dangerous. The people who implement it without considering its impact and designing for safety inherently *are*.
#aiharm #ethicalai #aiethics #stayhuman
https://www.theverge.com/2023/10/31/23940298/ai-generated-poll-guardian-microsoft-start-news-aggregation?mc_cid=d8c8445d86&mc_eid=3f556a867c -
Hear me out:
STAY HUMAN.
The future won't be Skynet, it will be @pluralistic's "enshittification" if we all fall for this whole AI ruse.
So let's adopt a catchphrase against loss, a mnemonic for what actually matters, a slogan for the have-nots who'll be flattened by the AI juggernaut.
Let's spraypaint it across the wreckage that AI is going to make of every real thing we hold dear in capitalism's headlong rush to monetize human laziness, imperfection, and greed.
-
Here's a very sober - and sobering - overview of the problem with rushing into AI adoption, presented this morning by Center for Humane Technology.
AI is:
- Hallucinating
- Affirming wrong information
- Leaving unprivileged people behind
- Setting unprivileged people up for significant harm
- Easily abused by bad actors
- (watch this)
https://www.youtube.com/watch?v=yuLfdhrGX6k
#ai #aiharm #aidesign #aiart #chatgpt #aihype #whatguardrails -
Here’s a little privacy and data control cosplay from OpenAI https://www.wired.com/story/how-to-delete-your-data-from-chatgpt/
The summary: roughly, only Europe and Japan; fill out a form, show harm, and maybe they will remove it, but also maybe they will not. They didn’t say your content will come out of the model, just that the query won’t return it. Sounds more like on-demand prompt mods for selected queries to avoid litigation?
Seriously, if this is the best they can do, it’s gonna get bumpy, real bumpy, very soon.
-
As humans reveal more about what AI chatbots "know," I'll say (once more, shouting into a megaphone for those in the back):
Trusting your judgment, decisions & voice to a Q&A "oracle" that averages "the internet" is not "living the future," let alone "using artificial intelligence."
You are abdicating your power and sensibility to software that happily feeds you disinformation, bigotry & outright fiction.
In scientific terms, you are "being stupid."
https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/ #ai #aiharm
-
Enjoy my happy robot friend tagging good and bad things. It’s so harmless because it’s a cartoon!
“AI” robot friend would never ever misclassify YOU denying YOU a loan, job opportunity, or YOUR ability to organize for action. If that ever did happen you should know it only happens to OTHER people who aren’t YOU according to robot friend’s owners.
-
CW: AI "predicting" mental illness caused by AI
AI should never, ever be allowed to diagnose mental illness. I don't even know how to *begin* to respond to this abomination.
Via https://www.linkedin.com/posts/mackreed_asking-gpt-4-to-generate-new-potential-mental-activity-7046676456046817280-kya6
#aiharm #aigovernance -
Here's an AI startup that promises to discern whether text and images were generated by AI.
I appreciate that the platform doesn't just rule bot-or-not when you feed it content to analyze; it shows its work.
We'll see whether an arms race develops between these guys and the guys who decide they need to hide the fact they are creating or using generative AI.
Try your own upload/test here:
https://hivemoderation.com/ai-generated-content-detection -
7/7
And even if we leave no blood on the field - if nobody dies, or goes broke or gets left behind or discriminated against because of biased data, misinformation, or just lazy, hasty decision-making based on this whole weird thing we're building (AS IF) - we are still abdicating our responsibility to be human. It's not that I distrust the tools (well, I do, for reasons too many to mention). What I really distrust is the ability of humans to do the right thing. (end)
#ai #aigovernance #aiharm -
1/7
Interesting panel down at the Seattle Innovators Meetup just now on AI for business. It's early days, so we heard a lot of unvarnished optimism about AI's potential to amplify innovation and productivity, with panelists generally agreeing that jobs will change (and acknowledging that some will vanish).
The panel echoed the notion that all of this improved productivity - and profit - will free up both labor and cash to do greater things ... (more)
#ai #aigovernance #aiharm -
@jonl That’s a vital read. Thanks for sharing it.
tl;dr: The more “work” we cede to AI, the less agency we humans will have because power will reside in the hands of those who manage the algorithm’s inputs and outputs. #ai #aigovernance #aipolicy #aiharm
-
@Jbat AI is the next significantly dangerous environmental harm for the fabric of society. Facebook started out as a cute way to share photos of cats or reconnect with high school classmates - and wound up inflaming cultural and political differences into tribal warfare.
AI is being slammed into the internet's veins like a toxic hit of mystery opioids. "This will be GREAT!"
Unprotected by any shred of governance, let alone best practices, we await the rush/OD. #ai #aiharm #aigovernance
-
See also: https://futurism.com/microsoft-your-fault-ai-going-insane
Having invested $10B in a New York hurry, Microsoft is not about to admit anything's wrong.
-
Here's the fatal flaw:
Language models that AI uses to give humans the promise of reliable knowledge, perspective, and clarity are being trained by ... the usage of average humans.
It's a bit like the Google Search algorithm that bubbles the most-clicked, most-linked sites to the top of results (when they're not being overridden by Google's capitalism-driven information economy).
Only AI declares the averaging of facts, biases, and bad info is TRVTH. Read: https://huggingface.co/blog/rlhf #ai #aiharm
-
Cavemen: “Hey, that animal’s dead and kinda tasty after it fell into that bright hot crackly stuff the lightning just made! Let’s play with that bright hot crackly stuff, it’s clearly awesome!”
A judge in Cartagena just included ChatGPT output in a ruling to support his decision.
There are no guardrails whatsoever, and all the cavemen are piling onto this bus, some clutching fistfuls of cash and entire societal ecosystems.
https://www.vice.com/en/article/k7bdmv/judge-used-chatgpt-to-make-court-decision
-
Not sure I can think of a worse application for AI than treating mental illness, but here we are:
-
Conversational AI is barely out of the box, poised for multibillion-dollar investment by Microsoft and countless VCs, and it already has a life-ruining human cost. The relentless pressure of capitalism behind heedlessly designed potential is causing significant harm.
Can anyone in the fediverse point me to organizations like @themarkup that are racing to get ahead of this dynamic with policy, practice, and boots on the ground? https://time.com/6247678/openai-chatgpt-kenya-workers/
#ai #aiharm #aipolicy #aigovernance