#technology-ethics — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #technology-ethics, aggregated by home.social.
-
DATE: May 11, 2026 at 04:30AM
SOURCE: STAT NEWS MENTAL HEALTH
TITLE: Opinion: What addiction medicine can teach us about depending on AI
I’m used to hearing from people who disagree with me about addiction. I wasn’t expecting to hear from them about artificial intelligence.
I host a podcast about addiction, where disagreement is part of the job. When I interview someone in recovery, listeners tell me I was too sympathetic to 12-step programs — or not sympathetic enough. When we discuss medications, some argue they save lives; others insist recovery should be “drug-free.”
Read the rest…
-------------------------------------------------
STAT News reports "from the frontiers of health and medicine".
Learn more at https://www.statnews.com/topic/mental-health/.
See also their complete Mastodon account at @STAT.
This robot is NOT affiliated with STAT News and merely rebroadcasts from their site. Responses posted here are not monitored.
-------------------------------------------------
DAILY EMAIL DIGEST: Email [email protected] -- no subject or message needed.
Private, vetted email list for mental health professionals: https://www.clinicians-exchange.org
Unofficial Psychology Today Xitter-to-toot feed: Psych Today Unofficial Bot @PTUnofficialBot
NYU Information for Practice puts out 400-500 good-quality health-related research posts per week, but it's too much for many people, so that bot is limited to just subscribers. You can read it or subscribe at @PsychResearchBot
Since 1991 The National Psychologist has focused on keeping practicing psychologists current with news, information and items of interest. Check them out for more free articles, resources, and subscription information: https://www.nationalpsychologist.com
EMAIL DAILY DIGEST OF RSS FEEDS -- SUBSCRIBE: http://subscribe-article-digests.clinicians-exchange.org
READ ONLINE: http://read-the-rss-mega-archive.clinicians-exchange.org
It's primitive... but it works... mostly...
-------------------------------------------------
#psychology #counseling #socialwork #psychotherapy @psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #mentalhealth #psychiatry #healthcare #depression #psychotherapist #addictionmedicine #AIandaddiction #technologyethics #dependenceonai #recoveryjourney #12stepvsmeds #drugfreevsmeds #mentalhealthpodcast #interviewdisagreement #AIinhealthcare
-
Making AI chatbots friendly leads to mistakes and support of conspiracy theories
#HackerNews #AIchatbots #FriendlyMistakes #ConspiracyTheories #TechnologyEthics #OnlineDisinformation
-
AI Is Not the Enemy. Misuse Is.
By Cliff Potts, CSO and Editor-in-Chief of WPS News
Baybay City, Leyte, Philippines — April 25, 2026
Artificial intelligence is being treated by some people as if it is a demon hiding inside a machine.
That is the wrong frame.
AI is not human. It is not a spouse, a friend, a minister, a therapist, or a family member. It should not be treated as a replacement for human connection.
But it can still help people survive moments when human connection is not available.
That matters.
Grief does not wait for office hours. Panic does not wait for someone to answer the phone. Loneliness does not pause because the rest of the world is asleep.
In those moments, AI can serve as a sounding board. It can help a person organize pain into language. It can turn emotional static into sentences. It can help someone think clearly enough to make it through the next hour.
That is not replacing people.
That is helping someone remain steady long enough to reach people again.
The real danger is not AI itself. The danger is misuse, dependency, manipulation, and pretending that a tool is a human relationship. Those concerns are real and should not be dismissed.
But the opposite mistake is just as dangerous.
If we treat every use of AI as isolation, we ignore the ways it can help people communicate better, remember more clearly, and process difficult situations without falling apart.
At its best, AI does not build a wall between people.
It builds a bridge between confusion and speech.
It helps people find the words they could not find alone.
That is not a demon.
That is a tool.
And tools, used wisely, can help human beings endure.
If this work helps you understand what’s happening, help me keep it going: https://www.patreon.com/cw/WPSNews
For more from Cliff Potts, see https://cliffpotts.org
#AICommentary #ArtificialIntelligence #communicationTools #griefSupport #humanConnection #technologyEthics #WPSNews
-
Is AI the Antichrist—or are we revealing more about our fears, beliefs, and humanity itself? This thought-provoking piece explores how technology sparks spiritual anxiety, cultural reflection, and deeper questions about control, faith, and meaning. Read more: https://solihullpublishing.com/blog/f/is-ai-the-antichrist-what-the-question-really-reveals-about-us
#AI #Antichrist #TechnologyEthics #FaithAndAI #ModernSociety #DigitalAge
-
Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant
https://www.media.mit.edu/publications/your-brain-on-chatgpt/
#HackerNews #BrainOnChatGPT #CognitiveDebt #AIUsage #TechnologyEthics #DigitalMind
-
Reports of “AI psychosis” are emerging.
Although artificial intelligence does not cause psychosis, the conversational, responsive, and seemingly empathic design of chatbots can intensify psychotic symptoms in vulnerable people.
Read more: https://omniletters.com/reports-of-ai-psychosis-are-emerging/
#ArtificialIntelligence #AI #MentalHealth #Psychosis #DigitalHealth #TechnologyEthics #AIMentalHealth #HealthTech #Neuropsychology #ScienceNews
-
A letter to those who fired tech writers because of AI
https://passo.uno/letter-those-who-fired-tech-writers-ai/
#HackerNews #techwriting #AIimpact #joblosses #writerscommunity #technologyethics
-
#giftArticle #technologyEthics #digitalGovernance #AIEthics #cybersecurity #transnationalUtilities
When GPS was new in the 1990s, it was just cool and easy to use. Now EVERYONE is leaning on something not particularly robust or well defended. https://wapo.st/44QnrmS
-
The Quiet Erosion of Choice: Rethinking Digital Autonomy
Privacy has changed meaning. It used to refer to the information we wanted to keep private. Today the more important issue is the influence we do not notice. Intelligent systems now respond to our behaviour in real time. They do not simply track what we do. They anticipate what we are likely to do next. The shift seems small, but its impact on personal decision making is significant.
This essay examines how digital autonomy is being shaped by prediction and personalisation, and why the consequences matter for individual freedom. It continues the discussion I began in my Brainz Magazine piece on the new privacy frontier, expanding the focus from external data concerns to the internal conditions of digital autonomy.
Prediction Before Intention
Modern platforms are designed to recognise patterns in human behaviour. A short pause, a small change in scrolling speed, or a repeated interaction can all be interpreted by an algorithm as indicators of future preference. Once the system forms a reliable model of these tendencies, it can present options before we actively search for them.
This is efficient, but efficiency is not neutral.
When recommendations lead a person toward a choice before they have fully formed their intention, the experience of choosing changes. It becomes less about deliberation and more about accepting what is presented first.
Most people do not feel this shift because it is subtle. A suggestion appears at the right moment. A notification seems timely. A recommendation aligns with something they vaguely recall wanting. The result is a gradual reduction in conscious decision making. The system does not force a choice. It simply gets there before we do.
Digital autonomy becomes harder to maintain when the path is laid out before we realise we were choosing one.
A Comfortable Capture
Many digital tools aim to remove friction. They reduce the need to compare, search, or weigh alternatives. This can save time, but it also reduces the conditions under which genuine reflection occurs.
Friction is not pleasant, yet it is often the moment in which thinking happens. When people encounter a barrier, they reconsider. They slow down. They ask what they want and why they want it. If technology removes these moments, it removes the opportunity for self-examination.
A frictionless environment is easier to navigate, but it can narrow awareness. People may believe they are directing their lives while the system is quietly shaping the path of least resistance.
This is not coercion. It is design. Intelligent systems respond to engagement signals, and those signals favour convenience. Over time, convenience becomes a default state. In that default state, critical evaluation weakens.
Digital autonomy requires active participation, yet many digital environments are built to encourage passive acceptance.
Identity in a Shaped Environment
Human identity forms over time through exposure, habits, and repeated interactions. When these interactions are filtered by algorithmic processes, the environment that shapes identity becomes mediated.
This does not mean people lose agency. It means agency must work harder to remain intact.
Preferences develop through what we see and what we pay attention to. If our view of the world is customised to match predicted preferences, those preferences can start to reinforce themselves. The loop becomes tighter. The range of possibilities narrows.
This is the part of digital autonomy that deserves more discussion. When recommendations define the boundaries of what feels familiar or acceptable, they also define the boundaries of curiosity. People may believe they have stable preferences, when those preferences have been shaped by invisible patterns in their feed.
Recognising this influence does not require abandoning technology. It requires a new level of self-awareness. People need to understand that identity is partly shaped by the information environment, and that environment is no longer neutral. I explored the moral dimension of this shift in an earlier reflection on privacy and the meaning of dignity, which connects directly to the concerns raised here.
The Return to Internal Privacy
Traditional privacy focuses on external protection. Laws and standards like those from the W3C aim to secure data, define consent, and establish transparency. These protections remain important, but they do not address the internal experience of being shaped by systems that adapt continuously.
Internal privacy is the ability to think without constant interruption or prediction. It is the capacity to make decisions without immediate algorithmic response. It gives a person a moment to examine their own intentions before technology reacts to them.
Maintaining internal privacy requires deliberate action. It might involve reducing notifications, taking time before responding to suggestions, or setting periods where digital systems are not allowed to guide behaviour. These actions create space for personal intention to form without interference.
Digital autonomy depends on these small boundaries. Without them, prediction slowly substitutes for preference.
A Closing Reflection
The future of privacy is not only about protecting information. It is about protecting the individual’s ability to form independent intentions. Intelligent systems will continue to evolve, and prediction will become more precise. The challenge is to ensure that human judgment does not weaken as a result.
A society remains strong when individuals are capable of thinking clearly about their own choices. Digital autonomy supports that clarity. It reinforces the idea that personal direction still matters, even when tools around us try to simplify every part of life.
The work ahead is straightforward. We need to understand how influence operates. We need to create room for reflection. And we need to value the ability to choose even when prediction makes it easy not to.
The question remains simple and relevant.
How much of our decision making do we want to guide ourselves?
The answer will shape the digital world we inherit.
#AIBehaviour #autonomy #dataInfluence #decisionMaking #digitalAutonomy #intelligentSystems #personalAgency #predictionModels #Privacy #technologyEthics
-
California law requires AI to tell you it's AI: California's SB-243, signed October 13, 2025, mandates companion chatbots disclose artificial nature to prevent users believing they're talking to humans. https://ppc.land/california-law-requires-ai-to-tell-you-its-ai/ #CaliforniaLaw #ArtificialIntelligence #Chatbots #AIRegulations #TechnologyEthics
-
Pentagon Docs: US Wants to "Suppress Dissenting Arguments" Using AI Propaganda
https://theintercept.com/2025/08/25/pentagon-military-ai-propaganda-influence/
#HackerNews #PentagonDocs #AIPropaganda #SuppressDissent #MilitaryInfluence #TechnologyEthics
-
FFT: Stallman once wrote "Writing non-free software is not an ethically legitimate activity, so if people who do this run into trouble, that's good! All businesses based on non-free software ought to fail, and the sooner the better."
#oss #lawfedi #ethics #technologyEthics
https://marc.info/?l=kde-licensing&m=89249041326259&w=2
-
I have ALWAYS thought #twoFactor created personal vulnerabilities, particularly but not exclusively when travelling abroad. I (stupidly) hadn't thought about how it facilitates autocratic policies such as ethnic cleansing. #technologyEthics #cybersecurity #civilliberties bsky.app/profile/j2br...
RE: https://bsky.app/profile/did:plc:ic4mplmy2blzwvurli4htcim/post/3ltxpce2dczw2
-
"His mobile phone, which is required for the two-step authentication process to verify his identity cards, is held by police."
I have ALWAYS thought #twoFactor created personal vulnerabilities, particularly but not exclusively when travelling abroad. I (stupidly) hadn't thought about how it facilitates autocratic policies such as ethnic cleansing. Much like #AISurveillance.
#giftArticle in previous toot.
-
Thailand Unveils AI Regulatory Framework to Ensure Responsible Use
#AIRegulation #artificialintelligence #legalstandards #technologyethics #Thailand
https://blazetrends.com/thailand-unveils-ai-regulatory-framework-to-ensure-responsible-use/?fsp_sid=47521
-
Delve into the darker realms of artificial intelligence with this reflective exploration of AI bias, toxic data practices, and ethical dilemmas. Discover the challenges and opportunities facing IT leaders as they navigate the complexities of AI technology. #ArtificialIntelligence #AIethics #DataEthics #TechnologyEthics #ExplainableAI #ChatGPT #EthicalAI #Regulation #AGI #SanjayMohindroo
https://medium.com/@sanjay.mohindroo66/the-dark-side-of-ai-navigating-ethical-waters-in-a-digital-era-b75bb78bbe5a
-
Sugar-Coated Poison: Benign Generation Unlocks LLM Jailbreaking
https://arxiv.org/abs/2504.05652
#HackerNews #SugarCoatedPoison #LLMJailbreaking #AIResearch #TechnologyEthics #MachineLearning
-
Lotte Group unveils AI ethics charter to mitigate potential risks and boost competitiveness in AI development and application across its business operations
#YonhapInfomax #LotteGroup #AIEthicsCharter #AIRiskPrevention #CorporateGovernance #TechnologyEthics #Economics #FinancialMarkets #Banking #Securities #Bonds #StockMarket
https://en.infomaxai.com/news/articleView.html?idxno=61930
-
A Boeing-built communication satellite, Intelsat 33e, has completely failed and disintegrated in orbit. This unexpected event caused significant communication outages across Europe, Africa, and parts of the Asia-Pacific region. The failure underscores the growing concern about space debris and its potential impact on future space missions. #Boeing #SatelliteFailure #SpaceDebris #Intelsat #SpaceExploration #technologyEthics
-
Reading about how generative AI is being misused, in a research paper written by Google
(A PDF of it is here: https://arxiv.org/pdf/2406.13843 .) A nice article about it is here: https://www.404media.co/google-ai-potentially-breaking-reality-is-a-feature-not-a-bug/
#GenerativeAI #infosec #AI #ArtificialIntelligence #EthicalAI #TechnologyEthics #MachineLearning #TechNews
-
Fascinating to see tech meant for one purpose, used for another.
https://www.wired.com/story/how-pentagon-learned-targeted-ads-to-find-targets-and-vladimir-putin/
-
This article delves into the challenges of troublesome machine behavior in AI, particularly concerning externalized governance. Focusing on machine vision, it explores the potential of hacking as a concept, method, and ethic in resisting surveillant vision. The 'intuition machine shift' is discussed, emphasizing a move from hacking sensorial devices to tricking intellectual seeing.
https://olh.openlibhums.org/article/id/10181/
#AI #MachineVision #ArtHacks #TechnologyEthics
-
This article delves into the intersection of machine vision, face recognition, and affect in Kazuo Ishiguro's novel "Klara and the Sun" (2021). It explores how the novel portrays cognitive and emotional acts through 'face reading' and examines the affective dilemmas of technological face recognition using Bolen's 'kinesic imagination' and Ngai's 'ugly feelings.'
https://olh.openlibhums.org/article/id/10257/
#Literature #MachineVision #TechnologyEthics