home.social

#autonomousweapons — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #autonomousweapons, aggregated by home.social.

  1. Anthropic's positioning of usage red lines gets a close examination in this piece lawfaremedia.org/article/the-s and it is a good one.

    Suggestions for refinement include adding more specificity to its definition of "mass surveillance" and adding details scoping out the use cases it objects to.

    Anthropic's arguments re "autonomous lethal warfare" could also be further clarified, given its statements indicating that research on autonomous systems is OK but that using current AI technology is not appropriate because it is not reliable enough.

    So, the warfare red line is not a strict principle; it's a statement of current technological limitations. #Anthropic #Claude #AI #RedLines #Lawsuit #Amodei #MassSurveillance #AutonomousWeapons #SupplyChainRisk #DoD #Military

  2. #CaitlinKalinowski, #OpenAI’s #robotics leader, #resigned over concerns about the company’s agreement with the #Pentagon to deploy its models on a classified network. She cited concerns about #surveillance and #autonomousweapons, stating that these issues deserved more deliberation. OpenAI has since clarified restrictions on #military use of its systems. fortune.com/2026/03/07/openai- #AIagent #AI #ML #NLP #LLM #GenAI

  3. Technē without safety guardrails?

    * "The public showdown between the Department of Defense and Anthropic began earlier this week after they entered into discussions about the military’s use of the company’s Claude AI system. But the talks broke down as both sides appeared to be unable to come to agreement over safety guardrails."

    "US defense officials have pushed for unfettered access to Claude’s capabilities that they say can help protect the country, while Anthropic has resisted allowing its product to be used for mass surveillance or autonomous weapons systems that can kill people without human input." >>
    theguardian.com/us-news/2026/f

    * The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’ >>
    theconversation.com/the-pentag

    * Who decides when a machine kills? When private companies are enforcing ethical constraints and governments are not, something is very wrong >>
    euractiv.com/opinion/who-decid

    #ethics #OpenAI #BigTech #surveillance #AutonomousWeapons #ADM #war #KillerRobots #LAWs #Google #LLMs #Claude #Anthropic #transparency #accountability #AutomatedDecisionMaking #algorithms #AlgorithmicTransparency

  8. The madness continues …

    #OpenAI reaches deal to deploy #AI models on US Department of #Defense #classified network

    #SamAltman said Friday OpenAI reached an agreement with the #DoD.

    "In all of our interactions, the DoW displayed a deep respect for safety & a desire to partner to achieve the best possible outcome," Altman posted on X using the #Trump admin’s BS name for the DoD.

    #law #tech #surveillance #AutonomousWeapons #privacy #security #InfoSec #military
    reuters.com/business/openai-re

  9. …The ultimate winner could now prove to be Elon #Musk’s #xAI [great /s], which #defense officials say has already agreed to the #Pentagon’s terms for working on #classified systems. The entrepreneur jumped on Emil Michael’s social media Friday [the DoD’s technology chief] saying “#Anthropic hates Western Civilization.” [ridiculous idiot.]

    #Trump #RevengePolitics #law #AI #tech #surveillance #AutonomousWeapons #SupplyChainRisk #privacy #security #InfoSec #military

  10. Earlier in the week, Jeff Dean, #Google’s chief #AI scientist, said he was opposed to the #technology being used for #surveillance & repeated his long-standing opposition to #AutonomousWeapons.

    #SamAltman, #OpenAI’s chief executive, said Friday that it was important for AI companies to find ways to work with the #Pentagon but that he had concerns similar to those of his rival #Amodei.

    #Trump #RevengePolitics #law #Anthropic #tech #SupplyChainRisk #privacy #security #InfoSec #military

  11. …The #SupplyChainRisk designation issued by #Hegseth late Friday was an extraordinary escalation, ranking a leading #US #AI company alongside the likes of Chinese & Russian firms seen as a danger to the #UnitedStates.

    It was unclear how easy it would be for government departments to move away from #Anthropic’s #technology, or for the company’s partners that do business with the #Pentagon to cut ties.

    #Trump #RevengePolitics #law #surveillance #AutonomousWeapons #privacy #security #military

  12. #Anthropic said it would fight the blacklisting in court. In a blog post late Friday, the company said that it believed the wide-reaching ban #Hegseth described was not permitted by federal #law & that the designation of the company as a #SupplyChainRisk was “legally unsound.” [pretty good bet]

    #Trump #RevengePolitics #law #AI #tech #surveillance #AutonomousWeapons #privacy #security #InfoSec #military

  13. Late Friday, #Defense Secy Pete #Hegseth followed #Trump’s unhinged post, saying in his own post that he was declaring #Anthropic a #SupplyChainRisk. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote.

    #RevengePolitics #law #AI #tech #surveillance #AutonomousWeapons #privacy #security #InfoSec #military

  14. Amodei said #Anthropic understands that the #Pentagon, “not private companies, makes #military decisions.” But “in a narrow set of cases, we believe #AI can undermine, rather than defend, democratic values.” Mass #surveillance & #AutonomousWeapons, he said, are “outside the bounds of what today’s technology can safely & reliably do.”

    #Trump #Hegseth #law #privacy #InfoSec #ContractLaw #democracy

  15. #Anthropic is rejecting the Pentagon’s latest offer to change their contract, saying the changes do not satisfy the company’s concerns that #AI could be used for mass #surveillance or in fully #AutonomousWeapons.

    The #Pentagon & Anthropic are at odds over restrictions the company places on the use of #Claude, the first #AI system to be used in the #military #classified network.

    #Trump #Hegseth #law #privacy #InfoSec #ContractLaw
    cnn.com/2026/02/26/tech/anthro

  16. #Anthropic is in a dispute with the #Pentagon over its usage restrictions for military purposes. The Pentagon wants Anthropic to remove #safeguards preventing its technology from being used for #autonomousweapons targeting and #domesticsurveillance. Anthropic refuses, citing concerns about responsible use, and the Pentagon has threatened to label it a supply-chain risk. reuters.com/world/anthropic-di #tech #media #news

  17. New defense startup is turning AI agents into explosive autonomous weapons, while Washington tightens export controls on AI chips. The race to weaponize machine learning raises tough questions for battlefield logistics and open‑source ethics. Read how this shift could reshape military AI and global tech policy. #AIagents #AutonomousWeapons #AIchips #ExportControls

    🔗 aidailypost.com/news/defense-f

  18. The #Pentagon and #Anthropic are at odds: Anthropic is concerned about its tools being used for #autonomousweapons targeting and #domesticsurveillance without sufficient #humanoversight. The Pentagon, however, argues it should be able to deploy commercial AI technology regardless of #companyusagepolicies, as long as it complies with #USlaw. reuters.com/business/pentagon- #tech #media #news

  19. China’s AI War Push: Beijing unveils AI-powered combat vehicles, robot dogs, and drone swarms as part of its military modernization. Despite U.S. chip bans, China leverages Huawei and Nvidia tech to boost autonomy on the battlefield. The AI arms race with the U.S. is accelerating fast.

    #China #AI #Military #DefenseTech #DeepSeek #Huawei #Nvidia #AutonomousWeapons #TechWar #Geopolitics #TECHi

    Read the full article here: techi.com/china-ai-revolution-

  20. 🚨 "Killer robots" threaten human rights in war & peace! Autonomous weapons lack human judgment, risking lives, privacy & dignity. 🌍 Over 120 countries urge a global treaty to ban or regulate these deadly systems. Time to act for a safer, humane future! 🤖✋ #StopKillerRobots #HumanRights #UN #AutonomousWeapons #Peace hrw.org/news/2025/04/28/killer #newz

  21. Robot dogs armed with AI-targeting rifles undergo US Marines Special Ops evaluation - Enlarge / A still image of a robotic quadruped armed with a remote weap... - arstechnica.com/?p=2022843 #autonomousweapons #machinelearning #bostonrobotics #onyxindustries #weaponssystems #ghostrobotics #usmilitary #usmarines #military #marines #weapons #biz #marsoc #police #robots #tech #ai

  22. Army of None - Paul Scharre - bird.makeup/users/paul_scharre

    This book is horrifying and important.

    The section on the psychology of what it takes for a human being to take someone's life at close range is prescient. How we value human life and human dignity is significantly affected by circumstance. Without intensive indoctrination and "training", human beings tend not to want to kill each other. It requires a serious dose of dehumanization and otherization.

    The "naked soldier problem" raised in the book highlights how soldiers will act differently if they find an enemy in a vulnerable position, for example naked in a bath or smoking a cigarette while watching the sunset. Statistically, many soldiers will apparently deliberately miss targets. At least, this is the rationalization and mental gymnastics used by these arms dealers and military-industrial-complex stooges while selling autonomous weapons to the world: "eliminate the moral burden of killing" and in doing so actually end conflicts faster and save lives. Yikes.

    share.libbyapp.com/title/40435

    #security #ai #war #weapons #AutonomousWeapons #robot #killingMachine #capitalism #privacy #opsec #ethics

  27. 2018: A Global Arms Race for Killer Robots Is Transforming the Battlefield

    "The meeting comes at a critical juncture. In July [2018], Kalashnikov, the main defense contractor of the Russian government, announced it was developing a weapon that uses neural networks to make 'shoot-no shoot' decisions. In January 2017, the U.S. Department of Defense released a video showing an autonomous drone swarm of 103 individual robots successfully flying over California. Nobody was in control of the drones; their flight paths were choreographed in real-time by an advanced algorithm. The drones “are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature,” a spokesman said. The drones in the video were not weaponized — but the technology to do so is rapidly evolving.

    "[April 2018] also marks five years since the launch of the International Campaign to Stop Killer Robots, which called for 'urgent action to preemptively ban the lethal robot weapons that would be able to select and attack targets without any human intervention.' The 2013 launch letter — signed by a Nobel Peace Laureate and the directors of several NGOs — noted that they could be deployed within the next 20 years and would 'give machines the power to decide who lives or dies on the battlefield.'"

    Read more:
    time.com/5230567/killer-robots

    #AI #AutonomousWeapons #KillerRobots #ArmedDrones #Skynet #DARPA #Terminator

  28. Two objections to autonomous weapons systems:
    1. Anti-codifiability argument (that robots can't make moral decisions)
    2. Lacking-the-right-kind-of-reasons argument (even if robots could make moral decisions, they could not do so for the "right reasons")

    See why Erich Riesen rejects both of those arguments in this 10-minute video: youtube.com/watch?v=3XGmRtYv8C

    #Ethics #JustWarTheory #DecisionMaking #ArtificialIntelligence #AI #AutonomousWeapons #ElectronicWarfare #MilitaryEthics #AppliedEthics