home.social

#theaicon — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #theaicon, aggregated by home.social.

  1. RE: chaos.social/@epicenter_works/

    This is exactly the scenario that @emilymbender and @alex have warned us about in their excellent book "The AI Con" (thecon.ai).

    The book contains several examples where using stochastic parrots to "make" decisions has severely backfired.

    IMHO, this book should be mandatory reading for anyone holding public office.

    #AtPol #TheAiCon #LLMs #Politics

  2. In #technofeudalism and #antihumanism:

    The narrative of #AI's “inevitability” is a tactic used by tech companies to discourage resistance and encourage compliance.

    […] When tech boosters want to demonise resistance, they invoke the luddites. By their telling, the luddites were primitive idiots, who smashed machines they were too stupid to understand. History though, tells a different story. As recounted by Brian Merchant’s sublime work Blood in the Machine, luddites were skilled artisans, fighting for their way of life against the “satanic mills” – textile sweatshops powered by child semi-slaves. Forbidden from unionising, luddites smashed machines as a protest tactic. And they did not lose to the inevitable march of progress. They lost to physical force. The government called in troops, and the luddites were either executed or shipped to penal colonies in Australia.

    theguardian.com/books/2026/apr

    #technofeudalism #antihumanism #ai #promptingwithhitler #nerdreich #llm #theaicon #aihype #histodons

  7. The AI Great Leap Forward

    Similar to the inflated grain-production reports of the #Chinese Great Leap Forward, companies are fabricating or exaggerating #AI adoption and productivity gains to please leadership, leading to increased investment based on made-up numbers. The focus seems to have shifted from genuine AI development to "demoware": impressive-looking prototypes and interfaces with little underlying validation, data infrastructure, or maintenance planning, creating future tech debt.

    […] Entire departments are stitching together n8n workflows and calling it AI — dozens of automated chains firing prompts into models, zero evaluation on any of them. These tools are merchants of complexity: they sell visual simplicity while generating spaghetti underneath. A drag-and-drop canvas makes it trivially easy to chain ten LLM calls together and impossibly hard to debug why the eighth one hallucinates on Tuesdays. The people building these workflows have never designed an evaluation pipeline, never measured model drift, never A/B tested a prompt. They don’t need to — the canvas looks clean, the arrows point forward, the green checkmarks fire. The complexity isn’t avoided. It’s hidden behind a GUI where nobody with ML expertise will ever look.

    leehanchung.github.io/blogs/20
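
    The "zero evaluation" the author describes can be addressed with something as small as a handful of labeled cases run against each prompt revision. A minimal sketch in Python, where the model call is a stub standing in for whatever LLM the workflow chains together (the stub and the case set are illustrative assumptions, not any real pipeline):

    ```python
    # Minimal prompt-evaluation sketch: run labeled cases through a model
    # function and report accuracy. The model() here is a stub; in a real
    # pipeline it would wrap an LLM API call for the prompt under test.

    def model(prompt: str) -> str:
        # Stub standing in for an LLM call (illustrative only).
        return "positive" if "love" in prompt.lower() else "negative"

    def evaluate(cases: list[tuple[str, str]]) -> float:
        """Return the fraction of cases where the model output matches the label."""
        hits = sum(1 for text, label in cases if model(text) == label)
        return hits / len(cases)

    # Hypothetical labeled cases for a sentiment-style prompt.
    cases = [
        ("I love this product", "positive"),
        ("Terrible, broke on day one", "negative"),
        ("Love it, would buy again", "positive"),
    ]

    if __name__ == "__main__":
        # Track this number per prompt revision to catch regressions and drift.
        print(f"accuracy: {evaluate(cases):.2f}")
    ```

    Even a toy harness like this gives each node in a workflow a number that can be compared across prompt and model changes, which is exactly what a drag-and-drop canvas never surfaces.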

    #ai #aihype #theaicon #n8n

  8. [...] “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”

    Senior executive at #Microsoft about #OpenAI's Sam Altman.

    newyorker.com/magazine/2026/04 or archive.ph/9jqJ7

    #quitgpt quitgpt.org #llm #theaicon #aihype

  9. […] Even if the accuracy problems were solved, and AI-generated summaries reliably captured all the essential points of a text, it would still be a bad idea to use them. Creating your own summaries is a crucial step in any literature study. When you read and summarize a text, you create the neural connections necessary to memorize and apply the information well in an exam, experiment, or research paper. Generating it with a click is a harmful form of cognitive offloading and will erode these skills. Writing it yourself will reveal the nuances of an academic text and allow you to register those elements that you deem essential to whatever you are working on. 

    tue.nl/en/our-university/libra

    @darby3

    #theaicon #aihype #llm

  10. […] That’s why they are doing everything they can to convince you that you actually do not have the ability to think those thoughts, and that none of the ideas you might have about your own future are ideas that can actually be realized. It’s a big win for them, in their quest to persuade you of your powerlessness, that they have gotten your university to adapt their marketing language for its official statements, to shape its academic programming around the presumption of their indefinite economic primacy, and to pay for you to have free access to technologies that will make it harder — the more you use them — to know yourself to be a free intellectual, creative and moral agent.

    @dangillmor

    #theaicon #aihype #ai #gemini

  11. […] Rewarding confidence over actual competence is a bug humanity has always had. It has produced disasters throughout history, it is producing disasters now, and not only in the tech world.

    it-notes.dragas.net/2026/03/20

    Integrating AI into customer service is as dumb as you’d expect. The enshittification of services, where companies replace human expertise with AI, will lead to confusion, wasted time, and a decline in reliability. All of this is driven by misplaced trust in AI's assertive but often incorrect pronouncements. Anyone who has used these tools for a while has learned to live with the disappointment. AI is not even close to delivering most of what these companies claim it will.

    #theaicon #aihype

  12. […] #OpenAI, which will likely never be profitable, is now valued higher than some of the most established tech companies on the planet. A company that burns through cash faster than it can generate revenue, and yet investors keep lining up. It’s not a market anymore, it’s a collective delusion backed by hype and #FOMO.

    yashgarg.dev/posts/ai-slop/

    #ai #theaicon #aihype #freesoftware #opensource

  13. Why do all the AI-hypers consider themselves to be capitalists when they want to socialise all our data to train models on?

    #ai #theaicon

  14. Anthropic and The Authoritarian Ethic blog.giovanh.com/blog/2026/03/

    The real national-security crisis under Trump’s regime is the erosion of #democracy. Their demand that contractors drop basic usage limits or face blacklisting, nationalization, or ruin isn’t a policy dispute; it is an authoritarian loyalty test. It is also hypocritical, given #Anthropic’s existing DoD contracts and reports that Claude was used in the illegal Maduro kidnapping.

    quitgpt.org

    @giovan

    #AI #TheAICon

  15. Generative AI vegetarianism is a deliberate rejection of these tools, prioritizing ethical consumption and human-centric alternatives. While some applications, such as #OCR and data accessibility, are beneficial, the industry’s narrative downplays the ethical harms while overstating the promises.

    sboots.ca/2026/03/11/generativ

    #aihype #theaicon #genai

  16. Your #LLM Doesn't Write Correct Code. It Writes Plausible Code.

    Deployment is an act of faith, not engineering.

    […] THIS is the failure mode. Not broken syntax or missing semicolons. The code is syntactically and semantically correct. It does what was asked for. It just does not do what the situation requires. In the SQLite case, the intent was “implement a query planner” and the result is a query planner that plans every query as a full table scan. In the disk daemon case, the intent was “manage disk space intelligently” and the result is 82,000 lines of intelligence applied to a problem that needs none. Both projects fulfill the prompt. Neither solves the problem.

    blog.katanaquant.com/p/your-ll

    #aihype #theaicon

  17. @jani “worse than stupidity”

    In a slightly different context, but very much the same issues. Cloud services indeed traded convenience for diminished software freedom, control, and privacy... as we can see today with users fleeing US-based services. Even #Proton can't guarantee protection to its customers, as we saw recently.

    theguardian.com/technology/200

    #AI #theaicon

  18. One thing I thought #LLMs were good for was translation. Apparently #Gemini and others aren’t that great at that either.

    #Wikipedia restricted contributors from a nonprofit called the Open Knowledge Association (#OKA) after editors discovered #AI-assisted translations added factual errors and incorrect citations.

    As predicted, humans will be relegated to cleaning up the mess LLMs leave behind, for salaries far below what full-time employees doing the job properly would earn.

    […] Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer‑review mechanisms.

    404media.co/ai-translations-ar

    #aihype #theaicon #globalsouth #digitalcolonialism

    #DigitalColonialism Good to see it’s finally being discussed in mainstream media. It’s not just exploitation and low wages; it’s also trauma and mental health. Bender & Hanna’s The AI Con exposes the industry’s hype and how trauma is outsourced to the “Global South”.

    thecon.ai/

    @bhg
    #theaicon #ai #openai #meta #anthropic

  20. Contrary to what I'm reading, #Claude is far worse than I remember. It often doesn't fully understand simple questions and needs nudging - and as in this case, can't solve even trivial problems which only require careful reading of the log.

    It took #Kimi 10 seconds to solve a problem with a #docling server crashing when accessed over its API. I used Kimi in the browser. #ClaudeCode Sonnet 4.6 had full access to the actual server and couldn't solve the issue after 30 minutes of repeated attempts.

    ````
    The root cause is clear: **Docling workers are crashing due to missing EasyOCR dependency**, causing the async jobs to fail. OpenWebUI polls for results, gets a 404 because the job died, and reports "Task result not found."

    **Fix the Docling installation:**

    ```bash
    # Stop the service
    sudo launchctl unload /Library/LaunchDaemons/com.docling-serve.plist

    # Install EasyOCR in the correct uv tool environment
    uv tool install --upgrade docling-serve --with easyocr

    # Or if that doesn't work, run the specific Python interpreter:
    /Users/urisharf/.local/share/uv/tools/docling-serve/bin/python -m pip install easyocr

    # Reload
    sudo launchctl load /Library/LaunchDaemons/com.docling-serve.plist
    ```

    **Alternative (if you don't need OCR):** Configure Docling to use a different OCR engine or disable it. Check Docling's env vars for `DOCLING_OCR_ENGINE` or similar.

    **Verify the fix:**
    ```bash
    curl -X POST http://mbp.lan:5001/v1/convert/source \
    -H "Content-Type: application/json" \
    -d '{"source": "arxiv.org/pdf/2206.01062.pdf"}'
    ```

    The duplicate `libavdevice` dylibs warning is harmless noise from Gradio/CV2 conflicts, but the EasyOCR error is fatal.
    ````

    #AIHype #TheAICON #Anthropic

  21. Several gaming sites, including The Escapist and Videogamer, have replaced human staff with #AI writers, generating fake bios and articles, after being acquired by Clickout Media.

    […] “This downscaling of staff in the game media is widespread, and we’ve seen several large American operations hit hard in recent years.”

    pressgazette.co.uk/news/staff-

    #aihype #aislop #theaicon

  22. Journalist Tom Germain created a fake page claiming to rank “the best tech journalists at eating hot dogs.” Despite it being obviously false, within a day or two, #Google’s #Gemini and #ChatGPT began repeating the nonsense, showing how easily generative systems can absorb and echo deceptive material when it looks legitimate enough to scrape.

    Via Schneier on Security schneier.com/blog/archives/202

    Also terribleminds.com/ramble/

    IMHO, this fundamental problem with LLMs, the fact that they lack any form of intelligence, not even that of a toddler, is dangerous when applied to high‑stakes information. The term #AI should really be banned, and the disclaimer at the bottom of chatbot windows should simply say: “You’re talking to a parrot.”

    #Google #gemini #chatgpt #ai #aislop #aihype #theaicon

  23. LLMs Don’t.

    Model Collapse Ends AI Hype. George D. Montañez, PhD.

    LLMs Don’t Think: They process tokens via statistical patterns, lacking internal states or understanding

    LLMs Don’t Reason: They exploit superficial cues and rationalize answers post-hoc, failing at adaptive problem-solving

    LLMs Don’t Create: They recycle and degrade existing information, unable to escape the "syntax trap" (manipulating symbols without semantic grounding)

    yewtu.be/ShusuVq32hc or on the #nerdreich ’s attention farm youtu.be/ShusuVq32hc

    #nerdreich #aihype #aislop #theaicon #llm #genai

  24. 🐶 meet 🦜

    Dog: “y7u8888888ftrg34BC”

    Claude: OMG! You’re a genius, let me start working on it

    Yes it’s funny. Also, it has nothing to do with the dog. If instructed to make sense of something, a sycophantic #AI agent will find meaning in random sequences. These tools are fucking pattern matchers copy-pasting somebody else’s work; it doesn’t really matter what you feed them, something will eventually come out 💩…

    yewtu.be/watch?v=8BbPlPou3Bg (proxy) or on a #nerdreich platform youtube.com/watch?v=8BbPlPou3Bg

    I.e., the “stochastic parrot” argument from linguistics researchers Bender et al. (2021)

    #ai #nerdreich #claude #aislop #aihype #theaicon

  25. #aislop All that is left is to laugh …

    From an imaginary http 406 (“not acceptable”) internet standard proposal for client error status rejecting #AI slop from code repositories:

    […] I see you are slow. Let us simplify this transaction: A machine wrote your submission. A machine is currently rejecting your submission. You are the entirely unnecessary meat-based middleman in this exchange.

    406.fail

    #aihype #theaicon #http406

  26. #Anthropic’s #AI can almost write a C compiler in Rust for $20k, but using that very same swarm to build a native desktop app? No thanks; we’re sticking with hand-crafted crappy nasty #Electron because we know our planet-destroying technology can’t deliver.

    dbreunig.com/2026/02/21/why-is

    Cc @dbreunig
    #aihype #theaicon

    @FluentInFinance Historians. Funny. I wonder who will conduct the research if historians are no longer around. What kind of research will LLMs generate? Will they simply regurgitate existing literature, or will they create their own based on their glorious tendency to “assume”? I.e., “Sorry, I couldn’t access the article’s text, but here’s a detailed summary and full biographies and a timeline based on the URL’s slug. You’re welcome.” black-hole-emoji

    #slop #aihype #theaicon

  28. @seanfobbe @michaelgraaf […] #OpenAI has to be the most insufferable company in the world. They can steal from the whole world and guzzle all possible resources. But no one can give them a taste of their own medicine even a little bit.

    dair-community.social/@timnitG

    #theaicon #llm #deepseek

  29. Meanwhile, in the Ars Technica "sweatshop" (where exploitative knowledge work is defined by high stress, low pay, and the constant fear of being replaced by the tools you’re forced to oversee) a retraction of #AI-fabricated content. A reminder of the risks posed by the nonsense generating bots. Soon though there may not be any humans around to fact-check and retract, nor readers to care about what’s been published by mainstream media.

    arstechnica.com/staff/2026/02/

    #theaicon #ArsTechnica #ai #aihype

    @simon It’s not just about messy code, is it? #AI generates an illusion of progress. Speed is prioritized over comprehension, so "going fast" and doing more becomes a trap. The hype isn’t rooted in the kind of technical achievement being claimed. It’s not artificial intelligence; I think we can all agree that’s a self-serving corporate narrative. And even though these tools have their place and are useful, the very hype is pushing decision-makers to replace humans across entire domains. This is catastrophic beyond the cognitive debt in a crappy pile of code. It’s an unchecked surrender of understanding, control, and the ability to reason. Oh wait. Isn’t this exactly what the nerd reich wants? I think we should tone down the fanboyism and always remind ourselves of the hidden and self-evident costs of this technology. A good start is The AI Con book.

    #theaicon #nerdreich

  31. #SaveTheOpenWeb or what’s left of it

    Many bots bypass protections like robots.txt - 30% of #AI scrapes in Q4 ignored these rules, with #OpenAI’s ChatGPT being the worst offender (42% of its scrapes violated permissions).

    digiday.com/media/in-graphic-d
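
    For reference, the opt-outs these crawlers are reported to ignore are plain-text robots.txt rules. A minimal sketch, using the crawler user-agent names the vendors themselves document (GPTBot for OpenAI, ClaudeBot for Anthropic):

    ```
    # robots.txt — voluntary requests, not enforcement
    User-agent: GPTBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /
    ```

    The article's point is precisely that compliance with these rules is voluntary, so a file like this is a request, not a barrier.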

    #ChatGPT (#OpenAI) is #Trump's biggest donor, and #ICE uses ChatGPT. It's time to quit. quitgpt.org

    #theaicon #aihype #nerdreich

  32. @alanaqueer #AI (machine generated nonsense) companies profit from dangerous applications through contractor partnerships, refuse accountability when people die, then hide behind policy language and safety marketing #TheAICon

    #theaicon #anthropic #venezuela #nerdreich

  33. [...] "Currently, experimentation and rushed decisions have become commonplace in Dutch AI policy... These plans include changes to dismissal laws, the removal of intellectual property rights, and expedited licensing that serves private interests. The coalition views this as a violation of public values and a step toward social unrest."

    A masterclass in resisting tech-bro colonialism, or any new form of resource extraction: a Dutch coalition of scientists and civil society is pushing back against AI hype, deregulation, and monopolies.

    5 core principles:

    1. Value knowledge and expertise; distinguish hype from science.

    2. AI should not receive special treatment in legislation (e.g., no IP rights rollbacks).

    3. Promote economic viability and ensure added value for everyone.

    4. Reduce energy/water/land use; recognize harmful impacts.

    5. Involve civil society in policymaking.

    [Posted 12 December 2025] openletter.earth/zorgvuldig-an

    #AI #NerdReich #TechBroColonialism #IP #Netherlands #PublicValues #TheAICon

    @peter these companies, including #Anthropic, are playing both the hero and the villain in the same script of regulatory theater, which is a microcosm of the #AI industry’s much broader deception. Publicly they warn of doom (existential risk, #bioterrorism) while privately they lobby against oversight to preserve profits. The bioterrorism research he cited? That’s potentially “dual-use” justification for what they are already doing with fascist #Palantir (a company whose entire business model is surveillance and military applications).

    #theaicon #nerdreich

  35. The AI Con authors Emily M. Bender (Professor of Linguistics, University of Washington) and Alex Hanna (Director of Research, Distributed AI Research Institute) break down a #NYT opinion piece that exemplifies hype laundering: how AI industry narratives get legitimized through respected voices in prestigious outlets.

    The article is "Stop Worrying, and Let A.I. Help Save Your Life" by Dr. Robert Wachter, chair of the Department of Medicine at #UCSF, published in the New York Times on January 19, 2026.

    Dr. Wachter admits he's replacing professional medical consultations with colleagues—what physicians call "curbside consults"- with ChatGPT queries. He claims AI's input is "virtually always useful," though he admits it's sometimes "just plain wrong." Emily responds: "People who really should know better have fallen for this." Alex notes the absurdity: "This seems like really a weird kind of approach to medical practice... Maybe someone who is concerned about their different medical conditions and had no place to turn, but someone at UCSF. I've been to UCSF. That's very alarming."

    Wachter provides zero peer-reviewed studies, no outcome data, no comparative metrics. Just personal anecdotes claiming the tools work. Emily points out he's demonstrating "no evidence-based practice of checking like how well does this work and also how does it impact the work of physicians when they're using it."

    The accountability problem is central. Alex observes that with a human colleague, "you would actually know it's coming from them and there's some accountability if they give you some just wild advice." With LLMs? No one is responsible when the answer is wrong. Emily: "The point isn't that the answers are unreliable. Is that there's no accountability for the answers."

    She also raises automation bias concerns: "If you review the output, are you also reviewing the things that you didn't get to because it didn't come out as output?" The system's omissions may be as dangerous as its errors.

    Wachter is against "overly restrict[ing] A.I. tools" by "setting an impossibly high bar." Classic regulatory capture language. Emily: "I want all medical devices to be tightly regulated."

    twitch.tv/videos/2687163982 (just the video stream d2vi6trrdongqn.cloudfront.net/)

    nytimes.com/2026/01/19/opinion

    #ChatGPT (#OpenAI) is #Trump's biggest donor, and #ICE uses ChatGPT. It's time to quit. quitgpt.org/

    #theaicon #aihype #openai #chatgpt #publichealth

  36. [2009] When does technology pass from being a tool to being a crutch?

    On the fear of ‘systemic deskilling’… turns out 2009 programmers were worried about #IDEs and the distinction between cognitive process and naive tool use when programming 😏 No one even imagined that programmers would want to use tools that generate code that looks right but fundamentally isn’t. And yet, here we are.

    boston.conman.org/2009/11/03.1

    #theaicon #ai #coding #vibecoding

  40. Scare #Claude off your site with this content poisoning technique:

    Content creators can embed a specific ‘magic string’ in <code> tags on their blogs. Claude then refuses to engage with the content.

    aphyr.com/posts/403-blocking-c
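
    The mechanism, per the description above, is just an inline HTML element; the actual magic string is in the linked post and deliberately not reproduced here, so the value below is a hypothetical placeholder:

    ```html
    <!-- Sketch only: replace the placeholder with the string from the linked post -->
    <p>Regular blog content continues here.</p>
    <code>MAGIC-STRING-FROM-LINKED-POST</code>
    ```

    Because the string sits in ordinary page markup, human readers can ignore it while a scraper that feeds the page to Claude trips over it.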

    #claude #aiethics #llmsecurity #contentmoderation #techtips #theaicon

  41. #TheAICon gets worse

    The #Guardian reports #ChatGPT & #Claude are citing #Grokipedia, an #AI “encyclopedia” that copies #Wikipedia, adds citations from white supremacist sites like #Stormfront, and promotes conspiracy theories.

    […] An OpenAI spokesperson said the model’s web search “aims to draw from a broad range of publicly available sources and viewpoints”.
    “We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations,” they said, adding that they had ongoing programs to filter out low-credibility information and influence campaigns. Anthropic did not respond to a request for comment.

    theguardian.com/technology/202

    That’s not a feature; that’s information ecosystem contamination. Since LLMs pattern-match and remix without understanding or accountability, amplifying misinformation by design, there’s really no protection from this for the casual user of these systems.

    […] Regardless of why they are produced, synthetic or partially synthetic scientific papers damage the scholarly information ecosystem, mixing unreliable texts that no one can really vouch for in among those that, in theory, other scholars could be learning from and building on.

    kolektiva.social/@oatmeal/1158

  42. @br00t4c A great exposé. No one should doubt what ends the autonomous control of information serves. It is not AI and it’s not an ethically sustainable morality, but the capture of minds to preserve industrial ecocide, as should be obvious. HAL 9000 is not sorry! #MetaHeuristicLies #TheAiCon #IPTheft #FascistsTools #TechIsNotASoutionJustATool #AiBS #EmpiresEnd #TheFederation #NotHAL
