home.social

#aisalami — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #aisalami, aggregated by home.social.

  1. > V3 was trained at a reported cost of about US$5.58 million. This is dramatically cheaper than GPT-4, for example, which cost more than US$100 million to develop.
    > DeepSeek also claims to have trained V3 using around 2,000 specialised computer chips, specifically H800 GPUs made by NVIDIA. This is again much fewer than other companies, which may have used up to 16,000 of the more powerful H100 chips.
    counterpunch.org/2025/01/29/de
    #DeepSeek #AISalami #AIChips
    #TongliangLiu

  2. I wonder if it wastes as much electricity and water as Gringo LLMs?
    > China’s DeepSeek has launched DeepSeek-R1, an MIT-licensed, fully open-source reasoning model that rivals OpenAI’s o1—and developers are losing it. Unlike OpenAI’s closed ecosystem, DeepSeek-R1 is free to modify, fine-tune, and commercialise, making it a dream come true for researchers and businesses alike.
    analyticsindiamagazine.substac
    #DeepSeek #AiSalami #LLMs ??

  3. > “It was a 100% male team, not even in fashionable clothes,” she said. “Like, come on. This is what you’ve got to offer?”

    This is what people want to build new nukes for, and restart Three Mile Island for... and order more Gas Turbines to use up all the energy, and water and carbon budgets....

    fortune.com/2024/10/30/chanel-
    #AISalami
    /HT @amydiehl via @jikodesu

  4. > The median split of AI wisdom is this: either you understand that current neural networks struggle mightily with outliers (just as their 1990s predecessors did) – and therefore understand why current AI is doomed to fail on many of its most lavish promises – or you don’t.

    theguardian.com/technology/art
    #GaryMarcus #AIsLLMs #AISalami
    /HT Timnit Gebru @ dair-community

  5. > ... because we're being inundated with fake AI content, the researchers say there have been instances when "high profile individuals are able to explain away unfavourable evidence as AI-generated, shifting the burden of proof in costly and inefficient ways."

    futurism.com/the-byte/google-r

    #InvisibleFoot #AISalami #InvisibleFootAI #FraudulentAi
    /HT @scalzi via @sef

  6. A shorter name to replace "AI" for "LLMs" came to mind,
    "NSA": Natural Stupidity Automated..

    A Meme that came through my FB feed a few times sparked the epiphany along with a recent Stephen Colbert post about TechBros automating making up BS...

    From Stefano Quintarelli (he/him)
    > Let’s forget the term AI. Let’s call them Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI).

    blog.quintarelli.it/2019/11/le

    #AISalami #NaturalStupidityAutomated

  7. Automated Plagiarism for BS (not me, the other BS, Harry Frankfurt's) :
    > Robots.txt is a single bit of code that's been used since the late 1990s as a way for websites to tell bot crawlers they don't want their data scraped and collected. It was widely accepted as one of the unofficial rules supporting the web...
    > The world's top two #AIStartups are ignoring requests by media publishers to stop scraping their web content for free model #TrainingData,
    businessinsider.com/openai-ant
    #AiSalami #LLM
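
    The robots.txt convention quoted above is machine-readable; a crawler that chooses to honor it can check permissions with Python's standard-library parser. A minimal sketch — the "GPTBot" rule and example.com URL below are illustrative assumptions, not details from the article:

```python
# Minimal robots.txt check using Python's standard-library parser
# (urllib.robotparser). The rule text, user-agent names, and URL
# here are hypothetical examples.
from urllib import robotparser

rules = """\
User-agent: GPTBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A crawler that honors robots.txt would skip this fetch.
print(rp.can_fetch("GPTBot", "https://example.com/article"))   # False: blocked
print(rp.can_fetch("OtherBot", "https://example.com/article")) # True: no rule applies
```

    As the article notes, nothing enforces this: the parser only reports what the site requested, and a scraper is free to ignore it.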

  8. Icarus: The #FutureOfScience, the book is online.
    > #icarus having been taught to fly by his father Daedalus, was destroyed by his rashness. I fear that the same fate may overtake the populations whom modern men of science have taught to fly. Some of the dangers inherent in the #ProgressOfScience while we retain our present political and #EconomicInstitutions are set forth in the following pages.
    archive.org/details/icarusorfu
    Same but different with #LLM #AISalami #GenerativeAI #CyberneticAI ?

  9. > Simulative AI reproduces human skills, while cybernetic AI embeds them..
    > .. In #SimulativeAI, intelligence manifests in a relation of comparison or resemblance of skills across external boundaries of humans and machines. In #CyberneticAI, intelligence is an emergent and distributed capacity of the hybrid human-machine assemblages as a whole.. power relations that make the human a functional part of that machine.
    #RainerMühlhoff
    /HT @RainerMuehlhoff via I can't remember, sorry!
    #AiSALAMI

  10. I wonder if Shelden has said anything about #ChatGPT or #AIText (#AISalami) in general?

    > Michael Shelden calls Newspeak "the perfect language for a society of bad writers...because it reduces the number of choices available to them".[10] Shelden says that Newspeak first corrupts writers morally, then politically, "since it allows writers to cheat themselves and their readers with ready-made prose".
    en.wikipedia.org/wiki/Politics
    #MichaelShelden #Orwell #GeorgeOrwell #PoliticsAndTheEnglishLanguage

  11. CW: Sarcasm about children, cars, disease

    #VRU is a new term for toddlers, often referred to as disease-vectors during flu-season.

    > “We have low exposure to small #VRUs” — #VulnerableRoadUsers, a reference to #children — “so very few events to estimate risk from,” the materials say. Another section concedes #CruiseVehicles’ “lack of a high-precision Small VRU classifier,” or machine learning software that would automatically detect child-shaped objects

    theintercept.com/2023/11/06/cr
    #AISalami #AI

  12. > Professor Chung Eun-Gwi, a long-time translator of Korean literature, shares her own personal journey with poetry translation while stressing that artificial intelligence ignores the decision-making process of human translators.

    Finally, Kim Seungbok, head of the publishing company CUON, delves into the driving forces behind the remarkable surge in Korean literature’s popularity in Japan..
    kln.or.kr/magazine/magazineVie
    #TranslationStudies #AISalami #Korean #KoreanLiterature #AI #Translation

  13. I wonder if #SmartAgriculture is an appropriate name for

    > ...a rising area bringing the benefits of digitalization through big data, artificial intelligence and linked data into the agricultural domain.

    #AI #AISalami is doing so well in other areas why not ram it into food production... I wonder what #LangdonWinner has to say about this...

  14. > ... computers are often too dumb to avoid hurting us...

    > San Francisco’s fire chief, Jeanine Nicholson, recently testified that as of August, autonomous vehicles interfered with firefighting duties 55 times this year. Tesla’s autopilot software, a driver-assistance system, has been involved in 736 crashes and 17 fatalities nationwide since 2019.

    nytimes.com/2023/10/11/opinion
    #AISalami #AI #Tesla #SanFrancisco #DriverlessCars #AutoPilot
    /HT @danmcquillan

  15. > I love that the Writers Guild of America, as part of their strike negotiations, prevented AI from writing scripts because you can absolutely imagine Hollywood grinding out kind of regurgitated versions of old stories so they don't have to pay or deal with writers. God knows Hollywood is regurgitating mediocre stories already. They don't really need help with that.
    #RebeccaSolnit
    #Hollywood #ChatGPT #AISalami #AI

    @[email protected]

  16. > ... If you don't want to write, don't write. The idea of having a machine fake it, it is still kind of problematic, but more than that, the entire world of capitalist innovation is creating solutions to problems that don't necessarily exist. I have not heard anybody explain that there's a problem that AI is somehow fixing, as opposed to AI becoming a problem in itself, when it comes to writing and language.
    #ChatGPT #AI #AISalami

  17. > It’s always been clear that tech basically considers everything it can grab available for its uses, and has little to no respect for the rights of human beings and our agency or privacy, our right to set boundaries.

    bloomberg.com/news/newsletters

    #RebeccaSolnit on #AISalami #PrivacyRights #TechMiners

  18. > “The training data... [has] problem[s]... resulting in models that encode stereotypical and derogatory associations along gender, race, ethnicity, and disability status,” Gebru’s paper reads. “White supremacist and misogynistic, ageist, etc., views are overrepresented in the training data, not only exceeding their prevalence in the general population but also setting up models trained on these datasets to further amplify biases and harms.”
    rollingstone.com/culture/cultu
    /HT @nnedi #AISalami #AI

  19. > “Most people are not aware of the resource usage underlying ChatGPT,” Ren said. “If you’re not aware of the resource usage, then there’s no way that we can help conserve the resources.”
    > Google reported a 20% growth in water use in the same period, which Ren also largely attributes to its AI work.

    fortune.com/2023/09/09/ai-chat

    #ChatGPT #AISalami #WaterWaste #WaterConsumption #GoogleWater #MSWater #BlueGold #FlowForLoveOfWater #MaudeBarlowe

  20. > Everybody was supposed to prefer AI over conventional search. And it turns out that nobody cares.

    > The only areas where AI is flourishing are shamming, spamming & scamming

    > AI is like genetically-modified foods, which are invariably forced upon an unwilling public.

    > AI is getting worse at doing math over time... AI is getting more sycophantic and willing to agree with false statements over time.

    honest-broker.com/p/ugly-numbe

    #TedGioia #HonestBroker? #BingAI #AISalami #SalamiAI #ChatGPT #GMO

  21. > Since 1945 we have known that nuclear technology could generate cheap energy for the benefit of humans—but could also physically destroy human civilisation.

    The article seems stupid, but part of it motivated me to see if Amitav Ghosh wrote about AI(Salami)
    > We have always appreciated the power of stories...

    #YuvalNoahHarari is like a recent #AlvinToffler stringing together cliches that seem smart?
    #NuclearTechnology #Nukes #NuclearWeapons #NuclearEnergy #AISalami #Salami

  22. > We’ve filed lawsuits challenging ChatGPT and LLaMA, industrial-strength plagiarists that violate the rights of book authors.
    Because AI needs to be fair & ethical for everyone.
    llmlitigation.com/
    #LLMlitigation #AIorSalami #AIsalami #ChatGPT #LlaMa #Plagiarism

  23. > Joseph Saveri and Matthew Butterick, the attorneys representing the authors, in January also sued Stability AI on behalf of visual artists who accused the "parasite" app of glomming off their work. Last year the duo filed a lawsuit against GitHub, alleging its AI-assisted coding tool built on stolen coders' work.
    - cbsnews.com/news/sarah-silverm

    #MatthewButterick of #BeautifulRacket in court against #StabilityAI and #GitHub for #AIcoding along with other #AISalami apps like #ChatGPT

  24. > The suits allege, among other things, that OpenAI’s ChatGPT and Meta’s LLaMA were trained on illegally-acquired datasets containing their works, which they say were acquired from “shadow library” websites like Bibliotik, Library Genesis, Z-Library, and others, noting the books are “available in bulk via torrent systems.”
    theverge.com/2023/7/9/23788741
    /HT #MoveToAmend
    #ChatGPT #OpenAI #AIorSalami #AISalami #SarahSilverman #Meta #LLaMa

  25. > Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), described the use of AI in debt collection as "punishing those who are already struggling."
    > "In a time when income inequality is off the charts, when we should be reducing things like student debt, are we really trying to build tools to put even more pressures on those who are struggling? This would be true even if the software was working as intended," Gebru said.
    > "In addition to this, we know that there are so many biases that these LLM based systems have, encoding hegemonic and stereotypical views,” Gebru added, referring to the findings of the paper on large AI models that she co-authored with several other researchers. “The fact that we don't even know what they're doing and they're not required to tell us is also incredibly concerning."
    > Some of the companies that stand to benefit most from AI integration are those that purely exist to collect debt. These companies, known as debt buyers, purchase “distressed” debt from other creditors at steep discounts—usually pennies on the dollar—then try as hard as they can to get debtors to repay in full. They don’t issue loans, or provide any kind of service that clients might owe them for; it’s a business model built on profiting from people who fell behind on payments to someone else.
    - https://www.vice.com/en/article/bvjmm5/debt-collectors-want-to-use-ai-chatbots-to-hustle-people-for-money

    #AISalami #ChatGPT #DebtCollection #VultureFunds #TimnitGebru #LLM #LLMBias

  26. > .. #MollyCrabapple is.. challenging this theft. “#AI art generators are trained on enormous datasets... millions upon millions of copyrighted images, harvested without their creator’s knowledge, let alone compensation or consent. This is effectively the greatest art heist in history. Perpetrated by respectable-seeming corporate entities backed by #SiliconValley venture capital. It’s daylight robbery,” a new open letter she co-drafted states.
    theguardian.com/commentisfree/?
    #NaomiKlein #AiSalami

  27. ... think about the purpose the utopian hallucinations about AI are serving. What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history ( #Microsoft, #Apple, #Google, #Meta, #Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.
    This should not be legal...
    - https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein?

    #NaomiKlein on #Ai #ChatGPT #AiSalami #AiHallucinations #TechnoNecro #OnBullshit #ProprietarySoftware #ProprietaryProducts #WalledGardens

  28. > ... large-scale AI models are indeed big water consumers. For example, training GPT‑3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough to produce 370 BMW cars or 320 Tesla electric vehicles), and the water consumption would have been tripled if training were done in Microsoft’s data centers in Asia. These numbers do not include the off-site water footprint associated with electricity generation.
    > ChatGPT needs a 500-ml bottle of water for a short conversation of roughly 20 to 50 questions and answers, depending on when and where the model is deployed. Given ChatGPT’s huge user base, the total water footprint for inference can be enormous.
    > ... if we only consider carbon footprint reduction (say, by scheduling more AI training around noon), we’ll likely end up with higher water consumption, which is not truly sustainable for AI.
    > ... the vast majority of data centers still use potable water and cooling towers. For example, even tech giants such as Google heavily rely on cooling towers and consume billions of liters of potable water each year. Such huge water consumption has produced a stress on the local water infrastructure; Google’s data center used more than a quarter of all the water in The Dalles, Ore.
    > ... some AI conferences have requested that authors declare their AI models’ carbon footprint in their papers; we believe that with transparency and awareness, authors can also declare their AI models’ water footprint as part of the environmental impact.
    - The Markup: Water Footprint of AI Technology
    - A conversation with Shaolei Ren and Nabiha Syed

    #TheMarkup #NabihaSyed #ShaoleiRen #AISalami #ChatGPT #CarbonFootprint #WaterFootprint #California #Oregon #DallesOregon #Virginia #DataCenterCapital #VirginiaLoudon #LoudonCounty
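
    The figures quoted above support a rough back-of-envelope calculation. A minimal sketch — the daily query volume below is a hypothetical assumption, not a figure from the interview:

```python
# Back-of-envelope arithmetic from the quoted figures: ~500 ml of water
# per 20-50 ChatGPT questions, and 700,000 L to train GPT-3.
# ASSUMED_DAILY_QUERIES is a hypothetical load, not a reported number.
BOTTLE_ML = 500
QUERIES_PER_BOTTLE = (20, 50)

ml_per_query = [BOTTLE_ML / q for q in QUERIES_PER_BOTTLE]
print(f"water per query: {ml_per_query[1]:.0f}-{ml_per_query[0]:.0f} ml")  # 10-25 ml

ASSUMED_DAILY_QUERIES = 10_000_000  # hypothetical
liters_per_day = [m * ASSUMED_DAILY_QUERIES / 1000 for m in ml_per_query]
print(f"at {ASSUMED_DAILY_QUERIES:,} queries/day: "
      f"{liters_per_day[1]:,.0f}-{liters_per_day[0]:,.0f} L/day")

# At that assumed load, inference would pass GPT-3's training water
# footprint in single-digit days.
TRAINING_L = 700_000
print(f"days to match training water: "
      f"{TRAINING_L / liters_per_day[0]:.1f}-{TRAINING_L / liters_per_day[1]:.1f}")
```

    The point of the sketch is the one made in the interview: per-query water use looks trivial, but it dominates at scale.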

  33. Like the #AISalami bullshit helps us re-examine a system where teachers don't know if their students wrote something themselves. Schools became mechanical, huh?
    Micro-blogging "debates" where human posters are suspected of being bots might tell us that other exchanges are too mechanical also.
    > A quick glance into hot-topic political Twitter threads reveals people accusing one another of being bots simply because they disagree.
    yalereview.org/article/computa
    #RenéeDiResta #ReneeDiResta #ChatGPT

  34. > .. insurers are using unregulated predictive algorithms, under the guise of scientific rigor, to pinpoint the precise moment when they can plausibly cut off payment for an older patient’s treatment. The denials that follow are setting off heated disputes between doctors and insurers, often delaying treatment of seriously ill patients who are neither aware of the algorithms, nor able to question their calculations.
    statnews.com/2023/03/13/medica
    #AISalami #AgenticShift #MedicareAdvantage #Medicare

  35. In 2012, Noam Chomsky mentioned a Lisp-language guy, #PatWinston. Winston said that, in the AI^1 field, people were directed away from "original questions." That sent me to #Chomsky's #PowersAndProspects(1996). _Prospects_ reminded me of Alex Carey's _Taking the Risk Out of Democracy_ and I now see #Mayo's #HawthorneStudies as 1927-1932 #PHacking. #GeorgeEltonMayo! in textbooks.
    ^1 AI, short for SALAMI: Systematic Approaches to Learning Algorithms and Machine Inferences #AISalami

  36. > .. heated debate.. in.. cognitive sciences, artificial intelligence, and philosophy of mind, but it is hard to see that any serious question has been posed.. the #TuringTest is reminiscent of the #Cartesian approach to.. minds. But the comparison is misleading..Jacques de #Vaucanson, the great artificer of the period, was concerned to understand the animate systems he was modelling.. in order to formulate and validate theories.. not to satisfy some performance criterion.
    #AISalami

  37. > "Does this #SALAMI deserve #HumanRights?”^1
    > ..the first and foremost AI bias is its name. It induces analogies that have limited adherence to reality.. generates infinite speculations.. Because of this misconception.. we should drop [the name #AI and].. adopt a more appropriate and scoped-limited terminology.. Systematic Approaches to Learning Algorithms and Machine Inferences.
    ^1 nymag.com/intelligencer/articl
    ^2 blog.quintarelli.it/2019/11/le
    #StefanoQuintarelli #AiSalami
    #SalamiAI #PropertyRights