home.social

#aiboosters — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #aiboosters, aggregated by home.social.

  1. @mnl
    My ignorance is load-bearing. Please respect it.
    I have prepared a block and a weak insult, in that order. The insult will involve "techbro" and possibly "slop." It will not be funny but it will be righteous, and that is the same thing.

    My credentials: I used ChatGPT once in 2024, counted the Rs in "strawberry," and have been seething ever since. You become an expert in a subject by hating it. Everyone knows this.

    My anti-datacentre blog is hosted in a datacentre. It's one of the ethical ones, because it runs me.

    I boost every preprint titled "Cognitive Collapse: How LLMs Are Hollowing Out The Something" from the Institute for Studies at the University of I Didn't Check. I have not read them. Reading is what the models do to us. I am resisting.

    The environmental cost keeps me up at night, specifically on flights, where I have time to think.
    I refuse to engage with actual AI regulation because the boosters call regulation-curious people traitors, and I will not be caught agreeing with a booster even by accident.

    Do not reply. I am not interested.

    #principledIncuriosity #aislop #aiboosters #airegulation #llm #ai #blocked

  2. "AI models have one undeniable virtue: the increase in speed and efficiency with which they can carry out tasks that were once the province of human beings. Language models can produce functional text for a wide range of contexts, while image generation models are giving us the capability to render into existence whatever image or video takes our fancy. This is widely taken as clear evidence of the benefits of AI. For Mumford, this type of thinking is precisely the problem. The myth of the machine is dehumanizing because it subordinates human values to machine values: speed and efficiency.

    The most striking evidence of the myth’s cultural pervasiveness is that many avid accelerationists do not deny that AI could mean the end of humanity. They merely differ from the doomers in believing that this risk is necessary—even desirable—to achieve the spectacular increases in efficiency and productivity promised by AGI. Mumford foresaw this extreme endpoint. “The myth of the machine,” he wrote, “the basic religion of our present culture, has so captured the modern mind that no human sacrifice seems too great provided it is offered up to the insolent Marduks and Molochs of science and technology.”

    Those branded as skeptics or doomers also still accept the premises of the myth of the machine. The stated aim of many organizations concerned with avoiding the worst AI outcomes is that we should “realize the benefits while mitigating the risks” of the technology. Mumford would argue the first half of this statement concedes too much, accepting the basic premise of the myth of the machine while presenting the task as removing the obstacles to realize its benefits. Many skeptics also share a basic misanthropic premise of machine superiority, focusing as they do on the biased, irrational, and flawed nature of human beings that needs machinic augmentation."

    compactmag.com/article/ai-and-

    #AI #Neoluddism #AIBoosters #AIHype #AIDoomers #GenerativeAI #Mumford #STS #MediaEcology

  3. "AI models have one undeniable virtue: the increase in speed and efficiency with which they can carry out tasks that were once the province of human beings. Language models can produce functional text for a wide range of contexts, while image generation models are giving us the capability to render into existence whatever image or video takes our fancy. This is widely taken as clear evidence of the benefits of AI. For Mumford, this type of thinking is precisely the problem. The myth of the machine is dehumanizing because it subordinates human values to machine values: speed and efficiency.

    The most striking evidence of the myth’s cultural pervasiveness is that many avid accelerationists do not deny that AI could mean the end of humanity. They merely differ from the doomers in believing that this risk is necessary—even desirable—to achieve the spectacular increases in efficiency and productivity promised by AGI. Mumford foresaw this extreme endpoint. “The myth of the machine,” he wrote, “the basic religion of our present culture, has so captured the modern mind that no human sacrifice seems too great provided it is offered up to the insolent Marduks and Molochs of science and technology.”

    Those branded as skeptics or doomers also still accept the premises of the myth of the machine. The stated aim of many organizations concerned with avoiding the worst AI outcomes is that we should “realize the benefits while mitigating the risks” of the technology. Mumford would argue the first half of this statement concedes too much, accepting the basic premise of the myth of the machine while presenting the task as removing the obstacles to realize its benefits. Many skeptics also share a basic misanthropic premise of machine superiority, focusing as they do on the biased, irrational, and flawed nature of human beings that needs machinic augmentation."

    compactmag.com/article/ai-and-

    #AI #Neoluddism #AIBoosters #AIHype #AIDoomers #GenerativeAI #Mumford #STS #MediaEcology

  4. "AI models have one undeniable virtue: the increase in speed and efficiency with which they can carry out tasks that were once the province of human beings. Language models can produce functional text for a wide range of contexts, while image generation models are giving us the capability to render into existence whatever image or video takes our fancy. This is widely taken as clear evidence of the benefits of AI. For Mumford, this type of thinking is precisely the problem. The myth of the machine is dehumanizing because it subordinates human values to machine values: speed and efficiency.

    The most striking evidence of the myth’s cultural pervasiveness is that many avid accelerationists do not deny that AI could mean the end of humanity. They merely differ from the doomers in believing that this risk is necessary—even desirable—to achieve the spectacular increases in efficiency and productivity promised by AGI. Mumford foresaw this extreme endpoint. “The myth of the machine,” he wrote, “the basic religion of our present culture, has so captured the modern mind that no human sacrifice seems too great provided it is offered up to the insolent Marduks and Molochs of science and technology.”

    Those branded as skeptics or doomers also still accept the premises of the myth of the machine. The stated aim of many organizations concerned with avoiding the worst AI outcomes is that we should “realize the benefits while mitigating the risks” of the technology. Mumford would argue the first half of this statement concedes too much, accepting the basic premise of the myth of the machine while presenting the task as removing the obstacles to realize its benefits. Many skeptics also share a basic misanthropic premise of machine superiority, focusing as they do on the biased, irrational, and flawed nature of human beings that needs machinic augmentation."

    compactmag.com/article/ai-and-

    #AI #Neoluddism #AIBoosters #AIHype #AIDoomers #GenerativeAI #Mumford #STS #MediaEcology

  5. "AI models have one undeniable virtue: the increase in speed and efficiency with which they can carry out tasks that were once the province of human beings. Language models can produce functional text for a wide range of contexts, while image generation models are giving us the capability to render into existence whatever image or video takes our fancy. This is widely taken as clear evidence of the benefits of AI. For Mumford, this type of thinking is precisely the problem. The myth of the machine is dehumanizing because it subordinates human values to machine values: speed and efficiency.

    The most striking evidence of the myth’s cultural pervasiveness is that many avid accelerationists do not deny that AI could mean the end of humanity. They merely differ from the doomers in believing that this risk is necessary—even desirable—to achieve the spectacular increases in efficiency and productivity promised by AGI. Mumford foresaw this extreme endpoint. “The myth of the machine,” he wrote, “the basic religion of our present culture, has so captured the modern mind that no human sacrifice seems too great provided it is offered up to the insolent Marduks and Molochs of science and technology.”

    Those branded as skeptics or doomers also still accept the premises of the myth of the machine. The stated aim of many organizations concerned with avoiding the worst AI outcomes is that we should “realize the benefits while mitigating the risks” of the technology. Mumford would argue the first half of this statement concedes too much, accepting the basic premise of the myth of the machine while presenting the task as removing the obstacles to realize its benefits. Many skeptics also share a basic misanthropic premise of machine superiority, focusing as they do on the biased, irrational, and flawed nature of human beings that needs machinic augmentation."

    compactmag.com/article/ai-and-

    #AI #Neoluddism #AIBoosters #AIHype #AIDoomers #GenerativeAI #Mumford #STS #MediaEcology

  6. "AI models have one undeniable virtue: the increase in speed and efficiency with which they can carry out tasks that were once the province of human beings. Language models can produce functional text for a wide range of contexts, while image generation models are giving us the capability to render into existence whatever image or video takes our fancy. This is widely taken as clear evidence of the benefits of AI. For Mumford, this type of thinking is precisely the problem. The myth of the machine is dehumanizing because it subordinates human values to machine values: speed and efficiency.

    The most striking evidence of the myth’s cultural pervasiveness is that many avid accelerationists do not deny that AI could mean the end of humanity. They merely differ from the doomers in believing that this risk is necessary—even desirable—to achieve the spectacular increases in efficiency and productivity promised by AGI. Mumford foresaw this extreme endpoint. “The myth of the machine,” he wrote, “the basic religion of our present culture, has so captured the modern mind that no human sacrifice seems too great provided it is offered up to the insolent Marduks and Molochs of science and technology.”

    Those branded as skeptics or doomers also still accept the premises of the myth of the machine. The stated aim of many organizations concerned with avoiding the worst AI outcomes is that we should “realize the benefits while mitigating the risks” of the technology. Mumford would argue the first half of this statement concedes too much, accepting the basic premise of the myth of the machine while presenting the task as removing the obstacles to realize its benefits. Many skeptics also share a basic misanthropic premise of machine superiority, focusing as they do on the biased, irrational, and flawed nature of human beings that needs machinic augmentation."

    compactmag.com/article/ai-and-

    #AI #Neoluddism #AIBoosters #AIHype #AIDoomers #GenerativeAI #Mumford #STS #MediaEcology

  7. The ugly pathologisation of ‘AI boosters’

    A trend I’m noticing in the online critical discourse about LLMs is increasingly vitriolic accounts of ‘AI boosters’. Consider this recent instance from Audrey Watters, whose work I’m otherwise a huge fan of:

    Ed’s piece is titled “How to Argue with an AI Booster,” but honestly (and contrary to what some people seem to believe about me), I’m not interested in arguing with these people. Frankly I don’t think there’s anything that one can say to change their minds. It’s like arguing with addicts or cultists — what’s the point?! Boosters will hear none of it — no surprise, since they’re spending their days basking in the sycophancy and comfort of their machine-oracles.

    “Addicts or cultists”… I’ll just leave that line to sit there. This is probably the most explicit example I’ve encountered but I’ve seen increasing amounts of this. It was one of many reasons I got sick of Bluesky and deactivated my account. Ed Zitron offers a quite specific account of what constitutes a booster:

    So, an AI booster is not, in many cases, an actual fan of artificial intelligence. People like Simon Willison or Max Woolf who actually work with LLMs on a daily basis don’t see the need to repeatedly harass everybody, or talk down to them about their unwillingness to pledge allegiance to the graveyard smash of generative AI. In fact, the closer I’ve found somebody to actually building things with LLMs, the less likely they are to emphatically argue that I’m missing out by not doing so myself.

    No, the AI booster is symbolically aligned with generative AI. They are fans in the same way that somebody is a fan of a sports team, their houses emblazoned with every possible piece of tat they can find, their Sundays living and dying by the success of the team, except even fans of the Dallas Cowboys have a tighter grasp on reality.

    However, my fear is that distinctions are getting flattened here, so that ‘AI booster’ will start to slide into anyone who doesn’t entirely share my critique of LLMs, or even anyone who willingly uses LLMs. There’s a terminally online character to the definition here (i.e. many of Zitron’s points ultimately relate to how people relate to him on social media) which suggests how these fault lines are inflected through the argumentative dynamics of social media. I’m sympathetic to Zitron’s post at some points, but at others it feels like it’s one step away from “how to TOTALLY DESTROY AI boosters” in the worst YouTube style. I think he’s explicitly drawing a distinction where ‘AI boosters’ are a specific group, but he’s talking about how you recognise a booster in a way which has a much wider scope in practice.

    #AIBoosters #AICritics #AudreyWatters #EdZitron #SocialMedia
