#davidchapman — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #davidchapman, aggregated by home.social.
-
"Since ISIS is pretty much the worst thing in the world now ..."
This claim has not aged well. It's pretty obvious that Orange Stalinism is the worst thing in the world right now. But for reasons explained on the linked page, I think its days are numbered, so my claim too has a shelf life, as any such claim does by its very nature.
-
"Left behind by modernity, and then by postmodernity, much of the third world never had a working systematic mode, and so now doesn’t understand why that can’t work. As in the West in the 1930s, the obvious response is to try to make eternalism work by force. Fundamentalism and totalitarian nationalism—fused in every third-world version—are attempts. As these fail, they become ever more desperate, and therefore ever more extreme and violent."
https://meaningness.com/fundamentalism-countercultural-modernism
(1/2)
-
This seems connected to the push to privatise management of health data as an opportunity for corporate profit:
"Everyone can spread the word that companies and government agencies carelessly allowing cybercriminals and hostile states to get access to private personal data is outrageous and unacceptable. Make a point of this on social media. Demand legislation for financial and legal accountability."
#DavidChapman, Better Without AI, 2023
-
"Surprisingly, vampires have played a significant role in Buddhism, in Asia, for centuries. They are not a Western invention.
And, contemporary vampire fiction—“preternatural romance”—provides tools for presenting aspects of Buddhism that are otherwise difficult to communicate."
-
@b_cavello
> No one man should have all that power
"It is not intelligence that is dangerous; it is power."
-
PS "Standing down requires breaking the confusion/fear/anger/aggression cycle."
https://meaningness.com/counterculture-war
This is what I am arguing for.
-
"Media coverage of politics is awful; deliberately making everything worse in pursuit of advertising dollars."
https://meaningness.com/counterculture-war
Audiobook version:
https://fluidity.libsyn.com/wreckage-the-culture-war
All the more so on ad-funded digital platforms.
-
#TIL that Matt Arnold is recording audio versions of a number of books on his Fluidity blog. Most recently Better Without AI, David Chapman's non-fiction primer on LLMs, which starts here:
https://fluidity.libsyn.com/only-you-can-stop-an-ai-apocalypse
Prior to this, he was reading from Chapman's other work, including the unfinished hypertext books Meaningness and Time, and In The Cells Of The Eggplant.
#podcasts #AudioBooks #Fluidity #MattArnold #BetterWithoutAI #DavidChapman #AI #MOLE
-
"We’ve seen that current AI practice leads to technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of the risks of current AI technology, and can lead to safer technologies."
-
"People, societies, and cultures produce intelligence, not brains. Brains are involved, as are (for example) stories. A brain would not be sufficient to produce intelligence, if one could somehow be disentangled from the person, society, and culture."
https://betterwithout.ai/backpropaganda#fn_people_not_brains
-
"Two dangerous falsehoods afflict decisions about artificial intelligence:
* First, that neural networks are impossible to understand. Therefore, there is no point in trying.
* Second, that neural networks are the only and inevitable method for achieving advanced AI. Therefore, there is no reason to develop better alternatives."
-
One of the biggest problems with the phrase "artificial intelligence" is that decades of criti-hyping sci-fi have endowed it with the meaning "simulated mind". But human technology is no closer to creating that than it was in the 1950s. As AI experts like #DavidChapman tirelessly point out, humans haven't even developed a philosophy of mind accurate enough to tell us what a simulated mind would be simulating.
(1/2)
-
"So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and inherently limited in capabilities.
In short: they are bad."
#DavidChapman, Gradient Dissent
-
"AI is about power and control. The technical details are interesting for some of us, but they’re a sideshow.
Superintelligence is a fantasy of power, not intelligence. Intelligence is just a technical detail."
https://betterwithout.ai/one-bit-future
I've already posted quotes from this book that make this point, but I think it's worth reiterating. Plus, I just really like this quote.