home.social

#fallacy — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #fallacy, aggregated by home.social.

  1. rot13 rot_gag.nfo

    install rot13 for your OS

    I've always liked playing with rot13

    ROT13 is a simple letter substitution cipher that replaces a letter with the 13th letter after it in the Latin alphabet. It is a special case of the Caesar cipher, which was developed in ancient Rome and used by Julius Caesar in the 1st century BC (see timeline of cryptography).

    Usage

    rot13 filename
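
    If the rot13 command isn't packaged for your OS, a minimal Python equivalent of the same mapping (using the standard library's rot_13 codec) might look like this:

      import codecs

      def rot13(text: str) -> str:
          # Rotate each Latin letter by 13 places; everything else passes through.
          return codecs.encode(text, "rot_13")

      print(rot13("MilitaryGradeEncryption"))  # -> ZvyvgnelTenqrRapelcgvba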

    1. Sbe lbhe frphevgl, guvf cbfg unf orra rapelcgrq jvgu EBG-13, gjvpr.
    2. ZvyvgnelTenqrRapelcgvba

    sources:

    man rot13(1)

    man tr(1)

    en.wikipedia.org/wiki/ROT13

    en.wikipedia.org/wiki/Timeline_of_cryptography

    #programming #ROT13 #encryption #fallacy #fun #joke #mathematics

  2. I'm drawing a blank right now: what is that fallacy (?) called, the one that says you cannot build correct conclusions on false premises?
    #FediAsk #Fallacy

  3. I'd like to propose a new category of logical fallacy called the hallucinated herring. It's like a red herring in that it distracts you from the correct logical argument, except the herring doesn't even exist; it was just an AI hallucination. #logic #reason #fallacy #TheFishThatWasntThere

  4. There are still quite a lot of people who think that whatever an authority, such as the government, says must be accepted as true. Argumentum ad verecundiam, or appeal to authority. Thankfully I'm not like that, especially since I'm knowledgeable and well educated.

    #sofiaflorina #ソフィアフロリナ #authority #authorities #becritical #government #governments #thegovernment #appealtoauthority #logicalfallacy #fallacy #fallacies #itstrue #itistrue #thatstrue #thatistrue #reality #thereality #provemewrong

  5. I saw this on Mastodon and almost had a stroke.

    @davidgerard wrote:

    “Most of the AI coding claims are conveniently nondisprovable. What studies there are show it not helping coding at all, or making it worse

    But SO MANY LOUD ANECDOTES! Trust me my friend, I am the most efficient coder in the land now. No, you can’t see it. No, I didn’t measure. But if you don’t believe me, you are clearly a fool.

    These guys had one good experience with the bot, they got one-shotted, and now if you say “perhaps the bot is not all that” they act like you’re trying to take their cocaine away.”

    First, the claim is falsifiable, and proving propositions about algorithms (i.e., code) is part of what I do for a living. Mathematically, human-written code and AI-written code can both be tested, which means you can falsify propositions about them. You would test them the same way.

    There is no intrinsic mathematical distinction between code written by a person and code produced by an AI system. In both cases, the result is a formal program made of logic and structure. In principle, the same testing techniques can be applied to each. If it were really nondisprovable, you could not test to see what is generated by a human and what is generated by AI. But you can test it. Studies have found that AI-generated code tends to exhibit a higher frequency of certain types of defects. So, reviewers and testers know what logic flaws and security weaknesses to look for. This would not be the case if it were nondisprovable.
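
    To make that concrete, here is a toy sketch in Python: the identical test exercises a function regardless of who or what wrote it (gcd_human and gcd_ai are hypothetical stand-ins, not code from any study):

      import math

      def gcd_human(a: int, b: int) -> int:
          # Hypothetical human-written implementation (Euclid's algorithm).
          while b:
              a, b = b, a % b
          return a

      def gcd_ai(a: int, b: int) -> int:
          # Hypothetical AI-generated implementation.
          return math.gcd(a, b)

      def check(impl) -> None:
          # The proposition "impl computes the gcd" is falsified by any failing case.
          assert impl(12, 18) == 6
          assert impl(7, 0) == 7

      for impl in (gcd_human, gcd_ai):
          check(impl)  # same test, regardless of authorship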

    You can study this from datasets where the source of the code is known. You can use open-source pull requests identified as AI-assisted versus those written without such tools. You then evaluate both groups using the same industry-standard analysis tools: static analyzers, complexity metrics, security scanners, and defect classification systems. These tools flag bugs, vulnerabilities, performance issues, and maintainability concerns. They do so in a consistent way across samples.

    A widely cited analysis of 470 real pull requests reported that AI-generated contributions contained roughly 1.7 times as many issues on average as human-written ones. The difference included a higher number of critical and major defects, as well as more logic and security-related problems. Because these findings rely on standard measurement tools (counting defects, grading severity, and comparing issue rates), the results are grounded in observable data. Again, that is the point: it’s testable and therefore disprovable.
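
    As a sketch of the arithmetic behind such a comparison (the counts below are invented for illustration and are not the study’s data):

      # Hypothetical per-group totals from a dataset of labeled pull requests.
      ai_issues, ai_prs = 120, 100        # issues flagged in AI-assisted PRs
      human_issues, human_prs = 70, 100   # issues flagged in human-written PRs

      ai_rate = ai_issues / ai_prs
      human_rate = human_issues / human_prs
      print(f"issue-rate ratio: {ai_rate / human_rate:.2f}x")  # -> 1.71x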

    This is a good paper that goes into it:

    In this paper, we present a large-scale comparison of code authored by human developers and three state-of-the-art LLMs, i.e., ChatGPT, DeepSeek-Coder, and Qwen-Coder, on multiple dimensions of software quality: code defects, security vulnerabilities, and structural complexity. Our evaluation spans over 500k code samples in two widely used languages, Python and Java, classifying defects via Orthogonal Defect Classification and security vulnerabilities using the Common Weakness Enumeration. We find that AI-generated code is generally simpler and more repetitive, yet more prone to unused constructs and hardcoded debugging, while human-written code exhibits greater structural complexity and a higher concentration of maintainability issues. Notably, AI-generated code also contains more high-risk security vulnerabilities. These findings highlight the distinct defect profiles of AI- and human-authored code and underscore the need for specialized quality assurance practices in AI-assisted programming.

    https://arxiv.org/abs/2508.21634

    The big problem in discussions about AI in programming is the either-or thinking, when it’s not about using it everywhere or banning it entirely. Tools like AI have specific strengths and weaknesses. Saying ‘never’ or ‘always’ oversimplifies the issue and turns the narrative into propaganda that creates moral panic or shills AI. It’s a bit like saying you shouldn’t use a hammer just because it’s not good for brushing your teeth.

    AI tends to produce code that’s simple, often a bit repetitive, and very verbose. It’s usually pretty easy to read and tweak, which helps with long-term maintenance. But AI doesn’t reason about code the way an experienced developer does. It makes mistakes that a human wouldn’t, potentially introducing security flaws. That doesn’t mean we shouldn’t use it where it works well, which is not everywhere.

    AI works well for certain tasks, especially when the scope is narrow and the risk is low. Examples include generating boilerplate code, internal utilities, or prototypes. In these cases, the tradeoff is manageable. However, it’s not suitable for critical code like kernels, operating systems, compilers, or cryptographic libraries. A small mistake in memory safety or privilege separation can lead to major failures, and so can problems with synchronization, pointer management, or access control.

    Other areas where AI should not be used include memory allocation handling, scheduling, process isolation, or device drivers. A lot of that depends on implicit assumptions in the system’s architecture. Generative models don’t grasp these nuances. Instead of carefully considering the design, AI tends to replicate code patterns that seem statistically likely, doing so without understanding the purpose behind them.

    Yes, I’m aware that Microsoft is using AI to write code everywhere I said it should not be used. That is the problem. However, political pundits, lobbyists, and anti-tech talking heads are discussing something they have no understanding of and aren’t specifying what the problem actually is. This means they can’t possibly turn grassroots initiatives into actual laws that specify where AI should not be used, which is why we have this weird astroturfing bullshit.

    They’re taking advantage of the reaction to Microsoft using AI-generated code where it shouldn’t be used to argue that AI shouldn’t be used anywhere at all in any generative context. AI is useful for tasks like writing documentation, generating tests, suggesting code improvements, or brainstorming alternative approaches. These ideas should then be thoroughly vetted by human developers.

    Something I’ve started to notice about a lot of the content on social media platforms is that most of the posts people are liking, sharing, and memetically mutating—and then spreading virally—usually don’t include any citations, sources, or receipts. It’s often just some out-of-context screenshot with no reference link or actual sources.

    A lot of the anti-AI content is not genuine critique. It’s often misinformation, but people who hate AI don’t question it or ask for sources because it aligns with their biases. The propaganda on social media has gotten so bad that anything other than heavily curated and vetted feeds is pretty much useless, and it’s filled with all sorts of memetic contagions with nasty hooks that are optimized for you algorithmically. I am at the point where I will disregard anything that is not followed up with a source. Period. It is all optimized to persuade, coerce, or piss you off. I am only writing about this because I’m actually able to contribute genuine information about the topic.

    The claim that symbolic propositions written by AI agents (i.e., code) are non-disprovable because they were written by AI boggles my mind. It’s like saying that an article written in English by AI is not English because AI generated it. It might be a bad piece of text, but it’s syntactically, semantically, and grammatically English.

    Basically, any string of data can be represented in a base-2 system, where it can be interpreted as bits (0s and 1s). Those bits can be used as the basis for symbolic reasoning. In formal propositional logic, a proposition is a sequence of symbols constructed according to strict syntax rules (atomic variables plus logical connectives). Under a given semantics, it is assigned exactly one truth value (true or false) in a two-valued logic system.

    By implying AI-written code is nondisprovable, they are essentially saying it is not binary, isn’t symbolically logical at all, and cannot be evaluated as true or false. At the lowest level, compiled code consists of binary machine instructions that a processor executes. At higher levels, source code is written in symbolic syntax that humans and tools use to express logic and structure. You can also translate parts of code into formal logic expressions. For example, conditions and assertions in a program can be modeled as Boolean formulas. Tools like SAT/SMT solvers or symbolic execution engines check those formulas for satisfiability or correctness. It blows my mind how confidently people talk about things they do not understand.
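
    For instance, a single branch condition can be handed to an SMT solver as a Boolean formula. A minimal sketch in Python, assuming the z3-solver package (the condition is an invented example, not code from the studies):

      from z3 import Implies, Int, Not, Solver, unsat

      x = Int("x")
      # Claim about a program condition: for integer x, if x > 2 then x*x > 4.
      claim = Implies(x > 2, x * x > 4)

      s = Solver()
      s.add(Not(claim))  # search for a counterexample to the claim
      if s.check() == unsat:
          print("claim holds for all integers")
      else:
          print("counterexample:", s.model())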

    Furthermore, it’s wild to me that they don’t realize the projection.

    @davidgerard wrote:

    “But SO MANY LOUD ANECDOTES! Trust me my friend, I am the most efficient coder in the land now. No, you can’t see it. No, I didn’t measure. But if you don’t believe me, you are clearly a fool.”

    They are presenting a story (i.e., that the claims are not disprovable) and accusing computer scientists of relying on anecdotal evidence, all without providing evidence to support this, while expecting people to take it at face value. They are doing exactly what they accuse others of doing.

    It comes down to this: they feel that people ought not to use AI, so they are tacitly committed to a future in which people do not use AI. For example, a major argument against AI is the strain it puts on resources, which is driving up the prices of computer components, as well as the ecological harm it causes. They feel justified in lying and misinforming others if it achieves the outcome they want: people not using AI because it is bad for the environment. That is a very strong point, but most people don’t care about it, which is why they lie about things people would care about.

    It’s corrupt. And what’s really scary is that people don’t recognize when they are part of corruption or a corrupt conspiracy to misinform. Well, they recognize it when they see the other side doing it, that is. No one is more dangerous than people who feel righteous in what they are doing.

    It’s wild to me that the idea that if you cannot persuade someone, it is okay to bully, coerce, or harass them, or to spread misinformation to get what you want, because your side is right, has become so normalized on the Internet that people can’t see why it is problematic.

    That people think it is okay to hurt others to get them to agree is the most disturbing part of all of this. People have become so hateful. That is a large reason why I don’t interact with people on social media, don’t really consume content from it or respond there, and am writing a blog post about this instead of engaging with the person who prompted it.

  6. Have we been #Duped by the primary #Energy #Fallacy? : Medium

    How Much #Diet #Soda Is Too Much Diet Soda? #Doctors Weigh In. : Misc

    How to #Dominate #AI before it dominates us : Misc

    Check our latest #KnowledgeLinks

    knowledgezone.co.in/resources/

  7. Why can economic #inequality depress the #minimumWage?

    An is-ought #fallacy?

    From over 135,000 people in #protests, experiments, and #processTracing studies, scientists found that people seemed to infer what people OUGHT to earn from what they DO earn.

    doi.org/10.1037/xge0001772

  8. I just participated in the first W3C Authentic Web Mini Workshop¹ hosted by the Credible Web Community Group² (of which I’m a longtime member). Up front I noted that our very discussion itself needed to be careful about its own credibility and extra critical of any technologies discussed or assertions made, and I initially identified two flaws to avoid on a meta level, having seen them occur many times in technical or standards discussions:

    1. Politician’s Syllogism — "Something must be done about this problem. Here is something, let's do it!"

    2. Solutions Looking For Problems — "I am interested in how tech X can solve problem Y"

    After some back and forth and arguments in the Zoom chat, I observed participants questioning speakers of arguments rather than the arguments themselves, so I had to identify a third fallacy to avoid:

    3. Ad Hominem — while obvious examples are name-calling (which is usually against codes of conduct), less obvious examples (witnessed in the meeting) include questioning a speaker’s education (or lack thereof) like what they have or have not read, or would benefit from reading.

    I am blogging these here both as a reminder (should you choose to participate in such discussions), and as a resource to cite in future discussions.

    We all need to develop expertise in recognizing these logical and methodological flaws & fallacies, and call them out when we see them, especially when used against others.

    We need to promptly prune these flawed methods of discussion, so we can focus on actual productive, relevant, and yes, credible discussions.

    #W3C #credweb #credibleWeb #authenticWeb #flaw #fallacy #fallacies #logicalFallacy #logicalFallacies


    Glossary

    Ad Hominem
      attacking an attribute of the person making an argument rather than the argument itself
      https://en.wikipedia.org/wiki/Ad_hominem

    Politician's syllogism
      https://en.wikipedia.org/wiki/Politician%27s_syllogism

    Solutions Looking For Problems (related: #solutionism, #solutioneering)
      Promoting a technology that either has not identified a real problem for it to solve, or actively pitching a specific technology to any problem that seems related. Wikipedia has no page on this but has two related pages:
      * https://en.wikipedia.org/wiki/Law_of_the_instrument
      * https://en.wikipedia.org/wiki/Technological_fix
      Wikipedia does have an essay on this specific to Wikipedia:
      * https://en.wikipedia.org/wiki/Wikipedia:Solutions_looking_for_a_problem
      Stack Exchange has a thread on "solution in search of a problem":
      * https://english.stackexchange.com/questions/250320/a-word-that-means-a-solution-in-search-of-a-problem
      Forbes has an illustrative anecdote:  
      * https://www.forbes.com/sites/stephanieburns/2019/05/28/solution-looking-for-a-problem/


    References

    ¹ https://www.w3.org/events/workshops/2025/authentic-web-workshop/
    ² https://credweb.org/ and https://www.w3.org/community/credibility/


    Previously in 2019 I participated in #MisinfoCon:
    * https://tantek.com/2019/296/t1/london-misinfocon-discuss-spectrum-recency
    * https://tantek.com/2019/296/t2/misinfocon-roundtable-spectrums-misinformation

  21. Another aspect of debt,
    considered as an information technology,
    is that it affects the information environment of the borrower.

    If you are managing a company which has borrowed money,
    making your payments becomes one of the survival conditions for that company.

    At low levels of debt, generating short-term cash flow is one priority among others,
    but for a highly indebted company it becomes a signal which swamps all others.

    You might want to change the world, but if you don’t meet the coupon payments, you’ll never get the chance to see if your other strategic priorities would have worked.

    Consequently, a company with lots of debt cannot help but have a bias toward the short term.

    Which might be considered problematic,
    as the last few decades in the Western capitalist world have seen the rise of an industry
    (leveraged buyouts, or “private equity”)

    which has made it part of its fundamental operating strategy to load companies up with debt.

    Considered in this light, debt is a technology of control as well as of information
    – it’s a means of exerting discipline on management teams who might otherwise be tempted to follow priorities other than short-term financial returns.

    This is, as far as I can tell, the real meaning behind the populist critiques of “#financialisation” in the economy.

    There’s really nothing particularly bad about the growth of the financial sector,
    even to the extent that it’s outstripped the growth of the “real” economy.

    Quite simple mathematics ought to be enough to convince us that as the economy grows,
    the number of links and relationships between producers, consumers and investors will grow at a faster rate,
    and so you’d expect the parts of the economy in which decision making and information processing take place to grow faster than the “real” economy.
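
    A quick numeric sketch of that combinatorial point (the snippet and its figures are an illustration added here, not from the post): among n participants there are n(n-1)/2 possible pairwise relationships, which grows roughly with the square of n.

      # Illustration only: possible pairwise links among n economic participants.
      for n in (10, 100, 1000):
          links = n * (n - 1) // 2  # "n choose 2" pairs
          print(f"{n:>5} participants -> {links:>7,} possible links "
                f"({links / n:.1f} per participant)")
      # 10 -> 45; 100 -> 4,950; 1,000 -> 499,500: the coordinating layer
      # grows much faster than the population it connects.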

    It’s the same logic by which the brains of primates take up proportionally more energy than those of rodents;

    finance is part of the real economy, just like the cerebellum is a real organ.

    What’s bad about “financialisation” is neither more nor less than the over-use of debt.

    Modern corporations do often behave badly,
    and they make systematically worse decisions than they used to;
    this isn’t a delusion of age.

    They do this partly because they have outsourced key functions
    (cutting themselves off from important sources of information),

    and partly because their priorities are warped by the need to generate short-term cash flow.

    Both of these problems can in large part be traced back to the private equity industry,
    working either as a direct driver of excess leverage,
    or as a constant threat which makes managers behave as if they were already subject to its discipline.

    #Management #science and #cybernetic #history is all about things which began as solutions,
    💥then turned into problems because the world changed.

    Once upon a time, back in the 1970s,
    private equity and LBOs were the solution to a problem of lazy, sclerotic incumbent management teams,
    self-dealing and failing to make tough decisions.

    But it’s now the 2020s, and private equity may itself be the biggest problem in our global information processing system.

    The way that corporate history progresses is that we try to keep up with the ever-increasing complexity of the world,
    ♦️and then when this is no longer possible, we have a crisis and reorganise.

    We’ve had the crisis
    – or perhaps we are still going through it
    – and now it’s time to think about how to reorganise.

    (3/3)

    amazon.com/stores/Dan-Davies/a

    #debt #information #technology #criminogenic #organisation #Stafford #Beer #Barry #Clemson #accountability #sink #Boeing #737MAX #Boeing #merger #McDonnell #Douglas #engineering #culture #cost #control #Ricardian #Fallacy #hard #data #culture #best #practice

  22. It’s really striking to compare the two big crises of the last two decades. 

    The Global Financial Crisis beginning in 2007 was purely a matter of book entries in computers. 

    No actual physical capital was destroyed; nobody died. 

    By contrast, the COVID-19 pandemic beginning in 2020 was a massive blow to productive capacity
    – millions of people died,
    buildings were rendered unusable. 

    But it was the first of these two crises that led to massive scarring and a prolonged global recession, not the second. 

    Why?

    It might be said that the reason why is that if you consider the world economy as an organism,
    the pandemic attacked its muscles and sinews
    while the financial crisis attacked its brain. 

    The global financial services industry is a crucial part of the distributed decision-making system of the world,
    and its core component is a very old, but still surprisingly poorly understood technology
    called #debt.

    In the strictest, purest sense,
    debt is an “#information #technology”

    – it’s one of the mechanisms human beings have invented to handle information. 

    By structuring an investment in someone else’s project as a debt,
    you immediately reduce the space of possible outcomes to two
    – you get paid back, or you don’t. 
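
    A minimal sketch of that compression (a toy example added here, not the author's code): whatever the project actually earns, the lender only has to track a single yes/no signal.

      # Toy example: a debt contract collapses a continuum of outcomes into two.
      def lender_signal(project_value: float, amount_owed: float) -> str:
          # The lender's claim is capped at amount_owed; any upside beyond it
          # belongs to the borrower, so the lender need not track it.
          return "repaid" if project_value >= amount_owed else "default"

      for outcome in (20, 99, 100, 150, 1_000):
          print(f"project worth {outcome:>5,} -> {lender_signal(outcome, 100)}")
      # Every outcome from 100 upward looks identical to the creditor,
      # which is exactly the information saving described above.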

    There are a lot of other information-processing techniques that banks and investors use,
    from statistical credit scoring to modern portfolio theory,
    but this is the big one. 

    It allows a modern bank to keep track of vastly more financial investments than would ever have been possible for a medieval merchant in the first days of double-entry book-keeping. 

    Rather than having to preserve face-to-face relationships with every single borrower,
    you can rely on the fact that 99% of mortgage loans get paid back in full and on time,
    and concentrate your attention on managing the 1% of cases where something goes wrong.

    🔥The trouble is: if you build a business on this basis, what happens when it turns out that there’s a small variance❓

    Unfortunately, a small variance in the proportion of good loans from 99% to 97% means a tripling in the number of bad loans❗️

    and consequently a huge excess load on the systems that are meant to deal with them. 
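
    A back-of-envelope check of that arithmetic (the million-loan book is a made-up figure; the 99% and 97% rates are the post's):

      loans = 1_000_000  # hypothetical mortgage book
      for good_rate in (0.99, 0.97):
          bad = round(loans * (1 - good_rate))
          print(f"{good_rate:.0%} good loans -> {bad:>6,} problem loans to work out")
      # 99% -> 10,000 problem loans; 97% -> 30,000: a two-point dip in the
      # good rate triples the load on the systems handling exceptions.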

    Faced with this massive cognitive overload,
    the system froze. 

    And even more unfortunately,
    in a world in which trillions of dollars need to be rolled over and refinanced every day,
    the one thing that the financial sector cannot do is stop for a moment to regain its bearings. 

    If information processing were free and the bankruptcy process frictionless,
    the Global Financial Crisis would have been over in a month. 

    As it was, all the information which had been attenuated by the use of multiple layers of secured debt came back,
    suddenly unattenuated and needing to be dealt with.

    That’s the “cybernetic history” of the debt crisis which I outline in my book,
    and I think it’s a useful alternative perspective to the economic one,
    and one which makes it more comprehensible that a relatively small market for synthetic CDOs turned into a continental crisis. 

    But this might not even have been the most pernicious use of debt seen in our lifetimes.

    (2/3)

    profilebooks.com/work/the-unac

    #criminogenic #organisation #Stafford #Beer #Barry #Clemson #accountability #sink #Boeing #737MAX #Boeing #merger #McDonnell #Douglas #engineering #culture #cost #control #Ricardian #Fallacy #hard #data #culture #best #practice

  23. More from Dan Davies:

    About five years ago, I started to get very interested in an obscure subject called “#management #cybernetics”. 

    It was a product of the technological dreamscape of the 1960s and 70s;

    after the invention of the computer, but before it became ubiquitous,
    in a period when there was room for speculation about how the new world of artificial intelligence would change our world.

    I had just finished my previous book
    (Lying for Money: How Legendary Frauds Reveal the Workings of the World),
    and was keen to say a bit more about how organisations go wrong. 

    It seemed to me that it might be possible to expand the concept of a “#criminogenic #organisation
    (one where the incentives structurally produce illegal behaviour)
    to a more general 🔸“bad-decision-o-genic organisation”.

    And furthermore, that the weird mixture of pure mathematics, philosophy, accountancy, physics and economics that came together in the work of now-forgotten management gurus like #Stafford #Beer and #Barry #Clemson might be the way to think about it.

    What do bad decision-making organisations have in common❓

    Quite a few things,
    but one of the clearest signs is something you might call an “#accountability #sink”. 

    This is something that might be familiar to anyone who has been bumped from an overbooked flight. 

    There is no point getting angry at the gate attendant;
    they are just implementing a corporate policy which they have no power to change. 

    But nor can you complain to the person who made the decision
    – that is also forbidden by the policy. 

    The airline has created an arrangement whereby the gate attendant speaks to you with the voice of an amorphous algorithm
    -- but you have to speak back as if to a human being like yourself. 

    The communication between the decision-maker and the decided-upon has been broken
    – they have created a handy #sink into which #negative #feedback can be poured without any danger of it affecting anything.

    ⚠️This breaking of the feedback links is, I think, one of the most important things that has happened to large organisations
    – banks, but also large corporations and government departments
    – over the last fifty years. 

    In most cases, it’s not been carried out purely as a responsibility-dodging exercise,
    or as part of a conscious effort to make things worse. 

    That has happened, on occasion,
    but for the most part, after spending a lot of time looking into examples,
    I concluded that feedback links were being broken simply because 💥they had to be. 

    The world keeps growing and getting more complicated,
    which means that individual managers gradually become overwhelmed;

    the problem of trying to get a sensible drink from the firehose of information that pours into any large organisation every day has become unbearable.

    And this is why institutions have started ❌ delegating decisions to systems
    – credit scoring algorithms, regulatory risk weighting formulas
    and the like. 

    As well as allowing decision-making to be automated and industrialised,
    they provide a psychological defence system,
    👉shielding individual human beings from the consequences of having to make a decision and own it.

    Most of the time, these systems work well. 

    But when they break down, the consequences can be spectacular. 

    Because every such algorithm or rulebook is, implicitly, based on a #model of the thing they’re meant to govern. 

    And every such model is capable of failure. 

    And when something comes along that’s outside the model
    – like, for example, a sustained nationwide fall in US house prices
    – you end up in a situation where literally nobody knows what to do.
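
    A hypothetical toy of such an embedded model (entirely invented for illustration; not a real scoring formula and not from the book): the rule below silently assumes house prices keep rising, and it has no defined behaviour for the case where that assumption fails.

      # Hypothetical toy scoring rule; not any real lender's formula.
      def approve_mortgage(income: float, house_price: float,
                           assumed_appreciation: float = 0.03) -> bool:
          # Implicit model: collateral grows ~3% a year, so a loan looks safe
          # whenever next year's projected value exceeds four times income.
          projected = house_price * (1 + assumed_appreciation)
          return projected > income * 4

      print(approve_mortgage(income=50_000, house_price=220_000))  # True
      # If prices fall nationwide, the function still returns crisp answers,
      # but to a question about a world that no longer exists, and nothing
      # in the system flags that its underlying model has failed.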

    (1/3)
    netinterest.co/p/decisions-nob

    @pluralistic
    #Boeing #737MAX #Boeing #merger #McDonnell #Douglas #engineering #culture #cost #control #Ricardian #Fallacy #hard #data #culture #best #practice

  24. Dan Davies:

    These thoughts struck me while listening to the ** Odd Lots podcast on Boeing,
    which I thoroughly recommend.

    The #Boeing #737MAX is one of the core case studies in
    "The Unaccountability Machine" (you can buy it now!),

    because it’s a really graphic example of 💥a decision-making system which generated an awful result -- ⚠️without any identifiable natural person being responsible for it.

    The Odd Lots episode focuses on #Boeing's 1997 #merger with #McDonnell #Douglas as the inflection point in Boeing’s history. 

    ♦️This caused a thorough cultural change from Boeing’s historical “#engineering #culture” -- based on ❇️getting things right and doing what was needed,

    ♦️to something more in tune with the Jack Welch / Shareholder Value spirit of the times,
    🆘focused on #cost #control and "return on investment".

    It's kind of odd, though. 

    Boeing was the acquirer and the larger company;
    McDonnell was actually not in that great shape;
    it had a good defence business but a bad civilian aircraft business;
    in fact, one of the attractions for Boeing was that McDonnell had spare factory capacity that Boeing could use to accelerate its own overflowing order book. 

    As you’d expect from a company run along financial lines, it was quite indebted too. 

    So why was it McDonnell’s culture that became dominant❓

    ⬇️ Let’s take a step back into economics. 

    Joseph Schumpeter identified something he called “The #Ricardian #Fallacy”. 

    This is the tendency of economists to 🔹build a theoretical model,
    🔹solve the model
    and then 🔸act as if they have solved the problem in the real world. 

    To an extent, this isn’t particular to economists
    – it’s the nature of modelling that having made a representation of reality specifically in order to attenuate its complexity and make a problem manageable,
    --⭐️ you don’t then go back and unattenuate it.

    But bearing that in mind, the Ricardian Fallacy then interacts with another thing that economists do;
    --⭐️ they collect data. 

    Data gathering is almost never a neutral activity;
    it takes place within a theoretical framework. 

    And what this means is that if the system for gathering, classifying and tabulating the data was designed by people who had a particular model,
    👉then the data will most likely support that model. 

    Everything which is part of the model will be well-verified, data-driven, empirically based and so on. 

    Everything which isn’t part of the model will be handwavey, subjective, “hard to quantify” and other synonyms for “probably special pleading and made up”. 

    In the book, I have a subsection called “How Ricardians Win Arguments”
    and this is how
    – they collect the data.

    ⬆️ Returning from the digression,
    the important thing to understand is that the financial accounts are a model of the business. 

    They incorporate a lot of assumptions, of which perhaps the least analysed but most important one is “the financial year is a meaningful time period for this process”. 

    Some things appear in the accounts, and they are the things which can be backed up with numbers. 

    Other things don’t, and therefore they can’t. 
    (Or best case, they can only be backed up with ad hoc, unaudited numbers which everyone will be suspicious of).

    💥I think that’s one of the deep causes of what went wrong in Boeing;
    ♦️The McDonnell-Douglas executives were the ones who could back up their business cases with a ream of #hard #data.
    ♦️The legacy Boeing executives were left talking about #culture and #best #practice and all sorts of soft-sounding things that were hard to put into a model. 

    ⚠️The Ricardians won the argument, and 🔥the disastrous decisions turned out to have been made without anyone realising they were making them -- when they decided to use the financial reporting system as a tool of management.

    (0/3)
    backofmind.substack.com/p/how-

    ** Odd Lots Podcast
    podcasts.apple.com/us/podcast/

  25. #Irrational escalation of #commitment is reflected in such proverbial images as "throwing good money after bad", or "In for a penny, in for a pound", or "It's never the wrong time to make the right decision", or "If you find yourself in a hole, stop digging". en.wikipedia.org/wiki/Escalati

    #SunkCost #fallacy #Arianespace #ESA

  26. Guardians of the Fallacy.

    By Nick Anderson.

    (With image captions. Follow @alt_text or @PleaseCaption to be reminded when you forget.)

    #Fallacy #GOP #TheRight

    According to research, many men seem to think that the exploitation of animals adds to their manliness. While it is an absurd and restrictive belief, we should take it seriously. plantbasednews.org/lifestyle/f #MaleFragility #ToxicMasculinity #fallacy #meatConsumption

  28. Is there a name for the following fallacy?

    You don’t keep going with something because you have already sunk so many costs. You keep going because you underestimate the remaining costs and think it’s a good deal. In reality you just keep bashing your head against the wall.

    #followerpower #fallacy #fallacies #sunkcost

  29. CW: Against neoliberalism

    There is only so far that wealth #inequality can expand before it becomes clear that there is a greater risk of #tyranny from the wealthy than there ever was from the #elected #government. I can fire my government every couple of years, but #billionaires are harder to get rid of. At least early modern aristocracy had expectations about their moral conduct. #Neoliberalism is cancer and a #fallacy. It is #plutocratic tyranny.