home.social

#empiricism — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #empiricism, aggregated by home.social.

  1. I saw this on Mastodon and almost had a stroke.

    @davidgerard wrote:

    “Most of the AI coding claims are conveniently nondisprovable. What studies there are show it not helping coding at all, or making it worse

    But SO MANY LOUD ANECDOTES! Trust me my friend, I am the most efficient coder in the land now. No, you can’t see it. No, I didn’t measure. But if you don’t believe me, you are clearly a fool.

    These guys had one good experience with the bot, they got one-shotted, and now if you say “perhaps the bot is not all that” they act like you’re trying to take their cocaine away.”

    First, the term is falsifiable, and proving propositions about algorithms (i.e., code) is part of what I do for a living. Mathematically, human-written code and AI-written code can both be tested, which means you can falsify propositions about them. You would test them the same way.

    There is no intrinsic mathematical distinction between code written by a person and code produced by an AI system. In both cases, the result is a formal program made of logic and structure. In principle, the same testing techniques can be applied to each. If claims about AI-written code were really nondisprovable, you could not test whether code was generated by a human or by AI. But you can test it. Studies have found that AI-generated code tends to exhibit a higher frequency of certain types of defects, so reviewers and testers know what logic flaws and security weaknesses to look for. That would not be the case if the claims were nondisprovable.

    You can study this with datasets where the source of the code is known, for example open-source pull requests identified as AI-assisted versus those written without such tools. You then evaluate both groups using the same industry-standard analysis tools: static analyzers, complexity metrics, security scanners, and defect classification systems. These tools flag bugs, vulnerabilities, performance issues, and maintainability concerns, and they do so consistently across samples.
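
    As a minimal sketch of that workflow (the directory names and the choice of flake8 as the analyzer are assumptions for illustration), both groups of samples go through identical tooling and get compared on a per-file issue rate:

        import subprocess
        from pathlib import Path

        def issue_count(sample_dir: str) -> int:
            # flake8 prints one line per flagged issue; counting lines gives
            # a crude but uniform tally per group.
            result = subprocess.run(["flake8", sample_dir],
                                    capture_output=True, text=True)
            return len(result.stdout.splitlines())

        def file_count(sample_dir: str) -> int:
            return sum(1 for _ in Path(sample_dir).rglob("*.py"))

        # Hypothetical directories of code sorted by known provenance.
        for label, path in [("human", "samples/human"), ("ai", "samples/ai")]:
            files, issues = file_count(path), issue_count(path)
            rate = issues / files if files else 0.0
            print(f"{label}: {issues} issues over {files} files ({rate:.2f} per file)")

    A real study would add severity grading and security scanners on top, but the measurement logic is the same: one instrument, two labeled samples, compared rates.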

    A widely cited analysis of 470 real pull requests reported that AI-generated contributions contained roughly 1.7 times as many issues on average as human-written ones. The difference included a higher number of critical and major defects, as well as more logic and security-related problems. Because these findings rely on standard measurement tools — counting defects, grading severity, and comparing issue rates — the results are grounded in observable data. Again, that is the point: it is testable and therefore disprovable.

    This is a good paper that goes into it; from the abstract:

    In this paper, we present a large-scale comparison of code authored by human developers and three state-of-the-art LLMs, i.e., ChatGPT, DeepSeek-Coder, and Qwen-Coder, on multiple dimensions of software quality: code defects, security vulnerabilities, and structural complexity. Our evaluation spans over 500k code samples in two widely used languages, Python and Java, classifying defects via Orthogonal Defect Classification and security vulnerabilities using the Common Weakness Enumeration. We find that AI-generated code is generally simpler and more repetitive, yet more prone to unused constructs and hardcoded debugging, while human-written code exhibits greater structural complexity and a higher concentration of maintainability issues. Notably, AI-generated code also contains more high-risk security vulnerabilities. These findings highlight the distinct defect profiles of AI- and human-authored code and underscore the need for specialized quality assurance practices in AI-assisted programming.

    https://arxiv.org/abs/2508.21634

    The big problem in discussions about AI in programming is either-or thinking, when it’s not a choice between using it everywhere and banning it entirely. Tools like AI have specific strengths and weaknesses. Saying ‘never’ or ‘always’ oversimplifies the issue and turns the narrative into propaganda that creates moral panic or shills AI. It’s a bit like saying you shouldn’t use a hammer just because it’s not good for brushing your teeth.

    AI tends to produce code that’s simple, often a bit repetitive, and very verbose. It’s usually pretty easy to read and tweak, which helps with long-term maintenance. But AI doesn’t reason about code the way an experienced developer does. It makes mistakes that a human wouldn’t, potentially introducing security flaws. That doesn’t mean we shouldn’t use it where it works well, which is not everywhere.

    AI works well for certain tasks, especially when the scope is narrow and the risk is low. Examples include generating boilerplate code, internal utilities, or prototypes. In these cases, the tradeoff is manageable. However, it’s not suitable for critical code like kernels, operating systems, compilers, or cryptographic libraries, where a small mistake in memory safety or privilege separation can lead to major failures. Errors in synchronization, pointer management, or access control can be just as damaging.

    Other areas where AI should not be used include memory allocation, scheduling, process isolation, and device drivers. A lot of that code depends on implicit assumptions in the system’s architecture, and generative models don’t grasp these nuances. Instead of carefully considering the design, AI tends to replicate code patterns that seem statistically likely, without understanding the purpose behind them.

    Yes, I’m aware that Microsoft is using AI to write code everywhere I said it should not be used. That is the problem. However, political pundits, lobbyists, and anti-tech talking heads are discussing something they have no understanding of and aren’t specifying what the problem actually is. This means they can’t possibly lead grassroots initiatives into actual laws that specify where AI should not be used, which is why we have this weird astroturfing bullshit.

    They’re taking advantage of the reaction to Microsoft using AI-generated code where it shouldn’t be used to argue that AI shouldn’t be used anywhere at all in any generative context. AI is useful for tasks like writing documentation, generating tests, suggesting code improvements, or brainstorming alternative approaches. These ideas should then be thoroughly vetted by human developers.

    Something I’ve started to notice about a lot of the content on social media platforms is that most of the posts people are liking, sharing, and memetically mutating—and then spreading virally—usually don’t include any citations, sources, or receipts. It’s often just some out-of-context screenshot with no reference link or actual sources.

    A lot of the anti-AI content is not genuine critique. It’s often misinformation, but people who hate AI don’t question it or ask for sources because it aligns with their biases. The propaganda on social media has gotten so bad that anything other than heavily curated and vetted feeds is pretty much useless, and it’s filled with all sorts of memetic contagions with nasty hooks that are optimized for you algorithmically. I am at the point where I will disregard anything that is not followed up with a source. Period. It is all optimized to persuade, coerce, or piss you off. I am only writing about this because I’m actually able to contribute genuine information about the topic.

    That they said symbolic propositions written by AI agents (i.e., code) are non-disprovable because they were written by AI boggles my mind. It’s like saying that an article written in English by AI is not English because AI generated it. It might be a bad piece of text, but it’s syntactically, semantically, and grammatically English.

    Basically, any string of data can be represented in a base-2 system, where it can be interpreted as bits (0s and 1s). Those bits can be used as the basis for symbolic reasoning. In formal propositional logic, a proposition is a sequence of symbols constructed according to strict syntax rules (atomic variables plus logical connectives). Under a given semantics, it is assigned exactly one truth value (true or false) in a two-valued logic system.
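
    As a tiny, self-contained illustration of that last point (the proposition itself is a made-up example), a well-formed formula evaluates to exactly one truth value under each assignment to its atoms:

        from itertools import product

        def implies(p: bool, q: bool) -> bool:
            # Material implication: p -> q is false only when p is true and q is false.
            return (not p) or q

        # Example proposition built from atoms A, B and logical connectives:
        # (A and B) -> A, which happens to be a tautology.
        def proposition(A: bool, B: bool) -> bool:
            return implies(A and B, A)

        for A, B in product([True, False], repeat=2):
            print(f"A={A!s:<5} B={B!s:<5} value={proposition(A, B)}")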

    By implying it is nondisprovable, they are essentially saying that code written by AI is not binary, isn’t symbolically logical at all, and cannot be evaluated as true or false. At the lowest level, compiled code consists of binary machine instructions that a processor executes. At higher levels, source code is written in symbolic syntax that humans and tools use to express logic and structure. You can also translate parts of code into formal logic expressions: conditions and assertions in a program can be modeled as Boolean formulas, and tools like SAT/SMT solvers or symbolic execution engines check those formulas for satisfiability or correctness. It blows my mind how confidently people talk about things they do not understand.
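
    For instance, here is a minimal sketch using the Z3 SMT solver (pip install z3-solver); the guard and the claim are invented stand-ins for a condition and an assertion lifted from a program, and the solver actively tries to disprove the claim by searching for a counterexample:

        from z3 import And, Int, Not, Solver, sat

        x = Int("x")

        # Invented claim under test: "if 0 <= x < 10, then 2*x < 20".
        guard = And(x >= 0, x < 10)
        claim = 2 * x < 20

        solver = Solver()
        solver.add(guard, Not(claim))  # satisfiable iff a counterexample exists

        if solver.check() == sat:
            print("Disproved; counterexample:", solver.model())
        else:
            print("No counterexample: the claim holds for every guarded input.")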

    Furthermore, that they don’t realize their own projection is wild to me.

    @davidgerard wrote:

    “But SO MANY LOUD ANECDOTES! Trust me my friend, I am the most efficient coder in the land now. No, you can’t see it. No, I didn’t measure. But if you don’t believe me, you are clearly a fool.”

    They are presenting a story—i.e., saying that the claims are not disprovable—and accusing computer scientists of using anecdotal evidence without actually providing evidence to support this, while expecting people to take it prima facie. They are doing exactly what they accuse others of doing.

    It comes down to this: they feel that people ought not to use AI, so they are tacitly committed to a future in which people do not use AI. For example, a major argument against AI is the damage it is doing to resources, which is driving up the prices of computer components, as well as the ecological harm it causes. They feel justified in lying and misinforming others if it achieves the outcome they want—people not using AI because it is bad for the environment. That is a very strong point, but most people don’t care about that, which is why they lie about things people would care about.

    It’s corrupt. And what’s really scary is that people don’t recognize when they are part of corruption or a corrupt conspiracy to misinform. Well, they recognize it when they see the other side doing it, that is. No one is more dangerous than people who feel righteous in what they are doing.

    It’s wild to me that the idea that if you cannot persuade someone, it is okay to bully, coerce, or harass them, or to spread misinformation to get what you want—because your side is right—has become so normalized on the Internet that people can’t see why it is problematic.

    That people think it is okay to hurt others to get them to agree is the most disturbing part of all of this. People have become so hateful. That is a large part of why I don’t interact with people on social media, consume much from it, or respond on it, and why I am writing a blog post about it instead of engaging with the person who prompted it.

  2. Who Gets to Speak On Discord, Who Gets Banned, and Why That’s Always Political in Spaces with No Politics Rules

    So, a thing I find very interesting about the fragility of self-esteem among chronic Discord users is that it’s common for admins and moderators to ban or make fun of people who leave. Essentially, they’re responding to being rejected or not chosen, so they think it’s reasonable to retaliate.

    A Discord server I am lurking in has a “no politics” rule and is a religious, esoteric, and philosophical server. What I find very funny about this is the classic definition of politics:

    “Politics is who gets what, when, and how.”

    — Harold D. Lasswell, Politics: Who Gets What, When, How (1936)

    I find it very funny that the most minimal form of being “not political” in a virtual community is a Temporary Autonomous Zone (TAZ). I was part of an IRC chaos magick channel when I was a teenager, and I submitted to a zine under my old handle (which is not Rayn) when I was 20. No, I’m not going to reveal the name I wrote under, which was published in chaos magick zines back in the day, because I’ve had a bucket of crazies following me around since 2008, with the insane network of anarchists circa 2020 being the latest instance.

    ChanServ was a bot used on IRC (Internet Relay Chat) networks to manage channel operations such as bans, voicing, and permissions. Think of it as an early, early moderation bot. In an IRC TAZ, everyone who entered got all the permissions from ChanServ, so anyone could ban, voice, unban, deop, or op anyone else. No one had more power than anyone else, so there was minimal negotiation over channel resources. A TAZ is still an inherently political construct; however, it is a minimal political construct because there is minimal negotiation of resources and an equal, random, and chaotic authority structure. That’s not Discord, though.
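
    To make the structural contrast concrete, here is a toy model (purely illustrative; it is not real ChanServ or Discord behavior): in the TAZ channel every entrant receives the full permission set, while in the role-based server permissions flow from an assigned role:

        FULL = {"op", "deop", "voice", "ban", "unban"}

        class TAZChannel:
            """Flat authority: whoever joins gets every permission."""
            def __init__(self):
                self.perms = {}

            def join(self, user: str):
                self.perms[user] = set(FULL)

        class RoleServer:
            """Hierarchical authority: permissions come from the assigned role."""
            ROLES = {"admin": set(FULL), "member": {"voice"}}

            def __init__(self):
                self.perms = {}

            def join(self, user: str, role: str = "member"):
                self.perms[user] = set(self.ROLES[role])

        taz, server = TAZChannel(), RoleServer()
        taz.join("alice")
        server.join("alice")
        print(sorted(taz.perms["alice"]))     # full set: symmetric, chaotic power
        print(sorted(server.perms["alice"]))  # ['voice']: tiered, negotiated power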

    Discord inherently has a hierarchical system defined by roles and a TOS, and members are expected to abide by the rules of their server. So, when you say there is a no-politics rule on Discord, you are inherently contradicting yourself, because Discord is structurally political in how you, as a moderator, interact with others. How people negotiate conversations and interact with each other to access the resources of your Discord server is inherently political.

    Discord’s structure makes any “no-politics” rule itself a political act. Moderators exercise power by granting, restricting, or revoking permissions, and that distribution of power is the very politics the rule tries to avoid. So while the intention is to keep discussions “apolitical,” it creates local Discord politics by determining who gets to speak and who gets silenced (e.g., banned, timed out, kicked, or limited to certain channels). A “no politics” rule shifts political dynamics into moderation decisions rather than eliminating them.

    What prompted this was me observing a typical pragmatic versus moral realism argument that you’d see in any philosophy course or forum. I’m an academic and a computational scientist, but I don’t try to shut down any arguments with that, because that’s an explicit fallacy and a dishonest, bad-faith tactic.

    Technically, I am a biologist. Yes, I have a biology degree and a biotech degree. I also have philosophy, mathematics, and computer science and engineering degrees under my belt. I have to work with people like this on a daily basis, and I find them insufferable, so the last thing I want to do in my free time after looking at stacks of dumbass papers is argue with people on Reddit or Discord when I could be fucking, getting fucked, or spending time with my husband. But, alas, they have no life. Keep in mind, as a computational biologist who reviews a lot of shit, I get paid to argue. These idiots are arguing on the Internet for free! The reason Redditors, Reddit moderators, and Discord moderators get shat on so much is that all of their labor is unpaid! People with lives don’t take it that seriously!

    On to the convo:

    A new person in the community defined morals as: morals = {a, b, c}, exhaustively. An established member of that community responded that, for them, morals are either {x, y, z…}, non-exhaustive and polymorphic, or not inherently defined by the tradition itself but supplied externally by the individual. The new person replied, effectively, “According to my definition of a, b, c, that still constitutes a moral framework.” An established member who is also a scientist pushed back as if no definition of morals had been proposed at all, when in actuality they were disagreeing with the scope and applicability of the given definition, not with the act of defining itself.

    By the way, the symbolic way I’m defining this is ambiguous. You have no clue what anything is; however, it is ontologically defined, and the logic makes sense. That is the problem. An ontological definition was given, so arguing that no definition was proposed—simply because they disagreed with it—is in bad faith. Personally, I am a constructivist, poststructuralist, pragmatist, instrumentalist, and anti-realist, so I don’t care too much about the realism of the ontological propositions and expressions. I am pointing out logical mistakes.

    This is especially egregious when individuals rely on their authority in a domain where their degree is not pertinent. A well-known issue with scientists is that their curiosity can outstrip their morality. Essentially, an ethics board composed mostly of scientists without degrees in ethics, law, or philosophy will make poor decisions and saturate the political sphere they occupy with advocates and lobbyists to bend laws to their interests. Therefore, a board with no philosophers is pretty sinister.

    Morals and ethics are philosophical problems. To my knowledge, many people who sit on ethics boards that seriously address ethical issues have philosophy, and not just astronomy, degrees. Relevant degrees include psychology, sociology, theology, philosophy, etc. For example, I have a philosophy degree, so I am technically qualified and credentialed by a university to have these discussions. An astronomy degree alone does not make someone qualified to discuss ethics—maybe if they also had a theology degree?

    The thing I find really funny about this group is that they avoid dilemmas. Morals and ethics are developed through ethical dilemmas. Their response to any type of dilemma is to exert their local authority and exclude, deny, or shut down conversations.

    The difference between science and philosophy is that science is a little less messy and more defined. We can all see something and agree on what we see, right? The difference with philosophical questions and moral dilemmas is that they are relatively open-ended and ambiguous. It’s really amusing to me how those who try to argue philosophy are uncomfortable with indefinite answers that are open to interpretation.

    It’s just funny how they tacitly assume that they are the only academics in their field in existence and that their opinion on things is the consensus, especially on metaphysical issues where there is no consensus. No human knows what the right thing to do is all the time. It’s great to know that they have somehow achieved a level of inhuman perfection.

  7. Who Gets to Speak On Discord, Who Gets Banned, and Why That’s Always Political in Spaces with No Politics Rules

    So, a thing I find very interesting about the fragility of the esteem among chronic Discord users is that it’s common for admins and moderators to ban or make fun of people who leave. Essentially, they’re responding to being rejected or not chosen, so they think it’s reasonable to retaliate

    A Discord server I am lurking in has a “no politics” rule and is a religious, esoteric, and philosophical server. What I find very funny about this is that politics is:

    “Politics is who gets what, when, and how.”

    — Harold D. Lasswell, Politics: Who Gets What, When, How (1936)

    I find it very funny that the most minimal form of being “not political” in a virtual community is a Temporary Autonomous Zone (TAZ). I was part of an IRC chaos magick channel when I was a teenager, and I submitted to a zine under my old handle (which is not Rayn) when I was 20. No, I’m not going to reveal the name I wrote under, which was published in chaos magick zines back in the day, because I’ve had a bucket of crazies following me around since 2008, with the insane network of anarchists circa 2020 being the latest instance.

    ChanServ was a bot used on IRC (Internet Relay Chat) networks to manage channel operations such as bans, voicing, and permissions. Think of it as an early, early moderation bot. In an IRC TAZ, everyone who entered got the full permission set from ChanServ, so anyone could ban, unban, voice, op, or deop anyone else. No one had more power than anyone else, so there was minimal negotiation over channel resources. A TAZ is still an inherently political construct; however, it is a minimal one, because there is minimal negotiation of resources and an equal, random, and chaotic authority structure. That’s not Discord, though.
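
    For concreteness, on services packages in the Atheme family the whole TAZ setup comes down to a single access entry granting every hostmask every flag (the channel name here is hypothetical, and exact flag syntax varies by services implementation):

        /msg ChanServ FLAGS #taz *!*@* +*

    One entry, and every visitor holds the same total authority as the founder.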

    Discord, by contrast, is hierarchical by design: a server is defined by roles and a TOS, and members are expected to abide by its rules. So when you declare a no-politics rule on Discord, you contradict yourself, because Discord is structurally political in how you, as a moderator, interact with others. How people negotiate conversations and interact with each other to access the resources of your server is inherently political.

    Discord’s structure makes any “no-politics” rule itself a political act. Moderators exercise power by granting, restricting, or revoking permissions, and that distribution of power is the very politics the rule tries to avoid. So while the intention is to keep discussions “apolitical,” it creates local Discord politics by determining who gets to speak and who gets silenced (e.g., banned, timed out, kicked, or limited to certain channels). A “no politics” rule shifts political dynamics into moderation decisions rather than eliminating them.

    What prompted this was me observing a typical pragmatism versus moral realism argument of the kind you’d see in any philosophy course or forum. I’m an academic and a computational scientist, but I don’t try to shut down arguments with that, because doing so is a bald appeal to authority and a dishonest, bad-faith tactic.

    Technically, I am a biologist. Yes, I have a biology degree and a biotech degree. I also have philosophy, mathematics, and computer science and engineering degrees under my belt. I have to work with people like this on a daily basis, and I find them insufferable, so the last thing I want to do in my free time after looking at stacks of dumbass papers is argue with people on Reddit or Discord when I could be fucking, getting fucked, or spending time with my husband. But, alas, they have no life. Keep in mind, as a computational biologist who reviews a lot of shit, I get paid to argue. These idiots are arguing on the Internet for free! The reason Redditors, Reddit moderators, and Discord moderators get shat on so much is that all of their labor is unpaid! People with lives don’t take it that seriously!

    On to the convo:

    A new person in the community defined morals as: morals = {a, b, c} exhaustively. An established member of that community responded that, for them, morals are either {x, y, z…}, non-exhaustive and polymorphic, or not inherently defined by the tradition itself but supplied externally by the individual. The new person replied, effectively, “According to my definition of a, b, c, that still constitutes a moral framework.” An established member who is also a scientist pushed back as if no definition of morals had been proposed at all, when in actuality they were disagreeing with the scope and applicability of the given definition, not the act of defining itself.

    By the way, the symbolic way I’ve rendered this is deliberately ambiguous. You have no clue what any of the terms actually are; nevertheless, each position is ontologically defined, and the logic holds. That is exactly the point. An ontological definition was given, so arguing that no definition was proposed, simply because they disagreed with it, is in bad faith. Personally, I am a constructivist, poststructuralist, pragmatist, instrumentalist, and anti-realist, so I don’t care much about the realism of the ontological propositions and expressions. I am pointing out logical mistakes.
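
    One way to formalize the shape of the exchange, with the symbols still as placeholders (LaTeX notation; this sketch is mine, not a quote from the conversation):

        M_{\text{new}} = \{a, b, c\} \quad \text{(closed: the list is exhaustive)}

        M_{\text{est}} \supseteq \{x, y, z, \dots\} \quad \text{(open: non-exhaustive, polymorphic)} \quad \text{or supplied externally by the individual}

    Both right-hand sides are well-formed definientia; the later objection conflates disagreement over a definition’s scope with the absence of any definition at all.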

    This is especially egregious when individuals lean on their authority in a domain where their degree is not pertinent. A well-known problem with scientists is that their curiosity can outstrip their morality. An ethics board composed mostly of scientists without degrees in ethics, law, or philosophy will make poor decisions, and it will saturate the political sphere it occupies with advocates and lobbyists who bend laws to its interests. A board with no philosophers on it is pretty sinister.

    Morals and ethics are philosophical problems. To my knowledge, most people who sit on ethics boards that seriously address ethical issues hold degrees in relevant fields: psychology, sociology, theology, philosophy, and so on. For example, I have a philosophy degree, so I am technically qualified and credentialed by a university to have these discussions. An astronomy degree alone does not make someone qualified to discuss ethics. Maybe if they also had a theology degree?

    The thing I find really funny about this group is that they avoid dilemmas. Morals and ethics are developed through ethical dilemmas. Their response to any type of dilemma is to exert their local authority and exclude, deny, or shut down conversations.

    The difference between science and philosophy is that science is a little less messy and more defined. We can all see something and agree on what we see, right? Philosophical questions and moral dilemmas, by contrast, are relatively open-ended and ambiguous. It’s really amusing to me how people who want to argue philosophy are uncomfortable with indefinite answers that are open to interpretation.

    It’s just funny how they tacitly assume that they are the only academics in their field in existence and that their opinion on things is the consensus, especially on metaphysical issues where there is no consensus. No human knows what the right thing to do is all the time. It’s great to know that they have somehow achieved a level of inhuman perfection.

  8. Scrum Isn’t a Belief System—It’s a Learning System - Scrum.org Blog (Steven Deneir)

    That’s it.
    Empiricism means:
    You’ve seen something happen or you’ve done something, and
    You’ve learned from it.

    scrum.org/resources/blog/scrum

    #scrum #agile #scrumMaster #Empiricism #scrumdotorg


    I did, against all odds, manage to get some work done today. We have confirmed you can't put someone with braces in an MRI scanner, no matter how much they claim it's only a little metal, but bridges, retainers, and everything else we've got in our dataset are surprisingly OK. #empiricism
    😂

  10. How many seminal ideas do you know that gave birth to Agile Software Development?

    How many consequential contributions do you know that advanced Agile effectiveness, understanding, and everyday practice?

    Test your knowledge and compare your list here => bit.ly/AgileTimeMachine

    You'll also get a sense of its original essence and ethos.

    #Agile #History #WaysOfWorking #Agility #Mastery
    #Empiricism #Collaboration #SharedResponsibility

    With this Sadler's Lectures podcast episode, we finish up the series on David Hume's Enquiry Concerning Human Understanding. There are now podcast episodes covering every key idea, distinction, & argument in that text.

    soundcloud.com/gregorybsadler/
    #Podcast #Hume #Philosophy #Enquiry #Empiricism #Skepticism

    Looking for resources on David Hume, particularly on the Enquiry Concerning Human Understanding? Here's my playlist with 37 videos in it so far: talks, course lectures, and shorter core concept videos.

    youtube.com/playlist?list=PL4g
    #Hume #Philosophy #Videos #Resources #Empiricism #Skepticism

  13. Here is the last of the core concept videos on David Hume's Enquiry Concerning Human Understanding. I have now published videos covering the entirety of that work, examining each key idea, argument, explanation or distinction!

    youtu.be/uFgq-SE2oFM
    #Video #Hume #Philosophy #Empiricism #Skepticism #MoralPhilosophy #HumanNature

  14. Back when I produced core concept videos for my students on David Hume's Enquiry Concerning Human Understanding, I skipped over section 1. I've started shooting videos filling in that gap. Here's the first of those!

    youtu.be/CIYHlSCv8U8
    #Video #Hume #Philosophy #Empiricism #Skepticism #Moral #Rigor

  15. Theme One Program • Motivation 1
    inquiryintoinquiry.com/2024/06

    The main idea behind the Theme One program is the efficient use of graph‑theoretic data structures for the tasks of “learning” and “reasoning”.

    I am thinking of “learning” in the sense of learning about an environment, in essence, gaining information about the nature of an environment and being able to apply the information acquired to a specific purpose.

    Under the heading of “reasoning” I am simply lumping together all the ordinary sorts of practical activities which would probably occur to most people under that name.

    There is a natural relation between the tasks. Learning the character of an environment leads to the recognition of laws which govern the environment and making full use of that recognition requires the ability to reason logically about those laws in abstract terms.

    Resources —

    Theme One Program • Overview
    oeis.org/wiki/Theme_One_Progra

    Theme One Program • Exposition
    oeis.org/wiki/Theme_One_Progra

    Theme One Program • User Guide
    academia.edu/5211369/Theme_One

    Survey of Theme One Program
    inquiryintoinquiry.com/2024/02

    #ThemeOneProgram #Learning #Reasoning
    #Logic #LogicalGraphs #FormalLanguages
    #Algorithm #DataStructure #GraphTheory
    #Peirce #PragmaticSemioticInformation
    #Empiricism #Rationalism #Pragmatism

  16. This week on Brains we have a symposium about Experience, Phenomenology, and Quantum Mechanics featuring 4 scholars!

    Philipp Berghofer kicks things off and we get engaging commentaries by Mahdi Khalili, Andrea Reichenberger, and Harald Wiltsche before Berghofer replies.

    Here's the introduction and link to all five posts: philosophyofbrains.com/2024/02

    #phenomenology #QuantumMechanics #physics #empiricism #philosophyOfMind

  17. In the Way of Inquiry • Reconciling Accounts
    inquiryintoinquiry.com/2023/01

    The Reader may share with the Author a feeling of discontent at this point, attempting to reconcile the formal intentions of this inquiry with the cardinal contentions of experience. Let me try to express the difficulty in the form of a question:

    What is the bond between form and content in experience, between the abstract formal categories and the concrete material contents residing in experience?

    Once toward the end of my undergrad years a professor asked me how I'd personally define mathematics and I told him I saw it as “the form of experience and the experience of form”. This is not the place to argue for the virtues of that formulation but it does afford me one of the handles I have on the bond between form and content in experience.

    I have no more than a tentative way of approaching the question. I take there to be a primitive category of “form‑in‑experience” — I don’t have a handy name for it yet but it looks to have a flexible nature which from the standpoint of a given agent easily passes from the “structure of experience” to the “experience of structure”.

    Overview
    oeis.org/wiki/Inquiry_Driven_S

    Obstacles
    oeis.org/wiki/Inquiry_Driven_S

    #Peirce #Inquiry #InquiryIntoInquiry #InquiryDrivenSystems
    #Semiotics #SignRelations #Semiositis #ObstaclesToInquiry
    #Logic #Abduction #Deduction #Induction #ScientificMethod
    #Experience #Expectation #EffectiveDescription #FiniteMeans
    #Abstraction #Analogy #Form #Matter #Empiricism #Rationalism
    #Concretion #Information #Comprehension #Extension #Intension

  18. In the Way of Inquiry • Material Exigency 2
    inquiryintoinquiry.com/2023/01

    A turn of events so persistent must have a cause, a force of reason to explain the dynamics of its recurring moment in the history of ideas. The nub of it is not worn on the sleeve of its first and last stages, where the initial explosion and the final collapse march along their stubborn course in lockstep fashion, but is embodied more naturally in the middle of the above narrative.

    Experience exposes and explodes expectations. How can experiences impact expectations unless the two types of entities are both reflected in one medium, for instance and perhaps without loss of generality, in the form of representation constituting the domain of signs?

    However complex its world may be, internal or external to itself or on the boundaries of its being, a finite creature's description of it rests in a finite number of finite terms or a finite sketch of finite lines. Finite terms and lines are signs. What they indicate need not be finite, but what they are must be.

    Fragments —

    The common sensorium.

    The common sense and the senses of “common”.

    This is the point where the empirical and the rational meet.

    I describe as “empirical” any method which exposes theoretical descriptions of an object to further experience with that object.

    Overview
    oeis.org/wiki/Inquiry_Driven_S

    Obstacles
    oeis.org/wiki/Inquiry_Driven_S

    #Peirce #Inquiry #InquiryIntoInquiry #InquiryDrivenSystems
    #Semiotics #SignRelations #Semiositis #ObstaclesToInquiry
    #Logic #Abduction #Deduction #Induction #ScientificMethod
    #Experience #Expectation #EffectiveDescription #FiniteMeans
    #Abstraction #Analogy #Form #Matter #Empiricism #Rationalism

  19. In the Way of Inquiry • Material Exigency 1
    inquiryintoinquiry.com/2023/01

    Our survey of obstacles to inquiry has dealt at length with blocks arising from its formal aspects. On the other hand, I have cast this project as an empirical inquiry, proposing to represent experimental hypotheses in the form of computer programs. At the heart of that empirical attitude is a feeling all formal theories should arise from and bear on experience.

    Every season of growth in empirical knowledge begins with a rush to the sources of experience. Every fresh‑thinking reed of intellect is raised to pipe up and chime in with the still‑viable canons of inquiry in one glorious paean to the personal encounter with natural experience.

    But real progress in the community of inquiry depends on observers being able to orient themselves to objects of common experience — the uncontrolled exaltation of individual phenomenologies leads as a rule to the disappointment and disillusionment which befalls the lot of unshared enthusiasms and fragmented impressions.

    Look again at the end of the season and see it faltering to a close, with every novice scribe rapped on the knuckles for departing from that uninspired identification with impersonal authority which expresses itself in third‑person passive accounts of one's own experience.

    Overview
    oeis.org/wiki/Inquiry_Driven_S

    Obstacles
    oeis.org/wiki/Inquiry_Driven_S

    #Peirce #Inquiry #InquiryIntoInquiry #InquiryDrivenSystems
    #Semiotics #SignRelations #Semiositis #ObstaclesToInquiry
    #Logic #Abduction #Deduction #Induction #ScientificMethod
    #Abstraction #Analogy #Form #Matter #Empiricism #Rationalism

  20. "If you study philosophy at a British or American #university, your #education in the history of the subject will likely be modest. Most universities teach #Plato and #Aristotle, skip about two millennia to #Descartes, zip through the highlights of #Empiricism and #Rationalism to #Kant, and then drop things again until the 20th Century, where #Frege and #Russell arise from the mists of the previous centuries’ Idealism ...”

    @philosophy
    #philosophy
    prospectmagazine.co.uk/ideas/p

  21. Theme One Program • Exposition 1.2
    inquiryintoinquiry.com/2022/06

    The Idea↑Form Flag

    The graph-theoretic data structures used by the program are built up from a basic data structure called an “idea-form flag”. That structure is defined as a pair of Pascal data types by means of the following specifications.

    Figure 1. Type Idea = ^Form
    inquiryintoinquiry.files.wordp

    Figure 2. Code Box

    type
      numb = 0..maxint;                { not shown in the original figure; reconstructed from the note below }
      idea = ^form;                    { an “idea” is a pointer to a “form” }
      form = record
               sign : char;            { the letter labeling a node }
               as, up, on, by : idea;  { four links to other forms }
               code : numb             { the number labeling a node }
             end;

    • An “idea” is a pointer to a “form”.
    • A “form” is a record consisting of:
      • A “sign” of type “char”;
      • Four pointers, “as”, “up”, “on”, “by”, of type “idea”;
      • A “code” of type “numb”, that is, an integer in [0, max integer].

    Represented in terms of “digraphs”, or directed graphs, the combination of an idea pointer and a form record is most easily pictured as an “arc”, or directed edge, leading to a node labeled with the other data, in this case, a letter and a number.
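
    To make the arc-and-node picture concrete, here is a small Python transliteration of the idea-form flag (illustrative only, not the program's actual Pascal; Python reserves “as”, so that field is spelled “as_”):

    from __future__ import annotations
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Form:
        sign: str                     # the letter labeling the node
        code: int                     # the number labeling the node
        as_: Optional[Form] = None    # the four links to other forms
        up: Optional[Form] = None
        on: Optional[Form] = None
        by: Optional[Form] = None

    # An "idea" is just a reference to a Form: the reference is the arc,
    # the Form is the node carrying a letter and a number.
    root = Form(sign="a", code=1)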

    #ThemeOneProgram #Learning #Reasoning
    #Logic #LogicalGraphs #FormalLanguages
    #Algorithm #DataStructure #GraphTheory
    #Peirce #PragmaticSemioticInformation
    #Empiricism #Rationalism #Pragmatism

  22. Theme One Program • Exposition 1.1
    inquiryintoinquiry.com/2024/06

    Theme One is a program for constructing and transforming a particular species of graph‑theoretic data structures, forms designed to support a variety of fundamental learning and reasoning tasks.

    The program evolved over the course of an exploration into the integration of contrasting types of activities involved in learning and reasoning, especially the types of algorithms and data structures capable of supporting all sorts of inquiry processes, from everyday problem solving to scientific investigation. In its current state, Theme One integrates over a common data structure fundamental algorithms for one type of inductive learning and one type of deductive reasoning.

    We begin by describing the class of graph-theoretic data structures used by the program, as determined by their local and global features. It will be the usual practice to shift around and view these graphs at many different levels of detail, from their abstract definition to their concrete implementation, and many points in between.

    The main work of the Theme One program is achieved by building and transforming a single species of graph-theoretic data structures. In their abstract form these structures are closely related to the graphs called cacti and conifers in graph theory, so we’ll generally refer to them under those names.

    #ThemeOneProgram #Learning #Reasoning
    #Logic #LogicalGraphs #FormalLanguages
    #Algorithm #DataStructure #GraphTheory
    #Peirce #PragmaticSemioticInformation
    #Empiricism #Rationalism #Pragmatism

  23. Theme One Program • Motivation 6
    inquiryintoinquiry.com/2022/08

    Comments I made in reply to a correspondent’s questions about delimiters and tokenizing in the Learner module may be worth sharing here.

    In one of the projects I submitted toward a Master’s in psychology I used the Theme One program to analyze samples of data from my advisor’s funded research study on family dynamics. In one phase of the study observers viewed video-taped sessions of family members (parent and child) interacting in various modes (“play” or “work”) and coded qualitative features of each moment’s activity over a period of time.

    The following page describes the application in more detail and reflects on its implications for the conduct of scientific inquiry in general.

    Exploratory Qualitative Analysis of Sequential Observation Data
    oeis.org/wiki/User:Jon_Awbrey/

    In this application a “phrase” or “string” is a fixed-length sequence of qualitative features and a “clause” or “strand” is a sequence of such phrases delimited by what the observer judges to be a significant pause in the action.
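
    A minimal Python sketch of that clause-cutting step (the feature codes and pause marker here are hypothetical, just to fix the idea):

    def clauses(stream, pause="PAUSE"):
        """Split a stream of coded phrases into pause-delimited clauses."""
        result, current = [], []
        for phrase in stream:
            if phrase == pause:          # observer judged a significant pause
                if current:
                    result.append(current)
                current = []
            else:
                current.append(phrase)
        if current:
            result.append(current)
        return result

    # Hypothetical fixed-length feature codes: (mode, affect, engagement).
    stream = [("P", "+", "hi"), ("P", "0", "hi"), "PAUSE", ("W", "-", "lo")]
    print(clauses(stream))  # [[('P','+','hi'), ('P','0','hi')], [('W','-','lo')]]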

    In the qualitative research phases of the study one is simply attempting to discern any significant or recurring patterns in the data one possibly can.

    In this case the observers are tokenizing the observations according to a codebook that has passed enough intercoder reliability studies to afford them all a measure of confidence it captures meaningful aspects of whatever reality is passing before their eyes and ears.

    #ThemeOneProgram #Learning #Reasoning
    #Logic #LogicalGraphs #FormalLanguages
    #Algorithm #DataStructure #GraphTheory
    #Peirce #PragmaticSemioticInformation
    #Empiricism #Rationalism #Pragmatism

  24. Theme One Program • Motivation 5
    inquiryintoinquiry.com/2022/08

    Since I’m working from decades-old memories of first inklings I thought I might peruse the web for current information about Zipf’s Law. I see there is now something called the Zipf–Mandelbrot (and sometimes –Pareto) Law and that was interesting because my wife Susan Awbrey made use of Mandelbrot’s ideas about self-similarity in her dissertation and communicated with him about it. So there’s more to read up on.
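
    For reference, with r the frequency rank of a word and s, q fitted constants, the classical law and Mandelbrot's generalization are (in LaTeX notation):

        f(r) \propto \frac{1}{r^{s}} \quad \text{(Zipf)} \qquad f(r) \propto \frac{1}{(r+q)^{s}} \quad \text{(Zipf–Mandelbrot)}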

    Just off-hand, though, I think my Learner is dealing with a different problem. It has more to do with the savings in effort a learner gets by anticipating future experiences based on its record of past experiences than the savings it gets by minimizing bits of storage as far as mechanically possible. There is still a type of compression involved but it’s more like Korzybski’s “time-binding” than space-savings proper. Speaking of old memories …

    The other difference I see is that Zipf’s Law applies to an established and preferably large corpus of linguistic material, while my Learner has to start from scratch, accumulating experience over time, making the best of whatever data it has at the outset and every moment thereafter.

    #ThemeOneProgram #Learning #Reasoning
    #Logic #LogicalGraphs #FormalLanguages
    #Algorithm #DataStructure #GraphTheory
    #Peirce #PragmaticSemioticInformation
    #Empiricism #Rationalism #Pragmatism

  25. Theme One Program • Motivation 4
    inquiryintoinquiry.com/2022/08

    From Zipf’s Law and the category of “things that vary inversely with frequency” I got my first brush with the idea that keeping track of usage frequencies is part and parcel of building efficient codes.
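
    The textbook version of that connection is Huffman coding: count usage frequencies, then give frequent symbols short codes. A generic Python sketch of the frequency-to-code-length idea (not the Learner's actual algorithm):

    import heapq
    from collections import Counter

    def huffman_code_lengths(text):
        """Assign code lengths so that frequent symbols get shorter codes."""
        freq = Counter(text)
        if len(freq) == 1:                       # degenerate single-symbol case
            return {s: 1 for s in freq}
        heap = [(n, i, [s]) for i, (s, n) in enumerate(freq.items())]
        heapq.heapify(heap)
        length = dict.fromkeys(freq, 0)
        tie = len(heap)                          # tiebreaker: never compare lists
        while len(heap) > 1:
            n1, _, group1 = heapq.heappop(heap)  # two least frequent groups
            n2, _, group2 = heapq.heappop(heap)
            for s in group1 + group2:            # each merge deepens the group
                length[s] += 1
            heapq.heappush(heap, (n1 + n2, tie, group1 + group2))
            tie += 1
        return length

    print(huffman_code_lengths("aaaa bb c"))     # {'a': 1, ' ': 3, 'b': 2, 'c': 3}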

    In its first application the environment the Learner has to learn is the usage behavior of its user, as given by finite sequences of characters from a finite alphabet, which sequences of characters might as well be called “words”, together with finite sequences of those words which might as well be called “phrases” or “sentences”. In other words, Job One for the Learner is the job of constructing a “user model”.

    In that frame of mind we are not seeking anything so grand as a Universal Induction Algorithm but simply looking for any approach that gives us a leg up, complexity-wise, in Interactive Real Time.

    #ThemeOneProgram #Learning #Reasoning
    #Logic #LogicalGraphs #FormalLanguages
    #Algorithm #DataStructure #GraphTheory
    #Peirce #PragmaticSemioticInformation
    #Empiricism #Rationalism #Pragmatism

  26. Theme One Program • Motivation 3
    inquiryintoinquiry.com/2022/08

    Sometime around 1970 John B. Eulenberg came from Stanford to direct Michigan State’s Artificial Language Lab, where I would come to spend many interesting hours hanging out all through the 70s and 80s. Along with its research program the lab did a lot of work on augmentative communication technology for limited mobility users and the observations I made there prompted the first inklings of my Learner program.

    Early in that period I visited John’s course in mathematical linguistics, which featured Laws of Form among its readings, along with the more standard fare of Wall, Chomsky, Jackendoff, and the Unified Science volume by Charles Morris which credited Peirce with pioneering the pragmatic theory of signs. I learned about Zipf’s Law relating the lengths of codes to their usage frequencies and I named the earliest avatar of my Learner program XyPh, partly after Zipf and playing on the xylem and phloem of its tree data structures.

    #ThemeOneProgram #Learning #Reasoning
    #Logic #LogicalGraphs #FormalLanguages
    #Algorithm #DataStructure #GraphTheory
    #Peirce #PragmaticSemioticInformation
    #Empiricism #Rationalism #Pragmatism

  27. Theme One Program • Motivation 2.2
    inquiryintoinquiry.com/2022/08

    As I mentioned, work on those two projects proceeded in a parallel series of fits and starts through interwoven summers for a number of years, until one day it dawned on me how the Learner, one of whose aliases was Index, could be put to work helping with sundry substitution tasks the Modeler needed to carry out.

    So I began integrating the functions of the Learner and the Modeler, at first still working on the two component modules in an alternating manner, but devoting a portion of effort to amalgamating their principal data structures, bringing them into convergence with each other, and unifying them over a common basis.

    After another round of seasons and many changes of mind and programming style, I arrived at a unified graph-theoretic data structure, strung like a wire through the far‑flung pearls of my programmed wit. But the pearls I polished in alternate years maintained their shine along axes of polarization whose grains remained skew to each other. To put it more plainly, the strategies I imagined were the smartest tricks to pull for optimizing the program’s performance on the Learning task turned out, the next year, to be the dumbest moves for its performance on the Reasoning task. I gradually came to appreciate that trade-off as a discovery.

    #ThemeOneProgram #Learning #Reasoning
    #Logic #LogicalGraphs #FormalLanguages
    #Algorithm #DataStructure #GraphTheory
    #Peirce #PragmaticSemioticInformation
    #Empiricism #Rationalism #Pragmatism

  28. Theme One Program • Motivation 2.1
    inquiryintoinquiry.com/2022/08

    A side-effect of working on the Theme One program over the course of a decade was the measure of insight it gave me into the reasons why empiricists and rationalists have so much trouble understanding each other, even when those two styles of thinking inhabit the very same soul.

    The way it came about was this. The code from which the program is currently assembled initially came from two distinct programs, ones I developed in alternate years, at first only during the summers.

    In the Learner program I sought to implement a Humean empiricist style of learning algorithm for the adaptive uptake of coded sequences of occurrences in the environment, say, as codified in a formal language. I knew all the theorems from formal language theory telling how limited any such strategy must ultimately be in terms of its generative capacity, but I wanted to explore the boundaries of that capacity in concrete computational terms.

    In the Modeler program I aimed to implement a variant of Peirce’s graphical syntax for propositional logic, making use of graph-theoretic extensions I had developed over the previous decade.

    #ThemeOneProgram #Learning #Reasoning
    #Logic #LogicalGraphs #FormalLanguages
    #Algorithm #DataStructure #GraphTheory
    #Peirce #PragmaticSemioticInformation
    #Empiricism #Rationalism #Pragmatism

  29. Theme One Program • Motivation 1
    inquiryintoinquiry.com/2022/08

    The main idea behind the Theme One program is the efficient use of graph-theoretic data structures for the tasks of “learning” and “reasoning”.

    I am thinking of learning in the sense of learning about an environment, in essence, gaining information about the nature of an environment and being able to apply the information acquired to a specific purpose.

    Under the heading of reasoning I am simply lumping together all the ordinary sorts of practical activities which would probably occur to most people under that name.

    There is a natural relation between the tasks. Learning the character of an environment leads to the recognition of laws which govern the environment and making full use of that recognition requires the ability to reason logically about those laws in abstract terms.

    #ThemeOneProgram #Learning #Reasoning
    #Logic #LogicalGraphs #FormalLanguages
    #Algorithm #DataStructure #GraphTheory
    #Peirce #PragmaticSemioticInformation
    #Empiricism #Rationalism #Pragmatism

  30. An Empirical Razor for Discerning Rare, False, and Faked Phenomena

    In an HN discussion of whether the supposed oldest person ever to have lived, Jeanne Calment, was a fraud, I noted that the one modern social phenomenon most strongly associated with a reduction in the number of superannuated persons in a population is accurate demographic recordkeeping… See, e.g., Saul Justin Newman, “Supercentenarians and the oldest-old are concentrated into regions with no birth certificates and short lifespans” (2019). ...

    joindiaspora.com/posts/874e7c1

    #empiricism #philosophy #epistemology #manifestation #sensibility #perception #fraud #hallucination #reality #truth #AlbrechtDurer #rhinocerous

  31. #DAK of a rule / law of likelihood of truth/falsity given increased sensing / recording capabilities regarding phenomena?

    The idea's occurred to me. I doubt it's original. Spelled out here:

    "An Empirical Razor for Discerning Rare, False, and Faked Phenomena"

    joindiaspora.com/posts/874e7c1

    #empiricism #philosophy #epistemology #manifestation #sensibility #perception #fraud #hallucination #reality #truth