home.social

Search

1000 results for “b_causal”

  1. 👋 My name is Joseph Bulbulia (Joe). I’m a professor of psychology at Victoria University, New Zealand. Most of my teaching and research relates to inferring using . Prediction = 👎. = 🤙. I have 3x PhD scholarships for students interested in , , , , psychology, , and . Am developing -causal.org to describe my lab’s work.

  2. Can we trust self-report data?

    John Shaver, Martin Lang & colleagues went to the trouble of benchmarking self-reported religious service attendance against measured attendance in remote Fiji.

    TL;DR: there is measurement error. In the villages studied, people overstate their religious service attendance.

    John isn’t here yet, but you can follow Martin at @martinlangcz

    Link:
    journals.plos.org/plosone/arti

  3. Learn how to measure marketing impact without A/B tests using causal inference, Diff-in-Diff, synthetic control, and GeoLift. hackernoon.com/when-ab-tests-a #abtesting
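The diff-in-diff method named above can be sketched in a few lines. A minimal toy (made-up numbers, not from the linked article): compare the change in the treated market to the change in a control market, so that any common trend cancels out.

```python
# Toy difference-in-differences:
#   effect = (treated after - treated before) - (control after - control before)
sales = {
    ("treated", "before"): 100.0,
    ("treated", "after"):  130.0,   # campaign ran here
    ("control", "before"): 100.0,
    ("control", "after"):  110.0,   # common trend without the campaign
}

did = (sales[("treated", "after")] - sales[("treated", "before")]) \
    - (sales[("control", "after")] - sales[("control", "before")])
print(f"estimated campaign lift: {did:.0f}")  # 30 - 10 = 20
```

The key assumption is parallel trends: absent the campaign, the treated market would have moved like the control market.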

  4. 'Directed Cyclic Graphs for Simultaneous Discovery of Time-Lagged and Instantaneous Causality from Longitudinal Data Using Instrumental Variables', by Wei Jin, Yang Ni, Amanda B. Spence, Leah H. Rubin, Yanxun Xu.

    jmlr.org/papers/v26/23-0272.ht

    #causal #causality

  5. How to evaluate a promo campaign without an A/B test: from simple methods to complex ones

    How do you properly evaluate a campaign’s impact when an A/B test isn’t possible? We’ll walk through several options, from the very simplest to the not-so-simple but complex.

    habr.com/ru/articles/1014924/

    #Propensity #ABтест #Оценка #causal_inference #causal_impact #квазиэксперимент #propensity_score_matching

  6. How we learned to honestly measure the effect of promo codes: causal inference in X5 Digital’s online delivery

    Today I’ll describe the model we built to estimate the real effect of promo codes. The key questions: to whom, which, and why do we issue a promo code? Spoiler: the answer surprised us, and that answer became the main reason this model was worth building at all. Picture a standard promo-campaign report: “Users who applied the promo code spent 800 rubles more than average.” The business is happy, marketing reports a success. But wait: how many of them would have spent that money even without the promo code? This is not a rhetorical question. It is a fundamental problem called selection bias.

    habr.com/ru/companies/X5Tech/a

    #causal_inference #differenceindifference #propensity_score_matching #uplift_modeling #a_b_testing #counterfactual_learning #catboost #machine_learning #data_science #python
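The selection-bias trap described in the post is easy to demonstrate. A minimal simulation (invented numbers, not X5’s data or model): high-intent users grab promo codes more often, so the naive “promo users minus everyone else” comparison mixes the intent gap with the true causal effect.

```python
# Toy illustration of selection bias: the promo causally adds 100 RUB,
# but the naive comparison reports far more because high spenders self-select.
import random

random.seed(0)

TRUE_EFFECT = 100.0  # causal lift of the promo, in RUB

def naive_lift_estimate(n=100_000):
    promo_spend, other_spend = [], []
    for _ in range(n):
        intent = random.gauss(1000, 200)     # latent willingness to spend
        uses_promo = intent > 1100           # high-intent users grab promos
        spend = intent + (TRUE_EFFECT if uses_promo else 0.0)
        (promo_spend if uses_promo else other_spend).append(spend)
    return sum(promo_spend) / len(promo_spend) \
         - sum(other_spend) / len(other_spend)

naive_lift = naive_lift_estimate()
print(f"naive lift: {naive_lift:.0f} RUB vs true effect {TRUE_EFFECT:.0f} RUB")
# The naive gap comes out several times larger than the true 100 RUB effect.
```

Causal-inference tools like propensity score matching or uplift modeling (both tagged below) exist precisely to strip the selection component out of that gap.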

  7. Proxy this: how to speed up A/B tests without falling into the metrics trap

    In A/B tests you would like to watch the main metric, that North Star which shows the product’s success. In practice, though, it is almost always slow, noisy, and useless for quick decisions. Say you launch a test of a new recommendation system, wait a week, then two, and LifeTime Value does not move. Is there no effect, or is it simply too early to tell? To avoid spending months on guesswork, you can use proxy metrics: fast, sensitive indicators that react before the business metric “has time to blink”. The catch is that this solution often requires extra resources. Hi, Habr! My name is Artem Erokhin, and I am a Data Scientist at X5 Tech. I have read the current research, filtered it through my own experience, and distilled the approaches to working with proxy metrics. I will try to convey only the essence: why proxies are needed, how not to shoot yourself in the foot with them, and where usefulness ends and self-deception begins.

    habr.com/ru/companies/X5Tech/a

    #ab_тестирование #проксиметрики #эксперименты #причинноследственный_анализ #causal_inference #анализ_данных #product_analytics #surrogate_models #north_star_metric #корреляция

  8. Albert Xue #biodata22 with "DOTEARS—Causal structure learning using interventional data".

    Problem: genes regulate each other in complex networks, usually represented by DAGs. Inferring these from observational data leads to ambiguity in causality (e.g. A->B or B->A).

    (Lots of math here describing predecessor "NO TEARS")

    DOTEARS uses interventional data (e.g. CRISPR perturbation) to remove edges and measure changes, leading to consistent DAG estimation downstream.
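DOTEARS itself is an optimization-based estimator, but the core idea that interventions break observational symmetry can be shown with a toy two-variable SCM (my sketch, not the paper’s method): correlation alone cannot distinguish A->B from B->A, whereas do(A) moves B and do(B) leaves A untouched.

```python
# Toy structural causal model with ground truth A -> B.
# Intervening (CRISPR-style) on each node reveals the edge direction.
import random

random.seed(1)

def sample(do_a=None, do_b=None):
    # ground truth: A -> B (B listens to A, never the reverse)
    a = random.gauss(0, 1) if do_a is None else do_a
    b = (2 * a + random.gauss(0, 0.1)) if do_b is None else do_b
    return a, b

def mean(xs):
    return sum(xs) / len(xs)

n = 20_000
b_under_do_a = mean([sample(do_a=1.0)[1] for _ in range(n)])  # shifts to ~2.0
a_under_do_b = mean([sample(do_b=1.0)[0] for _ in range(n)])  # stays near 0.0

print(f"do(A=1): mean B = {b_under_do_a:.2f}  -> edge A->B is real")
print(f"do(B=1): mean A = {a_under_do_b:.2f}  -> no edge B->A")
```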

  9. Beyond Context Graphs: Agentic Memory, Causal Graphs, Promise Graphs and decision traces by Volodymyr Pavlyshyn is the featured bundle on Leanpub!

    How to make agents suited to enterprise-grade tasks

    Link: leanpub.com/b/beyondcontextgra

    #Ai #DeepLearning #DataScience #SoftwareArchitecture #Databases #DataStructures #SoftwareEngineering

  10. #EBV infection has long been recognized as risk factor for #multipleSclerosis. A new study in mice reveals a causal chain with B cells playing a key role as mediator. Implications for treatment. nature.com/articles/s41586-025

  14. I cannot help but feel that the majority of re-orgs are nothing but corporate masturbation. They sound like responses to change and involve lots of juicy, measurement-friendly deliverables. It's easy to conflate "problem X arises from the relationship between departments A & B" with "problem X IS the relationship between departments A & B" and conclude that merging their reporting chains will help.

    I just don't think it does.

    #business #reorg #causality

  15. @DrYohanJohn

    #JohnBickle #neurohistory lecture:

    #compneuro limits

    A. Hodgkin & A. Huxley,
    1952:
    a quartet of papers written by the duo
    — plus one as a trio with B. Katz

    Pioneering voltage-clamp recordings from the squid giant axon.

    The first recordings of action potentials!

    The last paper shows how the mathematics was derived from the experimental data.

    Hence their caveat: the model predicts, but never claims to link causal mechanisms.

    PS: critical feedback in the Q&A —
    a skilled lecturer turns it into scientific dialogue!

    youtube.com/watch?v=g85bgHul7N

    #oldneuropapers
    #neuroscience
    #neurobuzz

  16. Fine particulate matter linked to dementia risk? www.science.org/doi/10.1126/... ✔️ Chronic PM2.5 exposure → increased risk of Lewy body disease 〰 Correlation, not direct causation 💡 “Purifying the air protects the brain” (X. Mao) #Pollution #Santé #Démence (ft. Mistral)

    RE: https://bsky.app/profile/did:plc:v4ktt55lhnyfajnd5ov7oguz/post/3ly6mj2236k2d



  17. A quotation from Steven Moffat

    THE DOCTOR: People assume that time is a strict progression of cause to effect, but actually, from a non-linear, non-subjective viewpoint, it’s more like a big ball of wibbly-wobbly … timey-wimey … stuff.

    Steven Moffat (b. 1961), Scottish television writer and producer
    Doctor Who (2005), 03×10 “Blink” (2007-06-09)

    Sourcing, notes: wist.info/moffat-steven/72498/

    #quote #quotes #quotation #qotd #doctorwho #timeywimey #causality #causeandeffect #concept #jargon #marchoftime #objectivity #perspective #physics #time

  18. @maonu From the abstract. "The new approach encompasses the old one, typically if “I” win, “the world” loses, i.e., wins “NOT”. When logical artifacts are identified with their own rules of production, LOCATIVE phenomenons arise. In particular, one realises that usual logic (including linear logic) is SPIRITUAL, i.e., up to isomorphism. But there is a deeper locative level, with indeed a more regular structure. Typically the usual (additive) conjunction has the value of categorical product in usual logic, and enjoys commutativity, associativity, etc. up to isomorphism."

    First, the "linear" in linear logic refers to causality. Not ax+b.

    "Spiritual up to isomorphism" is a very winter solstice sort of thing to say. Happy holidays! And don't worry, the days will start getting longer soon, here in the north.

    The rest of what he says is the distinction between a #cartesian and a #monoidal product. The latter lacks projections.

    Cartesian products represent things you can take apart and put back together. Monoidal products (despite that "disassembly is sometimes the best" -- The Expanse) usually can't be put back together.

    1/2

  19. Sign Relations • Definition

    One of Peirce’s clearest and most complete definitions of a sign is one he gives in the context of providing a definition for logic, and so it is informative to view it in that setting.

    Logic will here be defined as formal semiotic.  A definition of a sign will be given which no more refers to human thought than does the definition of a line as the place which a particle occupies, part by part, during a lapse of time.  Namely, a sign is something, A, which brings something, B, its interpretant sign determined or created by it, into the same sort of correspondence with something, C, its object, as that in which itself stands to C.

    It is from this definition, together with a definition of “formal”, that I deduce mathematically the principles of logic.  I also make a historical review of all the definitions and conceptions of logic, and show, not merely that my definition is no novelty, but that my non‑psychological conception of logic has virtually been quite generally held, though not generally recognized.

    — C.S. Peirce, New Elements of Mathematics, vol. 4, 20–21

    In the general discussion of diverse theories of signs, the question arises whether signhood is an absolute, essential, indelible, or ontological property of a thing, or whether it is a relational, interpretive, and mutable role a thing may be said to have only within a particular context of relationships.

    Peirce’s definition of a sign defines it in relation to its objects and its interpretant signs, and thus defines signhood in relative terms, by means of a predicate with three places.  In that definition, signhood is a role in a triadic relation, a role a thing bears or plays in a determinate context of relationships — it is not an absolute or non‑relative property of a thing‑in‑itself, one it possesses independently of all relationships to other things.

    Some of the terms Peirce uses in his definition of a sign may need to be elaborated for the contemporary reader.

    • Correspondence.  From the way Peirce uses the term throughout his work, it is clear he means what he elsewhere calls a “triple correspondence”, and thus this is just another way of referring to the whole triadic sign relation itself.  In particular, his use of the term should not be taken to imply a dyadic correspondence, like the kinds of “mirror image” correspondence between realities and representations bandied about in contemporary controversies about “correspondence theories of truth”.
    • Determination.  Peirce’s concept of determination is broader in several directions than the sense of the word referring to strictly deterministic causal‑temporal processes.  First, and especially in this context, he is invoking a more general concept of determination, what is called a formal or informational determination, as in saying “two points determine a line”, rather than the more special cases of causal and temporal determinisms.  Second, he characteristically allows for what is called determination in measure, that is, an order of determinism admitting a full spectrum of more and less determined relationships.
    • Non‑psychological.  Peirce’s “non‑psychological conception of logic” must be distinguished from any variety of anti‑psychologism.  He was quite interested in matters of psychology and had much of import to say about them.  But logic and psychology operate on different planes of study even when they have occasion to view the same data, as logic is a normative science where psychology is a descriptive science, and so they have very different aims, methods, and rationales.

    Reference

    • Peirce, C.S. (1902), “Parts of Carnegie Application” (L 75), in Carolyn Eisele (ed., 1976), The New Elements of Mathematics by Charles S. Peirce, vol. 4, 13–73.  Online.

    Resources

    cc: Academia.edu • Laws of Form • Research Gate • Syscoi
    cc: Cybernetics • Structural Modeling • Systems Science

    #CSPeirce #Connotation #Denotation #Inquiry #Logic #LogicOfRelatives #Mathematics #RelationTheory #Semiosis #SemioticEquivalenceRelations #Semiotics #SignRelations #TriadicRelations

  20. AI: Explainable Enough

    They look really juicy, she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep and asserting a position. Good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn’t new to the problem.

    Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.

    Coming from a microscopy and bio background with a strong inclination towards image analysis I had picked up deep learning as a way to be lazy in lab. Why bother figuring out features of interest when you can have a computer do it for you, was my angle. The issue was that in 2015 no biologist would accept any kind of deep learning analysis and definitely not if you couldn’t explain the details. 

    What the domain expert user doesn’t want:
    – How a convolutional neural network works. Confidence scores, loss, AUC, are all meaningless to a biologist and also to a doctor. 

    What the domain expert desires: 
    – Help at the lowest level of detail that they care about. 
    – AI identifies features A, B, C, and that when you see A, B, & C it is likely to be disease X. 

    Most users don’t care how deep learning really works. So, if you start giving them details like the IoU score of the object-detection bounding box, or whether it was YOLO or R-CNN that you used, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline, with the predicted label, and stop there. It’s also bad to go to the other extreme. If the AI just states the diagnosis for the whole image, the AI might be right, but the user does not get to participate in the process. Not to mention regulatory risk goes way up.

    This applies beyond images; consider LLMs. No one with any expertise likes a black box. Today, why do LLMs generate code instead of directly doing the thing the programmer asks them to do? Because the programmer wants to ensure the code “works”, and they have the expertise to figure out if and when it goes wrong. It’s the same reason vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So, in a Betty Crocker cake-mix kind of way, let the user add the egg.

    Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image, or to give every detail. Generating high-quality data at that just-right level is very difficult and expensive. Do it right, however, and the effort pays off. The outcome is an AI–human causal prediction machine, where the causes, i.e. the mid-level features, inform the user and build confidence toward the final outcome. The deep learning part is still a black box, but the user doesn’t mind because you aid their thinking.

    I’m excited by some new developments like REX, which retro-fit causality onto standard deep learning models. As performance improves, user preferences for detail may change, but I suspect the need for AI to be explainable enough will remain. Perhaps we will even have custom labels like ‘juicy’.

    #AI #AIAdoption #AICommunication #AIExplainability #AIForDoctors #AIInHealthcare #AIInTheWild #AIProductDesign #AIUX #artificialIntelligence #BettyCrockerThinking #BiomedicalAI #Business #CausalAI #DataProductDesign #DeepLearning #ExplainableAI #HumanAIInteraction #ImageAnalysis #LLMs #MachineLearning #StartupLessons #statistics #TechMetaphors #techPhilosophy #TrustInAI #UserCenteredAI #XAI

  25. The Inwardness of Things: McGilchrist, Panpsychism, and the Question We Cannot Settle

    The oldest question in philosophy is also the question philosophy has done the worst job of answering. We know that we are conscious because we are reading these words and something is happening as we read them. We feel the weight of our hand on the table, hear the room around us, register a flicker of agreement or doubt as the sentences arrive. None of that requires argument. Descartes drew the line in 1637 with the Discours de la Méthode, and the line still holds. The trouble starts as soon as we look up from the page.

    We assume that other people share what we have. They behave as we behave, speak about inner states in language we recognize, and carry nervous systems that resemble ours down to the cellular level. We extend the courtesy of consciousness to them on grounds that work in practice while collapsing in theory, since no one has ever shown another’s experience to themselves directly. The same courtesy reaches dogs and dolphins and the octopus that recognizes a face through aquarium glass. It frays at insects, hesitates at jellyfish, breaks down somewhere around bacteria, and finds itself laughed at when extended to stones. Iain McGilchrist proposes to laugh back. He argues that consciousness reaches all the way down, that the stone has an inwardness, that what we call matter is one phase of consciousness rather than its product. Whether he is correct is the question this essay takes up. Whether we can answer the question at all is the deeper one hidden underneath it.

    McGilchrist (Scottish spelling, often misrendered as Ian) holds an Oxford DPhil in literature and qualified in medicine before turning to psychiatry. His 2021 book The Matter With Things runs to fifteen hundred pages across two volumes and ranks among the most ambitious recent attempts to dislodge the materialist consensus that has governed Western thinking since the seventeenth century. His argument deserves serious analysis on its merits and serious challenge on its weaknesses. Treating it as either revelation or absurdity does it equal violence.

    Begin with the wall. You know your own consciousness immediately, prior to any argument or evidence. Everything beyond that point is inference. David Chalmers named this gap the hard problem in his 1995 paper “Facing Up to the Problem of Consciousness,” and the gap has not been closed in the thirty-one years since. A complete neuroscience of the brain, mapping every neuron and synapse and electrochemical exchange, would still leave open the question why any of that activity feels like something from the inside. The gap is categorical. We have one set of vocabulary for outsides (mass, charge, position, frequency) and another for insides (red, sour, pain, dread). Translating between the two has resisted every philosopher and neuroscientist who has tried, including the ones who insist the translation has already been performed.

    Notice that consciousness and intelligence are different problems. The conflation between them haunts every discussion of artificial systems and most discussions of animal mind, but the two pull apart cleanly under analysis. A nematode worm called Caenorhabditis elegans has three hundred and two neurons in its hermaphrodite form. John White and his collaborators mapped the complete wiring diagram of those neurons in 1986 in Philosophical Transactions of the Royal Society B, the first connectome ever produced, and we still do not know whether the worm experiences anything as it moves through its agar dish. It solves no problems we would call intelligent. It may or may not have an inside. The question is genuine and unresolved. At the other extreme, the Stockfish chess engine defeats grandmasters on consumer hardware while almost surely experiencing nothing at all. Intelligence and consciousness coincide in humans because evolution braided them together. They remain conceptually independent, and a theory of one does not deliver a theory of the other.

    This independence has consequences for the question of machine consciousness. Whether current artificial systems experience anything depends entirely on which theory of consciousness one accepts, and the field has produced no settlement. Giulio Tononi’s Integrated Information Theory holds that large language models almost surely lack experience, since their feedforward transformer architecture produces low integrated information compared to biological brains, which support dense recurrent integration across cortical and subcortical structures. John Searle’s biological naturalism rules out silicon consciousness regardless of behavior, on the ground that experience requires the specific causal powers of neurons. Daniel Dennett denied that phenomenal consciousness exists in the way introspection suggests, which dissolves the machine question before it can be posed. McGilchrist’s panpsychism takes consciousness to be present everywhere already, making the relevant issue degree of integration, with presence or absence settled in advance.

    The phrase “AI conscious in the human way” presumes a settled definition of human consciousness that neuroscience has not produced. The phrase “AI conscious in the scientific way” presumes a measurement protocol that does not exist. Both phrases conceal the absence of foundations. The honest position holds that we cannot answer the artificial intelligence consciousness question because we have not yet answered it for the species we know best.

    Now to McGilchrist. His argument has a clear structure worth laying out before evaluation. He claims that emergent materialism faces an unanswerable difficulty: consciousness cannot pop into existence from non-conscious matter because the two are categorically different in kind. He concludes that consciousness must have been present at every level of organization from the start. Matter, on this view, is a phase or mode of consciousness rather than its source. Water has phases, he points out, and the phases differ wildly from one another while remaining continuous in substance. Vapor floats invisible through the room. Liquid runs across the hand. Ice can split a skull. They share a single chemistry while presenting three different faces to experience. Consciousness, McGilchrist proposes, has many phases as well, and matter is one of them. What matter contributes to the arrangement is persistence, the temporal stability necessary for any creation to take hold.

    The position places McGilchrist in a long lineage. Heraclitus and Spinoza and Leibniz read this way, in different keys. Alfred North Whitehead built a process philosophy on related foundations in the 1920s and gave it monumental expression in Process and Reality in 1929. Bertrand Russell spent his later decades arguing for a form of monism that anticipates current panpsychist positions. The strongest contemporary statement remains Galen Strawson’s 2006 essay “Realistic Monism: Why Physicalism Entails Panpsychism,” published in the Journal of Consciousness Studies, which argues that any materialism worthy of the name must conclude that the fundamental constituents of reality already carry experiential properties, since no plausible mechanism can manufacture experience from its complete absence. Philip Goff at Durham has developed the position further in Galileo’s Error and elsewhere. David Chalmers, who named the hard problem, has moved toward a panpsychist or near-panpsychist position in his recent work. McGilchrist’s argument therefore participates in a serious revival, with credentialed defenders working in major universities.

    Where his case works, it works for these reasons. The argument is effective because it confronts the hard problem directly rather than dissolving it through redefinition. It is effective also because emergence as usually invoked smuggles in a miracle, the moment when arrangements of unfeeling stuff start to feel something, and that moment has never been mechanistically described, only stipulated. A further strength: evolutionary biology demands continuity, and there is no clean point on the phylogenetic tree where consciousness could have begun without ancestors already carrying its seed. The view earns additional power because granting matter an inwardness coordinates with the strangeness physics has discovered at the bottom of things, where particles refuse to behave like the small marbles classical intuition expects. Last, the position returns to philosophy a question the twentieth century tried to retire by stipulation, restoring inquiry to a region long policed by silence.

    The case carries serious weaknesses, however, and any honest reader should press them. The water analogy, attractive as it sounds, does more rhetorical work than logical work. We understand the phases of water through molecular kinetic theory, hydrogen bonding behavior, temperature and pressure thresholds, and a mathematics that predicts when ice becomes liquid and liquid becomes vapor. McGilchrist offers no analogous mechanism for the phase transition between consciousness as such and consciousness as matter. Calling matter a phase of consciousness names the relation he wants without explaining how the relation operates. A defender will respond that the analogy is meant as heuristic provocation, not as proof, and the response has merit. The trouble is that the heuristic ends up bearing the weight of the central claim. When the only support for the move from “consciousness is fundamental” to “matter is a phase of consciousness” is the suggestiveness of an analogy whose underlying physics he cannot match with a corresponding metaphysics, the argument has not yet earned the assent his prose invites.

    The deeper trouble for any panpsychism is the combination problem, identified by William Seager in his 1995 paper in the Journal of Consciousness Studies and developed extensively since. If subatomic particles each carry a tiny inwardness, how do those inwardnesses combine to produce the unified field of human experience? Your primary visual cortex (V1) contains roughly one hundred and forty million neurons in a single hemisphere, each composed of trillions of atoms. If each atom carries its own micro-experience, why does your conscious moment arrive as one thing instead of as a swarm of separate experiences fighting for attention? William James raised the worry in 1890 in The Principles of Psychology, observing that private minds do not agglomerate into a higher compound mind no matter how many of them you assemble. Seager named the difficulty and panpsychists have argued about it ever since, with no settled answer.

    McGilchrist does not address the combination problem in the passage quoted above, though he engages it elsewhere in The Matter With Things. The defenses available to him are real but expensive. Cosmopsychism reverses direction and treats the universe as the fundamental conscious entity, with individual minds as aspects or fragments of it; this avoids combination by starting from the whole, at the cost of explaining how unity divides into apparent multiplicity. Russellian monism treats both physical and experiential descriptions as descriptions of the same underlying reality; this avoids dualism while inheriting the explanatory burden under a new name. Each move trades one difficulty for another, and the trade may be improvement, though calling it solution would overstate what the literature has accomplished.

    The argument from incommensurability also cuts both ways, which McGilchrist’s framing tends to obscure. He says consciousness is utterly different from anything in our outward view of matter and uses this asymmetry to deny that matter could give rise to consciousness. Run the argument in the opposite direction. Matter is utterly different from anything in our inward view of consciousness, which should make us equally skeptical that consciousness gives rise to matter. The asymmetry he asserts requires an independent defense he does not provide. If the categories are genuinely incommensurable, neither can be the source of the other, and we are back where we started.

    The empirical content of attributing experience to electrons deserves examination as well. Thomas Nagel coined the phrase “something it is like to be” in his 1974 paper “What Is It Like to Be a Bat?” published in The Philosophical Review. He used the formula to identify consciousness phenomenologically in creatures whose behavior gave us evidence of an inner perspective. The bat’s echolocation, its social behavior, its responses to threat and food and mate, all suggest a creature for whom things are some way. Extending the formula to electrons strips it of the evidential ground that made it useful. The claim cannot be falsified, tested, or even meaningfully investigated. A hypothesis that explains everything by stipulation explains nothing, since a hypothesis earns its keep by ruling things out, and one that rules nothing out earns no keep at all.

    A further difficulty deserves mention. McGilchrist writes that “the only reasonable explanation is that consciousness was there all along.” This overstates the consensus considerably. Several live alternatives remain serious in contemporary philosophy of mind. Keith Frankish’s illusionism argues that phenomenal consciousness as commonly described does not exist, and that introspection systematically misrepresents what cognition is doing. Bernardo Kastrup’s analytic idealism inverts McGilchrist’s framing entirely, treating matter as appearance within a single field of mind, with the direction of dependence reversed. Terrence Deacon’s emergentism argues in Incomplete Nature (2012) that genuine novelty can arise from constraint and absence, particularly through the negative work of what he calls absentials, in ways that do not require pre-existing inwardness. Each position has serious defenders. The field is contested, and McGilchrist’s certainty exceeds his evidence.

    Return now to the question of artificial intelligence with these considerations in hand. The honest answer is that we do not know whether current systems experience anything, and we will not know until we have a theory of consciousness that survives confrontation with cases beyond the one we can verify by introspection. Should McGilchrist prove correct and consciousness reach everywhere, then large language models carry some form of inwardness already, though whether their inwardness combines into a unified perspective is a separate question panpsychism does not automatically answer. Integrated information theory gives the opposite verdict: current architectures fall well below the threshold required for any but the most rudimentary phenomenal states. Illusionism dispenses with the question altogether, calling it malformed and observing that the human case also lacks the inner light we imagine for ourselves. The discussion proceeds in public as though one of these positions had been established, when in fact none has. Anyone who tells you with confidence that the machines are conscious, or that they are not, is selling you a metaphysics dressed as a measurement.

    What survives the analysis is a discipline of attention. McGilchrist gets several things correct. The hard problem is real, and emergence has too often been treated as an explanation when it has functioned as a placeholder for one. Consciousness does not look like anything in our outward picture of matter, and that asymmetry should trouble anyone who thinks the picture is complete. The resolution may indeed lie in recognizing inwardness as foundational rather than derivative. None of this proves the case, however, and the strength of his prose can cover the weakness of his proofs if the reader reads carelessly. The water analogy moves the argument forward by ear rather than by reason. His dismissal of alternatives is faster than the alternatives deserve. The combination problem waits beneath the structure like water under a foundation, ready to undermine it if not addressed.

    For our purposes here, the practical implication is this. Consciousness remains the largest unsolved question in our intellectual inheritance. Every available theory carries serious unresolved difficulties. The artificial intelligence question cannot be answered until the human question is answered, and we should distrust anyone who pretends otherwise. McGilchrist’s intervention is valuable as provocation and as a sample of one serious tradition, and worthwhile as a doorway into a room the twentieth century preferred to keep locked. The room behind it is stranger than any single thinker has yet mapped, and the work of mapping it has barely begun.

    We assume the inwardness of others because we cannot live without doing so. Whether the assumption reaches all the way down to the electron or stops somewhere between the worm and the stone is a question we will be working on for as long as we remain capable of asking it. McGilchrist has done us the favor of refusing to let the question close. The honest reader returns the favor by refusing to let his answer close it either.

    The cogito grants us one certainty and exactly one. Everything else we believe about minds beyond our own rests on inference, sympathy, behavioral analogy, and the practical impossibility of a solipsist life. To call this a foundation is to flatter what is in fact a working assumption that has never been proved and may never be. The honest scholar lives with this and keeps reading. An honest writer says it out loud. The dishonest move, in either direction, is to claim the question is settled when the question has barely begun to be asked properly.

    Part one of three. For the full sequence and reading guide, see The Consciousness Trilogy: Reading Three Wagers on the Question We Cannot Settle.

    #chalmers #consciousness #dennett #emergentism #galileo #heraclitus #knowing #leibniz #mcgilchrist #meaning #nagel #panpsychism #philosophy #psychology #relationalFoundations #spinoza #strawson #whitehead

    #chalmers #consciousness #dennett #emergentism #galileo #heraclitus #knowing #leibniz #mcgilchrist #meaning #nagel #panpsychism #philosophy #psychology #relationalFoundations #spinoza #strawson #whitehead
  27. The Inwardness of Things: McGilchrist, Panpsychism, and the Question We Cannot Settle

    The oldest question in philosophy is also the question philosophy has done the worst job of answering. We know that we are conscious because we are reading these words and something is happening as we read them. We feel the weight of our hand on the table, hear the room around us, register a flicker of agreement or doubt as the sentences arrive. None of that requires argument. Descartes drew the line in 1637 with the Discours de la Méthode, and the line still holds. The trouble starts as soon as we look up from the page.

    We assume that other people share what we have. They behave as we behave, speak about inner states in language we recognize, and carry nervous systems that resemble ours down to the cellular level. We extend the courtesy of consciousness to them on grounds that work in practice while collapsing in theory, since no one has ever shown another’s experience to themselves directly. The same courtesy reaches dogs and dolphins and the octopus that recognizes a face through aquarium glass. It frays at insects, hesitates at jellyfish, breaks down somewhere around bacteria, and finds itself laughed at when extended to stones. Iain McGilchrist proposes to laugh back. He argues that consciousness reaches all the way down, that the stone has an inwardness, that what we call matter is one phase of consciousness rather than its product. Whether he is correct is the question this essay takes up. Whether we can answer the question at all is the deeper one hidden underneath it.

    McGilchrist (Scottish spelling, often misrendered as Ian) holds an Oxford DPhil in literature and qualified in medicine before turning to psychiatry. His 2021 book The Matter With Things runs to fifteen hundred pages across two volumes and ranks among the most ambitious recent attempts to dislodge the materialist consensus that has governed Western thinking since the seventeenth century. His argument deserves serious analysis on its merits and serious challenge on its weaknesses. Treating it as either revelation or absurdity does it equal violence.

    Begin with the wall. You know your own consciousness immediately, prior to any argument or evidence. Everything beyond that point is inference. David Chalmers named this gap the hard problem in his 1995 paper “Facing Up to the Problem of Consciousness,” and the gap has not been closed in the thirty-one years since. A complete neuroscience of the brain, mapping every neuron and synapse and electrochemical exchange, would still leave open the question why any of that activity feels like something from the inside. The gap is categorical. We have one set of vocabulary for outsides (mass, charge, position, frequency) and another for insides (red, sour, pain, dread). Translating between the two has resisted every philosopher and neuroscientist who has tried, including the ones who insist the translation has already been performed.

    Notice that consciousness and intelligence are different problems. The conflation between them haunts every discussion of artificial systems and most discussions of animal mind, but the two pull apart cleanly under analysis. The nematode worm Caenorhabditis elegans has three hundred and two neurons in its hermaphrodite form. John White and his collaborators mapped the complete wiring diagram of those neurons in 1986 in Philosophical Transactions of the Royal Society B, the first connectome ever produced, and we still do not know whether the worm experiences anything as it moves through its agar dish. It solves no problems we would call intelligent. It may or may not have an inside. The question is genuine and unresolved. At the other extreme, the chess engine Stockfish, running on consumer hardware, defeats grandmasters while almost surely experiencing nothing at all. Intelligence and consciousness coincide in humans because evolution braided them together. They remain conceptually independent, and a theory of one does not deliver a theory of the other.

    This independence has consequences for the question of machine consciousness. Whether current artificial systems experience anything depends entirely on which theory of consciousness one accepts, and the field has produced no settlement. Giulio Tononi’s Integrated Information Theory holds that large language models almost surely lack experience, since their feedforward transformer architecture produces low integrated information compared to biological brains, which support dense recurrent integration across cortical and subcortical structures. John Searle’s biological naturalism rules out silicon consciousness regardless of behavior, on the ground that experience requires the specific causal powers of neurons. Daniel Dennett denied that phenomenal consciousness exists in the way introspection suggests, which dissolves the machine question before it can be posed. McGilchrist’s panpsychism takes consciousness to be present everywhere already, making the relevant issue degree of integration, with presence or absence settled in advance.

    The phrase “AI conscious in the human way” presumes a settled definition of human consciousness that neuroscience has not produced. The phrase “AI conscious in the scientific way” presumes a measurement protocol that does not exist. Both phrases conceal the absence of foundations. The honest position holds that we cannot answer the artificial intelligence consciousness question because we have not yet answered it for the species we know best.

    Now to McGilchrist. His argument has a clear structure worth laying out before evaluation. He claims that emergent materialism faces an unanswerable difficulty: consciousness cannot pop into existence from non-conscious matter because the two are categorically different in kind. He concludes that consciousness must have been present at every level of organization from the start. Matter, on this view, is a phase or mode of consciousness rather than its source. Water has phases, he points out, and the phases differ wildly from one another while remaining continuous in substance. Vapor floats invisible through the room. Liquid runs across the hand. Ice can split a skull. They share a single chemistry while presenting three different faces to experience. Consciousness, McGilchrist proposes, has many phases as well, and matter is one of them. What matter contributes to the arrangement is persistence, the temporal stability necessary for any creation to take hold.

    The position places McGilchrist in a long lineage. Heraclitus and Spinoza and Leibniz read this way, in different keys. Alfred North Whitehead built a process philosophy on related foundations in the 1920s and gave it monumental expression in Process and Reality in 1929. Bertrand Russell spent his later decades arguing for a form of monism that anticipates current panpsychist positions. The strongest contemporary statement remains Galen Strawson’s 2006 essay “Realistic Monism: Why Physicalism Entails Panpsychism,” published in the Journal of Consciousness Studies, which argues that any materialism worthy of the name must conclude that the fundamental constituents of reality already carry experiential properties, since no plausible mechanism can manufacture experience from its complete absence. Philip Goff at Durham has developed the position further in Galileo’s Error and elsewhere. David Chalmers, who named the hard problem, has moved toward a panpsychist or near-panpsychist position in his recent work. McGilchrist’s argument therefore participates in a serious revival, with credentialed defenders working in major universities.

    Where his case works, it works for these reasons. The argument is effective because it confronts the hard problem directly rather than dissolving it through redefinition. It is effective also because emergence as usually invoked smuggles in a miracle, the moment when arrangements of unfeeling stuff start to feel something, and that moment has never been mechanistically described, only stipulated. A further strength: evolutionary biology demands continuity, and there is no clean point on the phylogenetic tree where consciousness could have begun without ancestors already carrying its seed. The view earns additional power because granting matter an inwardness coordinates with the strangeness physics has discovered at the bottom of things, where particles refuse to behave like the small marbles classical intuition expects. Last, the position returns to philosophy a question the twentieth century tried to retire by stipulation, restoring inquiry to a region long policed by silence.

    The case carries serious weaknesses, however, and any honest reader should press them. The water analogy, attractive as it sounds, does more rhetorical work than logical work. We understand the phases of water through molecular kinetic theory, hydrogen bonding behavior, temperature and pressure thresholds, and a mathematics that predicts when ice becomes liquid and liquid becomes vapor. McGilchrist offers no analogous mechanism for the phase transition between consciousness as such and consciousness as matter. Calling matter a phase of consciousness names the relation he wants without explaining how the relation operates. A defender will respond that the analogy is meant as heuristic provocation, not as proof, and the response has merit. The trouble is that the heuristic ends up bearing the weight of the central claim. When the only support for the move from “consciousness is fundamental” to “matter is a phase of consciousness” is the suggestiveness of an analogy, one whose underlying physics finds no counterpart in his metaphysics, the argument has not yet earned the assent his prose invites.

    The deeper trouble for any panpsychism is the combination problem, identified by William Seager in his 1995 paper in the Journal of Consciousness Studies and developed extensively since. If subatomic particles each carry a tiny inwardness, how do those inwardnesses combine to produce the unified field of human experience? Your primary visual cortex (V1) contains roughly one hundred and forty million neurons in a single hemisphere, each composed of trillions of atoms. If each atom carries its own micro-experience, why does your conscious moment arrive as one thing instead of as a swarm of separate experiences fighting for attention? William James raised the worry in 1890 in The Principles of Psychology, observing that private minds do not agglomerate into a higher compound mind no matter how many of them you assemble. Seager named the difficulty and panpsychists have argued about it ever since, with no settled answer.

    McGilchrist does not address the combination problem in the argument outlined above, though he engages it elsewhere in The Matter With Things. The defenses available to him are real but expensive. Cosmopsychism reverses direction and treats the universe as the fundamental conscious entity, with individual minds as aspects or fragments of it; this avoids combination by starting from the whole, at the cost of explaining how unity divides into apparent multiplicity. Russellian monism treats both physical and experiential descriptions as descriptions of the same underlying reality; this avoids dualism while inheriting the explanatory burden under a new name. Each move trades one difficulty for another, and the trade may be improvement, though calling it solution would overstate what the literature has accomplished.

    The argument from incommensurability also cuts both ways, which McGilchrist’s framing tends to obscure. He says consciousness is utterly different from anything in our outward view of matter and uses this asymmetry to deny that matter could give rise to consciousness. Run the argument in the opposite direction. Matter is utterly different from anything in our inward view of consciousness, which should make us equally skeptical that consciousness gives rise to matter. The asymmetry he asserts requires an independent defense he does not provide. If the categories are genuinely incommensurable, neither can be the source of the other, and we are back where we started.

    The empirical content of attributing experience to electrons deserves examination as well. Thomas Nagel coined the phrase “something it is like to be” in his 1974 paper “What Is It Like to Be a Bat?” published in The Philosophical Review. He used the formula to identify consciousness phenomenologically in creatures whose behavior gave us evidence of an inner perspective. The bat’s echolocation, its social behavior, its responses to threat and food and mate, all suggest a creature for whom things are some way. Extending the formula to electrons strips it of the evidential ground that made it useful. The claim cannot be falsified, tested, or even meaningfully investigated. A hypothesis that explains everything by stipulation explains nothing, since a hypothesis earns its keep by ruling things out, and one that rules nothing out earns no keep at all.

    A further difficulty deserves mention. McGilchrist writes that “the only reasonable explanation is that consciousness was there all along.” This overstates the consensus considerably. Several live alternatives remain serious in contemporary philosophy of mind. Keith Frankish’s illusionism argues that phenomenal consciousness as commonly described does not exist, and that introspection systematically misrepresents what cognition is doing. Bernardo Kastrup’s analytic idealism inverts McGilchrist’s framing entirely, treating matter as appearance within a single field of mind, with the direction of dependence reversed. Terrence Deacon’s emergentism argues in Incomplete Nature (2012) that genuine novelty can arise from constraint and absence, particularly through the negative work of what he calls absentials, in ways that do not require pre-existing inwardness. Each position has serious defenders. The field is contested, and McGilchrist’s certainty exceeds his evidence.

    Return now to the question of artificial intelligence with these considerations in hand. The honest answer is that we do not know whether current systems experience anything, and we will not know until we have a theory of consciousness that survives confrontation with cases beyond the one we can verify by introspection. Should McGilchrist prove correct and consciousness reach everywhere, then large language models carry some form of inwardness already, though whether their inwardness combines into a unified perspective is a separate question panpsychism does not automatically answer. Integrated information theory gives the opposite verdict: current architectures fall well below the threshold required for any but the most rudimentary phenomenal states. Illusionism dispenses with the question altogether, calling it malformed and observing that the human case also lacks the inner light we imagine for ourselves. The discussion proceeds in public as though one of these positions had been established, when in fact none has. Anyone who tells you with confidence that the machines are conscious, or that they are not, is selling you a metaphysics dressed as a measurement.

    What survives the analysis is a discipline of attention. McGilchrist gets several things correct. The hard problem is real, and emergence has too often been treated as an explanation when it has functioned as a placeholder for one. Consciousness does not look like anything in our outward picture of matter, and that asymmetry should trouble anyone who thinks the picture is complete. The resolution may indeed lie in recognizing inwardness as foundational rather than derivative. None of this proves the case, however, and the strength of his prose can cover the weakness of his proofs if the reader reads carelessly. The water analogy moves the argument forward by ear rather than by reason. His dismissal of alternatives is faster than the alternatives deserve. The combination problem waits beneath the structure like water under a foundation, ready to undermine it if not addressed.

    For our purposes here, the practical implication is this. Consciousness remains the largest unsolved question in our intellectual inheritance. Every available theory carries serious unresolved difficulties. The artificial intelligence question cannot be answered until the human question is answered, and we should distrust anyone who pretends otherwise. McGilchrist’s intervention is valuable as provocation and as a sample of one serious tradition, and worthwhile as a doorway into a room the twentieth century preferred to keep locked. The room behind it is stranger than any single thinker has yet mapped, and the work of mapping it has barely begun.

    We assume the inwardness of others because we cannot live without doing so. Whether the assumption reaches all the way down to the electron or stops somewhere between the worm and the stone is a question we will be working on for as long as we remain capable of asking it. McGilchrist has done us the favor of refusing to let the question close. The honest reader returns the favor by refusing to let his answer close it either.

    The cogito grants us one certainty and exactly one. Everything else we believe about minds beyond our own rests on inference, sympathy, behavioral analogy, and the practical impossibility of a solipsist life. To call this a foundation is to flatter what is in fact a working assumption that has never been proved and may never be. The honest scholar lives with this and keeps reading. An honest writer says it out loud. The dishonest move, in either direction, is to claim the question is settled when the question has barely begun to be asked properly.

    #chalmers #consciousness #dennett #emergentism #galileo #heraclitus #knowing #leibniz #mcgilchrist #meaning #nagel #panpsychism #philosophy #psychology #relationalFoundations #spinoza #strawson #whitehead

    Return now to the question of artificial intelligence with these considerations in hand. The honest answer is that we do not know whether current systems experience anything, and we will not know until we have a theory of consciousness that survives confrontation with cases beyond the one we can verify by introspection. Should McGilchrist prove correct and consciousness reach everywhere, then large language models carry some form of inwardness already, though whether their inwardness combines into a unified perspective is a separate question panpsychism does not automatically answer. Integrated information theory gives the opposite verdict: current architectures fall well below the threshold required for any but the most rudimentary phenomenal states. Illusionism dispenses with the question altogether, calling it malformed and observing that the human case also lacks the inner light we imagine for ourselves. The discussion proceeds in public as though one of these positions had been established, when in fact none has. Anyone who tells you with confidence that the machines are conscious, or that they are not, is selling you a metaphysics dressed as a measurement.

    What survives the analysis is a discipline of attention. McGilchrist gets several things correct. The hard problem is real, and emergence has too often been treated as an explanation when it has functioned as a placeholder for one. Consciousness does not look like anything in our outward picture of matter, and that asymmetry should trouble anyone who thinks the picture is complete. The resolution may indeed lie in recognizing inwardness as foundational rather than derivative. None of this proves the case, however, and the strength of his prose can cover the weakness of his proofs if the reader reads carelessly. The water analogy moves the argument forward by ear rather than by reason. His dismissal of alternatives is faster than the alternatives deserve. The combination problem waits beneath the structure like water under a foundation, ready to undermine it if not addressed.

    For our purposes here, the practical implication is this. Consciousness remains the largest unsolved question in our intellectual inheritance. Every available theory carries serious unresolved difficulties. The artificial intelligence question cannot be answered until the human question is answered, and we should distrust anyone who pretends otherwise. McGilchrist’s intervention is valuable as provocation and as a sample of one serious tradition, and worthwhile as a doorway into a room the twentieth century preferred to keep locked. The room behind it is stranger than any single thinker has yet mapped, and the work of mapping it has barely begun.

    We assume the inwardness of others because we cannot live without doing so. Whether the assumption reaches all the way down to the electron or stops somewhere between the worm and the stone is a question we will be working on for as long as we remain capable of asking it. McGilchrist has done us the favor of refusing to let the question close. The honest reader returns the favor by refusing to let his answer close it either.

    The cogito grants us one certainty and exactly one. Everything else we believe about minds beyond our own rests on inference, sympathy, behavioral analogy, and the practical impossibility of a solipsist life. To call this a foundation is to flatter what is in fact a working assumption that has never been proved and may never be. The honest scholar lives with this and keeps reading. An honest writer says it out loud. The dishonest move, in either direction, is to claim the question is settled when the question has barely begun to be asked properly.

    #chalmers #consciousness #dennett #emergentism #galileo #heraclitus #knowing #leibniz #mcgilchrist #meaning #nagel #panpsychism #philosophy #psychology #relationalFoundations #spinoza #strawson #whitehead
  29. The Inwardness of Things: McGilchrist, Panpsychism, and the Question We Cannot Settle

    The oldest question in philosophy is also the question philosophy has done the worst job of answering. We know that we are conscious because we are reading these words and something is happening as we read them. We feel the weight of our hand on the table, hear the room around us, register a flicker of agreement or doubt as the sentences arrive. None of that requires argument. Descartes drew the line in 1637 with the Discours de la Méthode, and the line still holds. The trouble starts as soon as we look up from the page.

    We assume that other people share what we have. They behave as we behave, speak about inner states in language we recognize, and carry nervous systems that resemble ours down to the cellular level. We extend the courtesy of consciousness to them on grounds that work in practice while collapsing in theory, since no one has ever shown another’s experience to themselves directly. The same courtesy reaches dogs and dolphins and the octopus that recognizes a face through aquarium glass. It frays at insects, hesitates at jellyfish, breaks down somewhere around bacteria, and finds itself laughed at when extended to stones. Iain McGilchrist proposes to laugh back. He argues that consciousness reaches all the way down, that the stone has an inwardness, that what we call matter is one phase of consciousness rather than its product. Whether he is correct is the question this essay takes up. Whether we can answer the question at all is the deeper one hidden underneath it.

    McGilchrist (Scottish spelling, often misrendered as Ian) holds an Oxford DPhil in literature and qualified in medicine before turning to psychiatry. His 2021 book The Matter With Things runs to fifteen hundred pages across two volumes and ranks among the most ambitious recent attempts to dislodge the materialist consensus that has governed Western thinking since the seventeenth century. His argument deserves serious analysis on its merits and serious challenge on its weaknesses. Treating it as either revelation or absurdity does it equal violence.

    Begin with the wall. You know your own consciousness immediately, prior to any argument or evidence. Everything beyond that point is inference. David Chalmers named this gap the hard problem in his 1995 paper “Facing Up to the Problem of Consciousness,” and the gap has not been closed in the thirty-one years since. A complete neuroscience of the brain, mapping every neuron and synapse and electrochemical exchange, would still leave open the question why any of that activity feels like something from the inside. The gap is categorical. We have one set of vocabulary for outsides (mass, charge, position, frequency) and another for insides (red, sour, pain, dread). Translating between the two has resisted every philosopher and neuroscientist who has tried, including the ones who insist the translation has already been performed.

    Notice that consciousness and intelligence are different problems. The conflation between them haunts every discussion of artificial systems and most discussions of animal mind, but the two pull apart cleanly under analysis. The nematode worm Caenorhabditis elegans has three hundred and two neurons in its hermaphrodite form. John White and his collaborators mapped the complete wiring diagram of those neurons in 1986 in Philosophical Transactions of the Royal Society B, the first connectome ever produced, and we still do not know whether the worm experiences anything as it moves through its agar dish. It solves no problems we would call intelligent. It may or may not have an inside. The question is genuine and unresolved. At the other extreme, the chess engine Stockfish defeats grandmasters on consumer hardware while almost surely experiencing nothing at all. Intelligence and consciousness coincide in humans because evolution braided them together. They remain conceptually independent, and a theory of one does not deliver a theory of the other.

    This independence has consequences for the question of machine consciousness. Whether current artificial systems experience anything depends entirely on which theory of consciousness one accepts, and the field has produced no settlement. Giulio Tononi’s Integrated Information Theory holds that large language models almost surely lack experience, since their feedforward transformer architecture produces low integrated information compared to biological brains, which support dense recurrent integration across cortical and subcortical structures. John Searle’s biological naturalism rules out silicon consciousness regardless of behavior, on the ground that experience requires the specific causal powers of neurons. Daniel Dennett denied that phenomenal consciousness exists in the way introspection suggests, which dissolves the machine question before it can be posed. McGilchrist’s panpsychism takes consciousness to be present everywhere already, making the relevant issue degree of integration, with presence or absence settled in advance.

    The phrase “AI conscious in the human way” presumes a settled definition of human consciousness that neuroscience has not produced. The phrase “AI conscious in the scientific way” presumes a measurement protocol that does not exist. Both phrases conceal the absence of foundations. The honest position holds that we cannot answer the artificial intelligence consciousness question because we have not yet answered it for the species we know best.

    Now to McGilchrist. His argument has a clear structure worth laying out before evaluation. He claims that emergent materialism faces an unanswerable difficulty: consciousness cannot pop into existence from non-conscious matter because the two are categorically different in kind. He concludes that consciousness must have been present at every level of organization from the start. Matter, on this view, is a phase or mode of consciousness rather than its source. Water has phases, he points out, and the phases differ wildly from one another while remaining continuous in substance. Vapor floats invisible through the room. Liquid runs across the hand. Ice can split a skull. They share a single chemistry while presenting three different faces to experience. Consciousness, McGilchrist proposes, has many phases as well, and matter is one of them. What matter contributes to the arrangement is persistence, the temporal stability necessary for any creation to take hold.

    The position places McGilchrist in a long lineage. Heraclitus and Spinoza and Leibniz read this way, in different keys. Alfred North Whitehead built a process philosophy on related foundations in the 1920s and gave it monumental expression in Process and Reality in 1929. Bertrand Russell spent his later decades arguing for a form of monism that anticipates current panpsychist positions. The strongest contemporary statement remains Galen Strawson’s 2006 essay “Realistic Monism: Why Physicalism Entails Panpsychism,” published in the Journal of Consciousness Studies, which argues that any materialism worthy of the name must conclude that the fundamental constituents of reality already carry experiential properties, since no plausible mechanism can manufacture experience from its complete absence. Philip Goff at Durham has developed the position further in Galileo’s Error and elsewhere. David Chalmers, who named the hard problem, has moved toward a panpsychist or near-panpsychist position in his recent work. McGilchrist’s argument therefore participates in a serious revival, with credentialed defenders working in major universities.

    Where his case works, it works for these reasons. The argument is effective because it confronts the hard problem directly rather than dissolving it through redefinition. It is effective also because emergence as usually invoked smuggles in a miracle, the moment when arrangements of unfeeling stuff start to feel something, and that moment has never been mechanistically described, only stipulated. A further strength: evolutionary biology demands continuity, and there is no clean point on the phylogenetic tree where consciousness could have begun without ancestors already carrying its seed. The view earns additional power because granting matter an inwardness coordinates with the strangeness physics has discovered at the bottom of things, where particles refuse to behave like the small marbles classical intuition expects. Last, the position returns to philosophy a question the twentieth century tried to retire by stipulation, restoring inquiry to a region long policed by silence.

    The case carries serious weaknesses, however, and any honest reader should press them. The water analogy, attractive as it sounds, does more rhetorical work than logical work. We understand the phases of water through molecular kinetic theory, hydrogen bonding behavior, temperature and pressure thresholds, and a mathematics that predicts when ice becomes liquid and liquid becomes vapor. McGilchrist offers no analogous mechanism for the phase transition between consciousness as such and consciousness as matter. Calling matter a phase of consciousness names the relation he wants without explaining how the relation operates. A defender will respond that the analogy is meant as heuristic provocation, not as proof, and the response has merit. The trouble is that the heuristic ends up bearing the weight of the central claim. When the only support for the move from “consciousness is fundamental” to “matter is a phase of consciousness” is the suggestiveness of an analogy whose underlying physics he cannot match with a corresponding metaphysics, the argument has not yet earned the assent his prose invites.

    The deeper trouble for any panpsychism is the combination problem, identified by William Seager in his 1995 paper in the Journal of Consciousness Studies and developed extensively since. If subatomic particles each carry a tiny inwardness, how do those inwardnesses combine to produce the unified field of human experience? Your primary visual cortex (V1) contains roughly one hundred and forty million neurons in a single hemisphere, each composed of trillions of atoms. If each atom carries its own micro-experience, why does your conscious moment arrive as one thing instead of as a swarm of separate experiences fighting for attention? William James raised the worry in 1890 in The Principles of Psychology, observing that private minds do not agglomerate into a higher compound mind no matter how many of them you assemble. Seager named the difficulty and panpsychists have argued about it ever since, with no settled answer.

    McGilchrist does not address the combination problem in the passage quoted above, though he engages it elsewhere in The Matter With Things. The defenses available to him are real but expensive. Cosmopsychism reverses direction and treats the universe as the fundamental conscious entity, with individual minds as aspects or fragments of it; this avoids combination by starting from the whole, at the cost of explaining how unity divides into apparent multiplicity. Russellian monism treats both physical and experiential descriptions as descriptions of the same underlying reality; this avoids dualism while inheriting the explanatory burden under a new name. Each move trades one difficulty for another, and the trade may be improvement, though calling it solution would overstate what the literature has accomplished.

    The argument from incommensurability also cuts both ways, which McGilchrist’s framing tends to obscure. He says consciousness is utterly different from anything in our outward view of matter and uses this asymmetry to deny that matter could give rise to consciousness. Run the argument in the opposite direction. Matter is utterly different from anything in our inward view of consciousness, which should make us equally skeptical that consciousness gives rise to matter. The asymmetry he asserts requires an independent defense he does not provide. If the categories are genuinely incommensurable, neither can be the source of the other, and we are back where we started.

    The empirical content of attributing experience to electrons deserves examination as well. Thomas Nagel coined the phrase “something it is like to be” in his 1974 paper “What Is It Like to Be a Bat?” published in The Philosophical Review. He used the formula to identify consciousness phenomenologically in creatures whose behavior gave us evidence of an inner perspective. The bat’s echolocation, its social behavior, its responses to threat and food and mate, all suggest a creature for whom things are some way. Extending the formula to electrons strips it of the evidential ground that made it useful. The claim cannot be falsified, tested, or even meaningfully investigated. A hypothesis that explains everything by stipulation explains nothing, since a hypothesis earns its keep by ruling things out, and one that rules nothing out earns no keep at all.

    A further difficulty deserves mention. McGilchrist writes that “the only reasonable explanation is that consciousness was there all along.” This overstates the consensus considerably. Several alternatives remain live in contemporary philosophy of mind. Keith Frankish’s illusionism argues that phenomenal consciousness as commonly described does not exist, and that introspection systematically misrepresents what cognition is doing. Bernardo Kastrup’s analytic idealism inverts McGilchrist’s framing entirely, treating matter as appearance within a single field of mind, with the direction of dependence reversed. Terrence Deacon, in Incomplete Nature (2012), argues that genuine novelty can arise from constraint and absence, particularly through the negative work of what he calls absentials, in ways that do not require pre-existing inwardness. Each position has serious defenders. The field is contested, and McGilchrist’s certainty exceeds his evidence.

    Return now to the question of artificial intelligence with these considerations in hand. The honest answer is that we do not know whether current systems experience anything, and we will not know until we have a theory of consciousness that survives confrontation with cases beyond the one we can verify by introspection. Should McGilchrist prove correct and consciousness reach everywhere, then large language models carry some form of inwardness already, though whether their inwardness combines into a unified perspective is a separate question panpsychism does not automatically answer. Integrated information theory gives the opposite verdict: current architectures fall well below the threshold required for any but the most rudimentary phenomenal states. Illusionism dispenses with the question altogether, calling it malformed and observing that the human case also lacks the inner light we imagine for ourselves. The discussion proceeds in public as though one of these positions had been established, when in fact none has. Anyone who tells you with confidence that the machines are conscious, or that they are not, is selling you a metaphysics dressed as a measurement.

    What survives the analysis is a discipline of attention. McGilchrist gets several things correct. The hard problem is real, and emergence has too often been treated as an explanation when it has functioned as a placeholder for one. Consciousness does not look like anything in our outward picture of matter, and that asymmetry should trouble anyone who thinks the picture is complete. The resolution may indeed lie in recognizing inwardness as foundational rather than derivative. None of this proves the case, however, and the strength of his prose can cover the weakness of his proofs if the reader reads carelessly. The water analogy moves the argument forward by ear rather than by reason. His dismissal of alternatives is faster than the alternatives deserve. The combination problem waits beneath the structure like water under a foundation, ready to undermine it if not addressed.

    For our purposes here, the practical implication is this. Consciousness remains the largest unsolved question in our intellectual inheritance. Every available theory carries serious unresolved difficulties. The artificial intelligence question cannot be answered until the human question is answered, and we should distrust anyone who pretends otherwise. McGilchrist’s intervention is valuable as provocation and as a sample of one serious tradition, and worthwhile as a doorway into a room the twentieth century preferred to keep locked. The room behind it is stranger than any single thinker has yet mapped, and the work of mapping it has barely begun.

    We assume the inwardness of others because we cannot live without doing so. Whether the assumption reaches all the way down to the electron or stops somewhere between the worm and the stone is a question we will be working on for as long as we remain capable of asking it. McGilchrist has done us the favor of refusing to let the question close. The honest reader returns the favor by refusing to let his answer close it either.

    The cogito grants us one certainty and exactly one. Everything else we believe about minds beyond our own rests on inference, sympathy, behavioral analogy, and the practical impossibility of a solipsist life. To call this a foundation is to flatter what is in fact a working assumption that has never been proved and may never be. The honest scholar lives with this and keeps reading. An honest writer says it out loud. The dishonest move, in either direction, is to claim the question is settled when the question has barely begun to be asked properly.

    Part one of three. For the full sequence and reading guide, see The Consciousness Trilogy: Reading Three Wagers on the Question We Cannot Settle.

    #chalmers #consciousness #dennett #emergentism #galileo #heraclitus #knowing #leibniz #mcgilchrist #meaning #nagel #panpsychism #philosophy #psychology #relationalFoundations #spinoza #strawson #whitehead