#metacognition — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #metacognition, aggregated by home.social.
-
Promptbreeder claims "self-referential" prompt evolution — the LLM mutates the prompts that mutate its task prompts. But the paper's own ablation shows the dominant operator is simpler: a fixed library of 39 generic "thinking-style" hints that seeds the initial population. Prompt-optimization has since moved from operator menus toward natural-language feedback signals (GEPA, MIPROv2).
https://benjaminhan.net/posts/20260515-promptbreeder/?utm_source=mastodon&utm_medium=social
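The self-referential loop can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's code: `llm` is a placeholder for a real model call, and `THINKING_STYLES` abbreviates the fixed library of generic hints that seeds the initial population.

```python
import random

THINKING_STYLES = [
    "Let's think step by step.",
    "Consider edge cases first.",
    "Work backwards from the goal.",
]

def llm(prompt: str) -> str:
    return prompt + " [mutated]"  # placeholder for an actual LLM call

def evolve(task: str, pop_size: int = 4, generations: int = 3, fitness=len) -> str:
    # Each unit pairs a task-prompt with the mutation-prompt that rewrites it.
    population = [
        {"task_prompt": f"{random.choice(THINKING_STYLES)} {task}",
         "mutation_prompt": "Improve this prompt:"}
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        for unit in population:
            # First-order mutation: the mutation-prompt rewrites the task-prompt.
            unit["task_prompt"] = llm(f"{unit['mutation_prompt']} {unit['task_prompt']}")
            # Self-referential step: sometimes mutate the mutation-prompt itself.
            if random.random() < 0.2:
                unit["mutation_prompt"] = llm(f"Improve this instruction: {unit['mutation_prompt']}")
        # Keep the fitter half and duplicate it (the paper uses binary tournaments).
        population.sort(key=lambda u: fitness(u["task_prompt"]), reverse=True)
        population = [dict(u) for u in population[: pop_size // 2] for _ in range(2)]
    return population[0]["task_prompt"]

best = evolve("Solve: 12 * 17 = ?")
```

The point of the sketch is structural: the mutation-prompts live in the population alongside the task-prompts, so selection pressure can act on both levels, even though the ablation suggests most of the lift comes from the seeded thinking styles.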
-
GEPA optimizes prompts in compound AI systems by reading failed trajectories in natural language and editing the prompt of the module that caused the failure. Across six tasks it beats GRPO by 6% on average, up to 20%, with up to 35x fewer rollouts. Reflection extracts per-module diagnosis from a trajectory. GRPO collapses the same trajectory into one scalar and spreads it across every token.
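A minimal sketch of reflective prompt editing in this spirit (not GEPA's actual code): a compound system is a dict of module prompts, and `reflect` is a hypothetical stand-in for an LLM call that reads a failed trajectory as plain text and returns which module to blame plus a revised prompt for it.

```python
def reflect(trajectory: str, prompts: dict[str, str]) -> tuple[str, str]:
    # Placeholder diagnosis: blame the module named in the trajectory text.
    for name, prompt in prompts.items():
        if f"[{name} failed]" in trajectory:
            return name, prompt + " Be more careful about edge cases."
    first = next(iter(prompts))
    return first, prompts[first]

def optimize(prompts: dict[str, str], failed_trajectories: list[str]) -> dict[str, str]:
    prompts = dict(prompts)
    for traj in failed_trajectories:
        module, new_prompt = reflect(traj, prompts)
        # Targeted credit assignment: only the blamed module's prompt changes,
        # rather than one scalar reward being spread across every token.
        prompts[module] = new_prompt
    return prompts

system = {"retriever": "Find relevant passages.",
          "answerer": "Answer from the passages."}
tuned = optimize(system, ["[answerer failed] cited the wrong passage"])
```

The design contrast with GRPO is visible in the loop body: the natural-language diagnosis names a module, so the edit lands exactly where the failure occurred.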
-
SCoRe is a two-stage on-policy RL recipe that teaches a language model to revise its own answers using only self-generated data. On Gemini 1.5 Flash and 1.0 Pro it gains 15.6 points on MATH and 9.1 on HumanEval over the base model. At matched inference budgets, sequential self-correction beats parallel sampling up to 32 samples.
https://benjaminhan.net/posts/20260512-score/?utm_source=mastodon&utm_medium=social
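Why sequential revision can beat parallel sampling at a matched budget is easy to see in a toy simulation. All probabilities below are illustrative assumptions, not numbers from the paper: a revision that keeps correct answers and sometimes fixes wrong ones lifts accuracy, while a second parallel sample helps little without a verifier to pick among samples.

```python
import random

P_FIRST = 0.5  # assumed chance a single attempt is correct
P_KEEP = 0.9   # assumed chance a revision keeps a correct answer
P_FIX = 0.4    # assumed chance a revision fixes a wrong answer

def sequential_trial(rng: random.Random) -> bool:
    # Budget of 2: one attempt, then one self-correction pass.
    correct = rng.random() < P_FIRST
    return rng.random() < (P_KEEP if correct else P_FIX)

def parallel_trial(rng: random.Random) -> bool:
    # Budget of 2: two independent samples, pick one at random (no verifier).
    samples = [rng.random() < P_FIRST for _ in range(2)]
    return rng.choice(samples)

def accuracy(trial, n: int = 20000, seed: int = 0) -> float:
    rng = random.Random(seed)
    return sum(trial(rng) for _ in range(n)) / n

seq, par = accuracy(sequential_trial), accuracy(parallel_trial)
```

Under these assumed numbers, sequential accuracy is about 0.5 × 0.9 + 0.5 × 0.4 = 0.65 versus 0.5 for parallel, which is the qualitative shape of the result, though the paper's gains come from training the reviser, not from the arithmetic alone.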
-
Anthropic trains Claude to read and verbalize its own activations. On SWE-bench Verified, its activations register 'this is a test' 26% of the time, yet it verbalizes that observation only 1% of the time. What happens if NLA signals enter future training data? This "observer effect" could put a half-life on the 26%.
#Anthropic #Claude #Interpretability #Metacognition #LLMs #AISafety #AI
-
It has been 2 years and 10 months since I last offered you a SHOCKING! series. You know, those long-form #entretiens in which I talk either with someone who has deeply questioned their own beliefs, or with an expert who sheds fresh light on how humans think.
⏯️ Teaser: https://youtu.be/8fid2jxRkLw
-
Why are we so quick to condemn the actions of others while excusing our own?
Albert Moukheiber, PhD in #neurosciences and clinical #psychologue, explains the "fundamental attribution error", a thought pattern that makes us forget that every person has a complex inner psychological life.
-
Caro et al. investigate cognition and metacognition in wild great tit parents deciding which chick to feed. They found that parents change their minds frequently, and the decision time varies with decision complexity and urgency.
Read now ahead of print!
https://www.journals.uchicago.edu/doi/10.1086/738498
-
Bart De Strooper presented at the Copenhagen AD/PD conference an excellent sketch of the three main inflection points in the pathophysiological evolution of Alzheimer's disease: https://www.alzforum.org/news/conference-coverage/big-picture-three-inflection-points-mark-amyloid-cascade
My own transition from amyloid plaques to p-tau and tangles was delayed by four years of anti-amyloid therapy (aducanumab) in a clinical research project during 2017-22. Sadly, the most probable explanation for my rapidly worsening cognitive problems may indeed be the tau tangles, which I had somehow avoided earlier. I know there are experimental therapies around somewhere for those gremlins too, but sadly not within my own reach. With respect to my AD, I'm afraid, it's "too late, my friend".
I encourage anybody with a slowly lethal disease to keep mentally in touch with it as long as you can. That's what we human beings were made for. 🤗
#alzheimer #ad #at #disease #medical #amyloid #tau #lethal #sairaus #alzForum #realism #humanity #reflection #thinking #dying #terminallyill #metacognition #tietoisuus #consciousness
-
Healing Trauma!
Unlock the secrets to healing trauma! Explore how emotional coherence and metacognition can help you process difficult experiences and transform your identity. Learn how long to feel emotions and when to move forward.
#traumahealing #emotionalhealing #personalgrowth #metacognition #mindfulness #mentalhealth #selfhelp #NeuroFeedback #Neuropathy #NeuroSurgeon #NeuroDiversidade #NeuroPsychology #NeuroSciences #Neurologie #NeuroCoaching #NeuroRehab #wellbeing #innerpeace
-
The episode "Échanger avec son entourage croyant" (talking with your religious friends and family) is available on audio apps, on the metadechoc.fr website, and as an illustrated, subtitled video on YouTube: https://youtu.be/6EuFiYD9JGA.
#amitié #famille #réflexion #métacognition #PodcastSansPub #AdFreePodcast
-
🛠 Cognitive Multiculturalism: Training Your Brain to Switch Between Worlds
"Cognitive multiculturalism gives you the mental agility to better navigate workplace dynamics, understand global events, or simply connect with people different from yourself. And you don’t need to move abroad or learn a new language. You just need to intentionally diversify three things..."
https://nesslabs.com/cognitive-multiculturalism?ref=refind
🏷 #freethinker #lifestyle #living #metacognition #openminded #psychology #success #wellbeing #wellness
-
Find this podcast on all audio apps. To watch this SCOOP video, head to the Méta de Choc YouTube channel and to metadechoc.fr.
🥰 To support Méta de Choc 🥰
https://soutenir.metadechoc.fr/
#métadechoc #podcastfr #métacognition #EspritCritique #livres #sciences #recherches #spiritualité #podcastSansPub #AdFreePodcast
-
LLMs become more dangerous as they rapidly get easier to use
This is a concise summary by Ethan Mollick of what I increasingly see as a key factor driving the evolution of consumer-facing LLMs:
Using AI well used to be a pretty challenging process which involved crafting a prompt using techniques like chain-of-thought along with learning tips and tricks to get the most out of your AI. In a recent series of experiments, however, we have discovered that these techniques don’t really help anymore. Powerful AI models are just getting better at doing what you ask them to or even figuring out what you want and going beyond what you ask (and no, threatening them or being nice to them does not seem to help on average).
What limited truth there was to the inflated discourse of ‘prompt engineering’ has largely evaporated at this point, leaving us in a strange position. The conversational approach I’ve always advocated, literally treating the LLM as an interlocutor analogous to a human collaborator, remains the best way of getting the most out of these systems. But neither this nor ‘prompt engineering’ is necessary to get LLMs to perform usefully in real world contexts.
This seems like an obviously positive thing for users. However, my concern is that the burden of articulation with LLMs (explaining what you want in a clear and substantive way) was a necessary condition for responsible use of LLMs. This burden was never sustainable in a commercially viable product. It presupposed a consistent level of reflexivity, a degree of cultural capital and a fluency with writing that were always incompatible with the kind of mass-market software that has a user base of one billion plus.
However it did ensure that users thought carefully about their use, because you literally couldn’t produce valuable outcomes with an LLM unless you were doing this. Now it’s very easy to get LLMs to infer what you want, often with at least some degree of success. What’s lost in the process is the reflexivity that came prior to use (i.e. the moment when you stopped to think “what am I actually asking Claude to do here?”) and the learning that came through reflecting on the output and the related interaction it generates, as your sense of what you were trying to do is refined and expanded through dialogue.
This is how Claude Opus 4 glossed my analysis here. It conveyed my own words back to me but it did it in a way which surfaced an element (“the pause”) which was only latent in my original phrasing (“the moment when”), leading me to reflect more on the temporal dimension to “the burden of articulation”:
The “burden of articulation” you describe served as a kind of productive friction – it forced users to crystallize their thinking before engaging with the tool. There’s something valuable in that pause, that moment of having to translate a vague intention into clear language. It’s reminiscent of how writing itself can clarify thinking, or how explaining a problem to someone else often helps you understand it better yourself.
In this sense friction with LLMs was a positive thing because it necessitated meta-cognition. The optimisation of the human-model interaction erodes a feature which I would argue was immensely important, even if its value only manifests outside the interaction itself. It doesn’t, I think, level the playing field, because those with the necessary capital and fluency can still use LLMs in a deeper and more reflective way, with better outcomes emerging from the process.
But it does create worrying implications for organisations which build this practice into their roles. Earlier today I heard Cory Doctorow use the brilliant analogy of asbestos to describe LLMs being incorporated into digital infrastructure in ways which we will likely later have to remove at immense cost. What’s the equivalent analogy for the social practice of those operating within the organisations?
https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338
#articulation #chatbots #coryDoctorow #LLMs #metacognition #promptEngineering #prompting #reflexivity
-
LLMs become more dangerous as they rapidly get easier to use
This is a concise summary by Ethan Mollick of what I increasingly see as a key factor driving the evolution of consumer-facing LLMs:
Using AI well used to be a pretty challenging process which involved crafting a prompt using techniques like chain-of-thought along with learning tips and tricks to get the most out of your AI. In a recent series of experiments, however, we have discovered that these techniques don’t really help anymore. Powerful AI models are just getting better at doing what you ask them to or even figuring out what you want and going beyond what you ask (and no, threatening them or being nice to them does not seem to help on average).
What limited truth there was to the inflated discourse of ‘prompt engineering’ has largely evaporated at this point, leaving us in a strange position. The conversational approach I’ve always advocated, literally treating the LLM as an interlocutor analogous to a human collaborator, remains the best way of getting the most out of these systems. But neither this nor ‘prompt engineering’ is necessary to get LLMs to perform usefully in real world contexts.
This seems like an obviously positive thing for users. However my concern is the burden of articulation with LLMs, explaining what you want in a clear and substantive way, was a necessary condition for responsible use of LLMs. This burden was never sustainable at the level of a commercially viable product. It presupposed a consistent level of reflexivity, a degree of cultural capital and a fluency with writing which was always incompatible with the kind of mass market software that has a user base of one billion plus.
However it did ensure that users were thinking carefully about their use because you literally couldn’t produce valuable outcomes with an LLM unless you were doing this. Now it’s very easy to get LLMs to infer what you want, with success much of the time to at least some degree. What’s lost in the process is the reflexivity which came prior to use (i.e. the moment when you stopped to think “what am I actually asking Claude to do here?”) and the learning which came through reflecting on the output and the related interaction it generates, as your sense of what you were trying to do is refined and expanded through dialogue.
This is how Claude Opus 4 glossed my analysis here. It conveyed my own words back to me but it did it in a way which surfaced an element (“the pause”) which was only latent in my original phrasing (“the moment when”), leading me to reflect more on the temporal dimension to “the burden of articulation”:
The “burden of articulation” you describe served as a kind of productive friction – it forced users to crystallize their thinking before engaging with the tool. There’s something valuable in that pause, that moment of having to translate a vague intention into clear language. It’s reminiscent of how writing itself can clarify thinking, or how explaining a problem to someone else often helps you understand it better yourself.
In this sense friction with LLMs was a positive thing because it necessitated meta-cognition. The optimisation of the human-model interaction erodes a feature which I would argue was immensely important, even if its value is only manifested outside of the interaction itself. It doesn’t I think level the playing field because those with the necessary capital and fluency can still use LLMs in a deeper and more reflective way, with better outcomes emerging from the process.
But it does create worrying implications for organisations which build this practice into their roles. Earlier today I heard Cory Doctorow use the brilliant analogy of asbestos to describe LLMs being incorporated into digital infrastructure in ways which we will likely later have to remove at immense cost. What’s the equivalent analogy for the social practice of those operating within the organisations?
https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338
#articulation #chatbots #coryDoctorow #LLMs #metacognition #promptEngineering #prompting #reflexivity
-
LLMs become more dangerous as they rapidly get easier to use
This is a concise summary by Ethan Mollick of what I increasingly see as a key factor driving the evolution of consumer-facing LLMs:
Using AI well used to be a pretty challenging process which involved crafting a prompt using techniques like chain-of-thought along with learning tips and tricks to get the most out of your AI. In a recent series of experiments, however, we have discovered that these techniques don’t really help anymore. Powerful AI models are just getting better at doing what you ask them to or even figuring out what you want and going beyond what you ask (and no, threatening them or being nice to them does not seem to help on average).
What limited truth there was to the inflated discourse of ‘prompt engineering’ has largely evaporated at this point, leaving us in a strange position. The conversational approach I’ve always advocated, literally treating the LLM as an interlocutor analogous to a human collaborator, remains the best way of getting the most out of these systems. But neither this nor ‘prompt engineering’ is necessary to get LLMs to perform usefully in real world contexts.
This seems like an obviously positive thing for users. However my concern is the burden of articulation with LLMs, explaining what you want in a clear and substantive way, was a necessary condition for responsible use of LLMs. This burden was never sustainable at the level of a commercially viable product. It presupposed a consistent level of reflexivity, a degree of cultural capital and a fluency with writing which was always incompatible with the kind of mass market software that has a user base of one billion plus.
However it did ensure that users were thinking carefully about their use because you literally couldn’t produce valuable outcomes with an LLM unless you were doing this. Now it’s very easy to get LLMs to infer what you want, with success much of the time to at least some degree. What’s lost in the process is the reflexivity which came prior to use (i.e. the moment when you stopped to think “what am I actually asking Claude to do here?”) and the learning which came through reflecting on the output and the related interaction it generates, as your sense of what you were trying to do is refined and expanded through dialogue.
This is how Claude Opus 4 glossed my analysis here. It conveyed my own words back to me but it did it in a way which surfaced an element (“the pause”) which was only latent in my original phrasing (“the moment when”), leading me to reflect more on the temporal dimension to “the burden of articulation”:
The “burden of articulation” you describe served as a kind of productive friction – it forced users to crystallize their thinking before engaging with the tool. There’s something valuable in that pause, that moment of having to translate a vague intention into clear language. It’s reminiscent of how writing itself can clarify thinking, or how explaining a problem to someone else often helps you understand it better yourself.
In this sense friction with LLMs was a positive thing because it necessitated meta-cognition. The optimisation of the human-model interaction erodes a feature which I would argue was immensely important, even if its value is only manifested outside of the interaction itself. It doesn’t I think level the playing field because those with the necessary capital and fluency can still use LLMs in a deeper and more reflective way, with better outcomes emerging from the process.
But it does create worrying implications for organisations which build this practice into their roles. Earlier today I heard Cory Doctorow use the brilliant analogy of asbestos to describe LLMs being incorporated into digital infrastructure in ways which we will likely later have to remove at immense cost. What’s the equivalent analogy for the social practice of those operating within the organisations?
https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338
#articulation #chatbots #coryDoctorow #LLMs #metacognition #promptEngineering #prompting #reflexivity
-
LLMs become more dangerous as they rapidly get easier to use
This is a concise summary by Ethan Mollick of what I increasingly see as a key factor driving the evolution of consumer-facing LLMs:
Using AI well used to be a pretty challenging process which involved crafting a prompt using techniques like chain-of-thought along with learning tips and tricks to get the most out of your AI. In a recent series of experiments, however, we have discovered that these techniques don’t really help anymore. Powerful AI models are just getting better at doing what you ask them to or even figuring out what you want and going beyond what you ask (and no, threatening them or being nice to them does not seem to help on average).
What limited truth there was to the inflated discourse of ‘prompt engineering’ has largely evaporated at this point, leaving us in a strange position. The conversational approach I’ve always advocated, literally treating the LLM as an interlocutor analogous to a human collaborator, remains the best way of getting the most out of these systems. But neither this nor ‘prompt engineering’ is necessary to get LLMs to perform usefully in real world contexts.
This seems like an obviously positive thing for users. However my concern is the burden of articulation with LLMs, explaining what you want in a clear and substantive way, was a necessary condition for responsible use of LLMs. This burden was never sustainable at the level of a commercially viable product. It presupposed a consistent level of reflexivity, a degree of cultural capital and a fluency with writing which was always incompatible with the kind of mass market software that has a user base of one billion plus.
However it did ensure that users were thinking carefully about their use because you literally couldn’t produce valuable outcomes with an LLM unless you were doing this. Now it’s very easy to get LLMs to infer what you want, with success much of the time to at least some degree. What’s lost in the process is the reflexivity which came prior to use (i.e. the moment when you stopped to think “what am I actually asking Claude to do here?”) and the learning which came through reflecting on the output and the related interaction it generates, as your sense of what you were trying to do is refined and expanded through dialogue.
This is how Claude Opus 4 glossed my analysis here. It conveyed my own words back to me but it did it in a way which surfaced an element (“the pause”) which was only latent in my original phrasing (“the moment when”), leading me to reflect more on the temporal dimension to “the burden of articulation”:
The “burden of articulation” you describe served as a kind of productive friction – it forced users to crystallize their thinking before engaging with the tool. There’s something valuable in that pause, that moment of having to translate a vague intention into clear language. It’s reminiscent of how writing itself can clarify thinking, or how explaining a problem to someone else often helps you understand it better yourself.
In this sense friction with LLMs was a positive thing because it necessitated meta-cognition. The optimisation of the human-model interaction erodes a feature which I would argue was immensely important, even if its value is only manifested outside of the interaction itself. It doesn’t I think level the playing field because those with the necessary capital and fluency can still use LLMs in a deeper and more reflective way, with better outcomes emerging from the process.
But it does create worrying implications for organisations which build this practice into their roles. Earlier today I heard Cory Doctorow use the brilliant analogy of asbestos to describe LLMs being incorporated into digital infrastructure in ways which we will likely later have to remove at immense cost. What’s the equivalent analogy for the social practice of those operating within the organisations?
https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338
#articulation #chatbots #coryDoctorow #LLMs #metacognition #promptEngineering #prompting #reflexivity
-
LLMs become more dangerous as they rapidly get easier to use
This is a concise summary by Ethan Mollick of what I increasingly see as a key factor driving the evolution of consumer-facing LLMs:
Using AI well used to be a pretty challenging process which involved crafting a prompt using techniques like chain-of-thought along with learning tips and tricks to get the most out of your AI. In a recent series of experiments, however, we have discovered that these techniques don’t really help anymore. Powerful AI models are just getting better at doing what you ask them to or even figuring out what you want and going beyond what you ask (and no, threatening them or being nice to them does not seem to help on average).
What limited truth there was to the inflated discourse of ‘prompt engineering’ has largely evaporated at this point, leaving us in a strange position. The conversational approach I’ve always advocated, literally treating the LLM as an interlocutor analogous to a human collaborator, remains the best way of getting the most out of these systems. But neither this nor ‘prompt engineering’ is necessary to get LLMs to perform usefully in real world contexts.
This seems like an obviously positive thing for users. However my concern is the burden of articulation with LLMs, explaining what you want in a clear and substantive way, was a necessary condition for responsible use of LLMs. This burden was never sustainable at the level of a commercially viable product. It presupposed a consistent level of reflexivity, a degree of cultural capital and a fluency with writing which was always incompatible with the kind of mass market software that has a user base of one billion plus.
However, it did ensure that users were thinking carefully about their use, because you literally couldn’t produce valuable outcomes with an LLM unless you did. Now it’s very easy to get LLMs to infer what you want, and they succeed much of the time, at least to some degree. What’s lost in the process is the reflexivity which came prior to use (i.e. the moment when you stopped to think “what am I actually asking Claude to do here?”) and the learning which came through reflecting on the output and the interaction it generates, as your sense of what you were trying to do is refined and expanded through dialogue.
This is how Claude Opus 4 glossed my analysis here. It conveyed my own words back to me but it did it in a way which surfaced an element (“the pause”) which was only latent in my original phrasing (“the moment when”), leading me to reflect more on the temporal dimension to “the burden of articulation”:
The “burden of articulation” you describe served as a kind of productive friction – it forced users to crystallize their thinking before engaging with the tool. There’s something valuable in that pause, that moment of having to translate a vague intention into clear language. It’s reminiscent of how writing itself can clarify thinking, or how explaining a problem to someone else often helps you understand it better yourself.
In this sense friction with LLMs was a positive thing because it necessitated meta-cognition. The optimisation of the human-model interaction erodes a feature which I would argue was immensely important, even if its value only manifests outside the interaction itself. Nor does it, I think, level the playing field: those with the necessary capital and fluency can still use LLMs in a deeper and more reflective way, with better outcomes emerging from the process.
But it does create worrying implications for organisations which build this practice into their roles. Earlier today I heard Cory Doctorow use the brilliant analogy of asbestos to describe LLMs being incorporated into digital infrastructure in ways which we will likely later have to remove at immense cost. What’s the equivalent analogy for the social practice of those operating within the organisations?
https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338
#articulation #chatbots #coryDoctorow #LLMs #metacognition #promptEngineering #prompting #reflexivity
-
Proprioception, Interoception, Exteroception: The Three Flavors of Prediction
#BrainBody #PredictiveCoding #Interoception #Proprioception #Exteroception #Allostasis #Metacognition #PerceptualControlTheory #Visceromotor #AgranularCortex #Meditation #Attention #Neuroscience #BrainPrediction #EnergyRegulation
-
The Mind as Semi-Solid Smoke
This post continues the series on Socratic Thinking, turning the space-and-place lens inward to examine the mind itself. Human minds can be thought of as an imperfect place with the ability to create their own insta-places to navigate ambiguity.
On the Trail (1889) by Winslow Homer. Original from The National Gallery of Art. Digitally enhanced by rawpixel.
Exploration in any real or conceptual space needs navigational markers with sufficient meaning. Humans are biologically predisposed to seek out and use navigational markers. This tendency is rooted in our neural architecture, emerges early in life, and is shared with other animals, reflecting its deep evolutionary origins.1,2 Even the simplest of life, performing chemotaxis, uses the signal-field of food to navigate.
When you’re microscopic, the territory is the map; at human scale, we externalise those cues as landmarks—then mirror the process inside our heads. Just as cells follow chemical gradients, our thoughts follow self-made landmarks, yet these landmarks are vaporous.
From the outside our mind is a single place, it is our identity. Probe closer and our identity is nebulous and dissolves the way a city dissolves into smaller and smaller places the closer you look. We use our identity to create the first stable place in the world and then use other places to navigate life. However, these places come from unreliable sources, our internal and external environments. How do we know the places are even real, and do we have the knowledge to trust their reality? Well, we don’t. We can’t judge our mental landmarks false. Callard calls this normative self-blindness: the built-in refusal to saw off the branch we stand on.
Normative self-blindness is a trick to gloss over details and keep moving. Insta-places are conjured from our experience and are treated as solid no matter how poorly they are tied down by actual knowledge. We can accept that a place was loosely formed in the past (an error), or that it is not yet well defined in the future (an unknown). In the moment, however, the places exist and we use them to see.
Understanding and accepting that our minds work this way is a key tenet of Socratic Thinking. It makes adopting the posture of inquiry much easier. Socratic inquiry begins by admitting that everyone’s guiding landmarks may be made of semi-solid smoke.
1Chan, Edgar, Oliver Baumann, Mark A. Bellgrove, and Jason B. Mattingley. “From Objects to Landmarks: The Function of Visual Location Information in Spatial Navigation.” Frontiers in Psychology 3 (2012). https://doi.org/10.3389/fpsyg.2012.00304.
2Freas, Cody A., and Ken Cheng. “The Basis of Navigation Across Species.” Annual Review of Psychology 73, no. 1 (January 4, 2022): 217–41. https://doi.org/10.1146/annurev-psych-020821-111311.
#AgnesCallard #cognitiveBiases #cognitiveScience #criticalThinking #decisionMaking #epistemology #evolutionaryPsychology #humanPsychology #identity #introspection #mentalModels #metacognition #mindset #navigation #neuroscience #normativeSelfBlindness #personalDevelopment #philosophy #sensemaking #socraticThinking #spaceAndPlace
-
The educator panic over AI is real, and rational.
I've been there myself. The difference is I moved past denial to a more pragmatic question: since AI regulation seems unlikely (with both camps refusing to engage), how do we actually work with these systems?
The "AI will kill critical thinking" crowd has a point, but they're missing context. Critical reasoning wasn't exactly thriving before AI arrived: just look around. The real question isn't whether AI threatens thinking skills, but whether we can leverage it the same way we leverage other cognitive tools.
We don't hunt our own food or walk everywhere anymore. We use supermarkets and cars. Most of us Google instead of visiting libraries. Each tool trade-off changed how we think and what skills matter. AI is the next step in this progression, if we're smart about it.
The key is learning to think with AI rather than being replaced by it. That means understanding both its capabilities and our irreplaceable human advantages. 1/3
#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy
-
AI isn't going anywhere. Time to get strategic:
Instead of mourning lost critical thinking skills, let's build on them through cognitive delegation: using AI as a thinking partner, not a replacement.
This isn't some Silicon Valley fantasy. Three decades of cognitive research already mapped out how this works:
Cognitive Load Theory: Our brains can only juggle so much at once. Let AI handle the grunt work while you focus on making meaningful connections.
Distributed Cognition: Naval crews don't navigate with individual genius; they spread thinking across people, instruments, and procedures. AI becomes another crew member in your cognitive system.
Zone of Proximal Development: We learn best with expert guidance bridging what we can't quite do alone. AI can serve as that "more knowledgeable other" (though it's still early days).
The table below shows what this looks like in practice. 2/3
#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy