home.social

#cognitivedebt — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #cognitivedebt, aggregated by home.social.

  1. Software #Developers Say #AI Is Rotting Their Brains

    by Emanuel Maiberg, May 13, 2026

    Excerpt: "The developers I talked to found AI useful for some tasks. Several developers said that it was good for experimentation, allowing them to quickly prototype an idea or to implement something in a domain they’re unfamiliar with. One developer said it was a good information interface. Specifically, he said, the AI helped him find where on the server a certain request is handled, summarize logs, or find documentation related to code changes.

    "The problem all the developers I talked to agreed on is that the more they relied on AI to code, the more the skills they’ve honed for years deteriorated. This is by now a well studied phenomenon sometimes referred to as '#CognitiveDebt' or '#CognitiveAtrophy.' The idea is that people who use AI to automate certain parts of their job lose the ability to do those tasks well, therefore #DeSkilling themselves.

    " 'I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I've been a software engineer for many years now and it feels like I am back before I ever wrote a single line of code,' the software developer at a small web design firm told me.

    " 'It's making me dumber for sure,' the fintech software developer told me. 'It's like when we got #cellphones and stopped remembering phone numbers, but it's grown to me mentally outsourcing "thinking" in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded because the all-knowing-dalai-llama is just a question away from giving me his take. And supposedly I tell myself ill just use it for inspiration but it ends up being my only thought. It gives you the illusion of productivity and expertise but at the end of the day you are more divorced from the output you submit than before.'

    " 'When I was using it for code generation, I found myself having a lot of trouble building and maintaining a #MentalModel of the code I was working with,' the software engineer at the FAANG told me. 'Another aspect is that I joined late last year and [the company’s] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that.'"

    Read more:
    404media.co/software-developer

    Archived version:
    archive.ph/2vjJm

    #AISucks #DumbingUsDown #BrainRot #AIBrainRot #MentalMaps #CriticalThinkingSkills #UseYourBrain #UseItOrLoseIt

  6. 🧠🤯 Ah, the internet's latest buzzword: "Cognitive Debt"—because "confusion" wasn't hip enough! 🤓 The author's noble quest to enlighten us is as clear as mud, with enough #jargon to make even the most seasoned #buzzword #bingo player weep tears of bewilderment. 🙄
    margaretstorey.com/blog/2026/0 #CognitiveDebt #Confusion #Enlightenment #HackerNews #ngated

  7. @alastor8472.bsky.social
    thank you. That is an excellent article about #claude and #CognitiveDebt. I am surprised that more people are not talking about this, and that no one is talking about a solution.

  8. New one from Cal Newport. New research shows that using #AI increases shallow efforts while decreasing the amount of time spent on tasks that move the needle. It follows the history of other productivity tools that haven't necessarily produced stronger work or a better work environment.

    #AISlop #WorkSlop #CognitiveDebt

    youtube.com/watch?v=NDyuJcR2GH4

  9. It turns out that "Artificial Intelligence" is an *inherently hallucinatory technology*, and that's not the dismissive summary, that is the *initial premise*. #ZeroGravityMindset #cognitivedebt #bubble

  10. #AIagents: As #generativeAI and #agenticAI are adopted, #cognitivedebt, the accumulation of knowledge and understanding lost due to #rapiddevelopment, becomes a greater threat than #technicaldebt. This debt, residing in developers’ minds, can hinder progress and understanding of software systems. margaretstorey.com/blog/2026/0 #tech #media #news

  11. I normally like Kohler’s column when he unpacks economic news, but when he digresses into tech topics, I’m less likely to read him. I did this time (because things may change after all) but all I got from it is GenAI KoolAid and vaguely hinted at problems that #TechBros are keen on glossing over.

    It is a pity that the author is not turning his analytical mind to take a closer look at what the frenetic pace of #GenAI development and the #GenAISlop it splashes about the place as it inexorably pushes forward with not only #TechnicalDebt but also #CognitiveDebt laden implementations. Sooner than we think, we’ll be faced with problems we do not know how to solve for want of the ability to #FaultFind in a sea of #sloppy and #incomprehensible machine produced code mashups.

    Where to an economy held hostage to runaway coding machines? The ‘wagering system’ that is the stock market will start to look like ‘doctored’ one-arm bandits of yore. Where the wealth will flow is very obvious.

    #EatTheRich #Antifa #Resistance #RedistributeWealth #UBI
    #TaxReform

    abc.net.au/news/2026-02-16/ai-

  16. RE: mastodon.green/@gerrymcgovern/

    One of this article's many great points: Using #GenAI is a "metacognitive mirage".

    > When participants used #ChatGPT to draft essays, brain scans revealed [a 47% drop] in neural connectivity across regions associated with memory, language, and critical reasoning. Their brains worked less, but they felt just as engaged.
    > Students aren’t just learning less; their brains are learning not to learn.

    #cognitiveDebt #StochasticParrots #MRI #brainDevelopment

    #Chatversity replaces learning with cheating.

  17. Will AI Bury Future Generations in Cognitive Debt? project-syndicate.org/commenta
    "As companies seek to automate repetitive tasks in the name of cost-cutting, they should consider the longer-term implications. If we transfer all codified knowledge to machines, we will bequeath to future generations a world where it will be ever harder to learn by doing, to achieve mastery, and thus to aspire to creative freedom.

    …Emerging markets and developing economies, which are leapfrogging straight to native, widespread #AI adoption, may view things differently. The #cognitiveDebt that we are leaving for younger people in advanced economies may be their opportunity. It will be our duty to pay attention. For now, though, acknowledging that the debt exists, and will grow, is the first step toward addressing it."
    #economics

  18. Part two of a three part series of the Artefacts newsletter - what does an LLM look like, and how does that help us think about how we use them?

    buttondown.com/artefacts/archi

    #cognitivedebt #ai #llm

  19. Fascinating MIT study. tl;dr using AI to do your writing makes you stupid(er).

    Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

    media.mit.edu/publications/you

    PDF file: arxiv.org/pdf/2506.08872

    h/t Cal Newport's Deep Questions podcast, ep. 359

  20. A good podcast that raises red flags about that MIT Media Lab paper

    I felt a little sheepish suggesting that the writing in the Media Lab paper about “cognitive debt” and ChatGPT needed some work. Ashley Juavinett, Professor of Neurobiology at UC San Diego, and psychologist Cat Hicks have no such qualms. Their podcast, “You Deserve Better Brain Research,” addresses some serious problems with this “weird document,” from the writing to methods and research design. I’m putting it up here because I enjoyed and learned from it, and I hope others will, too.

    https://open.spotify.com/episode/0XLGvUjtmrdEtHVaYUBo5X

    #artificialIntelligence #cognitiveDebt #dialogue #humanEncounter #LLMs #sharedCommitment

  22. Read the conclusion of the recent Media Lab paper about LLMs. It’s a Non-Friction Nightmare.

    No, that’s not a typo in my title. (Yes, it’s a bad pun.)

    I’ve just had my first look at the MIT Media Lab paper that is making the rounds: “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” 

    This paper is disturbing, to say the least. What the authors call “friction” is what we used to call thinking, or it’s at least an essential element of thinking, the effort of it. That effort includes the give and take of inquiry, the difficulty of dialogue, the sweat of education, the work of human language and human encounter. 

    The paper’s conclusion only scratches the surface of this problem when it addresses “ethical considerations.”

    Consider what is probably the most alarming sentence here, which describes what happens when you reduce friction: people reach the conclusions the algorithm wants them to reach – or, rather, the algorithm reaches conclusions for them; people reach for nothing at all.

    It’s surrender.  Not just to machines, mind you, not just to the algorithm, but also to the interests (“the priorities”) the algorithm represents.

    By surrendering to these priorities, allowing ourselves to be guided by them, we’re also throwing in the towel on shared human experience, co-coordination and mutual guidance, reliance on each other and shared commitment — which is the only way we can work out our own priorities.

    Finally, I can’t post this on my blog (a little center of friction in its own right) without saying something about the writing here.

    I know this is a draft paper, but this conclusion sure could use another going-over. It’s not just the typo in the penultimate paragraph (“theis” instead of “their”) that needs correcting; there’s also that awkward bit about “net positive for the humans” in the final paragraph (which sounds like it came straight from an LLM) and the resort to cliche (“technological crossroads”) and industry jargon (“unprecedented opportunities for enhancing learning and information access”). The findings here deserve more clarity.

    I’d also like to see a little more about the social and political consequences that would seem to follow inevitably from the “cognitive consequences” the authors document. But maybe that’s a matter for another paper.   

    As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with LLM integration in educational and informational contexts. While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking, and intellectual independence demands a very careful consideration and continued research. 

    The LLM undeniably reduced the friction involved in answering participants’ questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users’ inclination to critically evaluate the LLM’s output or ”opinions” (probabilistic answers based on the training datasets). This highlights a concerning evolution of the ‘echo chamber’ effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM’s shareholders…. 

    Only a few participants in the interviews mentioned that they did not follow the “thinking” [124] aspect of the LLMs and pursued their line of ideation and thinking. 

    Regarding ethical considerations, participants who were in the Brain-only group reported higher satisfaction and demonstrated higher brain connectivity, compared to other groups. Essays written with the help of LLM carried a lesser significance or value to the participants (impaired ownership, Figure 8), as they spent less time on writing (Figure 33), and mostly failed to provide a quote from theis [sic] essays (Session 1, Figure 6, Figure 7). 

    Human teachers “closed the loop” by detecting the LLM-generated essays, as they recognized the conventional structure and homogeneity of the delivered points for each essay within the topic and group. 

    We believe that the longitudinal studies are needed in order to understand the long-term impact of the LLMs on the human brain, before LLMs are recognized as something that is net positive for the humans.

    #artificialIntelligence #cognitiveDebt #dialogue #friction #humanEncounter #language #LLMs #resistance #sharedCommitment

  23. Read the conclusion of the recent Media Lab paper about LLMs. It’s a Non-Friction Nightmare.

    No, that’s not a typo in my title.

    I’ve just had my first look at the MIT Media Lab paper that is making the rounds: “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” 

    This paper is disturbing, to say the least. What the authors call “friction” is what we used to call thinking, or it’s at least an essential element of thinking, the effort of it. That effort includes the give and take of inquiry, the difficulty of dialogue, the sweat of education, the work of human language and human encounter. 

    The paper’s conclusion only scratches the surface of this problem when it addresses “ethical considerations.”

    Consider what is probably the most alarming sentence here, which describes what happens when you reduce friction: people reach the conclusions the algorithm wants them to reach – or, rather, the algorithm reaches conclusions for them; people reach for nothing at all.

    It’s surrender.  Not just to machines, mind you, not just to the algorithm, but also to the interests (“the priorities”) the algorithm represents.

    By surrendering to the these priorities, allowing ourselves to be guided by them, we’re also throwing in the towel on shared human experience, co-coordination and mutual guidance, reliance on each other and shared commitment — which is the only way we can work out our own priorities.

    Finally, I can’t post this on my blog (a little center of friction in its own right) without saying something about the writing here.

    I know this is a draft paper, but this conclusion sure could use another going-over. It’s not just the typo in the penultimate paragraph (“theis” instead of “their”) that needs correcting; there’s also that awkward bit about “net positive for the humans” in the final paragraph (which sounds like it came straight from an LLM) and the resort to cliche (“technological crossroads”) and industry jargon (“unprecedented opportunities for enhancing learning and information access”). The findings here deserve more clarity.

    Last, I’d like to see a little more about the social and political consequences that would seem to follow inevitably from the “cognitive consequences” the authors document. But maybe that’s a matter for another paper.   

    As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with LLM integration in educational and informational contexts. While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking, and intellectual independence demands a very careful consideration and continued research. 

    The LLM undeniably reduced the friction involved in answering participants’ questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users’ inclination to critically evaluate the LLM’s output or ”opinions” (probabilistic answers based on the training datasets). This highlights a concerning evolution of the ‘echo chamber’ effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM’s shareholders…. 

    Only a few participants in the interviews mentioned that they did not follow the “thinking” [124] aspect of the LLMs and pursued their line of ideation and thinking. 

    Regarding ethical considerations, participants who were in the Brain-only group reported higher satisfaction and demonstrated higher brain connectivity, compared to other groups. Essays written with the help of LLM carried a lesser significance or value to the participants (impaired ownership, Figure 8), as they spent less time on writing (Figure 33), and mostly failed to provide a quote from theis [sic] essays (Session 1, Figure 6, Figure 7). 

    Human teachers “closed the loop” by detecting the LLM-generated essays, as they recognized the conventional structure and homogeneity of the delivered points for each essay within the topic and group. 

    We believe that the longitudinal studies are needed in order to understand the long-term impact of the LLMs on the human brain, before LLMs are recognized as something that is net positive for the humans.

    #artificialIntelligence #cognitiveDebt #dialogue #friction #humanEncounter #language #LLMs #resistance #sharedCommitment

  27. 🤖🧠 Oh no, #AI is making your brain lazy! The tragic tale of "cognitive debt" piles up like unpaid student loans, as ChatGPT writes your essays and you forget how to use a pen. 🙄💸 Thank goodness for the Simons Foundation, because someone has to save these poor souls from themselves! 🤦‍♂️
    arxiv.org/abs/2506.08872 #CognitiveDebt #BrainHealth #SimonsFoundation #TechImpact #HackerNews #ngated