#prompting — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #prompting, aggregated by home.social.
-
Agree, but what about #writers?
https://www.ft.com/content/a4b771cb-2d03-4af8-ae5e-c9675b46707c?shem=dsdf,sharefoc,agadiscoversdl,,sh/x/discover/m1/4
Unless we're all just #prompting at this point and not #writing. My take here:
https://nkozphoto.com/index.php/2026/03/09/writing-critical-thinking-and-ai-where-are-we-headed/ #gentrification #realestate #nyc #brooklyn #publishing #ai #tech #chatbots #academia #books #journalism -
#Design #Approaches
AI design has no soul · The role of typography in the AI age https://ilo.im/16cqxt_____
#AI #Prompting #Thinking #Feelings #Typography #Workflows #ProductDesign #UxDesign #UiDesign #WebDesign -
#Development #Analyses
The duality of AI models in the browser · “I’m bullish, but also have concerns.” https://ilo.im/16cnmd_____
#Prompting #AI #LLMs #SLMs #Chrome #Browser #PromptAPI #APIs #WebDev #Frontend -
Don’t expect good results from AI if your instructions are unclear: what you get depends on what you give. Be specific about rules (constraints, safety), the services or tools to use, what proof or sources to include, and the exact tone and format you want. Example: “3-paragraph explainer for marketers, casual tone, cite sources.” If you’re too general, the output will be weak and less useful. #AI #Prompting #Clarity #Productivity
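As a rough sketch, the advice above can be made mechanical by assembling a prompt from explicit parts. The field names and the example task below are illustrative, not from any particular API:

```python
# Sketch: assemble a specific prompt from explicit parts, per the advice above.
# All field names and the example task are illustrative stand-ins.

def build_prompt(task, audience, tone, output_format, constraints, evidence):
    """Combine explicit instructions into one prompt string."""
    lines = [
        f"Task: {task}",
        f"Audience: {audience}",
        f"Tone: {tone}",
        f"Format: {output_format}",
        "Constraints: " + "; ".join(constraints),
        f"Evidence: {evidence}",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    task="Explain retrieval-augmented generation",
    audience="marketers",
    tone="casual",
    output_format="3 paragraphs",
    constraints=["no jargon", "no speculation about pricing"],
    evidence="cite at least two named sources",
)
print(prompt)
```

Forcing yourself to fill in each slot is the point: an empty slot is a decision you were leaving to the model.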
-
#Development #Pitfalls
Black box AI drift · “AI tools are making design decisions nobody asked for.” https://ilo.im/16cea6_____
#Business #Intents #Decisions #Prompting #AI #AiAssistance #DevOps #WebDev #Frontend #Backend -
The Delimiter hypothesis: does prompt format actually matter? https://systima.ai/blog/delimiter-hypothesis #AI #Prompting #XML #JSON #Markdown
-
Structured prompting techniques #AI #JSON #XML #Prompting https://codeconductor.ai/blog/structured-prompting-techniques-xml-json/
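The two posts above compare delimiter styles; as a minimal sketch (tag and key names are illustrative), the same instruction/data separation can be expressed with XML-style tags or as a JSON envelope:

```python
import json
from xml.sax.saxutils import escape

# Sketch of two common delimiter styles for structured prompting: wrapping
# variable content in XML-style tags, or passing it as JSON. Tag and key
# names are illustrative.

def xml_prompt(instruction, document):
    # XML-style tags make the boundary between instruction and data explicit.
    return (
        f"<instruction>{escape(instruction)}</instruction>\n"
        f"<document>{escape(document)}</document>"
    )

def json_prompt(instruction, document):
    # JSON keeps the same separation in a machine-parseable envelope.
    return json.dumps({"instruction": instruction, "document": document}, indent=2)

print(xml_prompt("Summarize in one sentence.", "Q3 revenue grew 12%."))
print(json_prompt("Summarize in one sentence.", "Q3 revenue grew 12%."))
```

Either way, the model (and any injection-scanning tooling in front of it) can tell the instruction apart from the data it operates on.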
-
Stop prompting!
Ask self-employed professionals today about their relationship to artificial intelligence and you often hear sentences like: “I'm trying it out on the side” or “I'm currently learning how to prompt properly.”
#prompting, #aiagenten, #workflowautomation, #kikompetenz, #usecases, #innovation, #transformation, #disruption, #make, #n8n, #zapier, #ClaudeCode, #claudecowork
-
Chain-of-Thought (CoT) Prompting
Chain-of-Thought (CoT) prompting is a technique in which asking questions, rather than issuing direct instructions, activates a model’s full internal reasoning pathway.
The key insight from the original framing is that instructions skip steps 1–3 and jump straight to synthesis, while questions force the model to work through the entire reasoning chain.
https://neurodoctor.com/2026/03/20/chain-of-thought-cot-prompting/
#chainofthought #cot #ai #llm #prompt #prompts #prompting #claude #chatgpt #gemini #ericschmidt
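As an illustrative sketch of the question-based framing described above, here is the same task phrased as a direct instruction versus as a chain of questions that walks the model through intermediate steps (the wording is a made-up example, not from the linked post):

```python
# Sketch: one task, two framings. The direct instruction jumps straight to
# the answer; the question chain forces intermediate reasoning steps.
# All wording is illustrative.

task = "Decide whether 1,001 is prime."

direct_instruction = f"{task} Answer yes or no."

cot_questions = "\n".join([
    task,
    "Before answering, work through these questions:",
    "1. Which small primes should be tested as divisors?",
    "2. Does any of them divide 1,001 evenly?",
    "3. Given that, is 1,001 prime?",
    "Answer with your reasoning, then a final yes or no.",
])

print(direct_instruction)
print(cot_questions)
```

The question chain mirrors the post's "steps 1–3": each question corresponds to a reasoning step the bare instruction would have skipped.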
-
#Design #Approaches
Durable patterns in AI product design · UI models that remain relevant as AI evolves https://ilo.im/16bddn_____
#Prompting #Orchestrating #AI #Agents #Chatbots #DesignPatterns #ProductDesign #UxDesign #UiDesign #WebDesign -
#LLRX February 2026 Issue - 7 New Articles; 6 New Columns; #AgenticAI in the Wild: Lessons from #Moltbook. #OpenClaw; #AI #Prompting for #Legal Professionals; How I Use #ChatGPT to Create a CLE #PowerPoint Deck; Don’t Build Your House on Rented Land: Why Writers Should Avoid Platform Dependency and How They Can Do So #substack; #AI Under the Hood; #Trump Administration’s Continued War Against #Science, #Research, #PublicHealth, and the #RuleofLaw Part 7 https://www.llrx.com/
-
Is AI productivity prompting burnout? Study finds new pattern of "AI brain fry"
https://misryoum.com/us/technology-ai/is-ai-productivity-prompting-burnout-study-finds-new/
The promise of artificial intelligence has been simple: let the machines do the work. Instead, it may be creating a new headache from babysitting the machines. A new study published in Harvard Business Review suggests that instead of making...
#productivity #prompting #burnout #Study #finds #new #pattern #brain #fry #US_News_Hub #misryoum_com
-
RE: https://mastodon.social/@dw_innovation/115892666280848280
Large Language Mistake
Current AI models are not on the path to artificial general intelligence
Update: One week after The Verge published my essay, it was cited in a federal district court decision to support the proposition that LLMs do not reason the way that humans do.
https://buildcognitiveresonance.substack.com/p/large-language-mistake
#theverge #thomasriley #humanintelligence #llms #ai #agi #cogneurosci #neuroscience #machineintelligence #reasoning #understanding #cleverness #computing #stochasticparrot #prompting #vibecoding #ml
-
#Design #Introductions
Google Stitch for UI design · What the AI-powered UI generator delivers https://ilo.im/169dwd_____
#AI #Prompting #Ideation #ProductDesign #UxDesign #UiDesign #WebDesign #Development #WebDev #Frontend -
If you want to spend time on AI you can best spend it on lectures like this. No hype, just science, but in this case also very practical.
https://youtu.be/k1njvbBmfsw?si=yWJPqmcIUSgJyekk
#AI #Stanford #RAG #Prompting #Chainofthought #agenticAI -
Program-of-Thought Prompting Outperforms Chain-of-Thought by 15% (2022)
https://arxiv.org/abs/2211.12588
#HackerNews #ProgramOfThought #Prompting #ChainOfThought #AIResearch #MachineLearning #2022Study
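The idea in the linked paper is that instead of reasoning in natural language, the model is asked to emit a short program, which the host then executes to obtain the final answer. A minimal sketch, where the prompt wording and the hard-coded "model output" are illustrative stand-ins for a real LLM call:

```python
# Sketch of Program-of-Thought (PoT) prompting: ask the model for code,
# then execute that code to get the answer, rather than parsing prose.
# The prompt and the hard-coded model_output are illustrative stand-ins
# for a real LLM call.

pot_prompt = (
    "Question: A store sells pens at $1.20 each. How much do 35 pens cost?\n"
    "Write Python code that computes the answer and stores it in `ans`."
)

# Stand-in for the code an LLM might return for the prompt above.
model_output = "ans = round(1.20 * 35, 2)"

namespace = {}
exec(model_output, namespace)  # PoT: run the generated program
print(namespace["ans"])  # 42.0
```

Delegating the arithmetic to an interpreter is what gives PoT its edge over CoT on computation-heavy tasks; note that `exec` on model output would need sandboxing in any real deployment.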
-
#Design #Comparisons
AI image editing showdown · State-of-the-art image editing models compared https://ilo.im/16804q_____
#Prompting #ImageEditing #GenerativeAI #AI #Images #ProductDesign #UiDesign #VisualDesign #WebDesign -
#Design #Guides
Using AI agents in product design · A tutorial to create an AI agent in ChatGPT https://ilo.im/167fua_____
#Prompting #AI #AiAgents #ChatGPT #DesignProcess #ProductDesign #UxDesign #WebDesign -
Via #LLRX - How #poisoned #data can trick #AI − and how to stop it – Hadi Amini and Ervin Moore discuss how the quality of the #information that the AI offers depends on the quality of the data it learns from. But if someone tries to interfere by tampering with their #trainingdata – either the initial data used to build the system or data the system collects as it’s operating to improve – trouble could ensue. #learning #prompting #Education #truth #facts #knowledge #data https://www.llrx.com/2025/08/how-poisoned-data-can-trick-ai-and-how-to-stop-it/
-
Via #LLRX - Beyond the Tool: Why True #AI #Literacy is About #criticalthinking Not #Prompting. The nature of #AI literacy is largely misunderstood within the #education community. Ultimately, the goal of AI literacy should not be to make #students better at using AI, but to #empower them to be more discerning #thinkers, more #ethical #citizens and more self-aware #human beings in a world where AI exists. https://www.llrx.com/2025/08/beyond-the-tool-why-true-ai-literacy-is-about-critical-thinking-not-prompting/ #education #knowledge #criticalthinking
-
LLMs become more dangerous as they rapidly get easier to use
This is a concise summary by Ethan Mollick of what I increasingly see as a key factor driving the evolution of consumer-facing LLMs:
Using AI well used to be a pretty challenging process which involved crafting a prompt using techniques like chain-of-thought along with learning tips and tricks to get the most out of your AI. In a recent series of experiments, however, we have discovered that these techniques don’t really help anymore. Powerful AI models are just getting better at doing what you ask them to or even figuring out what you want and going beyond what you ask (and no, threatening them or being nice to them does not seem to help on average).
What limited truth there was to the inflated discourse of ‘prompt engineering’ has largely evaporated at this point, leaving us in a strange position. The conversational approach I’ve always advocated, literally treating the LLM as an interlocutor analogous to a human collaborator, remains the best way of getting the most out of these systems. But neither this nor ‘prompt engineering’ is necessary to get LLMs to perform usefully in real world contexts.
This seems like an obviously positive thing for users. However my concern is that the burden of articulation with LLMs, explaining what you want in a clear and substantive way, was a necessary condition for their responsible use. This burden was never sustainable at the level of a commercially viable product. It presupposed a consistent level of reflexivity, a degree of cultural capital and a fluency with writing which were always incompatible with the kind of mass-market software that has a user base of one billion plus.
However it did ensure that users were thinking carefully about their use because you literally couldn’t produce valuable outcomes with an LLM unless you were doing this. Now it’s very easy to get LLMs to infer what you want, with success much of the time to at least some degree. What’s lost in the process is the reflexivity which came prior to use (i.e. the moment when you stopped to think “what am I actually asking Claude to do here?”) and the learning which came through reflecting on the output and the related interaction it generates, as your sense of what you were trying to do is refined and expanded through dialogue.
This is how Claude Opus 4 glossed my analysis here. It conveyed my own words back to me but it did it in a way which surfaced an element (“the pause”) which was only latent in my original phrasing (“the moment when”), leading me to reflect more on the temporal dimension to “the burden of articulation”:
The “burden of articulation” you describe served as a kind of productive friction – it forced users to crystallize their thinking before engaging with the tool. There’s something valuable in that pause, that moment of having to translate a vague intention into clear language. It’s reminiscent of how writing itself can clarify thinking, or how explaining a problem to someone else often helps you understand it better yourself.
In this sense friction with LLMs was a positive thing because it necessitated meta-cognition. The optimisation of the human-model interaction erodes a feature which, I would argue, was immensely important, even if its value only manifests outside the interaction itself. I don't think it levels the playing field, because those with the necessary capital and fluency can still use LLMs in a deeper and more reflective way, with better outcomes emerging from the process.
But it does create worrying implications for organisations which build this practice into their roles. Earlier today I heard Cory Doctorow use the brilliant analogy of asbestos to describe LLMs being incorporated into digital infrastructure in ways which we will likely later have to remove at immense cost. What’s the equivalent analogy for the social practice of those operating within the organisations?
https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338
#articulation #chatbots #coryDoctorow #LLMs #metacognition #promptEngineering #prompting #reflexivity
-
GPT-4.1: A new level of prompting. OpenAI's guide to getting the most out of the model
Authors of the original guide: Noah MacCallum (OpenAI), Julian Lee (OpenAI). Guide published: April 14, 2025. Source: GPT-4.1 Prompting Guide. OpenAI's GPT-4.1 significantly outperforms GPT-4o at writing code, following instructions, and working with long context. But to unlock its full potential, you will need to adapt how you write prompts. This post distills the key advice from OpenAI's official guide, based on their internal testing, and will help you take your interaction with AI to the next level. Tried-and-true practices, such as providing in-context examples, making instructions as clear as possible, and encouraging planning through the prompt, are still relevant. However, GPT-4.1 is trained to follow instructions more closely and more literally than its predecessors, which were more inclined to guess at user intent. This makes GPT-4.1 extremely steerable and responsive to well-specified prompts. If the model's behavior differs from what you expect, a single sentence that firmly and unambiguously states the desired behavior is usually enough to steer it back on course.
-
The fine art of human prompt engineering: How to talk to a person like ChatGPT - Enlarge / With these tips, you too can prompt people successfully.
... - https://arstechnica.com/?p=2010159 #largelanguagemodels #promptinjections #machinelearning #jailbreaks #prompting #features #aimodels #chatgpt #chatgtp #biz #humans #openai #humor #ai
-
Security Copilot: the art of prompting for efficient security investigation summaries
Security Copilot employs promptbooks: a series of user-input-driven prompts that analyze cybersecurity threats. Every interaction within Security Copilot, be it an individual prompt or a promptbook, generates a session. These sessions are storable and shareable within your workspace.
Generating a summary within Security Copilot can vary in complexity and detail, influenced by how you craft your prompt.
More details:
#ai #genai #security #copilot #securitycopilot #microsoft #microsoftsecurity #azure #xdr #soc #llm #cybersecurity #prompt #prompting #promptengineering #promptbooks #securityincident #hunting #triage