#technicalwriting — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #technicalwriting, aggregated by home.social.
-
I would love to see more technical articles and books written with the same level of clarity.
"Index 1,600,000,000 Keys with Automata and Rust" (2015) by Andrew Gallant.
https://burntsushi.net/transducers/
#article #rustlang #TechnicalWriting #automaton #datastructures #algorithms
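The linked article's core trick is that a sorted set of keys can be stored as a finite automaton in which shared prefixes (and, for FSTs, shared suffixes) are stored only once. A toy Python trie shows the prefix half of that saving — this is just an illustration of the idea, not the `fst` crate's actual representation:

```python
# Toy illustration of prefix sharing: four keys starting with "ju"
# need far fewer stored characters in a trie than as raw strings.

def build_trie(keys):
    """Build a nested-dict trie; the "" key marks the end of a complete key."""
    root = {}
    for key in keys:
        node = root
        for ch in key:
            node = node.setdefault(ch, {})
        node[""] = True  # end-of-key marker
    return root

def count_edges(node):
    """Count character edges, i.e. how many characters the trie actually stores."""
    return sum(1 + count_edges(child)
               for ch, child in node.items() if ch != "")

keys = ["june", "july", "jump", "jumps"]
trie = build_trie(keys)
print(sum(len(k) for k in keys))  # 17 characters stored naively
print(count_edges(trie))          # 9 characters in the trie ("ju" shared once)
```

An FST goes further by also merging common endings into shared states, which is what makes indexing 1.6 billion keys feasible.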
-
Food for thought: When documenting the behavior of a highly customizable and configurable product or feature based on questions from a single customer, a technical writer should always consider the scope of the final documentation artifact. This requires making a clear distinction between what is specific to that customer and what is generalizable and easily reproducible by all.
If we don't enforce that strict separation, we risk turning the artifact into a laundry list of uncontextualized items that are mostly unintelligible to other customers. In those very specific scenarios, what the customer really needs is sometimes not more documentation but a detailed technical conversation between our engineering team and their developers, explaining how they can find answers to those questions by themselves whenever they need them :)
-
A practical, honest guide to technical writing for developers. Start with problems you solved, write like you speak, and build a skill that compounds over time. https://hackernoon.com/a-practical-guide-to-technical-writing-for-developers-who-want-to-start-sharing-what-they-know #technicalwriting
-
"Back in 2024 I wrote that AI helps me remove boring work at the margins. This is fine for a lone writer, but how to scale this to an entire team of technical writers? How to make the system helpful but not intrusive? These are all questions I’m starting to answer now, partly through experimentation, but also through dialogue with practitioners and colleagues. One answer I’m testing these days relies on GitHub Agentic Workflows.
Following Four modes of AI-augmented technical writing, I thought of a way of distributing tooling effort across all modes through a tiered system where each level holds a different relationship with the writer. The result is four tiers: intake, local assistance, automated governance, and an MCP server that provides reliable knowledge to all. The idea is that AI assists the writer not just while writing, but also before and after they work on docs."
https://passo.uno/agentic-workflows-for-docs/
#TechnicalWriting #AI #GenerativeAI #LLMs #AIAgents #AgenticAI #AgenticWorkflows #SoftwareDocumentation #GitHub #DocsAsCode
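The post doesn't include workflow code, but its "automated governance" tier could look something like a docs-linting job that runs on every pull request and leaves review comments rather than blocking merges. A rough sketch using plain GitHub Actions and the Vale prose linter — the file path, docs directory, and settings are assumptions for illustration, not taken from the article:

```yaml
# .github/workflows/docs-governance.yml (hypothetical)
name: docs-governance
on:
  pull_request:
    paths:
      - "docs/**/*.md"
jobs:
  vale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Run the Vale prose linter against the docs and surface findings
      # as PR comments instead of hard failures, keeping the system
      # "helpful but not intrusive".
      - uses: errata-ai/vale-action@reviewdog
        with:
          files: docs
          fail_on_error: false
```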
-
#WriteTheDocs just released a bunch of new videos from their most recent conference and unfortunately some of them shill AI, but one video caught my attention and is definitely worth watching: The Most Human Documentation.
It talks about the things to avoid so your writing isn't mistaken for machine generation, and what it means for your #TechnicalWriting and #Documentation to feel human. The talk approaches this from a cultural perspective, and it matches what I've been thinking about recently.
With the rise of LLMs and their gross effect on online text, there are now people whose job is to humanize machine content, whether with deterministic tools or by hand. In the end humans prefer content that feels human, and human content continues to provide more value.
-
Writing isn’t about fame. It’s how developers turn confusion into clarity, build quiet confidence, and create a career that compounds over time. Start today. https://hackernoon.com/how-writing-helps-developers-think-clearly #technicalwriting
-
I don't agree at all with the statement that "API documentation is not technical writing", nor with the notion that a technical writer can't know enough about programming without being a software developer - hint: how many Udemy courses are there about API development, API design and API programming? Hundreds? Thousands?
Also, nowadays, with tools like Claude Code and Codex, testing APIs through platforms like Postman should be seen as a job for QA analysts rather than for technical writers, since AI tools like those give you a more contextualized look at what a specific API endpoint does, especially in terms of edge cases and odd balls. As a technical writer, I can ask these tools to highlight specific use cases where the endpoint can be really useful.
That's not to say the process can be completely automated. Not at all. Especially because a how-to guide explaining how to use an API endpoint can't simply be generated end to end by an LLM. Besides, for the foreseeable future and probably even beyond that, the final output should always be reviewed by an engineer. In any case, what I'm talking about is totally different from automatically generating API reference documentation.
But there is no point in knowing how to send a request to an API endpoint and what the typical response will be - both in the success and the error case - if I, as a developer, don't have a compelling enough reason to use that endpoint. Another totally different thing is an API integration tutorial, that is, how to integrate a complete API into your own app. But there you will, of course, also need the intervention of a, guess what, TECHNICAL WRITER!! :-D
"I have said that API documentation is not technical writing and that it is a mistake to try. There are many details clients need to have. This includes format, presentation, and client experience."
https://robertdelwood.medium.com/more-about-api-documentation-errors-part-i-969999176c9f
#TechnicalWriting #API #APIs #APIDocumentation #SoftwareDocumentation
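To make the success-and-error point concrete: a writer exploring an endpoint cares about turning raw responses into contextualized notes about each case. A minimal Python sketch — the `POST /orders` endpoint and its payloads are invented for illustration and don't come from any real product:

```python
import json

def summarize_response(status_code: int, body: str) -> str:
    """Turn a raw API response into a note about the case being documented."""
    payload = json.loads(body)
    if 200 <= status_code < 300:
        return f"Success ({status_code}): created order {payload['id']}"
    return f"Error ({status_code}): {payload['error']}"

# Hypothetical responses from a fictional POST /orders endpoint
print(summarize_response(201, '{"id": "ord_42"}'))
# Success (201): created order ord_42
print(summarize_response(422, '{"error": "currency not supported"}'))
# Error (422): currency not supported
```

Knowing both branches is table stakes; explaining *why* a developer would call the endpoint at all is the documentation work.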
-
That seems a bit similar to what I currently do... ;-)
"To use an analogy about my process, compare the scenario to a senior tech writer (TW) working next to a junior TW, where the senior TW mostly provides observation and feedback (in this analogy, the junior TW represents the AI agent). The junior TW creates some docs and presents them to the senior TW, who leaves comments explaining what needs to change. The junior TW takes notes about all the feedback in a journal. By the end of the process, the junior TW has three pages of notes.
After the process finishes, those notes aren’t lost. They form the basis of the SKILL file. The next time the senior TW sits down with another junior TW (a different one, as the session changed), the new junior TW produces much better output thanks to the notes. With each iteration, the notes get more detailed — anticipating common errors, adding validation checks, laying a foundation so that each step doesn’t build from faulty information. After a dozen iterations, the senior TW finds they have less and less feedback to give.
Eventually, the senior TW no longer needs to sit next to the junior TW in close observation. The junior TW proceeds autonomously through each step in the SKILL and just shows the final result. One key difference from real mentorship, though: the AI agent doesn’t carry any memory between sessions. It reads the SKILL file cold each time. All the “learning” lives in the document, not in the agent. This makes the SKILL file itself the critical asset — if it’s vague or incomplete, the agent’s output regresses immediately."
https://idratherbewriting.com/blog/internal-skills-release-docs
#TechnicalWriting #APIs #APIDocumentation #Skills #AgenticAI #AI #GenerativeAI #LLMs #SoftwareDocumentation
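The post doesn't show the file itself, but a minimal SKILL file might look like the sketch below, assuming the common SKILL.md convention of YAML frontmatter plus step-by-step instructions. Every name and rule here is invented to show the shape, including the "journal" section that accumulates the feedback described above:

```markdown
---
name: release-notes-writer
description: Drafts release notes from merged PRs following our docs style.
---

# Release notes skill

## Steps
1. Collect the merged PRs for the release (ask the user for the tag range).
2. Draft one entry per user-facing change; skip internal refactors.
3. Run the checks in "Accumulated feedback" before presenting the draft.

## Accumulated feedback (the "junior TW's journal")
- Never invent version numbers; always confirm the tag with the user.
- Use sentence case for headings; the style guide forbids title case.
- Link each entry to its PR so reviewers can verify the claim.
```

Because the agent reads this cold each session, vague entries here degrade output immediately — exactly the dynamic the quoted passage describes.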
-
That seems a bit similar to what I currently do... ;-)
"To use an analogy about my process, compare the scenario to a senior tech writer (TW) working next to a junior TW, where the senior TW mostly provides observation and feedback (in this analogy, the junior TW represents the AI agent). The junior TW creates some docs and presents them to the senior TW, who leaves comments explaining what needs to change. The junior TW takes notes about all the feedback in a journal. By the end of the process, the junior TW has three pages of notes.
After the process finishes, those notes aren’t lost. They form the basis of the SKILL file. The next time the senior TW sits down with another junior TW (a different one, as the session changed), the new junior TW produces much better output thanks to the notes. With each iteration, the notes get more detailed — anticipating common errors, adding validation checks, laying a foundation so that each step doesn’t build from faulty information. After a dozen iterations, the senior TW finds they have less and less feedback to give.
Eventually, the senior TW no longer needs to sit next to the junior TW in close observation. The junior TW proceeds autonomously through each step in the SKILL and just shows the final result. One key difference from real mentorship, though: the AI agent doesn’t carry any memory between sessions. It reads the SKILL file cold each time. All the “learning” lives in the document, not in the agent. This makes the SKILL file itself the critical asset — if it’s vague or incomplete, the agent’s output regresses immediately."
https://idratherbewriting.com/blog/internal-skills-release-docs
#TechnicalWriting #APIs #APIDocumentation #Skills #AgenticAI #AI #GenerativeAI #LLMs #SoftwareDocumentation
-
I had a few days off work last week. My daughter was visiting, and in the evenings we sat and watched episodes of Being Gordon Ramsay, a documentary series following the chef as he opens a huge restaurant project in London. It’s an interesting watch, but the thing that comes across through every word and action of Ramsay is how much he cares. He cares about the food, the experience, his family, the young chefs he mentors, and the friends he has made during his long career. The tiny things matter as much as the big ones, and fixing them is important. His high standards apply to himself and everyone around him, because it all deeply matters to him.
Yesterday I saw the following post on Bluesky, and my mind immediately linked it to my thoughts about Ramsay’s documentary.
Claude Design unlocks the project manager’s dream of making the button bigger without pushback. Half of your design team will be fired in a month, the design system and branding guidelines thrown in the trash, and the App Store ratings down one or two points in a few months time.— Jae (@jaehanley.social) April 20, 2026 at 5:12 AM
Before generative AI became a thing, many of the roles that support software engineering could be described as things engineers don’t like doing. There are, of course, plenty of engineers who recognise the skill and craft of people like technical writers and designers. However, I’ve also met a large number who see us as people who do the lower-value work: work that engineers absolutely could do, but that isn’t worth their time. I’ve seen this very clearly in the way I’m treated as a writer compared with how I was treated as a developer.
Now we have generative AI, which can also do this work. To a person who doesn’t care about the craft, some generated documentation or design elements are exactly the same as those handed over by a writer or designer. Someone, or something, else has done the work. However, machines don’t care.
Design systems and editorial style guides need people who care. They need people who care about the small details, who obsess over consistency. They need people who are willing to push back, and who are happy to say no to the endless requests to ignore the guidelines just this one time. Yes, we’re sometimes very annoying to people who just want to ship the app or publish their blog post, but we know that consistency matters. The problem for us is that it’s very difficult to demonstrate its impact until it’s gone. Even then, people may not connect rising support requests with poor-quality documentation, or poor reviews with an unintuitive UI.
I don’t think this problem is exclusive to these roles. Tech is full of people who care deeply about their chosen area of specialism, and we’re all struggling in a world where doing lots of stuff really fast has become the most important thing.
It’s unlikely that our industry can close the door on generative AI, and I think there is a place for tooling of all kinds in helping to speed up and remove toil from production. However, if those tools aren’t under the control of experts who care deeply about the end result, what you end up with is slop. It’s why I believe strongly that AI-led content operations must be owned by writers. There has to be someone who cares enough to push back, with the experience and organisational power to argue for quality over speed when it matters.
Returning to Gordon Ramsay, the other thing I noted from his documentary was how his approach and deep level of care were replicated by his entire team. You might think it’s because they are afraid of him, given his sweary Kitchen Nightmares persona. However, the lifelong friendships with people who have worked for him suggest otherwise. Leaders who care build teams who care; they nurture people who are willing to go the extra mile. Leaders who care develop people who have the confidence to tell others that we need to slow down, fix the issues, comply with the design system or style guide, and nail the details, for the benefit of the product and the users.
https://rachelandrew.co.uk/archives/2026/04/21/the-importance-of-people-who-care/
-
"Docs are beautiful when conceptual docs let the reader see the architecture, or when a tutorial lets them see their own hands on the keyboard. Diagrams and screenshots help, but they often compensate for prose that failed to produce an image on its own. Docs are visible when the reader can close their eyes and still see what the page described."
https://passo.uno/what-makes-docs-beautiful/
#Documentation #DocsAsProduct #SoftwareDocumentation #TechnicalWriting #SoftwareDevelopment
-
The great thing that Claude Code, or OpenAI Codex, brings to technical writers is the ability to assess the accuracy of any piece of documentation that depends on software code by analyzing the relevant code base(s).
This is extremely helpful because it lets you fact-check your docs against the source code, namely to see whether the Subject Matter Experts (SMEs) were bullshitting you or whether the docs became outdated due to the cadence of new releases.
Another advantage is that even if you work in an organization with its own QA team, you can help them catch bugs at an earlier stage of the Software Development Life Cycle (SDLC). For example, yesterday I found an inconsistency between how a certain behavior was coded in the backend and how that same behavior was interpreted by the frontend.
And the best thing is that, since Claude Code doesn't rely entirely on neural networks but also uses deterministic techniques such as regular-expression search, its results have a higher degree of determinism than those offered by common LLMs. Even though it's not perfect and you always have to tell it where to direct its attention (meaning the name of the most relevant repository), this ability to take advantage of an "AI-based sniffer" for code is terrific.
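As a rough illustration of that kind of deterministic cross-check (a hypothetical sketch, not how Claude Code works internally): plain regular expressions can already diff the flags a docs page mentions against the flags the code defines. The file contents and flag pattern below are invented for the example:

```python
import re

# Match long-form CLI flags such as --dry-run (illustrative pattern).
FLAG_RE = re.compile(r"--[a-z][a-z0-9-]+")

# Stand-ins for a docs page and the source file that defines the CLI.
docs = "Use --verbose or --dry-run to preview changes before applying them."
source = 'parser.add_argument("--verbose"); parser.add_argument("--output")'

documented = set(FLAG_RE.findall(docs))
implemented = set(FLAG_RE.findall(source))

stale = documented - implemented         # documented but not in the code
undocumented = implemented - documented  # in the code but not documented

print(sorted(stale))         # ['--dry-run']
print(sorted(undocumented))  # ['--output']
```

A tool like Claude Code layers an LLM on top of searches like this, but the search step itself is reproducible, which is where the extra determinism comes from.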
For all these reasons, I believe that any technical writer who doesn't use Claude Code or a similar tool in their regular workflows will be at an immense disadvantage.
#TechnicalWriting #AI #GenerativeAI #SoftwareDocumentation #Claude #LLMs #ClaudeCode #SoftwareDevelopment #QA