#softwaredocumentation — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #softwaredocumentation, aggregated by home.social.
-
"Back in 2024 I wrote that AI helps me remove boring work at the margins. This is fine for a lone writer, but how to scale this to an entire team of technical writers? How to make the system helpful but not intrusive? These are all questions I’m starting to answer now, partly through experimentation, but also through dialogue with practitioners and colleagues. One answer I’m testing these days relies on GitHub Agentic Workflows.
Following Four modes of AI-augmented technical writing, I thought of a way of distributing tooling effort across all modes through a tiered system where each level holds a different relationship with the writer. The result is four tiers: intake, local assistance, automated governance, and an MCP server that provides reliable knowledge to all. The idea is that AI assists the writer not just while writing, but also before and after they work on docs."
https://passo.uno/agentic-workflows-for-docs/
#TechnicalWriting #AI #GenerativeAI #LLMs #AIAgents #AgenticAI #AgenticWorkflows #SoftwareDocumentation #GitHub #DocsAsCode
-
I don't agree at all with the statement that "API documentation is not technical writing", nor with the notion that a technical writer can't possibly know enough about programming without being a software developer. Hint: how many Udemy courses are there about API development, API design, and API programming? Hundreds? Thousands?
Also, nowadays, with tools like Claude Code and Codex, testing APIs through platforms like Postman should be seen as work for QA analysts rather than for technical writers, since those AI tools give you a more contextualized look at what a specific API endpoint does, especially in terms of edge cases and oddballs. As a technical writer, I can ask these tools to highlight specific use cases where the endpoint can be really useful.
That's not to say that the process can be completely automated. Not at all. Especially because a how-to guide explaining how to make use of an API endpoint can't really be generated entirely by an LLM. Besides, for the foreseeable future and probably even beyond that, the final output should always be reviewed by an engineer. In any case, what I'm talking about is totally different from automatically generating API reference documentation.
But there is no point in knowing how to send a request to an API endpoint and what the typical response will be - both in case of success and error - if I, as a developer, don't have a compelling enough reason to use that endpoint. An API integration tutorial - that is, how to integrate a complete API into your own app - is another thing entirely. But there you will, of course, also need the intervention of a, guess what, TECHNICAL WRITER!! :-D
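When I do document an endpoint, I like to capture both response shapes side by side. A minimal sketch of that habit (the endpoint, status codes, and payloads below are all invented for illustration):

```python
import json

def render_response_doc(status: int, body: dict) -> str:
    """Format a captured API response as a docs-ready snippet."""
    label = "Success" if 200 <= status < 300 else "Error"
    return f"{label} ({status}):\n" + json.dumps(body, indent=2)

# Hypothetical captures for a fictional GET /v1/users/{id} endpoint:
success = render_response_doc(200, {"id": 42, "state": "active"})
error = render_response_doc(404, {"error": "user not found"})
```

Showing the error shape next to the success shape is exactly the kind of detail a reader needs before they have a reason to call the endpoint at all.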
"I have said that API documentation is not technical writing and that it is a mistake to try. There are many details clients need to have. This includes format, presentation, and client experience."
https://robertdelwood.medium.com/more-about-api-documentation-errors-part-i-969999176c9f
#TechnicalWriting #API #APIs #APIDocumentation #SoftwareDocumentation
-
That seems a bit similar to what I currently do... ;-)
"To use an analogy about my process, compare the scenario to a senior tech writer (TW) working next to a junior TW, where the senior TW mostly provides observation and feedback (in this analogy, the junior TW represents the AI agent). The junior TW creates some docs and presents them to the senior TW, who leaves comments explaining what needs to change. The junior TW takes notes about all the feedback in a journal. By the end of the process, the junior TW has three pages of notes.
After the process finishes, those notes aren’t lost. They form the basis of the SKILL file. The next time the senior TW sits down with another junior TW (a different one, as the session changed), the new junior TW produces much better output thanks to the notes. With each iteration, the notes get more detailed — anticipating common errors, adding validation checks, laying a foundation so that each step doesn’t build from faulty information. After a dozen iterations, the senior TW finds they have less and less feedback to give.
Eventually, the senior TW no longer needs to sit next to the junior TW in close observation. The junior TW proceeds autonomously through each step in the SKILL and just shows the final result. One key difference from real mentorship, though: the AI agent doesn’t carry any memory between sessions. It reads the SKILL file cold each time. All the “learning” lives in the document, not in the agent. This makes the SKILL file itself the critical asset — if it’s vague or incomplete, the agent’s output regresses immediately."
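In code terms, the "cold read" the author describes could be sketched like this (the file name and helper function are hypothetical, not from the post):

```python
from pathlib import Path

def build_session_prompt(task: str, skill_file: Path) -> str:
    """Each session starts cold: the agent keeps no memory, so all
    accumulated feedback is re-read from the SKILL file every time."""
    notes = skill_file.read_text() if skill_file.exists() else ""
    return f"{notes}\n\nTask: {task}"

# If SKILL.md is vague or missing, the prompt degrades immediately,
# which is why the file itself is the critical asset.
prompt = build_session_prompt("draft the release notes", Path("SKILL.md"))
```

All the "learning" lives in the file passed in, never in the function's caller.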
https://idratherbewriting.com/blog/internal-skills-release-docs
#TechnicalWriting #APIs #APIDocumentation #Skills #AgenticAI #AI #GenerativeAI #LLMs #SoftwareDocumentation
-
"Docs are beautiful when conceptual docs let the reader see the architecture, or when a tutorial lets them see their own hands on the keyboard. Diagrams and screenshots help, but they often compensate for prose that failed to produce an image on its own. Docs are visible when the reader can close their eyes and still see what the page described."
https://passo.uno/what-makes-docs-beautiful/
#Documentation #DocsAsProduct #SoftwareDocumentation #TechnicalWriting #SoftwareDevelopment
-
The great thing that Claude Code - or OpenAI Codex - brings to technical writers is that they can assess the accuracy of any piece of documentation that relies on software code by analyzing the relevant code base(s).
This is extremely helpful because it lets you fact-check your docs against the source code, namely to see if the Subject Matter Experts (SMEs) were bullshitting you or if the docs became outdated due to the cadence of new releases.
Another advantage is that even if you work in an organization with its own QA team, you can help them catch bugs at an earlier stage of the Software Development Life Cycle (SDLC). For example, yesterday I found an inconsistency between how a certain behavior was coded in the backend and how that same behavior was interpreted by the frontend.
And the best thing is that, since Claude Code does not rely entirely on neural networks but also uses deterministic tools such as regular-expression search, the results have a higher degree of determinism than the ones offered by plain LLM chat. Even though it's not perfect and you always have to tell it where to direct its attention (meaning the name of the most relevant repository), this ability to take advantage of an "AI-based sniffer" for code is terrific.
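As a toy illustration of why deterministic search helps here, a plain regex sweep can already flag documented endpoints that no longer exist in the code (the paths and snippets below are invented):

```python
import re

# Invented samples: a docs excerpt and a source-code excerpt.
DOCS = "Call GET /v1/users, then POST /v1/orders to create an order."
SOURCE = '@app.route("/v1/users")\n@app.route("/v1/invoices")'

ENDPOINT = re.compile(r"/v\d+/\w+")
documented = set(ENDPOINT.findall(DOCS))
implemented = set(ENDPOINT.findall(SOURCE))

# Endpoints the docs mention but the code no longer defines:
stale = documented - implemented
```

A tool like Claude Code layers an LLM on top of searches like this, which, as I understand it, is where the extra determinism comes from.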
For all these reasons I believe that every technical writer who doesn't use Claude Code or a similar tool in their own regular workflows will be immensely disadvantaged.
#TechnicalWriting #AI #GenerativeAI #SoftwareDocumentation #Claude #LLMs #ClaudeCode #SoftwareDevelopment #QA
-
The great thing that Claude Code - or OpenAI Codex - brings to technical writers is that they can assess the accuracy of any piece of documentation that relies on software code, by analyzing the relevant code base(s).
This is extremely helpful because it definitely helps you fact-check your docs against the source code, namely to see if the Subject Matter Experts (SMEs) were bullshitting you or if the docs became outdated due to the cadence of new releases.
Another advantage is that even if you work in an organization with its own QA team, you can help them catch bugs at an earlier stage of the Software Development Life Cycle (SDLC). For example, yesterday I found an inconsistency between how a certain behavior was coded in the backend and how that same behavior was interpreted by the frontend.
And the best thing is that, since Claude Code does not entirely relies on neural networks but rather also uses regular expressions, the results have an higher degree of determinancy than the ones offered by common LLMs. Even though it's not perfect and you always have to tell it where to direct its attention (meaning the name of the most relevant repository) , this ability of taking advantage of a "Ai-based sniffer" for code is terrific.
For all these reasons I believe that every technical writer that doesn't use Claude Code or a similar toll in its own regular workflows will be immensely disadvantaged.
#TechnicalWriting #AI #GenerativeAI #SoftwareDocumentation #Claude #LLMs #GenerativeAI #ClaudeCode #SoftwareDevelopment #QA
-
The great thing that Claude Code - or OpenAI Codex - brings to technical writers is that they can assess the accuracy of any piece of documentation that relies on software code, by analyzing the relevant code base(s).
This is extremely helpful because it definitely helps you fact-check your docs against the source code, namely to see if the Subject Matter Experts (SMEs) were bullshitting you or if the docs became outdated due to the cadence of new releases.
Another advantage is that even if you work in an organization with its own QA team, you can help them catch bugs at an earlier stage of the Software Development Life Cycle (SDLC). For example, yesterday I found an inconsistency between how a certain behavior was coded in the backend and how that same behavior was interpreted by the frontend.
And the best thing is that, since Claude Code does not entirely relies on neural networks but rather also uses regular expressions, the results have an higher degree of determinancy than the ones offered by common LLMs. Even though it's not perfect and you always have to tell it where to direct its attention (meaning the name of the most relevant repository) , this ability of taking advantage of a "Ai-based sniffer" for code is terrific.
For all these reasons I believe that every technical writer that doesn't use Claude Code or a similar toll in its own regular workflows will be immensely disadvantaged.
#TechnicalWriting #AI #GenerativeAI #SoftwareDocumentation #Claude #LLMs #GenerativeAI #ClaudeCode #SoftwareDevelopment #QA
-
This article is great in the sense that it describes most of what I'm doing nowadays as a technical writer. I even have different LLMs reviewing each other's drafts, which is a lot of fun. That's why, personally, I can't be as pessimistic as others currently are. LLMs are just a new technology that you need to incorporate into your workflows. Of course, some skills will probably become atrophied. At the same time, a new set of skills is emerging. If you don't see that, you will be completely left behind. You just need to use these tools with critical thinking.
"After deliberation for a few months, I reached a conclusion about what I wanted to say: the model that’s emerging is a cyborg model of technical writing, a humans + AI combination. This is in contrast to the many articles, which now seem to come at an even faster pace, saying that AI will replace human labor. I realize there’s a lot of opinion on this debate, but my argument for why the humans + AI (cyborgs) model is the winning one, rather than replacement, is because of this observation: almost no tech writers at my work have automated complex processes using AI. And in my own use of AI over the past few years, the model that’s emerged is a close intertwining of machine and human interaction to produce content. I’m talking with AI all day. It’s not doing much on its own without my constant steering, direction, and feedback."
https://idratherbewriting.com/blog/cyborg-model-emerging-talk
#AI #GenerativeAI #LLMs #Chatbots #TechnicalWriting #TechnicalDocumentation #SoftwareDevelopment #SoftwareDocumentation
-
"With AI, the writer’s role moves to what I call context ownership. This is not a soft concept. A context owner is the person in your organization who governs what your AI tools know, how your content is structured, whether the output meets your quality and accuracy standards, and how your documentation systems connect to your product and engineering workflows.
In practice, context ownership looks like this:
A context owner defines and maintains the templates, standards, and structural rules that AI tools follow. Without these, AI produces content that is internally consistent within a single document but inconsistent across your documentation as a whole. Your customers notice, even if you don’t.
A context owner reviews and validates AI-generated drafts against product reality. AI tools do not know what your product actually does in edge cases. They do not know what changed in the last release that hasn’t been documented yet. They do not know that the API endpoint described in the engineering spec was modified during implementation. The context owner does.
A context owner manages the documentation pipeline. In a modern documentation operation, this means version control, docs-as-code workflows, API-driven publishing, and automated quality checks. These are technical systems that require technical management. AI can operate within these systems, but it cannot design, maintain, or troubleshoot them.
A context owner bridges engineering and customer-facing content. This is the function that has never been automated in any transition, and AI has not changed that. Someone has to understand what engineering built, determine what customers need to know about it, and make sure the documentation connects those two realities accurately.
(...)
This is not a diminished version of the writer’s role. It is a more senior, more technical role than “writer” has traditionally implied"
https://greenmtndocs.com/2026-03-25-ive-seen-this-before/
#AI #LLMs #TechnicalWriting #SoftwareDocumentation #ContextEngineering
-
"To begin with, everything you document has to be in a format that's as structured and machine-readable as possible. The key here is to disambiguate as much as you can, even if you have to repeat yourself. So, don't bother with the formatting of your documentation or the look and feel of your API portal. Instead, focus on using well-known API definition standards based on machine-readable formats. Use OpenAPI for documenting REST APIs, AsyncAPI for asynchronous APIs, Protocol Buffers for gRPC, and the GraphQL Schema Definition Language. Whenever possible, store the API definitions in several formats, such as JSON and YAML, for easy interpretation by AI agents.
But that's not enough. If you don't have all your operations clearly defined, AI agents will have a hard time understanding what they can do. Make sure you clearly define all operation parameters. Specify what the input types are so there are no misunderstandings. So, instead of saying that everything is a "string," identify each individual input format."
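A hypothetical OpenAPI fragment along these lines, with a disambiguated input format instead of a bare "string" (the endpoint, parameter, and pattern are invented for illustration):

```yaml
# Hypothetical fragment: a typed, pattern-constrained parameter so an
# AI agent knows exactly what input format the operation expects.
paths:
  /routes:
    get:
      operationId: getRoute
      summary: Draw a route between two points.
      parameters:
        - name: origin
          in: query
          required: true
          description: Starting point as "lat,lon" in decimal degrees.
          schema:
            type: string
            pattern: '^-?\d{1,2}\.\d+,-?\d{1,3}\.\d+$'
            example: "38.7223,-9.1393"
```

The `description`, `pattern`, and `example` fields repeat the same information in three forms, which is exactly the kind of redundancy the quoted post recommends.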
https://apichangelog.substack.com/p/api-documentation-for-machines
#APIs #APIDocumentation #AI #AIAgents #LLMs #OpenAPI #TechnicalWriting #SoftwareDocumentation #Programming
-
The problem is that most companies with the resources to properly implement role fluidity only want to hire "unicorns." Having worked in hybrid roles at smaller companies before and after the widespread adoption of LLMs, I must say that it's a recipe for burnout. This is not only because it's difficult to assess the quality of your work, but also because, in practice, companies don't care much about documentation. In reality, you'd mostly be a software developer doing some documentation in your "free time."
Another problem with this model of a fluid software documentation team is that it assumes there are or will be software companies willing to prioritize documentation as a sector that deserves its own department. However, technical writers are often placed under the product umbrella, which isn't necessarily bad. In fact, it's much better than being placed under "marketing." Unfortunately, if role fluidity ever becomes the norm, I'm afraid it will most likely start with engineering.
https://passo.uno/docs-team-of-the-future/
#TechnicalWriting #SoftwareDocumentation #Programming #SoftwareDevelopment #AI #LLMs
-
For the foreseeable future, AI tools will continue to generate outputs so incomplete and sometimes hallucinated that there will be a continuing need for a "human-in-the-loop", not only to use several LLMs to review each other's output but to fact-check the final output. Using one LLM alone results in mediocre quality. Using two LLMs results in (sometimes very) good quality. Use three LLMs with human verification for great or outstanding results.
"1,131 people across the documentation industry responded to the 2026 State of Docs survey — more than 2.5x the number of respondents last year. But the size of the sample matters less than what it represents: a genuine cross-section of the people who create, manage, evaluate, and depend on documentation.
Documentation’s role in purchase decisions is stable and strong, and the case that docs drive business value is well established. The shift this year is in what documentation is being asked to do, and who — and what — is consuming it.
AI has crossed the mainstream threshold for documentation, both in how docs get written and how they get consumed. Users are arriving through AI-powered search tools, coding assistants, and MCP servers. Documentation is becoming the data layer that feeds AI products, onboarding wizards, and developer tools. The teams investing in this shift are treating documentation as context infrastructure, not just a collection of pages.
But adoption has outrun governance, and the gap matters. Most teams are using AI without guidelines in place, and documentation carries a higher accuracy bar than most content. After all, one wrong instruction can break a user’s implementation and erode trust in the product.
(...)
Writers are spending less time drafting and more time fact-checking, validating, and building the context systems that make AI output worth refining."
https://www.stateofdocs.com/2026/introduction-and-demographics
#TechnicalWriting #TechnicalCommunication #SoftwareDocumentation #DocsAsProduct #AI #GenerativeAI
-
"Start small:
Pick one repeatable task that an agent currently handles without explicit guidance. Document it as a skill with entry criteria, steps, and exit criteria.
Validate it. Install skill-validator and run skill-validator check against your skill. Fix what it finds.
Test it with the agent. Invoke the skill explicitly and observe whether the agent follows it as written. Where it deviates, the skill is probably ambiguous.
Add validation to CI. Once you have a few skills, the CI integration keeps them from degrading as the project evolves.
Perhaps unsurprisingly, this is the same pattern I described for project descriptions: start with one file, observe how agents respond, iterate. The difference is that skills demand more precision because they're more prescriptive. That higher quality bar makes deterministic validation tooling valuable; you get feedback on skill quality before the agent runs, not after."
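A hypothetical skill file with the entry criteria, steps, and exit criteria the quote describes (the task, structure, and names are invented for illustration; the skill-validator tool mentioned above may expect a different format):

```markdown
# Skill: update-changelog

## Entry criteria
- A merged pull request exists that changes user-facing behavior.
- CHANGELOG.md is present at the repository root.

## Steps
1. Read the pull request title and description.
2. Add a one-line entry under the "Unreleased" heading in CHANGELOG.md.
3. Use the imperative mood and name the affected component.

## Exit criteria
- CHANGELOG.md contains exactly one new entry for the pull request.
- No other file was modified.
```

The exit criteria are what make the skill testable: an agent (or a reviewer) can check them mechanically after the run.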
https://instructionmanuel.com/writing-skills-agents-can-execute
#AI #AIAgents #GenerativeAI #Skills #LLMs #TechnicalWriting #Documentation #SoftwareDocumentation
-
"Too often, API documentation writing is introduced as a series of rules or gut feelings about what seems obvious. For a beginning writer, that's a good approach: the rules are easily understood and conformed to, and they're rarely wrong. They're far from complete, however.
API documentation writing is an art, not a science. As the artist, your influence is no less important than anyone else’s. But you’ll need to understand more in order to take the writing to a new level. You’ll need to know theory, the hows and whys, and to think like a programmer. The theory here is not only to connect with clients but also to present information in the most efficient way possible. It’s these last points that learning API documentation writing does not do well.
The following is a talk-through. I talk about an element in conversational detail. I aim to discuss the important points, why an approach may be inappropriate, what the goals should be, and how to fix it. Along the way, I may make blunt statements. I do that for effect. By exposing the reason for the critique, we can get an understanding of the solution. We’ll look at this from the writer’s perspective."
#TechnicalWriting #APIDocumentation #SoftwareDocumentation #SoftwareDevelopment #Programming #APIs #TechnicalCommunication
-
"Test your documentation site against the Agent-Friendly Documentation Spec.
Agents don't use docs like humans. They hit truncation limits, get walls of CSS instead of content, can't follow cross-host redirects, and don't know about quality-of-life improvements like llms.txt or .md docs pages that would make life swell. Maybe this is because the industry has lacked guidance - until now.
afdocs runs 21 checks across 8 categories to evaluate how well your docs serve agent consumers. 10 are fully implemented; the rest return skip until completed."
https://www.npmjs.com/package/afdocs
#TechnicalWriting #SoftwareDocumentation #AI #AIAgents #Afdocs #Markdown #DocsAsCode #LLMSTXT
-
"When we write documentation, we often assume someone will read it top to bottom. Even when we skim, we start at the top, absorb context, build a mental model. And we infer stuff, like if you’re reading design system docs, you probably already know what a design system is.
AI agents don’t work like this. They retrieve the most relevant chunk based on semantic similarity and produce a response from that slice. If the definition is three paragraphs in and the agent retrieves paragraph one, it fills in the gaps.
That’s where hallucination creeps in. You’re absolutely right! Not because the model is careless, but because much of our documentation is structured for narrative flow, not retrieval. It was always fragile, humans were just good at compensating.
Writing for AI agents accidentally makes documentation more accessible. A screen reader user navigating by headings needs the same explicitness an AI agent needs. A new team member needs definitions that don’t assume prior knowledge. A developer working in a second language needs sentences that say exactly what they mean. Explicitness helps anyone who can’t rely on context to fill gaps.
Look at well-documented APIs. The ones that specify exactly what parameters do, what they return, what breaks. They’re used more, trusted more, cause fewer support tickets. Explicitness scales."
https://gerireid.com/blog/ai-is-accidently-making-documentation-accessible/
#TechnicalWriting #Accessibility #TechnicalCommunication #AI #SoftwareDocumentation #AIAgents #APIDocumentation #Markdown
-
"The reality is that documentation is no longer just a piece of context or data found when an external developer runs into an issue — it’s a first-class context object that needs to be treated with the same focus and intentionality as the API itself. Within this context, MCP offers something more than just putting all the documentation in a single store and hoping for the best — it provides a direct pathway between the developer and the provider, allowing you to discover intent and clarity like no other process currently on offer.
As we move towards a future focused around API discovery, we need to rethink how we look at documentation and its discovery — and solutions like MCP are going to play a huge part in making documentation and data clearer, more contextual, and more available."
https://nordicapis.com/using-mcp-for-api-documentation-discovery/
#AI #GenerativeAI #AIAgents #AgenticAI #MCP #MCPServers #Documentation #APIDocumentation #SoftwareDocumentation #DeveloperDocumentation #APIs #APIDiscovery
-
"The secret of how tech writers develop trustworthy information is that they practice genuine care for the end-to-end experience, learn the product experience inside and out (even if they start as an outsider), and build context that you can’t just get from writing.
This is “earned” context through building trust across other teams and stakeholders and through shipping successful (and unsuccessful) products in different environments and working cultures.
Tech writers (or whatever they’re called these days) are paid to develop (and maintain) trustworthy information, not just move it around. Automation can help with maintaining trustworthy information but it’s not all that technical writers do.
Earning or developing context well also takes real skills and demands a certain kind of character.
Skills & traits for developing trustworthy information
- Asking the right questions in the right way in the right place and at the right time. It’s no accident that some of the best technical writers I’ve worked with used to work as journalists.
- Constantly tracking what you know and don’t know with intellectual humility and rigor (Stay tuned for my future “Assumption Tracker app!”)
- Facilitating discussions to help surface internal confusion, misalignments, and other kinds of conversation debt
- Evaluating the trustworthiness of information
- Evaluating what others know, what product language world they live in, and what they don’t know
- Intellectual humility, honesty, and rigor"
https://jessicacanepa.com/blog/developing-trustworthy-information/
#TechnicalWriting #SoftwareDocumentation #TechnicalCommunication #SoftwareDevelopment #APIDocumentation #Automation
-
"In this scenario, it doesn't make a lot of sense to target your API documentation exclusively at developers. It's definitely time to write it in a way that your non-technical stakeholders understand. So, how can you document your API capabilities?
Identifying capabilities is an exercise that begins with understanding the benefits your API offers to consumers. Benefits are the things that consumers obtain after they use your API, not what your API has to offer. You need to understand how consumers use your API and what their daily habits are. A good framework is studying your consumers' jobs-to-be-done (JTBD). Each JTBD represents something one or more consumers are trying to accomplish by using your API. After you group JTBDs by categories according to their similarity, you'll notice that clusters of benefits begin to emerge. If you then order those clusters by degree of criticality, you end up with a list of the most important benefits your API offers to potential consumers. I think you can already see where this is leading. With the list of benefits, you can get to a list of API capabilities by translating each one. While the benefit is what consumers achieve, the capability is what helps them get there. Let's look at an example to make things easier."
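The quoted exercise could be sketched like this (all the jobs-to-be-done, clusters, benefits, and capability names are invented for illustration):

```python
# Hypothetical sketch: group jobs-to-be-done (JTBD) into clusters,
# attach a benefit to each cluster, then translate each benefit into
# a named capability.
jtbds = {
    "show a delivery route on a map": "routing",
    "estimate arrival time for a courier": "routing",
    "list nearby pickup points": "search",
}

benefits = {
    "routing": "consumers get reliable point-to-point travel plans",
    "search": "consumers find relevant locations quickly",
}

capabilities = {
    "routing": "Route calculation",
    "search": "Location search",
}

def capability_for(jtbd):
    """Map a single JTBD to the capability (and benefit) behind it."""
    cluster = jtbds[jtbd]
    return capabilities[cluster], benefits[cluster]

print(capability_for("list nearby pickup points"))
```

The capability names, not the endpoint names, are what the non-technical stakeholders in the quote would read first.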
https://apichangelog.substack.com/p/documenting-your-api-around-its-capabilities
#API #APIs #APIDocumentation #TransactionEnrichment #TechnicalWriting #SoftwareDocumentation #SoftwareDevelopment
-
This is so obvious, yet at the same time it seems to me that people rarely really understand what is at stake.
"Most of the breakdowns in a product don’t come from bad intentions or bad work. They come from the way teams divide responsibilities. Engineering focuses on the system behavior. Design focuses on screens and interaction. Product focuses on requirements and scope. Each group works inside its own boundary, and no one is responsible for checking whether all those boundaries line up.
That’s how you end up with a feature that technically “works,” but only if the user magically knows the right order of steps, or already understands a concept that was never explained, or follows a path that only makes sense inside one team’s mental model.
The writer is the first person who has to confront that reality. Not because we’re smarter—because documentation forces us to walk the entire experience in the same order the user will. No shortcuts. No inherited assumptions. No internal knowledge. Just: If I follow this from beginning to end, does it hold together?
That’s when the real gaps surface:
- Prerequisite steps that were never shown in the UI, but the system silently requires.
- Terminology drift where three teams named the same thing differently and nobody noticed.
- Logic branches that exist in engineering’s head, but not in the design or the workflow.
- Error states with no recovery, dropping users into dead ends no one tested.
- Hidden dependencies, like expecting users to configure something before they even have access to it.
- Contradictory steps created by teams working from different assumptions about how something is “supposed” to work.
None of these are “documentation problems.” They’re structural problems that documentation exposes because documentation is the first time anyone tries to describe the system as one connected experience."
https://brandihopkins.com/involve-writers-early/
#TechnicalWriting #SoftwareDevelopment #SoftwareDocumentation #DocsAsProduct #UX
-
🔍 / #software / #documentation / #github
Managing releases in a repository - GitHub Docs
You can create releases to bundle and deliver iterations of a project to users.
-
"For our technical writing teams, the velocity of Google Cloud's development presents two core problems: how do we keep pace with documenting new features and capabilities, and how do we ensure the existing documentation remains accurate?
To accelerate the creation process, we have integrated Gemini directly into our writers' authoring environments. This acts as a productivity multiplier, streamlining common tasks like generating formatted tables from unstructured content, translating between markup languages, and applying complex style guides with a single click. More significantly, the adoption of AI solutions enables writers to focus their time on strategic documentation solutions and ensure high quality content.
Just as important as creation is validation. For years, automated regression testing has been a staple for catching bugs in code. We are now bringing that same discipline to documentation—a goal that was long considered a dream due to the ambiguity of natural language. For our quickstarts, we use Gemini to read the procedural steps and automatically generate web orchestration scripts (using frameworks like Playwright). These scripts then execute the steps in a real Google Cloud environment, automatically verifying that our documentation accurately reflects the product's behavior. We run over 100 of these tests daily, ensuring our quickstarts are continuously validated and that you can trust the steps you're following."
#TechnicalWriting #AI #GenerativeAI #GoogleCloud #SoftwareDocumentation #APIDocumentation
-
"The basic idea of doc bug zero, as I explained in Defining bug zero, is to clear out all the tickets in the doc issue queue, essentially to finish all your documentation work. Doing so would be the ultimate statement about the productivity gains from AI. Despite my attempts to get to bug zero, it still eludes me. I’m realizing that there’s an art to working through a bug queue, and AI can only take me so far. Good project skills are also needed. One of those skills, which I’ll address in this post, is making it easy for people to review the changelists, or pull requests. (The terminology used in my area of doc work is changelists, or CLs, so that’s how I’ll refer to them here.)"
https://idratherbewriting.com/blog/make-easy-to-review-changelists
#TechnicalWriting #SoftwareDocumentation #ZeroBugs #Changelists #PullRequests
-
"In this podcast episode, Fabrizio Ferri Benedetti and I chat with guest Anandi Knuppel about MCP servers and the role that technical writers can play in shaping AI capabilities and outcomes. Anandi shares insights on how writers can optimize documentation for LLM performance and expands on opportunities to collaborate with developers around AI tools. Our discussion also touches on ways to automate style consistency in docs, and the future directions of technical writing given the abundance of AI tools, MCP servers, and the central role that language plays in it all."
https://idratherbewriting.com/blog/mcp-tools-language-tech-writing
#TechnicalWriting #AI #GenerativeAI #MCP #MCPServers #LLMs #SoftwareDocumentation #Docs
-
"Finding the right API is rarely straightforward. But once the AI locates an API, it needs to be evaluated. This is where API documentation comes in. Detailed descriptions tell AI what the API does, what data formats it uses, what authentication systems are in place, and any limitations it might have.
Good API documentation allows developers to speak directly to machines as well as human users. To enhance this process, generative engine optimization (GEO) is becoming increasingly important. Clear, well-defined data, articulate endpoint descriptions, parameter explanations, code snippets, sample calls, and real-world use cases all aid GEO as they provide context for picking the right API and improving understandability. llms.txt, an emerging standard similar to robots.txt but for AI, is becoming more useful for discovery, as it tells an LLM exactly what to look for instead of assessing each site path and making its best guess.
Improving API discoverability helps guarantee that the LLM always gets the most up-to-date information and data. It’s also a vital component of retrieval-augmentation generation (RAG), which makes good API documentation doubly vital as it allows AI to discover internal APIs as well as public ones, and supply the generation layer with accurate, relevant details."
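A minimal llms.txt along the lines the quote describes might look like this (product name and URLs are hypothetical; the format follows the llms.txt proposal: an H1, a blockquote summary, then H2 sections of annotated links):

```markdown
# Example Payments API

> REST API for payment processing. The links below cover setup,
> the full endpoint reference, and change history.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): authenticate and make a first call
- [API reference](https://example.com/docs/reference.md): endpoints, parameters, error codes

## Optional

- [Changelog](https://example.com/docs/changelog.md)
```

Each link's one-line annotation is what lets an LLM decide which page to fetch without crawling the whole site.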
https://nordicapis.com/how-to-optimize-api-documentation-for-ai-discoverability/
#APIs #APIDocumentation #GEO #APIDiscoverability #RAG #LLMs #AI #GenerativeAI #Metadata #TechnicalWriting #SoftwareDocumentation
-
"Finally, I haven’t even pitched the strongest argument for why technical writers and documentation will continue to be relevant in the future: AI tools are terrible without good documentation. In the same way that you need valid, accurate context when using AI tools to create documentation, AI tools need an accurate body of documentation to produce useful, hallucination-free outputs. Informal tests by my colleagues show that AI outputs improve by orders of magnitude when trained on more abundant and accurate documentation.
(...)
In other words, technical writers will create and package information specifically for AI consumption, ensuring the AI has the necessary context to produce accurate and relevant results. There’s a sales motive for keeping technical writers around, too. Let’s say an external developer needs to create, say, a mapping application for their project, and they decide they need routing logic. Following a vibecoding approach, they integrate your company’s MCP server into their IDE and tell their AI tool to create an app that draws routes from one point to another. If the AI tool can successfully fulfill the developer’s needs, requiring only that they provide an API key (which then initiates billing), the company that has provided this solution will sell more API services. No one wants to fiddle and fuss with hard-to-configure technology that doesn’t work, and by hard-to-configure, I mean APIs that require manual configuration rather than APIs you can configure with natural language."
https://idratherbewriting.com/blog/strategies-to-succeed-in-context-of-ai
#TechnicalWriting #TechnicalCommunication #AI #GenerativeAI #SoftwareDocumentation #SoftwareDevelopment #APIDocumentation
-
"A documentation platform is a product that provides capabilities—some free; some paid—for a range of activities, like authoring, editing, collaborating, monitoring, building, deploying, and publishing documentation.
Docs-as-code platforms are more common these days, as more people can code or leverage systems like AI that help them code. Traditionally, authoring to publication might take place locally, or via a software product that offered manual versioning, limited collaboration, and limited-to-no support for documentation pipeline automation, like setting up a CI/CD pipeline to deploy help documentation when the `main` branch has a new commit. If you remember the yesteryears of authoring, products like Subversion and TortoiseSVN may come to mind.
The platforms I’m writing about today are modern and all offer some type of free documentation generation, usually through static site generation; however, outside of what’s provided out of the box, there are notable differences between what’s provided for free and what’s provided at cost.
My goal for this article is to point out some core differences in add-ons, features, and enterprise-level support across these documentation platforms so that you choose the platform that’s best for you and your documentation readers.
For this article, I researched Fern, Mintlify, ReadMe, and Redocly."
https://www.copytree.io/post/choosing-modern-docs-platform
#TechnicalWriting #SoftwareDocumentation #APIDocumentation #APIs #SoftwareDevelopment #Programming #DocsAsCode
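The automation the post mentions, deploying docs when `main` gets a new commit, commonly takes the shape of a small CI workflow. This GitHub Actions sketch is a hypothetical example: it assumes a static site generator invoked via `npm run build` that writes its output to a `site/` directory; adjust both to your own toolchain.

```yaml
# .github/workflows/docs.yml — hypothetical docs deployment pipeline
name: Deploy docs
on:
  push:
    branches: [main]          # run on every new commit to main
permissions:
  pages: write                # required by the GitHub Pages deploy action
  id-token: write
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build        # build the static docs site
      - uses: actions/upload-pages-artifact@v3
        with:
          path: site/                       # generator output directory
      - uses: actions/deploy-pages@v4
```

The same shape works for other targets: swap the build command for your generator and the final step for whatever deploys to your host.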
-
"Errors in content, especially technical documentation, lead to mistrust. When you write a piece of content, consider the future of the content.
The future of the content depends on the purpose and type of content that you’re writing. This list contains some common expectations that readers might have about various content types:
- A blog post has a date stamp and isn’t kept continually updated.
- Technical documentation always matches the product version that it references.
- Architecture documents reflect the current state of the microservice architecture.
- An email gets the point across and can’t be edited after you send it.
You must consider the future and maintenance of any content that you write if your readers expect it to be kept up-to-date. To figure out how difficult maintaining your content will be, you can ask yourself these questions:
- How frequently does the thing I’m writing about change?
- How reliable does my content need to be?
- How quickly does my content need to be accurate (e.g., after a product release)?
By answering these questions, you can then make decisions about how you write your content.
- What level of detail will you include in your content?
- Will you focus your efforts on accuracy, speed, or content coverage?
- Do you want to include high-fidelity screenshots, gifs, or complex diagrams?
- Do you want to automate any part of your content creation?
- Who will review your content? How quickly and thoroughly will they review it?"
https://thisisimportant.net/posts/how-can-i-get-better-at-writing/
#TechnicalWriting #TechnicalCommunication #ProfessionalWriting #SoftwareDocumentation #InformationArchitecture
-
"How to leverage documentation effectively in Cursor through prompting, external sources, and internal context
Why documentation matters
Documentation provides current, accurate context. Without it, models use outdated or incomplete training data. Documentation helps models understand things like:
- Current APIs and parameters
- Best practices
- Organization conventions
- Domain terminology
And much more. Read on to learn how to use documentation right in Cursor without having to context switch."
https://docs.cursor.com/guides/advanced/working-with-documentation
#AI #GenerativeAI #Cursor #TechnicalWriting #Documentation #SoftwareDevelopment #APIDocumentation #SoftwareDocumentation