home.social

#apis — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #apis, aggregated by home.social.

  1. Google Ads DevCast: all 4 episodes on AI, APIs, and scale: Google's Ads DevCast covers MCP servers, Data Manager API, developer workflows, and advertising API integration best practices across all four 2026 episodes. ppc.land/google-ads-devcast-al #GoogleAds #DevCast #AI #APIs #Advertising

  2. I can’t remember where I found it but it’s in my queue: Apives, a directory of APIs. From the front page: “Apives curates APIs with clear pricing, stability, access types, and real endpoint examples. This helps developers avoid guesswork caused by incomplete docs or outdated GitHub repositories.”

    https://rbfirehose.com/2026/05/13/apives-a-directory-of-apis/
  3. 🎉 Microcks Community Meeting. Bring your questions, thoughts, experiments, or just your curiosity.

    🗓 May 12, 2026 ⏰ 9–10 a.m. CET / 1–2 p.m. Bengaluru

    🔗 Join live: github.com/microcks/community/

    #OpenSource #APIs #DeveloperExperience #Community #Collaboration 🙌

  4. I don't agree at all with the statement that "API documentation is not technical writing", nor with the notion that a technical writer can't necessarily know enough about programming without being a software developer. Hint: how many Udemy courses are there on API development, API design, and API programming? Hundreds? Thousands?

    Also, nowadays, with tools like Claude Code and Codex, testing APIs through platforms like Postman should be seen as work for QA analysts rather than for technical writers, since AI tools like those give you a more contextualized view of what a specific API endpoint does, particularly its edge cases and "oddballs". As a technical writer, I can ask these tools to highlight specific use cases where the endpoint can be really useful.

    That's not to say that the process can be completely automated. Not at all. Especially because a how-to guide explaining how to make use of an API endpoint can't simply be generated end-to-end by an LLM. Besides, for the foreseeable future and probably even beyond that, the final output should always be reviewed by an engineer. In any case, what I'm talking about is totally different from automatically generated API reference documentation.

    But there is no point in knowing how to send a request to an API endpoint and what the typical response will be - in both the success and error cases - if I, as a developer, don't have a compelling enough reason to use that endpoint. A totally different thing again is an API integration tutorial, that is, how to integrate a complete API into your own app. But there you will, of course, also need the intervention of a, guess what, TECHNICAL WRITER!! :-D

    "I have said that API documentation is not technical writing and that it is a mistake to try. There are many details clients need to have. This includes format, presentation, and client experience."

    robertdelwood.medium.com/more-

    #TechnicalWriting #API #APIs #APIDocumentation #SoftwareDocumentation
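
As a concrete illustration of the "success and error responses" the post above says a how-to guide should cover, here is a minimal sketch. The endpoint, field names, and payload shapes are entirely hypothetical, invented only to show the kind of paired success/error example documentation readers need:

```python
# Hypothetical docs example: what a /v1/widgets endpoint's success and
# error responses might look like, and how a client could summarize them.
# Endpoint name, "id" field, and error envelope are all invented.
import json

def summarize_response(status_code: int, body: str) -> str:
    """Classify a (hypothetical) /v1/widgets response for a how-to guide."""
    payload = json.loads(body)
    if 200 <= status_code < 300:
        # Success responses carry the created resource's identifier.
        return f"success: created widget {payload['id']}"
    # Error responses carry a machine-readable code plus a human message.
    return f"error {status_code}: {payload['error']['message']}"

ok = summarize_response(201, '{"id": "w_123"}')
bad = summarize_response(
    422, '{"error": {"code": "invalid_name", "message": "name is required"}}'
)
```

A guide that shows both branches side by side answers the "what will the typical response be" question without the reader having to probe the endpoint themselves.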

  9. FYI: HubSpot bets on open APIs and MCP to let any agent run its CRM: HubSpot's CPTO today detailed a plan for full API parity and an open MCP server so any AI agent can access CRM data and intelligence across 280,000+ customers. ppc.land/hubspot-bets-on-open- #HubSpot #APIs #CRM #ArtificialIntelligence #MCP

  10. Huge news: Microcks is officially a CNCF Incubating project! 🚀

    We are thrilled to announce that the CNCF has voted to move Microcks from Sandbox to the Incubating stage. This milestone is a testament to our incredible community of contributors and 30+ public adopters who believe in making API mocking and testing effortless.

    Read the full announcement here: cncf.io/blog/2026/05/07/microc

    #Microcks #CNCF #CloudNative #APIs #OpenSource

  11. I randomly followed Fagner years ago via RSS because I liked her writing about REST APIs, and here is another primer on #ReST and #HATEOAS. As a consumer of #APIs I struggled with many of the points she says developers leave out.

    Also, I love #JSON for some unknown reason.

    fagnerbrack.com/what-is-a-rest

  12. That seems a bit similar to what I currently do... ;-)

    "To use an analogy about my process, compare the scenario to a senior tech writer (TW) working next to a junior TW, where the senior TW mostly provides observation and feedback (in this analogy, the junior TW represents the AI agent). The junior TW creates some docs and presents them to the senior TW, who leaves comments explaining what needs to change. The junior TW takes notes about all the feedback in a journal. By the end of the process, the junior TW has three pages of notes.

    After the process finishes, those notes aren’t lost. They form the basis of the SKILL file. The next time the senior TW sits down with another junior TW (a different one, as the session changed), the new junior TW produces much better output thanks to the notes. With each iteration, the notes get more detailed — anticipating common errors, adding validation checks, laying a foundation so that each step doesn’t build from faulty information. After a dozen iterations, the senior TW finds they have less and less feedback to give.

    Eventually, the senior TW no longer needs to sit next to the junior TW in close observation. The junior TW proceeds autonomously through each step in the SKILL and just shows the final result. One key difference from real mentorship, though: the AI agent doesn’t carry any memory between sessions. It reads the SKILL file cold each time. All the “learning” lives in the document, not in the agent. This makes the SKILL file itself the critical asset — if it’s vague or incomplete, the agent’s output regresses immediately."

    idratherbewriting.com/blog/int

    #TechnicalWriting #APIs #APIDocumentation #Skills #AgenticAI #AI #GenerativeAI #LLMs #SoftwareDocumentation
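
The loop the quoted post describes - each session starts cold, reading only the accumulated notes, while reviewer feedback is appended back into the SKILL file - can be sketched as below. The file name, note format, and helper names are invented for illustration, not taken from the quoted article:

```python
# Hypothetical sketch of the SKILL-file loop: the agent has no memory
# between sessions, so every session begins by reading the notes file
# cold, and reviewer feedback is persisted back into that same file.
from pathlib import Path

def start_session(skill_file: Path) -> str:
    """A session starts cold: its only 'memory' is the SKILL file's text."""
    return skill_file.read_text() if skill_file.exists() else ""

def record_feedback(skill_file: Path, note: str) -> None:
    """Reviewer comments are appended, becoming the next session's input."""
    with skill_file.open("a") as f:
        f.write(f"- {note}\n")
```

Because all "learning" lives in this one document, its quality gates the agent's output: a vague or incomplete SKILL file means every fresh session inherits that vagueness.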
