home.social

#json — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #json, aggregated by home.social.

  1. Moonrepo Releases Moon v2.0 with WASM Plugin Toolchains and Overhauled CLI

    Moonrepo, the developer productivity platform for monorepo management, has released moon v2.0, codenamed “Phobos”, its first major version…
    #NewsBeep #News #Artsanddesign #Arts #ArtsAndDesign #Design #Development #Entertainment #JSON #moonrepo2release #Repository #UK #UnitedKingdom #webdevelopment
    newsbeep.com/uk/583673/

  2. A unified architecture for automatically transforming JSON and XML data of any structure

    In today's IT landscape, the JSON and XML data-representation formats are in wide use, serving as a kind of "common language", a lingua franca for exchanging information. This article presents an architecture for integrating data in hierarchical formats that radically reduces the effort involved, down to an almost fully universal pipeline that can process any kind of source document, up to and including automatic mapping into tabular data structures.

    habr.com/ru/articles/1034884/

    #xml #json #api #nosql #sql #databases #algorithms #architecture #automation
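
    A minimal sketch of the idea, assuming the simplest possible tabular mapping: every leaf of a hierarchical document becomes a (path, value) row. The function and row shape here are invented for illustration and are not the article's actual design.

    ```typescript
    // Flatten an arbitrary JSON document into (path, value) rows -- the most
    // basic form of "automatic mapping into tabular data structures".
    type Leaf = string | number | boolean | null;
    type Row = { path: string; value: Leaf };

    function flatten(node: unknown, path = "$", rows: Row[] = []): Row[] {
      if (node !== null && typeof node === "object") {
        // Arrays and objects are both traversed; array indices become keys.
        const entries: [string, unknown][] = Array.isArray(node)
          ? node.map((v, i): [string, unknown] => [String(i), v])
          : Object.entries(node as Record<string, unknown>);
        for (const [key, value] of entries) flatten(value, `${path}.${key}`, rows);
      } else {
        rows.push({ path, value: node as Leaf });
      }
      return rows;
    }

    // Any source document, whatever its nesting, yields a uniform table:
    console.table(flatten({ order: { id: 7, items: [{ sku: "a" }, { sku: "b" }] } }));
    ```

    An XML document run through an XML-to-object parser could feed the same function, which is roughly what a format-agnostic pipeline buys you.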

  3. JSON is not always the most efficient format when it comes to serializing an object (turning an in-memory data structure into text so that it can, for example, be exchanged with other systems).

    🔗 adamfaulkner.github.io/seriali

    #JSON #NodeJS #performance
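
    A quick sketch of the kind of comparison the linked post makes, using Node's built-in v8.serialize as a stand-in binary format; the post itself may benchmark different codecs, and real measurements need a proper benchmark harness.

    ```typescript
    // Compare JSON text serialization against a binary serializer
    // (Node's built-in v8.serialize). Timings are rough indications only.
    import { serialize } from "node:v8";

    const payload = {
      users: Array.from({ length: 100_000 }, (_, i) => ({
        id: i,
        name: `user${i}`,
        active: i % 2 === 0,
      })),
    };

    console.time("JSON.stringify");
    const text = JSON.stringify(payload);
    console.timeEnd("JSON.stringify");

    console.time("v8.serialize");
    const bin = serialize(payload);
    console.timeEnd("v8.serialize");

    console.log(`JSON: ${Buffer.byteLength(text)} bytes, v8: ${bin.length} bytes`);
    ```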

  4. I randomly followed Fayner years ago via RSS because I liked her writing about REST APIs, and here is another primer on #ReST and #HATEOAS. As a consumer of #APIs, I struggled with many of the points she says developers leave out.

    Also, I love #JSON for some unknown reason.

    fagnerbrack.com/what-is-a-rest
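
    For readers new to the term: HATEOAS means a response carries links describing what the client can do next, so clients navigate by following links rather than hard-coding URLs. A hypothetical response shape (all field names and URLs invented):

    ```typescript
    // A HATEOAS-style response: the representation embeds its own navigation.
    const orderResponse = {
      id: 42,
      status: "processing",
      _links: {
        self:   { href: "/orders/42", method: "GET" },
        cancel: { href: "/orders/42/cancel", method: "POST" },
        items:  { href: "/orders/42/items", method: "GET" },
      },
    };

    // The client discovers the cancel action from the response itself,
    // not from out-of-band documentation:
    const { method, href } = orderResponse._links.cancel;
    console.log(`To cancel: ${method} ${href}`);
    ```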

  5. CraftHub for VS Code: edit JSON as a table right in the editor

    If you have ever caught yourself hunting for the right line in a 300-line JSON file, this article is for you. CraftHub now lives right inside VS Code: open a file, switch to the table view, fix what you need, and switch back.

    habr.com/ru/articles/1030994/

    #json #typescript #vscode_extension #vscode
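
    The underlying round trip is easy to picture: a uniform JSON array maps to rows and columns, a cell edit mutates the data, and the result is serialized back. A toy sketch of that idea (CraftHub's actual model is not documented here):

    ```typescript
    // JSON -> table -> edit -> JSON round trip, the idea behind a table view.
    type Row = Record<string, string | number | boolean | null>;

    const source = `[
      {"id": 1, "name": "alpha", "done": false},
      {"id": 2, "name": "beta",  "done": true}
    ]`;

    const rows: Row[] = JSON.parse(source); // "switch to the table view"
    console.table(rows);

    rows[0].done = true;                    // edit one cell

    const updated = JSON.stringify(rows, null, 2); // "switch back" to text
    console.log(updated);
    ```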

  6. With the next #release candidate of #NumeRe we will also ship three new example scripts showing

    - how general binary files can be read

    - how one can translate C file structures into NumeRe code during file import and

    - how #JSON files can be traversed using DictStruct instances.

  7. 🎩🤖 Oh, bravo! Another #developer has bestowed us with the revolutionary ability to control desktop apps via #JSON and AI—because, clearly, what the world desperately needed was yet another layer of #complexity to break when you just need to open a spreadsheet. 🥴 Meanwhile, the rest of us will be over here, waiting for our token savings to materialize like unicorns. 🦄
    github.com/lahfir/agent-desktop #innovation #AI #desktopapps #HackerNews #ngated

  8. If JavaScript is Lisp for the masses (I don't have an attribution), JSON must necessarily be S-expressions for the masses.
    There is at least one programming language whose programs are written as JSON forms.

    And "XML is a giant step in no direction at all" (Erik Naggum, as far as I know).
    And yes, there is at least one programming language whose programs are written as XML documents.

    #JSON
    #Lisp
    #SExpressions

    @janneke
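
    The "programs written as JSON forms" point is easy to demonstrate: a JSON array can play the role of an S-expression, (operator operand ...). A toy evaluator, invented here and not any particular language's:

    ```typescript
    // A JSON form as a program: arrays are (op arg1 arg2 ...) expressions.
    type Expr = number | [string, ...Expr[]];

    function evaluate(e: Expr): number {
      if (typeof e === "number") return e;
      const [op, ...args] = e;
      const vals = args.map(evaluate);
      switch (op) {
        case "+": return vals.reduce((a, b) => a + b, 0);
        case "*": return vals.reduce((a, b) => a * b, 1);
        default:  throw new Error(`unknown operator: ${op}`);
      }
    }

    // (+ 1 (* 2 3)) in S-expression syntax, written as a JSON form:
    const program: Expr = ["+", 1, ["*", 2, 3]];
    console.log(evaluate(program)); // 7
    ```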

  9. Every time that "json" is mentioned in one sentence with a Lisp (machine), a fairy dies.

    Sorry for killing two fairies here, please try not to boost! 💕

    #emacs
    #emacsJsonRpc
    #json
    #lisp
    #scheme

  10. archive DataHoarders

    markdown formatted

    A nicely presented DataHoarders archive has been created for the Epstein files.

    The archive is accessible online at the links given in the sources below.

    Even if the content itself is of less interest to you, the way the frontend and backend are built is quite interesting. I am interested in both backend and frontend programming as well as networking, so I think this is a treasure trove from both perspectives.

    YMMV

    When you glance through the Wikipedia pages on Jeffrey Epstein you will find interesting tidbits about his nature, rise, and fall. Read them multiple times and you will know more than you may want to about this man, who was enabled by various forces to flourish in his behaviour. Go in with a neutral mind and read the sources if you want to know more.

    The Wikipedia material on Epstein is LONG and the amount of data is massive. Don't expect to even skim it in just a few minutes.

    There are 305 references in this document.

    When you visit this DataHoarders media archive you will find a pleasant presentation of the visual and printed data as released by the US DOJ.

    Quotes from the archive creators:

    Hey! We are two college students and we just want to share the technical part of our project because you might appreciate it. The DOJ released the Epstein files and we decided to host the entire thing ourselves and build a proper interface on top of it. Here is what the archive actually looks like.

    354GB total. 160GB of raw data from the original files and 194GB of our own processed data. Around 600,000 PDF files which actually contain roughly 1,400,000 individual pages inside them since many PDFs bundle multiple pages together when you scroll down. All 3,200 videos have been converted to HLS with adaptive bitrate streaming so quality adjusts automatically to your connection the same way Netflix does it.

    For the videos we ran a full audio extraction pipeline, converting video to audio MP4 and then audio to text, generating SRT subtitle files for every single video that contains spoken content. This means you can search for a word that was spoken in any video and find the exact moment it was said.

    For the PDFs we converted every single page to PNG and ran OCR across all 1,400,000 pages. We then used Go to run AI agents that analyze and summarize the OCR output across the documents. The search engine works through tags associated to each specific file, built on top of all that processed data.

    The frontend is React Native, infrastructure runs through Cloudflare.

    We also added the possibility for a user to make an anonymous account to like, add a comment and reply to others or make your own investigation post on our platform.

    We are not stopping here. There is still a lot to do and we are pushing updates constantly.

    Z

    Naturally, ffmpeg and curl are a crucial tool combo for all this converting, fetching, and serving to work smoothly, but I don't need to tell you that. Many more tools are used; go in, read, and learn!
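
    A minimal sketch of the HLS step the creators describe, driving ffmpeg from Node with a single rendition; their real adaptive-bitrate setup would encode several resolutions and add a master playlist. Paths and settings are illustrative.

    ```typescript
    // Convert one video to HLS via ffmpeg (single rendition only).
    import { execFileSync } from "node:child_process";

    function toHls(input: string, outDir: string): void {
      execFileSync("ffmpeg", [
        "-i", input,
        "-c:v", "libx264",          // H.264 video
        "-c:a", "aac",              // AAC audio
        "-hls_time", "6",           // ~6-second segments
        "-hls_playlist_type", "vod",
        "-hls_segment_filename", `${outDir}/seg_%04d.ts`,
        `${outDir}/index.m3u8`,
      ]);
    }

    toHls("deposition_001.mp4", "./hls/deposition_001");
    ```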

    Sources:

    exposingepstein.com/home

    en.wikipedia.org/wiki/Jeffrey_

    reddit.com/r/DataHoarder/comme

    #programming #database #video #HLS #pdf #recoding #streaming #json #backend #frontend #react #srt #subtitles #FFMPEG

  11. TIL:

    JSON-LD, used heavily in #ActivityPub, is a way of representing #RDF triples.

    #JSON
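
    A small example of what that means (document invented for illustration): each JSON-LD statement expands to a (subject, predicate, object) triple.

    ```typescript
    // A minimal JSON-LD document and the RDF triple it encodes.
    const doc = {
      "@context": { name: "http://schema.org/name" },
      "@id": "https://example.org/alice",
      name: "Alice",
    };

    // Expanding the document yields one triple (subject, predicate, object):
    //   <https://example.org/alice> <http://schema.org/name> "Alice" .
    console.log(JSON.stringify(doc, null, 2));
    ```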

  12. Any good recommendations for free and secure JSON readers? #askfedi #json