#json — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #json, aggregated by home.social.
-
Moonrepo Releases Moon v2.0 with WASM Plugin Toolchains and Overhauled CLI
Moonrepo, the developer productivity platform for monorepo management, has released moon v2.0, codenamed “Phobos”, its first major version…
#NewsBeep #News #Artsanddesign #Arts #ArtsAndDesign #Design #Development #Entertainment #JSON #moonrepo2release #Repository #UK #UnitedKingdom #webdevelopment
https://www.newsbeep.com/uk/583673/ -
https://www.europesays.com/ie/484371/ Moonrepo Releases Moon v2.0 with WASM Plugin Toolchains and Overhauled CLI #Arts #ArtsAndDesign #ArtsAndDesign #ArtsDesign #Design #Development #Éire #Entertainment #IE #Ireland #JSON #Moonrepo2Release #Repository #WebDevelopment
-
An architecture for automatically transforming JSON and XML data of any structure in a unified way
In today's IT landscape, JSON and XML are widely used data-representation formats, serving as a kind of "common language", a lingua franca for information exchange. This article presents an architecture for integrating data in hierarchical formats that dramatically reduces the effort involved, down to an almost fully universal pipeline that processes any kind of source document, up to and including automatic mapping into tabular data structures.
https://habr.com/ru/articles/1034884/
#xml #json #api #nosql #sql #базы_данных #алгоритмы #архитектура #автоматизация
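The universal-pipeline idea in the article above can be sketched as a recursive flattener that maps any nested JSON document onto flat column paths; the path-naming scheme below is an assumption for illustration, not the article's implementation:

```python
import json

def flatten(node, prefix=""):
    """Recursively map a nested JSON value to a flat {path: value} dict.

    Dict keys extend the path with '.', list indices with '[i]', so any
    document shape collapses into one row of scalar columns.
    """
    if isinstance(node, dict):
        out = {}
        for key, value in node.items():
            out.update(flatten(value, f"{prefix}.{key}" if prefix else key))
        return out
    if isinstance(node, list):
        out = {}
        for i, value in enumerate(node):
            out.update(flatten(value, f"{prefix}[{i}]"))
        return out
    return {prefix: node}

doc = json.loads('{"id": 1, "tags": ["xml", "json"], "meta": {"lang": "ru"}}')
print(flatten(doc))
# {'id': 1, 'tags[0]': 'xml', 'tags[1]': 'json', 'meta.lang': 'ru'}
```

The same walk works for XML once it is parsed into a tree, which is what makes a single pipeline plausible for both formats.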
-
Part 270: Customizing the MySQL Shell prompt
https://gihyo.jp/article/2026/05/mysql-rcn0270?utm_source=feed -
JSON is not always the most performant format when it comes to serializing an object (that is, turning an in-memory data structure into text so it can, for example, be exchanged with other systems).
🔗 https://adamfaulkner.github.io/serialization_from_nodejs.html
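The linked article benchmarks Node.js specifically, but the trade-off it describes shows up in any language: JSON text is self-describing but bulkier and slower than a fixed binary layout. A small Python sketch (the record shape here is an assumption for illustration):

```python
import json
import struct

# One record: a 32-bit id and three float64 readings.
record = {"id": 7, "readings": [0.1, 0.2, 0.3]}

# JSON: human-readable and self-describing, but every value is spelled out.
text = json.dumps(record).encode()

# Fixed binary layout: little-endian int32 followed by three doubles.
binary = struct.pack("<i3d", record["id"], *record["readings"])

print(len(text), len(binary))  # the binary form is much smaller (28 bytes)
```

The binary form also skips number-to-text conversion entirely, which is where much of JSON's serialization cost goes.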
-
I randomly followed Fagner years ago via RSS because I liked her writing about REST APIs, and here is another primer on #ReST and #HATEOAS. As a consumer of #APIs I have struggled with many of the points she says developers leave out.
Also, I love #JSON for some unknown reason.
https://fagnerbrack.com/what-is-a-rest-api-and-why-yours-probably-isnt-one-7e5fb65ece4d
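One point such primers usually make is that a truly RESTful response carries its own next actions as hypermedia links. A minimal sketch of what a HATEOAS-style JSON body looks like (the resource and link names are made up for illustration, not taken from the article):

```python
import json

def order_representation(order_id, status):
    """Build a response whose hypermedia links depend on resource state."""
    links = [{"rel": "self", "href": f"/orders/{order_id}"}]
    if status == "pending":
        # Only a pending order advertises the cancel transition.
        links.append({"rel": "cancel", "href": f"/orders/{order_id}/cancel"})
    return {"id": order_id, "status": status, "_links": links}

print(json.dumps(order_representation(42, "pending"), indent=2))
```

The client discovers what it may do next from `_links` instead of hard-coding URL patterns, which is the HATEOAS constraint in a nutshell.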
-
#Development #Reports
Adding author context to RSS · A proposal to bring author identity to web feeds https://ilo.im/16cmp6
#Business #WebFeeds #RSS #Atom #JSON #Websites #SmallWeb #IndieWeb #Development #WebDev
-
CraftHub for VS Code: edit JSON as a table right in the editor
If you have ever caught yourself hunting for the right line in a 300-line JSON file, this article is for you. CraftHub now lives right inside VS Code: open a file, switch to the table view, make the fix, switch back.
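The table view such editors present is essentially a pivot of a JSON array of objects into headers and rows; a minimal sketch of that transformation (this is an illustration, not CraftHub's implementation):

```python
import json

def to_table(json_text):
    """Turn a JSON array of flat objects into a header row plus value rows,
    filling missing keys with empty cells, as a table-style editor would."""
    records = json.loads(json_text)
    headers = sorted({key for record in records for key in record})
    rows = [[record.get(header, "") for header in headers] for record in records]
    return headers, rows

headers, rows = to_table('[{"name": "a", "size": 1}, {"name": "b"}]')
print(headers, rows)
# ['name', 'size'] [['a', 1], ['b', '']]
```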
-
🎩🤖 Oh, bravo! Another #developer has bestowed us with the revolutionary ability to control desktop apps via #JSON and AI—because, clearly, what the world desperately needed was yet another layer of #complexity to break when you just need to open a spreadsheet. 🥴 Meanwhile, the rest of us will be over here, waiting for our token savings to materialize like unicorns. 🦄
https://github.com/lahfir/agent-desktop #innovation #AI #desktopapps #HackerNews #ngated -
If JavaScript is Lisp for the masses (I don't have an attribution), JSON must necessarily be S-expressions for the masses.
There is at least one programming language whose programs are written as JSON forms. And "XML is a giant step in no direction at all" (Erik Naggum, as far as I know).
And yes, there is at least one programming language whose programs are written as XML documents. -
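The S-expression analogy above is easy to demonstrate: a Lisp-style prefix form maps directly onto a JSON array, and a few lines suffice to evaluate one. This toy evaluator is purely illustrative, not any of the languages alluded to:

```python
import json

# Operator table for the toy language.
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def eval_form(form):
    """Evaluate a JSON array as a prefix expression, e.g. ["+", 1, ["*", 2, 3]]."""
    if isinstance(form, list):
        op, *args = form
        return OPS[op](*(eval_form(arg) for arg in args))
    return form  # numbers are self-evaluating, as in Lisp

program = json.loads('["+", 1, ["*", 2, 3]]')
print(eval_form(program))  # 7
```

Swap the square brackets for parentheses and you have `(+ 1 (* 2 3))`, which is the whole joke.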
Every time that "json" is mentioned in one sentence with a Lisp (machine), a fairy dies.
Sorry for killing two fairies here, please try not to boost! 💕
-
Loader Object: Simplified Data Management #Datamanagement #Json #Unity #Scriptableobjects #Performance #Decoupling #Serialization #AssetStore
https://u3dn.com/packages/loader-object-simplified-data-management-229331
-
archive DataHoarders
markdown formatted
A nicely presented DataHoarders archive has been created for the Epstein files.
The archive is accessible online, as linked in the sources below.
Even if the content itself is of less interest to you, the way the frontend and backend are built is quite interesting. I have an interest in both backend and frontend programming and networking, so I think this is a treasure trove from both perspectives.
YMMV
When you glance through the Wikipedia pages on Jeffrey Epstein you will find interesting tidbits about his rise and fall. Read them multiple times and you will learn more than you may want to about this man, enabled by different forces to flourish in his behaviour. Go in with a neutral mind and read the sources if you want to know more.
The Wikipedia material on Epstein is LONG and the amount of data is massive. Don't expect to even skim it in just a few minutes.
There are 305 references in this document.
When you go to this DataHoarders media archive you will get a pleasant presentation of the visual and printed data as released by the US DOJ.
Quotes from the archive creators:
Hey! We are two college students and we just want to share the technical part of our project because you might appreciate it. The DOJ released the Epstein files and we decided to host the entire thing ourselves and build a proper interface on top of it. Here is what the archive actually looks like.
354GB total. 160GB of raw data from the original files and 194GB of our own processed data. Around 600,000 PDF files which actually contain roughly 1,400,000 individual pages inside them since many PDFs bundle multiple pages together when you scroll down. All 3,200 videos have been converted to HLS with adaptive bitrate streaming so quality adjusts automatically to your connection the same way Netflix does it.
For the videos we ran a full audio extraction pipeline, converting video to audio MP4 and then audio to text, generating SRT subtitle files for every single video that contains spoken content. This means you can search for a word that was spoken in any video and find the exact moment it was said
For the PDFs we converted every single page to PNG and ran OCR across all 1,400,000 pages. We then used Go to run AI agents that analyze and summarize the OCR output across the documents. The search engine works through tags associated to each specific file, built on top of all that processed data.
The frontend is React Native, infrastructure runs through Cloudflare.
We also added the possibility for a user to make an anonymous account to like, add a comment and reply to others or make your own investigation post on our platform.
We are not stopping here. There is still a lot to do and we are pushing updates constantly.
Z
Naturally, ffmpeg and curl are the crucial tool combo for all this converting, fetching and serving to work smoothly, but I don't need to tell you that. Many more tools are used; go in, read and learn!
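The speech-to-text step described above ends in SRT subtitle files. As a sketch of what producing that format from timed transcript segments looks like (the segment data and helper are illustrative, not the archive's actual pipeline):

```python
def to_srt(segments):
    """Render (start_sec, end_sec, text) tuples as an SRT subtitle file."""
    def stamp(seconds):
        # SRT timestamps are HH:MM:SS,mmm with a comma before milliseconds.
        ms = round(seconds * 1000)
        hours, rest = divmod(ms, 3_600_000)
        minutes, rest = divmod(rest, 60_000)
        secs, ms = divmod(rest, 1000)
        return f"{hours:02}:{minutes:02}:{secs:02},{ms:03}"

    blocks = []
    for index, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{index}\n{stamp(start)} --> {stamp(end)}\n{text}\n")
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "First line."), (2.5, 5.0, "Second line.")]))
```

Because each cue carries its own timestamps, a text search over the SRT files points straight at the moment a word was spoken, which is exactly the search feature the creators describe.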
Sources:
https://exposingepstein.com/home
https://en.wikipedia.org/wiki/Jeffrey_Epstein
#programming #database #video #HLS #pdf #recoding #streaming #json #backend #frontend #react #srt #subtitles #FFMPEG
-