#sparql — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #sparql, aggregated by home.social.
-
[#veille] The GARANCE web application is now online - semantised reference data and entities of the Archives nationales
https://www.archives-nationales.culture.gouv.fr/actualites-professionnelles/mise-en-ligne-de-lapplication-web-garance
#lod #archivesnationales #linkeddata #archives #rdf #sparql #ricO #skos #fairdata -
#Gitterdan GmbH has released #SPARQLMojo, our #ORM library for #SPARQL, as an #OpenSource project on #Codeberg.
https://pypi.org/project/SPARQLMojo/
(At some point I'll write a blog post about what it's like developing a library like this from scratch using #Claude. 😅)
-
Live SPARQL Query Page Links:
[1] https://tinyurl.com/Query-Definition
[2] https://tinyurl.com/Query-Solution-Page
#Wikidata #SPARQL #VirtuosoRDBMS #LODCloud #LinkedData #KnowledgeGraphs #SemanticWeb
-
🔹 RDF.ex 3.0 is out!
The headline feature: the new RDF.Data.Source protocol, following Elixir's Enumerable/Enum pattern for RDF data. Implement a small set of primitives, get a full API for iteration, transformation, navigation, and aggregation - across Descriptions, Graphs, Datasets, and custom implementations.
Plus various other improvements.
Hex: https://hex.pm/packages/rdf User Guide: https://rdf-elixir.dev/rdf-ex/rdf-data
#RDF #SPARQL #SemanticWeb #LinkedData #KnowledgeGraph #Elixir -
🔹 Gno is an Elixir library for managing RDF datasets in SPARQL triple stores.
A unified API across Fuseki, Oxigraph, QLever, and GraphDB - with an extensible commit system featuring a middleware pipeline, transactional execution, and automatic rollback.
Ontogen's entire versioning logic is implemented as Gno commit middleware.
Hex: https://hex.pm/packages/gno User Guide: https://rdf-elixir.dev/gno/
#RDF #SPARQL #SemanticWeb #LinkedData #KnowledgeGraph #Elixir -
[#veille] "Building and opening up structured historical data on the prefectoral corps" - Research blog of the department for the #histoire of the prefectoral corps and the Ministry of the Interior
https://hdp.hypotheses.org/1328
Or: how to mutually enrich #archives inventories and #wikidata (with some #sparql inside a #uMap map 😇)
#opencontent #opendata #prosopographie #archivesnationales #corpsprefectoral #workinprogress #viedarchiviste #histoire #19esiecle #20esiecle #histodons
-
New blog post: "Rescuing Scholia #3: We did it!" https://chem-bla-ics.linkedchemistry.info/2026/02/28/rescuing-scholia-3-we-did-it.html https://doi.org/10.59350/kd793-2fe02
"For now, however, please use qlever.scholia.wiki." https://qlever.scholia.wiki/
Replies show up in the blog.
-
A great #Wikibase workshop today, using the #Kirchengeschichte (church history) of @GermaniaSacra in @FactGrid as the example 😊. Hands-on and with room for questions, so we even got some sparkling #SPARQL results. ✨ ☀️
Many thanks!! @DHdKonferenz #BärbelKröger #ChristianPopp @OlafSimons @t_duan
#DHd2026 #Normdaten #Datenmodellierung #Objektdaten #Objektbiografie #Sammlungen #Kulturerbe #Datenintegration #Museum @nfdi4objects @NFDI4Memory #Informationsprovenienz #Datenqualität
-
Released wikibase-cli v20.0.0 ✨
It includes a new command – `wb graph-path` – to find the path between a subject and an object via a given property on the entity relations graph.
Package: https://www.npmjs.com/package/wikibase-cli
Changelog: https://github.com/maxlath/wikibase-cli/blob/main/CHANGELOG.md#2000---2026-02-11 -
A paper of mine was published in the Journal of Open Humanities Data: "Developing a Data Model for Theatre Productions in a Wikibase Instance: A Case Study Approach" ☺️ 🎭
I developed a data model that can map #productions, #costumes, spatial settings, and interactive audience elements. Implemented in a #Wikibase instance, it enables #SPARQL queries on these elements of productions, as well as on roles and casts.
You can read it here: https://doi.org/10.5334/johd.434
-
What is in #wikidata? The answer to this question on the Wikidata:Statistics page (https://www.wikidata.org/wiki/Wikidata:Statistics) hasn't been updated for a while. This #sparql query, run in #qlever, provides a snapshot of the current situation (it's not an exact equivalent, as P31/P279* times out even in QLever, but it conveys the idea): https://qlever.dev/wikidata/k9TJoh
Articles have the lion's share; paintings and manifestations of written works are well represented, though.
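The shape of such a snapshot query is a standard group-by over direct P31 values (a minimal sketch; the linked QLever query may differ in detail, e.g. in how it works around the P279* timeout):

```sparql
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

# Count items per direct instance-of (P31) class, largest classes first
SELECT ?class (COUNT(?item) AS ?count) WHERE {
  ?item wdt:P31 ?class .
}
GROUP BY ?class
ORDER BY DESC(?count)
LIMIT 25
```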
-
new blog post: "Rescuing @wdscholia #2: getting closer" https://chem-bla-ics.linkedchemistry.info/2025/12/31/rescuing-scholia-2-getting-close.html https://doi.org/10.59350/6t2qh-2f839
"But we are getting close. So, please give qlever.scholia.wiki a go, and let us know your observations. As Linus’s law writes: Given enough eyeballs, all bugs are shallow."
Replying to this post makes it show up in my blog.
-
We have moved closer to having the option to switch to a #QLever backend. Beta testers can assist by exploring the interim QLever-backed Scholia instance: https://qlever.scholia.wiki/
Any issues can be reported here (we appreciate it!): https://github.com/ad-freiburg/scholia/issues
-
Ok, it now seems as if #qlever mapped the data namespace (data:Q42 a schema:Dataset) onto the wd namespace (wd:Q42 a wikibase:Item), which makes it more compatible with #wikidata. I still find it a bit confusing, as it differs from the RDF source. If you want to get, say, the number of sitelinks of an element, you must now use "wd:Q42 wikibase:sitelinks ?n" in your #SPARQL, both in qlever and in #WDQS. Previously, IIRC, you had to write "?dataset schema:about wd:Q42 . ?dataset wikibase:sitelinks ?n".
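Spelled out, the two forms contrasted above look like this (a sketch; wd:Q42 is just the example entity used in the post):

```sparql
PREFIX wd:       <http://www.wikidata.org/entity/>
PREFIX schema:   <http://schema.org/>
PREFIX wikibase: <http://wikiba.se/ontology#>

# Current form, working on both the QLever Wikidata endpoint and WDQS:
SELECT ?n WHERE { wd:Q42 wikibase:sitelinks ?n . }

# Earlier form, going through the data: namespace via schema:about:
# SELECT ?n WHERE { ?dataset schema:about wd:Q42 ; wikibase:sitelinks ?n . }
```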
-
Tonight (20:00–21:00) in the Free Knowledge Habitat: Wikidata Live Querying! Together we'll come up with interesting SPARQL queries for Wikidata ^^ (come in numbers, because on my own I won't have that many ideas :blobfoxwinkmlem:) https://pretalx.wikimedia.de/39c3-2025/talk/VTELBK/
-
#MarcXML encapsulated as a string in #JSON is a pain to work with, as I am finding out while trying to query #HathiTrust APIs for digitised items with a list of #OCLC numbers obtained via #SPARQL from #Wikidata.
Things I wish for:
1. direct access to the MARCXML from HathiTrust
2. a publicly accessible API for #Worldcat / OCLC to find holdings -
RE: https://social.edu.nl/@tgx_um/115645028833529913
On Monday we have a go/no-go decision, and we really want to go.
So, if you're considering it: sign up before Sunday evening!
-
900 #lorrain lexemes on #Wikidata! 🥳
Here are a few more fun stats:
- 521 common nouns and 227 proper nouns
- among the common nouns, 303 are masculine and 214 feminine
- by number of lexemes, Lorrain makes up 0.05964% of the Wikidata total! 🤏
For those who want a little #SPARQL challenge: which lexeme has a potamonym (Q21661721) as its sense? 😏
(Yes, I discovered the word "potamonym" an hour ago 🤓)
On that note, I can sleep soundly! 😴
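One way to approach the challenge (a sketch using the standard Wikidata lexeme model with P5137, "item for this sense"; the intended solution may well differ):

```sparql
PREFIX wd:       <http://www.wikidata.org/entity/>
PREFIX wdt:      <http://www.wikidata.org/prop/direct/>
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX ontolex:  <http://www.w3.org/ns/lemon/ontolex#>

# Lexemes with a sense linked to "potamonym" (Q21661721)
SELECT ?lexeme ?lemma WHERE {
  ?lexeme ontolex:sense ?sense ;
          wikibase:lemma ?lemma .
  ?sense wdt:P5137 wd:Q21661721 .
}
```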
-
@moinluk these are the properties in use, sorted by frequency: https://qlever.dev/wikidata/djOQS8
-
It's happening again.
👉 https://semweb.pro/conference/2025/
We will be there on Thursday, 27 November, at the @semwebpro conference taking place at the FIAP in Paris!
#semwebpro #semweb #websem #opendata #linkeddata #linkedopendata #knowledgegraph #thesaurus #ontology #RDF #SPARQL #SHACL #OWL #JSONLD
-
If you had the necessary resources (time and money) to digitise Arabic periodicals, you should probably start with this #SPARQL query to check #Wikidata for unique holdings: https://query-chest.toolforge.org/redirect/r9Gn5nKhkWUYgyYwOU08qAk64wWOQO4ycuCcsSi6Syc.
-
A little service #SPARQL query for cyclists in #berlin: bicycle repair stations in Berlin according to #openstreetmap, queried with #qlever: https://qlever.dev/osm-planet/7rq0zw?exec=true
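The core pattern of such a query, as a minimal sketch assuming the osmkey:/geo: vocabulary that QLever's osm-planet dataset exposes (the restriction to Berlin in the linked query is omitted here):

```sparql
PREFIX osmkey: <https://www.openstreetmap.org/wiki/Key:>
PREFIX geo:    <http://www.opengis.net/ont/geosparql#>

# All OSM bicycle repair stations with their geometry
SELECT ?station ?location WHERE {
  ?station osmkey:amenity "bicycle_repair_station" ;
           geo:hasGeometry/geo:asWKT ?location .
}
```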
-
@jwoitkowitz points out, for the cooperation project between #GStA and @stabi_berlin, that publicly accessible information on the Seminar für Orientalische Sprachen essentially covers only the German members of the seminar. There are glaring gaps here, as this GND #Sparql query shows, for example: https://sparql.dnb.de/mKRppf (10 people with an SOS affiliation, all with German names). The two institutions are planning an #Editathon in February.
-
A new milestone for #lorrain on #Wikidata: transcriptions in the International Phonetic Alphabet can now be queried.
This #SPARQL query gives the list: https://w.wiki/FxyB
For now this only covers 29 of the 877 forms in Wikidata, but my little fingers will be working on closing that gap over the coming weeks :blobfoxcomputer:
A little evening treat, I tell you :blobfoxcomfyhappy:
-
Yesterday, the question came up in the #SPARQL workshop of why one should still use the #WDQS service at all rather than the more performant #wikidata endpoint of #qlever. For me, the current reasons are:
- autocomplete works better in #wdqs, in that it can be triggered more precisely
- more options for visualising results
- readily usable code snippets
- more up-to-date data
Often I write my queries in WDQS and then run them in QLever.
-
According to #openstreetmap (as imported into #qlever) there are 5446 "Via Dante Alighieri" in Italy (https://qlever.dev/osm-planet/GT1RO7). The distribution is not perfectly even.
-
👻 Give it a try: whether spooky or holy, the "ghost" awaits you (other search terms, notes or images work too 😊)
🧠 P.S. For all the #CultureKnowledgeGraph and #SPARQL experts among you, right this way 👉 https://nfdi4culture.de/resources/knowledge-graph.html
#NFDIrocks #CultureKnowledgeGraph #CultureDataSearch #Multimodality #CulturalHeritage @fiz_karlsruhe
Tabea Tietz @tabea Etienne Posthumus @epoz Harald Sack @lysander07
Linnaea Söhn @linnaea
Torsten Schrade^ks (^zs)
3/3
-
I'll try and ask here as well: Does anybody know if there's an overview of where #WDQS differs from standard #SPARQL? I mean e.g. mapping of data-namespace onto wdt at rdf level, different handling of dates (now() - ?date is possible in wdqs) etc. With alternatives such as qlever becoming more popular, it'd be useful to know what works just in wdqs and what not.
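For instance, the date arithmetic mentioned above is a WDQS (Blazegraph) extension; a sketch using P569, date of birth, as an illustrative property:

```sparql
PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

# On WDQS, subtracting two xsd:dateTime values yields a number of days;
# strictly standards-conforming engines reject this expression.
SELECT ?ageInYears WHERE {
  wd:Q42 wdt:P569 ?born .
  BIND((now() - ?born) / 365.2425 AS ?ageInYears)
}
```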
-
A few days ago I showed you a #SPARQL query on #Wikidata for a map of place names in #lorrain.
Today, with @lepticed's help, we can go the other way: a map of the areas of the dialects that have a name for a given town.
For example, for Woippy: https://w.wiki/Fng6
:blobfoxhappy:
-
The situation is somewhat different when it comes to collection items with an image on #wikicommons: https://qlever.dev/wikidata/ixgIcV
-
I also wrote a #SPARQL query to see the linguistic composition of the periodical press until 1930 at all locations with titles published in languages of the Eastern Mediterranean: #Arabic, #Ottoman, #Armenian, #Coptic, #Greek, #Farsi, #Ladino, #Azerbaijani
As a table: https://query-chest.toolforge.org/redirect/iDOqXyQ6u8ciKSGCYUkEoS02maygCMy8EccoSa8yuWw
As a map with layers for each language, because sometimes geographic distribution is interesting: https://query-chest.toolforge.org/redirect/XvayrLG3RwUiau4MWWuUEkkuccWmusCSy4gO888Q489
#Wikidata #PeriodicalStudies #ArabPeriodicalStudies #الصحافة_العربية #Multilinguality #multilingualDH
-
Of course a map is nice to have, but a simple table is often the more useful thing. So I just wrote the #SPARQL to query #Wikidata for all Arabic periodicals published before 1930, with indicators of whether there are known holdings and digitised collections. The table allows you to quickly search for titles, years and places of publication.
As a boon to #multilingualDH, the language of the results depends on your OS's settings.
-
Just stumbled over this useful chart on the #SPARQL Query Execution Sequence by @dbeckett . The slides of his talk on SPARQL 1.1 from 2011 are still perfectly accessible online (https://www.dajobe.org/talks/201105-sparql-11/). Many of the external resources, however, have been moved (but can still be found just by googling) #linkrot
Thanks to the open and transparent license, the infographic could be stored on #wikicommons as well (https://commons.wikimedia.org/wiki/File:SPARQL_1.1_Query_Execution_Sequence.png)
-
#Wikidata @wikidata is so inhomogeneous. Here are the subclasses of #teachingmaterial. #SPARQL https://w.wiki/FTNj Better to use the Art & Architecture Thesaurus, though it is not exhaustive either. #AAT
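Such a subclass listing follows the standard P279* property-path pattern; a sketch that resolves the root class by its English label rather than hard-coding a QID (the linked query may be written differently):

```sparql
PREFIX wdt:  <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# All transitive subclasses of the class labelled "teaching material"
SELECT ?sub ?subLabel WHERE {
  ?root rdfs:label "teaching material"@en .
  ?sub wdt:P279* ?root ;
       rdfs:label ?subLabel .
  FILTER(LANG(?subLabel) = "en")
}
```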
-
The joint plenary by #NFDI4Biodiversity and #NFDI4Earth continues with two Bar Camp style sessions with diverse topics contributed by the participants, pitched to the group, and magically ✨ turned into a programme.
Great job by the organisers who made it possible that we provide an opportunity for anybody to share their work, questions and get new inputs.
#RDM #EduTrain #ResearchSoftware #KnowledgeGraph #Base4NFDI #CloudComputing #SPARQL #Biodiversity
-
🌞 Over the summer, the SemWeb.Pro programme committee has not been idle and has put together a great day for you!
🗓️ For SemWeb.Pro 2025, taking place on 27 November 2025 in Paris, we will have 7 talks and just under ten posters: https://semweb.pro/conference/2025/
🎫 Registration is now open, and you can benefit from the reduced rate until early October.
🔍 We will come back to the programme in detail over the coming days.
-
A public #SPARQL endpoint is now available for #TOOI - Thesauri en Ontologieën voor OverheidsInformatie (Thesauri and Ontologies for Government Information).
You can try it out yourself here:
https://standaarden.overheid.nl/tooi/sparql
If you are interested in TOOI, sign up for the newsletter via:
https://standaarden.overheid.nl/tooi/nieuwsbrief
This is the latest newsletter:
https://standaarden.overheid.nl/tooi/nieuwsbrief/2025/1
#RDF #LinkedData #OWL #SHACL #overheid #FOSS #opensource #OpenTenzij #standaard #consultatie
-
Hannah Bast & team created a #benchmark evaluating the #SPARQL query performance on the #dblp #knowledgegraph. According to the results, using the #Qlever engine was the right decision. ~MRA
https://qlever.cs.uni-freiburg.de/evaluation-paper/www/#/comparison?kb=dblp
https://ad-publications.cs.uni-freiburg.de/ISWC_sparqloscope_BKTU_2025.pdf
https://sparql.dblp.org/ -
I just added holding information for 100+ Arabic newspapers and magazines published before 1930 from the National Library of Israel to #Wikidata (many are marked as belonging “to the Absentee Property Collection (AP)” in the MARC files from the catalogue and thus a direct result of the #Nakba ). This significantly extends the coverage of pre-Nakba Palestinian periodicals and will allow more people to discover the rich cultural heritage of #Palestine.
URL for a map of all known holdings of pre-Nakba periodicals in Palestine: https://query-chest.toolforge.org/redirect/FastY9YqoKky0AAggWsk4mMeQW284SIOkIGGiEEMQKd
For a documentation of the process and our larger project see my ‘Adding Every Arabic Periodical Published Before 1930 to Wikidata: Moving the Scholarly Crowd-Sourcing Project Jarāʾid to the Digital Commons’. Transformations: A DARIAH Journal 1: Workflows (July 2025): 1–39. https://doi.org/10.46298/transformations.14749.
#PeriodicalStudies #SPARQL #ArabPeriodicalStudies #الصحافة_العربية
-
For formats, DCAT-AP.de prescribes an EU vocabulary: https://publications.europa.eu/resource/authority/file-type
Super practical, since you then don't need to know whether someone wrote WFS, Web Feature Service, Downloaddienst or something else.
Unfortunately, not everyone sticks to it, as my little analysis at @opendata showed.
What else gets written in there can be found out with this SPARQL query.
-
When we talk about the future of artificial intelligence (AI), Wikidata is quietly becoming a force that cannot be ignored. The article "Wikidata's next leap: the open database powering tomorrow's AI and Wikipedia" describes how Wikidata has evolved from the database underpinning Wikipedia into core infrastructure powering generative AI.
Wikidata is an open, editable #知識圖譜 (knowledge graph), currently holding more than 1.3 billion structured statements, with real-time access provided via the #SPARQL query language and a REST API. This data not only supports corporate IT systems and civic-tech platforms around the world, it also gives AI models a verifiable source of facts, reducing the risk of "hallucinations" and misinformation.
The article highlights several representative use cases: the #AletheiaFact platform in #巴西 (Brazil) for fact-checking political statements, #Sangkalak in #孟加拉 (Bangladesh) opening up local literature, and a medical mapping project in #印度 (India) improving healthcare information in rural areas. All of these projects demonstrate Wikidata's potential for promoting information transparency and driving social innovation.
In addition, Wikidata is pushing forward the "Embedding Project", which converts the knowledge graph into vector data so that AI can understand and use Wikidata's content semantically. This not only improves AI accuracy, it also makes it easier for the open-source community to take part in developing AI applications.
The article closes by emphasising that Wikidata's vision is a community-driven, decentralised open data network in which local governments, museums and research institutions can all run their own Wikibase and interoperate with data worldwide. This is not just technical progress; it is a commitment to the democratic sharing of knowledge.
-
It took me a while to understand that #getty:broader and its variants are not equivalent to #SKOS:broader. Here are the "#hyponyms" of "people by degree of qualification" from the Art & Architecture Thesaurus (#AAT) - I am not quite convinced.
https://tinyurl.com/mve25dmw
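To inspect the mismatch yourself on the Getty endpoint (https://vocab.getty.edu/sparql), a sketch that simply lists both properties side by side; rows where ?skosBroader stays unbound or differs from ?gvpBroader show where the two hierarchies diverge:

```sparql
PREFIX gvp:  <http://vocab.getty.edu/ontology#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

# Compare Getty's own broader property with skos:broader
SELECT ?concept ?gvpBroader ?skosBroader WHERE {
  ?concept gvp:broader ?gvpBroader .
  OPTIONAL { ?concept skos:broader ?skosBroader }
}
LIMIT 20
```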