#fileformats — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #fileformats, aggregated by home.social.
-
Maintenance begins at creation, so why are we not creating better?
by @beet_keeper
The beats are the same. You work for government or academia (let's face it, that's probably where 90% of the work is): you have a deliverable; you save it; you print to PDF; you store it on an institutional repository with some metadata (or Zenodo, OSF, or equivalent); and it's done.
There's a small chance that it's FAIR (Findable, Accessible, Interoperable, Reusable), right? It has metadata that can be discovered by an audience looking for it and can be indexed by search engines. The data is potentially accessible if published correctly. But PDFs are not particularly interoperable or easily converted, and they aren't really designed for reuse, even if tools like Apache Tika help ease the burden of extracting artifacts. It's just a PDF, so why are we even talking about FAIR? There begins a story…
The beats are the same, yet we work in digital preservation, and our backgrounds are in GLAM or software. Why do we want to shoot ourselves in the foot? Why are we not using our skills to create better?
#Archives #BetterPoster #ContinuumModel #createToMaintain #digipres #DigitalArchiving #DigitalContunuity #digitalLiteracy #DigitalPreservation #FAIR #FileFormats #GLAM #informationRecordsMangagement #NationalDigitalStewardshipAlliance #NDSA #OpenAccess #OpenData #PDF #RDM #ResearchDataLifecycle #RIM -
All in a day's work for #archival #superheroes :
> What the data is that data?
> If only I had #METADATA ! 😁 🍰 -
Peeking Inside Gigantic Zips with Only Kilobytes
https://ritiksahni.com/blog/peeking-inside-gigantic-zips-with-only-kilobytes/
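Assuming the linked post relies on the standard ZIP layout, the trick is that a ZIP's table of contents (the central directory) lives at the end of the file, so a small read from the tail is enough to enumerate entries without fetching the body. A local sketch of locating the End Of Central Directory record:

```python
import io
import struct
import zipfile

# Build a small zip in memory, then parse its tail the same way a ranged
# HTTP request against a huge remote zip would.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("hello.txt", "hi " * 1000)
data = buf.getvalue()

tail = data[-65536:]              # the EOCD record sits within the last ~64 KiB
eocd = tail.rfind(b"PK\x05\x06")  # EOCD signature
# Total number of central directory entries is a uint16 at offset 10 of the EOCD.
total_entries = struct.unpack_from("<H", tail, eocd + 10)[0]
```

With a remote file, the same parse works on the bytes returned by an HTTP Range request for the file's tail, which is the "only kilobytes" part of the title.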
#HackerNews #Peeking #Inside #Gigantic #Zips #with #Only #Kilobytes #datacompression #fileformats #technology #programming #HackerNews
-
Sure, we've all seen Mark YAML and even JSON Statham, but have y'all met TOML Holland yet?
-
Ah, the "Binary Formats Gallery"—because what the world really needed was a #museum for file formats, complete with GraphViz diagrams and hex dump visualizers! 🤓🖥️ I mean, who doesn't want to spend their Sunday afternoon compiling Kaitai Struct libraries just to admire the metadata of a Quake 2 model? 🌟🔍
https://formats.kaitai.io/ #BinaryFormatsGallery #FileFormats #TechArt #KaitaiStruct #Quake2 #HackerNews #ngated -
Revisiting bsdiff as a tool for digital preservation
by @beet_keeper
I introduced bsdiff in a blog post in 2014. bsdiff compares the differences between two files, e.g. broken_file_a and corrected_file_b, and creates a patch that can be applied to broken_file_a to generate a byte-for-byte match for corrected_file_b.
On the face of it, in an archive we probably only care about corrected_file_b, so why would we care about a technology that patches a broken file?
In all of the use-cases we can imagine, the primary reasons are cost savings and removing redundancy in file storage or transmission of digital information. In one very special case we can record the difference between broken_file_a and corrected_file_b and give users a totally objective method of recreating corrected_file_b from broken_file_a, providing 100% verifiable proof of the migration pathway taken between the two files.
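bsdiff itself is far smarter (suffix sorting, compressed control blocks), but the contract it offers an archive, that patching broken_file_a yields a byte-for-byte match of corrected_file_b, can be sketched naively for equal-length files:

```python
def make_patch(a: bytes, b: bytes):
    """Naive patch for equal-length files: record (offset, new_byte) pairs."""
    assert len(a) == len(b)
    return [(i, b[i]) for i in range(len(a)) if a[i] != b[i]]

def apply_patch(a: bytes, patch) -> bytes:
    """Apply the recorded byte substitutions to reconstruct the corrected file."""
    out = bytearray(a)
    for offset, value in patch:
        out[offset] = value
    return bytes(out)

broken = b"hellp world"
fixed = b"hello world"
patch = make_patch(broken, fixed)
assert apply_patch(broken, patch) == fixed  # byte-for-byte match
```

The patch is itself a record of provenance: anyone holding broken_file_a and the patch can verify the migration independently.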
#ac3 #Archives #audio #audiovisual #Audit #authenticity #av #Bash #bsdiff #checksums #Code4Lib #corruption #corruptionIndex #digipres #DigitalArchiving #DigitalForensics #digitalLiteracy #DigitalPreservation #DigitalStorage #diplomatics #FileFormats #flac #glitch #glitchAudio #GlitchArt #integrity #mp3 #PreservationAnalysis #PreservationMetadata #provenance #sensitivityIndex #Storage #wav
-
One File, Six Formats: Just Change The Extension https://hackaday.com/2025/08/08/one-file-six-formats-just-change-the-extension/ #SoftwareHacks #fileformats #fileformat #mp4 #pdf #png
-
One File, Six Formats: Just Change The Extension - Normally, if you change a file’s extension in Windows, it doesn’t do anything posi... - https://hackaday.com/2025/08/08/one-file-six-formats-just-change-the-extension/ #softwarehacks #fileformats #fileformat #mp4 #pdf #png
-
I have three .avi files that have identical size but different md5sum.
A simple binary diff tells me that there are a couple of single-byte differences. Is there some kind of tool that would allow me to tell what those differences amount to? (Video, audio, whatever else?)
EDIT: SOLVED, see below
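For a first pass, listing the differing offsets is easy to script; mapping those offsets back to video or audio then means reading them against the AVI's RIFF chunk layout with a container-aware tool or a hex editor:

```python
def diff_offsets(a: bytes, b: bytes):
    """Byte offsets at which two equal-length buffers differ."""
    return [i for i in range(min(len(a), len(b))) if a[i] != b[i]]

# Toy example: two buffers differing at offsets 2 and 5.
offsets = diff_offsets(b"abcdef", b"abXdeY")
```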
-
University of Michigan: U-M develops free tool to empower municipalities, modernize financial reporting. “When Congress passed the Financial Data and Transparency Act in 2022, it required most municipalities in the U.S. to modernize and digitize their financial reports. This is a heavy lift for small towns and school districts, most of which still report their financial information in PDF […]
-
🎉 Another day, another tech blogger unleashing their inner Shakespeare on the thrilling topic of file formats! 🌟 This riveting tale somehow manages to make the nuances of the #.gif vs. #.jpeg debate sound like the climax of a daytime soap opera. 🤯✨ How will the world ever survive without this critical analysis? 🙄📂
https://solhsa.com/oldernews2025.html#ON-FILE-FORMATS #techblogger #fileformats #analysis #soapopera #HackerNews #ngated -
File formats as Emoji: 0xffae
by @beet_keeper
tldr: https://emoji.exponentialdecay.co.uk
File Formats As Emoji (0xFFAE or 0xffae) might be my most random file format hack yet. Indeed, it is a random page generator! But it generates random pages of file formats represented as Emoji.
The idea came in 2016 with radare releasing a new version that supported an emoji hexdump. I wondered whether I could do something fun combining file
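In that spirit, a byte-to-emoji dump takes only a few lines; this sketch uses an arbitrary Unicode offset as its mapping, not the table the site actually uses:

```python
def emoji_dump(data: bytes, width: int = 16) -> str:
    """Render bytes as rows of emoji, one symbol per byte (hypothetical mapping)."""
    rows = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        # Offset each byte value into the "Miscellaneous Symbols and
        # Pictographs" block (U+1F300..U+1F3FF).
        rows.append("".join(chr(0x1F300 + b) for b in chunk))
    return "\n".join(rows)

print(emoji_dump(b"\x89PNG\r\n\x1a\n"))  # the PNG magic bytes, as emoji
```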
#0xffae #Code #Coding #digipres #digitalLiteracy #DigitalPreservation #emoji #FileFormat #FileFormatIdentification #FileFormats #learning #PRONOM #pyscript #Python #SkeletonTestCorpus #teaching
-
File format building blocks: primitives in digital preservation
by @beet_keeper
A primitive in software development can be described as:
a fundamental data type or code that can be used to build more complex software programs or interfaces.
– via https://www.capterra.com/glossary/primitive/ (also Wiki: language primitives)
Like bricks and mortar in the building industry, or oil and acrylic for a painter, a primitive helps a software developer to create increasingly more complex software, from your shell scripts, to entire digital preservation systems.
Primitives also help us to create file formats. As we've seen with the Eyeglass example I have presented previously, the file format is, at its most fundamental level, a representation of a data structure as a binary stream that can be read out of the data structure onto disk, and likewise from disk back into a data structure in code.
For the file format developer we have at our disposal all of the primitives that the software developer has, and like them, we also have “file formats” (as we tend to understand them in digital preservation terms) that serve as our primitives as well.
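As a sketch of that read-out, Python's struct primitive packs a data structure into a byte stream and back; the field layout here is invented for illustration and is not the actual Eyeglass format:

```python
import struct

# Invented three-field record: a 2-byte magic number, a version byte,
# and a 4-byte payload length, little-endian with no padding.
LAYOUT = "<HBI"

def write_record(magic: int, version: int, length: int) -> bytes:
    """Serialise the data structure to a binary stream."""
    return struct.pack(LAYOUT, magic, version, length)

def read_record(blob: bytes):
    """Read the binary stream back into the data structure."""
    return struct.unpack(LAYOUT, blob)

blob = write_record(0x4545, 1, 1024)
assert read_record(blob) == (0x4545, 1, 1024)
```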
#Archives #digipres #DigitalPreservation #DigitalPreservationEssentialism #diplomatics #eyeglass #eygl #FileFormats #InformationRecordsManagement #IRM #JSON #OpenData #OpenSource #RDM #ResearchData #ResearchDataManagement #XML
-
The latest version of #PRONOM, v120 has been released! Identification for 30 new PUIDs, 32 new signatures and 32 updates. #digipres #fileformats #digitalpreservation https://www.nationalarchives.gov.uk/aboutapps/pronom/release-notes.xml
-
The sensitivity index: Corrupting Y2K
by @beet_keeper
In December I asked “What will you bitflip today?” Not long after, Johan’s (@bitsgalore) Digital Dark Age Crew released its long-lost hidden single Y2K — well, I couldn’t resist corrupting it.
Fixity is an interesting property enabled by digital technologies. Checksums allow us to demonstrate mathematically that a file has not been changed. An often cited definition of fixity is:
Fixity, in the preservation sense, means the assurance that a digital file has remained unchanged, i.e. fixed — Bailey (2014)
It’s very much linked to the concept of integrity. A UNESCO definition of which:
The state of being whole, uncorrupted and free of unauthorized and undocumented changes.
Integrity is massively important these days. It gives us the guarantees we need that digital objects we work with aren’t harboring their own sinister secrets in the form of malware and other potentially damaging payloads.
These values are contingent on bit-level preservation, which the field of digital preservation largely assumes: that we will be able to look after our content without losing information. As feasible as this may be these days, what happens if we lose some information? Where does authenticity come into play?
Through corrupting Y2K, I took time to reflect on integrity versus authenticity, as well as create some interesting glitched outputs. I also uncovered what may be the first audio that reveals what the Millennium Bug itself may have sounded like! Keen to hear it? Read on to find out more.
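That fixity property is easy to demonstrate: flip a single bit and the checksum no longer matches (SHA-256 here, but any cryptographic hash behaves the same):

```python
import hashlib

data = bytearray(b"Y2K -- stand-in for some audio bytes")  # illustrative payload
before = hashlib.sha256(data).hexdigest()

data[0] ^= 0b00000001  # flip the lowest bit of the first byte

after = hashlib.sha256(data).hexdigest()
assert before != after  # the fixity check now fails
```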
#ac3 #Archives #audio #audiovisual #authenticity #av #Bash #checksums #Code4Lib #corruption #corruptionIndex #digipres #DigitalArchiving #digitalLiteracy #DigitalPreservation #diplomatics #FileFormats #flac #glitch #GlitchArt #glitchaudio #integrity #mp3 #sensitivityIndex #wav
-
thought it might be nice to sign #sphinx releases with #minisign and #ssh #eddsa keys, straight outta sphinx. minisign #privkeys are okish (they do need 40 B of entropy, 8 extra for a "keyid"). but did you know, that in ssh the public key is stored 3x in the ed25519 private #key? one time i can understand (could be 0 though), but 3 times? what have they been drinking? #fileformats
-
What will you bitflip today?
by @beet_keeper
I want to let you in on a secret: I enjoy corruption. Corrupting digital objects leads to undefined behavior (C++’s definition is fun). And flipping bits in objects can tell us something about both the fragility and the robustness of our digital files and the applications that work with them.
I had a pull request for Bitflip accepted the other day. Bitflip, by Antoine Grondin, is a simple utility for flipping bits in digital files. I wrote in my COPTR entry for it that it reminds me of shotGun by Manfred Thaller. The utility is exceptionally easy to use (and, being written in Golang, easy to update and maintain) and has some nice features for flipping individual bits or a uniform percentage of bits across a digital file.
My pull request was a simple one, updating Goreleaser and its GitHub workflow to provide binaries for Windows and FreeBSD. I only needed to use Windows for a short amount of time, thankfully, but it’s an environment I believe is prevalent for a lot of digital preservationists in corporate IT environments.
Bitflip is a useful utility to improve your testing of digital preservation systems, or simply for outreach, but let’s have a quick look at it in action.
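As an illustration of what such a utility does, here is a naive Python sketch of flipping a uniform fraction of a file's bits (not Bitflip's actual Go implementation):

```python
import random

def flip_fraction(data: bytes, fraction: float, seed: int = 0) -> bytes:
    """Flip a uniform fraction of the bits in data (deterministic via seed)."""
    rng = random.Random(seed)
    out = bytearray(data)
    total_bits = len(out) * 8
    # Pick distinct bit positions across the whole file, then XOR each one.
    for pos in rng.sample(range(total_bits), int(total_bits * fraction)):
        out[pos // 8] ^= 1 << (pos % 8)
    return bytes(out)

glitched = flip_fraction(b"\x00" * 100, 0.01)  # flips 8 of the 800 bits
```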
#Archive30 #Archives #Art #Binary #bitflip #bitrot #Code #Coding #digipres #digital #DigitalArchiving #digitalLiteracy #DigitalPreservation #FileFormats #GenerativeArt #GlitchArt #outreach #Ravensburger #SomethingFun #Vagabond
-
A year in file formats 2024
by @beet_keeper
A great write-up from Francesca at TNA about the past year for PRONOM, via Georgia and the OPF: https://digipres.club/@Georgia/113633592204634150.
It’s great to see the continuing work including vital translation of guides into other languages. Francesca includes a couple of shout outs to some pieces I have contributed in my spare time this year; including a collaborative workshop with Francesca, David, and Tyler at iPRES2024.
#Archives #Conferences #digipres #DigitalPreservation #DROID #FileFormat #FileFormats #ipres2024 #outreach #PRONOM
-
PRONOM’s dustiest records
NB. because of the complexity of this post, it may be easier to read in original blog form, than on Mastodon here: https://exponentialdecay.co.uk/blog/pronoms-dustiest-records/
Tyler’s recent blog post for the PRONOM Hack-a-thon Week 2024 (my previous for this week) brought up an interesting point about two of PRONOM’s oldest outline records, Real Video Clip (fmt/204) and Real Video (x-fmt/277). How did they end up in PRONOM?
Tyler suggests:
I assume PRONOM originally added these based on MIME types available.
I thought I knew the answer, and so it prompted a forensic look at the records to see if what I thought I knew aligned with reality!
As a PRONOM maintainer at The National Archives, UK from 2009-2012, I knew a little of the history of the system, and we see some of that history impact us today. For example, when we look at the number of records that don’t have descriptions or file format signatures, 156 of those records are so-called x-PUIDs: a mechanism in PRONOM, never meant to make it into the wild, for working on file formats internally without polluting the public record. There are 455 x-PUIDs in total. They made it into the wild anyway (before my time), and so they exist as a symbol of PRONOM’s oldest, dustiest records.
Even by the time I had started, PRONOM still had a lot of what we started to call outline records. One of the more positive changes we made to the process back in the day was that we would stop creating outline records; instead, we would focus on records that could be tied to signatures. This didn’t necessarily make the records more correctly aligned with reality, but it meant records had utility and file formats identified by DROID could be tied back to something that PRONOM “knew about”. I believe the process is a bit more flexible these days, allowing individuals to contribute information to records that tie them back to information like MIMEtypes and specifications. It’s clearer the format is “real” even if a signature is yet to be developed (and of course there are a large number of data formats that are hard to even represent in traditional PRONOM signatures any more and so they need a record, even if there isn’t a neat concept of a signature for them).
Okay old-man, but what about Tyler’s thesis?
Stellent and PRONOM
I learned sometime in my tenure at The National Archives that PRONOM had been seeded with a lot of the formats listed in a technology called OutsideIn, previously owned by Stellent and now owned by Oracle.
- Oracle OutsideIn: https://docs.oracle.com/outsidein/853/oit/
- OutsideIn (2010): https://web.archive.org/web/20101016164937/http://www.oracle.com/technetwork/middleware/content-management/oit-all-085236.html
- Data sheet – Formats (2011): https://web.archive.org/web/20110125024733/http://www.oracle.com/technetwork/middleware/content-management/ds-oitfiles-133032.pdf
- COPTR entry: https://coptr.digipres.org/index.php/Oracle_Outside_In_Technology
I had always had a feeling that the scope of this list was largely exaggerated by the company selling the software, as it is a marketing tool; and if not exaggerated, perhaps just not as clearly delineated by format as PRONOM, being organised instead by software, regardless of the properties of a given “format”, e.g. WinZip and PKZip.
Back to the story though: I was also reasonably sure I would find Tyler’s RealVideo formats in the format listing, but I did not!
I downloaded a CSV summarizing the PRONOM records from api.pronom.ffdev.info with:
curl -X 'GET' \
  'https://api.pronom.ffdev.info/pronom_summary_csv' \
  -H 'accept: application/csv'
I filtered on outline entries and those without signatures only. I went through the entries still remaining and looked for name matches. I did find some name-for-name matches and some that were close, but no RealVideo or RealVideo Clip.
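The filtering step looks something like this in Python; the column names below are hypothetical, so check the real header of the API's CSV first:

```python
import csv
import io

# Hypothetical rows shaped like the PRONOM summary export: a name column
# and an empty signature column for outline records.
sample = io.StringIO(
    "name,signature\n"
    "RealVideo Clip,\n"
    "PNG,yes\n"
)

# Keep only the entries with no signature, i.e. the outline records.
outline = [row["name"] for row in csv.DictReader(sample) if not row["signature"]]
```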
The matches:
- 7-bit ANSI Text: yes
- 7-bit ASCII Text: yes
- 8-bit ANSI Text: yes
- 8-bit ASCII Text: yes
- EBCDIC-US: yes
- Framework Database III: yes
- IBM DisplayWrite Document 2: yes
- IBM DisplayWrite Document 3: yes
- Micrografx Designer 3.1: yes
- Nota Bene Text File: yes
- Unicode Text File: yes
The maybes:
- Cascading Style Sheet: maybe
- Freelance File 1.0-2.1: maybe
- MacPaint Graphics: maybe
- Microsoft Office Binder File for Windows 95: maybe
- Microsoft Works Database: maybe
- Microsoft Works Database for DOS 2.0: maybe
- Microsoft Works Database for Windows 3.0: maybe
- Microsoft Works Database for Windows 4.0: maybe
- Professional Write Text File: maybe
- WordPerfect for Windows Document 5.2: maybe
- XYWrite Document: maybe
- XYWrite Document III: maybe
- XYWrite Document III+: maybe
11 exact matches! It’s hardly a headline!
I had hoped that if I found more exact matches it would provide some clues to where some of the older PRONOM entries came from. I expected most of the outline records to come from this list, alas, it isn’t nearly as many as anticipated.
I hoped too that going through the list I might get more clues as to formats that could potentially be deprecated in PRONOM.
As it stands, from the OutsideIn list, the only records I would personally recommend for deprecation are:
- 7-bit ANSI Text
- 7-bit ASCII Text
- 8-bit ANSI Text
- 8-bit ASCII Text
- EBCDIC-US
- Unicode Text File
We know enough now to be almost certain that if something that looks like these files arrives in the archive it will present as a standard text file, and that we will need to rely on determining the character encoding using tools such as Richard Lehane’s characterize (see characterize’s README for more background). It is unlikely we will be able to attach a signature to these records, and we know there are a great deal more encodings in the world than need be represented as PRONOM identifiers.
NB. this might be something to formalize in a PRONOM decision-making rubric, connected also to formalizing approaches for XML-based signatures.
A bit of a let down, or is it?
Still uncomfortable with so many outline records and little provenance for them, I wanted to find more information about the source of PRONOM data and so I decided to take a different path — I surfed the internet for answers!
Out of the list of outline records I found a few to be overly specific, or slightly weird, i.e. not really things we hear much about day-to-day. Some examples:
- ACBM Graphics
- Apple Sound
- AutoCAD Plot Configuration File 1.0-R13
- AutoCAD Plot Configuration File R14
- AutoSketch Drawing
- Btrieve Database 5.1
- CorelDraw Pattern
- DEC Data Exchange File
- DEC WPS Plus Document
- Dr Halo Bitmap
- Generic Library File
- HTML Extension File
- Hewlett Packard AdvanceWrite Text File
- Inkwriter/Notetaker Template
- Inset Systems Bitmap
- Instalit Script
- Interleaf Document
- Microsoft Excel Add-In
- Microsoft Excel ODBC Query
- Microsoft Excel Toolbar
- Microsoft Powerpoint Design Template
- Microsoft Print File
- Microstation CAD Drawing 95
- NAP Metafile
- Nota Bene Text File
- OS/2 Change Control File
- Revit External Group
- SAP Document
- SAS Data File
- Scanstudio 16-Colour Bitmap
- Schedule+ Contacts
- Speller Custom Dictionary
- Unisys (Sperry) System Data File
- Wordperfect Secondary File 5.0
- Wordperfect Secondary File 5.1/5.2
- form*Z Project File
ACBM Graphics? Dr Halo Bitmap? Btrieve Database, “5.1”? Where are the other five?!
It gave me pause. I didn’t believe these were all formats well-known to folks who created PRONOM, and I know we didn’t have such an advanced digital transfer program at the time that meant agencies were submitting huge variations of formats to PRONOM for future preservation.
I felt they had to come from somewhere, but where?
Enter Filext.com
Because these formats were very specific I found listings on the internet that I knew had to be part of the story. I had immediate luck just looking for combinations of these names, e.g. ACBM Graphics + NAP Metafile.
In particular I found listings on different websites from hobbyists or universities that all looked the same or similar, e.g.
There were definite matches with PRONOM which we will get to, but I started to wonder about the provenance of these extensions.
I kept looking and I found one clue, a header and footer of a file that looked like those above and read as follows:
Copyright © 2002 Computer Knowledge
All Rights Reserved
This download for personal use only. Do NOT distribute it to others either alone or incorporated into any software without prior permission from Computer Knowledge. Developers who wish to incorporate portions of the list please see the comments at the end of this file.
Developer permissions… This total file may not be included in any other software or project which presents the data to the public or portions of the public. Any developer who wishes to include up to (but not more than) 2,000 individual entries from this file is free to do so provided certain conditions are met. These are:
1) Credit must be given to FILExt. If links are available in the developed product then one must also be provided to FILExt as http://filext.com. Suggested text: "File extension list courtesy of FILExt. For a more extensive list visit http://filext.com."
2) Once the extensions are chosen for one product by any developer then these same extensions must continue to be used by that developer for any other projects (i.e., you cannot take one set of 2,000 for one project and a different set of 2,000 for another project; it's a total of 2,000).
3) If links are available in the developed product then any links appearing associated with any of the 2,000 picked extensions must be included in the product. (This covers future plans to include such links in this list.)
When the project is complete please notify FILExt with the specifics at [email protected]. We're always interested in how the list is being used. Thank you.
Filext.com!
And so I asked myself, how long had filext been around?
As it turns out, quite a while! It was forked from a site called cknow around 2002. cknow.com was registered around 1996 and filext.com registered in 2001.
The first appearance of cknow in the internet archive is late 1996: https://web.archive.org/web/19961219035827/http://www.cknow.com/ and Filext early 2001: https://web.archive.org/web/20010522235126/http://www.filext.com/
The sites were founded by Tom Simondi. It looks like he has been responsible for a lot of the 90s and 00s work around demystifying extensions and getting more information to folk about what to do with them.
Could it be the source of the first PRONOM records?
Comparing some of the many other text-based lists I had found with cknow and filext gave me some confidence that there was some shared heritage between them, and so I asked: could the cknow and filext lists have also seeded PRONOM?
I picked a list close to 2002 (cknow Extensions: 2000) when PRONOM was first started and began to compare entries for exact matches.
- ACBM Graphics: yes
- AutoCAD Compiled Menu: yes
- AutoSketch Drawing: yes
- Btrieve Database 5.1: yes
- DataFlex Query Tag Name: yes
- Deluxe Paint bitmap: yes
- DesignCAD Drawing: yes
- Digital Video: yes
- Dr Halo Bitmap: yes
- Frame Vector Metafile: yes
- Framework Database II: yes
- Framework Database III: yes
- Framework Database IV: yes
- Information or Setup File: yes
- Inset Systems Bitmap: yes
- InterBase Database: yes
- Lotus Approach View File: yes
- Mathematica Notebook: yes
- Microsoft Excel Add-In: yes
- Microsoft Excel ODBC Query: yes
- Microsoft Excel OLAP Query: yes
- Microsoft Excel OLE DB Query: yes
- Microsoft Excel Web Query: yes
- Microsoft FoxPro Library: yes
- Microsoft Outlook Address Book: yes
- Microsoft PowerPoint Graphics File: yes
- Microsoft Powerpoint Add-In: yes
- Microsoft Visual FoxPro Table: yes
- Microsoft Works Database: yes
- Microsoft Works Document: yes
- Microstation CAD Drawing 95: yes
- NAP Metafile: yes
- Nota Bene Text File: yes
- OS/2 Change Control File: yes
- PICS Animation: yes
- PageMaker Document 3.0: yes
- PageMaker Time Stamp File 4.0: yes
- Professional Write Text File: yes
- Quicken Data File: yes
- RealVideo Clip: yes <– cc. Tyler!
- Schedule+ Contacts: yes
- StatGraphics Data File: yes
- Structured Query Language Data: yes
- Ventura Publisher Vector Graphics: yes
- XYWrite Document III: yes
- XYWrite Document IV: yes
46 matches!
- Apple Sound: maybe
- AutoCAD Device-Independent Binary Plotter File: maybe
- AutoCAD Drawing Template: maybe
- Cascading Style Sheet: maybe
- DEC Data Exchange File: maybe
- DEC WPS Plus Document: maybe
- Freelance File 1.0-2.1: maybe
- Java Servlet Page: maybe
- Micrografx Designer 3.1: maybe
- Microsoft Office Binder File for Windows 95: maybe
- Microsoft Office Binder Template for Windows 95: maybe
- Microsoft Office Binder Template for Windows 97-2003: maybe
- Microsoft Office Binder Wizard for Windows 95: maybe
- Microsoft Office Binder Wizard for Windows 97-2003: maybe
- Ventura Publisher: maybe
- XYWrite Document: maybe
- XYWrite Document III+: maybe
17 maybes!
What did we answer?
Okay, 46 exact matches does not the full listing make (although many (now) full entries may still have been made from these early listings). Filext may have been an important resource for the first PRONOM records, but it's also likely that PRONOM had other sources of information. For example, a number of the Microsoft formats with outline records read like export or save-as listings from earlier versions of Microsoft software, e.g. Excel:
NB. I wasn't actively researching this side of things while writing this blog, but I can already see some commonalities, especially Unicode Text!
I know we also had a copy of the Dr Dobb’s Essential Books on File Formats CD-ROM in the archive, and so that may also have been an important resource when PRONOM was creating its first records.
I count only two overlaps with the Stellent list, Framework Database III and Nota Bene Text File.
We did, however, find the RealVideo Clip! And I think we found some decent correlation with a resource that looks likely to have been used partially to populate the PRONOM database.
The era of file extensions
Throughout my research, I found a lot of similar websites. Filext seems to go furthest back and has the greatest pedigree, but in the noughties a lot of other sites appeared trying to provide similar information to internet users. A few of note seemed comprehensive and particularly well presented:
- Endungen.de (Internet Archive circa. 2002) https://web.archive.org/web/20020603193356/http://www.endungen.de/index.php?changelanguage=049
- Dotwhat.net (Internet Archive circa. 2005): https://web.archive.org/web/20050615021635/http://www.dotwhat.net/
- File-extension.net (Internet Archive circa. 2007): https://web.archive.org/web/20070521093707/https://file-extension.net/
- Filesuffix.com (Internet Archive circa. 2006): https://web.archive.org/web/20060615083930/http://www.filesuffix.com/browse/browse-1.html
- Filext backup: https://www.fileext.com/
I am sure we looked at these sites during my time on PRONOM, although with less frequency given the need to reduce outline records and increase the number with actionable information.
NB. I also learned that TrID has been around since 2003! https://web.archive.org/web/20030612031252/http://mark0.ngi.it:80/
Provenance and prior art
It’s not entirely productive to say I wish we had better provenance for PRONOM records back in the day – but I do!
It makes me reflect on the importance of looking outside of our own walls in digital preservation instead of the constant redundancy of reinvention or ownership.
Often as academics, or those with archival views of the world, we can provide a polish and precision to technology as it exists to make it more usable in an archival context.
But cknow has been around so long, and the Unix utility File was created in 1986.
There’s a parallel history here that we should be recognizing and sharing for our next colleagues.
I arrived at TNA in 2009 and learned about File maybe two years later. For a Windows guy at the time, that might not be uncommon, but I do feel it is on me to have known more. I also think it should have been trivial to access the provenance around some of the records in the database at the time, but more than that, as a field, shouldn't we all know Tom Simondi? What if the same academic rigour of PRONOM and DROID could have been applied to existing tools like File? What if we had expanded our bubble and recognized digital preservation (or the tools for it) is something people have been doing in all but name for the longest time? What if the people working in parallel on these projects and websites were part of the digital preservation inner-circle community today?
I don’t have answers, but I feel there are lessons there for the future. Not reinventing or rebuilding without good reason is important, but even if we build something new and we have been inspired by something else, continuing to recognize and acknowledge prior art is important.
What do you think?
Also, how do we get these people into a room to celebrate their work and learn more?
What next?
I don’t think I got very far here but I found it interesting, and I hope other readers may as well.
This is meant to be a PRONOM hack-a-thon blog, and I don't know if I have pushed the sticks forward that much, but maybe there's a bit more to reason about in the outline records, for example around the plain-text formats mentioned above and a few more identified along the way.
| Format | PUID | Recommendation |
| --- | --- | --- |
| 7-bit ANSI Text | x-fmt/21 | Recommend deprecation |
| 7-bit ASCII Text | x-fmt/22 | Recommend deprecation |
| 8-bit ANSI Text | x-fmt/282 | Recommend deprecation |
| 8-bit ASCII Text | x-fmt/283 | Recommend deprecation |
| Unicode Text File | x-fmt/16 | Recommend deprecation |
| EBCDIC-US | fmt/159 | Recommend deprecation |
| MS-DOS Text File with line breaks | x-fmt/130 | Recommend deprecation |

I also noticed in the outline entries some low-hanging fruit that I might focus on at the next opportunity if someone else doesn't get there first. These would be:
| Format | PUID | Suggestion | Signature potential |
| --- | --- | --- | --- |
| Cascading Style Sheet | x-fmt/224 | Consider adding CSS to the record name | A signature should be feasible |
| Document Type Definition | x-fmt/315 | Consider adding DTD to the record name | A signature should be feasible |
| Extensible Stylesheet Language | x-fmt/281 | Consider adding XSL to the record name | A signature should be feasible |
| HTML Extension File | x-fmt/417 | Related to Microsoft's IIS server | A signature may be possible |
| Standard Generalized Markup Language | x-fmt/195 | Consider adding SGML to the record name | A signature may be possible |
| Still Picture Interchange File Format 2.0 | fmt/113 | Related to JPEG | A signature should be possible |
| Structured Query Language Data | fmt/206 | Consider adding SQL to the record name | A signature may be possible |
| Dreamweaver Lock File | fmt/335 | A system file; there may be an entry in the NSRL database | A signature may be possible |

A little more on the history of extensions websites
The complete filext text file (allext.zip)
It took a few jumps, but I found the complete downloadable text file from Filext.com. I don't think it exists any more, and I don't think the Internet Archive managed to grab a copy. Apparently it was quite a chunk of data to download on the web once upon a time, but they eventually found a way to release a zipped text file:
Via one jump we get to the “whole list” page:
https://web.archive.org/web/20020605164206/http://filext.com/wholelist.htm
And then, to confirm our absolute interest in downloading it, we get to the a2z file:
https://web.archive.org/web/20020606071418/http://filext.com/a2z.htm
This would have taken us to the zip file; alas, it was never captured by the Internet Archive. Maybe it is on other Memento-compatible servers:
https://web.archive.org/web/20060117000000*/http://www.filext.com:80/allext.zip
Keeping filext up to date
Filext still asks for registry data to help keep it up to date. That’s pretty cool!
https://filext.com/faq/gather_data_for_filext.html
```batch
Echo OFF
CLS
assoc > filext_submission_output.txt
Echo ---------- >> filext_submission_output.txt
ftype >> filext_submission_output.txt
Echo Thank you. The output file has been created and
Echo named filext_submission_output.txt and it should
Echo be in the same place where you saved this batch
Echo file. All that is left now is to send that file
Echo to FILExt. Attach it to an E-mail sent to the
Echo address: [email protected]
Echo The E-mail subject should be: Submission
Echo Thank you.
Pause
Exit
```

Filext as a source of learning
The filext faqs and community seemed particularly helpful and interesting back in the day:
https://web.archive.org/web/20090322040812/http://filext.com/faq/
File extension aggregator
The file-extension.net website started an aggregator project around 2007 and it’s still running today!
http://file-extension.net/seeker/
Some bonus images…
As I was working on this, I found irony in Google Sheets glitching; I managed to grab some screenshots along the way. Thanks for reading everyone!
#digipres #DigitalPreservation #DROID #FileFormat #FileFormats #PRONOM #WDPD #WDPD2024
-
literally who hurt genomics to make you all encode one specific kind of number as the ASCII characters from ! to ~ as an integer input to some logarithm function, but then others of you changed the function but kept encoding it as a single ASCII character ranging from -5 to 62 (???), and then later they decided that -5 to 62 was silly and so they changed that to 0-62, throwing away half the original range for no reason, except actually it's 0-40 by convention.
did anyone consider "encoding it as a number"
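For readers outside genomics: the complaint above is about FASTQ quality encodings, where each per-base quality score is stored as a single printable ASCII character, score plus an offset. A minimal decoding sketch (Phred+33 is the modern convention; older Solexa/Illumina data used an offset of 64):

```python
# Decode a FASTQ quality string: each character encodes (score + offset)
# as a single ASCII byte. Phred+33 ("Sanger") uses '!' (33) as zero, so
# '!'..'~' covers scores 0..93; older Solexa/Illumina-1.3 data used 64.
def decode_quality(qual: str, offset: int = 33) -> list[int]:
    return [ord(c) - offset for c in qual]

print(decode_quality("!!~II"))  # -> [0, 0, 93, 40, 40]
```

Which is to say: "encoding it as a number" was considered, just one character at a time.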
-
simpledroid: completing the circle
It’s nearing the end of 2024 and that must mean a PRONOM hackathon as part of the World Digital Preservation Day (#WDPD2024).
My contribution is a follow-up on my work earlier in the year to produce a valid DROID signature file from Wikidata in wddroidy.
simpledroid is available on GitHub and creates a simple DROID signature file from PRONOM itself, creating a scripted pathway to create a signature file using official PRONOM data that doesn’t require the current PRONOM database and its legacy stored procedures.
It also does away with a lot of the excess data in the current DROID signature file, which was previously an optimization for its Boyer-Moore-Horspool search algorithm, as described by Matthew Palmer.
The primary reason for simpledroid was to complete the circle on my previous efforts and to prove that it was possible to create a simplified signature file and for it to work with DROID. The result is about 80-90% there, with only a few skeleton files that remain unidentified – it should only require a small amount of forensic research to determine the reason.
The output simplifies the signature file generation process, offering new opportunities to create alternative versions or to filter what's already there, e.g. keeping only the signatures explicitly for image identification for use in a digitization workflow.
It may provide another way into PRONOM data for those who might look at DROID first as well as opening up different ways to modify and test signatures.
It is possible to see in the reference output that the signatures are much easier to understand via this simplified DROID file.
simpledroid outputs a file with a smaller footprint than the current file:
```
1.2M DROID_SignatureFile_Simple_2024-11-11T12-29-22Z.xml
3.4M DROID_SignatureFile_V118.xml
```

It also contains all of the file classification data from PRONOM, e.g. `FormatType="Video"`, that will be added into DROID in a future release (and is already available in Siegfried). Unlike the wddroidy work, priorities have also been added to the signature file, so the mechanics of the signature file are pretty close to the official version (DROID uses the signature sequence and offsets to identify a file, but it then uses a priority to determine what results to display to the user where there may otherwise be positive matches for formats that provide the foundation for another, e.g. how XML forms the basis of SVG or XHTML).
It might be possible to remove some data around minimum and maximum offsets in the new file after discovering that the simplified DROID syntax requires curly-bracket syntax at the beginning and end of sequences to mimic the same behavior. E.g. with a `BOFoffset` reference, `min_offset = 2`, and signature `BADF00D1`, the signature needs to become `{2}BADF00D1` to work.

The code is pretty straightforward and uses a few tricks to output XML sensibly without having to build the document's tree (DOM) in a more verbose way. There are probably a few other shortcuts I'd fix with time if the code was ever useful, including improving variable naming and adding tests.
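A hedged sketch of that offset rewrite in Python (the helper name is mine, for illustration, and not part of simpledroid):

```python
# Hypothetical helper: fold a BOF minimum offset into the simplified
# PRONOM syntax by prefixing the sequence with a {n} skip, as described
# above. A zero offset leaves the sequence unchanged.
def with_bof_offset(sequence: str, min_offset: int) -> str:
    return f"{{{min_offset}}}{sequence}" if min_offset else sequence

print(with_bof_offset("BADF00D1", 2))  # -> {2}BADF00D1
```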
I’m not sure this code will ever be needed, or used by anyone, but for a quick hack and a quick proof of concept, it felt good to put it out there. Maybe someone will look at this or the wddroidy work and see there may be a way to federate different sources of signature information together into something DROID can use. Or it might be a useful demonstration to the DROID team that allows them to simplify PRONOM’s database and output mechanisms in a way that remains compatible with existing tools.
Previous research week work
My previous work for PRONOM research week includes a dashboard and API for getting more information out of PRONOM, including listings of those records still requiring descriptions or signatures. You may find that work interesting and it is available at https://pronom.ffdev.info and https://api.pronom.ffdev.info.
And if you want to get in on the signature development work, signature development utility 2.0 (https://ffdev.info) was also a previous effort of mine for research week 2020 and will hopefully also benefit from outputting DROID’s simplified syntax.
A week of file formats
Of course with World Digital Preservation Day, file formats were pretty popular.
Andrew Jackson attempted to calculate how many distinct formats might be out there using methods used to calculate ecological diversity.
Amanda Tome described the scope of their work and shared a number of useful resources including useful links to the PRONOM starter pack and to the PRONOM drop-in sessions.
You might also find out a bit more about yourself by playing this File Format Dating Game from Lotte Wijsman and colleagues: Susanne van den Eijkel, Anton van Es, Elaine Murray, Francesca Mackenzie, Ellie O’Leary, and Sharon McMeekin. (I ended up on a date with FASTA (FDD000622) in my first play-through!)
Not specifically for WDPD, but in the same week I also enjoyed this presentation from Ange Albertini looking at different ways of identifying file formats. One big take away for me was thinking about how to get more forensic information out of a file format identification. DROID doesn’t tell us a lot, but is there a world in which one day it could?
Let me know if you find any of this work useful at all; and good luck on your file format endeavors this week.
#digipres #DigitalPreservation #DROID #FileFormats #PRONOM #Python #siegfried #SkeletonTestCorpus #WDPD #WDPD2024
-
#PRONOM hackathon is still in full swing! Want to contribute? Adding a description is an easy way to help build on this amazing resource! #digipres #fileformats https://github.com/digital-preservation/PRONOM_Research/blob/main/PRONOM_Research_Week/Research_Week_2024.md#descriptathon
-
Wikidata is a good service, Wikibase (on which Wikidata is built) is a better platform.
I have spoken before about its potential to be added into the file-format registry ecosystem in a federated model.
If we are to use it as a registry that can complement the pipelines going into PRONOM, e.g. in vendors' digital preservation platforms such as the Rosetta Format Library, then Wikidata should be able to output different serializations of signature file for tools such as Siegfried, DROID, or FIDO.
- Siegfried ✅: https://github.com/richardlehane/siegfried/wiki/Wikidata-identifier
- Fido ❌: I’ll need to revisit this!
And what about DROID?
Conversion to DROID
It’s not straightforward to say to a Wikibase/Wikidata Query Service, “output XML in the shape of a DROID signature file”, but it is straightforward to write a converter script.
I had this very thought last week while presenting with colleagues at a File Format Workshop at iPRES in Ghent.
It dawned on me that the conversion script would actually be simple thanks to a change of format in DROID whereby it can now process all of its signatures itself, where previously they had to be pre-processed. It's a long story; the short version is that DROID no longer requires DROID byte-code to record information about an identification pattern, and can instead store signatures as-is in an attribute of a byte sequence element, i.e. a PRONOM-formatted regular expression from PRONOM itself, or from Wikidata.
This realization resulted in my writing a conversion script (it took just over a half-day) during some down-time on the train home this past weekend.
The script is called wddroidy (after WD-40 🙄🥁) and can be found here.
Results
We can see using the skeleton suite from Richard Lehane’s Builder that we can positively identify files using the new signature file.
Links can also be made to work with Wikidata identifiers by modifying the PUID URL pattern in the DROID configuration, e.g. to:
`http://wikidata.org/entity/%s`

The screenshot below shows where in the dialog that setting is:
Reference signature file
A reference signature file can be found in the wddroidy repository here. There are approximately 8119 file formats listed and 8195 file format signatures for those.
NB. We know there are different issues with Wikidata including how to identify a “format” and the quality of the signatures. We capture some of these in a global repository: https://github.com/ffdev-info/wikidp-issues/issues
DROID simplified format
The real headline here might be how easy it was to create the output using the DROID simplified format.
I have spoken about it briefly before but not in any detail.
In short, DROID no longer uses its own byte-code encoding that included strange terms such as `DefaultShift`, `Shift Byte`, and `SubSequence` (instructions to DROID about how to perform Boyer-Moore-Horspool search). See below and note especially how the bytes are split in `Shift Byte` attributes and elements:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<FFSignatureFile xmlns="http://www.nationalarchives.gov.uk/pronom/SignatureFile" Version="1" DateCreated="2024-09-23T18:16:09+00:00">
  <InternalSignatureCollection>
    <InternalSignature ID="1" Specificity="Specific">
      <ByteSequence Reference="BOFoffset">
        <SubSequence MinFragLength="0" Position="1" SubSeqMaxOffset="0" SubSeqMinOffset="0">
          <Sequence>255044462D312E34</Sequence>
          <DefaultShift>9</DefaultShift>
          <Shift Byte="25">8</Shift>
          <Shift Byte="50">7</Shift>
          <Shift Byte="44">6</Shift>
          <Shift Byte="46">5</Shift>
          <Shift Byte="2D">4</Shift>
          <Shift Byte="31">3</Shift>
          <Shift Byte="2E">2</Shift>
          <Shift Byte="34">1</Shift>
        </SubSequence>
      </ByteSequence>
    </InternalSignature>
  </InternalSignatureCollection>
  <FileFormatCollection>
    <FileFormat ID="1" Name="Development Signature" PUID="dev/1" Version="1.0" MIMEType="application/octet-stream">
      <InternalSignatureID>1</InternalSignatureID>
      <Extension>ext</Extension>
    </FileFormat>
  </FileFormatCollection>
</FFSignatureFile>
```
The updated format was made possible by Matt Palmer via his ByteSeek work, and can now accept a regularly encoded PRONOM-formatted regular expression (regex) in an attribute of the `ByteSequence` element. See here for a signature file equivalent to the above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<FFSignatureFile xmlns="http://www.nationalarchives.gov.uk/pronom/SignatureFile" Version="1" DateCreated="2024-09-23T18:16:09+00:00">
  <InternalSignatureCollection>
    <InternalSignature ID="1" Specificity="Specific">
      <ByteSequence Reference="BOFoffset" Sequence="255044462D312E34" Offset="0" />
    </InternalSignature>
  </InternalSignatureCollection>
  <FileFormatCollection>
    <FileFormat ID="1" Name="Development Signature" PUID="dev/1" Version="1.0" MIMEType="application/octet-stream">
      <InternalSignatureID>1</InternalSignatureID>
      <Extension>ext</Extension>
    </FileFormat>
  </FileFormatCollection>
</FFSignatureFile>
```
The format is much easier to read, and after a bit of time sitting with the DROID signature file format you realize it is fairly easy to output as well. I use some very rudimentary templates in wddroidy using Python’s f-strings.
It means other sources of PRONOM encoded signatures can output much simpler signature files and they can be used by DROID. I myself need to add it to the signature development utility – this would allow the utility to run standalone on anyone’s PC.
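As a rough illustration of the f-string templating approach mentioned above (this helper is a sketch, not wddroidy's actual code; the XML shape follows the simplified DROID format):

```python
# Sketch of f-string templating for a simplified signature file entry.
# The element and attribute names follow the simplified DROID format;
# the helper itself is illustrative, not lifted from wddroidy.
def internal_signature(sig_id: int, sequence: str, reference: str = "BOFoffset") -> str:
    return (
        f'<InternalSignature ID="{sig_id}" Specificity="Specific">'
        f'<ByteSequence Reference="{reference}" Sequence="{sequence}" Offset="0" />'
        f"</InternalSignature>"
    )

print(internal_signature(1, "255044462D312E34"))
```

The appeal is that no DOM construction is needed; the output is assembled as plain strings, one record at a time.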
One next step for this approach might be to confirm that it does work entirely as expected by extracting all of PRONOM’s signatures proper and performing a mapping to the simplified format – if we can match against all the skeleton files in the latest Builder release then we should be looking good!
Priorities
I am always reminded, but always forget about priorities! This is part of how DROID resolves a file format into a single identifier, e.g. where SVG can match XML, we often want the more specific format returned, and so a priority is used to prioritize that one over the other, resulting in a single unambiguous identification for the DROID user. It manifests in the signature file as:
```xml
<FileFormat ID="634" MIMEType="image/svg+xml" Name="Scalable Vector Graphics" PUID="fmt/91" Version="1.0">
  <InternalSignatureID>24</InternalSignatureID>
  <Extension>svg</Extension>
  <HasPriorityOverFileFormatID>638</HasPriorityOverFileFormatID>
</FileFormat>
```

More work needs to be done with Wikidata to understand if priorities can be properly applied to a DROID signature file. They are not written into the reference signature file above.

Using the results

The results can be used for two things:
- (Probably) There are a greater number of patterns in the Wikidata output than in PRONOM. If you have a file that remains unidentified, you can try the reference file for clues as to what it may be. I'd urge caution, though: investigate the exact byte sequence used for a match and understand its properties. I'd also check that the mapping looks accurate; I've tried one or two runs using the identifier and it looks good, but there may still be mistakes.
- For improving the quality of the sources in Wikidata. As you can see from the Skeleton suite there are a lot of gaps. We a) have a rough idea what these are, and b) know the identification doesn’t work via Wikidata. Why is that? Is the signature in Wikidata simply not good enough? Are patterns missing? Is there another error or issue we can help with given our expertise in file format identification?
Hacking wddroidy
You can hack wddroidy. Currently it allows you to limit the number of results returned, and also modify the ISO language code used by the tool. You can see this in the command line arguments:
```
python wddroidy.py --help
usage: wddroidy [-h] [--definitions DEFINITIONS] [--wdqs] [--lang LANG]
                [--limit LIMIT] [--output OUTPUT] [--output-date]
                [--endpoint ENDPOINT]

create a DROID compatible signature file from Wikidata

options:
  -h, --help            show this help message and exit
  --definitions DEFINITIONS
                        use a local definitions file, e.g. from Siegfried
  --wdqs, -w            live results from Wikidata
  --lang LANG, -l LANG  change Wikidata language results
  --limit LIMIT, -n LIMIT
                        limit the number of results
  --output OUTPUT, -o OUTPUT
                        filename to output to
  --output-date, -t     output a default file with the current timestamp
  --endpoint ENDPOINT, -url ENDPOINT
                        url of the WDQS

for more information visit https://github.com/ross-spencer/wddroidy
```
The actual SPARQL query used can be manually edited in the src folder. E.g. you can limit the query by format or family or classification. I provide some more inspiration in the Siegfried Wiki.
Let me know if it’s useful!
This is really just a quick hack and it needs a lot more testing to improve the quality of the output. Most can be dealt with on the Wikidata side I am sure, but some might need to be done in the tool. If it’s useful, reach out, and let’s discuss what can be changed or how it can be used in your work.
Data quality
It will quickly become apparent that the data quality isn't what it is in PRONOM, and that is why a curated and authoritative service such as PRONOM is always going to be needed. As mentioned in previous talks, this can in theory be complemented with downstream data in federated databases. This might mean curating Wikidata better using some of the tools available, or curating data into a Wikibase (the platform Wikidata is built upon). Both options bring different benefits, such as creating a bigger tent of signature developers on Wikidata, or, as another example, more expressive signatures being made available via federated Wikibases.
And a word on Wikiba.se
A reminder too, that setting up a Wikibase can take some effort (I was once running three at the same time 😬) but a service called https://wikiba.se/ exists. wikiba.se could form an excellent scratch pad to begin thinking about mapping PRONOM like data to a Wikibase and also begin solving some of the other issues around mapping container signatures and outputting those in a way that is compatible for DROID. Let me know if you give it a whirl, or want to collab on any of that.
Otherwise, thanks in advance! And enjoy wddroidy!
https://exponentialdecay.co.uk/blog/making-droid-work-with-wikidata/
#Code #Coding #digipres #DigitalPreservation #DROID #FileFormat #FileFormats #OpenData #PRONOM #siegfried #SoftwareDevelopment #wikidata
-
Where do I see the file/format/disc header found in https://ia800200.us.archive.org/26/items/BMUG-TVRToo-Update/TVR%20Too%20Update.iso (https://archive.org/details/BMUG-TVRToo-Update)
It's not ISO9660 but I don't know the right keywords to search for much older CD formats.
#FileFormats #SoftwareHistory #SoftwareArchaeology #ReverseEngineering #RetroComputing
Previously this asked: Does anyone recognise a file header of 42 44 AA 25? But I don't think this is the actual file header.
-
56142 characters later i have another draft of pDBv1 - an encrypted password database format as a successor to pDBv0
https://github.com/TruncatedDinoSour/armour/blob/main/doc/d/pDB-1.md
#cryptography #database #encryption #passwords #pdb #fileformats
( still a work in progress but yeah )
-
What is SFASTA?
Genomic and bioinformatic-adjacent sequences (RNA, protein, peptides) are stored as FASTA files. Sequencing reads off a machine are stored as FASTQ files, adding a quality score associated with each nucleotide. Currently, these are non-human-readable plaintext files. As sequencing scales up, the ability to process many gigabytes and terabytes of files rapidly and with random access (currently solved by bgzip/tabix) becomes incredibly important.
SFASTA, my focus-on-random-access-speed FASTA/Q format replacement, has worked well for medium and large FASTA files, defining large as anything smaller than the NT nucleotide database (~203GB gzip-9 compressed, but likely larger whenever you are reading this). Small files did not benefit from stream compression and crazy indices, although the time cost for small files is irrelevant. But the conversion of NT to SFASTA took an inordinate amount of time, and reading the index into memory did as well. While still smaller and faster than gzip-9, this does not accomplish what I want.
Why?
FASTA files are frequently compressed with an outdated, slow, inefficient compression algorithm (gzip). Modern alternatives provide better compression ratios, decompression ratios, and faster throughput. The speed of reading FASTA files is quite important, with multiple tools that try to be the fastest. Clearly, this is an unsolved problem, and sticking to a text-based, non-human-readable format is a choice that only occurs due to the momentum of existing tools.
Genomics is moving to “Genomics at scale” and away from single-genome analyses. A flat file format adds unnecessary processing time to query hundreds of genomes instantly. For my own usage, I’d like to be able to query NT and fill up the GPU with random, on-the-fly examples. This is entirely achievable with modern computers but not with outdated file formats and compression.
What does SFASTA mean?
SFASTA previously stood for "Snappy FASTA", as it used the Snappy algorithm, but now it uses ZSTD. The name remains, as does the command sfa, which can be typed with one hand on the home row of a standard keyboard.
Further Speed-ups
So, it's clear my custom-built index was a failure. Enter the B+-tree. While fighting post-COVID brain fog, I eventually managed to build a naive implementation. My benchmarks for creating a tree with 128 million key/value pairs threatened to take over 20 hours (for 20 samples, so 1 hour each). Hacking away at that, I did shrink it, but only somewhat. Then, I modified a copy to use the sorted-vec crate. Finally, while reading further on the topic, I discovered fractal trees, which merely add a buffer to each node and process it when calling for a flush or exceeding the buffer size. I am now within a minute of creating such a large index. For this implementation, the fractal tree uses sorted-vecs as the key vector.
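The buffering idea can be sketched in a few lines. This is a deliberately minimal single-node illustration of the technique described above (SFASTA itself is Rust; a real node would also split its children at the tree's order):

```python
# Minimal sketch of the fractal-tree idea: inserts land in a cheap
# append-only buffer and are merged into the sorted key vector only when
# the buffer fills (or on an explicit flush).
import bisect

class BufferedNode:
    def __init__(self, buffer_size: int = 128):
        self.keys: list[int] = []      # kept sorted, like a sorted-vec
        self.buffer: list[int] = []    # unsorted staging area
        self.buffer_size = buffer_size

    def insert(self, key: int) -> None:
        self.buffer.append(key)        # O(1) amortized, no re-sorting yet
        if len(self.buffer) >= self.buffer_size:
            self.flush()

    def flush(self) -> None:
        for key in self.buffer:
            bisect.insort(self.keys, key)
        self.buffer.clear()

node = BufferedNode(buffer_size=4)
for k in [9, 3, 7, 1]:                 # the fourth insert triggers the flush
    node.insert(k)
print(node.keys)  # -> [1, 3, 7, 9]
```

The win is that the expensive sorted insertion is paid in batches rather than per key, which is exactly why the larger buffer sizes below perform well at build time.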
For B+ trees and fractal trees, the order of the nodes (how many children each node can have) is incredibly important. For creating trees, an order of 32 seems to be the sweet spot (this is tested on u64 as both keys and values). For fractal trees, 64 with a larger buffer seems to be the sweet spot. The figure below shows the fastest order, 64, and buffer 128. The image below is for 1 million items.
Text is difficult to read, but the number is the order, and for fractal trees, the second number is the buffer size.
The Big Tree
My NT test dataset is a bit over 128 million entries, u64 range 0 to 128_369_206, with keys and values as the xxh3-hashed integer. You can see the spread below. Here the larger buffer size (up to 256) performs the best, but many configurations are in the less-than-a-minute sweet spot.
Searching the Tree
Now that building a tree for NT takes under a minute, compressing and queueing the nucleotide sequences and IDs into the file will be the bottleneck for creation. Building the tree is also a one-time cost, so it is not the highest priority. The focus now is searching the tree, which will happen quite frequently depending on the final use case.
I'm just now getting started on this, but as you can see below, where input is the order of the nodes, a larger order decreases the time to find a key. This is an even better sign for the fractal trees, as they are more efficient with larger orders. The image below shows very little difference, with sorted-vec having a bit of a slowdown. I have no idea why; possibly a line of code that did not change, as I'm simultaneously playing around with three versions. As my fractal tree implementation uses sorted-vec, these results are quite equivalent. The search code is nearly identical. This is the next step.
Here, the x-axis is node order, with tests for 16, 32, 64, and 128.
What Hasn’t Worked
- 2bit/4bit nucleotide encodings – did not increase throughput or decrease on-disk size. Still worth further investigation.
Immediate Next Steps
As this is a write-once file format, at least at this stage, I plan to do the following:
- Smaller struct for read-only mode, i.e., buffer is no longer needed
- Benchmark sorted-vec against Eytzinger order
- Load only parts of the tree from disk, have efficient serialization
- Possibly try a bumpalo arena for querying the on-disk tree
- Batch insertion – Maybe this was all for naught
- Stream VBytes storage for keys/values of tree?
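One of the steps above mentions benchmarking sorted-vec against Eytzinger order. As a sketch of what that comparison involves, here is the standard conversion of a sorted array into Eytzinger (BFS) layout, shown in Python for illustration while SFASTA itself is Rust:

```python
# Sketch: lay a sorted array out in Eytzinger (BFS) order, where index i's
# children live at 2i+1 and 2i+2. Binary search over this layout touches
# memory in a cache-friendlier pattern than a plain sorted array.
def eytzinger(sorted_vals: list[int]) -> list[int]:
    out = [0] * len(sorted_vals)
    it = iter(sorted_vals)

    def fill(i: int = 0) -> None:
        # An in-order walk over the implicit tree consumes the sorted
        # values in order while writing them to BFS positions.
        if i < len(out):
            fill(2 * i + 1)
            out[i] = next(it)
            fill(2 * i + 2)

    fill()
    return out

print(eytzinger([1, 2, 3, 4, 5, 6, 7]))  # -> [4, 2, 6, 1, 3, 5, 7]
```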
Ultimate Goals
- LD_PRELOAD to work with existing tools
- Python library
- C API
I’ve been programming in Rust for a couple of years and have experimented with many different things, including the bevy game engine. I would still argue I’m a middling skilled Rust developer, as I’m also a population geneticist. Thus, some weeks are spent without writing a single line of code or only writing in Python for statistical analysis. Thus, I expect much room for improvement, although I’m proud of where I’ve gotten this so far.
Plots made with criterion.
-
Here I was, all these years, thinking the macOS Finder reports file size in MiB (binary-based, basically dividing the number of bytes by 1024 twice), but this just shows how old I am now. Fact is, it's been reporting MB (#bytes / 1000 / 1000) since OS X 10.6 (Snow Leopard), which came out in 2009.
(via https://apple.stackexchange.com/questions/7817/how-can-i-force-macos-to-make-a-binary-conversion-of-filesizes-in-mb-gb and https://support.apple.com/en-us/102119)
And yes, this doesn't relate to the zips directly, but I found out about this while opening one, so ... it counts.
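The gap between the two conventions is easy to show with a quick sketch:

```python
# The same byte count reported both ways: decimal MB (what Finder shows
# since 10.6) versus binary MiB (dividing by 1024 twice).
size_bytes = 5_000_000
mb = size_bytes / 1000 / 1000    # decimal megabytes
mib = size_bytes / 1024 / 1024   # binary mebibytes

print(f"{mb:.2f} MB vs {mib:.2f} MiB")
```

About a 4.9% difference at this scale, which is why the two readings never quite agree.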
-
Shattering the eyeglass: Using Kaitai Structs to dissect the eyeglass’ contents
by @beet_keeper

In my post from 2012, Genesis of a File Format, I created a new file format: the Eyeglass file format. The format provides a mechanism to persist information about a patient's eye health following a checkup at an optician's. Today in 2023 we can use the format to understand how to make use of Kaitai Struct for understanding file formats.
Given the disclaimer that I am not actually an optician and that the format is purely illustrative, let’s look at the eyeglass again below.
#Code #Coding #digipres #digitalLiteracy #DigitalPreservation #FileFormat #FileFormatAnalysis #FileFormats #kaitai #PRONOM #YYYY
-
fluffy rambles: In defense of WebP https://beesbuzz.biz/blog/12693-In-defense-of-WebP #FluffyRambles #FileFormats #Progress #Rants #Webp #Web
-
On Paperwork vs. Digital Formats
tired: Our customer's paperwork is profit. Our own paperwork is loss.[1]
wired: Your proprietary data format is loss. Our proprietary data format is profit.
I'd remembered the first aphorism from a long-ago collection of Murphy's Laws.
Thinking through my struggles at organising online and digital media, references, etc., I realised that a huge problem is that these formats don't serve my goals. They're designed far more around their authors' goals, or even more often, the publishers' goals, largely around advertising, marketing, tracking, building lock-in, creating and defending monopolies, and the like.
Digital formats that are in the end-user's interest and specification serve the user. Those that are in the publisher's specification serve the publisher.
A related thought is that a key affordance of printed periodicals (newspapers, magazines, journals) is that of garbage collection, to put a contemporary spin on it.
When you're done reading a newspaper or magazine, you pick up the whole lot and throw it out. There's an intermediate level of organisation between "the article" and "the whole collection" (that is, everything published in your office or home): "the issue". (Or perhaps a box or shelf of archived media.) That is, _there are multiple naturally-occurring levels of aggregation._
When you're trying to sort through a set of browser tabs, you generally have only two levels of aggregation: the individual tab, or the entire session. There are typically no intermediate levels, and sorting through what you want to keep (or re-read, or work with) means you've got to go through the set one at a time and resolve disposition. The data format serves the browser vendor, but not the user.
Tools such as Tree-Style Tabs, an absolutely essential Firefox extension, give a higher level of natural organisation, the tab tree. Here, a structure emerges, without user effort, of related content. At the top of the tree is whatever page began an exploration, and as you descend it, you go further down into the search. When cleaning up, it's possible to pick any given tab, branch, or whole tree, and close it out in one fell swoop. Garbage collection costs are reduced.
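The garbage-collection point can be made concrete with a toy model. This is purely illustrative (the types and names are mine, not any real browser's API): when tabs form a tree, closing one node discards its entire subtree in a single operation, which is exactly the cheap intermediate level of aggregation the post describes.

```rust
// A toy model of the tab-tree idea: tabs form a tree, and closing
// any node drops its whole subtree at once. All names here are
// illustrative, not from any real browser or extension API.
#[derive(Debug)]
struct Tab {
    url: String,
    children: Vec<Tab>,
}

impl Tab {
    // Count this tab plus everything beneath it.
    fn count(&self) -> usize {
        1 + self.children.iter().map(Tab::count).sum::<usize>()
    }

    // Close the child subtree rooted at `url`, wherever it sits.
    // Dropping the Tab drops all its descendants: cheap "GC".
    fn close_branch(&mut self, url: &str) {
        self.children.retain(|c| c.url != url);
        for c in &mut self.children {
            c.close_branch(url);
        }
    }
}

fn main() {
    let mut root = Tab {
        url: "search".into(),
        children: vec![
            Tab {
                url: "a".into(),
                children: vec![Tab { url: "a1".into(), children: vec![] }],
            },
            Tab { url: "b".into(), children: vec![] },
        ],
    };
    assert_eq!(root.count(), 4);
    root.close_branch("a"); // one operation removes "a" and "a1"
    assert_eq!(root.count(), 2);
}
```

With a flat tab bar, the same cleanup requires visiting every tab and deciding its fate individually; the tree gives the "issue"-sized unit back.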
(Three guesses as to what I've been attempting to do, and the first two don't count.)
#media #paperwork #DigitalMedia #DigitalFormats #FileFormats #DataFormats #kfc #docfs #UserCentricDesign #TreeStyleTabs