#snakemakehackathon2026 — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #snakemakehackathon2026, aggregated by home.social.
-
RE: https://fediscience.org/@snakemake/116571962095785816
This little bit of "performance improvements" lowered the number of file system access events considerably! #Snakemake triggers many such events to keep track of metadata. That is important, but may cause delays due to file system overhead, particularly on parallel and/or network file systems. The feature to outsource parts of this to SQLite was implemented during the #SnakemakeHackathon2026. I hope I can test the improvements next Monday!
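As a rough illustration of the idea (not Snakemake's actual internals; all names below are made up), replacing one tiny metadata file per output with a single SQLite database turns many file system operations into database queries:

```python
# Hypothetical sketch: one SQLite database instead of one small
# metadata file per output, so a run touches the file system far less.
import sqlite3

class MetadataStore:
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS metadata ("
            "  output TEXT PRIMARY KEY,"
            "  mtime REAL,"
            "  code_hash TEXT)"
        )

    def record(self, output, mtime, code_hash):
        # One INSERT replaces creating/updating a tiny metadata file.
        self.conn.execute(
            "INSERT OR REPLACE INTO metadata VALUES (?, ?, ?)",
            (output, mtime, code_hash),
        )
        self.conn.commit()

    def lookup(self, output):
        # One SELECT replaces an open/read/stat on a metadata file.
        return self.conn.execute(
            "SELECT mtime, code_hash FROM metadata WHERE output = ?",
            (output,),
        ).fetchone()

store = MetadataStore()
store.record("results/sample1.bam", 1700000000.0, "abc123")
print(store.lookup("results/sample1.bam"))  # → (1700000000.0, 'abc123')
```

On a network or parallel file system, each avoided stat/open/read is latency saved, which is where the overhead reduction comes from.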
-
RE: https://fediscience.org/@biohackrxiv/116477505622903440
Today, we published the summary of the #SnakemakeHackathon2026 as a #BioHackrXiv preprint: all the accomplishments that, once more, improve the #Snakemake ecosystem for #ReproducibleResearch & #DataAnalysis.
Many thanks to @egonw from BioHackrXiv for helping us!
-
"Snakemake Hackathon 2026" https://doi.org/10.37044/osf.io/h6zqj_v1
"Nonetheless, the platform’s continued evolution faces several open challenges: improving core performance on heterogeneous high-performance-computing (HPC) resources, extending the plugin architecture for domain-specific extensions, and lowering the entry barrier for novice users while preserving full reproducibility. Here we report on the Snakemake Hackathon 2026, convened in Munich, Germany (9–13 March 2026) with more than 40 participants representing academia, industry, and national-level research infrastructure." https://index.biohackrxiv.org/2026/04/27/h6zqj.html
-
The #SnakemakeHackathon2026 has ended; we are still preparing our preprint release. But our host has already published a note on their homepage: https://go.tum.de/946236 🥳
-
RE: https://fediscience.org/@snakemake/116295568336688286
This is a big step forward: the SLURM plugin for Snakemake now supports so-called job arrays. These are cluster jobs with roughly equal requirements in terms of memory and compute resources.
The change itself was big: the purpose of a workflow system is to make use of the vast resources of an HPC cluster, so jobs are submitted to run concurrently. For a job array, however, we have to wait for all eligible jobs to be ready, and only then submit.
To preserve concurrent execution of other jobs that are ready, a thread pool has been introduced. In itself, I do not see job arrays as such a big feature: the LSF system profited much more from arrays than the rather lean SLURM implementation does.
BUT: the new code base will ease further development towards pooling many shared-memory tasks (applications which support no parallel execution, or are confined to one computer by "only" supporting threading). Until then, there is more work to do.
#HPC #SLURM #Snakemake #SnakemakeHackathon2026 #ReproducibleComputing #OpenScience
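The batching idea can be sketched roughly like this (illustrative code, not the plugin's implementation; all names are hypothetical): ready jobs are grouped by their resource signature, and each group is submitted from a thread pool so that no group blocks another:

```python
# Illustrative sketch of array-style batching with a thread pool.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def submit_array(resources, jobs):
    # Stand-in for a single "sbatch --array" call covering all jobs
    # with identical resources; returns a fake submission record.
    return (resources, len(jobs))

def submit_ready_jobs(ready_jobs):
    # ready_jobs: list of (job_name, (mem_mb, cpus)) tuples.
    groups = defaultdict(list)
    for name, resources in ready_jobs:
        groups[resources].append(name)
    # Each equal-resource group goes out from the pool, so waiting for
    # one group's submission never stalls the other ready groups.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(submit_array, res, jobs)
                   for res, jobs in groups.items()]
        return [f.result() for f in futures]

ready = [("align_a", (8000, 4)), ("align_b", (8000, 4)), ("plot", (1000, 1))]
print(sorted(submit_ready_jobs(ready)))
# → [((1000, 1), 1), ((8000, 4), 2)]
```

Two equal-resource alignment jobs become one array submission while the odd one out is submitted separately, concurrently.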
-
Did you know? We have started to gather cluster profiles from Snakemake users in a little repo: https://github.com/snakemake/snakemake-cluster-profiles
They should serve as templates for others.
#HPC #Snakemake #ReproducibleComputing #SnakemakeHackathon2026
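For reference, such a profile is typically a small `config.yaml`. The sketch below is a hypothetical example modeled on common Snakemake profile settings; the partition and account values are placeholders to adapt to your cluster:

```yaml
# Hypothetical site profile (config.yaml); adjust values for your cluster.
executor: slurm
jobs: 100
default-resources:
  slurm_partition: "standard"   # placeholder partition name
  slurm_account: "myaccount"    # placeholder account name
  mem_mb: 2000
  runtime: 60
latency-wait: 60
```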
-
Personally, the week in Munich at the #SnakemakeHackathon2026 was really neat. I met friends and acquaintances, took the time to see an old friend of mine who is not working in academia any more, and met the wonderful @FrankSonntag from our #FediScience association.
And now, tired, I am on my way back. Thanks to the railway service in Germany, I get to enjoy some boredom and the opportunity to do something else. Even reading a Discworld novel (which is not "novel" any more) does not help.
-
I learned that I am the first to write a reporter plugin that is part of the #Snakemake organization.
That will change. @fbartusch is working on an #ROCrate plugin; yours truly is working on a #nanopub plugin. Both will ease publishing workflow analysis metadata and make our computing a bit more transparent. Our motivation? Well, did you ever read a data analysis paper (e.g. from a #Bioinformatics group) recently? See?
-
Which new features did I like the most?
Well, there are so many that it merits a preprint, for which @egonw already lent tremendous support.
Anyway, here are my favourites:
- #Snakemake tracks all #metadata during workflow execution. This caused many(!) file access requests. Now there is a SQLite DB for that purpose, lifting quite some overhead.
- Containerizing workflows to a Dockerfile has been possible for a long time. With "--containerize apptainer" there is now direct support for #apptainer.
- When a workflow is aborted abruptly, it cannot delete its lock file. Running `--unlock` no longer requires calculating the DAG.
Oh, there is much, much more. But the changelog is already linked.
-
As for the little executor plugin for the #SLURM batch system (for which I promised a release with job array support) ... well, only a little bug fix release could be accomplished: https://github.com/snakemake/snakemake-executor-plugin-slurm/releases/tag/v2.5.4
Unfortunately, I wanted to use the common #Snakemake logo without the letters "HPC" and missed one entry, so our announcement bot did not work.
Anyway, a faulty file system connection kept me from debugging the new feature. Stay tuned. It is almost ready.
-
RE: https://fediscience.org/@snakemake/116222696140712833
What a week at the #SnakemakeHackathon2026 !
What a wonderful week with wonderful people!
We were pretty productive, and this #Snakemake release is just the peak of it. The list of features, bug fixes, performance improvements and additional documentation is so long that our little announcement robot cannot display it all, even here on FediScience with its 1500-character limit!
-
Finally, some personal progress: thanks to @fbartusch, a bug in the #SLURM executor plugin for Snakemake was fixed (dealing with nested quoting). A release is upcoming.
And: I generated my first (still faulty) test #nanopub from Snakemake 🥳