home.social

#snakemakehackathon2026 — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #snakemakehackathon2026, aggregated by home.social.

  1. RE: fediscience.org/@snakemake/116

    This little bit of "performance improvements" lowered the number of file system access events considerably! #Snakemake triggers many such events to keep track of metadata. This is important, but may cause delays due to file system overhead, particularly on parallel and/or network file systems. The feature to outsource parts of this to SQLite was implemented during the #SnakemakeHackathon2026. I hope I can test the improvements next Monday!

    #HPC #ReproducibleComputing
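
    The consolidation described above can be sketched in a few lines. This is a hypothetical illustration using Python's built-in sqlite3, not Snakemake's actual schema or code: once per-job metadata lives in one SQLite database, a lookup becomes a single indexed query instead of a stat/open/read round trip against one small file per output.

    ```python
    import sqlite3

    # Illustrative only: one database file replaces one metadata file per
    # output, so the file system sees one handle instead of thousands.
    conn = sqlite3.connect(":memory:")  # on disk this would be a single .db file
    conn.execute(
        "CREATE TABLE IF NOT EXISTS metadata ("
        "  output_path TEXT PRIMARY KEY,"
        "  rule_name   TEXT,"
        "  finished_at REAL)"
    )

    def record_job(path, rule, timestamp):
        """Store provenance for one output file (one row, not one file)."""
        conn.execute(
            "INSERT OR REPLACE INTO metadata VALUES (?, ?, ?)",
            (path, rule, timestamp),
        )
        conn.commit()

    def lookup(path):
        """A single indexed query replaces a per-file access event."""
        return conn.execute(
            "SELECT rule_name, finished_at FROM metadata WHERE output_path = ?",
            (path,),
        ).fetchone()

    record_job("results/sample1.bam", "map_reads", 1750000000.0)
    print(lookup("results/sample1.bam"))  # ('map_reads', 1750000000.0)
    ```

    The path and rule name here are made up; the point is only that metadata reads and writes become database operations against one file.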

  2. RE: fediscience.org/@biohackrxiv/1

    Today, we published the summary of the #SnakemakeHackathon2026 as a #BioHackrXiv preprint: all the accomplishments that, once more, improve the #Snakemake "ecosystem" for #reproducibleResearch & #Dataanalysis.

    Many thanks to @egonw from BioHackrXiv for helping us!

  3. "Snakemake Hackathon 2026" doi.org/10.37044/osf.io/h6zqj_

    "Nonetheless, the platform’s continued evolution faces several open challenges: improving core performance on heterogeneous high-performance-computing (HPC) resources, extending the plugin architecture for domain-specific extensions, and lowering the entry barrier for novice users while preserving full reproducibility. Here we report on the Snakemake Hackathon 2026, convened in Munich, Germany (9–13 March 2026) with more than 40 participants representing academia, industry, and national-level research infrastructure." index.biohackrxiv.org/2026/04/

    #SnakemakeHackathon2026 #biohackathon #snakemake

  4. The #SnakemakeHackathon2026 has ended; we are still preparing our preprint release. But our host has prepared a note on their homepage: go.tum.de/946236 🥳

    #Snakemake #ReproducibleComputing

  5. RE: fediscience.org/@snakemake/116

    This is a big step forward: The SLURM plugin for Snakemake now supports so-called job arrays. These are cluster jobs with approximately equal resource requirements in terms of memory and compute.

    The change itself was big: the purpose of a workflow system is to make use of the vast resources of an HPC cluster, so jobs are submitted to run concurrently. For a job array, however, we have to "wait" until all eligible jobs are ready, and only then submit.

    To preserve concurrent execution of other jobs that are ready, a thread pool has been introduced. By itself, I do not see job arrays as such a big feature: the LSF system profited much more from arrays than the rather lean SLURM implementation does.

    BUT: the new code base will ease further development towards pooling many shared-memory tasks (applications which do not support parallel execution, or are confined to one computer because they "only" support threading). Until then, there is more work to do.

    #HPC #SLURM #Snakemake #SnakemakeHackathon2026 #ReproducibleComputing #OpenScience
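
    The batching idea in the post above can be sketched roughly as follows. This is a hypothetical illustration, not the plugin's real code: the job dicts, the grouping key, and `run_task.sh` are all made up. Jobs with equal resource requirements are grouped, and each group becomes one `sbatch --array` submission instead of many single submissions.

    ```python
    from collections import defaultdict

    def group_for_arrays(ready_jobs):
        """Group ready jobs by (mem_mb, cpus) so each group can be one array.

        In the real plugin, a thread pool (not shown here) keeps other ready
        jobs running while a group is being collected.
        """
        groups = defaultdict(list)
        for job in ready_jobs:
            groups[(job["mem_mb"], job["cpus"])].append(job)
        return groups

    def array_command(jobs, mem_mb, cpus):
        """Build one sbatch call covering len(jobs) tasks via --array."""
        return (
            f"sbatch --array=0-{len(jobs) - 1} "
            f"--mem={mem_mb} --cpus-per-task={cpus} run_task.sh"
        )

    ready = [
        {"name": "map_a", "mem_mb": 4000, "cpus": 4},
        {"name": "map_b", "mem_mb": 4000, "cpus": 4},
        {"name": "big_sort", "mem_mb": 64000, "cpus": 16},
    ]
    for (mem, cpus), jobs in group_for_arrays(ready).items():
        print(array_command(jobs, mem, cpus))
    ```

    Here three ready jobs collapse into two submissions: one two-task array and one single-task "array". The `--array`, `--mem`, and `--cpus-per-task` flags are standard SLURM sbatch options.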

  6. Personally, the week in Munich at the #SnakemakeHackathon2026 was really neat. I met friends and acquaintances, took the time to meet an old friend of mine who no longer works in academia, and the wonderful @FrankSonntag from our #FediScience association.

    And now, tired, on my way back. Thanks to the railway service in Germany, I enjoy some boredom and the opportunity to do something else. Even reading a Discworld novel (which is not "novel" any more) does not help.

    #lifeis2short

  7. I learned that I am the first to write a reporter plugin that is part of the #Snakemake organization.

    That will change. @fbartusch is working on an #ROCrate plugin. Yours truly is working on a #nanopub plugin. Both will ease publishing workflow analysis metadata and make our computing a bit more transparent. Our motivation? Well, did you ever read a data analysis paper (e.g. from a #Bioinformatics group) recently? See?

    #SnakemakeHackathon2026

  8. Which new features did I like the most?

    Well, there are so many that it merits a preprint, for which @egonw already lent tremendous support.

    Anyway, here are my favourites:
    - #Snakemake tracks all #metadata during workflow execution. This used to cause many(!) file access requests. Now there is an SQLite database for that purpose, lifting quite some overhead.
    - Containerizing workflows to a Dockerfile has been possible for a long time. With `--containerize apptainer` there is now direct support for #apptainer.
    - When a workflow is aborted abruptly, it cannot delete its lock file. Now, running `--unlock` does not require calculating the DAG any more.

    Oh, there is much, much more. But the Changelog is already linked.

    #SnakemakeHackathon2026
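
    The lock-file point in that list is easy to illustrate. A toy sketch, not Snakemake's actual lock handling: a run holds a lock file in the working directory, and unlocking after an abrupt abort only has to delete that file, which is why it should not need the DAG at all.

    ```python
    import os
    import tempfile

    def acquire_lock(workdir):
        """Create a lock file atomically; fails if another run holds it."""
        lock = os.path.join(workdir, ".lock")
        # O_EXCL makes creation atomic: raises FileExistsError if present.
        fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return lock

    def unlock(workdir):
        """Remove a stale lock left by an aborted run; no DAG work needed."""
        lock = os.path.join(workdir, ".lock")
        if os.path.exists(lock):
            os.remove(lock)
            return True
        return False

    with tempfile.TemporaryDirectory() as d:
        acquire_lock(d)   # simulate a run that was aborted
        print(unlock(d))  # True: stale lock removed
        print(unlock(d))  # False: nothing left to remove
    ```

    Deleting one file is cheap; building the DAG first, as older versions effectively required, is not.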

  9. As for the little executor plugin for the #SLURM batch system (for which I promised a release with array job support) ... Well, only a little bug fix release could be accomplished: github.com/snakemake/snakemake

    Unfortunately, I wanted to use the common #Snakemake logo without the letters "HPC" and missed one entry, so our announcement bot did not work.

    Anyway, a faulty file system connection kept me from debugging the new feature. Stay tuned. It is almost ready.

    #SnakemakeHackathon2026

  10. RE: fediscience.org/@snakemake/116

    What a week at the #SnakemakeHackathon2026 !

    What a wonderful week with wonderful people!

    We were pretty productive, and this #Snakemake release is just the peak of it. The list of features, bug fixes, performance improvements and additional documentation is so long that our little announcement robot cannot display it all, even here on FediScience with its 1500-character limit!

    #ReproducibleComputing #OpenScience

  11. Finally, some personal progress: thanks to @fbartusch, a bug in the #SLURM executor plugin for Snakemake (dealing with nested quoting) was fixed. A release is upcoming.

    And: I generated my first (still faulty) test #nanopub from Snakemake 🥳

    #SnakemakeHackathon2026
