#hpc — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #hpc, aggregated by home.social.
-
Great example of codesign for storage. 40×246 TB SSDs can only be made reliable with storage software that does content-aware erasure coding and supports fail-in-place. You'd never run Lustre (or any other system that assumes reliable block devices) on this; the time to recover from a drive failure would be atrocious.
This is 100% for scale-out object storage for AI environments. It's not economical for anything else.
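To put rough numbers on the recovery concern, here is a back-of-envelope sketch in Python; the throughput figures are illustrative assumptions, not vendor specs:

```python
def rebuild_hours(capacity_tb: float, rebuild_gb_per_s: float) -> float:
    """Hours to re-create one drive's worth of data at a sustained rate."""
    capacity_gb = capacity_tb * 1000  # decimal TB -> GB
    return capacity_gb / rebuild_gb_per_s / 3600

# Traditional rebuild funnelled through a single replacement drive,
# assuming ~2 GB/s sustained write (an assumption, not a spec):
serial = rebuild_hours(246, 2.0)

# Declustered, content-aware erasure coding spreads the rebuild across
# the whole pool; assume 40 drives each contributing ~1 GB/s:
declustered = rebuild_hours(246, 40 * 1.0)

print(f"single-drive rebuild: {serial:.1f} h")
print(f"declustered rebuild:  {declustered:.1f} h")
```

The gap between the two cases is why rebuilds funnelled through one replacement drive stop being viable at this capacity.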
-
RE: https://fediscience.org/@snakemake/116571962095785816
This little bit of "performance improvements" lowered the number of file system access events considerably! #Snakemake triggers many such events to keep track of metadata. That is important, but can cause delays due to file system overhead, particularly on parallel and/or network file systems. The feature to outsource parts of this bookkeeping to SQLite was implemented during the #SnakemakeHackathon2026. I hope I can test the improvements next Monday!
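The underlying idea can be sketched like this; a minimal illustration only, assuming nothing about Snakemake's actual schema or code, just showing how per-file metadata can live in one local SQLite file instead of triggering repeated stat() calls against a shared file system:

```python
import os
import sqlite3

def open_cache(db_path=":memory:"):
    """One SQLite DB holds metadata for all tracked files."""
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS meta
                  (path TEXT PRIMARY KEY, mtime REAL, size INTEGER)""")
    return db

def record(db, path):
    """stat() once when the file is produced, then remember the result."""
    st = os.stat(path)
    db.execute("INSERT OR REPLACE INTO meta VALUES (?, ?, ?)",
               (path, st.st_mtime, st.st_size))
    db.commit()

def known(db, path):
    """Later bookkeeping reads the local DB, not the shared file system."""
    row = db.execute("SELECT mtime, size FROM meta WHERE path = ?",
                     (path,)).fetchone()
    return row is not None
```

On a parallel file system each avoided metadata operation matters, since metadata servers are a common bottleneck.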
-
New #HPC Job! 🕶️: Title: Marketing & Business Development Lead for Supercomputing, AI and Enterprise Data at HMx Labs
Location: London, UK
Employer: HMx Labs
Remote: Hybrid
https://www.linkedin.com/jobs/view/4413498596/ -
The cooling transition (air → liquid → immersion) is a multi-year infrastructure shift that most market analyses under-weight. It affects everything: rack density, power distribution, data center location strategy, and ultimately who can build at scale.
-
We had a very productive F2F meeting last week at the Argonne Leadership Computing Facility, with many thanks to our great hosts at the Argonne National Lab. The main objective was to feature-freeze OpenMP API version 6.1 and we accomplished that mission!
-
New #HPC Job! 👀️: Title: Lead GPU Infrastructure Engineer (HPC Fleet)
Location: Remote
Employer: Alpha Compute Corp.
Remote: Fully remote
https://www.linkedin.com/jobs/view/4410948568/ -
#hpc #supercomputing #linux
🚨🚨🚨🚨🚨
The first NVL4 shelf is provisioning over the network! This is the beginning of the new system! -
Unread Slack channels can be amusing at times - here are two sets (in this order) from 2 different Slacks.
Meet the new #HPC band:
Slurm Music and the General Random Schedulers.
-
I was checking, for a faculty candidate, what the #HPC center is offering on the newest iteration of its cluster (which charges a fee per CPU-second of use). The high-mem instance? 64 GB of RAM. Which is what my <checks notes> decade-plus-old server has.
Within the last two weeks I was running a relatively simple random forest that needed 80+ GB on version 1 of the cluster. Well, I guess I'm not doing that after it's decommissioned. Which sent me down the rabbit hole: what would it take for me to build a 256 GB RAM server?
-
At the Salishan HPC conference last week, I gave a talk called "AI doesn't need massive supercomputers after all!"
A couple people asked me for the slides (as crappy as they were), so here they are in hastily written blog format.
https://blog.glennklockwood.com/2026/05/ai-doesnt-need-giant-supercomputers.html
-
New #HPC Job! 😍️: Title: DevOps Engineer HPC
Location: Remote, with one day a week in London
Employer: Alliance Services +
Remote: Hybrid
https://groupeeolen-1721751826.teamtailor.com/jobs/7459617-ingenieur-devops-hpc-gpu-rust -
New #HPC Job! 😍️: Title: IT System Administrator or HPC Infrastructure Manager
Location: Göttingen, Germany
Employer: Max Planck Institute for Multidisciplinary Sciences (MPI-NAT), Department of Theoretical and Computational Biophysics
Remote: Fully onsite
https://www.mpinat.mpg.de/5256103/13-26_2?c=2501032 -
OpenCL 3.1 is here.
The Khronos Group has moved several capabilities into the core spec, including SPIR-V kernels, subgroups, and integer dot products.
Also includes improvements to the memory model and synchronization, plus better alignment with Vulkan via device UUID queries.
Implementations are already underway across major vendors and open source projects.
- Full Blog: https://www.khronos.org/blog/opencl-3.1-is-here?utm_medium=social&utm_source=mastodon&utm_campaign=OpenCL_3.1_is_here&utm_content=blog
- OpenCL specification GitHub
- Khronos Discord -
The management of #Forschungsdaten (research data) is becoming ever more important. Read in our current newsletter how professional research data management delivers not only efficiency but also traceability and transparency: https://tiny.badw.de/PSREV
It also covers #photonischeProzessoren (photonic processors), planning #Kernfusionstechnik (nuclear fusion technology) with #HPC, innovative search services, and more!
-
@hannorein That would be a bad idea especially if someone wants to run REBOUND on an #HPC cluster.
Dealing with simulations that create tons of files will put a lot of extra load on the shared file systems.
On your own little cluster, you probably wouldn't notice, but on the Alliance's national systems, on which thousands of users are working at any given time, this all stacks up. -
#Linux #HPC #Supercomputing All, if you're affected by copy fail (CVE-2026-31431) and cannot patch your system, apply the mitigations ASAP and reboot. It's a PITA for cluster admins, but it's better than the alternative. Even if someone chooses _NOT_ to run the exploit, any system binary they target will become, at minimum, corrupted.
-
What setups are others using for their #bioinformatics compute in a small team if they don't have access to a large institute #HPC? Local on-prem server, local data centre setup, cloud, something else? Would love to hear how others have set it up and what's worked well.
-
💽 Around €9.9 million for AI high-performance computing in North Rhine-Westphalia
🤖 Ina Brandes MdL, Minister for Culture and Science of the state of North Rhine-Westphalia, presented the funding decision for new high-performance computing capacity at the University of Cologne.
More:
➡️ https://uni.koeln/YFB23 -
Are you interested in #DH and #socialsciences ?
- Do you want to know how to use #hpc resources easily? ✅
- Is #FAIR and #opensource research also important to you? ✅
Join the next meeting of DHXpresso ☕ on May 8th!
@galaxyfreiburg will introduce @galaxyproject and @gtn and how it can be applied to the #humanities !
https://galaxyproject.org/events/2026-05-08-dhxpresso/
We are looking forward to seeing you there! 🤓
-
There's a surprising number of attendees here who are retired. What does that say about the dedication of the #HPC community? And what does it say about the actual goals of closed-door conferences like this?
-
WiFi is struggling to cope at Salishan #HPC conference - close to non functional. So lots of attendees turning on personal phone hotspots to get their laptops online. Fascinating to browse the list of personal hotspot names to see what name people use for their phone / hotspot.
-
IWOCL 2026 is next week!
Join the global OpenCL and SYCL community in Heilbronn, Germany (May 6–8) for the premier forum dedicated to open compute languages and heterogeneous platform programming. The program includes the latest technical talks, Khronos Working Group updates, application case studies, and ample opportunity to connect with peers across industry and academia.
Registration remains open: www.iwocl.org
See you there.
#IWOCL #OpenCL #SYCL #HPC #Heterogeneous #Compute -
An international team of physicists, including researchers from Jülich, used supercomputers to deliver a new high-precision calculation of the muon’s magnetic moment ⚡🧬
The result reduces theoretical uncertainty and strongly validates the Standard Model of particle physics.
The paper appeared in Nature.
Read more:
https://www.fz-juelich.de/en/news/archive/press-release/2026/muon
#FZJ #Supercomputing #HPC #ParticlePhysics #exa_JUPITER
Image credit: Peter Allen, Ryan Allen, Universität Wuppertal
-
Has anyone ever had #dsync make source files 0 length while synchronizing them? Or any idea what could cause that? Some other job removing them mid sync, and dsync touching the original?
#mpi #mpifileutils #hpc -
The countdown is on — IWOCL 2026 is just two weeks away.
Join the global OpenCL and SYCL community in Heilbronn, Germany (May 6–8) for the premier forum dedicated to open compute languages and heterogeneous platform programming. Expect the latest technical talks, Khronos Working Group updates, and ample opportunity to connect with peers across industry and academia.
Registration is open: www.iwocl.org
#IWOCL #OpenCL #SYCL #HPC #Khronos #HeterogeneousComputing -
🎉 Celebrating 30 years of HPC: where the #hardware sprints ahead like Usain Bolt on a caffeine binge, yet languages crawl slower than a snail stuck in molasses! 🐌💻 Our top-notch scientists have achieved the impossible: advancing #technology without advancing the tools to use it. Bravo! 🙄👏
https://chapel-lang.org/blog/posts/30years/ #HPC #30years #progress #software #advancement #snailpace #HackerNews #ngated -
My implementation of the Gauss-Seidel smoother using a diagonal (DIA) direct-access scheme in OpenFOAM, compared to the default LDU-based GS smoother, gives roughly a 50% improvement in cache misses and hits on a structured 3D cavity tutorial. Profiled using the amazing LIKWID profiler. Will share a deep technical report soon. Check it out and use: https://github.com/amartyadav/DIAGaussSeidel-Smoother-OpenFOAM
#hpc #scientificcomputing #computationalphysics #cfd #openfoam #likwid #softwaredevelopment #cpp
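For readers curious what "diagonal direct access" buys, here is a minimal pure-Python sketch of a Gauss-Seidel sweep over DIA-stored bands (an illustration of the storage idea only, not the OpenFOAM C++ plugin): each band is one contiguous array, so the sweep walks memory with unit stride instead of chasing index arrays.

```python
def gauss_seidel_dia(diag, lower, upper, b, x, sweeps=1):
    """In-place Gauss-Seidel sweeps for a tridiagonal A x = b.
    Bands are plain contiguous arrays: lower[i] multiplies x[i-1],
    upper[i] multiplies x[i+1]."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            s = b[i]
            if i > 0:
                s -= lower[i] * x[i - 1]
            if i < n - 1:
                s -= upper[i] * x[i + 1]
            x[i] = s / diag[i]
    return x

# 1D Poisson test problem: -u'' = 1 on (0,1), discretised as bands (-1, 2, -1)
n = 50
h2 = 1.0 / (n + 1) ** 2
diag, lower, upper = [2.0] * n, [-1.0] * n, [-1.0] * n
b = [h2] * n
x = gauss_seidel_dia(diag, lower, upper, b, [0.0] * n, sweeps=5000)
```

The exact solution of this test problem is u(ξ) = ξ(1−ξ)/2, so the converged x peaks near 0.125 at the midpoint.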
Released my DIA-format Gauss-Seidel smoother plugin for OpenFOAM v13. MIT licensed.
Replaces the default LDU smoother on structured hex meshes — DIA stores diagonal bands contiguously, reducing pointer indirection and DRAM pressure. Expecting 10–20% wall-clock gains and better cache utilisation based on standalone profiling. Full OpenFOAM benchmarks incoming.
https://github.com/amartyadav/DIAGaussSeidel-Smoother-OpenFOAM
#OpenFOAM #HPC #CFD #NumericalMethods #FOSS #computerscience #physics
-
The path toward sustainable AI lies in combining quantum computing with energy-efficient application design. https://www.japantimes.co.jp/commentary/2026/04/06/world/quantum-computing-ais-sustainability-problem/?utm_medium=Social&utm_source=mastodon #commentary #worldnews #ai #hpc #grok4 #google #un #sdgs #sqai #quantumcomputing #qubits #pasqal #electricity #powergrids #windfarms #energy
-
Keynote at IWOCL 2026: Paulius Velesko presents chipStar — compiling unmodified CUDA/HIP code into OpenCL & SPIR-V fat binaries that run on Intel, AMD, NVIDIA, ARM, and RISC-V hardware. No recompilation needed.
Join us at IWOCL 2026, May 6–8 in Heilbronn, Germany to hear more.
View the full program at: https://www.iwocl.org/iwocl-2026/conference-program/
#IWOCL2026 #OpenCL #SYCL #CUDA #HPC #HeterogeneousComputing #RISCV -
Today we are at the Julia4PDEs workshop, organized in collaboration with the @eScienceCenter for two days of talks about the various projects tackling partial differential equations in Julia.
-
🚀 Excited to collaborate with Ericsson to explore advanced AI and high-performance computing technologies for the evolution of #5G and future development of #6G networks 📡📶
Combining global telecom leadership with Jülich’s HPC expertise – including Europe’s #JUPITER⚡exascale system – we aim to develop more efficient, sustainable, and intelligent network technologies 🤖🌐
More: https://www.fz-juelich.de/en/news/archive/press-release/2026/mou-ericsson-fzj
#FZJ #HPC #exa_JUPITER #Collaboration #AI #Telecommunication
-
Price of #Nvidia's #VeraRubin #NVL72 racks skyrockets to $8.8M apiece, but #server makers' margins will be tight — Nvidia is moving closer to shipping entire full-scale systems
#Blackwell #NVL72 #rackscale systems cost $2.8M–$3.4M for an #AI training and #HPC NVL72 #GB200 and $6M–$6.5M for an AI inference NVL72 #GB300
Vera Rubin NVL72 #VR200 systems are currently quoted at $5M - $7M per unit.
Nvidia has never confirmed the list prices of its NVL72 or #NVL144 products.
https://www.tomshardware.com/tech-industry/artificial-intelligence/price-of-nvidias-vera-rubin-nvl72-racks-skyrockets-to-as-much-as-usd8-8-million-apiece-but-server-makers-margins-will-be-tight-nvidia-is-moving-closer-to-shipping-entire-full-scale-systems -
My work for the month is to optimise the matrix addressing scheme of OpenFOAM to reduce cache misses. The initial idea is to replace the LDU matrix addressing scheme with a diagonal (DIA) matrix addressing scheme using multiple arrays for structured meshes. I will check it on a simple 2D Poisson equation with a Gauss-Seidel solver to measure performance benefits, and will eventually introduce a new solver as a plugin if the benefits are noticeable.
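As a sketch of the contrast (simplified pure Python; OpenFOAM's real lduMatrix is more involved, and all names here are illustrative), the same tridiagonal matrix-vector product can be formed through indirect owner/neighbour addressing or through contiguous bands:

```python
def spmv_ldu(diag, lower, upper, owner, neigh, x):
    """LDU-style: off-diagonal coefficients addressed through explicit
    owner/neighbour index arrays (indirect, gather/scatter access)."""
    y = [d * xi for d, xi in zip(diag, x)]
    for f in range(len(owner)):
        o, nb = owner[f], neigh[f]
        y[o] += upper[f] * x[nb]
        y[nb] += lower[f] * x[o]
    return y

def spmv_dia(diag, sub, sup, x):
    """DIA-style: each band is a contiguous array, accessed with unit stride."""
    n = len(x)
    y = [diag[i] * x[i] for i in range(n)]
    for i in range(n - 1):
        y[i] += sup[i] * x[i + 1]
        y[i + 1] += sub[i] * x[i]
    return y

# The same tridiagonal matrix expressed in both layouts gives the same product:
n = 6
x = [float(i) for i in range(n)]
diag = [2.0] * n
lower, upper = [-1.0] * (n - 1), [-1.0] * (n - 1)
owner = list(range(n - 1))
neigh = [i + 1 for i in owner]
assert spmv_ldu(diag, lower, upper, owner, neigh, x) == spmv_dia(diag, lower, upper, x)
```

The arithmetic is identical; the difference is purely in the memory access pattern, which is exactly what a cache-miss profile would expose.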
Wish me luck. -
sn-news: #open #electronics #semiconductors #processors #hpc OpenTitan shipping in production https://opensource.googleblog.com/2026/03/opentitan-shipping-in-production.html
-
The IWOCL 2026 program is live!
The 14th International Workshop on OpenCL and SYCL is coming to Heilbronn, Germany this May 6–8, and the full conference program has just been published at iwocl.org.
This year's lineup is packed. Whether you're working on heterogeneous HPC, GPU portability, or the future of open compute standards, this is the event for you.
Explore the program at https://www.iwocl.org/iwocl-2026/conference-program/
#SYCL #OpenCL #Khronos #SPIRV #IWOCL2026 #HighPerformanceComputing #HPC #GPU -
#MIETMODELL: a 200 kW #Schnelllader (fast charger) to take with you! 💪🏽
Need an #HPC (here: a high-power charger) in your front garden for just a month? Schall-E will rent you one ready to connect and take care of transport and the technical setup!
@StrombockNino
-
New supercomputer in the search for nuclear fusion. But will it run Doom? ;-)
#HPC #AI #UKAEA #science #supercomputer #energy #sunrise
https://www.ukaea.org/news/45-million-for-uks-first-ai-supercomputer-to-accelerate-fusion-energy/
-
Deadline extended until March 16th!
- Final weekend to submit your feedback -
Can you spare a minute for a very short survey? If you have ever used our OpenMP API Examples book, we are asking for your feedback on how we can improve it. The survey is short and quick.
Survey: https://link.openmp.org/4
(We will *not* add you to our contact list or sell your information)
#openmp #parallel #programming #HPC -
🪦Why Sierra the Supercomputer Had to Die
「 Neither the IBM nor the Nvidia components are still in production, and IBM no longer supports the version of the operating system—Red Hat Enterprise Linux—that Sierra used. “It's really about resources,” says Ann Dunkin, the former chief information officer of the US Department of Energy 」
https://www.wired.com/story/why-sierra-the-supercomputer-had-to-die/
-
The coexistence of #gpu and #cpu, the integration of innovative accelerators, and #quantencomputer (quantum computers): complexity in #HPC and #Supercomputer systems is growing. That is why a test system called Blue Cubs has been installed at the LRZ, on which researchers can try out and adapt their #Codes. This lets them prepare for the next LRZ supercomputer, Blue Lion; construction work for the new supply infrastructure is already underway in the computing building: https://www.lrz.de/news/detail/warmlaufen-fuer-blue-lion