#parallelcomputing — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #parallelcomputing, aggregated by home.social.
-
We had a very productive F2F meeting last week at the Argonne Leadership Computing Facility, with many thanks to our great hosts at the Argonne National Lab. The main objective was to feature-freeze OpenMP API version 6.1 and we accomplished that mission!
-
The OpenMP Architecture Review Board has formed a #Python Language Subcommittee — a significant step toward bringing standardized shared-memory parallelism to the world's most widely used programming language.
The subcommittee's goal is to define #OpenMP directive support for Python and include it in the OpenMP API 7.0 specification, targeted for 2029.
https://www.openmp.org/2026/python-subcomittee/
#HPC #parallelcomputing -
Today I introduced a much-needed feature to #GPUSPH.
Our code supports multi-GPU and even multi-node, so in general if you have a large simulation you'll want to distribute it over all your GPUs using our internal support for it.
However, in some cases you need to run a battery of simulations whose problem size isn't large enough to justify more than a couple of GPUs each.
In that case, rather than running the simulations in your set serially (one after the other) using all GPUs for each, you'll want to run them in parallel, potentially each on a single GPU.
The idea is to find the next available (set of) GPU(s) and launch a simulation on them while free sets remain, then wait until a “slot” frees up and start the next one(s) as slots become free.
Until now, we've been doing this manually by partitioning the set of simulations and starting them in different shells.
There is actually a very powerful command-line tool for this: GNU Parallel. As with all powerful tools, however, it is somewhat cumbersome to configure to get the intended result, and after Doing It Right™ one must remember the invocation magic …
So today I found some time to write a wrapper around GNU Parallel that basically (1) enumerates the available GPUs and (2) appends the appropriate --device command-line option to the invocation of GPUSPH, based on the slot number.
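The slot-based scheduling idea behind such a wrapper can be sketched in plain Python (a hypothetical illustration of the pattern, not the actual GPUSPH wrapper): a pool of GPU ids acts as the “slots”, each job grabs a free GPU, runs, and returns it, so new jobs start as slots free up.

```python
import queue
from concurrent.futures import ThreadPoolExecutor

def run_batch(commands, gpu_ids, launch):
    # The queue of free GPU ids plays the role of GNU Parallel's job slots.
    slots = queue.Queue()
    for g in gpu_ids:
        slots.put(g)

    def worker(cmd):
        gpu = slots.get()            # block until a GPU slot is free
        try:
            # launch() would typically run the simulation binary with the
            # slot's GPU, e.g. subprocess.run([cmd, "--device", str(gpu)])
            return launch(cmd, gpu)
        finally:
            slots.put(gpu)           # free the slot for the next simulation

    # At most one in-flight job per GPU; results come back in input order.
    with ThreadPoolExecutor(max_workers=len(gpu_ids)) as ex:
        return list(ex.map(worker, commands))
```

GNU Parallel gets the same effect more tersely with its job-slot replacement string, which is what makes a wrapper that maps slot numbers to `--device` options feasible.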
#GPGPU #ParallelComputing #DistributedComputing #GNUParallel
-
We are pleased to once again offer exciting online MATLAB courses in the GWDG Academy this year, taught by MathWorks staff:
💠 Parallel Computing with MATLAB
Date: 17.11.2025, 10:00 – 13:00
💠 Demo Session: Scaling up MATLAB to the GWDG Scientific Compute Cluster
Date: 19.11.2025, 15:00 – 16:30
💠 Introduction to Research Software Development with MATLAB
Date: 20.11.2025, 09:00 – 12:00
💠 Connecting MATLAB with Python and other Open Source Tools
Date: 20.11.2025, 14:00 – 17:00
The course dates are complemented by an online office hour on 21.11.2025, 14:00 – 15:00, during which questions on the course topics can be asked and discussed in depth, to foster exchange between participants and instructors.
#gwdg #academy #gwdgacademy #kurs #matlab #parallelcomputing #göttingen #unigöttingen #mathswork
-
So What is a Supercomputer Anyway? - Over the decades there have been many denominations coined to classify computer sy... - https://hackaday.com/2025/03/19/so-what-is-a-supercomputer-anyway/ #parallelcomputing #computerhacks #supercomputer #featured #history #illiac #eniac
-
🚀 Parallel Python Made Easy! 🐍
We're hosting a hands-on tutorial on PyOMP, a system bringing OpenMP parallelism to Python! By combining OpenMP directives (as strings) with Numba's JIT compiler, PyOMP taps into LLVM's OpenMP support, delivering C-like performance in Python's simplicity.
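As a stdlib-only analogue of the pattern PyOMP targets (this is an illustration of a parallel-for reduction, not PyOMP's actual API, which expresses the loop with an OpenMP directive string inside a Numba-jitted function):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_squares(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    # Split the iteration space into contiguous chunks, one per worker,
    # then reduce the partial results -- the same shape as an OpenMP
    # "parallel for reduction(+:s)" over i in range(n).
    step = -(-n // workers)  # ceiling division
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(partial_sum_squares, chunks))
```

Note that this sketch shows the structure, not the speed: CPython threads contend on the interpreter lock, whereas PyOMP compiles the loop via Numba and LLVM so the OpenMP threads run native code.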
Our participants are mastering this game-changing tool to supercharge their workflows.
Stay tuned for updates!
-
🚀 Exciting News: OpenMP 6.0 Public Comment Draft Released! 🎉
This draft contains the following groundbreaking features:
🔹 Improved Tasking Support
🔹 Enhanced Device Support
🔹 Advanced C & C++ Support
🔹 Extended Loop Transformations
🔹 Enhanced Memory Allocators
🔹 Memory Spaces API Routines
Explore the draft and help shape the future of parallel programming. Your feedback is invaluable!
👉 https://openmp.org/wp-content/uploads/openmp-TR13.pdf
#OpenMP #HPC #Embedded #ParallelComputing #PublicComment #APIRelease
-
Independent verification of results is an important part of the #scientific process. However - in #physics at least - #replication and #verification studies rarely seem to be published. Despite this, I decided to attempt to verify the results of a groundbreaking Nature Physics paper from 2012, in which the authors describe the first dynamical #quantum #simulator. You can read the fruits of my labour in my #arxiv preprint: "Classical verification of a quantum simulator: local relaxation of a 1D Bose gas". I hope you find it interesting.
https://scirate.com/arxiv/2401.05301
#ScientificProcess #QuantumSimulator #QuantumSimulation #QuantumAdvantage #science #ClassicalVerification #ComputationalPhysics #ParallelComputing #HPC #HighPerformanceComputing #supercomputer #TensorNetworks #MatrixProductStates #TEBD