home.social

#rocm — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #rocm, aggregated by home.social.

  1. So I am looking for a European company creating AI machines, ideally without Windows. Since I want to work on a fully OSS AI stack, I need #ROCm, which currently means AMD 395 AI cards. #Tuxedo is out since they only have the regular Ryzen and NVidia. The best option I found is #Framework and their PC, but they are US-based.

  2. Monthly reminder to fuck off about #ROCm.

    It will take combined effort between #AMD, #Intel and #Qualcomm to offer a stack that can compete against #NVIDIA and #Apple.

    Until then, happy to know software only runs on either Mac or an RTX.

  3. One of the main reasons I want to use 26.04 is for the desktop. 😅 All that package handling. 😇

    I didn't need more reasons for the release. 😁

    phoronix.com/review/ubuntu-260

  4. #AMD #ROCm 7.12 Tech Preview Brings More Consumer #APU & #GPU Support
    The exciting part of the ROCm 7.12 Tech Preview is its support for more consumer hardware. The #RyzenAI400 series is now supported along with the #Ryzen 200 series, support for the old Instinct #MI100 is restored, and there is now official support for #Radeon #RX7600 and #RX7700 XE graphics cards. The Radeon RX 7600 is nearly three years old, so this is long overdue, but it's nice to finally see it officially listed for ROCm support.
    phoronix.com/news/AMD-ROCm-7.1

  5. Once I got hardware acceleration working on my AI mini workstation from HP, my next goal was to make it easier to use. In this blog post, you can read about my initial experiments on @fedora Linux.

    peter.czanik.hu/posts/new-toy-

  6. Ever since I bought my mini from HP, my goal has been to run hardware-accelerated workloads in an environment of my own. Read more to learn how things turned out on @fedora !

    peter.czanik.hu/posts/new-toy-

  7. I've never seen more hangs with ROCm 7 than at the Tyburn Tree.

    Just tried to use it on a 760M with 32 GB of RAM. I had to resort to Vulkan.

    #AMD #AI #Troubleshooting #ROCm #ROCm7

  8. Comrades, all of #ROCm/HIP 7.1 has just made it into #Guix proper, thanks to the hard work of David Elsing—we’re almost done with this milestone 👇
    codeberg.org/guix/guix/milesto

    Just 2 years after AMD contributed the initial packages to the Guix-HPC channel:
    hpc.guix.info/blog/2024/01/hip

    #HPC

  9. The failure of the #AI community to properly support #AMD GPUs is disappointing. It is slowly improving, but very slowly and from a poor starting point.

    It has never been a good idea to cherish a monopoly.

    #ROCm #GPU

  10. Do I upgrade my daily driver (which is currently on #Ubuntu Noble) to a non-LTS version now so I can run #Podman containers with #ROCm, or do I wait until April when the next LTS comes out?

  11. Triton, Flash-Attention, Sage-Attention, and bitsandbytes with ROCm 7 on Windows

    At the end of January 2026, triton-windows 3.6.0.post25 was released, which makes it possible to use flash-attention, sage-attention (v1), and other Triton-based libraries on AMD cards with rocWMMA support on Windows. Also, even though the official bitsandbytes repository has not yet merged the PR for ROCm 7 support, it can still be built with a few small code changes. I have already made these changes in my fork. In this article, I explain how to install all of this, and as an example we run a couple of tests in ComfyUI, including with the fresh LTX-2, and build a QLoRA adapter for the Gemma 3 model.

    habr.com/ru/articles/987672/

    #triton #amd #rx7900 #sageattention #flashattention #bitsandbytes #rocm #rocm7 #comfyui #ltx2

  12. ~~Bathing~~ Testing of the "red" ~~horse~~ GPU contender from AMD with the "AI" prefix: the RADEON AI PRO R9700

    Benchmarking NVIDIA cards one after another gets tedious; the only differences within the latest generation are the compute power of the Blackwell-family processors, the memory capacity, and the bus width. Looking at what the competitors offer, especially when they loudly brand it "AI", is far more interesting. We put the Radeon AI PRO R9700 with 32 GB of memory through real workloads: LLMs, image and video generation, and 3D rendering, and compared it with NVIDIA.

    habr.com/ru/companies/hostkey/

    #hostkey #Radeon_AI_PRO_R9700 #ROCm #LLM_inference #generative_AI #GPU #NVIDIA_RTX #Ollama #RDNA_4 #GDDR6

  13. This was sort of interesting:

    The "Famous" Claude Code Has Managed to Port NVIDIA's CUDA Backend to ROCm in Just 30 Minutes, and Folks Are Calling It the End of the CUDA Moat (wccftech)

    One of these days it will actually get it letter-perfect, but I suspect it may be a year or two before AI and CUDA codebases become more homogeneous. #rocm

  14. 🤬

    #ROCm decided to fucking crash the computer and cost me an entire afternoon of progress.

    I'll never recommend AMD shit for #AI, ever again, until they gift me a fucking server to vent out.

    👉 It's either #NVIDIA or #AppleSilicon for serious business.

    #ArtificialIntelligence #GeForce #RTX #Mac #iMac #MacMini #MacBook #MacBookPro #AMD #Radeon #Rebrandeon

  15. So... I managed to use #ROCm on an 8845HS in #Linux using #Jan:

    1. Download Jan.
    2. Using Flatseal, enable "Graphics acceleration" and set the env variable "HSA_OVERRIDE_GFX_VERSION=11.0.0".
    3. Download the latest lemonade-sdk/llamacpp-rocm release for "Ubuntu gfx110X".
    4. Go into Jan's backends directory and decompress the file.
    5. Ensure the path is `bXXXX/linux-rocm7-gfx110x-x64/build/bin` with all libraries present, and add the execute bit to the binaries (`chmod +x`).
    6. Restart Jan, profit.

    #AMD #OSS #AI #LLM #Flatpak #Radeon
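    The override in step 2 can be sketched as a tiny helper. Only the variable name and value come from the post (it makes ROCm treat the gfx1103 iGPU as gfx1100); the function itself is hypothetical, not part of Jan or ROCm:

    ```python
    # Hypothetical sketch: set the HSA override before any ROCm library loads.
    import os

    def apply_hsa_override(version: str = "11.0.0") -> str:
        """Report the iGPU to ROCm as the given gfx version (11.0.0 -> gfx1100)."""
        os.environ["HSA_OVERRIDE_GFX_VERSION"] = version
        return os.environ["HSA_OVERRIDE_GFX_VERSION"]

    print(apply_hsa_override())  # prints 11.0.0
    ```

    Note that the variable must be in the environment before the ROCm runtime initializes, which is why Flatseal (step 2) sets it for the whole Flatpak sandbox.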

  16. With the cluster and storage down for maintenance this week, I thought about doing some #pytorch development locally on my laptop. No dedicated GPU, but a recent #intel Core Ultra 2 CPU. So first I went for the CPU-only packages.

    Only yesterday I realized that the Windows task manager shows something called #npu in its charts. Fair enough, let's see whether pytorch can use it ...

    I have to say, the #xpu pytorch integration has come a long way. In my case, I could use it as a drop-in replacement for #cuda / #rocm pytorch, albeit with the obvious performance difference between HPC GPUs and a laptop iGPU.

    docs.pytorch.org/docs/stable/n

    Always nice to have alternatives. Especially for basic testing I don't need big (expensive) GPUs.
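    The drop-in pattern above boils down to a fallback chain. Here it is as a plain function (a hypothetical helper, not a PyTorch API); the commented PyTorch usage assumes a recent build where `torch.xpu` exists:

    ```python
    # Backend-agnostic device pick: "cuda" covers both NVIDIA (CUDA) and AMD
    # (ROCm) builds of PyTorch, "xpu" covers Intel GPUs/iGPUs, "cpu" is the fallback.
    def pick_device(cuda_ok: bool, xpu_ok: bool) -> str:
        if cuda_ok:
            return "cuda"
        if xpu_ok:
            return "xpu"
        return "cpu"

    # With PyTorch this becomes:
    #   import torch
    #   device = pick_device(torch.cuda.is_available(),
    #                        hasattr(torch, "xpu") and torch.xpu.is_available())
    #   x = torch.randn(4, 4, device=device)

    print(pick_device(False, True))   # prints xpu
    print(pick_device(False, False))  # prints cpu
    ```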

  17. #AMD Just Showed Off Its Own DGX Spark Rival, The Ryzen #AIHalo: Full #ROCm Support, Day-0 Support For Leading #AI Models, Available In Q2
    The AMD Ryzen AI MAX family, codenamed #StrixHalo, has seen adoption in every consumer PC segment, from laptops to handhelds and Mini PCs. These high-performance, premium SoCs offer impressive performance thanks to the #Zen5 #CPU, #RDNA 3.5 #GPU, and #XDNA 2 #NPU architectures. AMD has now developed its own Mini PC featuring these chips, called the #RyzenAIHalo.
    wccftech.com/amd-showed-off-it

  18. AMD just reinvented the wheel with a #GPU #debugger that is as useful as a screen door on a submarine. 🚢🔧 After years of pondering why GPUs don't have CPU-like debuggers, someone invented a tool that almost works... if you're in the #ROCm club. 🤦‍♂️🎉 Good luck debugging outside that sandbox, folks!
    thegeeko.me/blog/amd-gpu-debug #AMD #Innovation #TechNews #DebuggingTools #HackerNews #ngated

  20. #ZLUDA 5 Released With An Offline #Compiler For #CUDA On Non-NVIDIA #GPUs
    #ZLUDA5 introduces "zoc" as an offline compiler for ZLUDA. #Zoc takes an NVIDIA PTX file as input and outputs the #LLVM intermediate representation (IR) and the #RDNA assembly for the GPU in use, generated in turn by the #ROCm compiler.
    phoronix.com/news/ZLUDA-5-Rele

  21. Is #ROCm getting better? Yes.

    Will you still use #CUDA? Yes.

    youtube.com/watch?v=wCBLMXgk3N

    What #AMD should focus on is bringing stable ROCm support to all of their SKUs on all platforms. Currently that isn't possible, which is frustrating given that their cards have more memory than #RTX cards at the same price.

    #AI #LLM #OLlama #Llama #NVIDIA #GeForce #ArtificialIntelligence #OpenCompute #GPUOpen #Computer #Computers #Technology #PC #PCHardware #Hardware #GPU #dGPU #Laptop #Laptops #StrixHalo #Radeon

  26. So, good news: ROCm 6.3.4 and PyTorch 2.4.0 seem stable enough with gfx1103 if I use the HSA override 11.0.0, with the latest firmware blobs and kernel 6.13.10 on Fedora 41.

    In your Dockerfile, build your AI app from:
    ```
    FROM rocm/pytorch:rocm6.3.4_ubuntu24.04_py3.12_pytorch_release_2.4.0
    ```

    #rocm #pytorch #amdgpu #igpu #Radeon780M
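    Since HSA_OVERRIDE_GFX_VERSION must be in the environment before the ROCm runtime initializes, it is easiest to pass it at `docker run` time along with the standard ROCm device nodes. A hypothetical helper that builds those flags (the device paths are the usual ones a ROCm container needs; only the override value comes from the post):

    ```python
    # Assemble docker-run flags for running the rocm/pytorch image on a gfx1103 iGPU.
    def rocm_run_args(gfx_version: str = "11.0.0") -> list[str]:
        return [
            "-e", f"HSA_OVERRIDE_GFX_VERSION={gfx_version}",
            "--device", "/dev/kfd",   # ROCm compute interface
            "--device", "/dev/dri",   # GPU render nodes
        ]

    print(" ".join(rocm_run_args()))
    # prints: -e HSA_OVERRIDE_GFX_VERSION=11.0.0 --device /dev/kfd --device /dev/dri
    ```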

  30. It seems that #Vulkan could be the real alternative for running #AI on GPUs or CPUs of any brand, without necessarily having to rely on #CUDA or #AMD's #ROCm. I thought #SYCL was the alternative. This might finally free us from the #Nvidia monopoly.
    #Khronos