#mi300x - Public Fediverse posts
Live and recent posts from across the Fediverse tagged #mi300x, aggregated by home.social.
-
#AMD threatens to go medieval on Nvidia with #Epyc and #Instinct: What we know so far
AMD teased its next-generation #AI accelerators at #CES2026, with CEO Lisa Su boasting the #MI500-series will deliver a 1,000x uplift in performance over its two-year-old #MI300X #GPU.
If AMD wants to stay competitive with Nvidia, the MI500-series will need to deliver performance on par with, if not better than, Rubin Ultra Kyber racks.
AMD joins the #rackscale race with #MI455X #Helios racks.
https://www.theregister.com/2026/01/07/mi500x_amd_ai/ -
AMD and OpenAI team up for massive 6 gigawatt GPU partnership
-
Battle of the giants: Nvidia #Blackwell B200 takes the lead in FluidX3D CFD performance
#Nvidia #B200 just launched, and I'm one of the first people to benchmark 8x B200 via Shadeform, in a WhiteFiber server with 2x #Intel #Xeon6 6960P 72-core CPUs.
8x Nvidia B200 go head-to-head with 8x #AMD #MI300X in the #FluidX3D #CFD benchmark, winning overall (with FP16S storage) at 219300 MLUPs/s (~17 TB/s combined VRAM bandwidth), but losing in FP32 & FP16C storage. 8x MI300X achieve 204924 MLUPs/s.
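As a sanity check on these figures, MLUPs/s can be converted into effective memory bandwidth from the lattice's per-cell traffic. A minimal sketch, assuming FluidX3D's default D3Q19 velocity set and that each cell update streams all 19 distribution functions in and out once (the per-value byte sizes for the storage modes are my assumption, not stated in the post):

```python
# Hedged sketch: turn FluidX3D's MLUPs/s (million lattice-cell updates
# per second) into effective memory bandwidth. Assumes a D3Q19 lattice
# where each update reads and writes all 19 distribution functions.

Q = 19  # D3Q19 velocity set (assumed FluidX3D default)

def effective_bandwidth_tb_s(mlups: float, bytes_per_value: float) -> float:
    """Effective VRAM bandwidth in TB/s for a given update rate."""
    bytes_per_update = Q * 2 * bytes_per_value  # one read + one write per direction
    return mlups * 1e6 * bytes_per_update / 1e12

# 8x B200 at 219300 MLUPs/s with FP16S storage (2 bytes per value):
print(round(effective_bandwidth_tb_s(219300, 2), 1))  # ~16.7, matching the ~17 TB/s quoted
```

With FP32 storage (4 bytes per value) the same update rate would need twice the bandwidth, which is why the compressed FP16S/FP16C modes reach higher MLUPs/s on the same hardware.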
-
Hot Aisle's 8x AMD #MI300X server is the fastest computer I've ever tested in #FluidX3D #CFD, achieving a peak #LBM performance of 205 GLUPs/s and a combined VRAM bandwidth of 23 TB/s.
The #RTX 5090 looks like a toy in comparison. MI300X beats even Nvidia's GH200 94GB. This marks a very fascinating inflection point in #GPGPU: #CUDA is no longer the performance leader.
You need a cross-vendor language like #OpenCL to leverage its power. FluidX3D on #GitHub: https://github.com/ProjectPhysX/FluidX3D
-
Sizing up #MI300A's #GPU
It's well ahead of #Nvidia's #H100 PCIe for just about every major category of 32- or 64-bit operations. MI300A can achieve 113.2 TFLOPS of #FP32 throughput, with each FMA counting as two floating-point operations. For comparison, the H100 PCIe achieved 49.3 TFLOPS in the same test.
#AMD cut down #MI300X's GPU to create MI300A. 24 #Zen4 cores is a lot of #CPU power, and occupies one quadrant of the MI300 chip. But MI300's main attraction is still the GPU.
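A quick note on the FMA accounting above: since one fused multiply-add counts as two floating-point operations, the quoted TFLOPS figures are exactly twice the achieved FMA rate. A trivial sketch of the arithmetic (the FMA rates here are just the post's TFLOPS numbers divided by two):

```python
# Sketch: FLOPS accounting where each fused multiply-add (a*b + c)
# counts as two floating-point operations.

def tflops_from_fma_rate(fma_per_second: float) -> float:
    """FP32 TFLOPS, counting each FMA as two ops."""
    return fma_per_second * 2 / 1e12

print(tflops_from_fma_rate(56.60e12))  # MI300A: 113.2 TFLOPS
print(tflops_from_fma_rate(24.65e12))  # H100 PCIe: 49.3 TFLOPS
```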
https://chipsandcheese.com/p/sizing-up-mi300as-gpu -
#AMD posts first Instinct #MI300X #MLPerf #benchmark results - roughly in line with #Nvidia #H100 (but only in #Llama2 70B).
Based on data AMD shared, 8x MI300X processors are only slightly slower (23,512 tokens/s) than 8x H100 SXM3 (24,323 tokens/s), which can probably be called 'competitive' given how well Nvidia's software stack is optimized for popular #LLM like Llama 2. The AMD MI300X system is slightly faster than the H100 machine in the more real-world server benchmarks.
https://www.tomshardware.com/tech-industry/artificial-intelligence/amd-posts-first-instinct-mi300x-mlperf-benchmark-results-roughly-in-line-with-nvidia-h100-performance -
#AMD Has a #GPU to Rival #Nvidia's #H100
#MI300X is a GPU-only version of the previously announced #MI300A supercomputing chip, which includes a #CPU and #GPU. The MI300A will be in El Capitan, a supercomputer coming next year to #LawrenceLivermore #NationalLaboratory. El Capitan is expected to surpass 2 exaflops of performance. The MI300X has 192GB of #HBM3, which Su said is 2.4 times the memory density of Nvidia's H100. The SXM and PCIe versions of H100 have 80GB of HBM3.
https://www.hpcwire.com/2023/06/13/amd-has-a-gpu-to-rival-nvidias-h100/ -