#mi300a — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #mi300a, aggregated by home.social.
-
Sizing up #MI300A’s #GPU
It’s well ahead of #Nvidia’s #H100 PCIe for just about every major category of 32- or 64-bit operations. MI300A can achieve 113.2 TFLOPS of #FP32 throughput, with each FMA counting as two floating point operations. For comparison, the H100 PCIe achieved 49.3 TFLOPS in the same test.
#AMD cut down #MI300X’s GPU to create the MI300A. Its 24 #Zen4 cores are a lot of #CPU power and occupy one quadrant of the MI300 chip, but MI300’s main attraction is still the GPU.
https://chipsandcheese.com/p/sizing-up-mi300as-gpu -
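A rough sketch of that accounting: a throughput microbenchmark counts completed FMAs and multiplies by two before dividing by elapsed time. The fma_count and elapsed_s values below are hypothetical placeholders chosen only to land near the quoted figure, not Chips and Cheese’s measurements.

```cpp
// Sketch of FP32 throughput accounting: each fused multiply-add (FMA)
// counts as two floating point operations (one multiply plus one add).
#include <cstdio>

double tflops_from_fmas(double fma_count, double elapsed_s) {
    double flops = 2.0 * fma_count;   // FMA convention: 2 ops per FMA
    return flops / elapsed_s / 1e12;  // scale to tera-FLOPS
}

int main() {
    double fmas = 56.6e12;  // hypothetical FMAs retired by a GPU kernel
    double secs = 1.0;      // hypothetical kernel wall-clock time
    printf("%.1f TFLOPS FP32\n", tflops_from_fmas(fmas, secs));  // ~113.2
    return 0;
}
```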
#AMD #Instinct #MI300A #APU With #CDNA3 #GPU, #Zen4 #CPU & #UnifiedMemory Offers Up To 4x Speedup Versus Discrete GPUs In #HPC
Since the AMD Instinct MI300A accelerator uses a unified #HBM interface, it eliminates the need for data replication and does not require a #programming distinction between the host and the device memory spaces.
https://wccftech.com/amd-instinct-mi300a-apu-cdna-3-gpu-zen-4-cpu-unified-memory-up-to-4x-speedup-versus-discrete-gpus/ -
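A minimal HIP sketch of what that unified memory space means in practice, assuming a ROCm install and a device with unified/managed memory support: the CPU and GPU touch the same allocation, with no hipMemcpy staging between separate host and device copies. hipMallocManaged is used here as the portable route to a shared buffer; on MI300A the same pattern is reported to work with ordinary host allocations.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void scale(float* x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;                               // GPU writes the shared buffer
}

int main() {
    const int n = 1 << 20;
    float* x = nullptr;
    hipMallocManaged((void**)&x, n * sizeof(float));    // one allocation, one address space
    for (int i = 0; i < n; ++i) x[i] = 1.0f;            // CPU initializes it directly
    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);        // GPU reads/writes the same pointer
    hipDeviceSynchronize();
    printf("x[0] = %.1f\n", x[0]);                      // CPU reads the result, no copy back
    hipFree(x);
    return 0;
}
```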
#AMD bets demand for its #MI300 accelerator will balance dips across other product lines
CEO #LisaSu boasted that AMD now expects #datacenter GPUs, including the #MI300A, to see consistent sequential growth over the next several quarters, driving revenues of $3.5 billion in 2024 alone. The MI300 represents AMD’s first compelling contender in the #AI arena, with its previous-gen #MI200 parts primarily designed for #highperformancecomputing (#hpc) deployments.
https://www.theregister.com/2024/01/31/amd_q4_2023/ #GPU #APU -
#AMD Has a #GPU to Rival #Nvidia’s #H100
#MI300X is a GPU-only version of the previously announced #MI300A supercomputing chip, which includes a #CPU and #GPU. The MI300A will be in El Capitan, a supercomputer coming next year to #LawrenceLivermore #NationalLaboratory. El Capitan is expected to surpass 2 exaflops of performance. The MI300X has 192GB of #HBM3, which Su said was 2.4 times the memory density of Nvidia’s H100. The SXM and PCIe versions of H100 have 80GB of HBM3.
https://www.hpcwire.com/2023/06/13/amd-has-a-gpu-to-rival-nvidias-h100/ -
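The 2.4× figure is just the capacity ratio between the two parts:

\[
\frac{192\,\text{GB}}{80\,\text{GB}} = 2.4
\]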
#AMD #Instinct #MI300 is THE Chance to Chip into #NVIDIA #AI Share
NVIDIA is facing very long lead times for its #H100 and #A100; if you want NVIDIA for AI and have not ordered, don't expect it before 2024. For a traditional #GPU there is the #MI300X, the GPU-only part: all four center tiles are GPU, and with 192GB #HBM it can simply fit more onto a single GPU than NVIDIA. The #MI300A has 24 #Zen4 CPU cores, #CDNA3 GPU cores, and 128GB #HBM3; this is the #APU deployed in the El Capitan 2+ exaflop #supercomputer.
https://www.servethehome.com/amd-instinct-mi300-is-the-chance-to-chip-into-nvidia-ai-share/
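For reference, a short HIP sketch (assuming a ROCm install; field names follow the HIP runtime API, and the exact values reported depend on the part and system) that prints what one of these accelerators exposes to software: device name, HBM capacity, and compute unit count.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    hipDeviceProp_t prop;
    hipGetDeviceProperties(&prop, 0);                       // query GPU/APU device 0
    printf("name: %s\n", prop.name);
    printf("HBM : %.0f GB\n", prop.totalGlobalMem / 1e9);   // e.g. ~128 GB on MI300A
    printf("CUs : %d\n", prop.multiProcessorCount);         // CDNA3 compute units
    return 0;
}
```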