home.social

#mi300a — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #mi300a, aggregated by home.social.

  1. Sizing up #MI300A’s #GPU
    It’s well ahead of #Nvidia’s #H100 PCIe in just about every major category of 32- or 64-bit operations. MI300A can achieve 113.2 TFLOPS of #FP32 throughput, with each FMA counting as two floating-point operations. For comparison, the H100 PCIe achieved 49.3 TFLOPS in the same test.
    #AMD cut down #MI300X’s GPU to create the MI300A. 24 #Zen4 cores is a lot of #CPU power and occupies one quadrant of the MI300 chip, but MI300’s main attraction is still the GPU.
    chipsandcheese.com/p/sizing-up
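The FLOPS accounting in the excerpt above (each FMA counted as two floating-point operations) reduces to a simple back-of-the-envelope formula. The lane count and clock below are illustrative assumptions, not figures taken from the post:

```python
# Peak-throughput sketch: an FMA instruction (a * b + c) retires one
# multiply and one add, so it counts as two floating-point operations.
def peak_tflops(fp32_lanes: int, clock_ghz: float) -> float:
    flops_per_cycle = fp32_lanes * 2          # each lane does one FMA = 2 FLOPs
    return flops_per_cycle * clock_ghz / 1e3  # GFLOPS -> TFLOPS

# Hypothetical configuration for illustration (not a vendor spec sheet):
print(peak_tflops(fp32_lanes=29184, clock_ghz=2.1))  # theoretical peak, ~122.6 TFLOPS
```

Measured results such as the 113.2 TFLOPS quoted above would naturally land somewhat below a theoretical peak computed this way.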

  2. #AMD #Instinct #MI300A #APU With #CDNA3 #GPU, #Zen4 #CPU & #UnifiedMemory Offers Up To 4x Speedup Versus Discrete GPUs In #HPC
    Since the AMD Instinct MI300A accelerator uses a unified #HBM interface, it eliminates the need for data replication and does not require a #programming distinction between the host and the device memory spaces.
    wccftech.com/amd-instinct-mi30

  3. #AMD bets demand for its #MI300 accelerator will balance dips across other product lines
    CEO #LisaSu boasted that AMD now predicts #datacenter GPUs – including the #MI300A – will see consistent sequential growth over the next several quarters, driving revenues of $3.5 billion in 2024 alone. The MI300 represents AMD’s first compelling contender in the #AI arena; its previous-gen #MI200 parts were primarily designed for #highperformancecomputing (#hpc) deployments.
    theregister.com/2024/01/31/amd #GPU #APU

  4. #AMD Has a #GPU to Rival #Nvidia’s #H100
    #MI300X is a GPU-only version of the previously announced #MI300A supercomputing chip, which includes both a #CPU and #GPU. The MI300A will power El Capitan, a supercomputer coming next year to #LawrenceLivermore #NationalLaboratory. El Capitan is expected to surpass 2 exaflops of performance. The MI300X has 192GB of #HBM3, which Su said is 2.4 times the memory density of Nvidia’s H100. The SXM and PCIe versions of the H100 have 80GB of HBM3.
    hpcwire.com/2023/06/13/amd-has
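The 2.4x memory-density figure in the excerpt is straightforward arithmetic on the two capacities quoted in the post; a quick sanity check:

```python
# Memory capacities quoted in the post above.
mi300x_hbm3_gb = 192  # MI300X: 192 GB of HBM3
h100_hbm3_gb = 80     # H100 SXM/PCIe: 80 GB of HBM3

ratio = mi300x_hbm3_gb / h100_hbm3_gb
print(ratio)  # 2.4, matching the figure Su quoted
```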

  5. #AMD Instinct #MI300 is THE Chance to Chip into #NVIDIA #AI Share
    NVIDIA is facing very long lead times for its #H100 and #A100; if you want NVIDIA for AI and have not already ordered, don’t expect delivery before 2024. As a traditional #GPU, #MI300X is the GPU-only part: all four center tiles are GPU, and with 192GB #HBM it can simply fit more onto a single GPU than NVIDIA. #MI300A has 24 #Zen4 CPU cores, #CDNA3 GPU cores, and 128GB #HBM3. This is the #APU deployed in the El Capitan 2+ exaflop #supercomputer.
    servethehome.com/amd-instinct-