home.social

#cdna3 — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #cdna3, aggregated by home.social.

  1. #AMD unwraps Instinct #MI500 boasting 1,000X more performance versus MI300X — setting the stage for the era of #YottaFLOPS data centers
    A 1000X performance increase in four years would be a major feat, though we should keep in mind that between the Instinct MI300X and the Instinct MI500 there is a three-generation instruction set architecture (ISA) gap (#CDNA3 => #CDNA6).
    The next-generation #CDNA 6 architecture is on track for 2027.
    tomshardware.com/tech-industry
    Probably #FP4
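    The 1000X-over-three-generations claim above is easy to sanity-check. A minimal sketch, assuming the headline compares FP4 (MI500) against FP8 (MI300X) peak throughput, which is the "Probably #FP4" guess in the post:

    ```python
    # What per-generation gain does a 1000x jump over three ISA
    # generations (CDNA3 -> CDNA6) imply?
    total_speedup = 1000.0
    generations = 3  # CDNA3 -> CDNA4 -> CDNA5 -> CDNA6

    # Naive compounding: each generation would need ~10x on its own.
    per_gen = total_speedup ** (1 / generations)

    # Assumption: if the comparison is FP4 vs. FP8, halving the datatype
    # width roughly doubles peak FLOPS, leaving ~500x for the silicon.
    hw_speedup = total_speedup / 2.0
    hw_per_gen = hw_speedup ** (1 / generations)

    print(f"per generation (raw):          {per_gen:.1f}x")
    print(f"per generation (net of FP4):   {hw_per_gen:.1f}x")
    ```

    Either way, roughly an order of magnitude per generation, which is why the precision of the quoted figure matters.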

  6. #AMD #Instinct #MI300A #APU With #CDNA3 #GPU, #Zen4 #CPU & #UnifiedMemory Offers Up To 4x Speedup Versus Discrete GPUs In #HPC
    Since the AMD Instinct MI300A accelerator uses a unified #HBM interface, it eliminates the need for data replication and does not require a #programming distinction between the host and the device memory spaces.
    wccftech.com/amd-instinct-mi30

  7. #AMD Instinct #MI300 is THE Chance to Chip into #NVIDIA #AI Share
    NVIDIA is facing very long lead times for its #H100 and #A100; if you want NVIDIA for AI and have not already ordered, don't expect delivery before 2024. As a traditional #GPU, the MI300X is a GPU-only part: all four center tiles are GPU. With 192GB of #HBM, AMD can simply fit more onto a single GPU than NVIDIA. The #MI300A has 24 #Zen4 CPU cores, #CDNA3 GPU cores, and 128GB of #HBM3; this is the part deployed in the El Capitan 2+ exaflop #supercomputer.
    servethehome.com/amd-instinct-
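    The "192GB lets AMD fit more onto a single GPU" claim can be quantified with back-of-the-envelope weight sizing. A minimal sketch that counts model weights only, ignoring KV cache and activations (an assumption for illustration, not deployment guidance; the 80GB figure is the H100's capacity for comparison):

    ```python
    GiB = 1024 ** 3

    def max_params(hbm_gib: float, bytes_per_param: float) -> float:
        """Largest parameter count whose weights alone fit in `hbm_gib` of HBM."""
        return hbm_gib * GiB / bytes_per_param

    for name, hbm in [("MI300X (192 GB)", 192), ("H100 (80 GB)", 80)]:
        for prec, nbytes in [("FP16", 2), ("FP8", 1)]:
            print(f"{name} @ {prec}: ~{max_params(hbm, nbytes) / 1e9:.0f}B params")
    ```

    At FP16 the 192GB part holds roughly a 100B-parameter model's weights on one device, where the 80GB part tops out near 40B, which is the substance of the "fit more onto a single GPU" argument.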

  12. AMD has big plans for servers. The Instinct MI300 is a huge combined processor with HBM, Siena is headed for edge servers, and Genoa-X once again gets massive amounts of cache.
    HPC roadmap: AMD's giant APU with HBM is finally coming