home.social

#nvlink — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #nvlink, aggregated by home.social.

  1. NVIDIA’s new Vera Rubin platform brings together specialized chips (Vera CPUs, Rubin GPUs, Groq LPUs, and BlueField-4 DPUs) into coordinated, rack-scale systems designed for real-time AI.

    The big shift: AI isn’t just about training models anymore — it’s about orchestrating entire systems to power intelligent, autonomous agents in real time.
    buysellram.com/blog/the-agenti
    #NVIDIAGTC #AgenticAI #VeraRubin #DataCenter #GPU #InferenceFactory #AIInfrastructure #Groq #NVIDIA #NVLink #AIHardware #technology

  2. Mixture of Experts Powers the Most Intelligent Frontier AI Models, Runs 10x Faster to Deliver 1/10 the Token Cost on NVIDIA Blackwell NVL72. The top 10 most intelligent open-source models all use a ...

    #DataCenter #ArtificialIntelligence #Dynamo #Inference #NVIDIA #Blackwell #NVLink #OpenSource

  4. Mixture of Experts Powers the Most Intelligent Frontier AI Models, Runs 10x Faster on NVIDIA Blackwell NVL72. The top 10 most intelligent open-source models all use a mixture-of-experts architecture...

    #DataCenter #ArtificialIntelligence #Dynamo #Inference #NVIDIA #Blackwell #NVLink #OpenSource

  5. #AWS announced #Trainium3, a new #AItrainingchip with significant performance and energy efficiency improvements. #Trainium4, already in development, will offer even better performance and support #Nvidia’s #NVLink Fusion technology, potentially attracting more AI applications to AWS. techcrunch.com/2025/12/02/amaz #tech #media #news

  6. #Arm and #Nvidia are #partnering to integrate Arm-based #Neoverse #CPUs with Nvidia’s #GPUs using Nvidia’s #NVLink Fusion technology. This #collaboration will benefit customers, particularly #hyperscalers, who prefer custom infrastructure setups. The partnership highlights Nvidia’s strategy of collaborating with major tech companies to expand its influence in the #AIindustry. cnbc.com/2025/11/17/arm-nvidia #tech #media #news

  7. 8x AMD Instinct #MI355X (288GB @8TB/s) take back the lead over 8x Nvidia #B200 (180GB @8TB/s) in #FluidX3D #CFD, achieving 362k MLUPs/s (vs. 219k MLUPs/s). Thanks to Jon Stevens from Hot Aisle for running the benchmarks! 🖖😊

    In single-GPU, both perform about the same, but in 8x #GPU config, MI355X is 65% faster. The difference comes from PCIe bandwidth - MI355X does 55GB/s, B200 only 14GB/s. #Nvidia leaves a lot of perf on the table by not exposing #NVLink P2P to #OpenCL.

    github.com/ProjectPhysX/FluidX
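    The 65% figure in the post follows directly from the quoted throughput numbers. A quick sanity check using only values from the post (a throwaway arithmetic sketch, not FluidX3D code):

    ```python
    # Check the benchmark claims quoted above
    # (MLUPs/s = mega lattice-cell updates per second in FluidX3D).
    mi355x_8gpu = 362_000  # 8x AMD Instinct MI355X, MLUPs/s
    b200_8gpu = 219_000    # 8x Nvidia B200, MLUPs/s

    speedup = mi355x_8gpu / b200_8gpu - 1
    print(f"MI355X advantage in 8-GPU config: {speedup:.0%}")  # -> 65%

    # P2P copy bandwidth reported in the post, GB/s
    mi355x_bw, b200_bw = 55, 14
    print(f"PCIe P2P bandwidth ratio: {mi355x_bw / b200_bw:.1f}x")  # -> 3.9x
    ```

    The ~4x gap in host-to-device/P2P bandwidth is consistent with the post's explanation: single-GPU throughput is nearly identical, so the multi-GPU difference comes from inter-GPU communication, not compute.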

  8. [Translation] A detailed guide to cloud infrastructure for AI projects

    How compute resources, storage systems, networking technologies, and AI frameworks affect the performance of AI projects, and why understanding this infrastructure provides a competitive advantage.

    habr.com/ru/articles/896194/

    #облачная_инфраструктура #искусственный_интеллект #gpu #tpu #dpu #хранение_данных #сетевые_технологии #nvlink #rdma #ai_фреймворки

  9. The incredible power of the NVIDIA GB200 NVL72: inside an AI-compute giant

    Hi, Habr! If you have always wondered how truly high-performance systems are built, you have come to the right place. In today's article we explain how Nvidia combined 72 B200 accelerators into the GB200 NVL72, a single unified CUDA processor, and look at how NVLink, Ethernet, and InfiniBand are used to build an efficient interconnect. A hands-on discussion of the hardware awaits you behind the "Read more" button.

    habr.com/ru/companies/serverfl

    #сервер_флоу #GB200_NVL72 #nvlink #blackwell #nvidia_grace #Nvidia_Superchip #NVLink_Spine #infiniband #llm #SeverFlow

  10. From #SLI to #NVLink: The Evolution of #NVidia's Gaming and Multi-#GPU Technologies and Their Influence on #AI
    NVLink is crafted for scientific research, data analysis, and, most notably, AI and machine learning applications. The higher bandwidth, lower latency, and unified memory make NVLink a far more flexible and powerful solution for today’s computational challenges. The most recent iterations offer bandwidths up to 900 GB/s, a significant enhancement over SLI.
    storagereview.com/review/from-