#nvlink — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #nvlink, aggregated by home.social.
-
NVIDIA and Marvell alliance scales AI-RAN infrastructure https://www.telecomstechnews.com/news/nvidia-and-marvell-alliance-ai-ran-infrastructure/ #nvidia #marvell #airan #telecoms #nvlink #5g #6g #ai #technology
-
NVIDIA’s new Vera Rubin platform brings together specialized chips (Vera CPUs, Rubin GPUs, Groq LPUs, and BlueField-4 DPUs) into coordinated, rack-scale systems designed for real-time AI.
The big shift: AI isn’t just about training models anymore — it’s about orchestrating entire systems to power intelligent, autonomous agents in real time.
https://www.buysellram.com/blog/the-agentic-ai-era-how-nvidia-rubin-vera-cpu-groq-3-lpus-bluefield-4-redefine-the-inference-factory/
#NVIDIAGTC #AgenticAI #VeraRubin #DataCenter #GPU #InferenceFactory #AIInfrastructure #Groq #NVIDIA #NVLink #AIHardware #technology
-
https://www.europesays.com/pl/191972/ NVIDIA Vera Rubin Superchip – an overview of the AI platform with five times the performance of Grace Blackwell #ai #Biznes #bluefield‑4 #budowa #Business #CES2026 #connectx‑9 #cpu #GPU #hbm4 #informacje #lpddr5x #nvidia #nvl72 #nvlink6 #omówienie #opis #PL #Poland #Polish #Polska #Polski #premiera #specyfikacja #superchip #SztucznaInteligencja #targi #VeraRubin
-
Mixture of Experts Powers the Most Intelligent Frontier AI Models, Runs 10x Faster to Deliver 1/10 the Token Cost on NVIDIA Blackwell NVL72. The top 10 most intelligent open-source models all use a ...
#DataCenter #ArtificialIntelligence #Dynamo #Inference #NVIDIA #Blackwell #NVLink #OpenSource
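The headline pairing of "10x faster" with "1/10 the token cost" follows from simple arithmetic: at a fixed hardware cost rate, cost per token scales inversely with throughput. A minimal sketch of that relation, where the dollar rate and baseline throughput are arbitrary illustrative assumptions, not figures from the post:

```python
# Assumed, illustrative numbers -- only the 10x speedup is from the headline.
rack_cost_per_hour = 100.0       # hypothetical $/hour for a rack
baseline_tokens_per_s = 10_000   # hypothetical baseline throughput
speedup = 10                     # "10x faster" from the headline

def cost_per_million_tokens(tokens_per_s: float) -> float:
    """Cost of generating 1M tokens at a fixed hourly hardware rate."""
    tokens_per_hour = tokens_per_s * 3600
    return rack_cost_per_hour / tokens_per_hour * 1e6

base = cost_per_million_tokens(baseline_tokens_per_s)
fast = cost_per_million_tokens(baseline_tokens_per_s * speedup)
print(f"{base:.3f} -> {fast:.3f} $/Mtok ({base / fast:.0f}x cheaper)")
```

Whatever the absolute dollar figures, the ratio is exactly the speedup, which is why the two headline numbers are reciprocals.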
-
Mixture of Experts Powers the Most Intelligent Frontier AI Models, Runs 10x Faster on NVIDIA Blackwell NVL72. The top 10 most intelligent open-source models all use a mixture-of-experts architecture...
#DataCenter #ArtificialIntelligence #Dynamo #Inference #NVIDIA #Blackwell #NVLink #OpenSource
-
#AWS announced #Trainium3, a new #AItrainingchip with significant performance and energy efficiency improvements. #Trainium4, already in development, will offer even better performance and support #Nvidia’s #NVLink Fusion technology, potentially attracting more AI applications to AWS. https://techcrunch.com/2025/12/02/amazon-releases-an-impressive-new-ai-chip-and-teases-a-nvidia-friendly-roadmap/?eicker.news #tech #media #news
-
#Arm and #Nvidia are #partnering to integrate Arm-based #Neoverse #CPUs with Nvidia’s #GPUs using Nvidia’s #NVLink Fusion technology. This #collaboration will benefit customers, particularly #hyperscalers, who prefer custom infrastructure setups. The partnership highlights Nvidia’s strategy of collaborating with major tech companies to expand its influence in the #AIindustry. https://www.cnbc.com/2025/11/17/arm-nvidia-nvlink-partnership-custom-cpu.html?eicker.news #tech #media #news
-
8x AMD Instinct #MI355X (288GB @8TB/s) take back the lead over 8x Nvidia #B200 (180GB @8TB/s) in #FluidX3D #CFD, achieving 362k MLUPs/s (vs. 219k MLUPs/s). Thanks to Jon Stevens from Hot Aisle for running the benchmarks! 🖖😊
In single-GPU runs both perform about the same, but in the 8x #GPU config the MI355X is 65% faster. The difference comes from PCIe P2P bandwidth: the MI355X does 55 GB/s, the B200 only 14 GB/s. #Nvidia leaves a lot of performance on the table by not exposing #NVLink P2P to #OpenCL.
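A quick arithmetic check of the figures quoted above (a sketch; all numbers are taken from the post, and the attribution of the gap to P2P bandwidth is the poster's claim):

```python
# Figures quoted in the post: 8-GPU FluidX3D throughput and measured
# peer-to-peer bandwidth per platform.
mlups_8x = {"MI355X": 362_000, "B200": 219_000}   # MLUPs/s, 8-GPU runs
p2p_gbps = {"MI355X": 55, "B200": 14}             # P2P bandwidth, GB/s

speedup = mlups_8x["MI355X"] / mlups_8x["B200"]
bw_ratio = p2p_gbps["MI355X"] / p2p_gbps["B200"]
print(f"8x MI355X throughput advantage: {speedup - 1:.0%}")   # ~65%
print(f"P2P bandwidth ratio: {bw_ratio:.1f}x")                # ~3.9x
```

The bandwidth ratio (~3.9x) is larger than the throughput gap (~1.65x), which is consistent with halo exchange being only part of each timestep: once communication is fast enough, compute dominates and further bandwidth stops helping.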
-
NVIDIA DGX Spark: the mini AI supercomputer with Blackwell chips now has a price and launch date
🔗 https://tugatech.com.pt/t72827-nvidia-dgx-spark-o-mini-supercomputador-de-ia-com-chips-blackwell-ja-tem-preco-e-data-de-lancamento
#Acer #asus #CES #ChatGPT #computador #desktop #docker #escalabilidade #google #hardware #ia #JetBrains #Lenovo #Meta #microsoft #MSI #mundo #nvidia #NVLink #OpenAI #software #spark #startup #startups #supercomputadores #tecnologia
-
[Translation] A detailed guide to cloud infrastructure for AI projects
How compute resources, storage systems, networking technologies, and AI frameworks affect the performance of AI projects, and why understanding this infrastructure gives you a competitive edge.
https://habr.com/ru/articles/896194/
#облачная_инфраструктура #искусственный_интеллект #gpu #tpu #dpu #хранение_данных #сетевые_технологии #nvlink #rdma #ai_фреймворки
-
The incredible power of the NVIDIA GB200 NVL72: inside a giant of AI computing
Hi, Habr! If you've always wondered how truly high-performance systems are built, you've come to the right place. In today's article we explain how Nvidia combined 72 B200 accelerators into the GB200 NVL72, a single unified CUDA processor, and how NVLink, Ethernet, and InfiniBand are used to build an efficient interconnect. A hands-on discussion of the hardware awaits you under the "Read more" button.
https://habr.com/ru/companies/serverflow/articles/864314/
#сервер_флоу #GB200_NVL72 #nvlink #blackwell #nvidia_grace #Nvidia_Superchip #NVLink_Spine #infiniband #llm #SeverFlow
-
From #SLI to #NVLink: The Evolution of #NVidia's Gaming and Multi-#GPU Technologies and Their Influence on #AI
NVLink is crafted for scientific research, data analysis, and, most notably, AI and machine learning applications. The higher bandwidth, lower latency, and unified memory make NVLink a far more flexible and powerful solution for today’s computational challenges. The most recent iterations offer bandwidths up to 900 GB/s, a significant enhancement over SLI.
https://www.storagereview.com/review/from-sli-to-nvlink-the-evolution-of-gaming-and-multi-gpu-technologies-and-their-influence-on-ai
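To put the jump from SLI to NVLink in perspective, a small sketch comparing rough aggregate bandwidths per generation. The NVLink per-GPU totals match NVIDIA's published figures for each generation; the SLI-bridge and PCIe numbers are approximations assumed here for scale:

```python
# Approximate aggregate bidirectional bandwidths in GB/s.
# SLI bridge value is an assumption (~2 GB/s); NVLink totals are
# NVIDIA's published per-GPU aggregates for each generation.
interconnects = {
    "SLI HB bridge (approx.)":  2,
    "PCIe 3.0 x16":             32,
    "NVLink 1.0 (P100 total)":  160,
    "NVLink 2.0 (V100 total)":  300,
    "NVLink 3.0 (A100 total)":  600,
    "NVLink 4.0 (H100 total)":  900,
}
baseline = interconnects["SLI HB bridge (approx.)"]
for name, gbps in interconnects.items():
    print(f"{name:26s} {gbps:5d} GB/s  ({gbps / baseline:.0f}x SLI)")
```

Even against PCIe rather than SLI, the latest NVLink generation offers well over an order of magnitude more inter-GPU bandwidth, which is what makes unified-memory, multi-GPU training practical.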