home.social

#qwen3_5 — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #qwen3_5, aggregated by home.social.

  1. @john is garbage; it has had many fixes recently, but it is not fully ready yet

  2. Alibaba just released the Qwen‑3.5‑Medium model as open‑source, delivering Sonnet 4.5‑level performance on a single GPU. It uses a Mixture‑of‑Experts architecture and a new “Thinking Mode” to boost AI inference efficiency while staying lightweight. Dive into the details and see how this could reshape open‑source LLM development. #Qwen3_5 #OpenSourceLLM #MixtureOfExperts #ModelEfficiency

    🔗 aidailypost.com/news/alibaba-o

  3. Alibaba's new Qwen 3.5 397B-A17 outperforms even larger rivals by using multi-token prediction and a sparse mixture-of-experts architecture. It cuts inference cost while keeping top-tier performance, hinting at a new era for multimodal AI. Curious how 397 billion parameters can be cheaper? Read the full story. #Qwen3_5 #AlibabaAI #MixtureOfExperts #MultiTokenPrediction

    🔗 aidailypost.com/news/alibabas-