home.social

#qwen3 — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #qwen3, aggregated by home.social.

  1. Takeaway from a self-experiment: if an eight-second sample is enough to let a local AI read out a current text in my voice, that foreseeably changes everything when it comes to source credibility.

    #qwen3 #selbstversuch #ai #journalism also in #lokaljournalismus

  6. New week, small update: Run LLMs Locally

    Now with a new setup for OpenCode with Qwen 3.6 and Gemma 4, including permissions and thinking variants.

    codeberg.org/thbley/talks/raw/

    #ai #llm #llamacpp #stablediffusion #qwen3 #glm #localai #gemma4 #webgpu #opencode

  11. I briefly tried local #LLM models on my #frameworklaptop13, and the "thinking" qwen3.6:35b-a3b-q4_K_M looks quite interesting: it runs usably on an AMD Ryzen 5 AI 340 with 32 GB of RAM.

    This model has a reduced number of active parameters for processing a single token, so at first glance it puts less load on the CPU/GPU, and the whole laptop hums and heats up less. The older qwen3-coder:30b-a3b-q4_K_M works similarly, except it doesn't "think", so the answer arrives faster.

    In any case, on this hardware these models generate output on the order of the low tens (more likely single digits) of tokens per second - measured by eyeballing it. It's usable, but anyone who is really serious about local LLMs will soon reach for something more powerful.

    #ollama #qwen3 #ryzen #localllm
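The "a3b" suffix above can be unpacked with a little arithmetic. This is an editorial sketch; the reading of "35b-a3b" as ~35B total / ~3B active parameters follows Qwen's usual naming convention and is an assumption here:

```python
# Mixture-of-experts sizing, read off the model tag "35b-a3b":
# roughly 35B parameters total, roughly 3B active per generated token.
TOTAL_B = 35   # all experts must be resident in memory
ACTIVE_B = 3   # weights actually multiplied per token

# Per-token compute is roughly that of a 3B dense model, which is why
# the CPU/GPU stays cooler than a dense 35B model would keep it,
# while RAM still has to hold all 35B (quantized) weights.
compute_fraction = ACTIVE_B / TOTAL_B
print(f"per-token compute vs. dense 35B: {compute_fraction:.1%}")
```

This also matches the post's observation that the laptop runs quieter than the total parameter count would suggest: memory footprint scales with 35B, per-token work with 3B.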

  12. Qwen 3B running locally on my laptop... You can all pack it in, you internet-based AIs!

    #KI #qwen3 #lokal #computer #AI

  13. RT @mr_r0b0t: Did you know that Qwen3.6 shipped with native MTP? Yes, the same MTP that Google enabled for Gemma4 just yesterday! Multi-Token Prediction (MTP) = speculative decoding. Here is a Qwen3.6 model, quantized to Q4KM, that supports MTP via ikllama.cpp.

    more at Arint.info

    #AI #Gemma4 #LLM #MTP #Qwen3 #arint_info

    https://x.com/mr_r0b0t/status/2052022017470120067#m
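Since the post equates MTP with speculative decoding, here is a toy sketch of the draft-and-verify loop that speculative decoding performs. This is an editorial illustration with stand-in functions; nothing here is Qwen's or llama.cpp's actual implementation:

```python
def speculative_generate(target_next, draft_next, prompt, num_tokens, k=4):
    """Greedy speculative decoding: a cheap draft proposes k tokens ahead,
    the expensive target verifies them; the matching prefix is kept and
    the first mismatch is replaced by the target's own token."""
    out = list(prompt)
    goal = len(prompt) + num_tokens
    while len(out) < goal:
        proposal = list(out)
        for _ in range(k):                        # draft runs k cheap steps
            proposal.append(draft_next(proposal))
        for i in range(len(out), len(proposal)):  # target verifies each one
            expected = target_next(proposal[:i])
            if proposal[i] != expected:
                out = proposal[:i] + [expected]   # keep prefix + correction
                break
        else:
            out = proposal                        # all k proposals accepted
    return "".join(out[:goal])

# Toy models over the repeating string "abc": the draft agrees with the
# target except at every 5th position, where it guesses "x".
target = lambda seq: "abc"[len(seq) % 3]
draft = lambda seq: "x" if len(seq) % 5 == 0 else "abc"[len(seq) % 3]

print(speculative_generate(target, draft, "a", 8))  # abcabcabc
```

In a real system the "draft" is an MTP head or a small companion model and the "target" is the full model; every accepted proposal saves a full forward pass, so output is identical to greedy decoding but faster.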

  14. New week, more slides: Run LLMs Locally

    Now with LFM 2 and new slides for using Transformers.js with WebGPU for Privacy Filter, Function Calling and Embeddings, running completely in your browser.

    codeberg.org/thbley/talks/raw/

    #ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai #gemma4 #nemotron #webgpu

  19. New week, new slides: Run LLMs Locally

    Now including Nemotron 3 Nano Omni from Nvidia, Llama.cpp built-in tools and new slides about using Transformers.js with WebGPU for Image Recognition and OCR.

    codeberg.org/thbley/talks/raw/

    #ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai #gemma4 #nemotron #webgpu

  24. Implemented my first AI application that runs completely in the browser using Transformers.js, Qwen3-4B and WebGPU. Amazing stuff!
    (all running on a two-year-old laptop with a 10 TOPS integrated GPU)

    #ai #qwen3 #webgpu #transformers

  25. I've been stuck on a crappy UML diagram in #plantuml for seven hours. A colleague needs it tomorrow.
    Since I'm not allowed to use #cloud-based #ai because of our trade secrets, I use either our company's approved #chatgpt Enterprise version or #gemma4 / #qwen3 with #lmstudio.
    For now she has the not-quite-perfect version - the kind where you can still call it a "cosmetic flaw".

  26. New week, new update for the slides of my talk "Run LLMs Locally":

    Now including Gemma4 and Qwen3-Omni with Vision and Audio support and new slides describing Llama.cpp server parameters.

    codeberg.org/thbley/talks/raw/

    #ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai #gemma4

  27. New update for the slides of my talk "Run LLMs Locally": Bonsai-8B

    The latest version of Llama.cpp now supports Vulkan with 1-bit quantized models like Bonsai: an 8B model that is 1.1 GB on disk and takes 2.5 GB of RAM.

    codeberg.org/thbley/talks/raw/

    #ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai
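The Bonsai numbers above are easy to sanity-check: file size divided by parameter count gives the effective bits per weight. Editorial arithmetic only; `implied_bits_per_weight` is our helper, not a llama.cpp utility:

```python
def implied_bits_per_weight(file_size_gb, params_billions):
    """Effective bits stored per weight, from file size and model size
    (1 GB taken as 1e9 bytes, params_billions as 1e9 weights)."""
    return file_size_gb * 8 / params_billions

# An 8B model in a 1.1 GB file works out to ~1.1 bits per weight:
bits = implied_bits_per_weight(1.1, 8)
# consistent with "1-bit" quantization once per-block scale factors and
# metadata are added on top of the sign bits. The larger 2.5 GB RAM
# figure presumably also covers KV cache, activations and runtime buffers.
print(f"{bits:.1f} bits/weight")
```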

  28. New update for the slides of my talk "Run LLMs Locally": WebGPU

    Now models can run completely inside the browser using Transformers.js, Vulkan and WebGPU (slower than llama.cpp, but already usable).

    codeberg.org/thbley/talks/raw/

    #ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai #webgpu

    don't expect LLM-generated code to be correct ↓

  29. 📰 Ah, the tech world's latest offering, Qwen3.6-Plus, is here to dazzle us with "real-world agents"—because who doesn't need a glorified chatbot that can barely order a pizza without a system crash? 🤖🎉 Clearly, we're on the brink of a revolution where our apps are slightly less annoying than they used to be. 🚀🙄
    qwen.ai/blog?id=qwen3.6 #Qwen3.6Plus #Chatbots #TechNews #AIRevolution #UserExperience #HackerNews #ngated

  30. New update for the slides of my talk "Run LLMs Locally":

    Now including Reranking, Qwen 3.5 (slower than Qwen 3, but includes Vision) and loading models with Direct I/O.

    codeberg.org/thbley/talks/raw/

    #llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai #mcp

  31. Old vs. new MBP local LLM showdown | M2 Max vs M5 Max | stress-testing large LLMs
    When the strongest Apple mobile platform of 2026, the M5 Max MBP, meets a locally deployed Qwen 3. […]
    #unwire TV #AI #Apple #M2 Max
    unwire.hk/2026/03/13/m5-max-ma

  32. I updated the slides for my talk "Run LLMs Locally":

    Now including image generation with Qwen3 and content classification from the Qwen3Guard Technical Report paper.

    codeberg.org/thbley/talks/raw/

    #llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai

  33. #Steeve 6 just dropped. $100 to train. 100k lines of #LoRA training on top of #Qwen 3.5. Vision now in-built so he really *can* see. Not to brag or anything, but let's see #OpenAI pull this off! :steeve:

    #llm #chatbot #qwen3

  34. Unsloth's "guide" to fine-tuning #Qwen3.5 is the digital equivalent of watching paint dry, but with more #acronyms and chevrons. 🤦‍♂️🚀 If you enjoy reading a glorified list of links and buzzwords, you'll be in heaven—otherwise, pray for a faster MoE escape plan! 🏃‍♀️💨
    unsloth.ai/docs/models/qwen3.5 #Unsloth #guide #techboredom #digitalescape #HackerNews #ngated

  38. Heaven help me, I'm trying to re-train #Steeve from a new foundational model. Again. :steeve:

    #qwen3 #qwen #ai #llm #chatbot

  39. New to me: a News Analyzer that takes recordings of "The News" from my #HDHomeRun, transcribes them, breaks them into logical segments and summarizes each segment - all running offline, on a local #llm

    #fasterwhisper #qwen3 #ffmpeg #comskip #framework #frameworkdesktop
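The pipeline described above (transcribe, split into logical segments, summarize each) can be sketched with a simple gap-based splitter. This is an editorial illustration, not the poster's code; the silence-gap heuristic and the `group_segments` helper are assumptions:

```python
def group_segments(segments, max_gap=2.0):
    """Group (start_sec, end_sec, text) transcript pieces into logical
    blocks, starting a new block whenever the silence between consecutive
    pieces exceeds max_gap seconds - a crude stand-in for real topic or
    commercial-break segmentation (cf. the #comskip tag above)."""
    blocks, current, last_end = [], [], None
    for start, end, text in segments:
        if current and start - last_end > max_gap:
            blocks.append(" ".join(current))
            current = []
        current.append(text)
        last_end = end
    if current:
        blocks.append(" ".join(current))
    return blocks

# In the real pipeline the tuples would come from a local transcriber
# such as faster-whisper, and each resulting block would be handed to a
# local LLM for a per-segment summary.
print(group_segments([(0, 4, "Top story."), (4.5, 9, "Details."),
                      (15, 20, "Weather next.")]))
```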
