#qwen3 — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #qwen3, aggregated by home.social.
-
Takeaway from a self-experiment: if an eight-second sample is enough for a local AI to read out a current text in my voice, that foreseeably changes everything when it comes to source credibility.
#qwen3 #selbstversuch #ai #journalism, also in #lokaljournalismus
-
RT @cheenanet: Is the era here in which Qwen3.6-27B processes 120 to 200 tokens per second on a single RTX 3090?
more at Arint.info
#AI #GitHub #LLM #MachineLearning #Qwen3 #RTX3090 #arint_info
-
New week, small update: Run LLMs Locally
Now with a new setup for OpenCode with Qwen 3.6 and Gemma 4, including permissions and thinking variants.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf
#ai #llm #llamacpp #stablediffusion #qwen3 #glm #localai #gemma4 #webgpu #opencode
-
I briefly tried local #LLM models on my #frameworklaptop13, and the "thinking" qwen3.6:35b-a3b-q4_K_M looks quite interesting: it runs usably on an AMD Ryzen 5 AI 340 with 32 GB RAM.
This model has a reduced number of active parameters per processed token. At first glance it loads the CPU/GPU less, so the whole laptop hums and heats up less. The older qwen3-coder:30b-a3b-q4_K_M behaves similarly, except it doesn't "think", so the answer arrives faster.
In any case, on this hardware these models generate output on the order of the low tens (more likely single digits) of tokens per second, measured by eyeballing it. It's usable, but anyone who is really serious about local LLMs will soon reach for something more powerful.
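A back-of-the-envelope check makes the "a3b" advantage concrete: single-stream decoding is mostly memory-bandwidth-bound, so tokens/s is roughly RAM bandwidth divided by the bytes of weights read per token. A minimal sketch; the 3B active-parameter count comes from the model name, while the bandwidth and bits-per-weight figures are assumptions:

```ts
// Rough decode-speed ceiling for a memory-bandwidth-bound model.
// Assumed: ~3e9 active params (the "a3b" suffix), ~4.8 bits/weight for Q4_K_M,
// and ~90 GB/s RAM bandwidth for a dual-channel laptop. Measure your own HW.
const activeParams = 3e9;
const bytesPerParam = 4.8 / 8;
const bandwidth = 90e9; // bytes per second

const ceilingTokensPerSec = bandwidth / (activeParams * bytesPerParam);
console.log(`~${ceilingTokensPerSec.toFixed(0)} tokens/s upper bound`);
// -> ~50 t/s ceiling; a dense 35B model would read ~12x more bytes per token,
// which is why the MoE variant stays usable on this hardware.
```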
-
RT @mr_r0b0t: Did you know that Qwen3.6 shipped with native MTP? Yes, the same MTP that Google enabled Gemma4 support for yesterday! Multi-token prediction (MTP) = speculative decoding. Here is a Qwen3.6 model, quantized to Q4KM, that supports MTP via ikllama.cpp.
more at Arint.info
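For readers new to the term: speculative decoding lets a small draft model propose several tokens, which the large target model verifies in one batched pass; only the longest agreeing prefix is kept, so you get more than one token per expensive forward pass. A minimal greedy sketch; the two model functions are toy stand-ins, not the ikllama.cpp API:

```ts
// Greedy speculative decoding: draft proposes k tokens, the target verifies,
// and we keep the agreeing prefix plus the target's own token at the first miss.
// Toy deterministic "models" so the sketch runs; swap in real logits in practice.
const draftNext = (ctx: number[]): number => (ctx[ctx.length - 1] * 31 + 7) % 100;
const targetNext = (ctx: number[]): number => (ctx[ctx.length - 1] * 31 + 7) % 97;

function speculativeStep(ctx: number[], k: number): number[] {
  // 1. Draft k candidate tokens autoregressively with the cheap model.
  const draft: number[] = [];
  for (let i = 0; i < k; i++) draft.push(draftNext([...ctx, ...draft]));

  // 2. Verify: in a real system the target scores all k positions in one
  //    batched pass; here we replay its greedy choice position by position.
  const accepted: number[] = [];
  for (let i = 0; i < k; i++) {
    const t = targetNext([...ctx, ...accepted]);
    accepted.push(t);          // the target's output is always valid to keep
    if (t !== draft[i]) break; // first disagreement ends the step
  }
  return accepted;
}

console.log(speculativeStep([1, 2, 3], 4)); // tokens accepted this step
```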
-
New week, more slides: Run LLMs Locally
Now with LFM 2 and new slides for using Transformers.js with WebGPU for Privacy Filter, Function Calling and Embeddings, running completely in your browser.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf
#ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai #gemma4 #nemotron #webgpu
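As a taste of the embeddings part, the Transformers.js call is only a few lines; a sketch assuming one common embedding model (not necessarily the one in the slides):

```ts
import { pipeline } from "@huggingface/transformers";

// Sentence embeddings computed entirely in the browser on the GPU via WebGPU.
// The model id is an assumption; any small ONNX embedding model works.
const embed = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2", {
  device: "webgpu",
});

const vectors = await embed(["local llms are fun", "the weather is nice"], {
  pooling: "mean",   // mean-pool token vectors into one sentence vector
  normalize: true,   // unit length, so dot product == cosine similarity
});
console.log(vectors.dims); // [2, 384] for this model
```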
-
New week, new slides: Run LLMs Locally
Now including Nemotron 3 Nano Omni from Nvidia, Llama.cpp built-in tools and new slides about using Transformers.js with WebGPU for Image Recognition and OCR.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf
#ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai #gemma4 #nemotron #webgpu
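For a feel of the image-recognition part, the Transformers.js pipeline call looks roughly like this; the model id is an assumed ONNX conversion, not necessarily the one from the slides:

```ts
import { pipeline } from "@huggingface/transformers";

// In-browser image recognition on WebGPU; no image ever leaves the machine.
// "Xenova/vit-base-patch16-224" is an assumed model choice.
const classify = await pipeline(
  "image-classification",
  "Xenova/vit-base-patch16-224",
  { device: "webgpu" },
);

const results = await classify("https://example.com/cat.jpg");
console.log(results); // e.g. [{ label: "tabby, tabby cat", score: 0.87 }, ...]
```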
-
Implemented my first AI application that runs completely in the browser using Transformers.js, Qwen3-4B and WebGPU. Amazing stuff!
(all running on a two-year-old laptop with a 10 TOPS integrated GPU)
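The core of such an app fits in a dozen lines; a sketch of the pattern, with the ONNX model id being an assumption:

```ts
import { pipeline } from "@huggingface/transformers";

// Text generation fully in-browser: weights are downloaded once, cached,
// and run on the GPU via WebGPU. The model id is an assumed ONNX export.
const generator = await pipeline("text-generation", "onnx-community/Qwen3-4B-ONNX", {
  device: "webgpu",
  dtype: "q4", // 4-bit weights keep download size and VRAM use manageable
});

const out = await generator(
  [{ role: "user", content: "Explain WebGPU in one sentence." }],
  { max_new_tokens: 64 },
);
console.log(out[0].generated_text.at(-1).content);
```
-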
We got 207 tok/s with Qwen3.5-27B on an RTX 3090
https://github.com/Luce-Org/lucebox-hub
#HackerNews #Qwen3.5 #RTX3090 #tok/s #machinelearning #AIperformance
-
Qwen3.6-Max-Preview: Smarter, Sharper, Still Evolving
https://qwen.ai/blog?id=qwen3.6-max-preview
#HackerNews #Qwen3.6 #Max #Preview #AI #Evolution #Technology #Innovation
-
I've been sitting on a crappy UML diagram in #plantuml for seven hours now. A colleague needs it tomorrow.
Since I'm not allowed to use #cloud based #ai because of our trade secrets, I use either our company's approved #chatgpt Enterprise version or #gemma4 / #qwen3 with #lmstudio.
For now she has the not-quite-perfect variant, the kind you could still call a "cosmetic flaw".
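For what it's worth, LM Studio serves an OpenAI-compatible API on localhost, so iterating on a diagram like this can be scripted; a sketch (port 1234 is LM Studio's default, the model name is a placeholder for whatever is loaded):

```ts
// Ask a locally served model (LM Studio's OpenAI-compatible endpoint) to
// produce a PlantUML diagram. "qwen3" is a placeholder model name.
const res = await fetch("http://localhost:1234/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen3",
    messages: [
      { role: "system", content: "You output only valid PlantUML, no prose." },
      { role: "user", content: "Class diagram: Order has many OrderItems, each referencing a Product." },
    ],
    temperature: 0.2, // low temperature keeps the diagram syntax stable
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content); // paste into plantuml to render
```
-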
Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7
-
New week, new update for the slides of my talk "Run LLMs Locally":
Now including Gemma4 and Qwen3-Omni with Vision and Audio support and new slides describing Llama.cpp server parameters.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf
#ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai #gemma4
-
New update for the slides of my talk "Run LLMs Locally": Bonsai-8B
The latest version of Llama.cpp now supports Vulkan with 1-bit quantized models like Bonsai: an 8B model that is 1.1 GB on disk and takes 2.5 GB of RAM.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf
#ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai
-
New update for the slides of my talk "Run LLMs Locally": WebGPU
Now models can run completely inside the browser using Transformers.js, Vulkan and WebGPU (slower than llama.cpp, but already usable).
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf
#ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai #webgpu
Don't expect LLM-generated code to be correct ↓
-
📰 Ah, the tech world's latest offering, Qwen3.6-Plus, is here to dazzle us with "real-world agents"—because who doesn't need a glorified chatbot that can barely order a pizza without a system crash? 🤖🎉 Clearly, we're on the brink of a revolution where our apps are slightly less annoying than they used to be. 🚀🙄
https://qwen.ai/blog?id=qwen3.6 #Qwen3.6Plus #Chatbots #TechNews #AIRevolution #UserExperience #HackerNews #ngated
-
New update for the slides of my talk "Run LLMs Locally":
Now including music generation with ACE-Step and OCR using LightOnOCR.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf
#ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai
-
New update for the slides of my talk "Run LLMs Locally":
Now including Reranking, Qwen 3.5 (slower than Qwen 3, but includes Vision) and loading models with Direct I/O.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf
#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai #mcp
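On the reranking item: llama.cpp's server can host a reranker model and expose a rerank endpoint; a sketch of the request shape as I understand llama-server's API (verify the path and flags against your build):

```ts
// Rerank candidate passages against a query using a reranker model served by
// llama-server (started with a reranking-capable model and its reranking mode).
// Endpoint path and field names follow the Jina-style API llama.cpp mirrors.
const res = await fetch("http://localhost:8080/v1/rerank", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    query: "how do I load models with direct I/O?",
    documents: [
      "Direct I/O bypasses the page cache when reading model files.",
      "Stable Diffusion generates images from text prompts.",
    ],
  }),
});
const { results } = await res.json();
console.log(results); // [{ index, relevance_score }, ...] per document
```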
-
Old vs. new MBP local-LLM showdown | M2 Max vs M5 Max | stress-testing large LLMs
When the strongest Apple mobile platform of 2026, the M5 Max MBP, meets a locally deployed Qwen 3. […]
#unwire TV #AI #Apple #M2 Max
https://unwire.hk/2026/03/13/m5-max-macbook-pro/unwire_podcast/?utm_source=rss&utm_medium=rss&utm_campaign=m5-max-macbook-pro
-
One more update for the slides of my talk "Run LLMs Locally":
Now including text to speech with Qwen3-TTS and Model Context Protocol.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf
#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai #mcp
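Since the Model Context Protocol is new in the slides, here is what a minimal MCP tool server looks like with the official TypeScript SDK; the tool itself is a toy example:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A toy MCP server exposing one tool; an MCP-aware local LLM client can
// discover and call it over stdio.
const server = new McpServer({ name: "demo-tools", version: "0.1.0" });

server.tool(
  "word_count",
  { text: z.string() }, // input schema declared with zod
  async ({ text }) => ({
    content: [{ type: "text", text: String(text.trim().split(/\s+/).length) }],
  }),
);

await server.connect(new StdioServerTransport());
```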
-
I updated the slides for my talk "Run LLMs Locally":
Now including image generation with Qwen3 and content classification from the Qwen3Guard Technical Report paper.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf
#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai
-
How to run Qwen 3.5 locally
https://unsloth.ai/docs/models/qwen3.5
#HackerNews #Qwen3.5 #LocalSetup #AIModels #TechTutorials
-
Unsloth's "guide" to fine-tuning #Qwen3.5 is the digital equivalent of watching paint dry, but with more #acronyms and chevrons. 🤦♂️🚀 If you enjoy reading a glorified list of links and buzzwords, you'll be in heaven—otherwise, pray for a faster MoE escape plan! 🏃♀️💨
https://unsloth.ai/docs/models/qwen3.5/fine-tune #Unsloth #guide #techboredom #digitalescape #HackerNews #ngated
-
Qwen3.5 122B and 35B models offer Sonnet 4.5 performance on local computers
#HackerNews #Qwen3.5 #Sonnet4.5 #LocalComputers #AIModels #OpenSource
-
New to me: a News Analyzer that takes recordings of "The News" from my #HDHomeRun, transcribes them, breaks them into logical segments and summarizes each segment - all running offline on a local #llm
#fasterwhisper #qwen3 #ffmpeg #comskip #framework #frameworkdesktop
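The pipeline shape is easy to reproduce; a sketch with every stage stubbed out, since the real setup wires in ffmpeg/comskip, faster-whisper, and a local LLM (all function bodies here are toy placeholders):

```ts
// Offline news analyzer, sketched: transcribe -> segment -> summarize.
// Each stub stands in for a real local tool (faster-whisper, a local LLM).
type Segment = { title: string; transcript: string };

const transcribe = async (_audioPath: string): Promise<string> =>
  "Top story about the budget. Then weather. Finally local sports."; // faster-whisper here

const splitIntoStories = async (transcript: string): Promise<Segment[]> =>
  transcript.split(". ").filter(Boolean)
    .map((s, i) => ({ title: `Segment ${i + 1}`, transcript: s })); // LLM segmentation here

const summarize = async (s: Segment): Promise<string> =>
  s.transcript.slice(0, 80); // LLM summary call here

async function analyzeRecording(audioPath: string): Promise<void> {
  const transcript = await transcribe(audioPath);
  for (const story of await splitIntoStories(transcript)) {
    console.log(`## ${story.title}\n${await summarize(story)}\n`);
  }
}

await analyzeRecording("recording.ts"); // .ts = MPEG transport stream from the HDHomeRun
```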