home.social

#opensourceai — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #opensourceai, aggregated by home.social.

  1. If you are exploring AI 3D model generation for Godot and Unity, read on. Turning a text prompt or a single photo into a textured 3D model is now possible entirely on your own hardware. This guide will help you navigate the landscape, whether you use Godot, Unity, or both. We focus on free, locally runnable AI models and explain exactly which output formats they support, so you can build a seamless pipeline from generation to engine. […]

    https://blog.icod.de/2026/05/02/ai-3d-models-godot-unity-local/
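
    A minimal format-bridging sketch of the generation-to-engine hand-off the post describes, assuming the local generator writes an OBJ file. The filename and the use of the trimesh library are illustrative, not taken from the linked article; Godot imports glTF/GLB natively, and Unity can via the glTFast package.

    ```python
    import trimesh  # pip install trimesh

    # Hypothetical output path from a local text-to-3D / image-to-3D generator.
    mesh = trimesh.load("generated_model.obj", force="mesh")

    # GLB is a single-file glTF container both engines can ingest:
    # drop it into res:// for Godot, or Assets/ for Unity (with glTFast installed).
    mesh.export("generated_model.glb")
    print(f"Exported {len(mesh.vertices)} vertices and {len(mesh.faces)} faces")
    ```
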
  2. NVIDIA Unveils "Nemotron 3 Nano Omni," Merging Vision, Audio, and Language for AI Agents

    NVIDIA's Nemotron 3 Nano Omni is a new AI model that combines vision, audio, and language. It helps AI agents work faster and understand more.

    #NvidiaAI, #Nemotron3, #MultimodalAI, #OpenSourceAI, #AIAgents

    newsletter.tf/nvidia-nemotron-

    Curious about Tencent's new Hunyuan 3 (HY3) model and its massive 295B total parameters?
    🤖 I have put together a comprehensive guide on how this AI handles deep reasoning and developer workflows like a pro!

    Read it here: techglimmer.io/what-is-tencent

    #TencentHY3 #OpenSourceAI #TechGlimmer #Tencent

  4. RE: mastodon.social/@clifmo/116456

    Just wrapped a fully local LLM stack on Fedora + AMD RX 7900 XTX (24GB VRAM).

    🔹 Engine: llama.cpp w/ Vulkan/RADV backend
    🔹 Model: Qwen3.6-35B-A3B (MoE, Q4_K_M, ~21.4GB)
    🔹 Config: -ngl 99 --ctx-size 16384 --batch-size 512
    🔹 Perf: ~137 tok/s gen, ~344 tok/s prompt, rock-solid at 16k context
    🔹 All VRAM. Zero CPU offload.

    Powering AnythingLLM for private RAG pipelines. 🐧⚡

    #LocalLLM #Fedora #AMDGPU #Vulkan #OpenSourceAI #RAG #LLM
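
    For context, a minimal client sketch against a stack like the one above: llama.cpp's llama-server exposes an OpenAI-compatible HTTP API (port 8080 by default), which is also how a front end such as AnythingLLM can be pointed at it. The model filename and prompt below are placeholders, not taken from the post.

    ```python
    import requests

    # Serve the model with the flags quoted in the post, e.g. (filename is a placeholder):
    #   llama-server -m Qwen3.6-35B-A3B-Q4_K_M.gguf -ngl 99 --ctx-size 16384 --batch-size 512
    # Then query the OpenAI-compatible endpoint llama-server exposes on localhost:8080.
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "messages": [{"role": "user", "content": "Say hello from local VRAM."}],
            "max_tokens": 64,
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])
    ```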

  5. Who will own the future of intelligence: a few corporate gatekeepers or the entire human race?
    While Big Tech builds walled gardens, a decentralized "global brain" is already outperforming trillion-dollar giants on key benchmarks.
    How can a protocol with no CEO challenge the status quo? Discover the truth about the "Bitcoin of Intelligence" and why the old AI model is destined to fail.
    Listen now to find out: spreaker.com/episode/47-ai-rev
    #Bittensor #DecentralizedAI #TAO #OpenSourceAI #Web3

  6. Gemma 4 is a serious upgrade for open-weight AI: 4-model lineup, 256K tokens, video understanding, native function calling and 140+ languages under Apache 2.0.
    I dug into the specs, benchmarks and where to run it (AI Studio, Edge Gallery, Hugging Face, Ollama, NIM, Docker) here: techglimmer.io/google-gemma-4-

    #Gemma4 #OpenSourceAI #FOSS #AI #GoogleDeepMind #TechGlimmer
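
    Of the runtimes listed above, Ollama is the quickest to script against locally. A minimal sketch, assuming the model is published to the Ollama library under a tag like "gemma4" (the tag and prompt are placeholders; check the Ollama library for the real name):

    ```python
    import requests

    # Ollama's local HTTP API listens on port 11434 by default.
    # "gemma4" is an assumed tag -- substitute whatever `ollama pull` actually fetched.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "gemma4",
            "prompt": "Summarize what native function calling lets an LLM do.",
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["response"])
    ```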