#local-llms — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #local-llms, aggregated by home.social.
-
My #arbeitgeber is currently, in a #MicrosoftTeams call for all employees, going on about #digitalesouveranitat, and then I'm supposed to do MORE with #microsoft #github #copilot. And naturally EVERYTHING becomes #ai. Even our TLD is switching from .net to .ai.
We are explicitly asked to please integrate #ki into our daily #Arbeit; the sales team is supposed to. But only "ours", because of the data. I'd better go find out whether #localLLMs are allowed at all.
https://hessen.social/@Moonstone2487/116082676681677166
-
Mini PC for local LLMs in 2026
-
Running Local LLMs Offline on a Ten-Hour Flight
https://deploy.live/blog/running-local-llms-offline-on-a-ten-hour-flight/
#HackerNews #LocalLLMs #Offline #Flight #Technology #AIApplications #TravelTech
-
Just a note for parallel universe me: flash attention is bad for #LocalLLMs
-
So I have been trying the new #Gemma4 models on my M1 MacBook Pro, specifically gemma4:26b, which is 17 GB in size.
Obviously not the most challenging coding tasks, but...
Much, much faster response times than local models 6-12 months ago. Previously, Qwen, DeepSeek, and even Gemma 3 simply took too long to be practical.
I find it incredible this can run on just my 5.5-year-old laptop.
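A rough back-of-envelope check on why a ~26B-parameter model can be a 17 GB download that fits on a laptop: the ~5.2 bits/param figure below is my assumption for a typical 4-5 bit quantization, not something stated in the post.

```python
# Approximate model size from parameter count and bits per parameter.
def model_size_gb(params: float, bits_per_param: float) -> float:
    """Rough on-disk/in-memory size in GB (decimal)."""
    return params * bits_per_param / 8 / 1e9

full_fp16 = model_size_gb(26e9, 16)   # unquantized half precision
quantized = model_size_gb(26e9, 5.2)  # assumed ~5-bit quantization

print(f"fp16:   {full_fp16:.1f} GB")  # ~52 GB: too big for a 16-32 GB laptop
print(f"~5-bit: {quantized:.1f} GB")  # ~17 GB: matches the download size
```

Quantization, not smaller models, is what closes the gap: the same weights at 16 bits would be roughly three times the size.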
-
Just so we are clear: #LocalLLMs are an asset if trained and used well. But be aware that many projects pretend to be open source while their releases contain closed-source components, and it's not transparent what is going on.
Go to the source. Llama.cpp, PyTorch, etc.
-
If you are running #LocalLLMs you may be using LM Studio. Just a fair warning: while this is practical, it's also proxying everything through their infrastructure. It's a privacy nightmare.
-
Find out which AI models your machine can actually run.
CanIRun.ai — Can your machine run AI models? https://www.canirun.ai/
ht @researchbuzz.bsky.social
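The idea behind such a checker can be sketched in a few lines. The 4-bit weights and 20% runtime headroom below are my own rule-of-thumb assumptions, not CanIRun.ai's actual method:

```python
# Toy "can I run it?" check: does a Q4-quantized model of a given
# parameter count fit in a machine's available memory?
def fits(params_b: float, mem_gb: float) -> bool:
    """params_b in billions; mem_gb of usable RAM/VRAM."""
    # Assumed: ~0.5 bytes/param for 4-bit weights, +20% for KV cache
    # and runtime overhead.
    needed_gb = params_b * 0.5 * 1.2
    return needed_gb <= mem_gb

for size in (7, 13, 26, 70):
    verdict = "yes" if fits(size, 16) else "no"
    print(f"{size}B @ ~4-bit on a 16 GB machine: {verdict}")
```

Real checkers also account for context length and GPU/CPU split, but the core test is this kind of arithmetic.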
-
https://mastodon.social/@silentexception/116073910238301254
👍
Is there any initiative to pool the various initiatives across European academia and public research to come up with a common #ecosystem of #europeandata and #LocalLLMs ?
-
I spent some time playing around with Local LLMs on Apple Silicon. Here's how that played out.
Local LLMs on Spare Apple Silicon: A Cautionary Tale
https://macadminmusings.com/blog/2026/01/11/local-llms-on-spare-apple-silicon-a-cautionary-tale/
#MacAdmins #LocalLLMs
-
MiniMax M2 & Agent: Ingenious in Simplicity
MiniMax M2 was released on Monday 27th October by MiniMax, a Chinese AI lab founded in December 2021....
#ai #generative-ai #local-llms #llms #llm #llm-pricing #pelican-riding-a-bicycle #llm-release #ai-in-china #minimax
-
🚨 ALERT: Local LLMs, the supposed guardians of your digital fortress, are apparently about as secure as a wet paper bag. 🤦♂️ This "groundbreaking" #research reveals that local models are easily tricked, making them the cyber equivalent of a friendly puppy that wags its tail at everyone, including burglars. 🐶🔓
https://quesma.com/blog/local-llms-security-paradox/ #LocalLLMs #CyberSecurity #DigitalFortress #Vulnerability #AIModels #HackerNews #ngated
-
Experimenting with Local LLMs on macOS
https://blog.6nok.org/experimenting-with-local-llms-on-macos/
#HackerNews #LocalLLMs #macOS #Experimenting #AI #Technology
-
🚀 Take control of your AI usage! With LiteLLM + OpenWebUI you can unify cloud & local models, set real budgets, and never get surprise bills. Perfect for home labs and small teams. 🧑💻💡
#LiteLLM #OpenWebUI #Docker #AItools #HomeLab #LocalLLMs #APIGateway #AIbudget #TechBlog #SmallBusinessAI
https://victornava.dev/2025/09/02/litellm-at-home-one-endpoint-real-budgets-zero-surprises/
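The budget mechanism such a gateway enforces is conceptually simple: track spend per key and refuse requests that would exceed the cap. Here is a toy sketch of that idea; the class name, pricing, and numbers are invented for illustration and are not LiteLLM's API:

```python
# Toy illustration of the budget idea behind an LLM gateway.
class BudgetGateway:
    def __init__(self, max_budget_usd: float):
        self.max_budget = max_budget_usd
        self.spent = 0.0

    def request(self, prompt_tokens: int, completion_tokens: int,
                usd_per_1k_tokens: float) -> bool:
        """Record a call's cost; refuse once the budget would be exceeded."""
        cost = (prompt_tokens + completion_tokens) / 1000 * usd_per_1k_tokens
        if self.spent + cost > self.max_budget:
            return False  # reject before the model is ever called
        self.spent += cost
        return True

gw = BudgetGateway(max_budget_usd=1.00)
print(gw.request(800, 200, usd_per_1k_tokens=0.50))   # $0.50 -> allowed
print(gw.request(2000, 0, usd_per_1k_tokens=0.50))    # would total $1.50 -> rejected
```

The "zero surprises" property comes from rejecting at the gateway, so a runaway script can never bill past the cap.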
-
New to AI? The #ollama UI might be the perfect, easy-to-use entry point. It's a fantastic start! 🎉 #AIforBeginners #LocalLLMs https://youtu.be/prrWESXl7wg
-
"Simon Willison has a plan for the end of the world. It’s a USB stick, onto which he has loaded a couple of his favorite open-weight LLMs—models that have been shared publicly by their creators and that can, in principle, be downloaded and run with local hardware. If human civilization should ever collapse, Willison plans to use all the knowledge encoded in their billions of parameters for help. “It’s like having a weird, condensed, faulty version of Wikipedia, so I can help reboot society with the help of my little USB stick,” he says.
But you don’t need to be planning for the end of the world to want to run an LLM on your own device. Willison, who writes a popular blog about local LLMs and software development, has plenty of compatriots: r/LocalLLaMA, a subreddit devoted to running LLMs on your own hardware, has half a million members.
For people who are concerned about privacy, want to break free from the control of the big LLM companies, or just enjoy tinkering, local models offer a compelling alternative to ChatGPT and its web-based peers. The local LLM world used to have a high barrier to entry: In the early days, it was impossible to run anything useful without investing in pricey GPUs. But researchers have had so much success in shrinking down and speeding up models that anyone with a laptop, or even a smartphone, can now get in on the action. “A couple of years ago, I’d have said personal computers are not powerful enough to run the good models. You need a $50,000 server rack to run them,” Willison says. “And I kept on being proved wrong time and time again.”"
https://www.technologyreview.com/2025/07/17/1120391/how-to-run-an-llm-on-your-laptop/
#AI #GenerativeAI #LLMs #Chatbots #LocalLLMs #Privacy #DataProtection #Decentralization