#ramalama — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #ramalama, aggregated by home.social.

  1. Friends Don't Let Friends Use #Ollama sleepingrobots.com/dreams/stop

    For running LLMs locally, use llama.cpp with its built-in web UI, or try #ramalama ramalama.ai/ (a launch sketch follows this list)

  2. Raised a bug about #ramalama today not playing well with #arm64 and #amd GPUs. However, if you force the base image, local inference does use #vulkan, and it runs much faster than maxing out the CPU cores on my #altra (see the second sketch below).
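
The tip in the first post is straightforward to script. Below is a minimal sketch, assuming a llama.cpp build with the llama-server binary on PATH; the model path and port are illustrative placeholders, not values from the post. llama-server hosts llama.cpp's built-in web UI over HTTP.

```python
# Minimal sketch: start llama.cpp's llama-server (which hosts the
# built-in web UI) and open that UI in the default browser.
# Assumes `llama-server` is on PATH; replace the GGUF path below
# with a real local model file.
import subprocess
import webbrowser

MODEL_PATH = "models/example-model.gguf"  # hypothetical path
PORT = 8080  # llama-server's default HTTP port

server = subprocess.Popen(
    ["llama-server", "-m", MODEL_PATH, "--port", str(PORT)]
)
webbrowser.open(f"http://localhost:{PORT}")  # UI may take a moment while the model loads
server.wait()  # runs until interrupted with Ctrl-C
```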
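
The second post's workaround, forcing the base image so inference runs on Vulkan instead of the CPU, looks roughly like the sketch below. It is written under assumptions: the image and model names are illustrative only, and ramalama's --image option is used to override the container image that its hardware detection would otherwise choose.

```python
# Minimal sketch of the "force the base image" workaround: override
# RamaLama's auto-selected container image so inference uses Vulkan.
# Assumes `ramalama` is installed; the image and model names below
# are hypothetical placeholders, not verified working values.
import subprocess

IMAGE = "quay.io/ramalama/vulkan:latest"  # hypothetical Vulkan-enabled image
MODEL = "tinyllama"                       # hypothetical model reference

# --image bypasses the hardware detection that misfired on arm64 + AMD GPUs
subprocess.run(["ramalama", "--image", IMAGE, "run", MODEL], check=True)
```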