home.social

#ellama — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #ellama, aggregated by home.social.

  1. @hendrik considering I'm a complete #dart newbie, I think it did pretty well. Even when it messed up, it was able to auto-iterate until make passed without me having to keep re-prompting.

    I'm still using #ellama for other interactions, such as reviewing a patch series before posting. This #eca workflow is really tuned for the edit/compile/test cycle of writing new code.

    The next time I play with it I want to try local inference and see how that performs with local models in control.

  2. The experience is very different from the #ellama integration I currently use for general queries. The principal interface is still a chat window, but rather than copying and pasting code you can watch the #LLM's internal monologue and then approve its requests to edit files and run tools.

    The first time I hit a compile error, I just told it the build had failed and asked it to fix the problem. It's quite something watching it invoke make, read the error, and then iterate until the problem is fixed.

    2/n

  3. Speaking of which, is there a way to configure it to point to a remote service? I haven't been able to find anything in the official docs.
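
    If the question is about #ellama, it delegates model access to the `llm` Emacs library, whose Ollama provider accepts a host and port. A minimal sketch, assuming a remote Ollama server at a hypothetical host `llm.example.com` on Ollama's default port:

    ```elisp
    ;; Sketch: point ellama at a remote Ollama server via the llm library.
    ;; The host and model name below are placeholders, not from the post.
    (require 'llm-ollama)
    (setopt ellama-provider
            (make-llm-ollama
             :host "llm.example.com"   ; hypothetical remote host
             :port 11434               ; Ollama's default port
             :chat-model "llama3.2"))
    ```

    If the post is instead asking about another tool in the thread (e.g. #eca), its own configuration mechanism would apply; the snippet above covers only the ellama side.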

  4. Sharing sessions through was a great idea after all. This, coupled with my cheap-ish on-demand server that I can connect to from my devices, should be a decent jump in my productivity.

  5. Thanks to #ramalama detecting my #vulkan capable #integratedgpu, I can now run a lot of models without the CPU cores melting. I still need to work out the right runes for #ellama to work properly with the #mistral model, though.
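
    One possible set of "runes", sketched under assumptions the post doesn't confirm: if `ramalama serve` is exposing an OpenAI-compatible endpoint on localhost (verify the port your invocation actually binds; 8080 is a common default), ellama can reach it through the `llm` library's OpenAI-compatible provider:

    ```elisp
    ;; Sketch: talk to a locally served mistral model through ellama.
    ;; The URL and served model name are assumptions -- check what
    ;; your ramalama invocation actually exposes.
    (require 'llm-openai)
    (setopt ellama-provider
            (make-llm-openai-compatible
             :url "http://localhost:8080/v1"  ; adjust to your server
             :chat-model "mistral"))          ; model name as served
    ```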

  6. I've been messing around with running LLMs locally on my laptop and seeing how they perform (subjectively, and not very systematically).

    I've been using the ellama Emacs module, which makes things like summarization and code completion very easy.

    I'm using llama3.2, which is quite a bit smaller than llama3.1 and runs very easily on my Framework 13 with an AMD Ryzen CPU.

    🧵...

    #llama3 #ellama
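
    The setup described above can be sketched as a minimal config, assuming llama3.2 is pulled into a local Ollama instance on its default port:

    ```elisp
    ;; Sketch: minimal ellama setup matching the post -- llama3.2
    ;; served by a local Ollama instance (host/port defaults apply).
    (require 'llm-ollama)
    (setopt ellama-provider
            (make-llm-ollama :chat-model "llama3.2"))
    ```

    With a provider set, `M-x ellama-summarize` on a selected region and `M-x ellama-code-complete` in a code buffer exercise the summarization and completion features mentioned in the post.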