home.social

#slm — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #slm, aggregated by home.social.

  1. What is #BLAS?

    BLAS is a set of fast matrix routines originally written in #Fortran.
    If you’re tired of dynamic types, hidden references, ownership rules, and endless “stream” abstractions, Free #Pascal + BLAS gives you old‑school, deterministic HPC #programming with none of the modern noise.

    #Copilot and I will be using Free Pascal and BLAS for our Small Language Model project #SLM. No more #C, #python, #Rust, or C#.

    #AI #LLM #computer

  2. Why do people use #python, a glue language, which is so slow? The only reason is the AI ecosystem.

    #Copilot and I just tested Free Pascal and BLAS for their speed, without using #numpy or #pytorch. The result is amazing. It took less than a second to do a 1024x1024 #matrix multiplication.

    We will be using Free #Pascal and #BLAS to write our Small Language Model #SLM using #NNUE.

    #AI #LLM
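
    For a rough sense of what a BLAS-backed 1024x1024 multiply costs, the sketch below times the same operation in Python via NumPy, whose `@` operator dispatches to a BLAS dgemm. This is an illustrative cross-check, not the Free Pascal benchmark from the post.

```python
# Time a 1024x1024 dense matrix multiplication through NumPy's BLAS backend.
# Illustrative cross-check only; not the poster's Free Pascal code.
import time
import numpy as np

n = 1024
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n))

start = time.perf_counter()
c = a @ b  # dispatches to a BLAS dgemm on all mainstream NumPy builds
elapsed = time.perf_counter() - start

print(f"{n}x{n} matmul took {elapsed * 1000:.1f} ms")
```

    On any machine with an optimized BLAS this comfortably finishes in well under a second, consistent with the post's observation.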

  6. I wrote an analysis for @outraspalavras.net of a document from the US-China Economic and Security Review Commission on how China chose to contest the industrial race for AI, and what the Global South can learn from it.

    outraspalavras.net/tecnologiae

    #AI #China #US #data #SLM

  7. I wanted to bring an old Nvidia 3060 back to life to do some more testing with #SLM / #LLM - and accidentally installed the ancient 660 instead.

    I didn't even remember I still had it.

    Why do graphics cards have to look so similar at first glance?

  8. I have now reached the "I let a good #LLM evaluate the output of several #SLM s" phase...

  9. llmfit — one command to find which LLMs actually fit your hardware

    github.com/AlexsJones/llmfit

    Detects your RAM/GPU, scores hundreds of models, works with Ollama, LM Studio, llama.cpp...

    Written in Rust of course 🙌

    Handy when you're trying to optimize for SLMs like I do with @silex MCP => silex.me/ai/

    Have you tried local models (Ollama, LM Studio, llama.cpp...)?

    How did they perform? Let me know 👇

    #Rust #LocalAI #SLM #FOSS #buildinpublic
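
    The core idea - weight bytes per parameter at a given quantization, plus runtime overhead, compared against available memory - can be sketched in a few lines. The sketch below is a hypothetical back-of-envelope version of that idea, not llmfit's actual detection or scoring logic; the byte-per-parameter figures are rough averages for common GGUF quantizations.

```python
# Back-of-envelope check of whether a model's weights fit in RAM/VRAM.
# Hypothetical sketch of the idea behind fit-checking tools; not llmfit's code.

BYTES_PER_PARAM = {"f16": 2.0, "q8_0": 1.0, "q4_k_m": 0.5}  # rough averages

def fits_in_memory(n_params: float, quant: str, mem_gb: float,
                   overhead: float = 1.2) -> bool:
    """True if weights plus ~20% KV-cache/runtime overhead fit in mem_gb."""
    weight_gb = n_params * BYTES_PER_PARAM[quant] / 1e9
    return weight_gb * overhead <= mem_gb

# A 7B model at ~4-bit needs roughly 7e9 * 0.5 * 1.2 / 1e9 ≈ 4.2 GB.
print(fits_in_memory(7e9, "q4_k_m", 8.0))  # True: fits in 8 GB
print(fits_in_memory(7e9, "f16", 8.0))     # False: 16-bit weights need ~16.8 GB
```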

  10. In my journey to make @silex AI-native and optimized for free #OpenSource local models

    I just released a grapesjs-ai-capabilities plugin, inspired by #WordPress new capability API: github.com/silexlabs/grapesjs-

    Now I'll refactor all the #GrapesJS plugins I maintain to expose capabilities as #MCP tools, with minimal prompts and very specific errors for #SLM

    Then I’ll benchmark different models and maybe experiment with fine-tuning

    #GrapesJS #AI #LocalAI #NoCode #BuildInPublic #FOSS #MCP #LLM

  11. Geometry > Scale: how 40M parameters on an E8 lattice beat classical transformers

    Folks, it seems we have hit a wall. While the giants pile on parameters and burn terawatts trying to squeeze a drop of intelligence out of statistics, I decided to rethink the very foundation. The problem is not the data; the problem is the "viscosity" of standard Attention.

    habr.com/ru/articles/1005298/

    #llm #E8 #transformer #transformers #edgeai #slm

  12. Any #FOSDEM discussion-party-like goers wanna adopt me for the evening?

    Interested in hearing about anything interesting, especially #devops, #SBC (s), #hiking, #SLM, story-driven games

  13. [Translation] How to make (very) small LLMs genuinely useful

    The AI for Devs team has prepared a translation of an article on how to get the most out of small language models. The author shows that even very compact LLMs can be useful for real-world tasks - if you handle context, embeddings and RAG properly.

    habr.com/ru/articles/986770/

    #llm #slm #rag #embeddings #docker
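
    The retrieval step that makes small models useful can be shown in miniature: find the snippet most similar to the question and prepend it to the prompt, so the model answers from supplied context rather than its own weights. The sketch below uses a toy bag-of-words cosine similarity in place of real embeddings; the corpus and prompt wording are made up for illustration.

```python
# Minimal RAG retrieval step: pick the corpus snippet most similar to the
# question and build a context-grounded prompt. Bag-of-words cosine stands
# in for real embeddings; all data here is illustrative.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, corpus: list[str]) -> str:
    q = Counter(question.lower().split())
    return max(corpus, key=lambda doc: cosine(q, Counter(doc.lower().split())))

corpus = [
    "the office wifi password is hunter2",
    "lunch is served at noon in the cafeteria",
]
context = retrieve("what is the wifi password", corpus)
prompt = f"Answer using this context only:\n{context}\n\nQ: what is the wifi password"
print(context)
```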

  14. Everyone is hyping up LLMs, but in 2026 SLMs are quietly winning key battles. 🤖
    LLMs: great for broad, creative, multi-domain reasoning.
    SLMs: faster, cheaper, privacy-friendly specialists for focused tasks and edge deployments.

    New TechGlimmer article on when to pick each model and why “bigger” isn’t always better:
    🔗 techglimmer.io/llm-vs-slm-in-2

    #AI #LLM #SLM #GenAI #FOSS #MLOps

  15. The FM broadcasts of the free radio stations in Saxony (@coloRadio, @radiot, #RadioBlau) are at risk after the exit of full-schedule programme provider Apolloradio - the Saxon state media authority (Sächsische Landesmedienanstalt) has rejected a full-programme licence for the free radio stations, and the FM transmitter network operator is unwilling to keep the frequencies running 24/7 for just a few hours of window programming per day.
    radioblau.de/ukw-ausstrahlung-
    #FreieRadios #Sachsen #SLM #UKW #RadioT #coloradio #FreiesRadio

  16. [Translation] The capability plateau, or where machine learning is heading in 2026

    2025 was a sobering year for AI. The era of scaling is coming to an end. Experts agree that the next qualitative leap cannot be achieved simply by adding more data and compute. New architectures, compact models and fundamentally different training approaches are coming to the fore. In 2026 the industry will, it seems, answer the question of what AI can offer us here and now. We are entering the era of pragmatic AI. Let's try to work out which trends will define the face of AI in the coming year 2026.

    habr.com/ru/companies/bothub/a

    #вайбкодинг #2025_год #2026_год #ииагенты #алекс_крижевский #илья_суцкевер #джеффри_хинтон #openai #gpt3 #slm

  17. Started testing this - it exceeded expectations for a small model: essential.ai/research/rnj-1 #slm #llm

  18. #Claude4 #Sonnet and I have just deployed our second #AgenticAI on our #ICandy #Browser #dashboard project. We now have a total of 12 apps on this dashboard. The #AIAgent is a local AI chat app that lets me chat with any of the small #LLM models that I downloaded and run on #LMStudio. I use them for learning #programming, #Math and #Japanese. This module can also generate 3000 #English sentences, based on a vocabulary set I upload to it, that will be used for training my #NNUE #SLM.

  19. I created 7 #AgenticAI agents, 1 #CSharp #app, and 1 #Typescript app using #Claude Pro in 3 days. We are both exhausted. For $200 a year, I will be able to create more than 100 apps using Claude 4 #Sonnet. I will also be able to finish my #NNUE #SLM project and my #VR entertainment room project, and learn as much as I can in just one year. Who says AIs are bad? You don't need to be a programmer or a big corporation to benefit from #AIs; you only need to be #computer literate. #AI

  20. I just installed #copilot for Obsidian. After getting the API key from Google, I can now use #Gemini 2.5 while writing notes for my #NNUE #SLM project in #Obsidian. In fact, I can choose other AIs to help me learn C# and other languages to code my project in Obsidian. Or I can focus on my model logic instead of spending most of my time learning computer languages. I can also use other AIs in my browser and in VS Code. All of a sudden, I am going full #AI mode. I don't even need cloud storage.

  21. Why is an #NNUE Broad Shallow Learning Small Language Model much more efficient and transparent than Deep Learning? I can use just 2 layers for my #SLM AI to learn English #Semantics. With this foundation, I can add more modules to the model to improve her understanding of the meanings of words in different contexts and to learn other knowledge such as #mathematics. NNUE gives rise to a clear knowledge space and ease of computation. Sparse matrix multiplication is unnecessary. #AI
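
    The reason sparse matrix multiplication becomes unnecessary in an NNUE-style first layer can be shown directly: when the input vector is sparse and binary, x @ W equals the sum of W's rows at the active indices, so a cheap row-gather accumulator replaces the general multiply. The sizes below are toy values, not the poster's actual model.

```python
# NNUE-style first layer: for a sparse binary input, x @ W equals summing
# W's rows at the active feature indices, replacing a full (or sparse)
# matrix multiplication. Toy dimensions for illustration.
import numpy as np

n_features, hidden = 1000, 16
rng = np.random.default_rng(42)
W = rng.standard_normal((n_features, hidden))

active = [3, 41, 977]              # the few "on" input features
x = np.zeros(n_features)
x[active] = 1.0

dense_out = x @ W                  # full multiply: O(n_features * hidden)
accum_out = W[active].sum(axis=0)  # accumulator:   O(len(active) * hidden)

print(np.allclose(dense_out, accum_out))  # the two results are identical
```

    The same gather trick covers the "huge parameter matrix times super-sparse word-vector input" case: the cost scales with the number of active features, not with the full vocabulary size.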

  22. I am now paying $16.80 a month for #Gemini Advanced for a couple of reasons: 1. I get 2 TB of cloud storage. 2. I get #DeepResearch... I hate reading because I can't follow other people's logic; to learn anything new, I first have to be curious about it. I asked Gemini 2.5 to generate a deep report on #SparseMatrix because I need to speed up the matrix multiplication for my #NNUE #SLM, which has a huge parameter matrix and a super-sparse input matrix representing word vectors. #AI

  23. The latest 10-min #Video

    SPOTLIGHT: #GÜTERZÜGE (freight trains)
    Since the weekend, the fourth series has been running, with a special focus on #Schüttgut (bulk goods: gravel, crushed stone, sand, excavated material)

    #peertube video.ploud.fr/w/kKLSGECG62nNt

    #youtube : youtu.be/GETOZVlE1n8

    - SBBC DOPPELTRAKTION Re 620

    - SBB Re 420

    - SBB Re 620

    - SBBC Re 620

    - TÄGERHARD-TUNNEL

    - SBBC Am 843

    - HASTAG KIESWERK VERLAD

    - SBBC LOKZUG Re 620

    - ZUFAHRT & KIESWERK HOLCIM

    - MBC DOPPELTRAKTION Ge 4/4

    and much more.

    #TRAXX #BR189 #vectron #re465 #dbcargo #re475 #bls #blscargo #re420 #br185 #postzug #re430 #re620 #re484 #rupperswil #würenlos #heitersberg #furttal #gotthard #simplon #olten #eisenbahn #bahn #bahnhof #train #trainspotting #lokführer #cff #ffs #sbb #sbbcargo #sbbc #doppeltraktion #bombardier #alstom #slm #db #arthgoldau #blsc #beacon #mrce #güterzug #mbc #BOXX #evu

  24. WSW wants to go non-commercial: as Sachsen Fernsehen head Frank Haring announced, he has applied to convert WSW's licence from "commercial" to "non-commercial radio". He has also applied for funding from the NKL pot. "It really is absurd that you can now get more money for an NKL project than you can earn from advertising with a local radio station in a local market like Weißwasser," he says. flurfunk-dresden.de/2023/06/26

    #Medien #Sachsen #NKL #FreieRadios #WSW #SLM #Radio #Lausitz