home.social

#slms — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #slms, aggregated by home.social.

  1. Small Language Models (SLMs) represent a shift toward ultra-efficient, privacy-preserving AI that can run locally and offline. For businesses, that means lower latency, reduced infrastructure costs, and stronger data protection. Dive into practical implications and where SLMs fit into your AI strategy: wix.to/NWrnNyX

    #AI
    #SLMs
    #EdgeAI
    #Innovation
    #DataPrivacy
    #MachineLearning

  2. OwnAether Personal AI Operating Systems: What if your entire digital life — your work, your income, your creativity, your health, your automation, your business — was orchestrated by a single intelligent layer that learns you, works for you, and evolves with you?

    medium.com/@ownaether/the-pers

    #AI #PersonalAI #IndividualAI #MyAI #YourAI #LocalAI #DesktopAI #AIApps #PrivateAI #LLMs #SLMs #AIModels #PersonalAIAssistant #PersonalAIApp

  3. “The Saving Voices Project recently built a speech AI system for the Indigenous Soliga tribe in southern India. As younger members migrated to the cities for jobs, elders in the community feared losing their language. With a small number of speakers, no written script, and no internet access, commercial speech technology was not an option. The Saving Voices Project, along with the Indian Institute of Information Technology, Dharwad, custom-built cheap text-to-speech AI models that run on low-powered devices, and can operate offline for long periods.

    The model is replicable for Indigenous language preservation globally, Sathiaseelan said.

    “With just five hours of voice data, we were able to build a voice model for the Soliga by prioritizing community ownership, and with frugal, deployable technology,” he said.

    Unlike the compute-heavy AI models developed by Silicon Valley, the smaller models being built in India, Indonesia, and elsewhere can run on low-end devices and low-bandwidth networks, and be deployed in sectors such as agriculture, health-care, and education. The models are not only cost-efficient, they also have a lower impact on the environment, Sathiaseelan said.

    “This is perhaps the most important dimension of frugal AI,” he said. “It is about building leaner, more efficient systems from the ground up. By design, the systems use less compute, less memory, and less energy, which directly translates into a smaller carbon footprint.””

    restofworld.org/2026/frugal-ai

    #AI #SLMs #FrugalAI #GlobalSouth #BigTech

  4. "If the strengths of A.I. are to truly be harnessed, the tech industry should stop focusing so heavily on these one-size-fits-all tools, and instead concentrate on narrow, specialized A.I. tools engineered for particular problems. Because, frankly, they’re often more effective.

    Until the advent of chatbots, most A.I. developers focused on building special-purpose systems, for things like playing chess or recommending books and movies to consumers. These systems were not nearly as sexy as talking to a chatbot, and each project often took years to get right. But they were often more reliable than today’s generative A.I. tools, because they didn’t try to learn everything from scratch and were often engineered on the basis of expert knowledge.

    Take chess. If you ask a large language model (the kind of A.I. that powers a chatbot like ChatGPT) to play a game of chess, it struggles to play well and often makes illegal moves, never fully grasping the rules of the game, even after exposure to huge amounts of relevant training data.

    Special-purpose programs for chess, in contrast, are programmed from the outset to follow a built-in set of rules, and structured around core notions such as board structure and a tree of possible moves. Such systems never make illegal moves, and the best special-purpose chess systems can easily beat even the most skilled humans. Remarkably, an Atari 2600, using custom A.I. software built in the 1970s, was recently reported to have beaten a large language model."

    nytimes.com/2025/10/16/opinion

    #AI #GenerativeAI #LLMs #Chatbots #SLMs

  5. Is faith in the supposed “God-like” powers of large language models (LLMs) waning as businesses and developers shift their focus to smaller, more nimble alternatives?

    This trend suggests a significant change in the AI landscape, with important implications for both the tech giants at the forefront and those, like Apple, that have taken a more cautious approach.

    neurodoctor.com/2025/09/09/fai
    #ai #artificialintelligence #llm #slms #apple #nvidia #openai

  6. "AI-powered influence operations can now be executed end-to-end on commodity hardware. We show that small language models produce coherent, persona-driven political messaging and can be evaluated automatically without human raters. Two behavioural findings emerge. First, persona-over-model: persona design explains behaviour more than model identity. Second, engagement as a stressor: when replies must counter arguments, ideological adherence strengthens and the prevalence of extreme content increases. We demonstrate that fully automated influence-content production is within reach of both large and small actors. Consequently, defence should shift from restricting model access towards conversation-centric detection and disruption of campaigns and coordination infrastructure. Paradoxically, the very consistency that enables these operations also provides a detection signature."

    arxiv.org/html/2508.20186v1

    #AI #GenerativeAI #LLMs #Chatbots #Propaganda #AIPropaganda #Disinformation #SLMs

  7. 🧠 Is the future of agentic #AI small?

    💡 According to NVIDIA Research, Small Language Models (#SLMs) offer a winning combination.

    👉 Details: linkedin.com/posts/alessiopoma

    ___

    ✉️ If you want to stay up to date on these topics, subscribe to my newsletter: bit.ly/newsletter-alessiopomar

    #AI #GenAI #GenerativeAI #IntelligenzaArtificiale #LLM 

  8. 🚀 Why pay more for cloud AI when smarter AI fits in your watch?
    Discover how Small Language Models are quietly outperforming LLMs —
    • 8X faster
    • 90% cheaper
    • 100% offline 🤯

    From Tesla to smart clinics, this is the AI story no one's telling — yet.
    Read the full piece 👇
    🔗 medium.com/@rogt.x1997/8x-fast

    #EdgeAI #SLMs #TinyML #FutureReady

  9. "Ai2 tested DataDecide across a wide range of datasets and model sizes, using 10 benchmarks to evaluate how well small models predict large-scale performance. The findings aren’t earth-shattering, but they present useful takeaways for AI developers and researchers.

    For one, Ai2 found that small models (around 150 million parameters) can predict large-scale outcomes with surprising accuracy. Some benchmarks reached over 80% decision accuracy using just 0.01% of the compute compared to billion-parameter models.

    Since small-model experiments use less compute than other methods, developers don’t need to run full-scale tests just to predict outcomes. “The promise of this work is lower compute costs during training,” said Pijanowski.

    Ai2 found that scaling laws didn’t outperform the simpler method of ranking datasets by small-model results. Scaling laws, a more sophisticated and more costly testing method, aim to predict how accuracy improves with model size. For now, “just stick with ablating things at one scale,” advised Magnusson.

    The findings should give LLM devs pause for thought, Hunt said: "There are scaling laws that have been derived from empirical studies between data volume, compute resources and performance. Ai2’s research points out that we may want to revisit some of those assumptions.""

    thenewstack.io/new-tools-help-

    #AI #GenerativeAI #LLMs #AITraining #SLMs

  10. Are you passionate about the latest in #AI? Here's your chance to shine!

    ✍️ Join the #InfoQ Annual Article Writing Competition!

    🏆 Win a #FreeTicket to #QCon or #InfoQDevSummit!

    🔗 Submit by March 30, 2025: bit.ly/417KPtk

    Which AI topic are you most excited to explore?

    Explore topics like #LLMs, #SLMs, #vLLMs, #GenAI, #VectorDatabases, #ExplainableAI, #RAG, and more!

  11. This #InfoQ #eMag brings together our most popular InfoQ Trends Reports from 2024, offering a deep dive into:
    💡 Cell-based architectures
    💡 Socio-technical systems
    💡 Large and small language models (LLMs & SLMs)
    💡 State-of-the-art innovations in the Java ecosystem

    Whether you're a developer, architect, technology leader, or simply a tech enthusiast, these reports provide actionable insights and valuable perspectives to help you:
    🚀 Plan your future roadmaps
    🚀 Explore emerging technologies & practices

    🔗 Download it for free: bit.ly/3PEiyoG

    #TrendsReport #SoftwareTrends #FreeDownload

    #SoftwareArchitecture #SoftwareDevelopment #LLMs #SLMs #Java

  12. "To prevent AI models from memorizing their input, we know exactly one robust method: differential privacy (DP). But crucially, DP requires you to precisely define what you want to protect. For example, to protect individual people, you must know which piece of data comes from which person in your dataset. If you have a dataset with identifiers, that's easy. If you want to use a humongous pile of data crawled from the open Web, that's not just hard: that's fundamentally impossible.

    In practice, this means that for massive AI models, you can't really protect the massive pile of training data. This probably doesn't matter to you: chances are, you can't afford to train one from scratch anyway. But you may want to use sensitive data to fine-tune them, so they can perform better on some task. There, you may be able to use DP to mitigate the memorization risks on your sensitive data.

    This still requires you to be OK with the inherent risk of the off-the-shelf LLMs, whose privacy and compliance story boils down to "everyone else is doing it, so it's probably fine?".

    To avoid this last problem, and get robust protection, and probably get better results… Why not train a reasonably-sized model entirely on data that you fully understand instead?"

    desfontain.es/blog/privacy-in-

    #AI #GenerativeAI #LLMs #SLMs #Privacy #DifferentialPrivacy #Memorization