home.social

#llm-optimization — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #llm-optimization, aggregated by home.social.

  1. Prompt Repetition Improves Non-Reasoning LLMs: Google's New Study. Google researchers found that simply repeating your prompt, copying and pasting it twice, dramatically improves LLM accuracy ... (a minimal sketch of the trick follows this list)

    #promptlayer #prompt-engineering #llm-optimization #google-research #prompt-repetition #ai-accuracy

  2. New Nvidia research cuts LLM reasoning cost by 8× while keeping accuracy intact. Compressing the transformer's key-value cache with dynamic memory tricks makes inference far cheaper for everyone. A must-read for anyone building open-source LLMs. (A generic cache-shortening illustration follows this list.) #DynamicMemoryCompression #KeyValueCache #NvidiaAI #LLMOptimization

    🔗 aidailypost.com/news/nvidia-te

  3. Manual prompt engineering is done. Discover meta-recursive prompting, where LLMs optimize their own instructions for superior accuracy, depth, and 3x better quality. (A self-refining prompt loop is sketched after this list.) hackernoon.com/never-write-a-p #llmoptimization

  4. Microsoft targets the hidden cost problem in AI with LLMLingua, a prompt-compression tool that makes large language model calls faster, cheaper, and smarter. (A minimal usage sketch follows this list.) hackernoon.com/how-to-compress #llmoptimization

  5. We used to SEO for humans. Now we're SEOing for bots pretending to be humans, reading content written by bots pretending to be humans, reviewed by humans pretending they still matter. 🌀 #LLMoptimization #AIReflux
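
Sketch for post 1 (prompt repetition): the idea is just to concatenate the query with itself before sending it to a model. This is a minimal illustration; the `repeat_prompt` helper, the separator string, and the example question are assumptions of this sketch, not details from the cited Google study.

```python
def repeat_prompt(prompt: str, times: int = 2, separator: str = "\n\n") -> str:
    """Duplicate the prompt text so the model sees it `times` times.

    The separator is an assumption of this sketch; the cited study simply
    copies and pastes the prompt twice.
    """
    return separator.join([prompt] * times)


if __name__ == "__main__":
    question = "Which number is larger: 9.11 or 9.9?"
    doubled = repeat_prompt(question)
    # `doubled` is what would be sent as the user message to any chat LLM;
    # the post's claim is that non-reasoning models answer more accurately
    # when the prompt appears twice.
    print(doubled)
```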
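
Sketch for post 2 (key-value cache compression): Nvidia's Dynamic Memory Compression learns, per attention head, when to merge new entries into the cache rather than append them; the toy function below is not that algorithm. It only illustrates why a shorter key-value cache makes decoding cheaper, by naively averaging adjacent cached key/value vectors to halve the cache length.

```python
import numpy as np


def merge_adjacent_kv(keys: np.ndarray, values: np.ndarray):
    """Halve a (seq_len, head_dim) key/value cache by averaging adjacent pairs.

    A toy illustration of cache compression, NOT Nvidia's Dynamic Memory
    Compression, which learns merge-vs-append decisions during training.
    """
    usable = keys.shape[0] - keys.shape[0] % 2  # drop a trailing odd entry
    k = keys[:usable].reshape(-1, 2, keys.shape[1]).mean(axis=1)
    v = values[:usable].reshape(-1, 2, values.shape[1]).mean(axis=1)
    return k, v


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    keys = rng.normal(size=(128, 64))
    values = rng.normal(size=(128, 64))
    k2, v2 = merge_adjacent_kv(keys, values)
    # Attention over the compressed cache now scans 64 entries instead of 128,
    # which is where the memory and latency savings come from.
    print(keys.shape, "->", k2.shape)
```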
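
Sketch for post 3 (meta-recursive prompting): the model is repeatedly asked to critique and rewrite its own instructions. `call_llm` is a hypothetical stand-in for whatever chat-completion client you use, and the rewrite prompt wording is an assumption of this sketch, not the linked article's exact procedure.

```python
from typing import Callable


def refine_instructions(call_llm: Callable[[str], str], task: str,
                        instructions: str, rounds: int = 3) -> str:
    """Let the model iteratively rewrite its own instructions for a task.

    `call_llm` is a hypothetical function: it takes a prompt string and
    returns the model's text response.
    """
    for _ in range(rounds):
        rewrite_prompt = (
            f"Task: {task}\n"
            f"Current instructions: {instructions}\n"
            "Rewrite these instructions so that a language model following "
            "them would give a more accurate and more detailed answer. "
            "Return only the improved instructions."
        )
        instructions = call_llm(rewrite_prompt).strip()
    return instructions


if __name__ == "__main__":
    # Stub model so the sketch runs without an API key; a real setup would
    # call an actual chat-completion endpoint here.
    fake_llm = lambda prompt: "Answer step by step, cite sources, then summarize."
    print(refine_instructions(fake_llm, "Explain KV-cache compression",
                              "Explain the topic briefly."))
```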
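
Sketch for post 4 (LLMLingua): a minimal usage example, assuming the pip-installable `llmlingua` package and its `PromptCompressor.compress_prompt` entry point; the token budget and the example question are arbitrary choices of this sketch.

```python
# pip install llmlingua  (downloads a small compression model on first use)
from llmlingua import PromptCompressor

# Stand-in for several thousand tokens of retrieved documents.
long_context = "Large language models are expensive to prompt at length. " * 200

compressor = PromptCompressor()

# Ask LLMLingua to squeeze the context down to roughly 200 tokens while
# keeping the parts most relevant to the question.
result = compressor.compress_prompt(
    long_context,
    question="Why does prompt compression reduce inference cost?",
    target_token=200,
)

# The compressed text is what gets sent to the expensive downstream LLM.
print(result["compressed_prompt"])
```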