#z-ai — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #z-ai, aggregated by home.social.
-
Fine-Tuning on My Own Commit History: The Model Now Writes Bugs in My Style
Because when you fine-tune on your own history, you are not training a model to be better than you.
-
https://www.walknews.com/1285206/ [Breaking] US: USTR Representative Greer and the EU's trade representative to meet on the 5th | FX & Forex News – Zai FX! #EU #Europe #EuropeNews #EuropeanUnion #FX #FX投資 #ZAi #オススメ #ザイ #チャート #ヨーロッパ #ヨーロッパニュース #ランキング #初心者 #外国為替証拠金取引 #欧州 #欧州連合 #比較 #為替
-
Here is my verdict on using #GLM 5.1 to refactor a large #php #laravel codebase based on #sonarqube warnings/errors:
A DISASTER ‼️ - #claude on the other hand: one shot, everything works. For the record:
GLM 5.1 was running via opencode go (same prompt). 😞 😢 I don't think GLM is bad in general; I am talking here about refactoring a large codebase in particular. Writing new code or doing small stuff works fine.
🧵 👇
-
[Translation] Running GLM-5.1 Locally
This translation was prepared by the author of the channel Друг Опенсурса; enjoy the read, and thanks in advance for subscribing. In this article we walk through the process of deploying GLM-5.1 using llama.cpp and GGUF formats in detail: system requirements, building and configuration, optimization, and practical use.
https://habr.com/ru/articles/1022242/
#glm51 #llm #Llamacpp #Unsloth #GGUF #Локальный_запуск #tool_calling #Zai #искусственный_интеллект
-
Z.ai Releases GLM-5.1: SWE-Bench Pro 58.4 and 8-Hour Continuous Runs Put Agentic Coding Front and Center
With its newly released GLM-5.1, Z.ai is trying to shift the evaluation axis itself: away from contests of one-shot code generation and toward how much finished work a model can deliver in long autonomous runs. The official documentation makes sustaining a single task for up to 8 hours the model's core value proposition, cycling through planning, execution, testing, fixing, and optimization. The framing is no longer "how smart is it in one turn" but "how long a job can it see through without falling apart." The company positions GLM-5.1 as its latest flagship model; on coding performance, Claude […]
-
https://winbuzzer.com/2026/04/09/z-ai-releases-glm-5-1-754b-model-tops-swe-bench-pro-xcxwbn/
Z.ai Releases GLM-5.1: 754B Model Tops SWE-Bench Pro
#AI #Zai #GLM51 #GLM5 #AIModels #AgenticAI #OpenSourceAI #AICoding #VibeCoding #ChinaAI #AIBenchmarks #GenerativeAI
-
The ability to tackle long-context tasks is critical for the most useful LLM applications.
A lot of research involves disproving hypotheses. Aiding researchers by letting them set the skeleton for an exhaustive search, and then using an LLM as the evolution function, has been proven to work (see Alpha Evolve, Shinka Evolve, Darwin-Gödel Machines).
Training this ability to break outside the box via RL on these trajectories, paired with techniques that allow unbounded input and output context length (RLM), seems to be the key.
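The "LLM as evolution function" pattern the post mentions can be sketched as a plain evolutionary loop. This is a minimal illustration, not the actual AlphaEvolve/ShinkaEvolve implementation: in the real systems the `mutate` operator is an LLM prompted with the best candidates so far; here a toy random-character mutator stands in so the loop is runnable.

```python
import random

def evolve(seed, score, mutate, generations=500, population=8):
    """Generic evolutionary loop: keep a small pool of candidates,
    ask a mutation operator (in practice, an LLM shown the best
    programs so far) for a variant of the current best, and retain
    only the top scorers."""
    pool = [seed]
    for _ in range(generations):
        parent = max(pool, key=score)   # exploit the best candidate
        pool.append(mutate(parent))     # propose a variant
        # keep the pool small: retain only the highest-scoring candidates
        pool = sorted(pool, key=score, reverse=True)[:population]
    return max(pool, key=score)

# Toy stand-in for the LLM: mutate one character at random; score by
# character matches against a hidden target string.
TARGET = "glm"

def toy_mutate(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + s[i + 1:]

def toy_score(s):
    return sum(a == b for a, b in zip(s, TARGET))

random.seed(0)
best = evolve("aaa", toy_score, toy_mutate)
```

The researcher supplies the skeleton (`evolve`, the scoring function, the search budget); the model only has to propose variants, which is exactly the division of labor the post describes.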
-
Z.AI releases the vision-coding model GLM-5V-Turbo for multimodal code generation.
The architecture uses a CogViT vision encoder and offers a context window of 200,000 tokens. On the Design2Code benchmark the system scores 94.8 points, at an API cost of 1.20 US dollars per million input tokens.
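At a flat per-million-token rate, input cost is a simple linear function of token count. A back-of-the-envelope sketch, using only the $1.20/M input rate and the 200k-token window quoted above (the function name and example usage are illustrative, and output tokens are typically billed separately at a different rate):

```python
def input_cost_usd(tokens, usd_per_million=1.20):
    """Input-side API cost at a flat per-million-token rate."""
    return tokens / 1_000_000 * usd_per_million

# Filling the quoted 200,000-token context window once on the input side:
cost = input_cost_usd(200_000)  # ≈ 0.24 USD
```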
#GLM5VTurbo #ZAI #Coding #LLM #News
https://www.all-ai.de/news/news26top/glm-5v-turbo-neu
-
https://winbuzzer.com/2026/04/02/zai-launches-glm-5v-turbo-multimodal-vision-model-xcxwbn/
Z.ai Launches GLM-5V-Turbo Multimodal Vision Model
#AI #ZAI #Zhipu #GLM5VTurbo #ChinaAI #China #LLMs #MultimodalAI #AgenticAI #AIModels #ComputerVision #Glm5 #Openclaw #VisionCodingModel
-
I switched to the PRO plan on my z.ai coding subscription to try out the faster #glm5. My first impression: it is much slower than 4.7, though all of z.ai's infrastructure seems to be underperforming today, including the web interface.
Let me try it some more and I will share further comments.
@dawid btw, what did you mean by huge opencode token usage? 🤑
-
I'm gonna go against the general grain on Mastodon and say it's futile to fight against AI.
The cat's out of the bag, the genie out of the bottle. The fight we can, and must, have is for the democratisation of AI and its use for the public good rather than at the behest of capital.
I'm not talking about LLMs here (though they can be helpful). Seizing the computational advances this AI wave brings is a genuinely huge opportunity for humanity: drug discovery, and advances in computational simulation that would make Soviet central planners jealous. It's not a panacea, but a lot of the work of pushing the boundaries of what's possible consists of endlessly testing and disproving theories, and that is where AI can be quite helpful.
I don't think blowing up data centres is the way to go, as it invites further brutalisation and restrictions on personal freedom to protect capital.
I think Chinese Labs are an answer to this.
I think making AI models more efficient and their use on consumer hardware tractable (Qwen3-Coder-Next is a great example that can run on Macbooks and has N-1 performance) is an answer to this.
I think fighting tooth and nail for LLM work to never be copyrightable is an answer to this.
I think new players like CXMT coming online with maybe less cutting edge, but mass affordable and accessible memory chips is an answer to this.
I think DeepSeek, Z.AI, Mistral distilling frontier models is an answer to this.
My ability to generate AI slop will inevitably outcompete your ability to shut it down. Your boss's ability to vibecode shit will outcompete your attempts to sandbox it or argue for proper due process.
The fight can only be fought by making AI economically intractable, not via moralisation.
-
z.ai's new GLM-5 model shatters hallucination records, posting the lowest rate ever seen and edging out Moonshot's Kimi K2.5. The open-source LLM leverages advanced RL techniques to boost reliability, as measured by Artificial Analysis. Curious how it outperforms? Dive into the details! #GLM5 #zAI #OpenSourceAI #LowHallucination
🔗 https://aidailypost.com/news/zais-glm-5-logs-record-low-hallucination-rate-beats-moonshots-kimi-k25
-
https://winbuzzer.com/2026/02/11/zhipu-ai-glm-5-744b-model-rivals-claude-opus-z-ai-platform-xcxwbn/
Zhipu AI Releases GLM-5: 744B Model Rivals Claude Opus
-
#ZAI: #GLM5, a new large language model, is designed for #complexsystemsengineering and long-horizon agentic tasks. It boasts 744 billion parameters and integrates #DeepSeek #SparseAttention for improved efficiency. GLM-5 outperforms previous models on various benchmarks, including #reasoning, #coding, and #agentictasks, and is open-sourced for wider accessibility. https://z.ai/blog/glm-5?AIagents.at #AIagent #AI #ML #NLP #LLM #GenAI
-
Quick deep dive 🙆♀️ into the GLM series architecture of #ZAI with Yuxuan Zhang (Zhipu AI) #OpenSourceLLM
-
Neuro-digest: key events in the AI world for the 3rd week of January 2026
Hi, this is a new issue of the "Neuro-digest", a series of short, useful overviews of the key events in the world of artificial intelligence and technology. It was a busy week: Z.AI released GLM-4.7-Flash, an ultra-light coding model that beats its competitors; Google taught Gemini to look into your photos and mail; Suno now generates mashups; and OpenAI is adding ads to ChatGPT. All the most important news in one place. Let's go! Read the digest →
https://habr.com/ru/companies/timeweb/articles/987680/
#нейросети #дайджест #ИИ #glm47flash #zai #suno #gemini #новости #google #timeweb_дайджест