home.social

#icml — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #icml, aggregated by home.social.

  1. Linguistic Calibration trains Llama 2 to emit confidence phrases that let a downstream reader make calibrated forecasts on related questions. The key move is defining calibration through reader utility rather than self-reported probability: hedged text that doesn't actually inform the reader earns no forecasting gain, so generic hedging can't game the objective (toy sketch below).

    benjaminhan.net/posts/20260505

    #LLMs #Calibration #Hallucination #ICML #AI
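
    A minimal sketch of the reader-utility idea, with a rule-based toy reader standing in for the paper's learned reader model (the phrases and probabilities here are invented for illustration):

    ```python
    import math

    # Toy stand-in for the learned "reader" that turns an LLM's hedged
    # statement into a probability forecast for a downstream question.
    # "unlikely" is checked before "likely" so the substring match is safe.
    PHRASE_TO_PROB = {"almost certainly": 0.95, "unlikely": 0.25,
                      "likely": 0.75, "unclear whether": 0.50}

    def reader_forecast(statement: str) -> float:
        for phrase, p in PHRASE_TO_PROB.items():
            if phrase in statement:
                return p
        return 0.5  # hedging the reader can't use -> uninformative forecast

    def reader_log_score(statement: str, outcome: bool) -> float:
        """Calibration objective: the log-score of the *reader's*
        forecast, not of the model's self-reported probability."""
        p = reader_forecast(statement)
        return math.log(p if outcome else 1.0 - p)

    # A hedge that informs the reader beats a vacuous one on a true claim:
    print(reader_log_score("X is likely true", True))                # log 0.75
    print(reader_log_score("Broadly, X may or may not hold", True))  # log 0.5
    ```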

  2. Conformal Factuality casts LM correctness as uncertainty quantification. Decompose the answer into sub-claims, score each, and drop the low-confidence ones until the retained set is factual with probability ~1-α (sketch below). The sub-claim decomposition is doing most of the work, and the conformal machinery rides on top. Atomic-claim splitters have known failure modes, and the guarantee inherits them.

    benjaminhan.net/posts/20260505

    #ConformalPrediction #Calibration #Hallucination #LLMs #ICML #AI
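
    A toy version of the filtering step (made-up scores and a deliberately simplified calibration; the paper's actual guarantee comes from proper split-conformal calibration on held-out data):

    ```python
    def calibrate_threshold(cal_scores, cal_correct, alpha=0.1):
        """Pick the lowest confidence threshold t such that, among
        held-out calibration sub-claims scoring >= t, at least a
        1 - alpha fraction are factual. Simplified for illustration."""
        for t in sorted(set(cal_scores)):
            kept = [ok for s, ok in zip(cal_scores, cal_correct) if s >= t]
            if kept and sum(kept) / len(kept) >= 1 - alpha:
                return t
        return float("inf")  # no threshold reaches the target level

    def filter_subclaims(subclaims, scores, t):
        """Drop low-confidence sub-claims; return the retained set."""
        return [c for c, s in zip(subclaims, scores) if s >= t]

    t = calibrate_threshold([0.2, 0.6, 0.8, 0.9], [False, True, True, True])
    print(filter_subclaims(["claim A", "claim B", "claim C"],
                           [0.3, 0.7, 0.95], t))  # drops shaky "claim A"
    ```

    If the splitter merges two facts into one "atomic" claim, the score and the guarantee attach to the merged unit, which is one way the splitter's failure modes propagate into the guarantee.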

  3. Position paper: today's self-improving agents lean on extrinsic metacognition — fixed human-designed loops about what to monitor, when to switch strategies. Genuine self-improvement needs the agent itself to decide those.

    The intrinsic/extrinsic axis is the right lens for recent agent work: STaR, DSPy, MASS, and MetaSPO are all extrinsic by this definition. The optimistic bet is that current LLMs already carry some of the ingredients for intrinsic metacognition.

    benjaminhan.net/posts/20260430

    #LLMs #AI #AgenticSystems #Cambridge #ICML

  4. blog.icml.cc/2026/03/18/on-vio

    This is wild. #ICML let reviewers individually choose whether to work under a no-LLM policy or a light-LLM-use policy. Those who chose the no-LLM policy received watermarked PDFs with hidden instructions to include specific phrases in LLM output. Using this technique, they caught almost 800 reviews that violated the policy *the reviewers had chosen themselves*! And this was just a conservative detection approach, one that fails if the reviewer even slightly paraphrases the LLM output (phrase-check sketch below).
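
    The detection side is simple enough to sketch; the canary phrases below are invented, and the actual watermarking of the PDFs is the part this skips:

    ```python
    # Each no-LLM reviewer's PDF hides instructions telling any LLM to
    # include reviewer-specific phrases, so detection reduces to string
    # matching against the submitted review text.
    CANARIES = {
        "reviewer_417": ["deftly interweaves", "a tapestry of gradients"],
        "reviewer_892": ["boldly reimagines", "paradigmatically sound"],
    }

    def flag_violations(reviews: dict[str, str]) -> list[str]:
        """Return reviewer IDs whose review contains one of their canary
        phrases. Conservative by construction: any paraphrase escapes."""
        return [rid for rid, text in reviews.items()
                if any(p in text.lower() for p in CANARIES.get(rid, []))]

    print(flag_violations({
        "reviewer_417": "This paper deftly interweaves theory and practice.",
        "reviewer_892": "The method is sound and the evaluation thorough.",
    }))  # -> ['reviewer_417']
    ```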

  5. Ah, the prestigious #ICML, bravely tackling the earth-shattering crisis of AI-assisted reviews by rejecting a whopping 2% of papers! 🤖📄 Clearly, the #integrity of peer review hangs by a thread, as program chairs valiantly protect us from the existential threat of Large Language Models daring to assist. 😂 Bravo, ICML, for saving us from this apocalypse!
    blog.icml.cc/2026/03/18/on-vio #AIreviews #PeerReview #LargeLanguageModels #HackerNews #ngated

  6. This is an interesting policy change regarding author attendance. Much more inclusive, but will authors struggle now to justify their travel expenses? Would be interesting to see how this affects author participation.

    From the ICML 2026 CfP icml.cc/Conferences/2026/CallF

    #icml #conferences

  7. New paper accepted! In which circumstances can we use abundant proxy preferences to quickly learn true preferences? I'm glad to announce that our paper explores this question and proposes a model for one of these cases. Check out Yuchen's thread on Bluesky bsky.app/profile/zhuyuchen.bsk . #ICML2025 #ICML

  8. Welcome to CAMELoT

    Large language models (LLMs) struggle with long input sequences because of high memory and runtime costs. Memory-augmented models have emerged as a promising solution, but current methods have limited memory capacity and require costly retraining to integrate with a new LLM. This article introduces an associative memory module that can be attached to any pre-trained LLM without retraining, letting it handle arbitrarily long input sequences. Unlike previous methods, the module consolidates the representations of individual tokens into a non-parametric distribution model, managed dynamically by balancing the novelty and recency of incoming data. By retrieving information from this consolidated associative memory, the base LLM achieves better results on standard benchmarks. The architecture is called CAMELoT (Consolidated Associative Memory Enhanced Long Transformer). It performs well even with a tiny context window of 128 tokens and also enables improved in-context learning with a much larger set of demonstrations (toy consolidation sketch below).

    habr.com/ru/companies/first/ar

    #CAMELoT #MachineLearning #ICML
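
    A rough sketch of the consolidation idea described above: a tiny slot memory where novel inputs evict the stalest slot and familiar inputs are merged into their nearest slot by a running mean. The names and thresholds are invented; this shows the flavor, not the paper's code.

    ```python
    import numpy as np

    class ToyAssociativeMemory:
        """Consolidates a stream of token representations into a fixed
        number of slots, crudely balancing novelty against recency."""

        def __init__(self, n_slots, dim, novelty_threshold=1.0):
            self.keys = np.zeros((n_slots, dim))
            self.counts = np.zeros(n_slots)     # tokens absorbed per slot
            self.last_used = np.zeros(n_slots)  # recency, for eviction
            self.novelty_threshold = novelty_threshold
            self.t = 0

        def write(self, x):
            self.t += 1
            dists = np.linalg.norm(self.keys - x, axis=1)
            i = int(np.argmin(dists))
            if self.counts[i] == 0 or dists[i] > self.novelty_threshold:
                i = int(np.argmin(self.last_used))  # novel: evict stalest
                self.keys[i], self.counts[i] = x, 1.0
            else:                                   # familiar: running mean
                self.counts[i] += 1.0
                self.keys[i] += (x - self.keys[i]) / self.counts[i]
            self.last_used[i] = self.t

        def read(self, query):
            """Nearest consolidated representation; in CAMELoT-style use,
            the base LLM would attend over retrievals like this one."""
            d = np.linalg.norm(self.keys - query, axis=1)
            return self.keys[int(np.argmin(d))]

    mem = ToyAssociativeMemory(n_slots=4, dim=8)
    for _ in range(100):                     # far more tokens than slots
        mem.write(np.random.randn(8) * 0.1)  # one tight cluster near zero
    print(mem.read(np.zeros(8)).round(2))
    ```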

  9. 🎉 Two papers from the #MachineLearning and #NLP teams @LipnLab were accepted to #ICML!
    ▶️ The paper "Delaunay Graph: Addressing Over-Squashing and Over-Smoothing Using Delaunay Triangulation" by H. Attali, D. Buscaldi, N. Pernelle presents a novel graph rewiring method that incorporates node features with low complexity to alleviate both Over-Squashing and Over-Smoothing issues (rough rewiring sketch below).
    🔗 sites.google.com/view/hugoatta
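
    For flavor, a minimal rewiring sketch in the spirit of the abstract: project node features to 2-D and take the Delaunay triangulation's edges as the new graph. The reduction step and all other details here are guesses, not the authors' method.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def delaunay_rewire(node_features):
        """Rebuild the edge set from a Delaunay triangulation of the
        node features reduced to 2-D. Planar triangulations keep degrees
        moderate and paths short, the intuition for easing both
        over-squashing and over-smoothing."""
        x = node_features - node_features.mean(axis=0)
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        pts = x @ vt[:2].T                   # top-2 principal components
        edges = set()
        for tri in Delaunay(pts).simplices:  # each triangle -> 3 edges
            for a in range(3):
                i, j = sorted((tri[a], tri[(a + 1) % 3]))
                edges.add((int(i), int(j)))
        return edges

    feats = np.random.randn(20, 16)  # 20 nodes with 16-dim features
    print(len(delaunay_rewire(feats)), "edges after rewiring")
    ```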

  10. #ICML will have a Position Paper track. "The goal of this track is to highlight papers that stimulate (productive, civil) discussion on timely topics that need our community’s input" #AI

    Read more here: icml.cc/Conferences/2024/CallF

  11. Perhaps #AI language learning can be a byproduct of trying to function in a real world, like children do. "...perhaps the best way forward is to combine the approaches by augmenting emergent language learning with direct language supervision."

    [2306.08400] Simple Embodied Language Learning as a Byproduct of Meta-Reinforcement Learning. #ICML

    arxiv.org/abs/2306.08400

    doi.org/10.48550/arXiv.2306.08

  12. .@pamelasamuelson with a great talk here at #ICML but her list of disruptive technologies that challenged (and changed) copyright does not include my favorite example—the player piano.

    I say this mostly in jest, but I do find that talking about player pianos and the Copyright Act of 1909 helps ground and humble discussions with technologists—copyright law, and creators, have faced challenges from tech for a very long time.

  13. Even more unpopular opinion: no one should be hosting a data set that might be used in ML training without seasoned T&S experts on the team.

    (If you’re at #ICML and want to discuss this hot take, poke me!)

    From: @dalias
    hachyderm.io/@dalias/110794388

  14. A friend attending #ICML just sent me a photo of this front runner for best poster award.

  15. I will be at ICML for the workshops later this week. Thanks to @emtiyaz and Thomas (moellenh.github.io) for inviting me to the workshop on “Duality Principles for Modern Machine Learning” (dp4ml.github.io); hope to also attend the TAG-ML workshop tagds.com/events/conference-wo on Friday.

    #ICML #Duality #Manifolds

  16. #CFP for `Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators`, a workshop at #ICML

    differentiable.xyz/

    twitter.com/FHKPetersen/status

    Differentiable programming is a powerful tool, so I am quite interested in this workshop (especially as a user of #JuliaLang, which has fantastic #AD support; toy example below).

    #AutomaticDifferentiation #ML
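
    The flavor of the topic in a few lines (plain numpy, illustration only): a hard step function gives an optimizer zero gradient almost everywhere, while a temperature-controlled sigmoid relaxation of it lets gradients flow.

    ```python
    import numpy as np

    def hard_step(x):
        """Heaviside step: flat almost everywhere, so no gradient signal."""
        return (x > 0).astype(float)

    def soft_step(x, temperature=0.1):
        """Sigmoid relaxation: approaches the step as temperature -> 0,
        but stays differentiable everywhere in between."""
        return 1.0 / (1.0 + np.exp(-x / temperature))

    def soft_step_grad(x, temperature=0.1):
        s = soft_step(x, temperature)
        return s * (1.0 - s) / temperature

    x = np.array([-0.5, -0.05, 0.0, 0.05, 0.5])
    print(hard_step(x))                # [0. 0. 0. 1. 1.]
    print(soft_step(x).round(3))       # smooth version of the same
    print(soft_step_grad(x).round(3))  # nonzero gradient near threshold
    ```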

  17. I'm very excited to announce that everyone's favourite Bayesian symposium is back for 2023!🚀🚀
    The 5th Symposium on Advances in Approximate Bayesian Inference (AABI) will take place in 🏖️Honolulu, Hawaii🌴 on Sunday, July 23rd, co-located with ICML!

    Website: approximateinference.org
    #aabi #machinelearning #bayes #icml

  18. Just integrated my first piece of #ChatGPT generated code into a research project I am building.

    Made me think of the #ICML conference rule on using #ChatGPT for writing. Is coding with #ChatGPT okay?

    Then, what's the difference between writing and coding?

  19. ICML 2023 Call for Post-Conference Workshops
    Friday, July 28, and Saturday, July 29, Hawaii, USA

    We invite researchers interested in chairing one of these workshops to submit proposals. Workshop organizers have several responsibilities, including coordinating workshop participation and content, publicizing and providing the program in a timely manner, and moderating the program throughout the workshop. #MachineLearning #ICML
    icml.cc/Conferences/2023/CallF