home.social

#limit — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #limit, aggregated by home.social.

  1. I made a Bash script that compresses Mastodon posts so they have a better chance of fitting into the censorious 2000-character limit, reducing the discrimination against people whose personalities have a higher expressivity value:

    #!/bin/bash
    # Reads a post on stdin and writes the compressed post to stdout.

    LC_ALL=C sed -E '
    # drop the space after closing punctuation (] , ! ? : . ) })
    s/([],!?:.)}]) /\1/g
    # drop the space before an opening parenthesis or bracket
    s/ \(/(/g
    s/ \[/[/g
    # collapse three dots into a single ellipsis character
    s/\.\.\./…/g
    # the next two rules involve U+200B ZERO WIDTH SPACE, rendered here
    # as <200b>; the invisible characters did not survive into this post
    s/ / <200b> <200b> /g
    s/ / <200b> /g
    # turn a double hyphen into an em dash
    s/ ?-- ?/—/g
    # drop the spaces around quotation marks
    s/ ?("|„|“) ?/\1/g
    '
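
    A minimal usage sketch, inlining only the unambiguous subset of the rules above (the zero-width-space rules are left out because the invisible characters did not survive into the post):

```shell
# Hypothetical usage sketch: pipe a post through three of the rules
# (space after punctuation, "..." -> ellipsis, "--" -> em dash).
printf 'Wait ... this works -- right ? Yes !' \
  | LC_ALL=C sed -E 's/([],!?:.)}]) /\1/g; s/\.\.\./…/g; s/ ?-- ?/—/g'
# prints: Wait …this works—right ?Yes !
```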

    #mastodon #limit #characterlimit #compression #censorship #discrimination

  2. Well, this is why I can't switch to Claude AI, even though every fiber of me wants to.

    The regular limit runs out damn fast. And now I'm looking at this... what do you mean, a weekly limit??? WTF???

    #Claude #ClaudeAI #AI #limit #limits #weekly #weeklylimit #magyar #hungarian

  3. A simple, resilient… and free system.

    But today, that cycle is being broken.

    Because of the #privatisation of the #vivant (the living world), a huge share of modern #semences (seeds) can no longer be replanted freely...

    youtu.be/10wOAMt2TCU

    #LIMIT #Nature #Esclavage

  4. 🚨 Oh no, the syntax warriors have over-requested GitHub's patience, breaking the sacred #rate limit! Maybe "binding by adjacency" should include some basic #API #etiquette. 😂 Meanwhile, #GitHub is patiently waiting for you to discover the "Sign In" button.
    github.com/manifold-systems/ma #syntaxwarriors #limit #signIn #HackerNews #ngated

  5. Free AI models from Alibaba: 1M tokens for every Qwen model via Singapore

    Alibaba Cloud Model Studio (Singapore region) gives new users a free quota: 1,000,000 tokens per model. Not per account, but per model individually. That means you get a million tokens each on Qwen-Max, Qwen-Plus, Qwen-Flash, Qwen3-Coder-Plus, and so on, in parallel. The quota is valid for 90 days from activation.

    What's available: the full Qwen3 lineup.

    - Qwen-Max: the flagship; complex multi-step tasks; 32K context
    - Qwen-Plus: a balance of quality and speed; context up to 1M tokens
    - Qwen-Flash: fast and cheap; also up to 1M context
    - Qwen3-Coder-Plus/Flash: specialized for code; context up to 1M
    - Qwen-VL: multimodal (text + images)
    - Qwen-OCR: text extraction from images, with Russian support
    - Qwen-Omni: audio, video, multimodality

    Plus, the open models (qwen3-235b-a22b and others) are also available through the API under the same quota.

    habr.com/ru/articles/994052/

    #ai #qwen #wan #alibaba #api #cloud #limit

  6. The German government is facing internal debate over the design and distribution of a planned electric vehicle (EV) purchase premium, sparking concerns about eq... news.osna.fm/?p=19848 | #news #ev #germany #incomes #limit

  7. It's literally not #unreasonable to implement some kind of #limit on how many #homes #someone can own. I'm not saying you should only be allowed to #own one #house I just see no #reason why #somebody needs to own more than like 4, and I think even that's super #generous.

  9. Hit an interesting limit in the TypeScript language server¹:

    Looks like there’s a limit on the number of entries an object (constant) can have before the language server balks. Seems to hit it around 1,343.

    (I’m generating an object for an icon library.)

    Doesn’t appear to be related to file/memory size (breaking up the same number of entries into several objects works).

    Anyone know exactly what limitation I'm hitting (and whether it's documented somewhere)? I've been searching but couldn't find any reference to it.

    ¹ It’s definitely a language server limit as I tried in VSCode as well to rule out it being a limit in Helix Editor.

    #TypeScript #limit #constant #object #languageServer #LSP
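
    A quick way to probe the threshold yourself (a sketch; the filename, entry names, and the 1400 default are made up for illustration):

```shell
# Hypothetical repro generator: emit a TypeScript module whose single
# const object has N entries, to bisect where the language server balks
# (the post reports trouble around 1,343 entries).
N=${1:-1400}          # entry count to try; made-up default
OUT=${2:-icons.ts}    # made-up output filename
{
  echo "export const icons = {"
  for i in $(seq 1 "$N"); do
    echo "  icon$i: 'glyph-$i',"
  done
  echo "} as const;"
} > "$OUT"
echo "wrote $OUT with $N entries"
```

    Opening the generated file in an LSP client and varying N should show whether the cutoff is a hard entry count or something size-related.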

  10. #TFW you discover your instance admin has increased the post length limit from the default 500 chars.

    #power #mad #length #limit #chars #500chars

  11. Me: “It’s #broken for my case, and here’s an #experiment showing how and where it can be fixed.”
    Them: “That’s useful but this guy that outranks you wanted it this way and he’s out today so I can’t get him to explain it.”
    Me: (engages in #kludge to try to unblock, finds another stupid resource #limit)

    Apparently I’m not supposed to be able to prove to others that the thing I’m working on will work. I’m 👉️ 👈️ this close to hollering to #management.

  12. “All sanity, which is all Science, is founded upon Limit. We must be able to cut off, to define, to measure. Naturally, then, their opposites, Insanity and Religion, have for their prime characteristic, the Indefinable, Incomprehensible, Immeasurable.”

    https://library.hrmtc.com/2024/06/06/all-sanity-which-is-all-science-is-founded-upon-limit-we-must-be-able-to-cut-off-to-define-to-measure-naturally-then-their-opposites-insanity-and-religion-have-for-their-prime-characteri/

  13. Arıkan's new solution was to create near-perfect channels from ordinary channels by a process he called “#channel #polarization.”

    Noise would be transferred from one channel to a copy of the same channel to create a cleaner copy and a dirtier one.

    After a recursive series of such steps, two sets of channels emerge, one set being extremely noisy, the other being almost noise-free.

    The channels that are scrubbed of noise, in theory, can attain the Shannon limit.

    He dubbed his solution #polar #codes.
    It's as if the noise was banished to the North Pole, allowing for pristine communications at the South Pole.

    After this discovery, Arıkan spent two more years refining the details.
    He had read that before Shannon released his famous paper on information theory, his supervisor at Bell Labs would pop by and ask if the researcher had anything new.
    “Shannon never mentioned information theory,” says Arıkan with a laugh.
    “He kept his work undercover. He didn't disclose it.”

    That was also Arıkan's MO. “I had the luxury of knowing that no other person in the world was working on this problem,” Arıkan says, “because it was not a fashionable subject.”

    In 2008, three years after his eureka moment, Arıkan finally presented his work.

    He had understood its importance all along. Over the years, whenever he traveled, he would leave his unpublished manuscript in two envelopes addressed to “top colleagues whom I trusted,” with the order to mail them “if I don't come back.”

    In 2009 he published his definitive paper in the field's top journal, IEEE Transactions on Information Theory.

    It didn't exactly make him a household name, but within the small community of information theorists, polar codes were a sensation.

    Arıkan traveled to the US to give a series of lectures. (You can see them on YouTube; they are not for the mathematically fainthearted. The students look a bit bored.)

    Arıkan was justifiably proud of his accomplishment, but he didn't think of polar codes as something with practical value.

    It was a theoretical solution that, even if implemented, seemed unlikely to rival the error-correction codes already in place.

    He didn't even bother to get a patent.

    #channel #capacity #Shannon #limit #correcting #errors #Bilkent #University #eureka #accurately #redundancy #channel #coding #problem
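
    The combining step described above can be sketched in a few lines (a toy of the basic 2x2 kernel only, not Arıkan's full recursive construction):

```shell
# Toy 2x2 polarization kernel: combine two uses of a channel by sending
# x1 = u1 XOR u2 and x2 = u2. Recovering u1 without knowing u2 faces a
# degraded, noisier synthetic channel; recovering u2 once u1 is known
# combines both observations into a cleaner one. Repeating this step
# recursively drives channels toward the all-noise / noise-free split
# described in the post.
polarize() {
  local u1=$1 u2=$2
  echo "$(( u1 ^ u2 )) $u2"
}
polarize 1 0   # prints "1 0"
polarize 1 1   # prints "0 1"
```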

  18. Arıkan devoted the next year to learning about networks, but he never gave up on his passion for information science.

    What gripped him most was solving a challenge that Shannon himself had spelled out in his 1948 paper:
    how to transport accurate information at high speed while defeating the inevitable “noise”
    —undesirable alterations of the message
    —introduced in the process of moving all those bits.

    The problem was known as #channel #capacity.

    According to Shannon, every communications channel had a kind of speed limit for transmitting information reliably.

    This as-yet-unattained theoretical boundary was referred to as the #Shannon #limit.

    Gallager had wrestled with the Shannon limit early in his career, and he got close. His much-celebrated theoretical approach was something he called low-density parity-check codes, or LDPC: in the simplest terms, a high-speed method of #correcting #errors on the fly.

    While the mathematics of LDPC were innovative, Gallager understood at the time that it wasn't commercially viable.

    “It was just too complicated for the cost of the logical operations that were needed,” Gallager says now.

    Gallager and others at MIT figured that they had gotten as close to the Shannon limit as one could get, and he moved on.

    At MIT in the 1980s, the excitement about information theory had waned.
    But not for Arıkan.

    He wanted to solve the problem that stood in the way of reaching the Shannon limit.

    Even as he pursued his thesis on the networking problem that Gallager had pointed him to, he seized on a piece that included error correction.

    “When you do error-correction coding, you are in Shannon theory,” he says.

    Arıkan finished his doctoral thesis in 1986, and after a brief stint at the University of Illinois he returned to Turkey to join the country's first private, nonprofit research institution, #Bilkent #University, located on the outskirts of Ankara.

    Arıkan helped establish its engineering school. He taught classes. He published papers.

    But Bilkent also allowed him to pursue his potentially fruitless battle with the Shannon limit.

    “The best people are in the US, but why aren't they working for 10 years, 20 years on the same problem?” he said.
    “Because they wouldn't be able to get tenure; they wouldn't be able to get research funding.”

    Rather than advancing his field in tiny increments, he went on a monumental quest. It would be his work for the next 20 years.

    In December 2005 he had a kind of #eureka moment.
    Spurred by a question posed in a three-page dispatch written in 1965 by a Russian information scientist, Arıkan reframed the problem for himself.

    “The key to discoveries is to look at those places where there is still a paradox,” Arıkan says.

    “It's like the tip of an iceberg. If there is a point of dissatisfaction, take a closer look at it. You are likely to find a treasure trove underneath.”

    Arıkan's goal was to transmit messages accurately over a noisy channel at the fastest possible speed.

    The key word is #accurately. If you don't care about accuracy, you can send messages unfettered.

    But if you want the recipient to get the same data that you sent, you have to insert some #redundancy into the message.
    That gives the recipient a way to cross-check the message to make sure it's what you sent.

    Inevitably, that extra cross-checking slows things down.
    This is known as the #channel #coding #problem.

    The greater the amount of noise, the more added redundancy is needed to protect the message.

    And the more redundancy you add, the slower the rate of transmission becomes.

    The coding problem tries to defeat that trade-off and find ways to achieve reliable transmission of information at the fastest possible rate.

    The optimum rate would be the Shannon limit: channel coding nirvana.
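
    The trade-off in this last passage can be made concrete with the crudest possible code (a toy rate-1/3 repetition code, standard textbook material, not from the post):

```shell
# Toy rate-1/3 repetition code: every message bit is transmitted three
# times, so a single corrupted copy is outvoted, at the cost of tripling
# the message length. Real codes (LDPC, polar) buy far more reliability
# per redundant bit, which is exactly the race toward the Shannon limit.
encode() {                 # 1011 -> 111000111111
  echo "$1" | sed -E 's/0/000/g; s/1/111/g'
}
decode() {                 # majority vote over each group of three
  local word=$1 out=""
  while [ -n "$word" ]; do
    local tri=${word:0:3}
    word=${word:3}
    local ones=${tri//0/}  # keep only the 1s in this group
    if [ "${#ones}" -ge 2 ]; then out="${out}1"; else out="${out}0"; fi
  done
  echo "$out"
}
sent=$(encode 1011)                                 # 12 channel bits
noisy=${sent:0:5}$(( ${sent:5:1} ^ 1 ))${sent:6}    # flip one bit
decode "$noisy"                                     # prints 1011
```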