home.social

#eureka — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #eureka, aggregated by home.social.

  1. Lost in Translation... NOOOO! I'm watching #Eureka for the third time since it came out, and I'm baffled by the German episode description of S5 E7:

    Major Shaw is tasked with installing a security system called PanOp at General Dynamics. He is also to confiscate everything connected with the abduction of the Astraeus, including the photon plasma computer in which Holly is still trapped. To rescue Holly, Zane copies the relevant part onto a partition of the main computer. But a little later, that partition is empty … (Text: ProSieben)

    But it's GLOBAL DYNAMICS, not General Dynamics :(

    HOW does that happen, and can it really never be corrected???

    #eureka #imdb

  3. #Mike was a rooster that survived for 18 months despite having been beheaded. He was fed by dripping food into his esophagus with a pipette. On that diet, the animal gained a kilo in weight.

    #Eureka #Quiz

    de.m.wikipedia.org/wiki/Mike_(

  4. Wood, made to appear as stone. Vance Hotel, Eureka, California, 1872. National Register of Historic Places, 1991. Learn more at npgallery.nps.gov/pdfhost/docs (page 18 of pdf). Image credit Kurt Angersbach / Westernlabs #nps #nrhp #travel #forest #trees #photography #california #historicplaces #humboldt #eureka #architecture #history #building #color #wood #design #sign #plaque

  9. Hype for the Future 133NEV: Lander and Eureka Counties, Nevada

    Introduction: Within the central portion of the State of Nevada and largely associated with the Great Basin, Lander and Eureka Counties are both rural desert counties, with Lander County situated west of Eureka County. Traditionally, the City of Austin served as the county seat of Lander County; however, the City of Battle Mountain has since taken over the seat of county government. The community of Eureka, on the other hand, continues to serve the role of […]

    novatopflex.wordpress.com/2026

  10. Start the RTL counter with a timer initialized 10 seconds before its 32-bit wrap point, so that the software's wrap-around handling is well exercised during development.

    #rtl #fpga #verilog #vhdl #xilinx #alchitry #eureka #protonpack #software #softwaredevelopment #hardware #embedded #fensterFreitag
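
The trick above can be sketched in software terms. The 100 MHz clock rate here is a hypothetical stand-in for whatever the actual design uses; the two pieces being exercised are the reset value and the modular subtraction that survives the rollover:

```python
WRAP = 1 << 32          # a 32-bit counter rolls over at 2**32
F_CLK_HZ = 100_000_000  # hypothetical clock rate; substitute your design's actual rate

# Reset value placed 10 seconds' worth of ticks before the wrap point, so the
# rollover happens early in every development run rather than after ~43 s.
INIT = (WRAP - 10 * F_CLK_HZ) % WRAP

def elapsed_ticks(start: int, now: int) -> int:
    """Wrap-safe elapsed time: modular subtraction stays correct across one rollover."""
    return (now - start) % WRAP

# 15 s after start the counter has wrapped once, yet the elapsed time is still right.
later = (INIT + 15 * F_CLK_HZ) % WRAP
assert elapsed_ticks(INIT, later) == 15 * F_CLK_HZ
```

The same modular-subtraction idiom is what the embedded software under test has to get right; starting near the wrap point just guarantees the code path runs on day one.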

  13. I’m proud to announce my 2026 Redwood Coast Birds calendar is available for purchase on my shop! It’s a 7x10” flip calendar with a stylized list of dates that really gets out of the way of the beautiful bird imagery. It’s essentially a collection of twelve mini-prints, perfect for one’s office or for rooms with little wall space.

    Each bird can be seen on the Redwood Coast here in the Pacific Northwest in the US, but especially here in Humboldt County. Most of the illustrations were requested by patrons who visited my booth over the summer, asking if I could do specific birds because they had connections to them or stories they cherished.

    This is a deeply personal collection for me. It marks the beginning of my connection to the residents of the new community I moved to at the start of the year.

    goimagine.com/2026-redwood-coa

    #2026Calendar #Birds #Birdart #FediGiftShop #IndieArtist #Calendar #ShopHandmade #REdwoodCoast #HumboldtCounty #Eureka #HumboldtMade

  14. Literary Nomads Episode: The “Hideous Heart”: Poe's Aesthetic of Accountability

    There really isn't that much to say about Poe, is there? He's just creepy. But wait. What if we could explain the supposed madness in all these stories?

    waywordsstudio.com/general/tra

    #podcast #literature #bookpodcast #edgarallanpoe #thetelltaleheart #theimpoftheperverse #thecaskofamontillado #thepoeticprinciple #eureka

  15. Shreekant Patil interacts at Startup & Entrepreneurship Workshop, Illuminate by IIT Bombay for Maratha Samaj College of Physiotherapy

    Shreekant Patil inspires students at Illuminate by IIT Bombay at MVP College of Physiotherapy with insights on startups, entrepreneurship,

    #ShreekantPatil #IITBombay #Illuminate #Startup #Entrepreneurship #Workshop #Ecosystem #Mentorship #Nashik #MVPPT #MarathaVidyaPrasarak #College #Physiotherapy #StartupEcosystem #NEC #Eureka #MSMEHelp #PressRelease #News

  21. The first sip is a sublime balance of bitterness and sweetness in a fruit smoothie that makes me wonder why other breweries can't achieve this taste.

    #Beer #Review #FinalBoss #Eureka #DoubleIPA
    bfbcping.com/2025/09/eureka-he

  22. Oh, look! Yet another tech bro thinks he's the first to discover Invertible Bloom Filters! 🙄💡 Get ready to be dazzled by the 'Eureka!' moment of applying #XOR to billions of rows, like it's the new sliced bread. 🙃🍞
    nochlin.com/blog/extending-tha #techbro #eureka #invertiblebloomfilters #innovation #humor #HackerNews #ngated
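
For readers who haven't met the data structure being mocked: an invertible Bloom filter stores each key in a few cells as a count plus an XOR, so inserts and deletes cancel out and a small set difference can be "peeled" back out afterwards. A toy sketch (my own minimal version, not any particular library's API; a production IBF also keeps a per-cell hash checksum to validate pure cells):

```python
import hashlib

def _h(key: int, seed: int) -> int:
    # Seeded 64-bit hash for picking cells.
    data = seed.to_bytes(2, "little") + key.to_bytes(8, "little")
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "little")

class IBF:
    """Toy invertible Bloom filter: each cell holds a count and an XOR of keys."""
    def __init__(self, m: int = 64, k: int = 3):
        self.m, self.k = m, k
        self.count = [0] * m
        self.key_xor = [0] * m

    def _cells(self, key: int) -> list[int]:
        cells, seed = [], 0
        while len(cells) < self.k:      # k distinct cells per key
            c = _h(key, seed) % self.m
            if c not in cells:
                cells.append(c)
            seed += 1
        return cells

    def insert(self, key: int) -> None:
        for c in self._cells(key):
            self.count[c] += 1
            self.key_xor[c] ^= key

    def delete(self, key: int) -> None:
        for c in self._cells(key):
            self.count[c] -= 1
            self.key_xor[c] ^= key      # XOR is its own inverse

    def peel(self) -> set[int]:
        """Recover remaining keys from 'pure' cells (count == 1)."""
        out, progress = set(), True
        while progress:
            progress = False
            for c in range(self.m):
                if self.count[c] == 1:
                    key = self.key_xor[c]
                    out.add(key)
                    self.delete(key)    # removing it may expose new pure cells
                    progress = True
        return out

ibf = IBF()
for x in range(1, 9):
    ibf.insert(x)          # set A = {1, ..., 8}
for x in range(1, 8):
    ibf.delete(x)          # subtract B = {1, ..., 7}
assert ibf.peel() == {8}   # the difference A \ B falls out
```

The XOR-of-billions-of-rows punchline is exactly this cancellation property: matching inserts and deletes annihilate each other, leaving only the difference behind.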

  23. I'm getting back to work on #Azimut, a history of scientific ideas in the medieval period, based on the texts of Pascal Marchand. This comic follows #Eurêka, which covered the same theme in antiquity. Both books are published by La Boite à Bulles.

    #speeddrawing #encrage #dessin #bd #bandedessinée #science #épistémologie #epistemology #moyenage #darkages #backtothefuture #retourverslefutur

  24. Arıkan's new solution was to create near-perfect channels from ordinary channels by a process he called “#channel #polarization.”

    Noise would be transferred from one channel to a copy of the same channel to create a cleaner copy and a dirtier one.

    After a recursive series of such steps, two sets of channels emerge, one set being extremely noisy, the other being almost noise-free.

    The channels that are scrubbed of noise, in theory, can attain the Shannon limit.

    He dubbed his solution #polar #codes.
    It's as if the noise was banished to the North Pole, allowing for pristine communications at the South Pole.

    After this discovery, Arıkan spent two more years refining the details.
    He had read that before Shannon released his famous paper on information theory, his supervisor at Bell Labs would pop by and ask if the researcher had anything new.
    “Shannon never mentioned information theory,” says Arıkan with a laugh.
    “He kept his work undercover. He didn't disclose it.”

    That was also Arıkan's MO. “I had the luxury of knowing that no other person in the world was working on this problem,” Arıkan says, “because it was not a fashionable subject.”

    In 2008, three years after his eureka moment, Arıkan finally presented his work.

    He had understood its importance all along. Over the years, whenever he traveled, he would leave his unpublished manuscript in two envelopes addressed to “top colleagues whom I trusted,” with the order to mail them “if I don't come back.”

    In 2009 he published his definitive paper in the field's top journal, IEEE Transactions on Information Theory.

    It didn't exactly make him a household name, but within the small community of information theorists, polar codes were a sensation.

    Arıkan traveled to the US to give a series of lectures. (You can see them on YouTube; they are not for the mathematically fainthearted. The students look a bit bored.)

    Arıkan was justifiably proud of his accomplishment, but he didn't think of polar codes as something with practical value.

    It was a theoretical solution that, even if implemented, seemed unlikely to rival the error-correction codes already in place.

    He didn't even bother to get a patent.

    #channel #capacity #Shannon #limit #correcting #errors #Bilkent #University #eureka #accurately #redundancy #channel #coding #problem
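
The combine-and-split step described above has an especially clean form on the binary erasure channel, which makes polarization easy to watch numerically. A sketch (the erasure channel and the 10 recursion levels are illustrative choices on my part, not details from the post):

```python
def polarize(eps: float, steps: int) -> list[float]:
    """Erasure-probability evolution on a BEC: one polarization step turns a
    channel with erasure rate e into a worse copy (2e - e**2) and a better
    copy (e**2)."""
    chans = [eps]
    for _ in range(steps):
        chans = [z for e in chans for z in (2 * e - e * e, e * e)]
    return chans

chans = polarize(0.5, 10)   # 1024 synthetic channels from one BEC(0.5)
near_perfect = sum(1 for e in chans if e < 0.01)
near_useless = sum(1 for e in chans if e > 0.99)
# Most channels drift toward one extreme or the other; the near-perfect
# fraction approaches the channel capacity (here 0.5) as recursion deepens.
print(near_perfect, near_useless, len(chans))
```

Transmitting data only on the near-perfect channels and freezing the rest is, in essence, what a polar code does.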

  29. Arıkan devoted the next year to learning about networks, but he never gave up on his passion for information science.

    What gripped him most was solving a challenge that Shannon himself had spelled out in his 1948 paper:
    how to transport accurate information at high speed while defeating the inevitable “noise”
    —undesirable alterations of the message
    —introduced in the process of moving all those bits.

    The problem was known as #channel #capacity.

    According to Shannon, every communications channel had a kind of speed limit for transmitting information reliably.

    This as-yet-unattained theoretical boundary was referred to as the #Shannon #limit.

    Gallager had wrestled with the Shannon limit early in his career, and he got close. His much celebrated theoretical approach was something he called low-density parity-check codes, or LDPC, which were, in simplest terms, a high-speed method of #correcting #errors on the fly.

    While the mathematics of LDPC were innovative, Gallager understood at the time that it wasn't commercially viable.

    “It was just too complicated for the cost of the logical operations that were needed,” Gallager says now.

    Gallager and others at MIT figured that they had gotten as close to the Shannon limit as one could get, and he moved on.

    At MIT in the 1980s, the excitement about information theory had waned.
    But not for Arıkan.

    He wanted to solve the problem that stood in the way of reaching the Shannon limit.

    Even as he pursued his thesis on the networking problem that Gallager had pointed him to, he seized on a piece that included error correction.

    “When you do error-correction coding, you are in Shannon theory,” he says.

    Arıkan finished his doctoral thesis in 1986, and after a brief stint at the University of Illinois he returned to Turkey to join the country's first private, nonprofit research institution, #Bilkent #University, located on the outskirts of Ankara.

    Arıkan helped establish its engineering school. He taught classes. He published papers.

    But Bilkent also allowed him to pursue his potentially fruitless battle with the Shannon limit.

    “The best people are in the US, but why aren't they working for 10 years, 20 years on the same problem?” he said.
    “Because they wouldn't be able to get tenure; they wouldn't be able to get research funding.”

    Rather than advancing his field in tiny increments, he went on a monumental quest. It would be his work for the next 20 years.

    In December 2005 he had a kind of #eureka moment.
    Spurred by a question posed in a three-page dispatch written in 1965 by a Russian information scientist, Arıkan reframed the problem for himself.

    “The key to discoveries is to look at those places where there is still a paradox,” Arıkan says.

    “It's like the tip of an iceberg. If there is a point of dissatisfaction, take a closer look at it. You are likely to find a treasure trove underneath.”

    Arıkan's goal was to transmit messages accurately over a noisy channel at the fastest possible speed.

    The key word is #accurately. If you don't care about accuracy, you can send messages unfettered.

    But if you want the recipient to get the same data that you sent, you have to insert some #redundancy into the message.
    That gives the recipient a way to cross-check the message to make sure it's what you sent.

    Inevitably, that extra cross-checking slows things down.
    This is known as the #channel #coding #problem.

    The greater the amount of noise, the more added redundancy is needed to protect the message.

    And the more redundancy you add, the slower the rate of transmission becomes.

    The coding problem tries to defeat that trade-off and find ways to achieve reliable transmission of information at the fastest possible rate.

    The optimum rate would be the Shannon limit: channel coding nirvana.
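
The redundancy-versus-rate trade-off described above is easiest to see with the crudest possible channel code, a 3x repetition code (an illustrative strawman, far below the Shannon limit):

```python
def encode(bits: list[int], r: int = 3) -> list[int]:
    """Repeat each message bit r times: rate 1/r, tolerates (r-1)//2 flips per group."""
    return [b for b in bits for _ in range(r)]

def decode(received: list[int], r: int = 3) -> list[int]:
    """Majority vote within each group of r repeats."""
    return [int(sum(received[i:i + r]) > r // 2)
            for i in range(0, len(received), r)]

msg = [1, 0, 1, 1]
code = encode(msg)          # 12 bits on the wire for 4 bits of message: rate 1/3
code[4] ^= 1                # one bit of channel noise
assert decode(code) == msg  # the redundancy absorbs it, at the cost of speed
```

More noise would demand more repeats and an even lower rate; the channel coding problem is precisely about beating this trade-off and pushing the reliable rate toward the Shannon limit.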

  30. Arıkan devoted the next year to learning about networks, but he never gave up on his passion for information science.

    What gripped him most was solving a challenge that Shannon himself had spelled out in his 1948 paper:
    how to transport accurate information at high speed while defeating the inevitable “noise”
    —undesirable alterations of the message
    —introduced in the process of moving all those bits.

    The problem was known as #channel #capacity.

    According to Shannon, every communications channel had a kind of speed limit for transmitting information reliably.

    This as-yet-unattained theoretical boundary was referred to as the #Shannon #limit.

    Gallager had wrestled with the Shannon limit early in his career, and he got close. His much celebrated theoretical approach was something he called low-density parity-check codes, or LDPC, which were, in simplest terms, a high-speed method of #correcting #errors on the fly.

    While the mathematics of LDPC were innovative, Gallager understood at the time that it wasn't commercially viable.

    “It was just too complicated for the cost of the logical operations that were needed,” Gallager says now.

    Gallager and others at MIT figured that they had gotten as close to the Shannon limit as one could get, and he moved on.

    At MIT in the 1980s, the excitement about information theory had waned.
    But not for Arıkan.

    He wanted to solve the problem that stood in the way of reaching the Shannon limit.

    Even as he pursued his thesis on the networking problem that Gallager had pointed him to, he seized on a piece that included error correction.

    “When you do error-correction coding, you are in Shannon theory,” he says.

    Arıkan finished his doctoral thesis in 1986, and after a brief stint at the University of Illinois he returned to Turkey to join the country's first private, nonprofit research institution, #Bilkent #University, located on the outskirts of Ankara.

    Arıkan helped establish its engineering school. He taught classes. He published papers.

    But Bilkent also allowed him to pursue his potentially fruitless battle with the Shannon limit.

    “The best people are in the US, but why aren't they working for 10 years, 20 years on the same problem?” he said.
    “Because they wouldn't be able to get tenure; they wouldn't be able to get research funding.”

    Rather than advancing his field in tiny increments, he went on a monumental quest. It would be his work for the next 20 years.

    In December 2005 he had a kind of #eureka moment.
    Spurred by a question posed in a three-page dispatch written in 1965 by a Russian information scientist, Arıkan reframed the problem for himself.

    “The key to discoveries is to look at those places where there is still a paradox,” Arıkan says.

    “It's like the tip of an iceberg. If there is a point of dissatisfaction, take a closer look at it. You are likely to find a treasure trove underneath.”

    Arıkan's goal was to transmit messages accurately over a noisy channel at the fastest possible speed.

    The key word is #accurately. If you don't care about accuracy, you can send messages unfettered.

    But if you want the recipient to get the same data that you sent, you have to insert some #redundancy into the message.
    That gives the recipient a way to cross-check the message to make sure it's what you sent.

    Inevitably, that extra cross-checking slows things down.
    This is known as the #channel #coding #problem.

    The greater the amount of noise, the more added redundancy is needed to protect the message.

    And the more redundancy you add, the slower the rate of transmission becomes.

    The coding problem tries to defeat that trade-off and find ways to achieve reliable transmission of information at the fastest possible rate.

    The optimum rate would be the Shannon limit: channel coding nirvana.

  31. Arıkan devoted the next year to learning about networks, but he never gave up on his passion for information science.

    What gripped him most was solving a challenge that Shannon himself had spelled out in his 1948 paper:
    how to transport accurate information at high speed while defeating the inevitable “noise”
    —undesirable alterations of the message
    —introduced in the process of moving all those bits.

    The problem was known as #channel #capacity.

    According to Shannon, every communications channel had a kind of speed limit for transmitting information reliably.

    This as-yet-unattained theoretical boundary was referred to as the #Shannon #limit.

    Gallager had wrestled with the Shannon limit early in his career, and he got close. His much celebrated theoretical approach was something he called low-density parity-check codes, or LDPC, which were, in simplest terms, a high-speed method of #correcting #errors on the fly.

    While the mathematics of LDPC were innovative, Gallager understood at the time that it wasn't commercially viable.

    “It was just too complicated for the cost of the logical operations that were needed,” Gallager says now.

    Gallager and others at MIT figured that they had gotten as close to the Shannon limit as one could get, and he moved on.

    At MIT in the 1980s, the excitement about information theory had waned.
    But not for Arıkan.

    He wanted to solve the problem that stood in the way of reaching the Shannon limit.

    Even as he pursued his thesis on the networking problem that Gallager had pointed him to, he seized on a piece that included error correction.

    “When you do error-correction coding, you are in Shannon theory,” he says.

    Arıkan finished his doctoral thesis in 1986, and after a brief stint at the University of Illinois he returned to Turkey to join the country's first private, nonprofit research institution, #Bilkent #University, located on the outskirts of Ankara.

    Arıkan helped establish its engineering school. He taught classes. He published papers.

    But Bilkent also allowed him to pursue his potentially fruitless battle with the Shannon limit.

    “The best people are in the US, but why aren't they working for 10 years, 20 years on the same problem?” he said.
    “Because they wouldn't be able to get tenure; they wouldn't be able to get research funding.”

    Rather than advancing his field in tiny increments, he went on a monumental quest. It would be his work for the next 20 years.

    In December 2005 he had a kind of #eureka moment.
    Spurred by a question posed in a three-page dispatch written in 1965 by a Russian information scientist, Arıkan reframed the problem for himself.

    “The key to discoveries is to look at those places where there is still a paradox,” Arıkan says.

    “It's like the tip of an iceberg. If there is a point of dissatisfaction, take a closer look at it. You are likely to find a treasure trove underneath.”

    Arıkan's goal was to transmit messages accurately over a noisy channel at the fastest possible speed.

    The key word is #accurately. If you don't care about accuracy, you can send messages unfettered.

    But if you want the recipient to get the same data that you sent, you have to insert some #redundancy into the message.
    That gives the recipient a way to cross-check the message to make sure it's what you sent.

    Inevitably, that extra cross-checking slows things down.
    This is known as the #channel #coding #problem.

    The greater the amount of noise, the more added redundancy is needed to protect the message.

    And the more redundancy you add, the slower the rate of transmission becomes.

    The coding problem tries to defeat that trade-off and find ways to achieve reliable transmission of information at the fastest possible rate.
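
    The trade-off can be seen in miniature with a repetition code, the most basic redundancy scheme (a toy illustration only, far cruder than LDPC or anything Arıkan later devised): repeating each bit more times cuts the error rate, but every extra copy slashes the transmission rate.

```python
import random

def send(bits, p, reps):
    """Repeat each bit `reps` times, flip each transmitted copy with
    probability p (the channel noise), then decode by majority vote."""
    received = []
    for b in bits:
        copies = [b ^ (random.random() < p) for _ in range(reps)]
        received.append(int(sum(copies) > reps / 2))
    return received

random.seed(0)
msg = [random.randint(0, 1) for _ in range(10_000)]
results = {}
for reps in (1, 3, 9):
    out = send(msg, 0.1, reps)
    results[reps] = sum(a != b for a, b in zip(msg, out)) / len(msg)
    print(f"reps={reps}  rate={1/reps:.2f}  error rate={results[reps]:.4f}")
```

    With 10% noise, nine repetitions drive errors close to zero, but at one ninth the speed. Good codes achieve the same reliability with far less sacrificed rate; a perfect one would give it up only down to the Shannon limit.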

    The optimum rate would be the Shannon limit: channel coding nirvana.

  34. And our Friday #CultShelf leading ladies this week are Theresa Russell and Jane Lapotaire (pictured with Gene Hackman and Rutger Hauer, respectively)
    #Eureka #TheresaRussell #JaneLapotaire #80smovies