home.social

#aisle — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #aisle, aggregated by home.social.

  1. DATE: April 28, 2026 at 09:28AM
    SOURCE: HEALTHCARE INFO SECURITY

    Direct article link at end of text block below.

    Researchers Find 38 Flaws in #OpenEMR. They've Been Fixed t.co/9G4L5PIvyE #AISLE

    Here are any URLs found in the article text:

    t.co/9G4L5PIvyE

    Articles can be found by scrolling down the page at healthcareinfosecurity.com/ under the title "Latest"

    -------------------------------------------------

    Private, vetted email list for mental health professionals: clinicians-exchange.org

    Healthcare security & privacy posts not related to IT or infosec are at @HIPAABot, though it mixes some infosec in with the legal & regulatory information.

    -------------------------------------------------

    #security #healthcare #doctors #itsecurity #hacking #doxxing #psychotherapy #securitynews #psychotherapist #mentalhealth #psychiatry #hospital #socialwork #datasecurity #webbeacons #cookies #HIPAA #privacy #datanalytics #healthcaresecurity #healthitsecurity #patientrecords @infosec #telehealth #netneutrality #socialengineering

  5. Ultraprocessed foods hurt your ability to focus – even if you eat a largely healthy diet

    Your support helps us to tell the story From reproductive rights to climate change to Big Tech, The…
    #NewsBeep #News #Nutrition #aisle #AU #Australia #colorimage #customer #Florida-USState #government #GulfCoastStates #Health #horizontal #Miami #photography #Politics #politicsandgovernment #PotatoChip #Store #usa
    newsbeep.com/au/628762/

  7. Not just banks are going to be affected by this.

    Aisle (comparable to Mythos) had already found issues back in January in the open-source package OpenSSL (which virtually the entire world uses).

    "Banks around the world are worried about a new AI model: 'This could end very badly'"

    ad.nl/buitenland/banken-over-d

    #Mythos #Anthropic #aisle

  12. 📢 CSA/SANS report: Anthropic's Claude Mythos sets off a storm of AI vulnerabilities
    📝 Context: published on April 13, 2026 by the Cloud Security Alliance (CSA), the SANS Institute, [un]prompted, and OWASP...
    📖 cyberveille: cyberveille.ch/posts/2026-04-1
    🌐 source: labs.cloudsecurityalliance.org
    #AISLE #Big_Sleep #Cyberveille

  13. Are frozen foods really that bad? How to shop healthy in the freezer aisle, according to a dietitian

    Young brunette woman buys fresh frozen food in the store The items are cool by nature — but…
    #NewsBeep #News #Nutrition #aisle #AU #Australia #frozendesserts #frozenfood #frozenmeals #frozenveggies #Health #HealthyEating #healthyoptions #MayaFeller #registereddietitian
    newsbeep.com/au/583318/

  14. this looks like a genuinely good and very impressive use of “AI” in security research – I’m leaving the air quotes in place at the moment since I haven’t been able to find much detail on how the system actually operates. #AISLE describes it as an “autonomous analyser” and “the world’s first #AI-native Cyber Reasoning System (CRS) for vulnerability management” 🙄

    I’m pretty sure it’s not just spicy autocarrot though — possibly a mix of deep learning or other machine learning techniques (things I think of as part of “traditional” AI research) with a sprinkling of LLM on top for “natural language” capabilities (and it’s possible they’re leaning into “AI” as a descriptor to align with the current hype cycle rather than calling it “machine learning”, but ¯\_(ツ)\_/¯ )

    What AI Security Research Looks Like When It Works

    “In the latest #OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned #CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL #CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.

    These weren't trivial findings either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that's potentially remotely exploitable without valid key material, and exploits for which have been quickly developed online. OpenSSL rated it HIGH severity; NIST's CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such projects). Three of the bugs had been present since 1998-2000, for over a quarter century having been missed by intense machine and human effort alike. One predated OpenSSL itself, inherited from #EricYoung's original #SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google's.

    In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.”

    aisle.com/blog/what-ai-securit
