home.social

#photophobia — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #photophobia, aggregated by home.social.

  1. Dear light-sensitive people, can you please share experiences with or recommendations for FL-41 glasses?

    I am looking for something that will help me navigate the harsh lights at my workplace. I do have a rather small skull, so I often have to look long and hard for sunnies that don't slide off my nose. Hence, I'm looking for FL-41 glasses that have a relatively small fit. Do you have any recommendations?

    Do you have any other advice that you can share? Thanks!

    #Photophobia #LightSensitivity #AskingForAFriend #ActuallyAutistic #Neurodiversity #Neurodivergence #Neurodivergent #ADHD #AuDHD #Autism #FL41

  2. CW: What is an Ecology of Protections?

    What is an Ecology of Protections?

    In our paper, we illustrate how social media and information infrastructures can be imagined as an ecosystem of agents interacting through processes (policies) and behaviors (actions of users and automations).

    This ecosystem model makes it possible to understand how our current protections for photosensitive users are inadequate, allowing for multiple vectors of accidental and malicious exposure to dangerous flashing content.

    [Image 1: Narrative Description of Diagram:

    Platform Executives and Marketing Clients have a cyclical revenue relationship.

    Platform Executives influence Platform Designers and Developers.

    WCAG guidelines are meant to support Designers/Developers, but may not always be followed.

    Marketing Clients sometimes create flashing content.

    Non-sensitive users who are unaware of photosensitivity may also create, share, and circulate flashing content, which can lead to accidental exposure for Photosensitive Users.

    When Non-sensitive Users who are aware of photosensitivity are exposed to flashing content, they may warn photosensitive users.

    Non-sensitive Users who are aware of photosensitivity may also intentionally or maliciously expose sensitive users.

    If any user attempts to report a flashing graphic, reporting features do not have appropriate categories, and platform developers are never notified of the problem.

    Sensitive users may also experience resets or overrides to their safety settings on major app updates.]

    A more robust ecology of protections centers the photosensitive user, even though they are not "the majority", providing them with multiple layers of control to protect themselves from exposure when other policies and community norms fail.

    [Image 2: Narrative description of the revised ecosystem map.

    The photosensitive user is protected from ambient, accidental, and malicious exposure by a dual layer of protection by device level settings and app level settings.

    If a developer update overrides app-level settings, the device level settings correct this.

    Users who are aware of photosensitivity are able to educate other users to prevent circulation of dangerous content.

    Advertisers and other users may still create and share flashing content, but an enforcement body can punish advertisers and platforms for creating or failing to prevent the circulation of flashing content.

    Users are able to report dangerous content to both the platform and to regulatory bodies.

    Malicious attacks are met with suspension or bans.]

    Such a nested system of protections can prevent ambient, accidental, and malicious exposure to dangerous flashing media. Establishing these features and community norms may further protect all users from psychologically harmful content.

    It is important to note that such provisions rely on centering user safety over corporate interests, which depend on non-consensual auto-play to hook user attention and manipulate purchasing behavior. To this I say: get more creative. Or maybe... stop being a predator.

    Additionally, the central layer of protection requires OS developers to provide an optional post-processing filter which can interrupt dangerous luminance shifts at the pixel-buffer level. Other research attempts to classify and predict dangerous flashes through ML techniques, but this is not necessary. An always-on, simple algorithmic edit to the next array of pixels provides fail-safe protection at a minor aesthetic expense: a ghost or trailing effect.

    [Image 3: An original sequence of frames which flash between pure black and pure white.

    A flash detection mask is calculated and a pixel-attenuation filter is loaded.

    In the final sequence, each frame is attenuated by the detection mask and filter, producing the impression of a flash without the actual danger.]

    Image 3 demonstrates the proposed filter applied to a full-screen blink sequence. Between-frame flash-detection calculations using South et al.’s algorithm would produce a full white mask for each frame, indicating that each frame change was dangerous and needed to be adjusted. In other examples, the calculated mask would identify specific regions for pixel filtering. The mask could then be applied to a calculation which adjusts how far the next frame transitions between the previous frame and the intended next frame. In the example here, frame 2 becomes frame B, which is frame 2 plus the mask (Δ1), multiplied by a fraction (q). Frame 3 becomes frame C, which is the original frame 3 plus the calculated difference between frames 2 and 3 (Δ2), minus the pixel values in frame B. The first flashing sequence is significantly faded, and over time the flashing sequence averages out to show level changes without strobing.
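    The attenuation idea described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: frames are reduced to single grayscale luminance values (a real filter would run per pixel in the frame buffer), and FLASH_THRESHOLD and Q are hypothetical parameters chosen only for the demo.

```python
# Minimal sketch of an always-on luminance-attenuation filter.
# Frames here are scalar luminance values in [0, 1]; a real
# implementation would apply the same rule to every pixel.

FLASH_THRESHOLD = 0.1  # luminance change treated as a dangerous flash
Q = 0.25               # fraction of the intended transition allowed per frame

def attenuate(frames):
    """Limit between-frame luminance jumps, trading a ghost/trailing
    effect for strobe-free output."""
    out = [frames[0]]
    for target in frames[1:]:
        prev = out[-1]
        delta = target - prev
        if abs(delta) > FLASH_THRESHOLD:
            # Dangerous shift: only move a fraction Q toward the target.
            out.append(prev + Q * delta)
        else:
            # Safe shift: pass the frame through unchanged.
            out.append(target)
    return out

# A pure black/white strobe is progressively damped:
print(attenuate([0.0, 1.0, 0.0, 1.0, 0.0]))
# → [0.0, 0.25, 0.1875, 0.390625, 0.29296875]
```

    Note how the oscillation amplitude shrinks each cycle: the output settles toward intermediate gray levels instead of strobing, which is exactly the "faded" trailing effect the post describes.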

    This effect may look strange, but photosensitive users already use "unaesthetic" filters to limit the overall brightness of their devices. Such filters may even lead content creators to change their aesthetic choices to eliminate ghosting, which would mean safer content for all.

    dl.acm.org/doi/abs/10.1145/366

    #ASSETS2024 #GraphicsProgramming #TechnologyPolicy #Migraine #Epilepsy #photosensitivity #photophobia

  4. Yesterday, my students presented our work at the ACM ASSETS conference: "'Not Only Annoying but Dangerous': Devising an Ecology of Protections for Photosensitive Social Media Users" dl.acm.org/doi/abs/10.1145/366

    In this study, we investigate prior work, conduct survey inquiries, and use co-design methods to explore how social media design choices influence exposure to dangerous flashing content which can trigger seizures, migraines, nausea, and disorientation for photosensitive users.

    Through our analysis, we identify the current ecosystem of flashing content on the Internet, and propose a more robust ecology of protections, including on-device graphics filters that directly edit pixel buffers to prevent flashing before it occurs.

    First, existing WCAG guidelines against auto-play of media need to be enforced. Second, users should have device-level control over animation that may trigger flashing, and this control should not be resettable by platforms that try to enforce auto-play to support their own ad revenue. Third, other users need to be aware of what makes content dangerous, so that they may stop circulating it and causing accidental exposure. Fourth, creators need to know what makes content dangerous, and how they can test for danger, to prevent them from creating dangerous media in the first place. This includes corporate creators, like movie studios, whose ads for action movies have been a recent source of auto-playing strobing content in movie trailers posted via ad platforms. Fifth, platforms, including GIF libraries but also all social platforms, need to implement reporting mechanisms specifically for flashing content which can remove that content from circulation. Sixth, there should be actual penalties for platforms and creators that do not react to, correct, and remove dangerous content, or who force auto-play on users.

    And seventh, device manufacturers and operating system developers need to create on-device filters that eliminate flashing through simple real-time post-processing. Machine Learning classification and prediction algorithms ARE NOT NECESSARY. We can do this with simple math. Yes, it may sometimes look weird. But also people won't be dying in their sleep.

    This work is very important to me, and I've been working on it (on the side) since 2017. I wasn't allowed to pursue it fully as a graduate student. As faculty, I still had to string the project together on wishes. And I'm still looking for a collaborator to work on the implementation, though if industry would just get their shit together and do it themselves, that'd be great.

    #ASSETS2024 #accessibility #Epilepsy #migraine #photophobia #photosensitivity #NEISVoid #graphicsProgramming

  5. [Correspondence] #Oropouche virus #genomic #surveillance in #Brazil thelancet.com/journals/laninf/

    OROV is an #orthobunyavirus transmitted by bites of #Culicoides paraensis. OROV is endemic to the northern region of Brazil and is found in other countries, including #Peru, #Panama, & #Trinidad-Tobago. OROV is the causative agent of Oropouche fever, a mild disease characterised by symptoms such as fever, headache, nausea, diarrhoea, vomiting, dizziness, myalgia, & #photophobia. Cases of #meningoencephalitis...

  6. I used to have #migraines.

    Luckily, they were "classical" rather than "common" migraines. That means simply that attacks began with an #aura, giving advance warning, and time to retreat to the Bat Cave before the howling #headache, #photophobia, #helpless #existential #despair etc. kicked in.

    1/n

  7. #WindowFriday well, it's my little terrace. This succulent is growing in winter despite an accident (it fell and broke) and a change of pot (10 cm). #NatureInBed
    I'm happy because I haven't had plants for years due to my #fotosensibility and #photophobia, but I can stay in dark rooms and have plants outdoors in my new place. I missed them so much. #PlantLover #GreenHouse #suculentas #natureza #fotosensibilidad #fotofobia #lupus #EncefalomielitisMiálgica #ME #MECFS