home.social

#algorithmicbias — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #algorithmicbias, aggregated by home.social.

  1. Tubefilter: Social media has political divides, but some feeds are more polarized than others. “Researchers set up 323 ‘sock puppet’ accounts on TikTok to measure the political polarization of the For You Page, and they found some apparent disparities between right-leaning and left-leaning feeds.”

    https://rbfirehose.com/2026/05/07/tubefilter-social-media-has-political-divides-but-some-feeds-are-more-polarized-than-others/
  6. RE: fediscience.org/@oatp/11643887

    "Maximal transparency is almost certainly not ethically desirable."

    Desirable 'for whom'?

    For platforms facing regulatory scrutiny, opacity is a feature. For users discriminated against by biased recommendation engines, transparency is survival. For communities targeted by algorithmic manipulation, openness is a civil liberty.

    This paper usefully breaks transparency into dimensions and degrees—providing the "choice points" for an ethics of algorithmic openness. But let us be clear: the stakeholders who need transparency most are rarely the ones invited to design these systems.

    Our job as advocates for privacy, free software, and civil liberties is not to settle for the "ethically optimal" comfort zone of the powerful. It is to push the needle toward the maximum and let the burden of justification fall on those who demand secrecy.

    Let us use this framework to demand more.

    #DigitalJustice #AlgorithmicBias #PrivacyRights #OpenScience #AlgorithmicGovernance #DigitalDemocracy #InfoSec #TechPolicy

  7. Data collection is not the biggest problem. Interpretation is.
    A version of you is constantly being assembled — cleaner, simpler, more usable than you actually are.

    Measured.
    Sorted.
    Packaged.

    That’s the moment the mirror stops reflecting and starts rewriting.

    And once the model matters more than the person, complexity quietly disappears.

    #DigitalIdentity #AlgorithmicPower #DataPolitics #DataProtection #AIEthics #Technology&Society #Democracy #DigitalGovernance #AlgorithmicBias

  8. 🔎 LinkedIn Audit: Suppressing Women?
    I'm running a live experiment on professional visibility. I temporarily removed the "she/her" pronoun label from my LinkedIn profile 3 days ago.

    Initial Result: My average post reach immediately jumped from ~3,000 to over 6,000 impressions. That's a 100% increase overnight.

    This sudden spike suggests a strong hypothesis: either the presence of a gender label was acting as a suppression factor, or its absence is creating an algorithmic boost in distribution.

    We must audit the unseen structures (algorithms) that govern visibility. Our goal is to ensure equitable reach for all voices.

    Tracking data for 10 more days. Have you seen similar shifts?

    Let's discuss fairness in digital distribution.

    #SystemsLeadership #AlgorithmicBias #TechInclusion
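    The doubling described in the post above is simple arithmetic, but stating the comparison explicitly can help when tracking the remaining days. Below is a minimal Python sketch with made-up impression counts (the post does not include day-by-day data); note that a change this size could also reflect posting time, content, or ordinary variance, which is why a longer tracking window and controls matter.

```python
# Illustrative sketch of a before/after reach comparison.
# The impression counts below are made-up placeholders, not the author's data.

def pct_change(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100

# Hypothetical daily impressions with the pronoun label present...
before = [3100, 2900, 3050, 2950, 3000]   # averages to 3000
# ...and after removing it.
after = [6000, 5900, 6100]                # averages to 6000

avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)

print(f"average reach changed by {pct_change(avg_before, avg_after):.0f}%")
# prints: average reach changed by 100%
```

    A single uncontrolled change on one account cannot establish causation; an audit would need many accounts, randomized conditions, and a null-model for normal reach variance.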

  9. University of Southern California: USC Rossier study links online racism, including algorithmic bias, to negative impacts on Black adolescents’ mental health. “Through an analysis of daily diary surveys, the research team, led by Brendesha Tynes, found that exposure to online racism heightens anxiety and depressive symptoms in Black youth.”

    https://rbfirehose.com/2025/10/20/university-of-southern-california-usc-rossier-study-links-online-racism-including-algorithmic-bias-to-negative-impacts-on-black-adolescents-mental-health/

  10. The Conversation: Women’s sports are fighting an uphill battle against our social media algorithms. “Algorithms, trained to maximise engagement and profits, are deciding what appears in your feed, which video auto-plays next, and which highlights are pushed to the top of your screen. But here is the problem: algorithms prioritise content that is already popular. That usually means men’s […]

    https://rbfirehose.com/2025/05/12/the-conversation-womens-sports-are-fighting-an-uphill-battle-against-our-social-media-algorithms/

  15. THE ALGORITHM VS. THE HUMAN MIND: A LOSING BATTLE

    NO RECOGNITION FOR THE AUTHOR

    YouTube does not reward consistency, insight, or author reputation. A comment may become a “top comment” for a day, only to vanish the next. There’s no memory, no history of editorial value. The platform doesn’t surface authors who contribute regularly with structured, relevant input. There’s no path for authorship to emerge or be noticed.

    The “like” system favors early commenters — the infamous “firsts” — who write “first,” “early,” or “30 seconds in” just after a video drops. These are the comments that rise to the top. Readers interact with the text, not the person behind it.

    This is by design. YouTube wants engagement to stay contained within the content creator’s channel, not spread toward the audience. A well-written comment should not amplify a small creator’s reach — that would disrupt the platform’s control over audience flow.

    USERS WHO’VE STOPPED THINKING

    The algorithm trains people to wait for suggestions. Most users no longer take the initiative to explore or support anyone unless pushed by the system. Even when someone says something exceptional, the response remains cold. The author is just a font — not a presence. A familiar avatar doesn’t trigger curiosity. On these platforms, people follow only the already-famous. Anonymity is devalued by default. Most users would rather post their own comment (that no one will ever read) than reply to others. Interaction is solitary. YouTube, by design, encourages people to think only about themselves.

    ZERO MODERATION FOR SMALL CREATORS

    Small creators have no support when it comes to moderation. In low-traffic streams, there's no way to filter harassment or mockery. Trolls can show up just to enjoy someone else's failure — and nothing stops them. Unlike big streamers who can appoint moderators, smaller channels lack both the tools and the visibility to protect themselves. YouTube provides no built-in safety net, even though these creators are often the most exposed.

    EXTERNAL LINKS ARE SABOTAGED

    Trying to drive traffic to your own website? In the “About” section, YouTube adds a warning label to every external link: “You’re about to leave YouTube. This site may be unsafe.” It looks like an antivirus alert — not a routine redirect. It scares away casual users. And even if someone knows better, they still have to click again to confirm. That’s not protection — it’s manufactured discouragement. This cheap shot, disguised as safety, serves a single purpose: preventing viewers from leaving the ecosystem. YouTube has no authority to determine what is or isn’t a “safe” site beyond its own platform.

    HUMANS CAN’T OUTPERFORM THE MACHINE

    At every level, the human loses. You can’t outsmart an algorithm that filters, sorts, buries. You can’t even decide who you want to support: the system always intervenes. Talent alone isn’t enough. Courage isn’t enough. You need to break through a machine built to elevate the dominant and bury the rest. YouTube claims to be a platform for expression. But what it really offers is a simulated discovery engine — locked down and heavily policed.

    #HSLdiary #HSLmichael

    #YouTubeCritique #AlgorithmicBias #DigitalLabour #IndieCreators #Shadowbanning #ContentModeration #PlatformJustice #AudienceManipulation

  16. As JD Vance criticizes EU's AI regulation, 12+ US states are considering algorithmic discrimination bills strikingly similar to the EU's AI Act. #AIRegulation #AlgorithmicBias #TechPolicy #JDVance #USStates #AIAct #Discrimination #GovTech #ArtificialIntelligence

  17. "In October 2021, we sent a freedom-of-information request to the Social Insurance Agency attempting to find out more. It immediately rejected our request. Over the next three years, we exchanged hundreds of emails and sent dozens of freedom-of-information requests, nearly all of which were rejected. We went to court, twice, and spoke to half a dozen public authorities.

    Lighthouse Reports and Svenska Dagbladet obtained an unpublished dataset containing thousands of applicants to Sweden’s temporary child support scheme, which supports parents taking care of sick children. Each of them had been flagged as suspicious by a predictive algorithm deployed by the Social Insurance Agency. Analysis of the dataset revealed that the agency’s fraud prediction algorithm discriminated against women, migrants, low-income earners and people without a university education.

    Months of reporting — including conversations with confidential sources — demonstrate how the agency has deployed these systems without scrutiny despite objections from regulatory authorities and even its own data protection officer."

    lighthousereports.com/investig

    #Sweden #SocialInsurance #ChildSupport #Algorithms #AlgorithmicDiscrimination #AlgorithmicBias