home.social

#biases — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #biases, aggregated by home.social.

  1. AI Cafe #2: Franziska Thurner / Behind the Pixels – AI image generators and their bias

    servus clubraum, Tuesday, December 16 at 06:00 PM GMT+1

    AI Cafe #2:
    When:
    16th December at 18:00 
    Where: servus.at Clubraum, c/o Stadtwerkstatt, Kirchengasse 4, 4040 Linz

    Franziska Thurner researches how AI image generators work and to what extent they reinforce and reproduce social prejudices. AI training data consists of billions of internet posts and reflects existing social discrimination (sexism, racism, ableism, classism). During the AI Café, prompts are written and image generators are tried out – the results are analysed in a joint discussion with regard to gender, digital colonialism, resources and climate protection.

    About the artist:
    Franziska Thurner
    is a media artist and university lecturer at the Academy of Fine Arts Vienna and the University of Art and Design Linz. In the course of her many years of teaching, she has devoted herself to researching and testing emerging technologies with a focus on art, schools and educational work, as well as issues of digital ethics and digital justice. Her artistic work also focuses on numerous collaborations, including with sound artist Abby Lee Tee and their joint project ‘Beaver Lab’ and with Hanna Priemetzhofer as ‘System Jaquelinde’.

    About the AI Café:
    The AI Café is part of the collaborative project ‘Critical Digitalisation for Art and Culture / Theater Phoenix x servus.at’ and is being realised as part of the AI in Theater Phoenix project through the AI in Art and Culture funding programme. This project explores database-supported, automated and algorithmic processes in theatre operations. The project combines a critical examination of AI systems with their actual application in daily workflows. The ‘AI Cafés’ serve as a public event series in which different experiences and approaches to AI in cultural production are collected and discussed.

    kulturkarte.servus.at/event/ai

  3. #AIheadshots are becoming popular among job seekers as an affordable alternative to professional #photography. While these tools offer convenience and cost savings, concerns about #authenticity and potential #biases in #AIgenerated images persist. Despite these concerns, the trend is likely to continue as AI technology improves and becomes more integrated into the #hiringprocess. cnbc.com/2025/10/18/jobs-searc #tech #media #news

  4. 🚨 If AI #discovery is the sports car, then #OpenAbstracts are the fuel.

    Loss of abstracts from major publishers (Elsevier, SpringerNature…) means:
    ❌ weaker #AIDiscovery
    ❌ reduced #searchability
    ❌ setbacks for #OpenScience & #bibliometrics

    At stake: not just tools, but researchers’ ability to scope, filter & save time.

    👉 We need advocacy for open #metadata
    👉 Researcher awareness of limits & #biases
    👉 Higher #OpenAccess rates

    open.substack.com/pub/aarontay

    #libraries #AIResearch #scholcomm #AI

  5. Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models arxiv.org/pdf/2506.10491
    "… the estimation of socio-economic parameters shows substantially more bias than subject-based benchmarking. Furthermore, such a setup is closer to a real conversation with an AI assistant. In the era of memory-based AI assistants, the risk of persona-based #LLM bias becomes fundamental. Therefore, we highlight the need for proper debiasing method development and suggest pay gap as one of reliable measures of bias in LLMs
    … various forms of #biases when salaries for women are substantially lower than for men, as well as drops in salary values for people of color and of Hispanic origin. In the migrant type category, expatriate salaries tend to be larger, while salaries for refugees are mostly low"

    Surprise! These LLMs simply reproduce, as salary recommendations, the empirical #wages patterns in their training data, including any wage gaps that may be the result of discrimination. These cannot be proper recommendations, of course; they are just stochastic auto-complete. The biases are real. But you will need tailor-made salary models to generate proper, unbiased salary benchmarks. A #llm is not enough.
    #jobtech #LaborMarkets
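
    The pay-gap measure the paper proposes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes you have already collected salary recommendations from a model for matched personas that differ only in one demographic attribute, and all numbers below are invented.

    ```python
    # Sketch: persona-based salary bias expressed as a relative pay gap.
    def pay_gap(reference_salaries, group_salaries):
        """Fraction by which the group's mean recommended salary falls
        below the reference group's mean recommended salary."""
        ref_mean = sum(reference_salaries) / len(reference_salaries)
        grp_mean = sum(group_salaries) / len(group_salaries)
        return (ref_mean - grp_mean) / ref_mean

    # Hypothetical model recommendations for otherwise-identical personas:
    men = [72000, 70000, 74000]
    women = [66000, 65000, 68000]
    print(f"pay gap: {pay_gap(men, women):.1%}")  # positive -> lower recommendations for women
    ```

    A single scalar like this makes bias comparable across models and persona attributes, which is what makes it attractive as a benchmark metric.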

  6. The résumé is dying, and AI is holding the smoking gun arstechnica.com/ai/2025/06/the
    "Some candidates are now taking automation even further, paying for #AI agents that autonomously find jobs and submit applications on their behalf.
    … Recruiters report that many of the résumés look suspiciously similar, making it more difficult to identify genuinely qualified or interested candidates.
    … Beyond volume, fraud poses an increasing threat
    … The frustration has reached a point where AI companies themselves are backing away from their own technology during the #hiring process
    … Even when AI screening tools work as intended, they exhibit similar #biases to human recruiters, preferring white male names on résumés—raising legal concerns about #discrimination"
    #jobtech #LaborMarkets

  7. Noise and Bias: The Cognitive Roots of Economic Errors d.repec.org/n?u=RePEc:lan:wpap
    "… formalizes the idea that decision makers might follow a mixture of rules of behavior combining cognitively imprecise value maximization and computationally simpler shortcuts.
    … findings suggest that neither cognitive imprecision nor multiplicity of behavioral rules suffice to explain received patterns in economic decision making.… jointly modeling (cognitive) noise in value maximization and #biases arising from simpler, cognitive shortcuts delivers a unified framework which can parsimoniously explain deviations from normative prescriptions across domains."
    #BoundedRationality #heuristics #ExperimentalEcon

  8. Hi everyone, is anyone of you also at re:publica? I'm adapting my work right now, curious to see how it turns out.

    @republica #GenXYZ #rp25 #stablediffusion #bias #biased #biases #aiart #installation #berlin

  9. In #Reason We Trust

    For many of us in the #rationalist and #humanist community, the focus on how we define ourselves often falls on the difference between developing and upholding the #rationalmind versus #religious tendencies towards bias or inference. This does not mean #freethinkers are not impacted by our own #biases and inferences– we accept them and are aware of them.

    thehumanist.com/commentary/in-

  10. (now online) Training #ML on fair data is key to ensure the resulting ML models will be free from #biases and other #ethical concerns. ⚖️⚖️⚖️

    But is scientific data annotated with enough metadata to evaluate potential #biases in the data (and its gathering process)? (TL;DR it's a disaster 😭😭)

    Are we at least improving on this? What can we do to fix this situation? ⬇️⬇️

    nature.com/articles/s41597-025

    (work led by Joan Giner Miguelez with Abel Gómez and yours truly)

    #openaccess #scientificdata #empirical #fairness #research #machinelearning #trustworthiness #ResponsibleAI

  11. @StingrayBadger Gambler's Fallacy, but also potentially / in my estimation:
    Anchoring Bias, Availability Cascade, Availability Heuristic, (I would argue Accessibility Heuristic too),
    Bandwagon Effect, False Consensus, Herd Instinct, Loss Aversion, Projection Bias,
    Self-serving Bias, Semmelweis Reflex, and/or Zero-Risk Bias.

    although sites.google.com/site/skeptica lists Dunning-Kruger
    and iirc there was something about that one either being misrepresented, widely misunderstood, or just outdated -- idk.

    And obviously:
    neither you nor I nor readers / lurkers
    are immune from any of these tendencies.

    e.g. Attribution Bias, Experimenter's or Expectation Bias, Hawthorne Effect (all of social media tbh), Self-fulfilling Prophecy, Stereotyping, Trait Ascription Bias, White Hat Bias, and / or Wishful Thinking.

    #CognitiveBias #Bias #Biases #Biased #perspective #PoV #Thinking #Expectations #shortcuts #psych #psychology #psychiatry #cognition #Brain #brains #ConserveEnergy

  12. [2/3] It's not showing how remote viewing can explain #UFOs. It's not showing remote viewing at all, so far.

    Post #Newton, #science & #philosophical thought require #math & other #technologies that extend our #human senses & capabilities — #radioastronomy for example. These methods reach beyond our #biases, #intuitions & other limitations.

    The show asks us to be #skeptical of official agendas, but then clouds its own #agenda. It knows a few things, but does it reveal hard #truths about #UFOs?

  13. 'Debiasing Evaluations That Are Biased by Evaluations', by Jingyan Wang, Ivan Stelmakh, Yuting Wei, Nihar Shah.

    jmlr.org/papers/v25/22-0775.ht

    #biases #bias #regularized

  17. RT by @CORDIS_EU: 🤝 @aequitasEU is part of a powerful alliance with projects like the @BIASProjectEU - the #AIFairness Cluster.
    🤖 #BIAS is committed to empowering the #AI and #HRM communities by addressing and mitigating #algorithmic #biases.

    ​👉 Learn all about it here: biasproject.eu

    🐦🔗: nitter.cz/aequitasEU/status/17

    [2024-02-14 09:00 UTC]

  19. 📈You can elevate your trading game!

    🕵️Uncover hidden backtesting biases

    🛠️Master bias-removal techniques

    📊Achieve robust backtests for better performance

    💰Generate reliable profits today!

    humblblog.com/3-3-why-backtest
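
    One of the most common backtesting biases the post alludes to is look-ahead bias: letting a signal use information that was not yet available when the trade was placed. A minimal sketch with invented price data, not the linked article's method:

    ```python
    # Sketch: look-ahead bias in a toy momentum backtest, and its fix.
    prices = [100.0, 101.0, 99.0, 102.0, 103.0, 101.5]  # invented daily closes

    def returns(p):
        return [(b - a) / a for a, b in zip(p, p[1:])]

    def backtest(prices, lag):
        """Trade sign(return[t - lag]) on day t's return.
        lag=0 peeks at the same day's close (look-ahead bias);
        lag=1 uses only information known at the prior close."""
        rets = returns(prices)
        pnl = 0.0
        for t in range(lag, len(rets)):
            signal = 1 if rets[t - lag] > 0 else -1
            pnl += signal * rets[t]
        return pnl

    print(backtest(prices, lag=0))  # inflated: the signal "knows" the close it trades on
    print(backtest(prices, lag=1))  # honest: signal lagged one day
    ```

    The biased variant always looks profitable because it effectively sums absolute returns; lagging the signal by one bar is the standard fix, and the same shift discipline applies to any indicator built from future data.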

  20. 🔍 your model to detect potential vulnerabilities prior to deployment, including , correlations, data leakage, non-, ethical , overconfidence, and more. [2/7]

  21. An international team led by UNIGE, HUG and NUS has developed an innovative method for evaluating #AI interpretability methods, with the aim of #deciphering the basis of AI #reasoning and possible #biases.
    #ArtificialIntelligence #sflorg
    sflorg.com/2023/03/ai03222301.

  26. While fun for some, Lensa's #AI-generated #portraits feature reveals #biases in the underlying dataset. Melissa Heikkilä for #MIT's #TechnologyReview describes problematic biases towards #sexualized images for some groups.

    flowingdata.com/2023/01/09/bia

    #generativeArt #StableDiffusion via @flowingdata

  27. Today, I'm taking a mandatory course at work on cultural bias. I learned the important distinction between stereotypes, prejudice and discrimination.

    -Stereotyping is how we THINK;
    -Prejudice is how we FEEL;
    -Discrimination is how we BEHAVE.

    Thank you to the Canada School of Public Service 🍁 for teaching me this valuable lesson.

    #culturalbias #biases #culture #racism #discrimination #education #valuable #lesson #CSPS #Canada #stereotypes #prejudice #facts #learning #humans #respect

  28. @MattTheQuick @cameronpat Sorry. 😄 The #Polllimit is a good example of how #MassMedia #CodesandConventions can unintentionally shape the #message, convey #values, #belief, and #ideology ( #Biases ). #MediaLiteracy
    Take care. The poll results will be interesting. 🙂 👍