home.social

#ai-fairness — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #ai-fairness, aggregated by home.social.

  1. How To Detect Unwanted Bias In Machine Learning Models?

    Is your AI model biased?

    Discover how to identify hidden proxy variables, apply fairness metrics, and understand LLM behavior with our complete ML bias guide.

    Detecting unwanted bias in Machine Learning (ML) models is a critical step in building ethical and reliable AI. Bias can creep in at any stage—from data collection to model deployment—often reflecting historical prejudices or sampling errors.

    Here is a structured approach to identifying and measuring it.

    nbloglinks.com/how-to-detect-u

    #LLM #AI #ML #MLmodels #AIBias #AIfairness
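The post above mentions identifying hidden proxy variables, i.e. innocuous-looking features that strongly track a protected attribute. As a minimal sketch of that idea (not taken from the linked guide; the function name and synthetic data are my own), one can rank features by their absolute correlation with the protected attribute:

```python
import numpy as np

def proxy_correlations(X, feature_names, protected):
    """Rank features by absolute Pearson correlation with a protected attribute."""
    corrs = {}
    for name, col in zip(feature_names, X.T):
        # np.corrcoef returns the 2x2 correlation matrix; take the off-diagonal entry.
        corrs[name] = abs(np.corrcoef(col, protected)[0, 1])
    return sorted(corrs.items(), key=lambda kv: -kv[1])

# Synthetic example: "zip_code" is engineered to track the protected attribute,
# while "income" is drawn independently of it.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=500)
zip_code = protected + rng.normal(0, 0.1, size=500)  # strong proxy
income = rng.normal(50, 10, size=500)                # unrelated feature
X = np.column_stack([zip_code, income])
ranked = proxy_correlations(X, ["zip_code", "income"], protected)
```

High-ranking features are candidates for closer inspection; correlation is only a first-pass screen and misses nonlinear proxies.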

  2. How bias hides inside recommendation algorithms—and what new techniques reveal about gendered patterns in user embeddings. hackernoon.com/evaluating-attr #aifairness

  3. Removing gender from AI models doesn’t erase bias. Learn how systematic stereotypes persist in recommendation systems despite feature removal. hackernoon.com/can-we-ever-ful #aifairness

  4. Even after removing gender data, bias lingers in AI. Here’s what latent space analysis reveals about hidden bias in machine learning models. hackernoon.com/why-gender-bias #aifairness

  5. Uncover how hidden stereotypes shape AI recommendations and learn how new frameworks can detect and reduce bias in machine learning models. hackernoon.com/quantifying-att #aifairness

  6. A framework to detect and measure bias in recommendation algorithms, revealing how AI can unintentionally reinforce stereotypes. hackernoon.com/understanding-a #aifairness

  7. "Fairness" in AI can be measured in different ways, such as ensuring similar outcomes for individuals with similar qualifications ("individual fairness") or ensuring groups have proportional outcomes ("group fairness"). #AIFairness
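The group-fairness notion in the post above is often operationalized as demographic parity: the positive-prediction rate should be similar across groups. A minimal sketch (function name and toy data are mine, not from the post):

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Group fairness gap: absolute difference in positive-prediction
    rates between two groups (0 = parity, 1 = maximal disparity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Group 0 receives positive predictions 75% of the time, group 1 only 25%.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_difference(preds, groups)  # |0.75 - 0.25| = 0.5
```

Individual fairness, by contrast, would compare predictions for pairs of similar individuals rather than aggregate group rates; the two criteria can conflict.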

  8. After all these recent episodes, I don't know how anyone can have the nerve to say out loud that the Trump administration and the Republican Party value freedom of expression and oppose any form of censorship. Bunch of hypocrites! United States of America: The New Land of SELF-CENSORSHIP.

    "The National Institute of Standards and Technology (NIST) has issued new instructions to scientists that partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” in the skills it expects of members and introduces a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”

    The information comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases are hugely important because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

    The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deep fakes. It also adds emphasis on putting America first, asking one working group to develop testing tools “to expand America’s global AI position.”"

    wired.com/story/ai-safety-inst

    #USA #Trump #ResponsibleAI #AISafety #AIFairness #AIEthics

  9. RT by @CORDIS_EU: 🤝 @aequitasEU is part of a powerful alliance with projects like the @BIASProjectEU - the #AIFairness Cluster.
    🤖 #BIAS is committed to empowering the #AI and #HRM communities by addressing and mitigating #algorithmic #biases.

    ​👉 Learn all about it here: biasproject.eu

    🐦🔗: nitter.cz/aequitasEU/status/17

    [2024-02-14 09:00 UTC]

  10. 🌟 Excited to represent Beyond AI Collective e.V. at re:publica! 🤝

    I will participate in this year's re:publica on behalf of Beyond AI Collective! Let me know if you want to meet and discuss AI, fairness, bias detection/mitigation, and open-source Android distributions. Looking forward to meeting you there! 🌐✨

    #republica2023 #BeyondAI #AIfairness #OpenSourceAndroid #EthicsInAI #CIP #Gemeinwohl #rp23

  11. Scientific Python news of the week, episode 62 🐍⚙️🗳

    In short: Arrow Database Connectivity is born, more out-of-core in Polars, parsing, fairness, illustrated machine learning, and a name change? astrojuanlu.substack.com/p/epi Support the newsletter by subscribing via email 📫

    #arrow #polars #nlp #aiethics #aifairness #pydata #python