#inclusiveai – Public Fediverse posts
Live and recent posts from across the Fediverse tagged #inclusiveai, aggregated by home.social.
-
Recognition for international research collaboration
Prof. Giulia Barbareschi (RC Trust) receives an Honorable Mention Award at #HRI2026
Her team developed the RUSH Checklist – improving transparency, reproducibility & quality in human-robot interaction research
A key step toward more trustworthy & inclusive technologies.
https://dl.acm.org/doi/abs/10.1145/3757279.3785572
#HumanRobotInteraction #InclusiveAI #ResponsibleAI #RCTrust #ResearchExcellence
-
Welcome Nancy Bou Kamel to RC Trust!
Nancy joins the Chair of Inclusive Technology and Collective Engagement, led by Prof. Giulia Barbareschi.
She builds inclusive AI systems – from low-latency speech recognition to scalable RAG & LLM architectures.
Her goal: AI that is accessible by design and robust in real-world contexts.
How can we embed inclusion into AI from the start?
#InclusiveAI #Accessibility #MachineLearning #WomenInTech #RCTrust
-
Googleโs AI speech dataset is being hailed as a turning point for Nigerian languages, boosting voice tech and inclusive AI for local communities. Read more:
#AI #AIBaseng #LanguageTech #Nigeria #InclusiveAI
https://aibase.ng/ai-africa/googles-ai-speech-dataset-a-turning-point-for-nigerian-languages/
-
Google and African universities have launched WAXAL, an open speech dataset covering 21 Sub-Saharan African languages to support voice-enabled AI tools like speech recognition and text-to-speech – a boost for inclusive tech across the continent.
Read more:
#AI #AIBaseng #AfricaTech #LanguageTech #InclusiveAI
https://aibase.ng/ai-africa/google-and-african-university-partners-launch-waxal-speech-dataset/
-
Inclusive Research & AI
Who is included in AI research – and who is overlooked? At the AI Colloquium, Prof. Giulia Barbareschi discusses inclusive research practices and why they are essential for trustworthy, valid AI systems.
21 Jan 2026 | 9:30–10:30
TU Dortmund & Zoom
How can inclusion improve AI research?
#InclusiveAI #TrustworthyAI #HCI #DataScience #AIColloquium #Research #AccessibilityMethods
-
Discover BHASHINI, India's AI language platform promoting societal inclusivity through advanced language services for all citizens. https://english.mathrubhumi.com/technology/bhashini-ai-language-societal-inclusivity-kqkzz5hl?utm_source=dlvr.it&utm_medium=mastodon #BHASHINI #DigitalIndia #LanguageAI #InclusiveAI #AIInnovationIndia
-
Workshop Spotlight: Gen(der)AI Safety
The rise of Generative AI (GenAI) has brought transformative possibilities but also significant risks, particularly for women, girls, and marginalized communities across cultural contexts.
#InclusiveAI #GenderEquity #GenAI #Safety #Conference #Collaboration
-
Unlocking True Accessibility with Human-Verified AI
Discover how Aira is revolutionizing AI for the blind and low-vision community through human verification. Our latest white paper dives deep into the critical role of human oversight in enhancing AI accuracy, addressing issues like AI hallucinations, and delivering real-time, reliable image descriptions with Aira Access AI.
From research insights to real-world applications, see how human verification is the key to making AI more inclusive and trustworthy. A must-read for anyone passionate about AI, accessibility, and bridging the information gap.
Get your free download: https://mailchi.mp/6725af391d7b/ai-white-paper
#AccessibilityMatters #InclusiveAI #HumanVerified #AIAccessibility #BlindAndLowVision
-
Interesting PhD Position in "Inclusive Design with Artificial Intelligence" at the Digital Interactions Lab, UvA:
Deadline 30 November 2024
-
Artificial intelligence needs to be trained on culturally diverse datasets to avoid bias
#ArtificialIntelligence #AI #ChatGPT #Diversity #Bias #Data #Datasets #AICulturalInclusion #InclusiveAI #FairTech #DiverseData #LargeLanguageModels
https://the-14.com/artificial-intelligence-needs-to-be-trained-on-culturally-diverse-datasets-to-avoid-bias/