home.social

#trustworthyai — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #trustworthyai, aggregated by home.social.

  1. 🤖 Hidden instructions shape how AI chatbots behave.
    Research from RC Trust by Anna Neumann, Yulu Pi & Jat Singh is featured in the Washington Post 📰
    The article explores so-called system prompts and questions of transparency, bias & user control in generative AI.
    rc-trust.ai/news/news-detail/u
    #AI #ResponsibleAI #TrustworthyAI

  4. New at RC Trust: Magdalena Wischnewski leads the Young Investigator Group Human Factors in AI Systems 🤖🧠
    Her research focuses on how AI shapes human thinking, emotions & decisions – and how trust in AI can be better understood and calibrated.
    🔗 rc-trust.ai/news/news-detail/u
    #TrustworthyAI #HumanCenteredAI #UARuhr

  5. 🔎🤖 How does generative AI change web search?
    At #ACL2026 🇺🇸, AISOC presents research on:
    📄 generative vs. traditional search
    🌐 source selection & diversity
    ⚖️ transparency & trust
    💡 Key point: AI doesn’t just retrieve information – it reshapes what users see.
    rc-trust.ai/news/news-detail/h
    #TrustworthyAI #NLP #MachineLearning #AIResearch #DigitalSociety

  7. 🤖✨ Welcome Tamara Stojanovska to AISOC at RC Trust!
    Her PhD research focuses on:
    🧠 understanding the deeper capabilities of LLMs
    🌍 evaluating AI in real-world tasks
    🔐 building trustworthy and reliable AI agents
    Beyond research:
    🏐 volleyball
    🍳 cooking
    🛼 inline skating
    rc-trust.ai/news/news-detail/l
    #TrustworthyAI #MachineLearning #LLMs #AIResearch #PhDLife

  9. 🤖🔍 Why do AI systems make certain decisions – and can we trust them?
    The new ExTraSafe Workshop (KI 2026 🇩🇪) focuses on explainability, transparency & safety.
    With Daniel Neider (TU Dortmund / RC Trust) among the organizers.
    📅 Call for Papers deadline: May 15, 2026
    Join the conversation 👇
    rc-trust.ai/news/news-detail/m
    #TrustworthyAI #ExplainableAI #AIresearch #KI2026
    Photo credit: “Bremer Stadtmusikanten” by Cat, CC BY-NC-SA 2.0

  11. 🤖 AI ethics needs a human perspective.
    How do people judge, follow, or resist AI decisions?
    Nils Köbis (RC Trust) co-edits a Special Issue on the psychology of AI ethics.
    📅 Deadline: May 31, 2026
    rc-trust.ai/news/news-detail/a
    #AIethics #HumanAI #Research #TrustworthyAI

  12. 🤖 New team member at RC Trust:
    Carolina Gerli joins HUAM to study how AI shapes anti-corruption, transparency & accountability ⚖️
    How do people interact with AI in these critical decisions?
    rc-trust.ai/news/news-detail/u
    #TrustworthyAI #AntiCorruption #Research

  14. 🤖 AI is becoming part of our social lives.
    Inês Terrucha (RC Trust) joins IAS Amsterdam 🇳🇱 to study how everyday AI interactions shape connection & well-being.
    Do these “AI ties” strengthen or replace human relationships?
    rc-trust.ai/news/news-detail/w
    #TrustworthyAI #HumanAI #Research

  15. Last week at the University of Applied Sciences St. Pölten, the CERTAIN Project hosted a technical workshop and presented at the SAINT Conference 2026.

    Our focus: building trust directly into the ML pipeline.

    As AI evolves, aligning with the EU AI Act and General Data Protection Regulation is a major challenge. CERTAIN tackles this by turning legal requirements into machine-readable processes.

    🔗 certain-project.eu/certain-pro

    #AI #MLOps #TrustworthyAI #EUAIAct #CERTAINProject

  20. Feel free to reach out if you’d like to connect and chat about these papers or our other work. See you in Barcelona!

    #CHI2026 #HCI #AI #TrustworthyAI #SocialMedia #Journalism

    (7/7)

  21. Authors: Federico Marcuzzi (INSAIT - Institute for Computer Science, Artificial Intelligence and Technology), Xuefei Ning (Tsinghua University), Roy Schwartz (The Hebrew University of Jerusalem), and Iryna Gurevych (UKP Lab, Technische Universität Darmstadt and ATHENE Center).

    See you at #EACL2026 in Rabat 🕌!

    #UKPLab #NLProc #ResponsibleAI #Quantization #MLSafety #Fairness #TrustworthyAI #ModelCompression #LLMSafety #EthicalAI #NLP #AIResearch

  26. Subscribe to our newsletter 📬🤖

    Want insights into trustworthy AI, security & digital futures?

    RC Trust shares:
    ✨ Research highlights
    🎓 Inspiring scientists
    🌍 Events & collaborations
    🚀 What’s next in AI

    Whether you’re studying, researching, shaping decisions or simply curious about the future of tech – stay connected.

    Sign up now 💙
    rc-trust.ai/news/stay-informed

    Let’s build a digital future we can rely on. 🌐✨

    #TrustworthyAI #FutureOfTech #AIResearch #ScienceCommunication #CyberSecurity

  27. 🌍 We’re happy to welcome Jacob Grytzka to the Environmetrics group at RC Trust (TU Dortmund University)! 🎉
    His research focuses on interpretable statistical models for Earth system simulations and improving uncertainty quantification in climate projections. 📊🌡️
    Why this matters: Better understanding of environmental change supports more transparent and reliable decision-making – from infrastructure to climate adaptation. 🏗️🌱
    Welcome, Jacob! 🚀
    #ClimateScience #DataScience #TrustworthyAI
    📸: Julian Welz

  28. Work by Nils Dycke & Iryna Gurevych (Ubiquitous Knowledge Processing (UKP) Lab, Technische Universität Darmstadt and National Research Center for Applied Cybersecurity ATHENE)

    See you at #EACL2026 in Rabat 🕌!

    #UKPLab #LLMs #PeerReview #AIforScience #TrustworthyAI #NLP #Evaluation

  33. 🤖⚙️ How much trust does AI deserve?
    At the 2nd AI Workshop in Dortmund, Nicole Krämer (Social Psychology, Media & Communication) spoke about calibrated trust – finding the right balance in relying on AI.
    🧠 Sven Mayer (Human-AI Interaction) highlighted intelligent assistance systems that help humans and machines work better together.
    💡 A key takeaway: trustworthy AI is not only about technology – it’s about people, context, and responsibility.
    #AIWorkshop2026 #TrustworthyAI #FutureOfWork

  34. 🤖🚗 AI is moving into the physical world – but how do we verify it?
    Prof. Daniel Neider from the Department of Computer Science at TU Dortmund University joined a Shonan Meeting 🇯🇵, a selective research seminar bringing together experts in AI, formal methods, and robotics.
    The focus:
    🔬 LLM-guided synthesis
    🧪 testing
    📐 verification of learning-enabled cyber-physical systems.
    Key research for building trustworthy AI.
    #TrustworthyAI #ReliableAI #CyberPhysicalSystems #AIresearch

  35. 🚀 PhD position in Reliable AI
    The Reliable AI Group at TU Dortmund & RC Trust is looking for a PhD researcher to work on the DFG project Temporal Logic Sketching for AI 🤖📐
    🔬 Research: logic + machine learning
    💰 Fully funded (3 years)
    🌍 International research environment
    💻 GPU cluster & conference support
    📩 Apply: [email protected]
    ⏳ Deadline: March 31, 2026
    #PhDPosition #ReliableAI #TrustworthyAI #MachineLearning #AcademicJobs

  36. Chatbots increasingly recommend products, services or even financial advice. 🤖💬

    On 12 March, Nicole Krämer, Scientific Director of RC Trust, joins a policy discussion at the NRW Ministry for Consumer Protection at the event
    “Chatbot and AI Agent: (Not) a Friend and Helper?”

    The panel will explore trust, transparency and risks in AI-driven communication.

    #AI #TrustworthyAI #ConsumerProtection #DigitalPolicy #AIethics #HumanAI #RCTrust

    Photo: Till Niermann – CC BY-SA 3.0 (edited)

  37. 🤖📊 How do biases in AI systems change over time?

    In a new study in Applied Stochastic Models in Business and Industry, Meltem Aksoy (TU Dortmund University) analyzes thousands of responses generated by different versions of ChatGPT.

    The results show: model updates can shift political tendencies – but recognizable patterns remain.

    Together with Markus Pauly and the RC Trust, the research highlights why AI systems need continuous statistical monitoring.

    #TrustworthyAI #Statistics #DataScience

  38. Subscribe to the RC Trust Newsletter 📬🤖

    Stay updated on research about trustworthy AI, cybersecurity and digital society.

    We share publications, events and the people shaping responsible digital innovation – relevant for academia, public institutions and anyone interested in the future of technology.

    Join here:
    rc-trust.ai/news/stay-informed

    #TrustworthyAI #AIresearch #CyberSecurity #DigitalSociety

  40. 🚨 A viral video showed a plane emergency landing on the A3 in Duisburg.

    It never happened. The footage was AI-generated.

    In a WDR interview, Bianca Nowak explains how synthetic media influences perception, credibility & public discourse 🤖🧠

    At RC Trust, we study how people understand and evaluate AI.

    How do you assess whether content is real?

    #TrustworthyAI #AIethics #MediaLiteracy #ScienceCommunication #RCTrust

    Photo: Sebastian Meinken

  41. 🤖 Is Information Digital?

    At the AI Colloquium, Edward A. Lee (UC Berkeley) explores whether objective observation can ever fully capture physical reality. 📡🌍

    Using concepts from computer science & information theory, he argues that some knowledge may require embodied interaction – even for machines. 🚲🤖

    📅 5 March | 13:00–15:00 | TU Dortmund

    What are the limits of digital representation?

    #AIColloquium #CyberPhysicalSystems #TrustworthyAI #PhilosophyOfAI

    Photo: Hesham Elsherif/TU Dortmund

  42. 📊✨ More than 300 students explored statistics at TU Dortmund’s Day of Statistics!

    From sports data ⚽ and medicine 💊 to fake news detection 📰 and hands-on probability experiments 🚀 – the campus turned into a laboratory of curiosity.

    Co-organized by Markus Pauly, highlighting how strong statistical education underpins trustworthy data science.

    How do we inspire the next generation for #Statistics?

    #DataLiteracy #STEM #TrustworthyAI #Education #TUdortmund #ScienceOutreach

    Photo: Julia Welz

  43. 🤖⚖️ Can AI systems ever be truly objective?

    On 27 February 2026, DAGStat hosts the symposium “Data Ethics and AI” in Berlin 🇩🇪
    Experts will discuss fairness, bias & statistical transparency in AI-driven systems 📊

    🎓 Featuring Dr. Henrike Weinert (TU Dortmund University), linking data literacy & ethics.

    ⏳ Register by 20 February 2026
    📧 [email protected]

    #DataEthics #TrustworthyAI #Statistics #ResponsibleAI #DAGStat #Berlin #AIethics

    Photo: DAGStat

  44. We are conducting research to better understand how people from diverse backgrounds perceive and experience Artificial Intelligence (AI) in everyday life, work, and society.

    👉 docs.google.com/forms/d/e/1FAI

    All responses are anonymous and confidential.

    #ArtificialIntelligence #AI #Research #TrustworthyAI #DigitalTransformation

  45. 🔍 We’re hiring 2 Postdocs (m/f/d) in Verification & Formal Methods at TU Dortmund University (Reliable AI Group).

    While focused on formal guarantees, we also welcome applications in safe/secure ML and AI quality assurance. 🤖✨

    📅 Deadline: March 10, 2026
    📩 [email protected]

    Join an international, collaborative research environment with access to strong infrastructure & GPU resources.

    #Postdoc #FormalMethods #Verification #TrustworthyAI #AIResearch #MachineLearning

  46. 🤖✨ We’re happy to welcome Leon Swazinna as a PhD student at the Chair of Artificial Intelligence and Society.

    Leon researches large language models and asks critical questions:
    How can we detect when AI appears confident but is actually wrong?
    And what security risks should users be aware of in real-world applications? 🔍🔐

    His work contributes to more trustworthy and responsible AI systems.

    #TrustworthyAI #LLMs #PhD #AIResearch #DigitalSociety

  48. 🏥 Synthetic Data & Trustworthy Health AI

    How can AI learn from health data without violating privacy? At the AI Colloquium, Allan Tucker shares lessons from synthetic health data generation – covering bias, concept drift, and regulation in evolving healthcare systems. 🧬📊⏳

    📅 4 Feb 2026 | ⏰ 9:30–10:30
    💻 Online via Zoom

    💬 What role should synthetic data play in medical AI?

    #HealthAI #SyntheticData #TrustworthyAI #DataScience #AIColloquium #EthicalAI #OpenScience