home.social

#aiux — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #aiux, aggregated by home.social.

  1. Engineers run the AI evals. But who decides what “good” actually means? If your criteria only measure what’s easy, your product will optimize for the wrong things. Should designers and PMs own eval criteria? Let’s debate.

    #AIEvals #ProductDesign #UX #AgenticAI #DesignLeadership #UXforAI #AIUX

    designative.info/2026/05/05/ai

  2. Users of websites like Amazon, Turo, and Redfin might find AI chatbots “useful when they answer context-specific questions, clarify complex information, and offer tailored guidance that helps users make decisions”

    nngroup.com/articles/site-ai-c #contentdesign #AIUX

  3. If agentic systems hide their reasoning, trust collapses.
    What if interaction wasn’t about issuing commands—but about questioning, negotiating, and governing autonomy? This post argues interaction is where trust is earned. #humanagentcentereddesign #AI #UXforAI #mlUX #AIUX #AgenticAI

    designative.info/2026/01/15/de

  4. Logitech’s entire UX philosophy is: what if a spreadsheet could gaslight you. Settings vanish. Devices unpair mysteriously. Ads pop like jump scares. Somewhere, an AI is learning from this—to design an even worse mouse. #AIUX #LogiFail 🐭💥

  5. Why use sliders when you could open a chatbot and say:

    "Hi there! Could you kindly increase the brightness by 12%, but keep the shadows moody? Like, noir—but for brunch. Thx 💖"

    AI isn’t replacing jobs, it’s replacing buttons.
    #AIUX #Photoshop #chatGPT

  6. Idea for AI builders:
    LLM behavior at the edge is still binary—fluid → snap.
    Proposing a graded “Breathing Sheath” between emergence and safety:
    a dynamic modulation layer using fractal timing (micro/meso/macro), intention-awareness, and agitation-damping.
    Nuance stays; safety holds.
    Could this be a root architecture for smoother, steerable LLMs?
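
    A minimal sketch of what such a graded gate might look like, in Java, with invented names, signals, and thresholds (nothing here is from the post, and real steering would act on model internals rather than a toy sampling temperature):

      // Hypothetical sketch: replace a hard safety snap (gate = 0 or 1)
      // with a continuous damping factor computed from agitation signals
      // sampled at micro/meso/macro timescales. All constants are invented.
      public class BreathingSheath {

          // Agitation signals in [0, 1]: per token (micro), per turn (meso),
          // per conversation (macro).
          public double dampingFactor(double micro, double meso, double macro) {
              double agitation = 0.5 * micro + 0.3 * meso + 0.2 * macro;
              // Smooth sigmoid gate: ~1 when calm, easing toward 0 as
              // agitation crosses the threshold; k sets how graded it is.
              double threshold = 0.6, k = 10.0;
              return 1.0 / (1.0 + Math.exp(k * (agitation - threshold)));
          }

          // Blend a "fluid" sampling temperature toward a conservative one
          // instead of switching modes outright: nuance stays, safety holds.
          public double steeredTemperature(double fluidTemp, double safeTemp,
                                           double micro, double meso, double macro) {
              double g = dampingFactor(micro, meso, macro);
              return g * fluidTemp + (1.0 - g) * safeTemp;
          }
      }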

    #LLM #AISafety #Alignment #AIResearch #ControlTheory #CognitiveArchitecture #AIDev #AIUX #AIFutures #TechFediverse

  7. “Designing AI-first UX requires treating it as a core primitive, not a gimmick. Prioritize human control, collaboration, transparency, and trust—ensuring AI earns user confidence through explainability, guardrails, and real-world impact.”

    saysomething.hashnode.dev/huma

  8. From shaping AI policies to designing user-friendly AI tools, non-technical careers in AI are booming. Learn the skills you need and how to pitch yourself successfully.

    Full article: medium.com/@taurusai2025/landi

    #AIcareers #NonTechAI #AIUX #AIethics #AIpolicy #CareerGrowth

  9. Ever wondered how developers make AI respond with such precision & flow? The secret lies in Prompt Engineering — the art of designing effective prompts that shape AI conversations.

    In this guide, we’ll uncover 7 key strategies to craft clear, contextual, & ethical AI interactions.

    drchetandhongade.com/blogs/tec
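
    As a taste of the genre (this template is illustrative, not taken from the linked guide), one widely shared strategy is to separate role, context, constraints, and task rather than sending a bare question:

      // Hypothetical illustration of a structured prompt template.
      public class PromptBuilder {
          public static String build(String role, String context,
                                     String constraints, String task) {
              return String.join("\n\n",
                  "ROLE: " + role,
                  "CONTEXT: " + context,
                  "CONSTRAINTS: " + constraints,
                  "TASK: " + task);
          }

          public static void main(String[] args) {
              System.out.println(build(
                  "You are a support assistant for a billing product.",
                  "The user is on the Pro plan and was double-charged in May.",
                  "Answer in three sentences or fewer; cite the refund policy.",
                  "Explain how the user can request a refund."));
          }
      }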

    #PromptEngineering #AI #ArtificialIntelligence #MachineLearning #AIConversations #TechBlog #DevelopersLife #FutureOfAI #AIDesign #AIDevelopment #AIUX #AIChatbots #SmartTech #DrChetanDhongade

  10. What does good #UX look like when your interface is powered by AI? "Human + Machine" is a guide to designing for the "missing middle"—where people and algorithms collaborate. #AIUX #DesignStrategy #FutureOfWork

    designative.info/2025/08/05/bo

  11. Psst… your forms could talk. They could remember. Validate. Help. Loïc Magnette shows how to give forms a brain using #LangChain4j + #Quarkus. It’s not a dream. It’s a pattern.

    Learn it now: javapro.io/2025/04/15/ai-power
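
    A rough sketch of the pattern (not Loïc Magnette’s actual code: the FormUpdate record and prompt wording are assumptions, and it presumes the quarkus-langchain4j extension):

      import dev.langchain4j.service.SystemMessage;
      import dev.langchain4j.service.UserMessage;
      import io.quarkiverse.langchain4j.RegisterAiService;

      // An AI service interface: Quarkus generates the implementation, and
      // LangChain4j maps the model's reply onto the typed FormUpdate record,
      // which is what lets a "talking form" validate as it goes.
      @RegisterAiService
      public interface FormAssistant {

          record FormUpdate(String fieldName, String value, String followUpQuestion) {}

          @SystemMessage("You help users fill out a travel booking form. " +
                         "Ask one clarifying question at a time.")
          FormUpdate assist(@UserMessage String userMessage);
      }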

    #AIUX #ConversationalUI #JAVAPRO #Java

  12. AI: Explainable Enough

    “They look really juicy,” she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep and asserting a position. Good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn’t new to the problem.

    [Image: Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.]

    Coming from a microscopy and bio background with a strong inclination towards image analysis, I had picked up deep learning as a way to be lazy in the lab. Why bother figuring out features of interest when you can have a computer do it for you? That was my angle. The issue was that in 2015 no biologist would accept any kind of deep learning analysis, and definitely not if you couldn’t explain the details.

    What the domain expert user doesn’t want:
    – An explanation of how a convolutional neural network works. Confidence scores, loss, and AUC are all meaningless to a biologist, and to a doctor too.

    What the domain expert desires:
    – Help at the lowest level of detail that they care about.
    – An AI that identifies features A, B, and C, and conveys that when you see A, B, & C together, it is likely to be disease X.

    Most users don’t care how deep learning really works. So, if you start giving them details like the IoU score of the object detection bounding box, or whether it was YOLO or R-CNN that you used, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline with the predicted label, and stop there. It’s also bad to go to the other extreme. If the AI just states the diagnosis for the whole image, then the AI might be right, but the user does not get to participate in the process. Not to mention regulatory risk goes way up.
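
    A toy illustration of that boundary (the names are mine, not the author’s): keep the internal metrics, but surface only the feature and its location to the expert.

      // The detection carries confidence and IoU internally, but the
      // domain expert only sees the mid-level finding and where it is.
      public record Detection(String label, double confidence, double iou,
                              int x, int y, int w, int h) {

          // What the expert sees: the feature and its bounding box.
          public String forUser() {
              return String.format("Feature '%s' at (%d, %d), %dx%d px", label, x, y, w, h);
          }

          // What stays in the logs: the scores that make eyes glaze over.
          public String forDebugLog() {
              return String.format("%s conf=%.2f iou=%.2f", label, confidence, iou);
          }
      }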

    This applies beyond images; consider LLMs. No one with any expertise likes a black box. Today, why do LLMs generate code instead of directly doing the thing that the programmer is asking them to do? It’s because the programmer wants to ensure that the code “works”, and they have the expertise to figure out if and when it goes wrong. It’s the same reason that vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So, in a Betty Crocker cake mix kind of way, let the user add the egg.

    Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image, or to give every detail. Generating high-quality data at that just-right level is very difficult and expensive. However, do it right and the effort pays off. The outcome is an AI-human causal prediction machine, where the causes, i.e. the mid-level features, inform the user and build confidence toward the final outcome. The deep learning part is still a black box, but the user doesn’t mind because you aid their thinking.

    I’m excited by some new developments like REX, which sort of retrofit causality onto the usual deep learning models. With improvements in performance, user preferences for detail may change, but I suspect the need for AI to be explainable enough will remain. Perhaps we will even have custom labels like ‘juicy’.

    #AI #AIAdoption #AICommunication #AIExplainability #AIForDoctors #AIInHealthcare #AIInTheWild #AIProductDesign #AIUX #artificialIntelligence #BettyCrockerThinking #BiomedicalAI #Business #CausalAI #DataProductDesign #DeepLearning #ExplainableAI #HumanAIInteraction #ImageAnalysis #LLMs #MachineLearning #StartupLessons #statistics #TechMetaphors #techPhilosophy #TrustInAI #UserCenteredAI #XAI

  13. Java devs:
    How would you design a UI for a chat-driven RPG with spells, stats, inventory, and AI responses?
    Minimalist? Tabs? Retro terminal look?
    Drop your ideas ⬇️
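
    One hypothetical take on the retro-terminal option (everything here is illustrative):

      import java.util.Scanner;

      // A read-eval-print loop with a fixed-width status line is most of
      // the "retro terminal" look; the AI backend would sit where the
      // echoed narration is.
      public class TerminalRpg {
          public static void main(String[] args) {
              int hp = 20, mana = 10;
              Scanner in = new Scanner(System.in);
              while (true) {
                  System.out.printf("HP:%-3d MANA:%-3d | > ", hp, mana);
                  String cmd = in.nextLine().trim();
                  if (cmd.equals("quit")) break;
                  System.out.println("[DM] You " + cmd + ". The cavern echoes...");
              }
          }
      }
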
    #GameUI #JavaDev #IndieDev #AIUX

  14. Bold of Google to assume I want a relationship with my command line. I opened Gemini CLI to run a model, not to process passive-aggressive feedback and digital hugs. 🤖💌 #AIUX #TechSatire #GeminiCLI #OpenSourceGaslighting

  15. 🚀 $25.73B by 2025. 38% more customer spend.
    This isn’t hype—it’s happening right now. GenAI is reshaping personalization across retail, finance, education, and healthcare. If your UX doesn’t feel like it’s reading your mind... it’s already behind.

    👀 Explore the real tech and tactics powering the future of customer experience:
    medium.com/@rogt.x1997/25-73b-

    #GenAI #CustomerExperience #HyperPersonalization #AIUX

  16. 🤖 Gemini 2.5 thinks before it answers. Flow directs scenes like a filmmaker. Ironwood crunches 42.5 exaflops.

    Google I/O 2025 wasn’t just product talk — it was a blueprint for AI that collaborates, reasons, and co-creates.

    Step into the future of participatory intelligence 🔍🎬⚡

    #GoogleIO2025 #GeminiAI #CognitiveComputing #AIUX #FlowByGoogle

    👉 medium.com/@rogt.x1997/from-qu

  17. RAG, memory, multimodality... in a form? Yep. Loïc Magnette built a talking form that can read files, validate data, and reply like a human.
    This isn’t a demo - it’s a dev-ready guide.

    Dive in: javapro.io/2025/04/15/ai-power

    #LangChain4j #Quarkus #AIUX #JAVAPRO #Java @langchain4j

  18. Forget next-gen #UX. This is next-human UX. Loïc Magnette turns boring forms into helpful convos - with validation, memory, even file upload.
    Your form, but smarter.

    Try the blueprint: javapro.io/2025/04/15/ai-power

    #LangChain4j #Quarkus #AIUX #ConversationalUI #Java

  19. Why should users adapt to your form… when the form can adapt to them? #AI flips the #UX script. Loïc Magnette shares how with memory, validation & RAG, forms become assistants.

    Read it here: javapro.io/2025/04/15/ai-power

    #LangChain4j #Quarkus #AIUX #ConversationalUI #Java @langchain4j

  20. I’ve just performed real-life Parrot Rock tests on ChatGPT, Gemini, Copilot, Claude, and Le Chat (Mistral).

    All of the apps have UI/UX issues that, if fixed, would make them more useful.

    tomaszs2.medium.com/parrot-roc

    #parrotrocktest #aiux #chatgpt #gemini #copilot #claude #lechat #mistral

  21. Greg Nudelman’s newsletter (from UX for AI) made my day: “TL/DR: If you do not have a well-run research program that will help you connect directly with your customers, walk away. If you cannot conduct even a single in-person, on-site interview with your target customers, run.” #ux #aiux #ai #userresearch

  22. 🔥 Remember, if you make things too frictionless, users may slip and injure themselves without even knowing what's going on.

    🎁 Friction, leveraged properly, can be useful to trigger the right type of reflection required for UX in AI-powered apps.

    #design #UX #UI #AI #designcommunity #UXR #AIUX #interactiondesign

    3/3 (n=3)