#aiux — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #aiux, aggregated by home.social.
-
Engineers run the AI evals. But who decides what “good” actually means? If your criteria only measure what’s easy, your product will optimize for the wrong things. Should designers and PMs own eval criteria? Let’s debate.
#AIEvals #ProductDesign #UX #AgenticAI #DesignLeadership #UXforAI #AIUX
-
Users of websites like Amazon, Turo, and Redfin might find AI chatbots “useful when they answer context-specific questions, clarify complex information, and offer tailored guidance that helps users make decisions”
https://www.nngroup.com/articles/site-ai-chatbot/ #contentdesign #AIUX
-
If agentic systems hide their reasoning, trust collapses.
What if interaction wasn’t about issuing commands—but about questioning, negotiating, and governing autonomy? This post argues interaction is where trust is earned. #humanagentcentereddesign #AI #UXforAI #mlUX #AIUX #AgenticAI
-
If AI agents start deciding for users, who fights for the guardrails? Principles fail—workflows win.
Your duty: Advocate where autonomy meets risk. Who's with me?
#artificialintelligence #businessdecisions #ethics #ethicsintech #aiethics #Accessibility #DecisionMaking #HumanComputerInteraction #HumanAgentCenteredDesign #Privacy #TrustBuilding #UX #AI #AgenticAI #DarkPatterns #DesignPrinciples #MoralPrinciples #DesignPhilosophy #WorkflowEncapsulation #aiux #uxai #mlux
-
Why use sliders when you could open a chatbot and say:
"Hi there! Could you kindly increase the brightness by 12%, but keep the shadows moody? Like, noir—but for brunch. Thx 💖"
AI isn’t replacing jobs, it’s replacing buttons.
#AIUX #Photoshop #chatGPT
-
Idea for AI builders:
LLMs still behave binary at the edge—fluid → snap.
Proposing a graded “Breathing Sheath” between emergence and safety:
a dynamic modulation layer using fractal timing (micro/meso/macro), intention-awareness, and agitation-damping.
Nuance stays; safety holds.
Could this be a root architecture for smoother, steerable LLMs? #LLM #AISafety #Alignment #AIResearch #ControlTheory #CognitiveArchitecture #AIDev #AIUX #AIFutures #TechFediverse
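One way to read the “fluid → snap” contrast above is a hard safety threshold versus a graded gate. A minimal sketch of that difference, purely illustrative (the function names, threshold, and sigmoid damping are assumptions, not the author’s proposed design):

```python
import math

def hard_gate(score: float, threshold: float = 0.5) -> float:
    """Binary edge behavior: full response below the safety threshold,
    an abrupt snap to zero at or above it."""
    return 0.0 if score >= threshold else 1.0

def breathing_sheath(score: float, threshold: float = 0.5, softness: float = 0.1) -> float:
    """Graded damping: response intensity eases off smoothly around the
    threshold instead of snapping, so nuance survives near the edge
    while safety still holds at the extreme."""
    return 1.0 / (1.0 + math.exp((score - threshold) / softness))
```

With `softness` near zero the graded gate converges to the hard one; the “fractal timing” and “intention-awareness” in the post would presumably modulate `threshold` and `softness` dynamically.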
-
"Designing AI-first UX requires treating it as a core primitive, not a gimmick. Prioritize human control, collaboration, transparency, and trust—ensuring AI earns user confidence through explainability, guardrails, and real-world impact. #AUXDesign #MicrosoftCopilot #AIUX #HumanCenteredDesign"
-
From shaping AI policies to designing user-friendly AI tools, non-technical careers in AI are booming. Learn the skills you need and how to pitch yourself successfully.
#AIcareers #NonTechAI #AIUX #AIethics #AIpolicy #CareerGrowth
-
Ever wondered how developers make AI respond with such precision & flow? The secret lies in Prompt Engineering — the art of designing effective prompts that shape AI conversations.
In this guide, we’ll uncover 7 key strategies to craft clear, contextual, & ethical AI interactions.
https://drchetandhongade.com/blogs/technology/prompt-engineering-guide/
#PromptEngineering
#AI
#ArtificialIntelligence
#MachineLearning
#AIConversations
#TechBlog
#DevelopersLife
#FutureOfAI
#AIDesign
#AIDevelopment
#AIUX
#AIChatbots
#SmartTech
#DrChetanDhongade
-
What does good #UX look like when your interface is powered by AI? "Human + Machine" is a guide to designing for the "missing middle"—where people and algorithms collaborate. #AIUX #DesignStrategy #FutureOfWork
-
Psst… your forms could talk. They could remember. Validate. Help. Loïc Magnette shows how to give forms a brain using #LangChain4j + #Quarkus. It’s not a dream. It’s a pattern.
Learn it now: https://javapro.io/2025/04/15/ai-powered-form-wizards-chat-click-done/
-
AI: Explainable Enough
They look really juicy, she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep and asserting a position. Good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn’t new to the problem.
Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.
Coming from a microscopy and bio background with a strong inclination towards image analysis, I had picked up deep learning as a way to be lazy in lab. Why bother figuring out features of interest when you can have a computer do it for you, was my angle. The issue was that in 2015 no biologist would accept any kind of deep learning analysis, and definitely not if you couldn’t explain the details.
What the domain expert user doesn’t want:
– How a convolutional neural network works. Confidence scores, loss, and AUC are all meaningless to a biologist, and also to a doctor.
What the domain expert desires:
– Help at the lowest level of detail that they care about.
– AI that identifies features A, B, and C, and conveys that when you see A, B, & C together, disease X is likely.
Most users don’t care how deep learning really works. So, if you start giving them details like the IoU score of the object-detection bounding box, or whether it was YOLO or R-CNN that you used, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline with the predicted label, and stop there. It’s also bad to go to the other extreme. If the AI just states the diagnosis for the whole image, then the AI might be right, but the user does not get to participate in the process. Not to mention that regulatory risk goes way up.
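The “draw a box, name the feature, stop there” rule amounts to a thin presentation layer over raw detector output. A sketch under stated assumptions (the field names, labels, and values are all hypothetical, not from any real model):

```python
def explain_enough(detections: list[dict]) -> str:
    """Render detector output at the level a domain expert cares about:
    named features and where they are. Internals (confidence, IoU,
    architecture) stay in the data but are deliberately left out of
    the user-facing text."""
    lines = []
    for det in detections:
        x, y, w, h = det["box"]
        lines.append(f"- {det['label']} found near ({x}, {y})")
    return "\n".join(lines)

# Hypothetical model output: confidence is present, but never shown.
detections = [
    {"label": "feature A", "box": (120, 80, 40, 40), "confidence": 0.91},
    {"label": "feature B", "box": (200, 150, 30, 25), "confidence": 0.78},
]
print(explain_enough(detections))
```

The design choice is that the filtering happens in the presentation layer, so the internals remain available for auditing without leaking into the expert’s view.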
This applies beyond images; consider LLMs. No one with any expertise likes a black box. Today, why do LLMs generate code instead of directly doing the thing that the programmer is asking them to do? It’s because the programmer wants to ensure that the code “works”, and they have the expertise to figure out if and when it goes wrong. It’s the same reason that vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So, in a Betty Crocker cake-mix kind of way, let the user add the egg.
Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image, or to give every detail. Generating high-quality data at that just-right level is very difficult and expensive. However, do it right and the effort pays off. The outcome is an AI-human causal prediction machine, in which the causes, i.e. the mid-level features, inform the user and build confidence towards the final outcome. The deep learning part is still a black box, but the user doesn’t mind because you aid their thinking.
I’m excited by some new developments like REX, which sort of retrofit causality onto standard deep learning models. With improvements in performance, user preferences for detail may change, but I suspect the need for AI to be explainable enough will remain. Perhaps we will even have custom labels like ‘juicy’.
#AI #AIAdoption #AICommunication #AIExplainability #AIForDoctors #AIInHealthcare #AIInTheWild #AIProductDesign #AIUX #artificialIntelligence #BettyCrockerThinking #BiomedicalAI #Business #CausalAI #DataProductDesign #DeepLearning #ExplainableAI #HumanAIInteraction #ImageAnalysis #LLMs #MachineLearning #StartupLessons #statistics #TechMetaphors #techPhilosophy #TrustInAI #UserCenteredAI #XAI
-
Bold of Google to assume I want a relationship with my command line. I opened Gemini CLI to run a model, not to process passive-aggressive feedback and digital hugs. 🤖💌 #AIUX #TechSatire #GeminiCLI #OpenSourceGaslighting
-
🚀 $25.73B by 2025. 38% more customer spend.
This isn’t hype—it’s happening right now. GenAI is reshaping personalization across retail, finance, education, and healthcare. If your UX doesn’t feel like it’s reading your mind... it’s already behind.
👀 Explore the real tech and tactics powering the future of customer experience:
https://medium.com/@rogt.x1997/25-73b-by-2025-how-generative-ai-is-rewriting-customer-experience-economics-45ac3a61bce4
#GenAI #CustomerExperience #HyperPersonalization #AIUX
-
🤖 Gemini 2.5 thinks before it answers. Flow directs scenes like a filmmaker. Ironwood crunches 42.5 exaflops.
Google I/O 2025 wasn’t just product talk — it was a blueprint for AI that collaborates, reasons, and co-creates.
Step into the future of participatory intelligence 🔍🎬⚡
#GoogleIO2025 #GeminiAI #CognitiveComputing #AIUX #FlowByGoogle
-
RAG, memory, multimodality... in a form? Yep. Loïc Magnette built a talking form that can read files, validate data, and reply like a human.
This isn’t a demo - it’s a dev-ready guide. Dive in: https://javapro.io/2025/04/15/ai-powered-form-wizards-chat-click-done/
#LangChain4j #Quarkus #AIUX #JAVAPRO #Java @langchain4j
-
Forget next-gen #UX. This is next-human UX. Loïc Magnette turns boring forms into helpful convos - with validation, memory, even file upload.
Your form, but smarter. Try the blueprint: https://javapro.io/2025/04/15/ai-powered-form-wizards-chat-click-done/
-
Why should users adapt to your form… when the form can adapt to them? #AI flips the #UX script. Loïc Magnette shares how with memory, validation & RAG, forms become assistants.
Read it here: https://javapro.io/2025/04/15/ai-powered-form-wizards-chat-click-done/
#LangChain4j #Quarkus #AIUX #ConversationalUI #Java @langchain4j
-
The newsletter of Greg Nudelman (from UX for AI) made my day: “TL/DR: If you do not have a well-run research program that will help you connect directly with your customers, walk away. If you cannot conduct even a single in-person, on-site interview with your target customers, run.” #ux #aiux #ai #userresearch
-
🔥 Remember, if you make things too frictionless, users may slip and injure themselves without even knowing what's going on.
🎁 Friction, leveraged properly, can be useful to trigger the right type of reflection required for UX in AI-powered apps.
#design #UX #UI #AI #designcommunity #UXR #AIUX #interactiondesign
3/3 (n=3)