#humanvsai — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #humanvsai, aggregated by home.social.
-
The Uncanny Valley and the Rising Power of Anti-AI Sentiment
https://localscribe.co/posts/uncanny-valley-and-rising-power-of-anti-ai-sentiment/
#HackerNews #UncannyValley #AntiAISentiment #AIethics #TechnologyDebate #HumanVsAI
-
@Cassandrich @Sobri | Zoe (she/her) @Scott Jenson @Phil Dennis-Jordan Also, an image doesn't always need the exact same alt-text whenever it's posted somewhere.
Alt-text must adapt to the context in which an image is posted, and also to the place where it's posted: the same image, even within a very similar context, needs a different alt-text on the Fediverse than on commercial social media or on a static website. Lastly, and this ties in with the Fediverse requiring different alt-texts, the audience must be taken into consideration.
Alt-text stored in image metadata can't do any of this. Neither can an LLM, unless it's explicitly prompted to, and even then the result is questionable.
Many Mastodon users dream of pressing a single button, or not even that, and having some AI automagically generate a perfect alt-text for their image: perfectly accurate, with exactly the details required for the context and for both the intended and the expected audience, all while following every last image description and alt-text rule out there to a tee.
It's perfectly understandable. Mastodon had felt like child's play until users were suddenly pressured into describing each and every image they post. Worse yet, it seems like over 90% of all Mastodon users do everything on a phone with no access to a hardware keyboard whatsoever. So they have to fumble their alt-texts into an on-screen keyboard without even being able to see the image they're describing.
I'm neither on Mastodon nor on a phone. I've got the luxury of a desktop computer with a hardware keyboard and the ability to touch-type. So I don't have a problem with writing my image descriptions myself, with no help from an AI.
In fact, my own original images all cover an extremely niche topic. It's so obscure that no AI will ever be able to describe such images, much less explain them at my level of accuracy and detail. (Explanations go into the post text, by the way, not into the alt-text, but I always include an additional image description in the post text for my original images anyway.)
I simply know things that no AI will ever know, not ChatGPT and not Claude either, at least not at the point in time when they need that knowledge. And I can see things that will always remain invisible to AIs.
You can develop better models all you want. But they'll never be able to do all that.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #AIVsHuman #HumanVsAI
-
@Woochancho @Diego Martínez (Kaeza) 🇺🇾 @🅰🅻🅸🅲🅴 (🌈🦄) Especially whenever humans have advantages over LLMs.
When I describe my own original images, I have two advantages.
One, I know much more about the contents of the image than any AI, because my original images always show something from extremely obscure 3-D virtual worlds. On top of that, I may add some extra insider knowledge or explain pop-cultural references in the long description in the post if it helps readers understand the image and its descriptions.
Two, the LLM can only look at the image at its limited resolution. That's all it has. In contrast, when I describe my images, I don't just look at the images; I look at the real thing in-world, at a nearly infinite resolution.
For example, an LLM can only generate a description from a picture of a virtual building. But when I describe it, my avatar is in-world, standing right in front of the building whose picture I'm describing. I can move the avatar around, I can move the camera around, I can zoom in on anything. I can correctly identify that four-pixel blob as a strawberry cocktail, whereas the LLM doesn't even notice it's there.
I've actually run two tests with LLaVA: I fed it two images I had previously described myself, just to see what would happen. The results were abysmal. LLaVA hallucinated, misinterpreted things and so forth, and its descriptions, even after it was prompted to write detailed ones, weren't nearly as detailed as mine.
In one image, there's an OpenSimWorld beacon placed rather prominently in the scenery. LLaVA completely ignored it. I described what it looks like in about 1,000 characters, and then I explained what it is, what OpenSimWorld is and how it works in another 4,000 characters or so.
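(If anyone wants to reproduce a test like this, here is a minimal sketch of what I mean, assuming a local Ollama instance serving the llava model; the image filename and the prompt are placeholders, not my actual setup.)

```python
# Minimal sketch: ask a locally served LLaVA model for a detailed image
# description via Ollama's /api/generate endpoint. Assumes Ollama is
# running at its default port and the model was pulled via `ollama pull llava`.
import base64
import json
import urllib.request

def describe(image_path: str,
             prompt: str = "Describe this image in as much detail as possible.") -> str:
    # Ollama expects attached images as base64-encoded strings.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    payload = json.dumps({
        "model": "llava",
        "prompt": prompt,
        "images": [image_b64],
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Hypothetical filename; compare the output against a human-written description.
print(describe("beacon_scene.png"))
```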
It's an illusion that AI will soon catch up with any of this.
Oh, by the way: how is an AI supposed to pinpoint exactly where an image was made if the image shows a place of which multiple absolutely identical copies exist? Or if the image has a neutral background that doesn't even hint at the location? I can do that with no problem, because I remember where I made the image.
#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLaVA #AIVsHuman #HumanVsAI
-
🚨 ALERT: Groundbreaking Revelation! 🚨 In an article that could've been written by a sentient eggplant, we learn the shocking truth that AI shouldn't write for you. Why? Because apparently, humans are much better at producing endless lists of tech jargon that nobody will ever read.🥱
https://alexhwoods.com/dont-let-ai-write-for-you/ #AIwriting #AIhumor #TechJargon #HumanVsAI #GroundbreakingRevelation #HackerNews #ngated
-
🤖👩‍⚖️ A riveting tale of mistaken identity: Human vs. AI, where the protagonist fails to convince Aunt Mildred that they're not a chatbot. Spoiler alert: The aunt is still awaiting a #CAPTCHA result. 📜🍿
https://www.bbc.com/future/article/20260324-i-tried-to-prove-im-not-an-ai-deepfake #HumanVsAI #MistakenIdentity #AuntMildred #TechTales #HackerNews #ngated
-
Using AI to upgrade your personal OS: Insights from an executive coach - This week on the GeekWire Podcast: Mark Briggs, an executive coach, AI strate... - https://www.geekwire.com/2025/using-ai-to-upgrade-your-personal-os-insights-from-an-executive-coach/ #organizationalintelligence #personaloperatingsystem #workplacetechnology #executivecoaching #meetingfollow-ups #geekwirepodcast #procrastination #aiproductivity #note-taking #leadership #markbriggs #humanvsai #podcasts #botornot
-
🤖 Humans trust their own work more than AI, even when AI performs just as well.
Why? It’s all about trust and collaboration. Humans bring creativity, while AI brings precision. Together, they’re unstoppable!
🔗 Learn how we can close the trust gap: https://blueheadline.com/tech-news/bias-uncovered-humans-work-over-ai/
What do you think—do you trust AI outputs? Share your thoughts below!
#Technology #AI #HumanvsAI #BlueHeadline #Collaboration #DigitalTrust #Innovation #MachineLearning #TechNews