#gen_ai — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #gen_ai, aggregated by home.social.
-
Today is the day: for the first time I have seen a large-format, AI-generated "photo" on the front page of a major newspaper (in this case the weekly "derGrazer") — one that has nothing to do with KI, AI, or other related digital topics.
A shame. I am sure Graz has plenty of subjects and enough talented photographers who could have delivered a picture quickly and at a competitive fee.
Readers could also have identified with that far more readily than with such a generic, sterile image. -
The Future of Artificial Intelligence
https://www.youtube.com/watch?v=HAiXT1mGTXc
"In this lecture, AI expert Melanie Mitchell will demystify how current-day AI works, how “intelligent” it really is, and what our expectations—and concerns—about its near-term and long-term prospects should be."
-
AI and the New Theocracies
https://www.linkedin.com/pulse/ai-new-theocracies-simon-wardley-rdmwe/
"There is far too much AI doom for my liking. However, there is one issue that does concern me. It is not about machines but about people. It is almost never mentioned and mainly arises from attempts to solve the above risks. The thing that gives me concern is the rise of a new Theocracy." -- #SimonWardley
-
A Scientist Rebels (1947)
https://www.theatlantic.com/magazine/archive/1947/01/a-scientist-rebels/656040/
"It is perfectly clear ... that to disseminate information about a weapon in the present state of our civilization is to make it practically certain that that weapon will be used." -- #NorbertWiener
-
AI auditing: The Broken Bus on the Road to AI Accountability
https://arxiv.org/abs/2401.14462
"...the practical nature of the "AI audit" ecosystem is muddled and imprecise, making it difficult to work through various concepts and map out the stakeholders involved in the practice." -- #AbebaBirhane #RyanSteed #VictorOjewale #BrianaVecchione #InioluwaDeborahRaji
-
OpenAI's GPT-4 finally meets its match: Scots Gaelic smashes safety guardrails
https://www.theregister.com/2024/01/31/gpt4_gaelic_safety/
"The safety guardrails preventing OpenAI's GPT-4 from spewing harmful text can be easily bypassed by translating prompts into uncommon languages – such as Zulu, Scots Gaelic, or Hmong." -- #KatyannaQuach
-
How enterprises are using open source LLMs: 16 examples
https://venturebeat.com/ai/how-enterprises-are-using-open-source-llms-16-examples/
"We learned of several enterprise companies experimenting extensively with open-source LLMs, and it’s only a matter of time before they have deployed LLMs." -- #MattMarshall #VentureBeat
-
The horrors experienced by Meta moderators: ‘I didn’t know what humans are capable of’
"A court ruling in Spain that attributes a content moderator’s mental health problems to the job paves the way for acknowledging that fact for at least 25 other employees as well"
-
Prompt Engineering with Llama 2
"Prompt engineering is using natural language to produce a desired response from a large language model (LLM). This interactive guide covers prompt engineering & best practices with Llama 2." -- #facebookResearch
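As a rough illustration of what that guide covers: the Llama 2 chat models expect prompts wrapped in a specific `[INST]` / `<<SYS>>` format. A minimal sketch of building a single-turn prompt (the helper name is mine, not from the guide; the `<s>` BOS token is normally added by the tokenizer, so it is omitted here):

```python
def llama2_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama 2 chat prompt: the system message goes
    inside <<SYS>> tags, and the whole turn is wrapped in [INST] tags."""
    return (
        "[INST] <<SYS>>\n"
        f"{system}\n"
        "<</SYS>>\n\n"
        f"{user} [/INST]"
    )

# Example: the model's completion would follow the closing [/INST].
print(llama2_prompt("You are a terse assistant.", "Name one prime number."))
```

Getting this scaffolding wrong (or omitting it) is itself a common prompt-engineering pitfall with the Llama 2 chat variants, which is part of what the guide's best practices address.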
-
Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality
https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality
"We examine 4 years worth of data, encompassing more than 150m changed lines of code, to determine how AI Assistants influence the quality of code being written. We find a significant uptick in churn code, and a concerning decrease in code reuse." -- #GitClear
-
Google cancels contract with an AI data firm that’s helped train Bard
https://www.theverge.com/2024/1/23/24048429/google-appen-cancel-contract-ai-training-bard
"Human workers at companies like Appen often handle many of the more distasteful parts of training AI and are often the lower-paid, often ignored backbone of the entire industry." -- #theVerge
-
OpenAI and Google will be required to notify the government about AI models
https://mashable.com/article/openai-google-required-notify-government-ai-models-executive-order
"OpenAI, Google, and other AI companies will soon have to inform the government about developing foundation models, thanks to the Defense Production Act." -- #CecilyMauran #Mashable
-
Sycophancy in Generative-AI Chatbots
https://www.nngroup.com/articles/sycophancy-generative-ai-chatbots/
"When faced with complex inquiries, language models will default to mirroring a user’s perspective or opinion, even if the behavior goes against empirical information. This type of “reward hacking” is an easy way to get a high rating on responses to a user’s prompt, but it is problematic for applications of AI that require accurate responses." -- #CalebSponheim #NNGroup
-
Get Ready for the Great AI Disappointment
https://www.wired.com/story/get-ready-for-the-great-ai-disappointment/
"Rose-tinted predictions for artificial intelligence’s grand achievements will be swept aside by underwhelming performance and dangerous results." -- #DaronAcemoglu #Wired
-
In New Experiment, Young Children Destroy AI at Basic Tasks
https://futurism.com/children-destroy-ai-basic-tasks
"Unlike large language and language-and-vision models, children are curious, active, self-supervised, and intrinsically motivated." -- #MaggieHarrison
-
A Chatbot from the 1960s has thoroughly beaten OpenAI's GPT-3.5 in a Turing test, because people thought it was just 'too bad' to be an actual AI
"[T]he main factor behind ELIZA's 'success' is that its responses are nothing like those from a modern LLM, leading to some interrogators believing it was simply too bad for it to be a real AI bot, thus assuming it had to be a person." -- #NickEvanson
-
Study identifies human–AI interaction scenarios that lead to information cocoons
https://techxplore.com/news/2023-10-humanai-interaction-scenarios-cocoons.html
"Information cocoons can have far-reaching adverse consequences, as they can exacerbate prejudice and social polarization, prevent growth, creativity and innovation, accentuate misinformation, and obstruct efforts aimed at creating a more inclusive world." -- #IngridFadelli #TechXplore
-
Role play with large language models
https://www.nature.com/articles/s41586-023-06647-8
"As dialogue agents become increasingly human-like in their performance, we must develop effective ways to describe their behaviour in high-level terms without falling into the trap of anthropomorphism." -- #MurrayShanahan #KyleMcDonell #LariaReynolds
-
Heard a customer's POV today on rolling out #GenAI across their organization:
- Output quality matters, so users can trust the model
- User experience matters just as much as output quality
- Including the capability at no additional cost democratizes the value and removes the need to justify the ROI -
Tim O’Reilly on AI’s Role in the Attention Economy
"[A]fter decades of analyzing patterns at the intersection of technology and the economy, [O'Reilly] believes [AI] offers a new, and more productive, way to engage with endless content."
-
IBM taps AI to translate COBOL code to Java
https://techcrunch.com/2023/08/22/ibm-taps-ai-to-translate-cobol-code-to-java/
"Code Assistant for IBM Z is designed to assist businesses in refactoring their mainframe apps, ideally while preserving performance and security, according to IBM Research chief scientist Ruchir Puri." -- #KyleWiggers
-
Standards around generative AI
https://blog.ap.org/standards-around-generative-ai
"Accuracy, fairness and speed are the guiding values for AP’s news report, and we believe the mindful use of artificial intelligence can serve these values and over time improve how we work." -- #AmandaBarrett
-
Hackers red-teaming A.I. are ‘breaking stuff left and right,’ but don’t expect quick fixes from DefCon: ‘There are no good guardrails’
https://fortune.com/2023/08/13/hackers-red-teaming-ai-defcon-breaking-stuff-but-no-quick-fixes/
"Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows." -- #FrankBajak #Bloomberg
-
Google DeepMind and the University of Tokyo Researchers Introduce WebAgent: An LLM-Driven Agent that can Complete the Tasks on Real Websites Following Natural Language Instructions
"Thorough research shows that linking task planning with HTML summary in specialized language models is crucial for task performance, increasing the success rate on real-world online navigation by over 50%." -- #AneeshTickoo
For more see: https://arxiv.org/abs/2307.12856
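The described design — condense the page's raw HTML first, then plan the next browser action from the condensed snippet — can be sketched at a very high level. This is a hedged illustration only: `summarize_html` and `plan_action` are naive stubs standing in for the specialized models the paper fine-tunes, and the action syntax is invented for the example.

```python
def summarize_html(html: str, max_chars: int = 200) -> str:
    # Stand-in for the paper's HTML summarization model: reduce a large
    # page to a task-relevant snippet (here, a naive truncation).
    return html[:max_chars]

def plan_action(instruction: str, snippet: str) -> str:
    # Stand-in for the planning model: map (instruction, page summary)
    # to the next browser action, expressed as a string command.
    if "search" in instruction.lower():
        return f"type(#searchbox, '{instruction}')"
    return "click(#main)"

def web_agent_step(instruction: str, html: str) -> str:
    # One step of the loop: summarization and planning are coupled,
    # which is the linkage the quoted result credits for the gain.
    snippet = summarize_html(html)
    return plan_action(instruction, snippet)
```

The point of the sketch is the pipeline shape, not the stub logic: per step, planning never sees the full page, only the summary.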
-
The hardest part of building software is not coding, it’s requirements
"This article will talk about the relationship between requirements and software, as well as what an AI needs to produce good results." -- #JaredToporek
-
This Borges-Inspired AI Model Can Create Poems Based on iPhone Photos
https://gizmodo.com/ai-pamera-poems-iphone-photos-new-york-city-1850709381
"Pamera uses an object identifier and GPT-4 to generate poems based on photos in a matter of seconds. We tested it out on pics of a day in New York City." -- #MackDeGeurin
-
Speakeasy is using AI to automate API creation and distribution
https://techcrunch.com/2023/06/29/speakeasy-is-using-ai-to-automate-api-creation-and-distribution/
"Speakeasy co-founder and CEO Sagar Batchu describes his startup as an API infrastructure company, and that means it’s building tools to make it easier to create and distribute APIs, something that is near and dear to him as a developer himself." -- #RonMiller #TechCrunch
-
Separating AI Fact from Fiction
https://matthewreinbold.com/2023/06/23/SeparatingAIFactFromFiction
"AI has been something I’ve not only heard a lot about, but spent time working with. In this presentation, I share some of my current thinking." -- #MatthewReinbold