#deepdream — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #deepdream, aggregated by home.social.
-
So, I started using #DeepDream when it first came out in 2015 to produce some creepy images. It soon turned into a "bag of dicks" -- every image suddenly had dicks in it. At that moment, I realized that any sort of AI-generated output could potentially become a bag of dicks.
-
DeepDream for Video with Temporal Consistency
https://github.com/jeremicna/deepdream-video-pytorch
#HackerNews #DeepDream #Video #TemporalConsistency #AI #Art #VideoProcessing
-
#LLMs are an amazing #AI experiment with incredibly important research results, but the technology is far too immature to be allowed into production. Nobody should trust its output. We need more research and less development, we need to train LLMs under controlled conditions where we dissect every step in the training process and look at how it changes the patterns inside the artificial neural network.
Ever since #DeepDream and its #puppyslugs, the first AI-generated images that went viral on the Internet, researchers have been tinkering with already existing artificial neural networks to see what a single layer of neurons or even a single neuron does within the system, and what kinds of patterns occur within such a network when it does something, but that's not enough. Instead of making the models bigger and bigger, we need to train much smaller models much faster and much more often, comparing individual outcomes to one another. Since we are dealing with complex systems very much capable of chaotic behaviour, we must not look at them the way we look at conventional engineering. The AI machinery may be entirely deterministic as far as the mathematics go, using pseudorandom numbers as the noise it needs in order to work, but even if we can reproduce the output by using the same random seed, we still cannot understand how it came to be. All we are doing at the moment is basically just linear algebra, but with gazillion-dimensional tensors. Even if we trace the path each single bit of input signal takes through the model, we still don't understand how the model makes the output from the input, and that's because we aren't paying enough attention to how exactly the training actually works.
We've got all those cute but unreliable toys now. They may not be fit for production in most cases, but as soon as the LLM cult collapses and the AI bubble bursts, people will find out that much smaller models trained on manually curated data can actually be very useful for all kinds of specialised systems, even though we won't get any closer to #AGI. I think with our current hardware technology we won't even get close to actual human intelligence before the decline of the Industrial Age erodes our global industrial productive capacity to the point where computers become very rare and very expensive again.
Our current digital computers are far too energy-hungry and far too precise to run anything as complex and noise-resistant as a human brain; we'd have to build something analogue and low-power, something that doesn't compute with discrete numbers but with something like voltage or brightness that can take any value between 0 and 1. Up to now, we haven't really tried to make analogue signal-processing circuits really tiny because we have been using DSPs instead, but what if we tried to make very densely packed silicon chips out of them that mimic the signal-pathway topology of a slice of brain? I'm pretty sure there are already people working on that somewhere, but with a pitiful budget, because all the "AI" funds go to bloody useless LLM chatbots.
When the #AIBubble bursts, there won't be much funding for AI research, but at least more of it will go to fields where actual progress can be made instead of putting it all into #MachineLearning. Machine learning is great, we have made some real progress in the last 20 years because of all the Internet data on which we could train our models, and also because of relatively cheap GPUs to do the heavy lifting, but now GPUs are expensive because of chatbot breeders and cryptobros, and the data on the Internet is far more AI output than anything else, and since AI can't tell AI and humans apart yet (if ever), we are at the point where there won't be any progress in machine learning without a lot of human labour. Even if some mousepad proles from Africa or Asia or Latin America do all the click work, write all the detailed descriptions for visual media and audio, pick out all the useless AI hallucinations that slipped into the proposed training data, it will make the process really, really expensive because this is something that takes a lot of time and can't be automated. So any completely new large scale machine learning models may be a thing of the past soon. Already existing models can be used, and with LoRAs, they can be taught some new tricks, but if we use whatever hype is left to learn as much as we can about the training process and what structures it builds inside the model, we will be able to build better models that can do more with less.
-
Pre #2020: #Factorizing Tools
These #AI were #DeepLearning breakthroughs. #Word2Vec, #DeepDream and #AlphaGo solved novel, previously unsolvable problems.
If you weren't in the field, you might not think these were AI, and #GPT 2 might have surprised you.
-
Top 10 free neural networks for image generation: the best AI generators of 2025
Admit it: how many times have you wanted to quickly throw together a picture for a post or a presentation, but instead got stuck in an editor or in endless searches for a suitable image on Google? How great would it be if the picture in your head simply appeared! Time is money, inspiration is on pause, and this is where AI comes to the rescue. Neural networks can generate anything you like, including the craziest ideas. No more spending hours searching when a couple of clicks can show you what was in your mind a second ago. By the way, did you notice the dinosaur on the cover? Let's call him Rex. Rex is himself the product of a neural network, and today he'll be the star of our experiments. But what shall we do? Remember I mentioned crazy ideas? To see what generation can really do, let's give the AI a difficult task: send Rex somewhere into space, say to the Moon, put him in a spacesuit, and have him grill barbecue with Earth in the background. Interested? Then buckle up, we're off into the world of image generation.
1. Grok
Now meet Grok, the neural network from xAI and my personal favourite on this list. Grok lives right inside the X interface (formerly known as Twitter) and uses Flux for image generation. It does a great job. Create a free account on X.com, hit the "Grok" button, and you're in! Want some great art? No problem! But we're here to experiment, right? We type: "Draw a dinosaur in a spacesuit grilling barbecue on the Moon with Earth in the background." And here's the result: everyone's happy! Us and our dinosaur Rex!
https://habr.com/ru/companies/bothub/articles/881888/
#ai #ии_и_машинное_обучение #генерация_изображений #canva #microsoft #deepai #adobe_express #deepdream
-
Does anyone know the URL for the "observatory" website (I think that's what they called it) where one of the AI/DNN labs analysed various machine-vision models and built a map of all of the nodes?
You could click on each node and see the images (and sometimes text) that triggered it, and also images that were generated when they excited that node while clamping others (like DeepDream).
I can't remember who it was and can't find it.
-
@futurebird I like AI images exactly because they often look weird and disturbing. Ever since #DeepDream happened a decade ago, turning all kinds of images into puppyslugs, masses of eyeballs, and weird tumours of architecture, I have been hooked. However, the latest generation of image generators isn't as much fun as earlier ones because their images don't look strange enough anymore.
-
@modean987 After SD 1.5 went public, all kinds of free image generator websites appeared, and I tried a lot of them, but my main platform is still #DeepDreamGenerator, where I have been active since well before latent diffusion was a thing, when we only had #DeepDream and #DeepStyle. Then I discovered #Yodayo, a platform with over a hundred SD models now (mostly SD 1.5, a few SDXL), but almost all of them trained exclusively on anime and manga, and a few on Western comics and animation.
#aiart
-
@modean987 I've been using all kinds of AI tools for images ever since #DeepDream happened in the 2010s, that thing with the puppyslugs. You input one image, select which neuron layer of the model you want to sample, and how many iterations you want, and you get a new image that somehow grew out of the old one. A little later, Neural Style Transfer aka #DeepStyle made it possible to mix two images, one for content and one for style. I made enormous volumes of images with that. #aiart
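The recipe described in this post (input one image, pick a neuron layer, choose an iteration count, get a new image that grows out of the old one) can be sketched roughly like this. This is a toy stand-in, not Google's actual DeepDream code: a single random linear map plays the role of the chosen layer so the example stays self-contained, and all names and sizes are hypothetical.

```python
import numpy as np

# Minimal DeepDream-style gradient ascent sketch. Real DeepDream amplifies the
# activations of a conv layer inside a pretrained classifier (e.g. Inception);
# here a random linear map over a flattened 8x8 RGB "image" stands in for it.
rng = np.random.default_rng(0)

W = rng.normal(size=(64, 3 * 8 * 8))   # stand-in "layer": 64 neurons over 192 pixels

def dream_step(img_flat, lr=0.01):
    # We maximise loss = ||W x||^2; its gradient w.r.t. x is 2 * W.T @ (W @ x),
    # so stepping along that gradient makes the layer's activations grow.
    grad = 2.0 * W.T @ (W @ img_flat)
    img_flat = img_flat + lr * grad / (np.abs(grad).mean() + 1e-8)  # normalised step
    return np.clip(img_flat, 0.0, 1.0)  # keep pixels in a valid range

img = rng.random(3 * 8 * 8)            # the input image, flattened
for _ in range(10):                    # the "iterations" knob from the post
    img = dream_step(img)              # the image slowly grows what the layer "likes"
```

With a real pretrained network, repeating this step makes whatever features the chosen layer detects (eyes, fur, arches) emerge out of the input image, which is where the puppyslugs come from.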
-
https://deepdreamgenerator.com/ddream/xccrwbf6vbh Random Robot AI Art #ai #aiart #deepdream
-
-
@hannah @ZachWeinersmith I think generative art is a great way of producing images that are weird, unreal, disturbing, or silly. I've always been a great fan of surrealism and dada, and especially in the case of dada, the entire movement has always been opposed to the cult of the artist and the meaning of art, striving to create silly and meaningless works, experimenting with mechanical means of image production even a hundred years ago when it began. I have been rendering weird machine hallucinations ever since #DeepDream and #DeepStyle aka #NeuralStyleTransfer appeared in the mid-10s, and I keep all of my tens of thousands of generated images in a folder labelled, "elektrodada".
-
愿你的新年像兔子一样祥和美好!
May your New Year be as lucky and lovely as the Rabbit!
----
#aiart #art #ai #digitalart #generativeart #artificialintelligence
#machinelearning #aiartcommunity #abstractart #aiartists
#neuralart #vqgan #ganart #contemporaryart #deepdream
#artist #mastoart #artoftheday #newmediaart #nightcafestudio
#aiartist #modernart #neuralnetworks #neuralnetworkart
#abstract #styletransfer #stylegan #digitalartist
-
Which one is real? None; I made them with AI.
-----
#aiart #art #ai #digitalart #generativeart #artificialintelligence
#machinelearning #aiartcommunity #abstractart #aiartists
#neuralart #vqgan #ganart #contemporaryart #deepdream
#artist #nftart #artoftheday #newmediaart #nightcafestudio
#aiartist #modernart #stablediffusion
#abstract #styletransfer #stylegan #digitalartist
-
Dolphins in sssspppppaaaaaaace.
https://pixelfed.social/p/Quinnie/519634552068894985
#aiart #art #ai #digitalart #generativeart #artificialintelligence #machinelearning #aiartcommunity #abstractart #nft #aiartists #neuralart #vqgan #ganart #contemporaryart #deepdream #artist #nftart #artoftheday #newmediaart #nightcafestudio #aiartist #modernart #neuralnetworks #neuralnetworkart #abstract #styletransfer #stylegan #digitalartist #photoroomai