#learningmodels — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #learningmodels, aggregated by home.social.
-
Seeing how inefficient, unsustainable, and generally doomed the hyperscale #AI business model is, it can be difficult to remember that #LearningModels don't need to be so resource-intensive (if you don't have a business valuation to inflate).
https://youtu.be/8enXRDlWguU?t=3860
This #KarenHao interview from 8 months ago still feels evergreen.
-
Through scaling #DeepNeuralNetworks, we have found in two different domains, #ReinforcementLearning and #LanguageModels, that these models learn to learn (#MetaLearning).
They spontaneously develop internal models with memory and learning capability, which exhibit #InContextLearning much faster and much more effectively than any of our standard #backpropagation-based deep neural networks can.
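To make "learning without backpropagation" concrete, here is a minimal, hedged sketch of the kind of fast learner meant here: recursive least squares (RLS), a fixed update rule that fits a linear task online from each context example, with no gradients and no weight training anywhere. Picking RLS is my illustrative assumption; the claim above is only that scaled models emulate *some* mechanism of this flavor.

```python
# Recursive least squares: a fixed-parameter update rule that "learns"
# a hidden linear task from a stream of context examples -- memory plus
# learning capability, with no gradients anywhere. (Illustrative stand-in
# for an in-context learner; not extracted from any actual model.)
import numpy as np

rng = np.random.default_rng(0)
dim = 4
w_true = rng.normal(size=dim)        # the hidden task to be learned

w = np.zeros(dim)                    # the fast learner's "memory"
P = np.eye(dim) * 1e3                # running inverse-covariance estimate

for t in range(20):                  # one context example per step
    x = rng.normal(size=dim)
    y = w_true @ x + 0.05 * rng.normal()
    k = P @ x / (1.0 + x @ P @ x)    # gain: how informative is this x?
    w = w + k * (y - w @ x)          # one-shot correction from one example
    P = P - np.outer(k, x @ P)       # shrink uncertainty along x
    print(f"step {t:2d}  task error {np.linalg.norm(w - w_true):.4f}")
```

After a handful of examples the task error collapses, far faster than gradient descent on the same stream would manage; attention layers have been argued in the literature to be able to emulate updates of exactly this least-squares flavor.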
These rather alien #LearningModels embedded inside the deep learning models are emulated by #neuron layers, but aren't necessarily deep learning models themselves.
I believe it is possible to extract these internal models, which have learned to learn, out of the scaled-up #DeepLearning #substrate they run on, and run them natively and directly on #hardware.
That would let those much more efficient learning models be used either as #LearningAgents themselves, or as a substrate for further meta-learning.
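As a rough sketch of what extraction could look like (not the actual method, which is open research): if the big frozen model's in-context behaviour on a task family can be queried as a teacher, a small recurrent student can be distilled to reproduce it natively. Everything below is hypothetical scaffolding; the teacher is a closed-form ridge-regression in-context learner standing in for a real meta-model, and sample_task, teacher_predict, and StudentICL are names invented for illustration.

```python
# Distilling an in-context learner into a small native model (sketch).
# Teacher: closed-form ridge regression over the context, a stand-in for
# the fast learner a scaled meta-model is presumed to emulate internally.
# Student: a compact GRU trained to match the teacher's predictions, so
# the learned learning rule runs directly as a small recurrent network.
import torch
import torch.nn as nn

def sample_task(n_ctx=16, dim=4):
    """One episode: n_ctx context pairs (x, y) plus one query point."""
    w = torch.randn(dim)                         # task-specific weights
    xs = torch.randn(n_ctx + 1, dim)
    ys = xs @ w + 0.05 * torch.randn(n_ctx + 1)
    return xs, ys

def teacher_predict(xs, ys, lam=1e-3):
    """The 'internal fast learner': ridge regression on the context."""
    X, y = xs[:-1], ys[:-1]
    w_hat = torch.linalg.solve(X.T @ X + lam * torch.eye(X.shape[1]), X.T @ y)
    return xs[-1] @ w_hat                        # prediction for the query

class StudentICL(nn.Module):
    """Small recurrent net distilled to mimic the in-context teacher."""
    def __init__(self, dim=4, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(dim + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden + dim, 1)

    def forward(self, xs, ys):
        ctx = torch.cat([xs[:-1], ys[:-1, None]], dim=-1)[None]
        _, h = self.rnn(ctx)                     # hidden state absorbs the context
        return self.head(torch.cat([h[-1, 0], xs[-1]])).squeeze()

student = StudentICL()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(2000):                         # distillation loop
    xs, ys = sample_task()
    loss = (student(xs, ys) - teacher_predict(xs, ys)) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
```

The point of the design: the student's weights are trained once, but afterwards it adapts to each new task purely through its hidden state, so the learned learning rule lives in a small model you could run directly on hardware.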
I have ongoing #embodiment #research with a related goal, focused specifically on extracting (or distilling) these models out of the meta-models, here:
https://github.com/keskival/embodied-emulated-personas
How to do this is of course an open research problem, but I have a lot of ideas!
If you're inspired by this, or if you think the same, let's chat!