#workingmemory — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #workingmemory, aggregated by home.social.
-
🧠 New preprint by Zhong et al. proposes a #synaptic mechanism for #chunking in #WorkingMemory.
Using short-term #plasticity and synaptic augmentation, their model shows how items can be temporarily suppressed and later retrieved as chunks, increasing effective capacity w/o increasing simultaneous activity.
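A minimal sketch of the kind of mechanism the post describes, assuming a Tsodyks-Markram-style synapse extended with a slow augmentation variable that keeps a "silent" trace of recent activity; all parameter values and the burst/probe protocol are illustrative, not taken from the preprint:

```python
# Tsodyks-Markram-style synapse with a slow augmentation variable `a`.
# All parameter values are illustrative assumptions, not Zhong et al.'s.
DT = 1e-3              # time step (s)
TAU_REC = 0.2          # recovery of resources x (s)
TAU_FAC = 1.5          # decay of utilization u (s)
TAU_AUG = 7.0          # decay of augmentation a (s), much slower
U0, DU, DA = 0.2, 0.1, 0.05

def simulate(spike_times, t_max=6.0):
    """Return the synaptic efficacy u*x*(1+a) transmitted at each spike."""
    x, u, a = 1.0, U0, 0.0
    spikes = iter(sorted(spike_times))
    nxt = next(spikes, None)
    out = []
    t = 0.0
    while t < t_max:
        # continuous relaxation between spikes
        x += DT * (1.0 - x) / TAU_REC
        u += DT * (U0 - u) / TAU_FAC
        a -= DT * a / TAU_AUG
        if nxt is not None and t >= nxt:
            out.append(u * x * (1.0 + a))  # efficacy seen by this spike
            x -= u * x                     # depress: deplete resources
            u += DU * (1.0 - u)            # facilitate utilization
            a += DA                        # build slow augmentation
            nxt = next(spikes, None)
        t += DT
    return out

# A burst "loads" the synapse; a lone probe spike seconds later is still
# boosted by augmentation, even though no activity persisted in between.
burst = [0.1 + 0.02 * i for i in range(10)]
print(simulate(burst + [5.0]))
```

The key property is that the augmentation variable outlives the spiking itself, so an item can sit "suppressed" in the synaptic state and still be read out by a later probe, which is what allows capacity to grow without more simultaneous activity.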
-
🧠 New preprint by Shervani-Tabar, Brincat & @ekmiller on emergent #TravelingWaves in #RNNs.
By aligning RNN dynamics to an empirically measured #NeuralManifold, they show that task-relevant traveling waves can emerge through #learning, w/o hard-coding wave dynamics or connectivity. The cool thing here is that the waves are not imposed or engineered, but emerge naturally from learning under #BiologicallyPlausible constraints.
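One generic way to read "aligning RNN dynamics to a measured manifold" is a task loss plus a penalty that maps hidden states onto recorded manifold coordinates. The sketch below assumes exactly that; `neural_traj` and `align_map` are hypothetical placeholders, and the preprint's actual objective may differ:

```python
import jax
import jax.numpy as jnp

def rnn_unroll(params, inputs):
    """Unroll h_{t+1} = tanh(W h_t + U x_t + b); return states and outputs."""
    W, U, b, w_out = params
    def step(h, x):
        h = jnp.tanh(W @ h + U @ x + b)
        return h, h
    _, hs = jax.lax.scan(step, jnp.zeros(W.shape[0]), inputs)
    return hs, hs @ w_out

def loss(params, inputs, targets, neural_traj, align_map, lam=1.0):
    hs, outputs = rnn_unroll(params, inputs)
    task = jnp.mean((outputs - targets) ** 2)
    # alignment: a linear projection of hidden states should match the
    # empirically measured manifold coordinates (placeholder data here)
    align = jnp.mean((hs @ align_map - neural_traj) ** 2)
    return task + lam * align

# toy shapes: 16 units, 3 inputs, 2 outputs, 5 manifold dims, 50 timesteps
H, I, O, K, T = 16, 3, 2, 5, 50
ks = jax.random.split(jax.random.PRNGKey(0), 6)
params = (0.1 * jax.random.normal(ks[0], (H, H)),
          0.1 * jax.random.normal(ks[1], (H, I)),
          jnp.zeros(H),
          0.1 * jax.random.normal(ks[2], (H, O)))
inputs = jax.random.normal(ks[3], (T, I))
targets = jnp.zeros((T, O))
neural_traj = jax.random.normal(ks[4], (T, K))     # stand-in recording
align_map = 0.1 * jax.random.normal(ks[5], (H, K))
print(loss(params, inputs, targets, neural_traj, align_map))
grads = jax.grad(loss)(params, inputs, targets, neural_traj, align_map)
```

Under this kind of objective, nothing wave-like is written into the loss or the weights; any traveling-wave structure has to arise from training itself.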
-
🧠 New paper by Deistler et al.: #JAXLEY: differentiable #simulation for large-scale training of detailed #biophysical #models of #NeuralDynamics.
They present a #differentiable, #GPU-accelerated #simulator that trains #morphologically detailed biophysical #neuron models with #GradientDescent. JAXLEY fits intracellular #voltage and #calcium data, scales to 1000s of compartments, trains biophysical #RNNs on #WorkingMemory tasks & even solves #MNIST.
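The core idea can be shown in a few lines, with the caveat that this is not the JAXLEY API: if the membrane ODE is written in a differentiable framework, a biophysical parameter can be fit to a voltage trace by plain gradient descent. A single passive compartment stands in for a morphologically detailed model, and all constants are illustrative:

```python
import jax
import jax.numpy as jnp

DT, T = 0.1, 500        # integration step (ms), number of steps
C_M, E_L = 1.0, -70.0   # capacitance, leak reversal (mV); illustrative
I_INJ = 0.5             # injected current (arbitrary units)

def simulate(g_leak):
    """Euler-integrate C dV/dt = -g_leak (V - E_L) + I_inj; return V(t)."""
    def step(v, _):
        v = v + DT * (-g_leak * (v - E_L) + I_INJ) / C_M
        return v, v
    _, vs = jax.lax.scan(step, E_L, None, length=T)
    return vs

target = simulate(0.08)   # pretend this is a recorded voltage trace

def loss(g):
    return jnp.mean((simulate(g) - target) ** 2)

g = 0.02                                  # wrong initial conductance
for _ in range(1000):                     # gradient descent through the ODE
    g = g - 1e-5 * jax.grad(loss)(g)
print(float(g))  # moves toward the true value 0.08
```

Scaling this same gradient flow to thousands of coupled compartments with active channels is the hard part the paper addresses; the fitting loop itself stays this simple.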
-
Mastering Mindfulness: Train Your Brain for Better Focus!
#Mindfulness #AttentionControl #BrainTraining #Focus #MentalHealth #CognitiveScience #MindfulLiving #Neuroscience #WorkingMemory #SelfImprovement #AttentionDeficit #MindfulnessPractice #MentalClarity #BrainHacks
-
Beta-band #oscillations regulate maintenance & deletion of #WorkingMemory representations in humans. This study shows that the WM performance of older adults can be predicted by beta-band neural variability during working memory deletion #PLOSBiology https://plos.io/3B8yw6y
-
Visual working memories are abstractions of percepts – New #preprint by Duan and Curtis (2023)
🌍 https://www.biorxiv.org/content/10.1101/2023.12.01.569634v1
-
Just adding a few more professional interests, so that people can find me. Hello again 😊 #CulturalEvolution #CumulativeCulture #DevelopmentalPsychology #ToolUse #Innovation #ExecutiveFunctions #WorkingMemory #Primates #ComparativeCognition #Cognition #AnimalCulture #Cerebellum #EvolutionOfTechnology #SequenceLearning #SequenceCognition #SocialLearning #CausalReasoning