home.social

#workingmemory — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #workingmemory, aggregated by home.social.

  1. 🧠 New preprint by Zhong et al. proposes a #synaptic mechanism for #chunking in #WorkingMemory.

    Using short-term #plasticity and synaptic augmentation, their model shows how items can be temporarily suppressed and later retrieved as chunks, increasing effective capacity w/o increasing simultaneous activity.

    🌍 doi.org/10.7554/eLife.109538.1

    #Neuroscience #CompNeuro #SynapticPlasticity
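
The capacity idea in this post can be illustrated with a deliberately simple toy (a hypothetical sketch, not Zhong et al.'s model; all names and constants below are made up): an encoded item leaves a slowly decaying trace of synaptic augmentation, so it can stay silent during the delay and still win a later nonspecific readout.

```python
# Toy sketch: working-memory storage via a synaptic augmentation trace
# rather than persistent activity (hypothetical illustration only).
import math

TAU_AUG = 5.0    # slow decay time constant of augmentation (arbitrary units)
BASE_W = 0.2     # baseline synaptic weight
AUG_BOOST = 1.0  # augmentation added when an item is encoded

def decay(aug, dt):
    """Exponential decay of the augmentation variable over a delay dt."""
    return aug * math.exp(-dt / TAU_AUG)

# Encode item 0 out of three candidate items.
aug = [0.0, 0.0, 0.0]
aug[0] += AUG_BOOST

# Delay period: no persistent activity, only the augmentation trace survives.
aug = [decay(a, dt=2.0) for a in aug]

# Nonspecific readout "ping": the response scales with baseline + augmentation,
# so the encoded item wins even though its activity was suppressed meanwhile.
response = [BASE_W + a for a in aug]
retrieved = max(range(3), key=lambda i: response[i])
```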

  2. 🧠 New preprint by Shervani-Tabar, Brincat & @ekmiller on emergent #TravelingWaves in #RNN.

    By aligning RNN dynamics to an empirically measured #NeuralManifold, they show that task-relevant TW can emerge through #learning, w/o hard-coding wave dynamics or connectivity. The cool thing here is that the waves are not imposed or engineered, but emerge naturally from learning under #BiologicallyPlausible constraints:

    🌍 doi.org/10.64898/2026.01.08.69

    #Neuroscience #CompNeuro #NeuralDynamics #WorkingMemory
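
For intuition only, here is what a traveling wave on a ring of units looks like in the crudest possible form. Note the sketch hand-codes the shift connectivity, which is exactly what the preprint does *not* do; its point is that such dynamics can emerge from learning.

```python
# Toy ring network whose activity bump travels around the ring
# (hypothetical illustration of a traveling wave, not the paper's RNN).
N = 8  # units on the ring

def step(x):
    """One update: the weight matrix is a one-step circular shift,
    so each unit inherits its left neighbour's activity."""
    return [x[(i - 1) % N] for i in range(N)]

# Localised bump of activity at position 0.
x = [1.0] + [0.0] * (N - 1)

# After t updates the bump has travelled t positions around the ring.
for _ in range(3):
    x = step(x)

peak = max(range(N), key=lambda i: x[i])
```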

  3. 🧠 New paper by Deistler et al: #JAXLEY: differentiable #simulation for large-scale training of detailed #biophysical #models of #NeuralDynamics.

    They present a #differentiable #GPU accelerated #simulator that trains #morphologically detailed biophysical #neuron models with #GradientDescent. JAXLEY fits intracellular #voltage and #calcium data, scales to 1000s of compartments, trains biophys. #RNNs on #WorkingMemory tasks & even solves #MNIST.

    🌍 doi.org/10.1038/s41592-025-028

    #Neuroscience #CompNeuro
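
The core idea of differentiable simulation can be sketched with a toy: recover a membrane parameter by gradient descent on a simulated voltage trace. This is NOT the JAXLEY API (all functions below are hypothetical), and a central-difference numeric gradient stands in for the autodiff that JAXLEY uses through detailed multi-compartment biophysics.

```python
# Toy parameter fitting for a one-compartment leaky membrane
# (hypothetical sketch of the differentiable-simulation idea).

def simulate(g_leak, n_steps=50, dt=0.1, v0=1.0):
    """Leaky membrane dv/dt = -g_leak * v, integrated with forward Euler."""
    v, trace = v0, []
    for _ in range(n_steps):
        v = v + dt * (-g_leak * v)
        trace.append(v)
    return trace

def loss(g_leak, target):
    """Squared error between simulated and target voltage traces."""
    return sum((a - b) ** 2 for a, b in zip(simulate(g_leak), target))

target = simulate(0.5)   # synthetic "data" generated with g_leak = 0.5
g, lr, eps = 0.3, 0.005, 1e-4
for _ in range(200):     # numeric gradient stands in for autodiff here
    grad = (loss(g + eps, target) - loss(g - eps, target)) / (2 * eps)
    g -= lr * grad
```

With autodiff (as in JAX), the gradient comes from differentiating through the integration loop itself, which is what makes fitting thousands of compartments tractable.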

  4. Beta-band #oscillations regulate maintenance & deletion of #WorkingMemory representations in humans. This study shows that the WM performance of older adults can be predicted by beta-band neural variability during working memory deletion. #PLOSBiology plos.io/3B8yw6y
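
For readers unfamiliar with the measure, "beta-band neural variability" can be sketched as the trial-to-trial spread of 13–30 Hz power. The toy below is a hypothetical illustration, not the study's EEG pipeline: per-trial beta power from discrete Fourier coefficients, then the standard deviation across simulated trials.

```python
# Toy beta-band (13-30 Hz) power and its across-trial variability
# (hypothetical sketch, not the study's analysis pipeline).
import math, random

FS = 200  # sampling rate (Hz)
N = 200   # samples per 1-second trial

def band_power(x, lo=13, hi=30):
    """Sum of DFT power over the beta band (lo..hi Hz, 1 Hz resolution)."""
    total = 0.0
    for f in range(lo, hi + 1):
        re = sum(x[n] * math.cos(2 * math.pi * f * n / FS) for n in range(N))
        im = sum(x[n] * math.sin(2 * math.pi * f * n / FS) for n in range(N))
        total += (re * re + im * im) / (N * N)
    return total

random.seed(0)
powers = []
for _ in range(20):  # simulated trials: a 20 Hz rhythm with jittered amplitude
    amp = 1.0 + 0.5 * random.random()
    trial = [amp * math.sin(2 * math.pi * 20 * n / FS) for n in range(N)]
    powers.append(band_power(trial))

mean_p = sum(powers) / len(powers)
variability = (sum((p - mean_p) ** 2 for p in powers) / len(powers)) ** 0.5
```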