home.social

#rnns — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #rnns, aggregated by home.social.

  1. 🧠 New paper by Deistler et al: #JAXLEY: differentiable #simulation for large-scale training of detailed #biophysical #models of #NeuralDynamics.

    They present a #differentiable, #GPU-accelerated #simulator that trains #morphologically detailed biophysical #neuron models with #GradientDescent. JAXLEY fits intracellular #voltage and #calcium data, scales to thousands of compartments, trains biophysical #RNNs on #WorkingMemory tasks & even solves #MNIST. (A toy sketch of the differentiable-fitting idea appears after this list.)

    🌍 doi.org/10.1038/s41592-025-028

    #Neuroscience #CompNeuro

  2. 🧠 New preprint by Codol et al. (2025): Brain-like #NeuralDynamics for #behavioral control develop through #ReinforcementLearning. They show that only #RL, not #SupervisedLearning, yields neural activity geometries & dynamics matching monkey #MotorCortex recordings. RL-trained #RNNs operate at the edge of #chaos, reproduce adaptive reorganization under #visuomotor rotation, and require realistic limb #biomechanics to achieve brain-like control.

    🌍 doi.org/10.1101/2024.10.04.616

    #CompNeuro #Neuroscience

  3. #ITByte: #MachineLearning models that take sequential data as input or output are called #SequenceModels.

    Sequential data includes text streams, video clips, audio clips, time-series data, etc. Recurrent Neural Networks (#RNNs) and Long Short-Term Memory (#LSTM) networks are popular architectures for sequence models. (A minimal LSTM sketch appears after this list.)

    knowledgezone.co.in/trends/exp

  4. 1997, with the advent of Long Short-Term Memory recurrent #neuralnetworks, marks the next step in our brief history of (large) #languagemodels from last week's #ise2023 lecture. Introduced by Sepp Hochreiter and Jürgen Schmidhuber, #LSTM #RNNs enabled efficient processing of sequences of data.
    Slides: drive.google.com/file/d/1atNvM
    #nlp #llm #llms #ai #artificialintelligence #lecture @fizise
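
A minimal illustration of the idea behind post 1 (differentiable simulation plus #GradientDescent): the snippet below is not the Jaxley API but a toy JAX sketch that fits a single leaky-compartment voltage model to a synthetic target trace by gradient descent. The model, parameter names, and values are illustrative assumptions.

    import jax
    import jax.numpy as jnp

    DT = 0.1     # integration step (ms)
    T = 200      # number of simulated time steps
    I_EXT = 0.5  # constant injected current (arbitrary units)

    def simulate(params):
        # Differentiable forward simulation of a single passive compartment:
        # c_m * dV/dt = -g_leak * (V - e_leak) + I_ext
        def step(v, _):
            dv = (-params["g_leak"] * (v - params["e_leak"]) + I_EXT) / params["c_m"]
            v = v + DT * dv
            return v, v
        _, trace = jax.lax.scan(step, -70.0, None, length=T)
        return trace

    def loss(params, target):
        # Mean squared error between simulated and "recorded" voltage
        return jnp.mean((simulate(params) - target) ** 2)

    # Synthetic recording from known parameters, then recover them by gradient descent.
    true_params = {"g_leak": 0.3, "e_leak": -65.0, "c_m": 1.0}
    target = simulate(true_params)

    params = {"g_leak": 0.1, "e_leak": -60.0, "c_m": 1.5}
    grad_fn = jax.jit(jax.grad(loss))
    for _ in range(500):
        grads = grad_fn(params, target)
        params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)

    print({k: round(float(v), 3) for k, v in params.items()})

Jaxley itself handles multi-compartment morphologies, ion channels, and GPU batching; the sketch only shows why making the simulator differentiable lets standard gradient descent recover biophysical parameters from a voltage trace.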
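Posts 3 and 4 both mention #RNNs and #LSTM networks; as a reminder of what the recurrence actually computes, here is a minimal, self-contained LSTM cell unrolled over a toy sequence in JAX. Shapes, initialisation, and naming are illustrative assumptions, not any particular library's API.

    import jax
    import jax.numpy as jnp

    def lstm_step(params, carry, x):
        # One LSTM step: gates decide what to forget, what to write, and what to expose.
        h, c = carry
        z = jnp.concatenate([x, h])
        i = jax.nn.sigmoid(params["Wi"] @ z + params["bi"])  # input gate
        f = jax.nn.sigmoid(params["Wf"] @ z + params["bf"])  # forget gate
        o = jax.nn.sigmoid(params["Wo"] @ z + params["bo"])  # output gate
        g = jnp.tanh(params["Wg"] @ z + params["bg"])        # candidate cell state
        c = f * c + i * g
        h = o * jnp.tanh(c)
        return (h, c), h

    def run_lstm(params, xs, hidden):
        # Unroll the cell over the whole sequence; return the hidden state at every step.
        init = (jnp.zeros(hidden), jnp.zeros(hidden))
        _, hs = jax.lax.scan(lambda carry, x: lstm_step(params, carry, x), init, xs)
        return hs

    # Toy example: a length-20 sequence of 4-dimensional inputs, 8 hidden units.
    in_dim, hidden = 4, 8
    keys = jax.random.split(jax.random.PRNGKey(0), 5)
    params = {name: 0.1 * jax.random.normal(k, (hidden, in_dim + hidden))
              for name, k in zip(["Wi", "Wf", "Wo", "Wg"], keys[:4])}
    params.update({name: jnp.zeros(hidden) for name in ["bi", "bf", "bo", "bg"]})
    xs = jax.random.normal(keys[4], (20, in_dim))
    print(run_lstm(params, xs, hidden).shape)  # (20, 8)

The gating is what distinguishes an LSTM from a plain RNN: the forget and input gates let gradients flow over long sequences, which is the property the 1997 Hochreiter & Schmidhuber paper in post 4 introduced.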