home.social

#mnist — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #mnist, aggregated by home.social.

  1. Taking this one step further, I also looked at #ConditionalGANs: Extending #GANs by conditioning both generator and discriminator on labels, so you can explicitly control what is generated. In the #MNIST case, this means generating specific digits and even smoothly interpolating between them:

    🌍 fabriziomusacchio.com/blog/202

    #MachineLearning #GenerativeAI #CGAN #GAN

    (The attached GIF shows the interpolation between the digits 6 and 1)
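
    A minimal sketch of the mechanism (my illustration, not the blog's code; the layer sizes and the linear interpolation in embedding space are assumptions): the generator receives the noise vector concatenated with a learned label embedding, and the discriminator is conditioned the same way during training. Fixing the noise and interpolating between two label embeddings then morphs one digit into another:

    ```python
    # Sketch of label-conditioned generation for MNIST (illustrative only).
    import torch
    import torch.nn as nn

    NOISE_DIM, EMBED_DIM, N_CLASSES = 100, 10, 10

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            self.label_embed = nn.Embedding(N_CLASSES, EMBED_DIM)
            self.net = nn.Sequential(
                nn.Linear(NOISE_DIM + EMBED_DIM, 256), nn.ReLU(),
                nn.Linear(256, 28 * 28), nn.Tanh(),  # 28x28 image in [-1, 1]
            )

        def forward(self, z, label_vec):
            # Condition on the label by concatenating its embedding to the noise.
            return self.net(torch.cat([z, label_vec], dim=1)).view(-1, 1, 28, 28)

    G = Generator()  # assume weights trained with the usual cGAN objective
    z = torch.randn(1, NOISE_DIM)  # keep the noise fixed while interpolating

    # Interpolate between the embeddings of digit 6 and digit 1;
    # rendering the frames in sequence gives a GIF like the one attached.
    e6 = G.label_embed(torch.tensor([6]))
    e1 = G.label_embed(torch.tensor([1]))
    frames = [G(z, (1 - t) * e6 + t * e1) for t in torch.linspace(0, 1, 16)]
    ```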

  2. Just came across an elegant new #SNN framework called #nervos by Maskeen and Lashkare, which implements a two-layer SNN with local #STDP #learning to classify, e.g., #MNIST digits. Here is an example where I apply it to a 6-class subset of MNIST. The model reaches around 85% accuracy & the learned synapses show digit-like patterns. Quite impressive in my view, given the simplicity of the architecture & the local learning rule:

    🌍 fabriziomusacchio.com/blog/202

    #CompNeuro #Neuroscience #NeuralPlasticity
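
    For readers unfamiliar with the rule: a generic two-layer SNN with pair-based STDP looks roughly like the sketch below. This is NOT the nervos API (the thresholds, traces, and learning rates here are my own placeholder choices), just the textbook mechanism such frameworks implement: potentiate a synapse when the presynaptic spike precedes the postsynaptic one, depress it otherwise.

    ```python
    # Generic two-layer SNN with a local STDP rule (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_OUT, T = 28 * 28, 6, 100       # input pixels, output neurons, timesteps
    A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0

    W = rng.uniform(0.0, 0.3, size=(N_OUT, N_IN))  # input -> output synapses

    def present(image, W, thresh=5.0):
        """Present one image as Poisson spikes and apply pair-based STDP."""
        rates = image.ravel() * 0.2            # pixel intensity -> spike probability
        v = np.zeros(N_OUT)                    # leaky membrane potentials
        pre_trace = np.zeros(N_IN)             # exponential traces of recent spikes
        post_trace = np.zeros(N_OUT)
        for _ in range(T):
            pre = rng.random(N_IN) < rates     # Poisson-coded input spikes
            v = 0.9 * v + W @ pre
            post = v > thresh                  # output neurons that fire
            v[post] = 0.0                      # reset after a spike
            pre_trace = pre_trace * np.exp(-1 / TAU) + pre
            post_trace = post_trace * np.exp(-1 / TAU) + post
            # Local STDP: potentiate pre-before-post, depress post-before-pre.
            W += A_PLUS * np.outer(post, pre_trace) - A_MINUS * np.outer(post_trace, pre)
        np.clip(W, 0.0, 1.0, out=W)
        return W
    ```

    After presenting many digits, reshaping each row of W back to 28x28 is what reveals the digit-like synaptic patterns mentioned above.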

  3. 🧠 New paper by Deistler et al.: #JAXLEY: differentiable #simulation for large-scale training of detailed #biophysical #models of #NeuralDynamics.

    They present a #differentiable #GPU accelerated #simulator that trains #morphologically detailed biophysical #neuron models with #GradientDescent. JAXLEY fits intracellular #voltage and #calcium data, scales to thousands of compartments, trains biophysical #RNNs on #WorkingMemory tasks & even solves #MNIST.

    🌍 doi.org/10.1038/s41592-025-028

    #Neuroscience #CompNeuro
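
    The core idea in one toy example (this is not the Jaxley API; the single-compartment leaky model and parameter values are my own stand-ins): when the simulation is written in JAX, autodiff gives gradients of the fit error with respect to the biophysical parameters, so they can be trained by gradient descent directly.

    ```python
    # Toy differentiable neuron simulation fit by gradient descent (illustrative).
    import jax
    import jax.numpy as jnp

    DT, T = 0.1, 200  # ms per step, number of steps

    def simulate(params, i_ext):
        """Leaky single-compartment neuron; returns the voltage trace."""
        g_leak, e_leak, cap = params["g_leak"], params["e_leak"], params["cap"]
        def step(v, i_t):
            dv = (-g_leak * (v - e_leak) + i_t) / cap
            v = v + DT * dv
            return v, v
        _, trace = jax.lax.scan(step, e_leak, i_ext)
        return trace

    def loss(params, i_ext, target_v):
        return jnp.mean((simulate(params, i_ext) - target_v) ** 2)

    params = {"g_leak": 0.1, "e_leak": -65.0, "cap": 1.0}   # initial guess
    i_ext = jnp.where((jnp.arange(T) > 50) & (jnp.arange(T) < 150), 2.0, 0.0)
    # "Recorded" voltage comes from a hidden ground-truth parameter set.
    target_v = simulate({"g_leak": 0.3, "e_leak": -70.0, "cap": 1.0}, i_ext)

    grads = jax.grad(loss)(params, i_ext, target_v)  # one gradient-descent step:
    params = jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)
    ```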

  4. Nothing like the #Kaggle #fashion #MNIST variant to make me feel like a real Elle Woods over here doing t-SNE on purses and saliency maps on ankle boots 😅

    github.com/janeadams/fashion_m

    #MachineLearning #WomeninSTEM #AI #ML #tsne #pca #WiDS #Python
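
    For anyone who wants to try the t-SNE part themselves, a minimal sketch (the repo's own code may differ; the subsample size and perplexity are my assumptions): embed a slice of Fashion-MNIST into 2D and color by class.

    ```python
    # Minimal t-SNE embedding of Fashion-MNIST (illustrative sketch).
    import matplotlib.pyplot as plt
    from sklearn.datasets import fetch_openml
    from sklearn.manifold import TSNE

    X, y = fetch_openml("Fashion-MNIST", version=1, return_X_y=True, as_frame=False)
    X, y = X[:3000] / 255.0, y[:3000].astype(int)  # subsample: t-SNE scales poorly

    emb = TSNE(n_components=2, perplexity=30, init="pca",
               random_state=0).fit_transform(X)

    plt.scatter(emb[:, 0], emb[:, 1], c=y, s=4, cmap="tab10")
    plt.colorbar(label="class (0 = t-shirt ... 9 = ankle boot)")
    plt.title("t-SNE of Fashion-MNIST")
    plt.show()
    ```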