#neurips2023 — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #neurips2023, aggregated by home.social.
-
I was an invited speaker at the NeurIPS conference in New Orleans in Dec 2023, for the NeuroAI social.
I was more than surprised to be invited to what is now primarily an AI/ML conference (despite "Neural" being the first word of its name, and the conference's origins in computational neuroscience). To say that the successful AI systems currently deployed and the neuroscientific study of biological intelligence have diverged would be an understatement, so it was a somewhat odd choice for the organizers to invite a neurophysiologist like me.
So, I took the invite as an opportunity to talk about attention in biological vision, and how what is now called attention in AI/ML/CNNs/transformers is almost orthogonal to what many others and I study within visual neuroscience, psychology, and cognitive science. While the talk was a partial critique of current AI models, it was more a call to take seriously the one existing instance of intelligence (i.e., the biological world), which still has much to offer towards designing better AI systems.
If attention is not one of the cognitive ingredients in the intelligence recipe for autonomous systems, I don't know what is.
The talk slides can be found here: https://www.dropbox.com/scl/fi/927f50bfvqpwtserizgl5/NeuroAI_Neurips_KS2023.pdf?rlkey=r3pgvsyoudwczapjijx80pj7l
#Neurips2023 #NeuroAI #Attention #Vision #BiologicalVision #ActiveVision #SpaceVariance #NonlinearCompression #EyeMovements #Neurodynamics #AutonomousSystems #AI #ML
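For readers on the neuroscience side, a minimal sketch of what "attention" means in the transformer sense, i.e. scaled dot-product attention (a generic NumPy illustration, not code from the talk):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Transformer-style attention: a weighted average of the values V,
    with weights given by a softmax over query-key similarities."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # (n_q, d_v)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Note how this is a differentiable re-weighting of inputs, with no notion of eye movements, space-variant sampling, or serial selection, which is part of why it sits so far from attention as studied in biological vision.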
-
Research in mechanistic interpretability and neuroscience often relies on interpreting internal representations to understand systems, or manipulating representations to improve models. I gave a talk at the UniReps workshop at NeurIPS on a few challenges for this area, summary thread: 1/12
#ai #ml #neuroscience #computationalneuroscience #interpretability #NeuralRepresentations #neurips2023
-
Check out this #NeurIPS2023 paper by Dinc et al. (2023), who introduce #CORNN, convex #optimization of recurrent neural networks for rapid inference of #NeuralDynamics:
-
Learning better with Dale’s Law: A Spectral Perspective - a #NeurIPS2023 contribution by Li et al. (2023). It shows how to train brain-like #RNNs with separate excitatory and inhibitory units that perform comparably to standard RNNs:
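For context, Dale's law says each neuron's outgoing synapses share one sign. A minimal sketch of a rate RNN with that sign constraint built in (my own toy illustration, not the paper's training method):

```python
import numpy as np

rng = np.random.default_rng(0)
n_exc, n_inh = 80, 20                  # 80/20 excitatory/inhibitory split
n = n_exc + n_inh

# Dale's law: all outgoing weights of a unit share one sign, enforced by
# multiplying nonnegative magnitudes with a fixed diagonal sign matrix.
sign = np.diag(np.concatenate([np.ones(n_exc), -np.ones(n_inh)]))
W_plus = np.abs(rng.normal(scale=1.0 / np.sqrt(n), size=(n, n)))
W = W_plus @ sign                      # columns: excitatory >= 0, inhibitory <= 0

def step(x, u, dt=0.1):
    """One Euler step of a rate RNN obeying Dale's law."""
    r = np.tanh(x)
    return x + dt * (-x + W @ r + u)

x = rng.normal(size=n)
x = step(x, u=np.zeros(n))
# Outgoing weights from excitatory units are nonnegative, inhibitory nonpositive:
print((W[:, :n_exc] >= 0).all(), (W[:, n_exc:] <= 0).all())  # True True
```

The paper's contribution is a spectral analysis of why naive training under this constraint underperforms and how to fix it; the sketch only shows the constraint itself.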
-
How to measure (dis)similarity between #NeuralRepresentations? This work by Harvey et al. (2023) ( @ahwilliams lab) illuminates the relations between #CanonicalCorrelationAnalysis (#CCA), shape distances, #RepresentationalSimilarityAnalysis (#RSA), #CenteredKernelAlignment (#CKA), and #NormalizedBuresSimilarity (#NBS):
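To make one of these measures concrete, here is a minimal sketch of linear CKA between two response matrices (a generic toy illustration, not the paper's code):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between response matrices
    X (n_stimuli, d1) and Y (n_stimuli, d2)."""
    X = X - X.mean(axis=0)                        # center each feature
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(X.T @ Y, 'fro') ** 2   # HSIC-style cross term
    norm_x = np.linalg.norm(X.T @ X, 'fro')       # self-similarity terms
    norm_y = np.linalg.norm(Y.T @ Y, 'fro')
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
print(round(linear_cka(X, X), 6))                 # 1.0 for identical representations
print(linear_cka(X, rng.normal(size=(100, 40))) < 0.5)  # low for unrelated noise
```

CKA is invariant to rotations and isotropic scaling of either representation, which is one of the properties the paper uses to relate it to CCA, RSA, and shape distances.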
-
Very interesting work by Confavreux and colleagues on #MetaLearning families of #plasticity rules in #RecurrentSpikingNetworks using simulation-based inference 👌
🌍 https://openreview.net/forum?id=FLFasCFJNo
#RSN #CompNeuro #Neuroscience #NeurIPS2023 #SynapticPlasticity #SpikingNeuronalNetwork #SNN
-
Hierarchical #VAEs provide a normative account of motion processing in the primate brain and why the #VisualCortex is hierarchically organized – check this #NeurIPS2023 paper by Vafaii et al. (2023):
🌍 https://www.biorxiv.org/content/10.1101/2023.09.27.559646v2
-
Alignment of feedback and feedforward explains how feedback connections in #VisualCortex contribute to perceptual experiences such as #imagination, de-occlusion, or #hallucinations – check this #NeurIPS2023 paper by Tahereh Toosi and Elias B. Issa:
-
Explaining model *predictions* is all well and good – but what about model *uncertainty*?
Pleased to announce that our paper on information theoretic Shapley values will be presented at #NeurIPS2023.
Joint work with David Watson, Josh O’Hara, Richard Mudd, and Ido Guy.
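For background on the Shapley machinery the paper builds on: a Shapley value is a feature's average marginal contribution to some value function across all coalitions. A generic exact computation on a toy additive game (my own illustration, not the paper's information-theoretic method):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: each player's marginal contribution to value(),
    averaged over all coalitions with the standard Shapley weights."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += w * (value(set(S) | {p}) - value(set(S)))
    return phi

# Toy additive game: v(S) is the sum of member weights, so the Shapley
# values recover each player's weight exactly.
weights = {'a': 1.0, 'b': 2.0, 'c': 3.0}
v = lambda S: sum(weights[p] for p in S)
print(shapley_values(list(weights), v))  # {'a': 1.0, 'b': 2.0, 'c': 3.0}
```

The paper's twist, as I read the abstract, is choosing an information-theoretic value function so the attributions explain predictive uncertainty rather than point predictions.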
-
We propose a new family of probability densities that have closed-form normalising constants. Our densities use two-layer neural networks as parameters and strictly generalise exponential families. We show that the squared norm of the network output can be integrated in closed form, yielding the normalising constant. We call these densities the Squared Neural Family (#SNEFY); they are closed under conditioning.
Accepted at #NeurIPS2023. #MachineLearning #Bayesian #GaussianProcess
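To illustrate the construction (not the paper's closed form): a 1-D density proportional to the squared output of a small two-layer network times a Gaussian base measure. The paper's point is that the normalising constant Z has a closed form for such families; here I just compute it numerically as a stand-in, with hypothetical parameters.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
# Hypothetical two-layer network f(x) = v . cos(W x + b) with 5 hidden units.
W, b, v = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)

def f(x):
    return v @ np.cos(W * x + b)

def base(x):
    """Standard Gaussian base measure."""
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Density p(x) = f(x)^2 base(x) / Z; Z obtained numerically here,
# where the paper would give it in closed form.
Z, _ = quad(lambda x: f(x)**2 * base(x), -np.inf, np.inf)
p = lambda x: f(x)**2 * base(x) / Z
mass, _ = quad(p, -np.inf, np.inf)
print(round(mass, 6))  # 1.0
```

Squaring makes the density nonnegative by construction, and the quadratic form in the network output is what makes the integral tractable in the families the paper studies.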