home.social

#neurips2023 — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #neurips2023, aggregated by home.social.

  1. I was an invited speaker at the NeurIPS conference in New Orleans in Dec 2023 for the NeuroAI social.

    I was more than surprised to be invited to what is now primarily an AI/ML conference (despite "Neural" being the first word of its name, and the conference's origins in computational neuroscience). To say that the successful AI systems currently deployed and the neuroscience/study of biological intelligence have diverged would be an understatement; it seemed a somewhat odd choice for the organizers to invite a neurophysiologist like me.

    So, I took the invite as an opportunity to talk about attention in biological vision, and how what is now called attention in AI/ML (CNNs, transformers) is almost orthogonal to what many others and I study within visual neuroscience, psychology, and cognitive science.

    While the talk was a partial critique of current AI models, it was more a call for the field to take seriously the one existing instance of intelligence (i.e., the biological world), and how much it still has to offer toward designing better AI systems.

    If attention is not one of the cognitive ingredients that make up the intelligence recipe for autonomous systems, I don't know what is.

    The talk slides can be found here: dropbox.com/scl/fi/927f50bfvqp

    #Neurips2023 #NeuroAI #Attention #Vision #BiologicalVision #ActiveVision #SpaceVariance #NonlinearCompression #EyeMovements #Neurodynamics #AutonomousSystems #AI #ML

  6. Research in mechanistic interpretability and neuroscience often relies on interpreting internal representations to understand systems, or manipulating representations to improve models. I gave a talk at the UniReps workshop at NeurIPS on a few challenges for this area, summary thread: 1/12
    #ai #ml #neuroscience #computationalneuroscience #interpretability #NeuralRepresentations #neurips2023

  7. Learning better with Dale’s Law: A Spectral Perspective - #NeurIPS2023 contribution by Li et al. (2023). It discusses how to train brain-like #RNNs with separate inhibitory and excitatory units that perform on par with standard RNNs:

    🌍 openreview.net/forum?id=rDiMgZ

    #RNN #DalesLaw #CompNeuro #Neuroscience
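    The E/I constraint at the heart of that paper can be illustrated with a minimal sketch (hypothetical sizes and dynamics chosen here for illustration, not the authors' code): each unit is assigned a fixed excitatory or inhibitory sign, and the recurrent weight matrix is reparameterized so that every outgoing connection respects it.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8  # number of recurrent units

    # Assign each unit a fixed sign: +1 excitatory, -1 inhibitory (Dale's Law),
    # with a roughly 3:1 E:I split.
    signs = np.where(rng.random(n) < 0.75, 1.0, -1.0)

    # Train an unconstrained parameter matrix; the effective recurrent weights
    # respect each presynaptic unit's sign via a nonnegative reparameterization.
    W_free = rng.normal(size=(n, n))
    W_eff = np.abs(W_free) * signs[np.newaxis, :]  # column j carries the sign of unit j

    # One step of a leaky vanilla RNN using the sign-constrained weights.
    def step(h, x, W, tau=0.5):
        return (1 - tau) * h + tau * np.tanh(W @ h + x)

    h = np.zeros(n)
    h = step(h, rng.normal(size=n), W_eff)

    # Every column of W_eff has a single sign, as Dale's Law requires.
    assert all((W_eff[:, j] * signs[j] >= 0).all() for j in range(n))
    ```

    In this toy setup, gradients flow through `W_free` while the sign constraint holds exactly at every training step; the spectral analysis and training tricks that close the performance gap are in the paper itself.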


  12. Hierarchical #VAEs provide a normative account of motion processing in the primate brain and why the #VisualCortex is hierarchically organized – check this #NeurIPS2023 paper by Vafaii et al. (2023):

    🌍 biorxiv.org/content/10.1101/20

    #Neuroscience #CompNeuro #VAE

  13. Alignment of feedback and feedforward pathways explains how feedback connections in #VisualCortex contribute to perceptual experiences such as #imagination, de-occlusion, or #hallucinations – check this #NeurIPS2023 paper by Tahereh Toosi and Elias B. Issa:

    🌍 arxiv.org/abs/2310.20599

    #CompNeuro #Neuroscience #Perception

  14. Explaining model *predictions* is all well and good – but what about model *uncertainty*?

    Pleased to announce that our paper on information theoretic Shapley values will be presented at #NeurIPS2023.

    Joint work with David Watson, Josh O’Hara, Richard Mudd, and Ido Guy.

    arxiv.org/abs/2306.05724

    #NeurIPS #xai #UncertaintyQuantification #machinelearning

  15. We propose a new family of probability densities that have closed-form normalizing constants. Our densities use two-layer neural networks as parameters and strictly generalize exponential families. We show that the squared norm of the network output can be integrated in closed form, yielding the normalizing constant. We call these densities the Squared Neural Family (#SNEFY); they are closed under conditioning.

    Accepted at #NeurIPS2023. #MachineLearning #Bayesian #GaussianProcess

    arxiv.org/abs/2305.13552
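    The construction can be sketched numerically (a toy one-dimensional illustration with made-up network sizes, not the paper's implementation): take the squared norm of a small two-layer network's output, multiply by a Gaussian base measure, and the result is a valid unnormalized density. The paper's contribution is that the normalizing constant also has a closed form; here it is simply checked by numerical quadrature.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # A small two-layer network f: R -> R^m (hypothetical sizes).
    m, k = 4, 16
    W1 = rng.normal(size=(k, 1))
    b1 = rng.normal(size=k)
    W2 = rng.normal(size=(m, k)) / np.sqrt(k)

    def f(x):
        # x: 1-D array of inputs -> (len(x), m) network outputs
        h = np.cos(np.outer(x, W1[:, 0]) + b1)  # cosine hidden features
        return h @ W2.T

    # Unnormalized density: squared norm of the network output times a
    # standard Gaussian base measure.
    def q(x):
        base = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
        return (f(x) ** 2).sum(axis=1) * base

    # Trapezoid-rule estimate of the normalizing constant.
    xs = np.linspace(-8.0, 8.0, 4001)
    qv = q(xs)
    Z = ((qv[:-1] + qv[1:]) / 2 * np.diff(xs)).sum()
    p = qv / Z  # a proper density over the grid

    assert np.isfinite(Z) and Z > 0
    ```

    The squared norm is what makes the closed form possible: it expands into pairwise products of hidden features, each of which integrates analytically against the base measure for suitable activations, as derived in the paper.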