#mathpsych — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #mathpsych, aggregated by home.social.
-
I would appreciate recommendations for a preprint server to submit preprints in computational intelligence / computational cognitive science.
I know about the following preprint servers and would appreciate opinions about their relative merits for that topic area, and links to any other relevant preprint servers.
* https://arxiv.org/
- The available fine-grained categories in Computer Science *might* not cover my topics (depending on how strictly the categories are interpreted).
- The submission formats *might* be incompatible with my writing workflow.
* https://osf.io/preprints/discover
- OSF Preprints is a network of community-run, OSF-hosted preprint servers. The only vaguely relevant community server is PsyArXiv, and it isn't clear that my topics of interest are in scope for PsyArXiv.
* https://zenodo.org/
- Appears to have no constraints on format or topic (which is probably positive for me).
- Is not indexed by Google Scholar. (I have views about Google. However, not being indexed by Google means that preprints on Zenodo are effectively invisible to many researchers.)
#FediHelp #PrePrint #ScholarlyPublishing #AI #ArtificialIntelligence #CogSci #CognitiveScience #CompCogSci #ComputationalCognitiveScience #ComputationalIntelligence #MathematicalPsychology #MathPsych
-
CW: Long #HiveMind request for pointers to literature on when loss minimisation is inappropriate for predictive modelling because of radical nonstationarity - PLEASE BOOST FOR REACH
A lot of #MachineLearning and #PredictiveModelling in #statistics is based on minimisation of loss with respect to a training data set. This assumes that the training data set as a whole is representative of the potential testing sets. Consequently, loss minimisation is not an appropriate approach (or way of conceptualising the problem) in problems where the training data sets are not representative of the potential testing sets. (As a working title, let's call this issue "radical nonstationarity".)
I recently read Javed & Sutton 2024 "The Big World Hypothesis and its Ramifications for Artificial Intelligence" (https://web.archive.org/web/20250203053026/https://openreview.net/forum?id=Sv7DazuCn8) and think it describes a superset of this issue of radical nonstationarity. I strongly recommend this paper for motivating why loss minimisation with respect to a training data set might not always be appropriate.
Imagine an intelligent agent existing over time in a "big world" environment. Each observation records information about a single interaction of the agent with its environment, and this observation only records the locally observable part of the environment. The agent may move between locations in the environment that are radically different with respect to the predictive relationships that hold, and the variables that are predictive of the outcome of interest may vary between observations. Nonetheless, there is some predictive information that an intelligent agent could exploit. (The case where everything is totally random and unpredictable is of no interest when the focus of research is an intelligent agent.) In such a world, minimising loss with respect to the history of all observations seen by the agent, or even a sliding window of recent history, seems irrelevant to the point of obtuseness.
One possible approach to this issue might be for the agent to determine, on a per-observation basis, the subset of past observations that are most relevant to making a prediction for the current observation. Then loss minimisation might play some role in determining or using that subset. However, that use of a dynamically determined training set is not the same thing as loss minimisation with respect to a statically given training set.
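One way to make the "dynamically determined training set" idea concrete is a sketch like the following. This is purely illustrative: the nearest-neighbour selection rule, the choice of k, and the local least-squares model are my own assumptions, not drawn from any particular paper. The point is only that loss minimisation happens over a per-query subset of history, not over the full history.

```python
import numpy as np

def predict_with_local_training_set(history_x, history_y, query_x, k=20):
    """Illustrative sketch: select the k past observations most similar to
    the current observation, then minimise loss (here, ordinary least
    squares) on that dynamically chosen subset only.

    history_x : (n, d) array of past inputs
    history_y : (n,) array of past outcomes
    query_x   : (d,) current input
    """
    # Rank past observations by proximity to the current observation.
    dists = np.linalg.norm(history_x - query_x, axis=1)
    nearest = np.argsort(dists)[:k]
    X, y = history_x[nearest], history_y[nearest]

    # Loss minimisation restricted to the locally relevant subset,
    # not the full (radically nonstationary) history.
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return np.append(query_x, 1.0) @ coef

# Toy usage: two regimes with opposite predictive relationships,
# so a single global fit over all history would be useless.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
y = np.where(x[:, 0] < 0, 2.0 * x[:, 0], -2.0 * x[:, 0])
pred = predict_with_local_training_set(x, y, np.array([0.5]), k=20)
```

In the toy example the local fit recovers the regime that applies at the query point, where a global least-squares fit over both regimes would average them away.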
I am trying to find pointers to scholarly literature that discusses this issue (i.e. situations where minimisation of loss with respect to some "fixed" training set is inappropriate). My problem is that I am struggling to come up with search terms to find that literature. So:
* Please suggest search terms that might help me find this literature
* Please provide pointers to relevant papers
#PhilosophyOfStatistics #PhilosophyOfMachineLearning #CognitiveRobotics #MathematicalPsychology #MathPsych #CognitiveScience #CogSci #CognitiveNeuroscience #nonstationarity #LossMinimisation
-
The next VSAonline webinar is at 17:00 UTC (not the usual time), Monday 27 January.
Zoom: https://ltu-se.zoom.us/j/65564790287
Speaker: Anthony Thomas from UC Davis, USA
Title: "Sketching a Picture of Vector Symbolic Architectures"
Abstract: Sketching algorithms are a broad area of research in theoretical computer science and numerical analysis that aim to distil data into a simple summary, called a "sketch," that retains some essential notion of structure while being much more efficient to store, query, and transmit.
Vector-symbolic architectures (VSAs) are an approach to computing on data represented using random vectors, and provide an elegant conceptual framework for realizing a wide variety of data structures and algorithms in a way that lends itself to implementation in highly-parallel and energy-efficient computer hardware.
Sketching algorithms and VSA have a substantial degree of consonance in their methods, motivations, and applications. In this tutorial-style talk, I will discuss some of the connections between these two fields, focusing in particular on the connections between VSA and tensor sketches, a family of sketching algorithms concerned with the setting in which the data being sketched can be decomposed into Kronecker (tensor) products between more primitive objects. This is exactly the situation of interest in VSA, and the two fields have arrived at strikingly similar solutions to this problem.
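For readers new to VSA, the binding and bundling operations the abstract refers to can be shown in a minimal sketch. This uses the bipolar (MAP-style) family purely as an illustration; other VSA families use different carrier vectors and operators.

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (elementwise multiply in the bipolar family): yields a
    vector dissimilar to both inputs, and is its own inverse."""
    return a * b

def bundle(*vs):
    """Bundling (elementwise majority sign): yields a set-like
    superposition similar to each input."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalised dot product (cosine-like similarity)."""
    return float(a @ b) / D

# Encode a tiny record {colour: red, shape: square} as one vector.
colour, red = random_hv(), random_hv()
shape, square = random_hv(), random_hv()
record = bundle(bind(colour, red), bind(shape, square))

# Unbinding with the 'colour' key recovers something close to 'red',
# and nearly orthogonal to the unrelated filler 'square'.
recovered = bind(record, colour)
```

The recovered vector is noisy but close enough to `red` to be identified by a nearest-neighbour lookup over the known atomic vectors, which is the usual cleanup step in VSA systems.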
#VectorSymbolicArchitectures #VSA #HyperdimensionalComputing #HDC #AI #ML #ComputationalCognitiveScience #CompCogSci #MathematicalPsychology #MathPsych #CognitiveScience #CogSci @cogsci
-
The schedule for the next VSAonline webinar series (January to June 2025) is published at:
https://sites.google.com/view/hdvsaonline/spring-2025
There are 11 talks around #VectorSymbolicArchitecture / #HyperdimensionalComputing
The talks are (almost always) recorded and published online, in case you can't participate in the live session.
@cogsci
#VSA #HDC #CompCogSci #MathPsych #AI #neuromorphic #neurosymbolic #ComputationalNeuroscience #ComputationalCognitiveScience #MathematicalPsychology
-
New preprint:
“Algebras of actions in an agent’s representations of the world”
-
Thanks @hosford42 for reminding me of this half-day tutorial on Vector Symbolic Architectures / Hyperdimensional Computing. The authors have been applying HDC/VSA to place recognition in robotics, but the tutorial coverage is much wider.
https://www.tu-chemnitz.de/etit/proaut/workshops_tutorials/hdc_ki19/index.html
#VSA #VectorSymbolicArchitecture #HDC #HyperdimensionalComputing #CogRob #CognitiveRobotics #CompCogSci #ComputationalCognitiveScience #CogSci #CognitiveScience #MathPsych #MathematicalPsychology
-
Here are two high-level articles that mention Vector Symbolic Architecture / Hyperdimensional Computing in the more "popular" end of the technical press:
https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10098176
#VSA #VectorSymbolicArchitecture #HDC #HyperdimensionalComputing #CompCogSci #ComputationalCognitiveScience #CogSci #CognitiveScience #MathPsych #MathematicalPsychology
-
CW: Job openings: two university positions in HyperDimensional Computing / Vector Symbolic Architectures
Luleå University of Technology (Sweden) announces two open positions focusing on research on hyperdimensional computing and vector-symbolic architectures. The positions are connected to the recently approved Swedish Research Council grant "Vector-symbolic architectures as an algebraic programming language for neuromorphic computers".
The application deadline is January 31st, 2023
Position 1. Senior lecturer: https://lnkd.in/gu5zf69T
Position 2. Researcher: https://lnkd.in/giSuqkFi
Should you have any questions please write to Evgeny Osipov ([email protected]).
#VSA #VectorSymbolicArchitecture #HDC #HyperdimensionalComputing #CogSci #CognitiveScience #CogRob #CognitiveRobotics #MathPsych #CompCogSci #CompSci #ComputerScience #neuromorphic
#job
-
CW: New paper on Vector Symbolic Architectures / Hyperdimensional Computing (Osipov et al)
Abstract:
Motivated by recent innovations in biologically inspired neuromorphic hardware, this article presents a novel unsupervised machine learning algorithm named Hyperseed that draws on the principles of vector symbolic architectures (VSAs) for fast learning of a topology-preserving feature map of unlabeled data. It relies on two major operations of VSA, binding and bundling. The algorithmic part of Hyperseed is expressed within the Fourier holographic reduced representations (FHRR) model, which is specifically suited for implementation on spiking neuromorphic hardware. The two primary contributions of the Hyperseed algorithm are few-shot learning and a learning rule based on a single vector operation. These properties are empirically evaluated on synthetic datasets and on illustrative benchmark use cases: Iris classification and a language identification task using n-gram statistics. The results of these experiments confirm the capabilities of Hyperseed and its applications in neuromorphic hardware.
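As a rough illustration of the FHRR model the abstract mentions (this is a generic FHRR sketch under my own parameter choices, not the Hyperseed algorithm itself): hypervectors are vectors of unit-magnitude complex phasors, binding is elementwise complex multiplication (phase addition), unbinding multiplies by the conjugate, and bundling is a normalised sum.

```python
import numpy as np

rng = np.random.default_rng(7)
D = 2048  # dimensionality; random phasors are quasi-orthogonal at high D

def random_phasor():
    """FHRR hypervector: D unit-magnitude complex numbers, random phases."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, size=D))

def bind(a, b):
    """FHRR binding: elementwise complex multiplication (adds phases)."""
    return a * b

def unbind(a, b):
    """Inverse of binding: multiply by the complex conjugate."""
    return a * np.conj(b)

def bundle(*vs):
    """FHRR bundling: elementwise sum, renormalised to unit magnitude."""
    s = np.sum(vs, axis=0)
    return s / np.abs(s)

def sim(a, b):
    """Similarity: mean real part of a·conj(b), in [-1, 1]."""
    return float(np.mean(np.real(a * np.conj(b))))

# A two-item key-value memory built from bind + bundle.
key1, val1 = random_phasor(), random_phasor()
key2, val2 = random_phasor(), random_phasor()
memory = bundle(bind(key1, val1), bind(key2, val2))

# Querying with key1 yields a noisy version of val1.
retrieved = unbind(memory, key1)
```

Because binding is elementwise multiplication of unit phasors, it reduces to a single vector operation per step, which is the property that makes FHRR attractive for spiking/neuromorphic implementations.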
https://ieeexplore.ieee.org/abstract/document/9953974
#VSA #VectorSymbolicArchitecture #HDC #HyperdimensionalComputing #CogSci #CogRob #MathPsych