#classifiers — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #classifiers, aggregated by home.social.
-
@icing There's a thing called "the curse of dimensionality", and it applies to neural networks. You could say it's like a reverse Moore's Law for neural nets. Basically (and this is just my mostly non-technical explanation), neural nets are huge multi-dimensional classifiers, and training them with backpropagation involves making small adjustments to localised areas of the classifier space. The problem (or curse) of having more dimensions is that it becomes harder and harder to localise those changes, because every local region ends up close to every other point in the space. This means exponentially higher training costs as these models scale.
At least, that's how I understand it. I'm not a mathematician, but I have read plenty of stuff relating to machine learning over the years (since the 90s) and I think I've got the above right...
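A quick way to see the distance-concentration effect the post describes: with uniformly random points, the gap between a query point's nearest and farthest neighbour shrinks as the number of dimensions grows, so "nearby" stops being meaningful. A minimal NumPy sketch (the point counts and dimensions are arbitrary illustrative choices, not taken from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(dim, n_points=1000):
    """(farthest - nearest) / nearest distance from a random query
    point to a random uniform cloud in `dim` dimensions."""
    points = rng.uniform(size=(n_points, dim))
    query = rng.uniform(size=dim)
    dists = np.linalg.norm(points - query, axis=1)
    return (dists.max() - dists.min()) / dists.min()

for dim in (2, 10, 100, 1000):
    # Contrast drops toward zero as dimensionality grows:
    # all points become roughly equidistant from the query.
    print(f"dim={dim:5d}  relative contrast={relative_contrast(dim):.3f}")
```

The printed contrast falls toward zero as the dimension grows, which is the "everything becomes close to everything else" effect the post alludes to.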
-
Doctoral Thesis: Improving #bird #sound #classifiers for #passive #acoustic #monitoring
In recent years, passive acoustic monitoring (#PAM) has emerged as a powerful tool for biodiversity assessment of vocalizing taxa such as birds, bats, amphibians and insects. helda.helsinki.fi/items/219f9a...
-
'A Comparative Evaluation of Quantification Methods', by Tobias Schumacher, Markus Strohmaier, Florian Lemmerich.
http://jmlr.org/papers/v26/21-0241.html
#classifiers #supervised #quantification
-
'An Optimal Transport Approach for Computing Adversarial Training Lower Bounds in Multiclass Classification', by Nicolas Garcia Trillos, Matt Jacobs, Jakwang Kim, Matthew Werenski.
http://jmlr.org/papers/v25/24-0268.html
#adversarial #regularization #classifiers
-
'Optimal Decision Tree and Adaptive Submodular Ranking with Noisy Outcomes', by Su Jia, Fatemeh Navidi, Viswanath Nagarajan, R. Ravi.
http://jmlr.org/papers/v25/23-1484.html
#adaptive #classifiers #optimal
-
'Estimating the Replication Probability of Significant Classification Benchmark Experiments', by Daniel Berrar.
http://jmlr.org/papers/v25/24-0158.html
#classifiers #replicability #hypothesis
-
'Non-splitting Neyman-Pearson Classifiers', by Jingming Wang, Lucy Xia, Zhigang Bao, Xin Tong.
http://jmlr.org/papers/v25/22-0795.html
#classifiers #classifier #classification
-
'Generalization and Stability of Interpolating Neural Networks with Minimal Width', by Hossein Taheri, Christos Thrampoulidis.
http://jmlr.org/papers/v25/23-0422.html
#classifiers #generalization #minimization
-
'Classification with Deep Neural Networks and Logistic Loss', by Zihan Zhang, Lei Shi, Ding-Xuan Zhou.
http://jmlr.org/papers/v25/22-0049.html
#classifiers #deepen #classification
-
'Multi-class Probabilistic Bounds for Majority Vote Classifiers with Partially Labeled Data', by Vasilii Feofanov, Emilie Devijver, Massih-Reza Amini.
http://jmlr.org/papers/v25/23-0121.html
#classifiers #classifier #labeling
-
'A Multilabel Classification Framework for Approximate Nearest Neighbor Search', by Ville Hyvönen, Elias Jääsaari, Teemu Roos.
http://jmlr.org/papers/v25/23-0286.html
#classification #classifiers #classifier
-
'Lifted Bregman Training of Neural Networks', by Xiaoyu Wang, Martin Benning.
http://jmlr.org/papers/v24/22-0934.html
#autoencoders #classifiers #denoising
-
'Statistical Comparisons of Classifiers by Generalized Stochastic Dominance', by Christoph Jansen, Malte Nalenz, Georg Schollmeyer, Thomas Augustin.
http://jmlr.org/papers/v24/22-0902.html
#classifiers #comparisons #randomization
-
'Random Forests for Change Point Detection', by Malte Londschien, Peter Bühlmann, Solt Kovács.
http://jmlr.org/papers/v24/22-0512.html
#changeforest #classifier #classifiers
-
#python
#AI #IoT #Monitoring of #smart #building
A Comparison of Top 14 Supervised #ML #algorithm for #Room #Occupancy IoT Monitoring
The integration of occupancy detection IoT sensors with smart building ML management systems provides a foundation for smarter and more efficient decisions about space allocation in the workplace.
Based upon the overall model performance and previous studies, we have selected 14 #scikitlearn #classifiers
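For context, a scikit-learn classifier comparison of the kind the post describes typically looks like the minimal sketch below. The synthetic data and the four candidate models are illustrative stand-ins only; the actual occupancy dataset and the 14 classifiers selected in the study are not listed in the post.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for room-occupancy sensor features
# (e.g. CO2, temperature, humidity, light) with a binary occupied/empty label.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           random_state=42)

# A few representative classifiers; the study compared 14.
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(),
    "svm_rbf": SVC(),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
}

for name, clf in candidates.items():
    # Scale features, then score each model with 5-fold cross-validation.
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(f"{name:14s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```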
-
New #SurveyCertification:
On Averaging ROC Curves
Jack Hogan, Niall M. Adams
-
On Averaging ROC Curves
Jack Hogan, Niall M. Adams
Action editor: Hsuan-Tien Lin.
-
Finding Competence Regions in Domain Generalization
Jens Müller, Stefan T. Radev, Robert Schmier, Felix Draxler, Carsten Rother, Ullrich Koethe
Action editor: Hanwang Zhang.
-
'Generalization error bounds for multiclass sparse linear classifiers', by Tomer Levy, Felix Abramovich.
http://jmlr.org/papers/v24/22-0367.html
#classifiers #multiclass #misclassification
-
Assuming Locally Equal Calibration Errors for Non-Parametric Multiclass Calibration
Kaspar Valk, Meelis Kull
Action editor: Aditya Menon.