home.social

#classifiers — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #classifiers, aggregated by home.social.

  1. @icing There's a thing called "the curse of dimensionality" and it applies to neural networks. I guess you could say it's like a reverse Moore's Law for neural nets. Basically (and this is just my mostly non-technical explanation), neural nets are huge multi-dimensional classifiers, and training them with backpropagation involves making small adjustments to localised areas of the classifier space. The problem (or curse) of having more dimensions is that it becomes harder and harder to localise the changes, because every local point becomes closer to all the other points in every other subspace. This means exponentially higher training costs as these models scale.

    At least that's as I understand it. I'm not a mathematician, but I have read plenty of stuff relating to machine learning over the years (since the 90s) and I think I've got the above right...

    #MooresLaw #MachineLearning #Classifiers
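
The "every point becomes closer to every other point" intuition in the post above is the distance-concentration effect, and it is easy to see numerically. A minimal sketch (stdlib only; the function name and sample sizes are illustrative, not from the post): sample random points in a unit hypercube and watch the relative spread of distances from a reference point shrink as the dimension grows.

```python
import math
import random

def distance_concentration(dim, n_points=200, seed=0):
    """Relative contrast (max_dist - min_dist) / min_dist from one
    reference point to random points in the unit hypercube.
    As dim grows, distances concentrate and this ratio shrinks."""
    rng = random.Random(seed)
    ref = [rng.random() for _ in range(dim)]
    dists = [math.dist(ref, [rng.random() for _ in range(dim)])
             for _ in range(n_points)]
    return (max(dists) - min(dists)) / min(dists)

# The ratio drops steadily as the dimension increases.
for d in (2, 10, 100, 1000):
    print(f"dim={d:5d}  relative contrast={distance_concentration(d):.3f}")
```

In low dimensions the nearest and farthest points differ by a large factor; by a thousand dimensions they are nearly equidistant, which is why "local" adjustments become hard to localise.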

  6. Doctoral Thesis: Improving #bird #sound #classifiers for #passive #acoustic #monitoring In recent years, passive acoustic monitoring (#PAM) has emerged as a powerful tool for biodiversity assessment of vocalizing taxa such as birds, bats, amphibians and insects. helda.helsinki.fi/items/219f9a...

  7. 'A Comparative Evaluation of Quantification Methods', by Tobias Schumacher, Markus Strohmaier, Florian Lemmerich.

    jmlr.org/papers/v26/21-0241.ht

    #classifiers #supervised #quantification

  8. 'An Optimal Transport Approach for Computing Adversarial Training Lower Bounds in Multiclass Classification', by Nicolas Garcia Trillos, Matt Jacobs, Jakwang Kim, Matthew Werenski.

    jmlr.org/papers/v25/24-0268.ht

    #adversarial #regularization #classifiers

  9. 'Optimal Decision Tree and Adaptive Submodular Ranking with Noisy Outcomes', by Su Jia, Fatemeh Navidi, Viswanath Nagarajan, R. Ravi.

    jmlr.org/papers/v25/23-1484.ht

    #adaptive #classifiers #optimal

  10. 'Estimating the Replication Probability of Significant Classification Benchmark Experiments', by Daniel Berrar.

    jmlr.org/papers/v25/24-0158.ht

    #classifiers #replicability #hypothesis

  11. 'Generalization and Stability of Interpolating Neural Networks with Minimal Width', by Hossein Taheri, Christos Thrampoulidis.

    jmlr.org/papers/v25/23-0422.ht

    #classifiers #generalization #minimization

  12. 'Classification with Deep Neural Networks and Logistic Loss', by Zihan Zhang, Lei Shi, Ding-Xuan Zhou.

    jmlr.org/papers/v25/22-0049.ht

    #classifiers #deepen #classification

  13. 'Multi-class Probabilistic Bounds for Majority Vote Classifiers with Partially Labeled Data', by Vasilii Feofanov, Emilie Devijver, Massih-Reza Amini.

    jmlr.org/papers/v25/23-0121.ht

    #classifiers #classifier #labeling

  14. 'A Multilabel Classification Framework for Approximate Nearest Neighbor Search', by Ville Hyvönen, Elias Jääsaari, Teemu Roos.

    jmlr.org/papers/v25/23-0286.ht

    #classification #classifiers #classifier

  15. 'Statistical Comparisons of Classifiers by Generalized Stochastic Dominance', by Christoph Jansen, Malte Nalenz, Georg Schollmeyer, Thomas Augustin.

    jmlr.org/papers/v24/22-0902.ht

    #classifiers #comparisons #randomization

  19. 'Random Forests for Change Point Detection', by Malte Londschien, Peter Bühlmann, Solt Kovács.

    jmlr.org/papers/v24/22-0512.ht

    #changeforest #classifier #classifiers

  20. #python
    #AI #IoT #Monitoring of #smart #building
    A Comparison of Top 14 Supervised #ML #algorithm for #Room #Occupancy IoT Monitoring

    The integration of occupancy detection IoT sensors with smart building ML management systems provides a foundation for smarter and more efficient decisions about space allocation in the workplace.

    Based upon the overall model performance and previous studies, we have selected 14 #scikitlearn #classifiers

    #explore
    wp.me/pdMwZd-6xy
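
The linked write-up compares 14 scikit-learn classifiers; the comparison workflow itself can be sketched in a few lines. This is a toy illustration under loud assumptions: the synthetic "occupancy" data and the two hand-rolled classifiers (nearest centroid, 1-NN) are hypothetical stand-ins, not the study's dataset or models.

```python
import random

def make_data(n=400, seed=1):
    """Synthetic room-occupancy data: two sensor readings
    (CO2 ppm, light lux); label 1 = occupied. Illustrative only."""
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n):
        occ = rng.random() < 0.5
        co2 = rng.gauss(800 if occ else 450, 100)
        light = rng.gauss(400 if occ else 50, 80)
        X.append((co2, light))
        y.append(1 if occ else 0)
    return X, y

def nearest_centroid(tX, ty, x):
    # Predict the class whose mean feature vector is closest to x.
    cents = {}
    for c in set(ty):
        pts = [p for p, lab in zip(tX, ty) if lab == c]
        cents[c] = tuple(sum(v) / len(pts) for v in zip(*pts))
    return min(cents, key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(x, cents[c])))

def one_nn(tX, ty, x):
    # Predict the label of the single closest training point.
    i = min(range(len(tX)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(x, tX[i])))
    return ty[i]

X, y = make_data()
tX, ty, sX, sy = X[:300], y[:300], X[300:], y[300:]
for name, clf in [("nearest centroid", nearest_centroid), ("1-NN", one_nn)]:
    acc = sum(clf(tX, ty, x) == t for x, t in zip(sX, sy)) / len(sy)
    print(f"{name}: accuracy {acc:.2f}")
```

The real study presumably swaps in scikit-learn estimators and a shared train/test split in the same loop: fit each model, score it on held-out data, and rank by the chosen metric.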

  21. Finding Competence Regions in Domain Generalization

    Jens Müller, Stefan T. Radev, Robert Schmier, Felix Draxler, Carsten Rother, Ullrich Koethe

    Action editor: Hanwang Zhang.

    openreview.net/forum?id=TSy0vu

    #classifiers #accuracy #classifier

  22. 'Generalization error bounds for multiclass sparse linear classifiers', by Tomer Levy, Felix Abramovich.

    jmlr.org/papers/v24/22-0367.ht

    #classifiers #multiclass #misclassification

  23. Assuming Locally Equal Calibration Errors for Non-Parametric Multiclass Calibration

    Kaspar Valk, Meelis Kull

    Action editor: Aditya Menon.

    openreview.net/forum?id=na5sHG

    #classifiers #classifier #calibration