home.social


1000 results for “sparse_array”

  1. 'Bayesian Sparse Gaussian Mixture Model for Clustering in High Dimensions', by Dapeng Yao, Fangzheng Xie, Yanxun Xu.

    jmlr.org/papers/v26/23-0142.ht

    #sparse #clustering #clusters

  2. 'From Sparse to Dense Functional Data in High Dimensions: Revisiting Phase Transitions from a Non-Asymptotic Perspective', by Shaojun Guo, Dong Li, Xinghao Qiao, Yizhu Wang.

    jmlr.org/papers/v26/23-1578.ht

    #sparse #nonparametric #smoothing


  6. 'A minimax optimal approach to high-dimensional double sparse linear regression', by Yanhang Zhang, Zhifan Li, Shixiang Liu, Jianxin Yin.

    jmlr.org/papers/v25/23-0653.ht

    #sparse #thresholding #sparsity

  7. 'Generalization on the Unseen, Logic Reasoning and Degree Curriculum', by Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Kevin Rizk.

    jmlr.org/papers/v25/24-0220.ht

    #sparse #learns #generalization

  8. 'Neural Networks with Sparse Activation Induced by Large Bias: Tighter Analysis with Bias-Generalized NTK', by Hongru Yang, Ziyu Jiang, Ruizhe Zhang, Yingbin Liang, Zhangyang Wang.

    jmlr.org/papers/v25/23-0831.ht

    #sparse #gradient #generalization

  9. 'White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?', by Yaodong Yu et al.

    jmlr.org/papers/v25/23-1547.ht

    #sparse #compressive #encoders

  10. 'skscope: Fast Sparsity-Constrained Optimization in Python', by Zezhi Wang, Junxian Zhu, Xueqin Wang, Jin Zhu, Huiyang Peng, Peng Chen, Anran Wang, Xiaoke Zhang.

    jmlr.org/papers/v25/23-1574.ht

    #sparse #optimization #sparsity

  11. 'Sparse Graphical Linear Dynamical Systems', by Emilie Chouzenoux, Victor Elvira.

    jmlr.org/papers/v25/23-0878.ht

    #lasso #models #graphical

  12. 'Sparse Representer Theorems for Learning in Reproducing Kernel Banach Spaces', by Rui Wang, Yuesheng Xu, Mingsong Yan.

    jmlr.org/papers/v25/23-0645.ht

    #regularization #sparse #regularized


  16. 'Sparse NMF with Archetypal Regularization: Computational and Robustness Properties', by Kayhan Behdin, Rahul Mazumder.

    jmlr.org/papers/v25/21-0233.ht

    #sparse #regularization #robustness

  17. 'Sparse Plus Low Rank Matrix Decomposition: A Discrete Optimization Approach', by Dimitris Bertsimas, Ryan Cory-Wright, Nicholas A. G. Johnson.

    jmlr.org/papers/v24/21-1130.ht

    #sparse #minimization #optimization

  18. 'Importance Sparsification for Sinkhorn Algorithm', by Mengyu Li, Jun Yu, Tao Li, Cheng Meng.

    jmlr.org/papers/v24/22-1311.ht

    #sparse #echocardiogram #efficiently

  19. Fast Kernel Methods for Generic Lipschitz Losses via $p$-Sparsified Sketches

    Tamim El Ahmad, Pierre Laforgue, Florence d'Alché-Buc

    Action editor: Makoto Yamada.

    openreview.net/forum?id=ry2qgR

    #sparse #kernel #approximations

  20. 'Robust Methods for High-Dimensional Linear Learning', by Ibrahim Merad, Stéphane Gaïffas.

    jmlr.org/papers/v24/22-0964.ht

    #sparse #robust #outliers

  21. 'Sparse GCA and Thresholded Gradient Descent', by Sheng Gao, Zongming Ma.

    jmlr.org/papers/v24/21-0745.ht

    #sparse #gca #pca

  22. 'MARS: A Second-Order Reduction Algorithm for High-Dimensional Sparse Precision Matrices Estimation', by Qian Li, Binyan Jiang, Defeng Sun.

    jmlr.org/papers/v24/21-0699.ht

    #sparse #matrix #penalized

  23. 'Sparse Training with Lipschitz Continuous Loss Functions and a Weighted Group L0-norm Constraint', by Michael R. Metel.

    jmlr.org/papers/v24/22-0615.ht

    #sparse #minimizing #lipschitz

  24. 'Fundamental limits and algorithms for sparse linear regression with sublinear sparsity', by Lan V. Truong.

    jmlr.org/papers/v24/21-0543.ht

    #sparse #sparsity #interpolation

  25. Sparse distance fields are an efficient representation of complex scene geometry. They can help accelerate techniques like real-time GI 🌅

    Our #GDC23 session walks you through the details, with an early look at our NEW #FidelityFX Brixelizer library! 👇

    schedule.gdconf.com/session/re

  26. 'Regularized Joint Mixture Models', by Konstantinos Perrakis, Thomas Lartigue, Frank Dondelinger, Sach Mukherjee.

    jmlr.org/papers/v24/21-0796.ht

    #sparse #regularized #mixture

  27. A geometrical connection between sparse and low-rank matrices and its application to manifold lea...

    Lawrence K. Saul

    openreview.net/forum?id=p8gncJ

    #sparse #manifold #dimensional

  28. Solving sparse finite element problems on neuromorphic hardware

    nature.com/articles/s42256-025

    Free preprint arxiv.org/abs/2501.10526

    Since you solve partial differential equations without noticing with every move you make, this is a genuinely useful problem to solve #ai

    And because the number of people who actually understand #FEM (finite element methods) is small, this will not get exaggerated the way #llm has.

    Because there are more idiots who can talk than engineers who can calculate