home.social

#representationlearning – Public Fediverse posts

Live and recent posts from across the Fediverse tagged #representationlearning, aggregated by home.social.

  1. 🧠 New preprint by Fabian A. Mikulasch & @fzenke: Understanding Self-Supervised #Learning via #LatentDistribution Matching proposes a unifying theoretical framework for #SelfSupervisedLearning.

    The paper reframes #SSL as latent distribution matching, connecting contrastive, non-contrastive, predictive, and stop-gradient methods through a common probabilistic principle linking alignment, uniformity, and latent entropy.

    πŸ“ arxiv.org/abs/2605.03517

    #MachineLearning #RepresentationLearning #AI
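
    The "alignment" and "uniformity" quantities named in the post have a widely used concrete form for L2-normalized embeddings (the formulation popularized by Wang & Isola, 2020). The sketch below shows that standard formulation only, as a reference point for the terms; it is not the preprint's latent-distribution-matching objective, and the function names, tensor shapes, and the temperature value are illustrative assumptions.

    ```python
    import torch

    def alignment_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
        """Mean squared distance between embeddings of positive pairs.

        z1, z2: L2-normalized embeddings of two augmented views, shape (N, D).
        Lower values mean matched views map to nearby latents.
        """
        return (z1 - z2).pow(2).sum(dim=1).mean()

    def uniformity_loss(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
        """Log of the mean Gaussian potential over all pairs of embeddings.

        z: L2-normalized embeddings, shape (N, D).
        Lower values mean the latents spread out over the unit hypersphere,
        which corresponds to higher latent entropy.
        """
        sq_dists = torch.pdist(z, p=2).pow(2)  # all pairwise squared distances
        return sq_dists.mul(-t).exp().mean().log()
    ```

    Contrastive objectives can be read as trading off these two terms explicitly; how the non-contrastive, predictive, and stop-gradient families control the entropy side is the question the linked preprint addresses.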
