#representationlearning · Public Fediverse posts
Live and recent posts from across the Fediverse tagged #representationlearning, aggregated by home.social.
-
🧠 New preprint by Fabian A. Mikulasch & @fzenke: Understanding Self-Supervised #Learning via #LatentDistribution Matching proposes a unifying theoretical framework for #SelfSupervisedLearning.
The paper reframes #SSL as latent distribution matching, connecting contrastive, non-contrastive, predictive, and stop-gradient methods through a common probabilistic principle linking alignment, uniformity, and latent entropy.
-
Big congratulations to all authors! 🎉
#ICLR2026 #MachineLearning #AIResearch #RepresentationLearning #InformationRetrieval #DenseRetrieval #SelfSupervisedLearning #LanguageModels #NLP #UKPLab #ICLR