#wasserstein — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #wasserstein, aggregated by home.social.
-
📐📚 New study on #WassersteinDistance: Bonet et al. examine #geodesic rays in #Wasserstein space and derive conditions for their existence. They show that #Busemann functions can be computed via #OT, with closed-form solutions for 1D and Gaussian cases. This enables efficient sliced distances for labeled datasets, closely matching classical metrics at lower cost and supporting dataset “flows” for #TransferLearning.
-
📐 New preprint by Gabriel Peyré: The paper introduces a new class of spectral #Wasserstein distances, linking #OptimalTransport with normalized #gradient methods. It shows that spectrally normalized #GradientDescent can be interpreted as a gradient flow in this spectral-W geometry, providing a principled bridge between #optimization dynamics and transport metrics.
-
'Wasserstein F-tests for Fréchet regression on Bures-Wasserstein manifolds', by Haoshu Xu, Hongzhe Li.
http://jmlr.org/papers/v26/24-0493.html
#wasserstein #covariates #covariate -
'Entropic Gromov-Wasserstein Distances: Stability and Algorithms', by Gabriel Rioux, Ziv Goldfeld, Kengo Kato.
http://jmlr.org/papers/v25/24-0039.html
#regularization #wasserstein #variational -
The #Wasserstein #metric (#EMD) can be used to train #GenerativeAdversarialNetworks (#GANs) more effectively. This tutorial compares a default GAN with a #WassersteinGAN (#WGAN) trained on the #MNIST dataset.
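As a rough illustration of the difference (a sketch, not code from the tutorial), the two training objectives can be contrasted in a few lines of numpy; the function names and toy scores here are made up for the example:

```python
import numpy as np

def gan_discriminator_loss(d_real, d_fake):
    # Standard GAN: binary cross-entropy on sigmoid outputs in (0, 1).
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def wgan_critic_loss(c_real, c_fake):
    # WGAN: critic scores are unconstrained; the loss estimates the
    # negative Wasserstein-1 distance between real and fake batches.
    return -(np.mean(c_real) - np.mean(c_fake))

# Toy scores: the critic rates real samples higher than fakes.
d_real = np.array([0.9, 0.8])          # sigmoid outputs for the default GAN
d_fake = np.array([0.2, 0.1])
real_scores = np.array([1.2, 0.8, 1.5])  # unbounded critic scores for WGAN
fake_scores = np.array([-0.9, -1.1, -0.4])

print(gan_discriminator_loss(d_real, d_fake))
print(wgan_critic_loss(real_scores, fake_scores))
```

Because the critic loss is a distance estimate rather than a classification loss, it keeps giving useful gradients even when real and fake distributions barely overlap.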
-
Apart from #Wasserstein Distance (#EMD), other #metrics also play an important role in #MachineLearning tasks such as #clustering, #classification, and #InformationRetrieval. In this tutorial, you can find a discussion of five commonly used metrics: EMD, #KullbackLeiblerDivergence (KL Divergence), #JensenShannonDivergence (JS Divergence), #TotalVariationDistance (TV Distance), and #BhattacharyyaDistance.
🌎 https://www.fabriziomusacchio.com/blog/2023-07-28-probability_density_metrics/
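As a quick reference (a sketch, not the tutorial's code), all five metrics reduce to one-liners in numpy for discrete distributions on a shared support; the two example distributions are arbitrary:

```python
import numpy as np

# Two discrete probability distributions over the same support (assumed normalized).
p = np.array([0.1, 0.4, 0.5])
q = np.array([0.3, 0.4, 0.3])

kl = np.sum(p * np.log(p / q))               # KL divergence (asymmetric)
m = 0.5 * (p + q)
js = 0.5 * np.sum(p * np.log(p / m)) \
   + 0.5 * np.sum(q * np.log(q / m))         # JS divergence (symmetrized KL)
tv = 0.5 * np.sum(np.abs(p - q))             # total variation distance
bc = np.sum(np.sqrt(p * q))                  # Bhattacharyya coefficient
bhatt = -np.log(bc)                          # Bhattacharyya distance
emd = np.sum(np.abs(np.cumsum(p - q)))       # 1D EMD via CDF difference (unit spacing)

print(kl, js, tv, bhatt, emd)
```

Note the KL divergence assumes `q > 0` wherever `p > 0`; the other four are defined without that restriction.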
-
The #Wasserstein distance (#EMD), sliced Wasserstein distance (#SWD), and the #L2norm are common #metrics used to quantify the ‘distance’ between two distributions. This tutorial compares these three metrics and discusses their advantages and disadvantages.
🌎 https://www.fabriziomusacchio.com/blog/2023-07-26-wasserstein_vs_l2_norm/
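For intuition (a minimal sketch, not the tutorial's implementation), the sliced Wasserstein distance between two point clouds can be approximated by averaging 1D Wasserstein distances over random projections; the helper name and toy data are assumptions for the example:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(X, Y, n_proj=100, seed=0):
    # Approximate SWD: project both point clouds onto random unit directions
    # and average the resulting 1D Wasserstein distances.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)       # random unit direction
        total += wasserstein_distance(X @ theta, Y @ theta)
    return total / n_proj

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(200, 2))      # standard Gaussian cloud
Y = rng.normal(2.0, 1.0, size=(200, 2))      # same cloud, shifted
print(sliced_wasserstein(X, Y))
```

Each 1D distance is cheap (a sort), which is why SWD scales much better than the full multi-dimensional EMD.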
-
This tutorial takes a different approach to explain the #Wasserstein distance (#EMD) by approximating the #EMD with cumulative distribution functions (#CDF), providing a more intuitive understanding of the metric.
🌎 https://www.fabriziomusacchio.com/blog/2023-07-24-wasserstein_distance_cdf_approximation/
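The CDF view is easy to sketch: in 1D, the Wasserstein-1 distance is the area between the two empirical CDFs. A minimal numpy version (an illustration under that assumption, not the tutorial's code), checked against scipy:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def emd_cdf(u, v):
    # 1D Wasserstein-1 distance as the area between the empirical CDFs of u and v.
    all_x = np.sort(np.concatenate([u, v]))
    cdf_u = np.searchsorted(np.sort(u), all_x, side="right") / len(u)
    cdf_v = np.searchsorted(np.sort(v), all_x, side="right") / len(v)
    dx = np.diff(all_x)                       # widths between consecutive support points
    return np.sum(np.abs(cdf_u[:-1] - cdf_v[:-1]) * dx)

u = np.array([0.0, 1.0, 3.0])
v = np.array([5.0, 6.0, 8.0])                 # u shifted by 5
print(emd_cdf(u, v))                          # → 5.0, matching scipy
```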
-
Calculating the #Wasserstein distance (#EMD) 📈 can be computationally costly when using #LinearProgramming. The #Sinkhorn algorithm provides an efficient method for approximating the EMD, making it a practical choice for many applications, especially for large datasets 💫. Here is another tutorial showing how to solve the #OptimalTransport problem using the Sinkhorn algorithm in #Python 🐍
🌎 https://www.fabriziomusacchio.com/blog/2023-07-23-wasserstein_distance_sinkhorn/
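The core iteration is only a few lines: Sinkhorn alternately rescales the rows and columns of the Gibbs kernel until the transport plan's marginals match. A bare-bones numpy sketch (not the tutorial's implementation; the toy histograms are made up):

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=500):
    # Entropy-regularized OT: alternately rescale rows and columns of
    # K = exp(-C / reg) until the plan's marginals match a and b.
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # match column marginals
        u = a / (K @ v)                  # match row marginals
    P = u[:, None] * K * v[None, :]      # approximate optimal transport plan
    return np.sum(P * C)                 # regularized transport cost ≈ EMD

# Toy problem: shift mass one step to the right along a 1D grid.
x = np.array([0.0, 1.0, 2.0])
C = np.abs(x[:, None] - x[None, :])      # ground cost |x_i - x_j|
a = np.array([0.5, 0.5, 0.0])
b = np.array([0.0, 0.5, 0.5])
print(sinkhorn(a, b, C))                 # close to the exact EMD of 1.0
```

In practice the iterations are done in log-space for small `reg` to avoid over/underflow; this plain version is only safe for moderate regularization.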
-
The #Wasserstein distance 📐, aka Earth Mover’s Distance (#EMD), provides a robust and insightful approach for comparing #ProbabilityDistributions 📊. I’ve composed a #Python tutorial 🐍 that explains the #OptimalTransport problem required to calculate EMD. It also shows how to solve the OT problem and calculate the EMD using the Python Optimal Transport (POT) library. Feel free to use and share it 🤗
🌎 https://www.fabriziomusacchio.com/blog/2023-07-23-wasserstein_distance/
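The OT problem behind the EMD is a linear program: minimize the transport cost over all plans with the prescribed marginals. The tutorial uses the POT library for this; as a self-contained sketch (an illustration, not the tutorial's code), the same LP can be solved with scipy:

```python
import numpy as np
from scipy.optimize import linprog

def emd_lp(a, b, C):
    # Solve the OT linear program: min <P, C> over P >= 0 whose
    # row sums equal a and column sums equal b.
    n, m = C.shape
    A_eq = []
    for i in range(n):                   # row-marginal constraints
        row = np.zeros((n, m)); row[i, :] = 1.0
        A_eq.append(row.ravel())
    for j in range(m):                   # column-marginal constraints
        col = np.zeros((n, m)); col[:, j] = 1.0
        A_eq.append(col.ravel())
    res = linprog(C.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([a, b]), bounds=(0, None))
    return res.fun                       # optimal transport cost (the EMD)

x = np.array([0.0, 1.0, 2.0])
C = np.abs(x[:, None] - x[None, :])      # ground cost on a 1D grid
a = np.array([1.0, 0.0, 0.0])            # all mass at x = 0
b = np.array([0.0, 0.0, 1.0])            # all mass at x = 2
print(emd_lp(a, b, C))                   # → 2.0: move one unit of mass a distance of 2
```

The LP has n·m variables, which is exactly why dedicated solvers (or Sinkhorn approximations) are preferred for large histograms.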
-
An Explicit Expansion of the Kullback-Leibler Divergence along its Fisher-Rao Gradient Flow
Carles Domingo-Enrich, Aram-Alexandre Pooladian
Action editor: Murat Erdogdu.
-