home.social

#optimaltransport — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #optimaltransport, aggregated by home.social.

  1. Following up on this, I also explored a more direct use of #WassersteinDistance in #WGANs: instead of training a discriminator, the generator is optimized by explicitly computing the #OptimalTransport distance between real and generated samples. This turns the loss into the actual metric of interest and removes the adversarial setup, yielding a more direct and stable training signal. And we can generate cool animations, too ^_^

    🌍 fabriziomusacchio.com/blog/202

    #MachineLearning #Wasserstein

  2. 📐📚New study on #WassersteinDistance: Bonet et al. study #geodesic rays in #Wasserstein space and derive conditions for their existence. They show that #Busemann functions can be computed via #OT, with closed-form solutions for 1D and Gaussian cases. This enables efficient sliced distances for labeled datasets, closely matching classical metrics at lower cost and supporting dataset “flows” for #TransferLearning.

    🌍 openreview.net/forum?id=Xpt0HE

    #OptimalTransport #MachineLearning

  3. 📐 New preprint by Gabriel Peyré: The paper introduces a new class of spectral #Wasserstein distances, linking #OptimalTransport with normalized #gradient methods. It shows that spectrally normalized #GradientDescent can be interpreted as a gradient flow in this spectral-W geometry, providing a principled bridge between #optimization dynamics and transport metrics:

    📄 arxiv.org/abs/2604.04891

    #MachineLearning #WassersteinDistance

  4. 📝💤 "Behold, the 'brief' intro to optimal transport where intuition triumphs over 'maths' because who needs rigor? 🙄 It's basically a #YouTube rabbit hole disguised as a blog, because nothing says 'understandable' like suggesting you watch a four-year-old lecture series. 📚📺"
    alexhwilliams.info/itsneuronal #optimaltransport #rabbitHole #blogpost #mathintuition #lectureSeries #HackerNews #ngated

  5. The #Wasserstein distance (#EMD), sliced Wasserstein distance (#SWD), and the #L2norm are common #metrics used to quantify the ‘distance’ between two distributions. This tutorial compares these three metrics and discusses their advantages and disadvantages.

    🌎 fabriziomusacchio.com/blog/202

    #OptimalTransport #MachineLearning
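
    Not the tutorial's own code, but a minimal sketch of what such a comparison might look like in Python, using SciPy's 1-D `wasserstein_distance` and an L2 distance between histogram density estimates (all names and numbers here are illustrative):

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, 5000)  # samples from N(0, 1)
    b = rng.normal(2.0, 1.0, 5000)  # samples from N(2, 1)

    # Wasserstein-1 distance (EMD) between the two empirical distributions
    emd = wasserstein_distance(a, b)

    # L2 distance between histogram density estimates on a shared grid
    bins = np.linspace(-5.0, 7.0, 121)
    pa, _ = np.histogram(a, bins=bins, density=True)
    pb, _ = np.histogram(b, bins=bins, density=True)
    l2 = np.sqrt(np.sum((pa - pb) ** 2) * (bins[1] - bins[0]))

    print(f"EMD = {emd:.3f}, L2 = {l2:.3f}")
    ```

    The EMD tracks how far mass must move (here roughly the shift of 2 between the means), while the L2 norm only sees pointwise density differences, which is one reason the post contrasts the metrics.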

  6. This tutorial takes a different approach to explaining the #Wasserstein distance (#EMD): it approximates the #EMD with cumulative distribution functions (#CDF), providing a more intuitive understanding of the metric.

    🌎 fabriziomusacchio.com/blog/202

    #OptimalTransport
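
    The CDF view has a compact numerical form in 1-D: the EMD equals the area between the two empirical CDFs. A small sketch (not the tutorial's code; the helper name is made up) that checks this against SciPy:

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance

    def emd_1d_cdf(a, b, grid):
        """1-D EMD as the area between empirical CDFs: W1 = integral of |F_a - F_b|."""
        Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
        Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
        # left Riemann sum of |F_a - F_b| over the grid
        return float(np.sum(np.abs(Fa - Fb)[:-1] * np.diff(grid)))

    a = np.array([0.0, 1.0, 2.0])
    b = np.array([1.0, 2.0, 3.0])
    grid = np.linspace(-1.0, 4.0, 5001)
    approx = emd_1d_cdf(a, b, grid)
    exact = wasserstein_distance(a, b)  # 1.0 for this pair
    print(approx, exact)
    ```

    On a fine enough grid the Riemann sum matches the exact value up to the grid spacing.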

  7. Calculating the #Wasserstein distance (#EMD) 📈 can be computationally costly when using #LinearProgramming. The #Sinkhorn algorithm provides a computationally efficient method for approximating the EMD, making it a practical choice for many applications, especially for large datasets 💫. Here is another tutorial showing how to solve the #OptimalTransport problem using the Sinkhorn algorithm in #Python 🐍

    🌎 fabriziomusacchio.com/blog/202
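
    As a rough illustration of the idea (not the tutorial's code), the textbook Sinkhorn iteration alternately rescales the rows and columns of a Gibbs kernel built from the cost matrix; `reg` is the entropic regularization strength, and all names are illustrative:

    ```python
    import numpy as np

    def sinkhorn(a, b, C, reg=0.05, n_iter=500):
        """Entropy-regularized OT via Sinkhorn scaling of the kernel K = exp(-C/reg)."""
        K = np.exp(-C / reg)
        u = np.ones_like(a)
        for _ in range(n_iter):
            v = b / (K.T @ u)  # match column marginals
            u = a / (K @ v)    # match row marginals
        P = u[:, None] * K * v[None, :]  # approximate transport plan
        return float(np.sum(P * C))      # transport cost, near the EMD for small reg

    # two discrete distributions on the line, ground cost |x - y|
    x = np.array([0.0, 1.0]); y = np.array([0.0, 2.0])
    a = np.array([0.5, 0.5]); b = np.array([0.5, 0.5])
    C = np.abs(x[:, None] - y[None, :])
    cost = sinkhorn(a, b, C)
    print(cost)  # close to the exact EMD of 0.5 for this toy problem
    ```

    Smaller `reg` gives a sharper approximation of the unregularized EMD but makes the kernel entries tiny; practical implementations often work in the log domain for stability.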

  8. The #Wasserstein distance 📐, aka Earth Mover’s Distance (#EMD), provides a robust and insightful approach for comparing #ProbabilityDistributions 📊. I’ve composed a #Python tutorial 🐍 that explains the #OptimalTransport problem required to calculate EMD. It also shows how to solve the OT problem and calculate the EMD using the Python Optimal Transport (POT) library. Feel free to use and share it 🤗

    🌎 fabriziomusacchio.com/blog/202
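
    For reference, the exact computation with the POT library mentioned in the post can be sketched as follows (the point clouds and weights are made up for illustration; `ot.dist` builds the pairwise cost matrix and `ot.emd2` returns the optimal transport cost):

    ```python
    # requires: pip install pot
    import numpy as np
    import ot  # Python Optimal Transport (POT)

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, (50, 2))   # source point cloud
    y = rng.normal(3.0, 1.0, (60, 2))   # target point cloud
    a = np.full(50, 1 / 50)             # uniform source weights
    b = np.full(60, 1 / 60)             # uniform target weights

    M = ot.dist(x, y, metric="euclidean")  # ground-cost matrix
    emd = ot.emd2(a, b, M)                 # exact OT cost (EMD for this metric)
    print(emd)
    ```

    With the Euclidean ground cost this is the Wasserstein-1 distance between the two empirical distributions; here it is dominated by the separation between the cluster means.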

  9. Today I attended an excellent seminar by Yunan Yang (ETH Zürich) titled "Optimal transport for learning chaotic dynamics via invariant measures" in the #NumericalAnalysis and #ScientificComputing series in Manchester.

    Many interesting ideas and a lot to unpack, so I can't do it justice, but here is a summary.

    #OptimalTransport #DynamicalSystems #ParameterIdentification #InverseProblems