#gradient-descent — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #gradient-descent, aggregated by home.social.
-
📐 New preprint by Gabriel Peyré: The paper introduces a new class of spectral #Wasserstein distances, linking #OptimalTransport with normalized #gradient methods. It shows that spectrally normalized #GradientDescent can be interpreted as a gradient flow in this spectral-W geometry, providing a principled bridge between #optimization dynamics and transport metrics:
-
Differential Logic • 18
If we follow the classical line which singles out linear functions as ideals of simplicity, then we may complete the analytic series of the proposition f = pq in the following way.
The next Venn diagram shows the differential proposition df = d(pq) we get by extracting the linear approximation to the difference map Df at each cell or point of the universe X. What results is the logical analogue of what would ordinarily be called the differential of f, but since the adjective "differential" is being attached to just about everything in sight, the alternative name "tangent map" is commonly used for df whenever it's necessary to single it out.
To be clear about what’s being indicated here, it’s a visual way of summarizing the following data.
To understand the extended interpretations, that is, the conjunctions of basic and differential features which are being indicated here, it may help to note the following equivalences.
Capping the analysis of the proposition f = pq in terms of succeeding orders of linear propositions, the final Venn diagram of the series shows the remainder map rf, which happens to be linear in pairs of variables.
Reading the arrows off the map produces the following data.
In short, rf is a constant field, having the value dp dq at each cell.
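As a quick computational check (a minimal Python sketch, with the maps defined as assumed in this series and all sums taken mod 2):

```python
from itertools import product

def f(p, q):                 # the running example: logical conjunction pq
    return p & q

def Df(p, q, dp, dq):        # difference map: Df = Ef + f (sum mod 2)
    return f(p ^ dp, q ^ dq) ^ f(p, q)

def df(p, q, dp, dq):        # tangent map: the part of Df linear in dp, dq
    # linear coefficients read off at each cell: df = q dp + p dq (mod 2)
    return (q & dp) ^ (p & dq)

def rf(p, q, dp, dq):        # remainder map: rf = Df + df (mod 2)
    return Df(p, q, dp, dq) ^ df(p, q, dp, dq)

# rf should be the constant field dp dq, the same at every cell
for p, q, dp, dq in product((0, 1), repeat=4):
    assert rf(p, q, dp, dq) == (dp & dq)
print("rf = dp dq at every cell of the universe")
```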
Resources
- Logic Syllabus
- Minimal Negation Operator
- Survey of Differential Logic
- Survey of Animated Logical Graphs
cc: Academia.edu • Cybernetics • Laws of Form • Mathstodon (1) (2)
#Amphecks #Animata #BooleanAlgebra #BooleanFunctions #CSPeirce #CactusGraphs #Change #Cybernetics #DifferentialCalculus #DifferentialLogic #DiscreteDynamics #EquationalInference #FunctionalLogic #GradientDescent #GraphTheory #InquiryDrivenSystems #Logic #LogicalGraphs #Mathematics #MinimalNegationOperators #PropositionalCalculus #Time #Visualization
cc: Research Gate • Structural Modeling • Systems Science • Syscoi
-
Differential Logic • 17
Enlargement and Difference Maps
Continuing with the example f = pq, the following Venn diagram shows the enlargement or shift map Ef in the same style of field picture we drew for the tacit extension εf.
A very important conceptual transition has just occurred here, almost tacitly, as it were. Generally speaking, having a set of mathematical objects of compatible types, in this case the two differential fields εf and Ef, both of the type EX → B, is very useful, because it allows us to consider those fields as integral mathematical objects which can be operated on and combined in the ways we usually associate with algebras.
In the present case one notices the tacit extension εf and the enlargement Ef are in a sense dual to each other. The tacit extension εf indicates all the arrows out of the region where f is true and the enlargement Ef indicates all the arrows into the region where f is true. The only arc they have in common is the no-change loop at the cell pq. If we add the two sets of arcs in mod 2 fashion, then the loop of multiplicity 2 zeroes out, leaving the 6 arrows of Df = εf + Ef shown in the following Venn diagram.
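A minimal sketch of that mod 2 cancellation in Python, assuming the series' definitions of the tacit extension and the enlargement:

```python
from itertools import product

def f(p, q):                  # the running example: logical conjunction pq
    return p & q

def tacit(p, q, dp, dq):      # tacit extension: f read as a proposition on EX
    return f(p, q)

def E(p, q, dp, dq):          # enlargement (shift) map Ef
    return f(p ^ dp, q ^ dq)

points = list(product((0, 1), repeat=4))           # cells of EX: (p, q, dp, dq)
common = [c for c in points if tacit(*c) and E(*c)]
D      = [c for c in points if tacit(*c) ^ E(*c)]  # mod 2 sum: the difference map

print("shared arcs:", common)    # only the no-change loop (dp, dq) = (0, 0) at the cell pq
print("arrows of Df:", len(D))   # the doubled loop cancels mod 2, leaving 6 arrows
```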
Resources
- Logic Syllabus
- Minimal Negation Operator
- Survey of Differential Logic
- Survey of Animated Logical Graphs
cc: Academia.edu • Cybernetics • Laws of Form • Mathstodon (1) (2)
#Amphecks #Animata #BooleanAlgebra #BooleanFunctions #CSPeirce #CactusGraphs #Change #Cybernetics #DifferentialCalculus #DifferentialLogic #DiscreteDynamics #EquationalInference #FunctionalLogic #GradientDescent #GraphTheory #InquiryDrivenSystems #Logic #LogicalGraphs #Mathematics #MinimalNegationOperators #PropositionalCalculus #Time #Visualization
cc: Research Gate • Structural Modeling • Systems Science • Syscoi
-
Differential Logic • 15
The structure of a differential field may be described as follows. With each point of X there is associated an object of the following type: a proposition about changes in X, that is, a proposition g : dX → B. In that frame of reference, if X is the universe generated by the set of coordinate propositions {p, q}, then dX is the differential universe generated by the set of differential propositions {dp, dq}. The differential propositions dp and dq may thus be interpreted as indicating "change p" and "change q", respectively.
A differential operator W, of the first order type we are currently considering, takes a proposition f : X → B and gives back a differential proposition Wf : EX → B. In the field view of the scene, we see the proposition f as a scalar field and we see the differential proposition Wf as a vector field, specifically, a field of propositions about contemplated changes in X.
The field of changes produced by E on pq is shown in the following Venn diagram.
The differential field E(pq) specifies the changes which need to be made from each point of X in order to reach one of the models of the proposition pq, that is, in order to satisfy the proposition.
The field of changes produced by D on pq is shown in the following Venn diagram.
The differential field D(pq) specifies the changes which need to be made from each point of X in order to feel a change in the felt value of the field pq.
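A small sketch enumerating both fields for f = pq, under the same assumed definitions:

```python
from itertools import product

def f(p, q):
    return p & q                       # the scalar field pq

for p, q in product((0, 1), repeat=2):
    # changes (dp, dq) that land on a model of pq, i.e. satisfy the proposition
    to_model = [(dp, dq) for dp, dq in product((0, 1), repeat=2)
                if f(p ^ dp, q ^ dq)]
    # changes (dp, dq) that flip the felt value of the field pq
    to_change = [(dp, dq) for dp, dq in product((0, 1), repeat=2)
                 if f(p ^ dp, q ^ dq) != f(p, q)]
    print(f"cell ({p},{q}): E(pq) -> {to_model}   D(pq) -> {to_change}")
```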
Resources
- Logic Syllabus
- Minimal Negation Operator
- Survey of Differential Logic
- Survey of Animated Logical Graphs
cc: Academia.edu • Cybernetics • Laws of Form • Mathstodon (1) (2)
#Amphecks #Animata #BooleanAlgebra #BooleanFunctions #CSPeirce #CactusGraphs #Change #Cybernetics #DifferentialCalculus #DifferentialLogic #DiscreteDynamics #EquationalInference #FunctionalLogic #GradientDescent #GraphTheory #InquiryDrivenSystems #Logic #LogicalGraphs #Mathematics #MinimalNegationOperators #PropositionalCalculus #Time #Visualization
cc: Research Gate • Structural Modeling • Systems Science • Syscoi
-
Differential Logic • 14
Let us summarize the outlook on differential logic we've reached so far. We've been considering a class of operators on universes of discourse, each of which takes us from considering one universe of discourse X to considering a larger universe of discourse EX. An operator W of that general type acts on each proposition f of the source universe to produce a proposition Wf of the target universe.
The operators we’ve examined so far are the enlargement or shift operator and the difference operator The operators and act on propositions in that is, propositions of the form which amount to propositions about the subject matter of and they produce propositions of the form which amount to propositions about specified collections of changes conceivably occurring in
At this point we find ourselves in need of visual representations, suitable arrays of concrete pictures to anchor our more earthy intuitions and help us keep our wits about us as we venture into ever more rarefied airs of abstraction.
One good picture comes to us by way of the field concept. Given a space X, a field of a specified type T over X is formed by associating with each point of X an object of type T. If that sounds like the same thing as a function from X to the space of things of type T — it is nothing but — and yet it does seem helpful to vary the mental images and take advantage of the figures of speech most naturally springing to mind under the emblem of the field idea.
In the field picture a proposition f : X → B becomes a scalar field, that is, a field of values in B.
For example, consider the logical conjunction pq shown in the following Venn diagram.
Each of the operators E and D takes us from considering propositions f : X → B, here viewed as scalar fields over X, to considering the corresponding differential fields over X, analogous to what in real analysis are usually called vector fields over X.
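In code, a scalar field in this sense is nothing more than a map from cells to B; a minimal sketch for the example pq:

```python
from itertools import product

# the proposition pq as a scalar field: each cell of X gets a value in B = {0, 1}
field = {(p, q): p & q for p, q in product((0, 1), repeat=2)}
print(field)   # {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
```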
Resources
- Logic Syllabus
- Minimal Negation Operator
- Survey of Differential Logic
- Survey of Animated Logical Graphs
cc: Academia.edu • Cybernetics • Laws of Form • Mathstodon (1) (2)
#Amphecks #Animata #BooleanAlgebra #BooleanFunctions #CSPeirce #CactusGraphs #Change #Cybernetics #DifferentialCalculus #DifferentialLogic #DiscreteDynamics #EquationalInference #FunctionalLogic #GradientDescent #GraphTheory #InquiryDrivenSystems #Logic #LogicalGraphs #Mathematics #MinimalNegationOperators #PropositionalCalculus #Time #Visualization
cc: Research Gate • Structural Modeling • Systems Science • Syscoi
-
Differential Logic • 13
Transforms Expanded over Ordinary and Differential Variables
Two views of how the difference operator D acts on the set of sixteen functions f : B × B → B are shown below. Table A5 shows the expansion of Df over the set of ordinary variables {p, q} and Table A6 shows the expansion of Df over the set of differential variables {dp, dq}.
Difference Map Expanded over Ordinary Variables
Difference Map Expanded over Differential Variables
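A minimal sketch, under the conventions assumed above (Df(p, q, dp, dq) = f(p + dp, q + dq) + f(p, q), sums mod 2), of the second kind of expansion: fixing the differential variables (dp, dq) reduces Df to an ordinary proposition in (p, q), one row per fixed pair:

```python
from itertools import product

cells = [(0, 0), (0, 1), (1, 0), (1, 1)]

# each of the sixteen propositions f : B x B -> B, given by its truth table
for bits in product((0, 1), repeat=4):
    f = dict(zip(cells, bits))
    print("f =", bits)
    # expand Df over the differential variables: fix (dp, dq),
    # leaving an ordinary proposition in (p, q)
    for dp, dq in cells:
        row = [f[(p ^ dp, q ^ dq)] ^ f[(p, q)] for p, q in cells]
        print(f"  (dp,dq)=({dp},{dq}):", row)
```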
Resources
- Logic Syllabus
- Minimal Negation Operator
- Survey of Differential Logic
- Survey of Animated Logical Graphs
cc: Academia.edu • Cybernetics • Laws of Form • Mathstodon (1) (2)
#Amphecks #Animata #BooleanAlgebra #BooleanFunctions #CSPeirce #CactusGraphs #Change #Cybernetics #DifferentialCalculus #DifferentialLogic #DiscreteDynamics #EquationalInference #FunctionalLogic #GradientDescent #GraphTheory #InquiryDrivenSystems #Logic #LogicalGraphs #Mathematics #MinimalNegationOperators #PropositionalCalculus #Time #Visualization
cc: Research Gate • Structural Modeling • Systems Science • Syscoi
-
Partly some thoughts for the players in our Mothership campaign and partly the tip of the iceberg of my problems with Gradient Descent.
After posting this and talking with a player, a side-adventure with Ooze Nuns might be a good change-of-pace.
#ttrpg #mothership #gradientDescent #oozeNuns
https://blog.psionic-cyclops.org/article/stygian-marches-campaign-report/
-
🧠 New paper by Deistler et al: #JAXLEY: differentiable #simulation for large-scale training of detailed #biophysical #models of #NeuralDynamics.
They present a #differentiable #GPU accelerated #simulator that trains #morphologically detailed biophysical #neuron models with #GradientDescent. JAXLEY fits intracellular #voltage and #calcium data, scales to 1000s of compartments, trains biophys. #RNNs on #WorkingMemory tasks & even solves #MNIST.
-
🎓🤓 Ah yes, another riveting exposé on gradient descent—because clearly, what the world needs is a PhD dissertation masquerading as a web page! Complete with an intricate breakdown of Part I, Part II, and wait for it...Part III. 🚀🔍 Spoiler alert: the internet remains unfazed by this groundbreaking revelation. 💡📉
https://centralflows.github.io/part1/ #gradientdescent #PhDdissertation #webdevelopment #techhumor #machinelearning #HackerNews #ngated
-
@AmenZwa
Sure. Not that #GradientDescent is any panacea anyways. Local optima are infamously its bane -- and there's potential for techniques that look beyond the local differential neighborhood to find more global optima. That's one of the cool things about evolving and cross-breeding algorithms, although that particular approach has a lot of overhead. Cool, though.
-
I say, #Mastodon should adopt this "common decency rule" of conduct:
A poster who has not studied thoroughly the #mathematics that underlies the #GradientDescent optimisation method (of at least one among Cauchy, Hadamard, Curry, Rumelhart, et al.) shall be barred from hyping up all things #DL #AI.
-
📚 New preprint by Vafaii, Galor & Yates: Brain-like variational inference. They derive #SpikingNeuralNetwork dynamics directly from variational free energy minimization via online natural #GradientDescent, yielding the iterative Poisson #VAE (iP-VAE) with strong sparsity, reconstruction & #BiologicalPlausibility.
🌍 https://arxiv.org/abs/2410.19315
🧑💻 https://github.com/hadivafaii/IterativeVAE
-
The method of #GradientDescent has been widely used in #MachineLearning. Most of these ML models are set up in such a way that there is a target to be learned. The difference between the AI's prediction and the target value forms a loss function, which the model gradually minimizes using gradient descent. However, gradient descent can also be used for linear approximation of functions, or even for estimating linear regressions posed as systems of simultaneous equations. #AI #mathematics
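For instance, a minimal sketch of that prediction-vs-target loop on one made-up data point (hypothetical names throughout, squared-error loss):

```python
# fit y = w * x to a single made-up data point by gradient descent
x, y_target = 3.0, 6.0      # toy data; the target weight is 2
w, lr = 0.0, 0.01           # initial guess and learning rate

for step in range(200):
    y_pred = w * x                        # the model's prediction
    grad = 2 * (y_pred - y_target) * x    # d/dw of the loss (y_pred - y_target)**2
    w -= lr * grad                        # move against the gradient

print(w)   # approaches 2.0
```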
-
How linear regression works intuitively and how it leads to gradient descent
https://briefer.cloud/blog/posts/least-squares/
#HackerNews #linearregression #gradientdescent #machinelearning #statistics #dataanalysis
-
Dive into #RMSProp optimization! Discover how this advanced #GradientDescent algorithm adapts learning rates for each parameter. Learn #Python implementation and boost your #MachineLearning skills. Optimize like a pro! #DeepLearning #AI
https://teguhteja.id/rmsprop-mastering-advanced-optimization-in-python/
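Not the linked tutorial's code; just a generic sketch of the RMSProp update on a toy quadratic, showing the per-parameter scaling:

```python
import numpy as np

# RMSProp on a toy quadratic with very different curvatures per parameter
def grad(theta):
    return np.array([0.1 * theta[0], 10.0 * theta[1]])  # gradient of 0.05*x**2 + 5*y**2

theta = np.array([5.0, 5.0])
cache = np.zeros(2)
lr, decay, eps = 0.01, 0.9, 1e-8

for _ in range(500):
    g = grad(theta)
    cache = decay * cache + (1 - decay) * g**2    # running average of squared gradients
    theta -= lr * g / (np.sqrt(cache) + eps)      # per-parameter scaled step

print(theta)   # both coordinates shrink toward 0 at comparable rates
```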
-
Discover how momentum in gradient descent algorithms can revolutionize your machine learning optimization. Learn to implement this powerful technique for faster convergence and improved model training efficiency. #MachineLearning #GradientDescent
https://teguhteja.id/momentum-accelerating-convergence-gradient-descent-algorithms/
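Again not the linked tutorial's code; a generic sketch of the classical momentum update on an ill-conditioned toy quadratic:

```python
import numpy as np

# gradient descent with momentum on an ill-conditioned bowl
def grad(theta):
    return np.array([0.1 * theta[0], 20.0 * theta[1]])  # curvatures 0.1 and 20

theta = np.array([5.0, 5.0])
velocity = np.zeros(2)
lr, beta = 0.05, 0.9

for _ in range(100):
    velocity = beta * velocity + grad(theta)   # accumulate a running direction
    theta -= lr * velocity                     # step along the accumulated velocity

print(theta)   # near (0, 0); plain gradient descent, with its step size capped by
               # the sharp direction, would still be crawling along the flat one
```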
-
Discover how Gradient Descent Optimization enhances linear regression. This guide covers implementation in Python, key concepts, and practical tips. #MachineLearning #DataScience #GradientDescent #LinearRegression #Python
https://teguhteja.id/gradient-descent-optimization-in-linear-regression/
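Not the linked guide's code, but a minimal generic sketch of gradient descent on a least-squares linear regression, with made-up data:

```python
import numpy as np

# made-up data lying roughly on y = 2x + 1
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

w, b, lr = 0.0, 0.0, 0.02
n = len(X)
for _ in range(2000):
    err = (w * X + b) - y
    w -= lr * (2 / n) * np.dot(err, X)   # gradient of mean squared error wrt w
    b -= lr * (2 / n) * err.sum()        # gradient of mean squared error wrt b

print(w, b)   # close to the least-squares fit, roughly 2 and 1
```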
-
Not exactly seeing how this helps #HomeAssistant with their technical debt. But it'll be interesting to see what new features corporate involvement will bring here!
https://techhub.social/@sstranger/112306809438796297 @sstranger
-
I made a short tutorial on understanding #GradientDescent in #MachineLearning, including interactive #Jupyter notebooks to play around with:
🌍 https://www.fabriziomusacchio.com/blog/2023-03-27-gradient_descent/
-
Risky Giant Steps Can Solve #Optimization Problems Faster | #QuantaMagazine
"#Grimmer found that the fastest sequences always had one thing in common: The middle step [in a sequence of repeating #GradientDescent steps] was always a big one. Its size depended on the number of steps in the repeating sequence. For a three-step sequence, the big step had length 4.9. For a 15-step sequence, the algorithm recommended one step of length 29.7. And for a 127-step sequence, the longest one tested, the big central leap was a whopping 370. At first that sounds like an absurdly large number, Grimmer said, but there were enough total steps to make up for that giant leap, so even if you blew past the bottom, you could still make it back quickly."
https://www.quantamagazine.org/risky-giant-steps-can-solve-optimization-problems-faster-20230811/
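A toy sketch of the idea on a convex quadratic: the 4.9 middle step is the one quoted above, while the flanking 1.5 steps are an illustrative guess rather than the paper's certified sequence. The point is only that steps far beyond the classical 2/L stability limit can still converge when embedded in the right repeating pattern:

```python
import numpy as np

# gradient descent on a smooth convex quadratic f(x) = 0.5 * sum(lam * x**2),
# curvatures lam in (0, L] with L = 1, using a repeating 3-step size pattern
lam = np.array([1.0, 0.7, 0.1, 0.01])
x = np.ones(4)
pattern = [1.5, 4.9, 1.5]    # middle step far beyond the classical limit of 2/L

for k in range(300):
    h = pattern[k % 3]
    x = x - h * lam * x      # the 4.9 step blows past the bottom (factor 1 - 4.9*lam)...

print(0.5 * np.sum(lam * x**2))   # ...yet the overall sequence still converges
```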
-
@manisha @Neurograce @neuroscience @cogsci @cogneurophys @PessoaBrain
@cian's comment in thread reminded me:
@kordinglab's recent talks on how #computationalneuroscience #ml methods, while spread across labs & papers, still have to prove themselves on #neurotheory grounds:
Does the #brain do #gradientDescent ?
https://www.youtube.com/watch?v=E5hATeCZQnU
#causality for #neuroscience & beyond
https://www.youtube.com/watch?v=XUD69JshQTk
#philosophyofneuroscience
#systemsneuroscience
#mindbrainps: IMHO #ml yea: best when used for discovery, not causality
-
This is a nice post with a bunch of practical tricks to debug neural networks:
https://nonint.com/2023/07/01/techniques-for-debugging-neural-networks/
-
Someone asked how to solve the Advent of Code puzzle of 2015 day 15 https://adventofcode.com/2015/day/15 so I made this Python Jupyter notebook which shows a possible approach that is not brute force. If they can generalise this to the actual problem with 4 instead of 2 variables then that will be a useful introduction to gradient descent. https://ee1.nl/code/aoc2015-15.html
#AdventOfCode #AoC2015 #LinearAlgebra #Optimization #GradientDescent #Python #Jupyter #programming #puzzle
-
@JacobPhD why do we think of #Evolutionary #Fitness as a #Peak? Maybe because #Fitness was a peak status outside the context of #Evolution? But it's very relevant since in #MachineLearning we consider #Optimisation through #GradientDescent, not ascent.