home.social

#instrumentalism — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #instrumentalism, aggregated by home.social.

  1. Scientific breakthroughs often begin with someone saying, “Don’t panic. This crazy sounding assumption is just to make the math work.”

    Nicolaus Copernicus, when he developed his theory of heliocentrism (the earth orbits the sun), was operating from a scientific realist view. In other words, he thought his system reflected actual reality, or at least reflected it better than Ptolemy’s geocentric system (everything orbits the earth), which had been the accepted model of the universe since ancient times.

    However, the new reality he presented was controversial, particularly in Protestant circles at the time. That led Andreas Osiander, a Lutheran theologian involved in printing his book, to add an unauthorized and unsigned preface. Osiander argued that Copernicus’ framework shouldn’t be evaluated on whether it was literally true, but as a useful mathematical framework that made predicting astronomical phenomena easier. In other words: don’t worry, it’s just convenient math.

    For decades many astronomers followed Osiander’s advice, accepting just Copernicus’ mathematics. The number who actually accepted heliocentric realism was vanishingly small. One astronomer, Tycho Brahe, advocated a compromise cosmology with most planets orbiting the sun, but the sun still orbiting the earth. Straight Copernicans like Johannes Kepler and Galileo Galilei were very rare. It wouldn’t be until the early 1600s, with Galileo’s telescopic observations, that heliocentrism started to be taken seriously (and resisted).

    Moving forward to 1900, Max Planck was trying to mathematically model black body radiation. But he couldn’t make it work. In desperation, he made a change he was loath to make, one that would make his math compatible with Ludwig Boltzmann’s statistical interpretation of entropy, a view he opposed. He added discrete quantities into the equations, essentially doing the math as if there were a minimum unit of radiation. The change worked.
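    Planck’s move can be sketched numerically. Below is a rough illustration (not his actual derivation): the classical Rayleigh–Jeans formula and Planck’s quantized law agree at long wavelengths, but only the quantized version avoids blowing up at short ones.

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def rayleigh_jeans(wavelength, T):
    """Classical spectral radiance: diverges as wavelength -> 0 (the 'ultraviolet catastrophe')."""
    return 2 * c * k * T / wavelength**4

def planck(wavelength, T):
    """Planck's law: assuming a minimum unit of radiation E = h*c/wavelength keeps the result finite."""
    x = h * c / (wavelength * k * T)
    return (2 * h * c**2 / wavelength**5) / math.expm1(x)

T = 5000.0  # Kelvin
for wl in (10e-6, 1e-6, 0.1e-6):  # 10 um, 1 um, 0.1 um
    print(f"{wl:.1e} m: classical {rayleigh_jeans(wl, T):.3e}, Planck {planck(wl, T):.3e}")
```

    The two formulas converge in the long-wavelength limit, which is why the classical result looked fine until the short-wavelength behavior was examined.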

    Planck was beginning the science of quantum physics, but he didn’t see it at the time. He saw the quantization as purely a pragmatic move, a mathematical contrivance, and was skeptical of any deeper philosophical implications. However, a few years later, Albert Einstein used quanta to explain the photoelectric effect, essentially reifying the quanta into what we now know as photons.

    That same year, Einstein introduced his theory of special relativity. He was a realist about the theory from the beginning. However, his equations had implications for spacetime that he was initially skeptical of. We call it “Minkowski spacetime” today because his old math teacher, Hermann Minkowski, recognized the implications. Einstein eventually came around.

    But after working out general relativity, Einstein was again resistant to some of the implications of his math. General relativity predicted that the universe either had to be contracting or expanding. To save appearances, he introduced a fudge factor called the cosmological constant, a move he later regretted after observations showed that the universe was indeed expanding. (Although the cosmological constant later found new life with the discovery of dark energy.)

    Einstein was also resistant to certain solutions to his equations, solutions which seemed to indicate there could be regions of spacetime which were so curved that nothing could escape. In the early 1900s, these seemed like perverse entities that couldn’t be physical. Of course, today we know black holes exist and play a pivotal role in the universe. We’re able to detect and image them.

    In 1935, Einstein, together with Boris Podolsky and Nathan Rosen, published the famous “EPR paradox” paper, pointing out issues in the mathematics of quantum theory that violated locality, at least under conventional interpretations of quantum mechanics. Erwin Schrödinger followed up with additional papers naming the phenomenon “entanglement”, as well as coming up with the famous “Schrödinger’s cat” thought experiment, which questioned the implications of the mathematical framework he himself had been instrumental in developing.

    The thrust of their argument at the time was that these mathematical implications couldn’t be reality. However, twenty years later, John Stewart Bell came up with a way to test those implications. Alain Aspect, John F. Clauser, and Anton Zeilinger won the 2022 Nobel Prize for their experiments carrying out those tests, progressively closing the loopholes to the point where denying the predictions would be more absurd than accepting them.
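    For the curious, the gap Bell identified can be computed directly. This sketch evaluates the CHSH quantity using the quantum prediction for singlet-state correlations, E(a, b) = −cos(a − b), at the standard optimal angles: quantum mechanics gives 2√2 ≈ 2.83, while any local hidden-variable theory is capped at 2.

```python
import math

def E(a, b):
    """Quantum correlation for a spin singlet measured at angles a and b."""
    return -math.cos(a - b)

# Standard angle choices that maximize the CHSH violation (radians)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; |S| <= 2 for any local hidden-variable theory
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the local-realist bound of 2
```

    The experiments honored in 2022 measured exactly this kind of quantity and found the quantum value, not the classical bound.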

    (Einstein is picked on a lot in this post, but it’s worth noting that these are cases of him blanching at the implications of his own brilliant theories, or theories he helped develop. The fact is many famous scientists struggled with the full implications of their discoveries.)

    Of course, the mathematics aren’t always right. Newton’s laws of gravity were used to predict the existence of Neptune based on anomalies in Uranus’ orbit. However, those same laws were also used to predict the existence of the planet Vulcan, supposedly closer to the sun than Mercury. But Mercury’s orbital anomalies turned out to be stranger, heralding the limitations of Newtonian theory, limitations which would require Einstein’s general relativity to resolve.
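    The Mercury anomaly is small but easy to compute from the standard general-relativistic perihelion-advance formula, Δφ = 6πGM/(a(1−e²)c²) per orbit. This sketch recovers the famous ~43 arcseconds per century that Newtonian gravity left unexplained (the orbital values below are approximate published figures).

```python
import math

GM_sun = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
c = 2.99792458e8           # speed of light, m/s

# Mercury's orbit (approximate)
a = 5.7909e10              # semi-major axis, m
e = 0.2056                 # eccentricity
period_days = 87.969

# Extra perihelion advance per orbit predicted by general relativity (radians)
dphi = 6 * math.pi * GM_sun / (a * (1 - e**2) * c**2)

orbits_per_century = 36525 / period_days
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec_per_century:.1f} arcsec/century")  # ~43.0
```

    No hidden planet needed: the anomaly falls out of the new theory with no extra assumptions.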

    And the Large Hadron Collider hasn’t been kind to many speculative theories and their mathematics. So just because someone can manipulate equations doesn’t mean the result reflects reality.

    On the other hand, when the mathematics of a heavily tested theory, with no further assumptions, make predictions that can’t currently be tested, history seems to suggest taking them seriously. And mathematical convenience often heralds new realities. Even when the limits of a theory are reached, the new explanation typically ends up being far stranger than the initial prediction.

    Granted, it’s always possible to ignore the implied ontology by going instrumentalist. I do think it’s important to be able to put on the instrumentalist hat from time to time. It helps to sidestep ontological biases. Planck did it to make his breakthrough, as did Werner Heisenberg when he was working out the initial mathematical framework for quantum mechanics. But these were theorists using instrumentalism to make progress in spite of the strangeness.

    Other times making progress seems to mean finding ways to reconcile theories, to find where they converge, an inherently realist approach. Einstein reportedly worked out special relativity by reconciling classical electromagnetism and Newtonian motion, and then general relativity by reconciling special relativity and Newtonian gravity. And most of us got interested in science and philosophy to get closer to truth, not to collect disconnected prediction instruments.

    This is why my own preferred outlook these days is structural realism, a sort of minimal realism that accepts the mathematical structures described by well-tested theories as real, but remains agnostic on any underlying ontology. However, even structural realism means accepting strange implications.

    Which is why many people reach for instrumentalism, although few are able to stick with it consistently. And selectively adopting it to dismiss predictions we don’t like seems firmly in the tradition of Osiander.

    Unless of course I’m missing something.

    Featured image source

    https://selfawarepatterns.com/2023/12/30/is-it-just-the-math/

    #instrumentalism #Philosophy #PhilosophyOfScience #Physics #Science #scientificRealism #structuralRealism

  2. Scientific breakthroughs often begin with someone saying, “Don’t panic. This crazy sounding assumption is just to make the math work.”

    Nicholaus Copernicus, when he developed his theory of heliocentrism (the earth orbits the sun), was operating from a scientific realist view. In other words, he thought his system reflected actual reality, or at least reflected it better than Ptolemy’s geocentric system (everything orbits the earth), which had been the accepted model of the universe since ancient times.

    However, the new reality he presented was controversial, particularly in protestant circles at the time. Which led Andreas Osiander, a Lutheran theologian involved in printing his book, to add an unauthorized and unsigned preface. Osiander argued that Copernicus’ framework shouldn’t be evaluated on whether it’s literally true, but as a useful mathematical framework to make predicting astronomical phenomena easier. In other words, don’t worry; it’s just convenient math.

    For decades many astronomers followed Osiander’s advice, accepting just Copernicus’ mathematics. The number who actually accepted heliocentric realism was vanishingly small. One astronomer, Tycho Brahe, advocated for a compromise cosmology with most planets orbiting the sun, but the sun still orbiting the earth. Straight Copernicans like Johannes Kepler and Galileo Galilei were very rare. It wouldn’t be until the early 1600s and Galileo’s telescopic observations, that heliocentrism started to be taken seriously (and resisted).

    Moving forward to 1900, Max Planck was trying to mathematically model black body radiation. But he couldn’t make it work. In desperation, he made a change he was loathe to do, one that would make his math compatible with Ludwig Boltzmann’s statistical interpretation of entropy, a view he opposed. He added discrete quantities into the equations, essentially doing the math as if there was a minimum unit of radiation. The change worked.

    Planck was beginning the science of quantum physics, but he didn’t see it at the time. He saw the quantization as purely a pragmatic move, a mathematical contrivance, and was skeptical of any deeper philosophical implications. However, a few years later, Albert Einstein used quanta to explain the photoelectric effect, essentially reifying the quanta into what we now know as photons.

    That same year, Einstein introduced his theory of special relativity. He was a realist about the theory from the beginning. However, his equations had implications for spacetime that he was initially skeptical of. We call it “Minkowski spacetime” today because his old math teacher, Hermann Minkowski, recognized the implications. Einstein eventually came around.

    But after working out general relativity, Einstein was again resistant to some of the implications of his math. General relativity predicted that the universe either had to be contracting or expanding. To save appearances, he introduced a fudge factor called the cosmological constant, a move he later regretted after observations showed that the universe was indeed expanding. (Although the cosmological constant later found new life with the discovery of dark energy.)

    Einstein was also resistant to certain solutions to his equations, solutions which seemed to indicate there could be regions of spacetime which were so curved that nothing could escape. In the early 1900s, these seemed like perverse entities that couldn’t be physical. Of course, today we know black holes exist and play a pivotal role in the universe. We’re able to detect and image them.

    In 1935, Einstein, together with Boris Podolsky and Nathan Rosen, published the famous “EPR paradox” paper, pointing out issues in the mathematics of quantum theory that violated locality, at least under conventional interpretations of quantum mechanics. Erwin Schrödinger followed up with additional papers naming the phenomenon “entanglement”, as well as coming up with the famous “Schrödinger’s cat” thought experiment, which questioned the implications of the mathematical framework he himself had been instrumental in developing.

    The thrust of their argument at the time was that these mathematical implications couldn’t be reality. However, twenty years later, John Stuart Bell came up with a way to test those implications. Alain Aspect, John F. Clauser, and Aton Zeilinger won the 2022 Nobel Prize for their experiments carrying out out those tests, progressively closing the loopholes to such an extent that it would be at least as absurd for the predictions to be wrong as right.

    (Einstein is picked on a lot on this post, but it’s worth noting that these are cases of him blanching at the implications of his own brilliant theories, or theories he helped develop. The fact is many famous scientists struggled with the full implications of their discoveries.)

    Of course, the mathematics aren’t always right. Newton’s laws of gravity were used to predict the existence of Neptune based on anomalies in Uranus’ orbit. However, those same laws were also used to predict the existence of the planet Vulcan, supposedly closer to the sun than Mercury. But Mercury’s orbital anomalies turned out to be stranger, heralding the limitations of Newtonian theory, limitations which would require Einstein’s general relativity to resolve.

    And the Large Hadron Collider hasn’t been kind to many speculative theories and their mathematics. So just because someone can manipulate equations, doesn’t mean it reflects reality.

    On the other hand, when the mathematics of a heavily tested theory, with no further assumptions, make predictions that can’t currently be tested, history seems to suggest taking them seriously. And mathematical convenience often heralds new realities. Even when the limits of a theory are reached, the new explanation typically ends up being far stranger than the initial prediction.

    Granted, it’s always possible to ignore the implied ontology by going instrumentalist. I do think it’s important to be able to put on the instrumentalist hat from time to time. It helps to sidestep ontological biases. Planck did it to make his breakthrough, as did Werner Heisenberg when he was working out the initial mathematical framework for quantum mechanics. But these were theorists using instrumentalism to make progress in spite of the strangeness.

    Other times making progress seems to mean finding ways to reconcile theories, to find where they converge, an inherently realist approach.  Einstein reportedly worked out special relativity from reconciling classical electromagnetism and Newtonian motion, and then general relativity from reconciling special relativity and Newtonian gravity. And most of us got interested in science and philosophy to get closer to truth, not to unrelated prediction instruments.

    This is why my own preferred outlook these days is structural realism, a sort of minimal realism that accepts the mathematical structures described by well tested theories as real, but remains agnostic on any underlying ontology. However even structural realism means accepting strange implications. 

    Which is why many people reach for instrumentalism. Although few are able to stick with it consistently. And selectively adopting it to dismiss predictions we don’t like seems firmly in the tradition of Osiander.

    Unless of course I’m missing something.

    Featured image source

    https://selfawarepatterns.com/2023/12/30/is-it-just-the-math/

    #instrumentalism #Philosophy #PhilosophyOfScience #Physics #Science #scientificRealism #structuralRealism

  3. Scientific breakthroughs often begin with someone saying, “Don’t panic. This crazy sounding assumption is just to make the math work.”

    Nicholaus Copernicus, when he developed his theory of heliocentrism (the earth orbits the sun), was operating from a scientific realist view. In other words, he thought his system reflected actual reality, or at least reflected it better than Ptolemy’s geocentric system (everything orbits the earth), which had been the accepted model of the universe since ancient times.

    However, the new reality he presented was controversial, particularly in protestant circles at the time. Which led Andreas Osiander, a Lutheran theologian involved in printing his book, to add an unauthorized and unsigned preface. Osiander argued that Copernicus’ framework shouldn’t be evaluated on whether it’s literally true, but as a useful mathematical framework to make predicting astronomical phenomena easier. In other words, don’t worry; it’s just convenient math.

    For decades many astronomers followed Osiander’s advice, accepting just Copernicus’ mathematics. The number who actually accepted heliocentric realism was vanishingly small. One astronomer, Tycho Brahe, advocated for a compromise cosmology with most planets orbiting the sun, but the sun still orbiting the earth. Straight Copernicans like Johannes Kepler and Galileo Galilei were very rare. It wouldn’t be until the early 1600s and Galileo’s telescopic observations, that heliocentrism started to be taken seriously (and resisted).

    Moving forward to 1900, Max Planck was trying to mathematically model black body radiation. But he couldn’t make it work. In desperation, he made a change he was loathe to do, one that would make his math compatible with Ludwig Boltzmann’s statistical interpretation of entropy, a view he opposed. He added discrete quantities into the equations, essentially doing the math as if there was a minimum unit of radiation. The change worked.

    Planck was beginning the science of quantum physics, but he didn’t see it at the time. He saw the quantization as purely a pragmatic move, a mathematical contrivance, and was skeptical of any deeper philosophical implications. However, a few years later, Albert Einstein used quanta to explain the photoelectric effect, essentially reifying the quanta into what we now know as photons.

    That same year, Einstein introduced his theory of special relativity. He was a realist about the theory from the beginning. However, his equations had implications for spacetime that he was initially skeptical of. We call it “Minkowski spacetime” today because his old math teacher, Hermann Minkowski, recognized the implications. Einstein eventually came around.

    But after working out general relativity, Einstein was again resistant to some of the implications of his math. General relativity predicted that the universe either had to be contracting or expanding. To save appearances, he introduced a fudge factor called the cosmological constant, a move he later regretted after observations showed that the universe was indeed expanding. (Although the cosmological constant later found new life with the discovery of dark energy.)

    Einstein was also resistant to certain solutions to his equations, solutions which seemed to indicate there could be regions of spacetime which were so curved that nothing could escape. In the early 1900s, these seemed like perverse entities that couldn’t be physical. Of course, today we know black holes exist and play a pivotal role in the universe. We’re able to detect and image them.

    In 1935, Einstein, together with Boris Podolsky and Nathan Rosen, published the famous “EPR paradox” paper, pointing out issues in the mathematics of quantum theory that violated locality, at least under conventional interpretations of quantum mechanics. Erwin Schrödinger followed up with additional papers naming the phenomenon “entanglement”, as well as coming up with the famous “Schrödinger’s cat” thought experiment, which questioned the implications of the mathematical framework he himself had been instrumental in developing.

    The thrust of their argument at the time was that these mathematical implications couldn’t be reality. However, twenty years later, John Stuart Bell came up with a way to test those implications. Alain Aspect, John F. Clauser, and Aton Zeilinger won the 2022 Nobel Prize for their experiments carrying out out those tests, progressively closing the loopholes to such an extent that it would be at least as absurd for the predictions to be wrong as right.

    (Einstein is picked on a lot on this post, but it’s worth noting that these are cases of him blanching at the implications of his own brilliant theories, or theories he helped develop. The fact is many famous scientists struggled with the full implications of their discoveries.)

    Of course, the mathematics aren’t always right. Newton’s laws of gravity were used to predict the existence of Neptune based on anomalies in Uranus’ orbit. However, those same laws were also used to predict the existence of the planet Vulcan, supposedly closer to the sun than Mercury. But Mercury’s orbital anomalies turned out to be stranger, heralding the limitations of Newtonian theory, limitations which would require Einstein’s general relativity to resolve.

    And the Large Hadron Collider hasn’t been kind to many speculative theories and their mathematics. So just because someone can manipulate equations, doesn’t mean it reflects reality.

    On the other hand, when the mathematics of a heavily tested theory, with no further assumptions, make predictions that can’t currently be tested, history seems to suggest taking them seriously. And mathematical convenience often heralds new realities. Even when the limits of a theory are reached, the new explanation typically ends up being far stranger than the initial prediction.

    Granted, it’s always possible to ignore the implied ontology by going instrumentalist. I do think it’s important to be able to put on the instrumentalist hat from time to time. It helps to sidestep ontological biases. Planck did it to make his breakthrough, as did Werner Heisenberg when he was working out the initial mathematical framework for quantum mechanics. But these were theorists using instrumentalism to make progress in spite of the strangeness.

    Other times making progress seems to mean finding ways to reconcile theories, to find where they converge, an inherently realist approach.  Einstein reportedly worked out special relativity from reconciling classical electromagnetism and Newtonian motion, and then general relativity from reconciling special relativity and Newtonian gravity. And most of us got interested in science and philosophy to get closer to truth, not to unrelated prediction instruments.

    This is why my own preferred outlook these days is structural realism, a sort of minimal realism that accepts the mathematical structures described by well tested theories as real, but remains agnostic on any underlying ontology. However even structural realism means accepting strange implications. 

    Which is why many people reach for instrumentalism. Although few are able to stick with it consistently. And selectively adopting it to dismiss predictions we don’t like seems firmly in the tradition of Osiander.

    Unless of course I’m missing something.

    Featured image source

    https://selfawarepatterns.com/2023/12/30/is-it-just-the-math/

    #instrumentalism #Philosophy #PhilosophyOfScience #Physics #Science #scientificRealism #structuralRealism

  4. Scientific breakthroughs often begin with someone saying, “Don’t panic. This crazy sounding assumption is just to make the math work.”

    Nicholaus Copernicus, when he developed his theory of heliocentrism (the earth orbits the sun), was operating from a scientific realist view. In other words, he thought his system reflected actual reality, or at least reflected it better than Ptolemy’s geocentric system (everything orbits the earth), which had been the accepted model of the universe since ancient times.

    However, the new reality he presented was controversial, particularly in protestant circles at the time. Which led Andreas Osiander, a Lutheran theologian involved in printing his book, to add an unauthorized and unsigned preface. Osiander argued that Copernicus’ framework shouldn’t be evaluated on whether it’s literally true, but as a useful mathematical framework to make predicting astronomical phenomena easier. In other words, don’t worry; it’s just convenient math.

    For decades many astronomers followed Osiander’s advice, accepting just Copernicus’ mathematics. The number who actually accepted heliocentric realism was vanishingly small. One astronomer, Tycho Brahe, advocated for a compromise cosmology with most planets orbiting the sun, but the sun still orbiting the earth. Straight Copernicans like Johannes Kepler and Galileo Galilei were very rare. It wouldn’t be until the early 1600s and Galileo’s telescopic observations, that heliocentrism started to be taken seriously (and resisted).

    Moving forward to 1900, Max Planck was trying to mathematically model black body radiation. But he couldn’t make it work. In desperation, he made a change he was loathe to do, one that would make his math compatible with Ludwig Boltzmann’s statistical interpretation of entropy, a view he opposed. He added discrete quantities into the equations, essentially doing the math as if there was a minimum unit of radiation. The change worked.

    Planck was beginning the science of quantum physics, but he didn’t see it at the time. He saw the quantization as purely a pragmatic move, a mathematical contrivance, and was skeptical of any deeper philosophical implications. However, a few years later, Albert Einstein used quanta to explain the photoelectric effect, essentially reifying the quanta into what we now know as photons.

    That same year, Einstein introduced his theory of special relativity. He was a realist about the theory from the beginning. However, his equations had implications for spacetime that he was initially skeptical of. We call it “Minkowski spacetime” today because his old math teacher, Hermann Minkowski, recognized the implications. Einstein eventually came around.

    But after working out general relativity, Einstein was again resistant to some of the implications of his math. General relativity predicted that the universe either had to be contracting or expanding. To save appearances, he introduced a fudge factor called the cosmological constant, a move he later regretted after observations showed that the universe was indeed expanding. (Although the cosmological constant later found new life with the discovery of dark energy.)

    Einstein was also resistant to certain solutions to his equations, solutions which seemed to indicate there could be regions of spacetime which were so curved that nothing could escape. In the early 1900s, these seemed like perverse entities that couldn’t be physical. Of course, today we know black holes exist and play a pivotal role in the universe. We’re able to detect and image them.

    In 1935, Einstein, together with Boris Podolsky and Nathan Rosen, published the famous “EPR paradox” paper, pointing out issues in the mathematics of quantum theory that violated locality, at least under conventional interpretations of quantum mechanics. Erwin Schrödinger followed up with additional papers naming the phenomenon “entanglement”, as well as coming up with the famous “Schrödinger’s cat” thought experiment, which questioned the implications of the mathematical framework he himself had been instrumental in developing.

    The thrust of their argument at the time was that these mathematical implications couldn’t be reality. However, twenty years later, John Stuart Bell came up with a way to test those implications. Alain Aspect, John F. Clauser, and Aton Zeilinger won the 2022 Nobel Prize for their experiments carrying out out those tests, progressively closing the loopholes to such an extent that it would be at least as absurd for the predictions to be wrong as right.

    (Einstein is picked on a lot on this post, but it’s worth noting that these are cases of him blanching at the implications of his own brilliant theories, or theories he helped develop. The fact is many famous scientists struggled with the full implications of their discoveries.)

    Of course, the mathematics aren’t always right. Newton’s laws of gravity were used to predict the existence of Neptune based on anomalies in Uranus’ orbit. However, those same laws were also used to predict the existence of the planet Vulcan, supposedly closer to the sun than Mercury. But Mercury’s orbital anomalies turned out to be stranger, heralding the limitations of Newtonian theory, limitations which would require Einstein’s general relativity to resolve.

    And the Large Hadron Collider hasn’t been kind to many speculative theories and their mathematics. So just because someone can manipulate equations, doesn’t mean it reflects reality.

    On the other hand, when the mathematics of a heavily tested theory, with no further assumptions, make predictions that can’t currently be tested, history seems to suggest taking them seriously. And mathematical convenience often heralds new realities. Even when the limits of a theory are reached, the new explanation typically ends up being far stranger than the initial prediction.

    Granted, it’s always possible to ignore the implied ontology by going instrumentalist. I do think it’s important to be able to put on the instrumentalist hat from time to time. It helps to sidestep ontological biases. Planck did it to make his breakthrough, as did Werner Heisenberg when he was working out the initial mathematical framework for quantum mechanics. But these were theorists using instrumentalism to make progress in spite of the strangeness.

    Other times making progress seems to mean finding ways to reconcile theories, to find where they converge, an inherently realist approach.  Einstein reportedly worked out special relativity from reconciling classical electromagnetism and Newtonian motion, and then general relativity from reconciling special relativity and Newtonian gravity. And most of us got interested in science and philosophy to get closer to truth, not to unrelated prediction instruments.

    This is why my own preferred outlook these days is structural realism, a sort of minimal realism that accepts the mathematical structures described by well tested theories as real, but remains agnostic on any underlying ontology. However even structural realism means accepting strange implications. 

    Which is why many people reach for instrumentalism. Although few are able to stick with it consistently. And selectively adopting it to dismiss predictions we don’t like seems firmly in the tradition of Osiander.

    Unless of course I’m missing something.

    Featured image source

    https://selfawarepatterns.com/2023/12/30/is-it-just-the-math/

    #instrumentalism #Philosophy #PhilosophyOfScience #Physics #Science #scientificRealism #structuralRealism

  5. But in the proper interpretation of "pragmatic," namely the function of consequences as necessary tests of the validity of propositions, provided these consequences are operationally instituted and are such as to resolve the specific problem evoking the operations, the text that follows is thoroughly pragmatic.[49]
    #pragmatism #dewey #instrumentalism #philosophy #consequentialism
    en.wikipedia.org/wiki/John_Dew