home.social

#structuralrealism — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #structuralrealism, aggregated by home.social.

  1. If usefulness isn’t a guide to what’s real, what is?

    Seems like I’ve been writing a lot about quantum mechanics lately. Apparently so have a lot of other people. One thing that keeps coming up is the reality or non-reality of the quantum wave function. Raoni Arroyo and Jonas R. Becker Arenhart argue for non-reality in “Quantum mechanics works, but it doesn’t describe reality: Predictive power is not a guide to reality.” (Warning: likely paywall.)

    Along similar lines, in an article about what he says are quantum myths, Ethan Siegel argues that superpositions are not fundamental to quantum physics:

    Superpositions are incredibly useful as intermediate calculational steps to determine what your possible outcomes (and their probabilities) will be, but we can never measure them directly.
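    The calculational role Siegel describes can be sketched in a few lines (my own illustration, not from his article): a superposition is a list of complex amplitudes, and the Born rule converts those amplitudes into the outcome probabilities we can actually measure.

```python
import math

# Hypothetical qubit state: an equal superposition of |0> and |1>.
# The amplitudes themselves are the "intermediate calculational step".
amplitudes = [complex(1 / math.sqrt(2)), complex(1 / math.sqrt(2))]

# Born rule: each outcome's probability is the squared magnitude of
# its amplitude. Only these probabilities are ever checked against
# experiment; the superposition is never measured directly.
probabilities = [abs(a) ** 2 for a in amplitudes]
assert abs(sum(probabilities) - 1.0) < 1e-9  # probabilities normalize
```

    Here each outcome comes out at probability one half; the debate is over whether the amplitude list feeding that calculation describes anything real.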

    Arroyo and Arenhart take a similar line. They argue that it would be more intellectually honest for wave function realists to call their position wave function pragmatism. As they note in the title of their piece, they don’t see predictive success as a guide to reality.

    The question I want to ask these people is, if predictive power, if usefulness, isn’t your guide to what is real, then what is?

    It’s worth thinking about why we care whether something is real or not. Is the sound I’m hearing from outside rain? Is the rain real? To say it is, is to say I need to take an umbrella with me when I go outside, or be prepared to get wet. To say it isn’t is to say I can walk outside without worrying about getting wet. We get similar considerations when trying to decide whether a stock rally is real or illusory, or, from an evolutionary perspective, whether the sound in the bushes is a real predator or just a figment of your imagination. Reality is that which makes a difference, something there’s a possible cost to ignoring.

    Admittedly, this is a strange point to make when talking about quantum states. It might seem like whether they’re real has little to no bearing on our daily lives. But they do seem to make a difference for experimenters and quantum computing engineers. They have to take the dynamics implied in these mathematical tools seriously. In the case of quantum computing, it’s the very dynamics that seem to enable what they’re trying to do. Failure to treat them as real has consequences.
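    The kind of dynamics quantum engineers have to take seriously can be shown with a toy interference calculation (a sketch of my own, not anyone’s published example): applying a Hadamard gate twice returns a qubit to its starting state only because the intermediate amplitudes cancel. Drop the amplitudes from the bookkeeping and the final statistics come out wrong.

```python
import math

# Hadamard gate written out as a 2x2 matrix of real amplitudes.
s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]

def apply(gate, state):
    """Multiply a 2x2 gate into a 2-component amplitude vector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

state = [1.0, 0.0]        # qubit starts in |0>
state = apply(H, state)   # equal superposition of |0> and |1>
state = apply(H, state)   # amplitudes interfere; the |1> path cancels

# Born rule: outcome statistics after the two gates
probs = [abs(a) ** 2 for a in state]
```

    The qubit ends up back in |0> with (near) certainty, a result that depends entirely on the sign structure of the intermediate amplitudes, which is the sense in which ignoring them has practical consequences.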

    Now, I’m a structural realist. I think what we can count on being real in successful scientific theories are the structures they describe, at least to some level of approximation. That doesn’t mean we can count on them being fundamental, or that we know what they may be structures of. This is particularly important to remember with quantum theory, where the structures are all we currently have.

    Does that mean that, rather than being structures of objective reality prior to a measurement, they could actually be structures of subjective expectations as the QBists argue? Or of the way the experimental equipment has been set up, as other antirealists argue? I suppose so. But that seems to imply the possibilities are completely set by these expectations or preparations, that if scientists really wanted to, they could get any result they wanted.

    In practice, something seems to constrain the possible results. Of course, if I put on the epistemic hat, I could argue that those constraints are the constraints on their thoughts (QBism) or practical equipment limitations (other epistemic interpretations), not anything in the quantum realm. But taking this literally seems to imply that quantum physics is a big illusion, a side effect of the way scientists think or construct experiments. If so, how could anyone be sure that any scientific measurement beyond the human senses is to be trusted?

    All of that is before remembering that if we think anything objective at all is happening in the physics prior to a measurement, there are mathematical theorems which kick in and demonstrate that quantum states must describe something real. Epistemic interpretations of quantum mechanics, such as Copenhagen, QBism, and RQM, avoid this by saying there is no such objective physics prior to measurement (or interaction). Which, to me, makes calling them “epistemic” misleading. QBists in particular argue for a “participatory reality,” a notion they inherited from John Wheeler’s “it from bit” idea.

    This selective application of antirealism has always felt like gerrymandering to me. Most of the proponents want to resist the idealism label, but they seem to want to take from metaphysical antirealism just what they need to avoid quantum state realism. It all feels forced.

    Interestingly enough, that doesn’t appear to have been Niels Bohr’s take. Historians often argue that he was more of a neo-Kantian than either an instrumentalist or idealist. His take seemed to be that the quantum realm was real, but inaccessible, the noumena always beyond the phenomena. Of course, this predates the theorems I mentioned above, which is what forces stronger stances from contemporary epistemic proponents.

    But my issue with the Kantian view is it pushes reality into something utterly and forever unknowable. Reportedly, Kant’s motivations for doing this were to preserve space for God, the soul, free will, and morality in response to the “Crisis of the Enlightenment,” which seemed to call all of those things into question. I suspect neo-Kantians are trying to preserve different things, but that kind of preservation likely remains part of their motivation.

    But the cost of doing so is to remove the practical aspects I noted above when deciding what’s real or not. In my view, it removes any utility from the concept of reality, except for talking in terms of theology or overall metaphysics.

    Which may be why Arroyo and Arenhart want to use the word “pragmatic” instead. I think a better strategy is to retain our grounded everyday meaning for “real,” but admit that we never know whether we’ve reached ultimate reality. But this is coming from someone who doesn’t share the Kantian or neo-Kantian concerns.

    Overall, my theory of reality is pragmatic. But I continue to wonder, for the people arguing against that take, what standard are they using?

    What do you think? Are there issues with a pragmatic take on reality I’m overlooking? If so, what would be a better standard?

    #antirealism #Philosophy #PhilosophyOfScience #Physics #QuantumMechanics #realism #structuralRealism

  6. New preprint: Functional Causation Beyond Spacetime: A Non-Metric Framework for Temporal Structure

    Can causal direction emerge from internal asymmetries rather than geometry?

    📄 PDF: philpapers.org/rec/LENFCB
    🔍 #Causality #ModalLogic #StructuralRealism #PhilosophyOfPhysics

  7. Scientific breakthroughs often begin with someone saying, “Don’t panic. This crazy sounding assumption is just to make the math work.”

    Nicolaus Copernicus, when he developed his theory of heliocentrism (the earth orbits the sun), was operating from a scientific realist view. In other words, he thought his system reflected actual reality, or at least reflected it better than Ptolemy’s geocentric system (everything orbits the earth), which had been the accepted model of the universe since ancient times.

    However, the new reality he presented was controversial, particularly in Protestant circles at the time. Which led Andreas Osiander, a Lutheran theologian involved in printing his book, to add an unauthorized and unsigned preface. Osiander argued that Copernicus’ framework shouldn’t be evaluated on whether it’s literally true, but on its usefulness as a mathematical framework for predicting astronomical phenomena. In other words, don’t worry; it’s just convenient math.

    For decades many astronomers followed Osiander’s advice, accepting just Copernicus’ mathematics. The number who actually accepted heliocentric realism was vanishingly small. One astronomer, Tycho Brahe, advocated for a compromise cosmology with most planets orbiting the sun, but the sun still orbiting the earth. Straight Copernicans like Johannes Kepler and Galileo Galilei were very rare. It wouldn’t be until the early 1600s, and Galileo’s telescopic observations, that heliocentrism started to be taken seriously (and resisted).

    Moving forward to 1900, Max Planck was trying to mathematically model black body radiation. But he couldn’t make it work. In desperation, he made a change he was loath to make, one that would make his math compatible with Ludwig Boltzmann’s statistical interpretation of entropy, a view he opposed. He added discrete quantities into the equations, essentially doing the math as if there was a minimum unit of radiation. The change worked.

    Planck was beginning the science of quantum physics, but he didn’t see it at the time. He saw the quantization as purely a pragmatic move, a mathematical contrivance, and was skeptical of any deeper philosophical implications. However, a few years later, Albert Einstein used quanta to explain the photoelectric effect, essentially reifying the quanta into what we now know as photons.

    That same year, Einstein introduced his theory of special relativity. He was a realist about the theory from the beginning. However, his equations had implications for spacetime that he was initially skeptical of. We call it “Minkowski spacetime” today because his old math teacher, Hermann Minkowski, recognized the implications. Einstein eventually came around.

    But after working out general relativity, Einstein was again resistant to some of the implications of his math. General relativity predicted that the universe either had to be contracting or expanding. To save appearances, he introduced a fudge factor called the cosmological constant, a move he later regretted after observations showed that the universe was indeed expanding. (Although the cosmological constant later found new life with the discovery of dark energy.)

    Einstein was also resistant to certain solutions to his equations, solutions which seemed to indicate there could be regions of spacetime which were so curved that nothing could escape. In the early 1900s, these seemed like perverse entities that couldn’t be physical. Of course, today we know black holes exist and play a pivotal role in the universe. We’re able to detect and image them.

    In 1935, Einstein, together with Boris Podolsky and Nathan Rosen, published the famous “EPR paradox” paper, pointing out issues in the mathematics of quantum theory that violated locality, at least under conventional interpretations of quantum mechanics. Erwin Schrödinger followed up with additional papers naming the phenomenon “entanglement”, as well as coming up with the famous “Schrödinger’s cat” thought experiment, which questioned the implications of the mathematical framework he himself had been instrumental in developing.

    The thrust of their argument at the time was that these mathematical implications couldn’t be reality. However, nearly thirty years later, John Stewart Bell came up with a way to test those implications. Alain Aspect, John F. Clauser, and Anton Zeilinger won the 2022 Nobel Prize for their experiments carrying out those tests, progressively closing the loopholes to such an extent that continuing to doubt the predictions would be at least as absurd as accepting them.
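    As a concrete illustration of what those experiments test (this is the CHSH form of the inequality, a later refinement rather than the version in Bell’s original paper):

    $$
    S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
    $$

    where $E(a,b)$ is the measured correlation between outcomes at detector settings $a$ and $b$. Any local hidden-variable theory obeys $|S| \le 2$, while quantum mechanics predicts violations up to $|S| = 2\sqrt{2}$ (the Tsirelson bound), and the violations are what the experiments observed.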

    (Einstein is picked on a lot in this post, but it’s worth noting that these are cases of him blanching at the implications of his own brilliant theories, or theories he helped develop. The fact is many famous scientists struggled with the full implications of their discoveries.)

    Of course, the mathematics aren’t always right. Newton’s laws of gravity were used to predict the existence of Neptune based on anomalies in Uranus’ orbit. However, those same laws were also used to predict the existence of the planet Vulcan, supposedly closer to the sun than Mercury. But Mercury’s orbital anomalies turned out to be stranger, heralding the limitations of Newtonian theory, limitations which would require Einstein’s general relativity to resolve.

    And the Large Hadron Collider hasn’t been kind to many speculative theories and their mathematics. So just because someone can manipulate equations, doesn’t mean it reflects reality.

    On the other hand, when the mathematics of a heavily tested theory, with no further assumptions, make predictions that can’t currently be tested, history seems to suggest taking them seriously. And mathematical convenience often heralds new realities. Even when the limits of a theory are reached, the new explanation typically ends up being far stranger than the initial prediction.

    Granted, it’s always possible to ignore the implied ontology by going instrumentalist. I do think it’s important to be able to put on the instrumentalist hat from time to time. It helps to sidestep ontological biases. Planck did it to make his breakthrough, as did Werner Heisenberg when he was working out the initial mathematical framework for quantum mechanics. But these were theorists using instrumentalism to make progress in spite of the strangeness.

    Other times making progress seems to mean finding ways to reconcile theories, to find where they converge, an inherently realist approach. Einstein reportedly worked out special relativity from reconciling classical electromagnetism and Newtonian motion, and then general relativity from reconciling special relativity and Newtonian gravity. And most of us got interested in science and philosophy to get closer to truth, not just to acquire prediction instruments.

    This is why my own preferred outlook these days is structural realism, a sort of minimal realism that accepts the mathematical structures described by well-tested theories as real, but remains agnostic on any underlying ontology. However, even structural realism means accepting strange implications.

    Which is why many people reach for instrumentalism. Although few are able to stick with it consistently. And selectively adopting it to dismiss predictions we don’t like seems firmly in the tradition of Osiander.

    Unless of course I’m missing something.

    Featured image source

    https://selfawarepatterns.com/2023/12/30/is-it-just-the-math/

    #instrumentalism #Philosophy #PhilosophyOfScience #Physics #Science #scientificRealism #structuralRealism

  8. Scientific breakthroughs often begin with someone saying, “Don’t panic. This crazy sounding assumption is just to make the math work.”

    Nicholaus Copernicus, when he developed his theory of heliocentrism (the earth orbits the sun), was operating from a scientific realist view. In other words, he thought his system reflected actual reality, or at least reflected it better than Ptolemy’s geocentric system (everything orbits the earth), which had been the accepted model of the universe since ancient times.

    However, the new reality he presented was controversial, particularly in protestant circles at the time. Which led Andreas Osiander, a Lutheran theologian involved in printing his book, to add an unauthorized and unsigned preface. Osiander argued that Copernicus’ framework shouldn’t be evaluated on whether it’s literally true, but as a useful mathematical framework to make predicting astronomical phenomena easier. In other words, don’t worry; it’s just convenient math.

    For decades many astronomers followed Osiander’s advice, accepting just Copernicus’ mathematics. The number who actually accepted heliocentric realism was vanishingly small. One astronomer, Tycho Brahe, advocated for a compromise cosmology with most planets orbiting the sun, but the sun still orbiting the earth. Straight Copernicans like Johannes Kepler and Galileo Galilei were very rare. It wouldn’t be until the early 1600s and Galileo’s telescopic observations, that heliocentrism started to be taken seriously (and resisted).

    Moving forward to 1900, Max Planck was trying to mathematically model black body radiation. But he couldn’t make it work. In desperation, he made a change he was loathe to do, one that would make his math compatible with Ludwig Boltzmann’s statistical interpretation of entropy, a view he opposed. He added discrete quantities into the equations, essentially doing the math as if there was a minimum unit of radiation. The change worked.

    Planck was beginning the science of quantum physics, but he didn’t see it at the time. He saw the quantization as purely a pragmatic move, a mathematical contrivance, and was skeptical of any deeper philosophical implications. However, a few years later, Albert Einstein used quanta to explain the photoelectric effect, essentially reifying the quanta into what we now know as photons.

    That same year, Einstein introduced his theory of special relativity. He was a realist about the theory from the beginning. However, his equations had implications for spacetime that he was initially skeptical of. We call it “Minkowski spacetime” today because his old math teacher, Hermann Minkowski, recognized the implications. Einstein eventually came around.

    But after working out general relativity, Einstein was again resistant to some of the implications of his math. General relativity predicted that the universe either had to be contracting or expanding. To save appearances, he introduced a fudge factor called the cosmological constant, a move he later regretted after observations showed that the universe was indeed expanding. (Although the cosmological constant later found new life with the discovery of dark energy.)

    Einstein was also resistant to certain solutions to his equations, solutions which seemed to indicate there could be regions of spacetime which were so curved that nothing could escape. In the early 1900s, these seemed like perverse entities that couldn’t be physical. Of course, today we know black holes exist and play a pivotal role in the universe. We’re able to detect and image them.

    In 1935, Einstein, together with Boris Podolsky and Nathan Rosen, published the famous “EPR paradox” paper, pointing out issues in the mathematics of quantum theory that violated locality, at least under conventional interpretations of quantum mechanics. Erwin Schrödinger followed up with additional papers naming the phenomenon “entanglement”, as well as coming up with the famous “Schrödinger’s cat” thought experiment, which questioned the implications of the mathematical framework he himself had been instrumental in developing.

    The thrust of their argument at the time was that these mathematical implications couldn’t be reality. However, twenty years later, John Stuart Bell came up with a way to test those implications. Alain Aspect, John F. Clauser, and Aton Zeilinger won the 2022 Nobel Prize for their experiments carrying out out those tests, progressively closing the loopholes to such an extent that it would be at least as absurd for the predictions to be wrong as right.

    (Einstein is picked on a lot on this post, but it’s worth noting that these are cases of him blanching at the implications of his own brilliant theories, or theories he helped develop. The fact is many famous scientists struggled with the full implications of their discoveries.)

    Of course, the mathematics aren’t always right. Newton’s laws of gravity were used to predict the existence of Neptune based on anomalies in Uranus’ orbit. However, those same laws were also used to predict the existence of the planet Vulcan, supposedly closer to the sun than Mercury. But Mercury’s orbital anomalies turned out to be stranger, heralding the limitations of Newtonian theory, limitations which would require Einstein’s general relativity to resolve.

    And the Large Hadron Collider hasn’t been kind to many speculative theories and their mathematics. So just because someone can manipulate equations, doesn’t mean it reflects reality.

    On the other hand, when the mathematics of a heavily tested theory, with no further assumptions, make predictions that can’t currently be tested, history seems to suggest taking them seriously. And mathematical convenience often heralds new realities. Even when the limits of a theory are reached, the new explanation typically ends up being far stranger than the initial prediction.

    Granted, it’s always possible to ignore the implied ontology by going instrumentalist. I do think it’s important to be able to put on the instrumentalist hat from time to time. It helps to sidestep ontological biases. Planck did it to make his breakthrough, as did Werner Heisenberg when he was working out the initial mathematical framework for quantum mechanics. But these were theorists using instrumentalism to make progress in spite of the strangeness.

    Other times making progress seems to mean finding ways to reconcile theories, to find where they converge, an inherently realist approach.  Einstein reportedly worked out special relativity from reconciling classical electromagnetism and Newtonian motion, and then general relativity from reconciling special relativity and Newtonian gravity. And most of us got interested in science and philosophy to get closer to truth, not to unrelated prediction instruments.

    This is why my own preferred outlook these days is structural realism, a sort of minimal realism that accepts the mathematical structures described by well tested theories as real, but remains agnostic on any underlying ontology. However even structural realism means accepting strange implications. 

    Which is why many people reach for instrumentalism. Although few are able to stick with it consistently. And selectively adopting it to dismiss predictions we don’t like seems firmly in the tradition of Osiander.

    Unless of course I’m missing something.

    Featured image source

    https://selfawarepatterns.com/2023/12/30/is-it-just-the-math/

    #instrumentalism #Philosophy #PhilosophyOfScience #Physics #Science #scientificRealism #structuralRealism

  9. Scientific breakthroughs often begin with someone saying, “Don’t panic. This crazy sounding assumption is just to make the math work.”

    Nicholaus Copernicus, when he developed his theory of heliocentrism (the earth orbits the sun), was operating from a scientific realist view. In other words, he thought his system reflected actual reality, or at least reflected it better than Ptolemy’s geocentric system (everything orbits the earth), which had been the accepted model of the universe since ancient times.

    However, the new reality he presented was controversial, particularly in protestant circles at the time. Which led Andreas Osiander, a Lutheran theologian involved in printing his book, to add an unauthorized and unsigned preface. Osiander argued that Copernicus’ framework shouldn’t be evaluated on whether it’s literally true, but as a useful mathematical framework to make predicting astronomical phenomena easier. In other words, don’t worry; it’s just convenient math.

    For decades many astronomers followed Osiander’s advice, accepting just Copernicus’ mathematics. The number who actually accepted heliocentric realism was vanishingly small. One astronomer, Tycho Brahe, advocated for a compromise cosmology with most planets orbiting the sun, but the sun still orbiting the earth. Straight Copernicans like Johannes Kepler and Galileo Galilei were very rare. It wouldn’t be until the early 1600s and Galileo’s telescopic observations, that heliocentrism started to be taken seriously (and resisted).

    Moving forward to 1900, Max Planck was trying to mathematically model black body radiation. But he couldn’t make it work. In desperation, he made a change he was loathe to do, one that would make his math compatible with Ludwig Boltzmann’s statistical interpretation of entropy, a view he opposed. He added discrete quantities into the equations, essentially doing the math as if there was a minimum unit of radiation. The change worked.

    Planck was beginning the science of quantum physics, but he didn’t see it at the time. He saw the quantization as purely a pragmatic move, a mathematical contrivance, and was skeptical of any deeper philosophical implications. However, a few years later, Albert Einstein used quanta to explain the photoelectric effect, essentially reifying the quanta into what we now know as photons.

    That same year, Einstein introduced his theory of special relativity. He was a realist about the theory from the beginning. However, his equations had implications for spacetime that he was initially skeptical of. We call it “Minkowski spacetime” today because his old math teacher, Hermann Minkowski, recognized the implications. Einstein eventually came around.

    But after working out general relativity, Einstein was again resistant to some of the implications of his math. General relativity predicted that the universe either had to be contracting or expanding. To save appearances, he introduced a fudge factor called the cosmological constant, a move he later regretted after observations showed that the universe was indeed expanding. (Although the cosmological constant later found new life with the discovery of dark energy.)

    Einstein was also resistant to certain solutions to his equations, solutions which seemed to indicate there could be regions of spacetime which were so curved that nothing could escape. In the early 1900s, these seemed like perverse entities that couldn’t be physical. Of course, today we know black holes exist and play a pivotal role in the universe. We’re able to detect and image them.

    In 1935, Einstein, together with Boris Podolsky and Nathan Rosen, published the famous “EPR paradox” paper, pointing out issues in the mathematics of quantum theory that violated locality, at least under conventional interpretations of quantum mechanics. Erwin Schrödinger followed up with additional papers naming the phenomenon “entanglement”, as well as coming up with the famous “Schrödinger’s cat” thought experiment, which questioned the implications of the mathematical framework he himself had been instrumental in developing.

    The thrust of their argument at the time was that these mathematical implications couldn’t be reality. However, twenty years later, John Stuart Bell came up with a way to test those implications. Alain Aspect, John F. Clauser, and Aton Zeilinger won the 2022 Nobel Prize for their experiments carrying out out those tests, progressively closing the loopholes to such an extent that it would be at least as absurd for the predictions to be wrong as right.

    (Einstein is picked on a lot on this post, but it’s worth noting that these are cases of him blanching at the implications of his own brilliant theories, or theories he helped develop. The fact is many famous scientists struggled with the full implications of their discoveries.)

    Of course, the mathematics aren’t always right. Newton’s laws of gravity were used to predict the existence of Neptune based on anomalies in Uranus’ orbit. However, those same laws were also used to predict the existence of the planet Vulcan, supposedly closer to the sun than Mercury. But Mercury’s orbital anomalies turned out to be stranger, heralding the limitations of Newtonian theory, limitations which would require Einstein’s general relativity to resolve.

    And the Large Hadron Collider hasn’t been kind to many speculative theories and their mathematics. So just because someone can manipulate equations, doesn’t mean it reflects reality.

    On the other hand, when the mathematics of a heavily tested theory, with no further assumptions, make predictions that can’t currently be tested, history seems to suggest taking them seriously. And mathematical convenience often heralds new realities. Even when the limits of a theory are reached, the new explanation typically ends up being far stranger than the initial prediction.

    Granted, it’s always possible to ignore the implied ontology by going instrumentalist. I do think it’s important to be able to put on the instrumentalist hat from time to time. It helps to sidestep ontological biases. Planck did it to make his breakthrough, as did Werner Heisenberg when he was working out the initial mathematical framework for quantum mechanics. But these were theorists using instrumentalism to make progress in spite of the strangeness.

    Other times making progress seems to mean finding ways to reconcile theories, to find where they converge, an inherently realist approach.  Einstein reportedly worked out special relativity from reconciling classical electromagnetism and Newtonian motion, and then general relativity from reconciling special relativity and Newtonian gravity. And most of us got interested in science and philosophy to get closer to truth, not to unrelated prediction instruments.

    This is why my own preferred outlook these days is structural realism, a sort of minimal realism that accepts the mathematical structures described by well tested theories as real, but remains agnostic on any underlying ontology. However even structural realism means accepting strange implications. 

    Which is why many people reach for instrumentalism. Although few are able to stick with it consistently. And selectively adopting it to dismiss predictions we don’t like seems firmly in the tradition of Osiander.

    Unless of course I’m missing something.

    Featured image source

    https://selfawarepatterns.com/2023/12/30/is-it-just-the-math/

    #instrumentalism #Philosophy #PhilosophyOfScience #Physics #Science #scientificRealism #structuralRealism

  10. Scientific breakthroughs often begin with someone saying, “Don’t panic. This crazy sounding assumption is just to make the math work.”

    Nicholaus Copernicus, when he developed his theory of heliocentrism (the earth orbits the sun), was operating from a scientific realist view. In other words, he thought his system reflected actual reality, or at least reflected it better than Ptolemy’s geocentric system (everything orbits the earth), which had been the accepted model of the universe since ancient times.

    However, the new reality he presented was controversial, particularly in protestant circles at the time. Which led Andreas Osiander, a Lutheran theologian involved in printing his book, to add an unauthorized and unsigned preface. Osiander argued that Copernicus’ framework shouldn’t be evaluated on whether it’s literally true, but as a useful mathematical framework to make predicting astronomical phenomena easier. In other words, don’t worry; it’s just convenient math.

    For decades many astronomers followed Osiander’s advice, accepting just Copernicus’ mathematics. The number who actually accepted heliocentric realism was vanishingly small. One astronomer, Tycho Brahe, advocated for a compromise cosmology with most planets orbiting the sun, but the sun still orbiting the earth. Straight Copernicans like Johannes Kepler and Galileo Galilei were very rare. It wouldn’t be until the early 1600s and Galileo’s telescopic observations, that heliocentrism started to be taken seriously (and resisted).

    Moving forward to 1900, Max Planck was trying to mathematically model black body radiation. But he couldn’t make it work. In desperation, he made a change he was loathe to do, one that would make his math compatible with Ludwig Boltzmann’s statistical interpretation of entropy, a view he opposed. He added discrete quantities into the equations, essentially doing the math as if there was a minimum unit of radiation. The change worked.

    Planck was beginning the science of quantum physics, but he didn’t see it at the time. He saw the quantization as purely a pragmatic move, a mathematical contrivance, and was skeptical of any deeper philosophical implications. However, a few years later, Albert Einstein used quanta to explain the photoelectric effect, essentially reifying the quanta into what we now know as photons.

    That same year, Einstein introduced his theory of special relativity. He was a realist about the theory from the beginning. However, his equations had implications for spacetime that he was initially skeptical of. We call it “Minkowski spacetime” today because his old math teacher, Hermann Minkowski, recognized the implications. Einstein eventually came around.

    But after working out general relativity, Einstein was again resistant to some of the implications of his math. General relativity predicted that the universe either had to be contracting or expanding. To save appearances, he introduced a fudge factor called the cosmological constant, a move he later regretted after observations showed that the universe was indeed expanding. (Although the cosmological constant later found new life with the discovery of dark energy.)

    Einstein was also resistant to certain solutions to his equations, solutions which seemed to indicate there could be regions of spacetime which were so curved that nothing could escape. In the early 1900s, these seemed like perverse entities that couldn’t be physical. Of course, today we know black holes exist and play a pivotal role in the universe. We’re able to detect and image them.

    In 1935, Einstein, together with Boris Podolsky and Nathan Rosen, published the famous “EPR paradox” paper, pointing out issues in the mathematics of quantum theory that violated locality, at least under conventional interpretations of quantum mechanics. Erwin Schrödinger followed up with additional papers naming the phenomenon “entanglement”, as well as coming up with the famous “Schrödinger’s cat” thought experiment, which questioned the implications of the mathematical framework he himself had been instrumental in developing.

    The thrust of their argument at the time was that these mathematical implications couldn’t be reality. However, twenty years later, John Stuart Bell came up with a way to test those implications. Alain Aspect, John F. Clauser, and Aton Zeilinger won the 2022 Nobel Prize for their experiments carrying out out those tests, progressively closing the loopholes to such an extent that it would be at least as absurd for the predictions to be wrong as right.

    (Einstein is picked on a lot on this post, but it’s worth noting that these are cases of him blanching at the implications of his own brilliant theories, or theories he helped develop. The fact is many famous scientists struggled with the full implications of their discoveries.)

    Of course, the mathematics aren’t always right. Newton’s laws of gravity were used to predict the existence of Neptune based on anomalies in Uranus’ orbit. However, those same laws were also used to predict the existence of the planet Vulcan, supposedly closer to the sun than Mercury. But Mercury’s orbital anomalies turned out to be stranger, heralding the limitations of Newtonian theory, limitations which would require Einstein’s general relativity to resolve.

    And the Large Hadron Collider hasn’t been kind to many speculative theories and their mathematics. So just because someone can manipulate equations, doesn’t mean it reflects reality.

    On the other hand, when the mathematics of a heavily tested theory, with no further assumptions, make predictions that can’t currently be tested, history seems to suggest taking them seriously. And mathematical convenience often heralds new realities. Even when the limits of a theory are reached, the new explanation typically ends up being far stranger than the initial prediction.

    Granted, it’s always possible to ignore the implied ontology by going instrumentalist. I do think it’s important to be able to put on the instrumentalist hat from time to time. It helps to sidestep ontological biases. Planck did it to make his breakthrough, as did Werner Heisenberg when he was working out the initial mathematical framework for quantum mechanics. But these were theorists using instrumentalism to make progress in spite of the strangeness.

    Other times making progress seems to mean finding ways to reconcile theories, to find where they converge, an inherently realist approach.  Einstein reportedly worked out special relativity from reconciling classical electromagnetism and Newtonian motion, and then general relativity from reconciling special relativity and Newtonian gravity. And most of us got interested in science and philosophy to get closer to truth, not to unrelated prediction instruments.

    This is why my own preferred outlook these days is structural realism, a sort of minimal realism that accepts the mathematical structures described by well tested theories as real, but remains agnostic on any underlying ontology. However even structural realism means accepting strange implications. 

    Which is why many people reach for instrumentalism. Although few are able to stick with it consistently. And selectively adopting it to dismiss predictions we don’t like seems firmly in the tradition of Osiander.

    Unless of course I’m missing something.

    Featured image source

    https://selfawarepatterns.com/2023/12/30/is-it-just-the-math/

    #instrumentalism #Philosophy #PhilosophyOfScience #Physics #Science #scientificRealism #structuralRealism

  11. Scientific breakthroughs often begin with someone saying, “Don’t panic. This crazy sounding assumption is just to make the math work.”

    Nicholaus Copernicus, when he developed his theory of heliocentrism (the earth orbits the sun), was operating from a scientific realist view. In other words, he thought his system reflected actual reality, or at least reflected it better than Ptolemy’s geocentric system (everything orbits the earth), which had been the accepted model of the universe since ancient times.

    However, the new reality he presented was controversial, particularly in protestant circles at the time. Which led Andreas Osiander, a Lutheran theologian involved in printing his book, to add an unauthorized and unsigned preface. Osiander argued that Copernicus’ framework shouldn’t be evaluated on whether it’s literally true, but as a useful mathematical framework to make predicting astronomical phenomena easier. In other words, don’t worry; it’s just convenient math.

    For decades many astronomers followed Osiander’s advice, accepting just Copernicus’ mathematics. The number who actually accepted heliocentric realism was vanishingly small. One astronomer, Tycho Brahe, advocated for a compromise cosmology with most planets orbiting the sun, but the sun still orbiting the earth. Straight Copernicans like Johannes Kepler and Galileo Galilei were very rare. It wouldn’t be until the early 1600s, with Galileo’s telescopic observations, that heliocentrism started to be taken seriously (and resisted).

    Moving forward to 1900, Max Planck was trying to mathematically model black body radiation. But he couldn’t make it work. In desperation, he made a change he was loath to make, one that would make his math compatible with Ludwig Boltzmann’s statistical interpretation of entropy, a view he opposed. He added discrete quantities into the equations, essentially doing the math as if there were a minimum unit of radiation. The change worked.

    Planck was beginning the science of quantum physics, but he didn’t see it at the time. He saw the quantization as purely a pragmatic move, a mathematical contrivance, and was skeptical of any deeper philosophical implications. However, a few years later, Albert Einstein used quanta to explain the photoelectric effect, essentially reifying the quanta into what we now know as photons.

    That same year, Einstein introduced his theory of special relativity. He was a realist about the theory from the beginning. However, his equations had implications for spacetime that he was initially skeptical of. We call it “Minkowski spacetime” today because his old math teacher, Hermann Minkowski, recognized the implications. Einstein eventually came around.

    But after working out general relativity, Einstein was again resistant to some of the implications of his math. General relativity predicted that the universe either had to be contracting or expanding. To save appearances, he introduced a fudge factor called the cosmological constant, a move he later regretted after observations showed that the universe was indeed expanding. (Although the cosmological constant later found new life with the discovery of dark energy.)

    Einstein was also resistant to certain solutions to his equations, solutions which seemed to indicate there could be regions of spacetime which were so curved that nothing could escape. In the early 1900s, these seemed like perverse entities that couldn’t be physical. Of course, today we know black holes exist and play a pivotal role in the universe. We’re able to detect and image them.

    In 1935, Einstein, together with Boris Podolsky and Nathan Rosen, published the famous “EPR paradox” paper, pointing out issues in the mathematics of quantum theory that violated locality, at least under conventional interpretations of quantum mechanics. Erwin Schrödinger followed up with additional papers naming the phenomenon “entanglement”, as well as coming up with the famous “Schrödinger’s cat” thought experiment, which questioned the implications of the mathematical framework he himself had been instrumental in developing.

    The thrust of their argument at the time was that these mathematical implications couldn’t be reality. However, nearly thirty years later, John Stewart Bell came up with a way to test those implications. Alain Aspect, John F. Clauser, and Anton Zeilinger won the 2022 Nobel Prize for their experiments carrying out those tests, progressively closing the loopholes to such an extent that it would be at least as absurd for the predictions to be wrong as right.

    (Einstein is picked on a lot in this post, but it’s worth noting that these are cases of him blanching at the implications of his own brilliant theories, or theories he helped develop. The fact is many famous scientists struggled with the full implications of their discoveries.)

    Of course, the mathematics aren’t always right. Newton’s laws of gravity were used to predict the existence of Neptune based on anomalies in Uranus’ orbit. However, those same laws were also used to predict the existence of the planet Vulcan, supposedly closer to the sun than Mercury. But Mercury’s orbital anomalies turned out to have a stranger cause, heralding the limitations of Newtonian theory, limitations which would require Einstein’s general relativity to resolve.

    And the Large Hadron Collider hasn’t been kind to many speculative theories and their mathematics. So just because someone can manipulate equations doesn’t mean the results reflect reality.

    On the other hand, when the mathematics of a heavily tested theory, with no further assumptions, make predictions that can’t currently be tested, history seems to suggest taking them seriously. And mathematical convenience often heralds new realities. Even when the limits of a theory are reached, the new explanation typically ends up being far stranger than the initial prediction.

    Granted, it’s always possible to ignore the implied ontology by going instrumentalist. I do think it’s important to be able to put on the instrumentalist hat from time to time. It helps to sidestep ontological biases. Planck did it to make his breakthrough, as did Werner Heisenberg when he was working out the initial mathematical framework for quantum mechanics. But these were theorists using instrumentalism to make progress in spite of the strangeness.

    Other times making progress seems to mean finding ways to reconcile theories, to find where they converge, an inherently realist approach. Einstein reportedly worked out special relativity from reconciling classical electromagnetism and Newtonian motion, and then general relativity from reconciling special relativity and Newtonian gravity. And most of us got interested in science and philosophy to get closer to truth, not to collect unrelated prediction instruments.

    This is why my own preferred outlook these days is structural realism, a sort of minimal realism that accepts the mathematical structures described by well tested theories as real, but remains agnostic on any underlying ontology. However, even structural realism means accepting strange implications.

    Which is why many people reach for instrumentalism, although few are able to stick with it consistently. And selectively adopting it to dismiss predictions we don’t like seems firmly in the tradition of Osiander.

    Unless of course I’m missing something.

    Featured image source

    https://selfawarepatterns.com/2023/12/30/is-it-just-the-math/

    #instrumentalism #Philosophy #PhilosophyOfScience #Physics #Science #scientificRealism #structuralRealism