
I think this is a very naive refutation of what the point was about Kuhn.

First off, the notion that engineering "validates" science. What do you mean by validate? Do you mean successful engineering informed by a set of scientific principles, somehow, in a scientifically rigorous way, renders those principles true? As in, end of story true?

Because most of mechanical engineering is done on the assumption that force equals mass times acceleration. The industrial revolution yielded countless engineering marvels on the back of Newton--cars, trains, breathtaking buildings and bridges. But the theory isn't "true," because Albert Einstein did some thinking and realized that the physics changes as you go really fast--something "engineering" missed, despite the fact that the stuff "worked."
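To put a rough number on "really fast", here is a back-of-the-envelope sketch of how small the relativistic correction to Newton is at engineering speeds (the speeds chosen below are arbitrary, illustrative examples):

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact, by definition)

def lorentz_factor(v: float) -> float:
    """Relativistic gamma = 1 / sqrt(1 - v^2/c^2); Newton is the gamma ~= 1 limit."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A fast train at ~90 m/s (~200 mph): the correction to Newton is about
# 4.5e-14, many orders of magnitude below anything a 19th-century
# engineer could measure.
print(lorentz_factor(90.0) - 1.0)

# At half the speed of light, the correction is ~15% and Newton visibly fails.
print(lorentz_factor(0.5 * C))  # ~1.1547
```

Which is one way to see why engineering "missed" it: at industrial-revolution speeds the discrepancy is buried far beneath measurement error, so the bridges and trains worked regardless.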

But somehow it feels wrong to say that Newton was wrong, right? Because in his world, with the kind of thinking and set of scientific instruments available to him, and the battle-hardened inverse-square phenomena [1] that could be painstakingly measured and applied in engineering, it was infallible. It was true. But only in historical context.

1) Actually, not quite so. The strange orbit of Mercury was out of line with Newton's equations, so much so that a planet, Vulcan, conveniently placed beyond Earth's powers of observation, was invented to explain it. And so the theory was saved, until General Relativity explained the orbit away, too. So much for "truth."



> I think this is a very naive refutation of what the point was about Kuhn.

I don't deny that it's naive.

> First off, the notion that engineering "validates" science. What do you mean by validate? Do you mean successful engineering informed by a set of scientific principles, somehow, in a scientifically rigorous way, renders those principles true? As in, end of story true?

Nothing is end-of-story true except in mathematics, where we gain access to absolute truth by first accepting an axiom system as absolutely true within a context, and then accepting logical rules capable of turning one absolute truth into another within that same context.

Mathematics is absolute, but it's only valid within the abstract context of that branch of mathematics.

Physics, for example, is only conditionally true, contingent on us finding evidence which refutes a given theory, but it is applicable to the real world.

So engineering provides evidence that the theories we have can predict the behavior of the Universe at least in the context where they're being applied. A theory is only validated in the world in which it is tested. Granted. However, to the extent it is tested and validated, that validation should be accepted as worthwhile, as opposed to being written off as something culturally contingent.

> Because most of mechanical engineering is done on the assumption that force equals mass times acceleration. The industrial revolution yielded countless engineering marvels on the back of Newton--cars, trains, breathtaking buildings and bridges. But the theory isn't "true," because Albert Einstein did some thinking and realized that the physics changes as you go really fast--something "engineering" missed, despite the fact that the stuff "worked."

Right, and Einstein's predictions about how accelerating a massive particle to near light speed would affect its measurement of time were validated not by engineering but by experimentation. It's also true that bridge engineering validates Newton as much as it does Einstein and Dirac, for example, because it operates in a world where all three theories are "valid" in the sense of "if you use them to help make your bridge, they will not cause it to fall down", and it validates whatever ideas the ancient Roman bridge-builders had, at least if the bridge is of a style the Romans made. I grant all of that.

Philosophically, then, we're back to Popper, in that negative results push science forwards, whereas positive results only make us more sure that the ground we're standing on is solid. We shouldn't ignore positive results, though, because the bridge will still stand even after the next paradigm shift; we should further accept all theories as provisionally correct. That much seems fairly mainstream, philosophically speaking.

However, we are moving forwards. We are able to explain more observations than we have been able to in the past. We are not just moving in circles, with each paradigm shift undoing all of our work and sending us back to square one. We learn to make better and better bridges, to bring this back to engineering.

> But somehow it feels wrong to say that Newton was wrong, right? Because in his world, with the kind of thinking and set of scientific instruments available to him, and the battle-hardened inverse-square phenomena [1] that could be painstakingly measured and applied in engineering, it was infallible. It was true. But only in historical context.

Newton's laws were always provisional. We now know them to be incomplete, but still useful for human-scale construction, on Earth or in space or on other bodies entirely. They've been subsumed into more modern theories as a special case; they're the equations you observe when you set the parameters to be similar to what humans experience first-hand. And, as you said, they couldn't explain Mercury, which modern theories can, so they were incomplete even before we had GPS satellites to falsify their predictions. (I mean, they were observably incomplete. Our observation doesn't dictate what reality is; any solipsists can kindly imagine that I don't exist and refrain from communicating with me.)

So engineering does validate theories, but validation isn't enough to winnow theories until you come up with some test that some of them fail. That's just Popperian philosophy, though, isn't it? That's just the philosophy of science that all the cool kids are so done with right now, right? My point is that we shouldn't imagine that the validation is worthless, or imagine that it can be undone, because any new theory will have to explain precisely the same behavior as the old one, paradigm shift or no.


I don't really have time to respond to your comments right now, but I did want to make one tangential remark.

> Mathematics is absolute, but it's only valid within the abstract context of that branch of mathematics.

Mathematics itself went through a paradigm shift in the early 20th century, known as the "foundational crisis". At the time, mathematicians began running into paradoxes which existing theories could not properly address, including Russell's Paradox.

In response, mathematicians developed formal axiom systems (nowadays most people use ZFC, although Von Neumann–Bernays–Gödel set theory and other variations are sometimes used) which provide a mathematical foundation believed to be consistent (i.e. free of the known paradoxes/contradictions).

However, as Gödel's Incompleteness Theorems demonstrated, no effectively axiomatized foundation strong enough to express arithmetic can be both consistent (free of contradictions) and complete (able to deduce every mathematical truth expressible in its language).
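For reference, the first theorem is usually stated along these lines (a standard paraphrase; $T$ and $G_T$ are the conventional symbols, not anything from this thread): for any consistent, effectively axiomatized theory $T$ that interprets basic arithmetic, there is a sentence $G_T$ such that

```latex
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T ,
```

i.e. $T$ can neither prove nor refute $G_T$. The second theorem adds that such a $T$ cannot prove its own consistency.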

So, while it is true that mathematical proofs are formally valid deductions from a set of axioms, it is worth recognizing that the relationship between mathematics and truth is somewhat more complex than it seems. As it stands, any such axiomatic system leaves infinitely many mathematical statements undecidable. Some philosophers have even sought to identify 'quasi-empiricism' in mathematical thought [1].

And if you find that interesting, you'll love James Conant's paper on Logically Alien Thought [2].

[1] http://en.wikipedia.org/wiki/Quasi-empiricism_in_mathematics

[2] http://philosophy.uchicago.edu/faculty/files/conant/Search%2...


My point about Mercury was important because it shows that there is an appreciable level of "give" that a theory has before the scientific community agrees that there is something wrong with it. That level of give is socially determined. It matters whether the discoverer of the anomaly is a Cambridge PhD or a crackpot with no credentials. The measurement instruments matter, and the fallibility of those instruments plays into the acceptance of the results, too.

A Popperian viewpoint is somewhat naive because what constitutes a falsification is incredibly fraught! Read Lakatos. He models scientific progress as a series of research programs, each with a "hard core" of belief protected by ancillary theories. In the event of a negative result, it's those ancillary theories that are investigated first. For example: is my telescope correct? Is the theory of light that informs my telescope correct? Is there a dark planet influencing things that I can't see? In hindsight, Mercury should have falsified Newton, because all of the falsifying observations were valid. But it didn't, because reasons.

We (and by we I mean Popperians) want to believe that science is a series of universally positive logical assertions that can be cut down by a single negative observation, as logic would dictate. But we don't always know what the criteria are for successful negative observations. The criteria are less rigid and well defined than we would be willing to admit. They vary from community to community.

Robert Millikan won the Nobel Prize for measuring the charge of the electron with his brilliant oil-drop experiment. Only problem? His measurement was wrong. As folks tried to repeat it, they deviated more and more from his original value, until, many repetitions and many publications later, they landed on the correct one. If you were to plot the accepted measurement of the electron's charge against time, you would see it drift very slowly from an arbitrary incorrect value to the correct one. You have to ask: how on earth is this possible? Bias, authority, imprecision over truth criteria--all at play. And I think it's this sociological fuzziness, in play in many thousands of small ways, that leads us to at least question the assumptions on which truth is founded.


> We (and by we I mean Popperians) want to believe that science is a series of universally positive logical assertions that can be cut down by a single negative observation, as logic would dictate. But we don't always know what the criteria are for successful negative observations. The criteria are less rigid and well defined than we would be willing to admit.

Bayesian reasoning helps here, I think, because people are wrong, and different people are wrong with different probabilities. For example, overturning mass-energy conservation because someone said they saw a professor turn into a small cat or a strange spacecraft appear and disappear is not reasonable: The probability of one person being wrong or insane is a lot higher than the probability of something really well-verified being completely incorrect.
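A toy sketch of that update (the prior and likelihoods below are invented purely for illustration; only the shape of the conclusion matters):

```python
# Toy Bayesian update on the claim "mass-energy conservation is wrong",
# given a single eyewitness report of a violation. All numbers are
# made up for illustration.
prior_wrong = 1e-9           # assumed prior that the well-verified law fails
p_report_given_wrong = 0.9   # witness reports an anomaly if the law really failed
p_report_given_right = 0.01  # witness misperceives/fabricates even if it holds

# Bayes' rule: P(wrong | report)
evidence = (p_report_given_wrong * prior_wrong
            + p_report_given_right * (1.0 - prior_wrong))
posterior_wrong = p_report_given_wrong * prior_wrong / evidence

print(posterior_wrong)  # ~9e-8: one report barely moves the needle
```

Even after the report, the odds remain overwhelmingly in favor of the law; it would take many independent, reliable observations to flip the posterior, which is exactly why a lone account of a professor turning into a cat shouldn't overturn anything.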

Is it political at times? Yes. Can it be improved? Sure. But it is flawed, not completely broken, and I think Kuhn makes too much of the flawed-ness, which encourages people to imagine that it's completely broken and that, therefore, the next paradigm shift will validate homeopathy.


I don't think anybody is claiming that science is "completely broken," but there are some who want nice, clean, logical delineations, who think that science is filling out some invisible, giant truth table. And that by each assertion in that truth table, there's a straightforward "this is how to falsify me" entry that scientists can look up and enact.

Based on how science actually works, this notion is fanciful. No such table exists. If you had the luxury of asking, say, the top physicists to each create such a table for you, the tables they produced would very likely all look different.

Also, your comment regarding homeopathy is something of a strawman. Paradigms are incommensurable. If we do undergo a paradigm shift in our lifetimes, it's likely that our current ways of speaking about science will be unable to capture it.



