Before, they outlined what they were looking for. Now they're just sifting.
Most science doesn't allow you to propose "We will expose A to X and see what happens", but rather expects a hypothesis: "We expect Z following X>A because of [rationale]."
"Most science doesn't allow you to propose "We will expose A to X and see what happens", but rather expects a hypothesis: "We expect Z following X>A because of [rationale].""
Which I consider one of the major flaws of current scientific practice. There's absolutely nothing wrong with trawling through data looking for interesting things; you just have to be more statistically careful. Huge amounts of science have been done by trawling through data, or by rapid-fire throwing theories at the wall and seeing what sticks. Having to always call your shots means you can only push the frontier back slightly. There's a place for that, which is probably "the vast majority of experiments", but we need the more exploratory stuff too.
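To make "statistically careful" concrete: the classic hazard of trawling is multiple comparisons. If you screen many hypotheses against pure noise at p < 0.05, about 5% of them will look "significant" by chance. A minimal sketch (my own illustration, not from the comment above), simulating null p-values and applying a Bonferroni correction:

```python
import random

random.seed(0)

# Under a true null hypothesis, p-values are uniformly distributed on [0, 1].
# Simulate an exploratory "trawl" that screens 1000 independent null hypotheses.
n_tests = 1000
alpha = 0.05
p_values = [random.random() for _ in range(n_tests)]

# Naive screening: roughly 5% of pure-noise tests clear the bar.
naive_hits = sum(p < alpha for p in p_values)

# Bonferroni correction: demand p < alpha / n_tests instead,
# controlling the chance of even one false positive across the whole trawl.
bonferroni_hits = sum(p < alpha / n_tests for p in p_values)

print(naive_hits)       # on the order of 50 false positives out of 1000
print(bonferroni_hits)  # almost always 0
```

Corrections like this (or false-discovery-rate methods) are what let exploratory sifting remain honest: you can look everywhere, as long as you account for how many places you looked.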
But somehow we got entrenched in a terrible oversimplification of science as "The Way Science Is Done", one that hyperspecifies the ways we're allowed to update our confidence in various theories. There are more valid approaches than are currently considered valid, and we're missing out.
I wonder if this is due to the success of the current models (fewer hidden corners), academic incentives for paper production as opposed to breakthrough work, the ever-growing specificity and cost of experiments (making less-certain experiments unviable), or some combination of these factors.