 Originally Posted by MadMojoMonkey
I'm on board with the assessment that a sloppy reproduction of the original experiment is unable to comment on the validity of the original experiment.
I'm under the impression that, even in cases where the initial researcher was involved in the reproduction, the results were not stellar. There have been a slew of failures to reproduce, even when the procedures of the original experiment were followed to a T.
There are two major classes of issues around reproducibility: methodological (i.e., how closely the replication reproduces what was done in the original study) and statistical (how the original study's results are compared to the replication attempt, and how inferences are drawn as to whether the result constitutes a replication).
As for methods, the willingness of someone to cooperate in a replication attempt speaks mainly to their own integrity - it doesn't necessarily ensure the replication attempt itself isn't being corrupted by some unknown variable or the replicator's general incompetence in trying to replicate a study done outside their own area of expertise. This can happen despite everyone's best intentions.
FWIW, I have spoken to some of the people whose work was reproduced in the psych reproducibility project. Even among the ones who agreed and cooperated, many had criticisms about how faithfully the replication reproduced the original methods. That is obviously a concern when interpreting the results.
Regarding statistics, it is SUPER annoying that these people (the reproducibility project) set out to assess reproducibility without even one of them appearing to have a solid grasp of how to do so statistically. The way they did the stats doesn't pass the laugh test because it relies on matching the outcomes of two separate null hypothesis significance tests, each reduced to a binary significant/not-significant call (haha), rather than measuring whether the replication provides evidence for an effect consistent in size with the one implied by the original study.
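To put rough numbers on that (my own toy figures, not the project's data), here is a quick simulation of how the significant-again criterion behaves when the effect being studied is perfectly real but the studies are only modestly powered:

[CODE]
# Toy simulation (assumed numbers): every study tests a real effect of
# d = 0.4 with n = 50 per group, which gives roughly 50% power.  Take only
# the "significant" originals (as the project did) and score a replication
# as successful when its own p < .05: about half of these perfectly real
# effects get labelled replication failures, purely from sampling noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n, sims = 0.4, 50, 10_000
orig_sig = rep_sig = 0

for _ in range(sims):
    p_orig = stats.ttest_ind(rng.normal(d, 1, n), rng.normal(0, 1, n)).pvalue
    if p_orig < 0.05:                     # only 'positive' originals get re-run
        orig_sig += 1
        p_rep = stats.ttest_ind(rng.normal(d, 1, n), rng.normal(0, 1, n)).pvalue
        rep_sig += p_rep < 0.05

# the 'replication rate' is just the replication's power, about 50% here
print(f"replication rate by the significant-again criterion: {rep_sig / orig_sig:.0%}")
[/CODE]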
 Originally Posted by MadMojoMonkey
If your argument is that psychology deals with people, who can be roughly described as chaotic systems, and therefore reproducibility is not as fundamental to the field... then I'd say that's a step away from scientific rigor. It doesn't mean it's not science, but it certainly means it's using a permutation of scientific method.
I didn't say anything like that. If you have an effect that is large and consistent enough to be of import, then by definition you will be able to design a study with sufficient power to find it, and it should reproduce reliably. You should also provide enough information that a person acting in good faith can repeat your methods as precisely as is humanly possible.
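To make the power point concrete (assumed numbers, purely for illustration), a standard power calculation shows how modest the sample requirements are for a genuinely large effect:

[CODE]
# Sketch of the power argument: for two independent groups, Cohen's d = 0.8
# and alpha = .05, roughly 26 subjects per group already buy you 80% power,
# so an adequately powered design is not a heroic undertaking.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8)
print(f"n per group for d = 0.8 at 80% power: {n_per_group:.0f}")  # ~26
[/CODE]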
People who run thousands of subjects and report 'significant' correlations of 0.2 are bigger morons than people who report large effects from small samples that may be difficult to replicate (though the latter are still morons, just not on the same scale as the former). The reason for this is that many replication attempts are done in bad faith - the authors often have a vested interest in disproving someone's results and so (consciously or not) manipulate the methods or analyses just enough to make that happen.
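A quick illustration of the first point, with made-up numbers:

[CODE]
# With a few thousand subjects, a correlation of r = 0.2 is "significant"
# at an absurdly small p-value, yet it explains only r**2 = 4% of the variance.
import numpy as np
from scipy import stats

r, n = 0.2, 2000
t = r * np.sqrt((n - 2) / (1 - r**2))    # t statistic for testing r against zero
p = 2 * stats.t.sf(t, df=n - 2)          # two-sided p-value
print(f"t = {t:.1f}, p = {p:.1e}, variance explained = {r**2:.0%}")
[/CODE]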
That said, it is entirely possible to obtain spurious results due to statistical variance alone. Thus, the best defense for a scientist of integrity against having their reputation tarnished by failures to replicate is to replicate their own results at least once, and preferably several times, before publishing anything.
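The arithmetic behind that advice is simple, assuming independent tests run on a true null effect:

[CODE]
# One test throws a false positive 5% of the time at alpha = .05; requiring
# k in-house replications before publishing shrinks the chance that every
# test comes up "significant" purely by luck to alpha ** (k + 1).
alpha = 0.05
for k in range(3):   # 0, 1, or 2 internal replications before publication
    print(f"{k} internal replication(s): chance of a pure fluke <= {alpha ** (k + 1):.4%}")
[/CODE]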
In the end, it would not entirely surprise me if a fair number of people are making errors in their design and/or analyses and thus reporting spurious results. Statistics is a complicated subject, and analyzing numerous variables simultaneously is not a trivial matter. Whether the number of fuck-ups is anywhere near the ~50% claimed by the reproducibility project is something I very much doubt, for the reasons given above. Ultimately, though, the vast majority of important effects obtained in the field have been replicated dozens if not hundreds of times and shown to be reliable, because they are of sufficient interest that people want to study them.
The layperson's impression that half of what we know about the mind must be bullshit because of the reproducibility project's report is just wrong.