Some recent highlights (thanks to @hbdchick, through whom I heard about many) include: the mere exposure effect not replicating, a bad history of hummingbird flap rates, vitamin D with or without calcium not mattering for bones, Alzheimer's research being bunk, no evidence for nudging, and doubt cast on serotonin as a useful part of any theory of depression (though some drugs still seem to work somehow, sometimes, so don't get too skeptical -- but stay skeptical, and notice the long-term issues).
An older highlight concerns Koko the gorilla. Many of the flaws with the Koko research are repeated in miniature by so many published researchers: there's just no data to even attempt an outside analysis or replication. Here's another recent thread pointing this out.
What's the point of this post? Just to join in the melancholy that so much of the species' so-called 'knowledge' isn't, and to say that arguments need to be better before you let yourself be convinced by some claim. Sure, maybe you charitably grant someone's claim some credence if you've never thought about it before, but even then, retain some skepticism until the arguments improve. It doesn't matter that a proposition "has a lot of supporting evidence" or that there's "a literature" about something. There's a correlation? Everything correlates. If that's the whole argument, it's just not good enough! Things improve by actually diving into more than one paper or piece of supporting evidence and reporting the underlying findings, and that request should be put to anyone making a lazy "there are papers" argument if you want to actually engage them: which papers specifically, what observations, what experiments? Are these unconnected facts, or has someone tried to build a model explaining them, and how does that model fare? But even when you have solid-seeming papers, they can still be wrong.
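The "everything correlates" point is easy to demonstrate for trending data in particular. Here's a minimal simulation sketch (my own illustration, not from any of the linked sources): two completely independent random walks routinely show large sample correlations, while independent white noise does not -- so a headline correlation between two quantities that both drift over time is close to worthless on its own.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def random_walk(rng, n):
    """Cumulative sum of independent Gaussian steps."""
    pos, out = 0.0, []
    for _ in range(n):
        pos += rng.gauss(0, 1)
        out.append(pos)
    return out

rng = random.Random(0)
n, trials = 500, 200
big_walk = big_noise = 0
for _ in range(trials):
    # Independent white noise: sample correlations cluster tightly near 0.
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    if abs(pearson(a, b)) > 0.5:
        big_noise += 1
    # Independent random walks: "large" correlations are routine.
    if abs(pearson(random_walk(rng, n), random_walk(rng, n))) > 0.5:
        big_walk += 1

print(big_noise, big_walk)  # spurious |r| > 0.5 is common only for the walks
```

This is the classic spurious-regression phenomenon: for trending series the sample correlation doesn't concentrate near zero even with lots of data, which is one concrete reason "there's a correlation" demands a model, not applause.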
Bro-science from people on Twitter trying to do better isn't any better; see e.g. the problems with the lithium take in some guy's quest to understand obesity.
This message shouldn't really be new to anyone exposed to the old rationalist community; such news is just more reason to sigh and grow more cynical. I like this old post laying out how non-Bayesian methods are often crap, as just one source of problems, but of course if you follow science news even casually with an actual eye toward seeing what's true, you'll see constant reminders of all the crap, plus many reasons why it's gotten worse and isn't limited to the 'soft sciences'. That old post made the great observation that parapsychology serves as a kind of control group for science, but it now seems parapsychology isn't so much a control group as just not that different from a lot of 'actual' science.
Are Bayesian methods enough to improve things? Probably not, but I don't think they could make things worse. Continue pushing back against the validity of p-values -- yes, even in physics. Continue pushing for more mathematical sophistication, not less. (Tai's method should be an embarrassment to the medical field.) Continue with skepticism as the default. Read the actual papers (the actual claims will often be totally different from what gets publicly memed). Don't be afraid to stick your neck out in support of solid-seeming research; the technologically advanced world we live in is evidence enough that some of our knowledge is actually knowledge. But if your head gets chopped off when something you believed is overturned, accept it without much complaint, and when you pick your head back up, try to do better next time.
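For anyone who hasn't seen the Tai's method story: a 1994 diabetes paper published "Tai's mathematical model" for computing the area under a glucose curve, which is just the trapezoidal rule from introductory calculus, known for centuries. A sketch of the rediscovered "model":

```python
import math

def tais_method(x, y):
    """Area under a sampled curve by summing trapezoids between
    successive points -- i.e. the centuries-old trapezoidal rule,
    republished in 1994 as "Tai's mathematical model"."""
    area = 0.0
    for i in range(1, len(x)):
        area += (x[i] - x[i - 1]) * (y[i] + y[i - 1]) / 2.0
    return area

# Sanity check on sin(x) over [0, pi]; the exact integral is 2.
n = 100
xs = [math.pi * i / (n - 1) for i in range(n)]
ys = [math.sin(v) for v in xs]
print(tais_method(xs, ys))  # close to 2.0
```

That a field could publish (and then cite) this as a novel model is exactly the kind of mathematical unsophistication worth pushing back on.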
Posted on 2022-08-01 by Jach