Just because some research says X is good and some says X is bad doesn't mean we don't know whether X is good or bad.
Research quality is also important.
Correlation is easy to measure. When X and Y are related, there are many methods we can use to figure out how much they're related, how much they covary. Causation is not so easy. Is it X causing Y, Y causing X, or some third factor Z that causes both?
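A minimal sketch of this point, using made-up data: the correlation coefficient is a one-liner to compute, but nothing in the number tells you which way (if any) the causal arrow points. Here a hypothetical third factor Z drives both X and Y, and the correlation comes out strong anyway.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: Z causes both X and Y; neither X nor Y causes the other.
z = rng.normal(size=1000)
x = z + rng.normal(scale=0.5, size=1000)
y = z + rng.normal(scale=0.5, size=1000)

# Measuring how much X and Y covary is easy...
r = np.corrcoef(x, y)[0, 1]
print(f"correlation(X, Y) = {r:.2f}")  # strongly positive

# ...but the same number would appear whether X caused Y, Y caused X,
# or (as here) Z caused both. Correlation alone can't distinguish them.
```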
The gold standard of getting at causation is the randomized controlled experiment. When done well, randomized controlled experiments are internally valid. In the setting tested, we can say that X causes Y if, when X is varied, Y varies as well.
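A sketch of why randomization buys internal validity (simulated data, hypothetical effect sizes): when a lurking factor Z pushes people into treatment and also raises the outcome, the naive treated-versus-untreated comparison is badly biased; when a coin flip assigns treatment, the simple difference in means recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_effect = 2.0

z = rng.normal(size=n)  # confounder

# Observational world: Z pushes people into treatment AND raises Y.
treated_obs = (z + rng.normal(size=n)) > 0
y_obs = true_effect * treated_obs + 3 * z + rng.normal(size=n)
naive = y_obs[treated_obs].mean() - y_obs[~treated_obs].mean()

# Randomized world: a coin flip assigns treatment, independent of Z.
treated_rct = rng.random(n) < 0.5
y_rct = true_effect * treated_rct + 3 * z + rng.normal(size=n)
rct = y_rct[treated_rct].mean() - y_rct[~treated_rct].mean()

print(f"naive observational estimate: {naive:.2f}")  # biased far above 2.0
print(f"randomized estimate:          {rct:.2f}")    # close to 2.0
```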
Randomized controlled experiments may not be externally valid. The subject pool (often undergraduate psychology majors) may not act the same as everyone else. The general equilibrium effects may be different if adding money for one intervention takes money away from another intervention, rather than leaving everything else the same. Additionally, an intervention may work great on a small set of people but flounder with a much larger set (e.g., training out-of-work people to be welders: great when it's a small number of people, not so good when every unemployed person can now weld).
We can’t always do a randomized controlled experiment. Sometimes the intervention would be illegal, unethical, inappropriate for a lab, or just too expensive. Social scientists have a number of ways to get at causality in those cases. Notably, economists use “natural experiments”: exogenous shocks to the treatment that, with some fancy math, can be used to isolate the causal mechanism from what is correlational but not causal. Popular methods include “difference-in-differences,” which subtracts out bias by comparing two (or more) imperfect treatment and comparison groups (state laws that change at different times are a popular source), and “instrumental variables,” in which you use a variable Z that is related to your X variable but related to your Y variable only through X, so you know that the Z-driven part of X is causally affecting Y. Other techniques, such as regression discontinuity design or propensity score matching, have their own strengths and drawbacks.
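The difference-in-differences idea can be sketched in a few lines with entirely hypothetical numbers: subtract the comparison group's before/after change from the treated group's before/after change, so that any trend common to both groups drops out.

```python
# Hypothetical group means: an outcome measured before and after one state
# changes a law (treated) while a neighboring state does not (comparison).
treated_pre, treated_post = 10.0, 14.0
control_pre, control_post = 9.0, 11.0

# Each group's raw change mixes the policy effect with the common trend.
treated_change = treated_post - treated_pre   # policy effect + trend
control_change = control_post - control_pre   # trend only

# Differencing the differences removes the shared trend.
did_estimate = treated_change - control_change
print(f"difference-in-differences estimate: {did_estimate:.1f}")  # 2.0
```

The key (and untestable) assumption is "parallel trends": absent the law change, both states would have moved by the same amount.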
It doesn’t matter if 20 published education papers find that X and Y are related and then claim that X causes Y; that doesn’t make it so. Standards of publication for causal claims differ across fields. But if the same claim is published in a high-quality psychology journal, you can be pretty sure they ran a randomized controlled experiment to get at causation, and they probably got it right, at least from an internal validity standpoint.
If the same claim is published in a high-quality economics journal, they may not have done a randomized controlled experiment, but they probably did the best that can be done with a high-quality quasi-experiment or natural experiment. (Ignoring the subset of theory papers that can prove anything and are still published in high-quality journals…) These economics findings may be more likely to be externally valid than the psychology findings, but it will depend on what kind of natural experiment the authors exploited. If they only studied teen moms, the findings may not be relevant to single men over the age of 50.
So just because research is mixed on a topic doesn’t mean we don’t actually know the answer. If some of the research is crap and some of it is good, you can ignore the crap and focus on the good. How can you tell which is which? Well, that’s a bit harder, but keeping in mind that correlation is not causation, and looking hard at what the authors are actually measuring, is a good first step.
Do you get frustrated when reporters report on research without any idea of its quality? How do you separate the wheat from the chaff?