I read your blog occasionally and am interested in your comments on this article studying hiring preferences for male/female academics in science fields.
Anything for an occasional blog reader?
There’s already some great commentary on this terrible article (shame on PNAS for publishing it!); scroll down in the link for a bunch of linked studies.
In addition to all of the problems already illuminated in the linked criticism, there are elements of the survey design, right off the bat, that have been shown to decrease measured discrimination. For example, presenting two functionally identical resumes side by side reduces the expression of implicit bias (according to a much better written PLOS One article). That’s why good lab studies compare across participants rather than within participants. Field studies do often compare within participants (i.e., within job openings), but in a real opening those aren’t the only two resumes being considered; and even so, a new working paper by a researcher (David Phillips) at Hope College shows that sending functionally identical resumes in matched-pair audit studies changes how those resumes are perceived.
Also, the quality of the candidates matters: there seems to be a winner-take-all dynamic in many STEM fields, so when women are at the top of the distribution they’re preferred, but when they’re not at the top, they are discriminated against relative to comparable men.
Finally, even if the research designs were externally and internally valid (which they are not; see the linked commentary), there have been at least 19 studies showing the opposite of this one, so while it’s unlikely, these results could simply be statistical noise.
(That’s not even counting the authors’ history of doing bad science in support of their demonstrated agenda.)