Motherhood Online: A book review

We were sent Motherhood Online by the editor, Michelle Moravec.

This book is a scholarly tome, but even given that, there are only two articles in it that I would call inaccessible to non-academic readers.  (And those two articles are both short and probably inaccessible to most academic readers as well.)  Non-academic readers will find the first section just as amusing and the second and third sections just as interesting as this academic reader did.

The book starts out with case studies that will be familiar to anyone who has ever been on a pregnancy or mothering forum.  It does seem that if you’ve been on one of these forums, you’ve been on all of them: for all the differences we perceive between the mothering.coms and the babycenters of the world, the dynamics are not that much different, even across forums from different countries.  Oddly, this section is titled “Theoretical perspectives” but is, for the most part, atheoretical, focusing instead on each author’s own experiences with an online parenting community.

The second section, titled “Case studies,” includes articles with a broader theory base, more formal qualitative methods, and comparisons across different cases.  This section focuses on communities that many of us have had less experience with, but that are interesting in their own right.  I especially enjoyed the studies of teenage mothers, autistic parents, the port-wine stain community, stay-at-home dads, and really most of the articles in this section.  I felt like I learned something reading many of these articles.

The last section focuses on blogs and community, with the stand-out piece being one on the community of people from developed countries who use (employ?) Indian women as surrogate mothers.

Although the introduction focuses on the positives of these online communities, the articles themselves are even-handed with both the positives (community building, information sharing, support) and the negatives (conflict, incorrect information, rationalization, etc.).  The authors come from a number of different disciplines, including communication, sociology, public health, anthropology, history, and others.  These different disciplinary paths and perspectives come across in the methodology and the writing.  Obviously we feel more comfortable with the social-science methodologies, but the other disciplines provide for entertaining reading and discussion.

Is this worth reading?  Sure!  Especially if you’re into non-fiction and would like to think a bit about the dynamics of online communities.  The book is a nice collection of articles that should, for the most part, be as easy to read as a Malcolm Gladwell book, but with perhaps a few more citations included.


Good vs. bad research

Just because some research says X is good and some says X is bad doesn’t mean we don’t know whether X is good or bad.

Research quality is also important.

Correlation is easy to measure.  When X and Y are related, there are many methods we can use to figure out how much they’re related, how much they covary.  Causation is not so easy.  Is it X causing Y, Y causing X, or some third factor Z that causes both?

The gold standard of getting at causation is the randomized controlled experiment.  When done well, randomized controlled experiments are internally valid.  In the setting tested, we can say that X causes Y if when X is varied, Y varies as well.
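For the curious, here is a toy simulation (all numbers made up, plain Python) of why randomization buys internal validity.  A hidden factor Z drives both X and Y, so X and Y are strongly correlated even though X has no causal effect on Y at all; randomly assigning X breaks the link to Z, and the correlation collapses to roughly zero.

```python
import random

random.seed(1)
n = 100_000

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in pairs) / m
    sx = (sum((x - mx) ** 2 for x in xs) / m) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / m) ** 0.5
    return cov / (sx * sy)

# Confounded world: a hidden factor Z drives both X and Y.
# X itself has NO effect on Y.
def observational():
    z = random.gauss(0, 1)
    return z + random.gauss(0, 1), 2 * z + random.gauss(0, 1)

# Experiment: X is assigned at random, so it no longer tracks Z.
def experimental():
    z = random.gauss(0, 1)
    return random.gauss(0, 1), 2 * z + random.gauss(0, 1)

c_obs = corr([observational() for _ in range(n)])  # strong correlation (~0.63)
c_exp = corr([experimental() for _ in range(n)])   # essentially zero
print(round(c_obs, 2), round(c_exp, 2))
```

In the observational data you would be tempted to say X causes Y; the experiment shows it doesn’t.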

Randomized controlled experiments may not be externally valid.  The subject pool (often undergraduate psychology majors) may not act the same as the rest of the population.  The general equilibrium effects may be different if adding money for one intervention takes away money from another intervention, rather than leaving everything else the same.  Additionally, an intervention may work great on a small set of people but flounder with a much larger set (e.g., training out-of-work people to be welders: great when it’s a small number of people, not so good when every unemployed person can now weld).

We can’t always do a randomized controlled experiment.  Sometimes the interventions would be illegal, unethical, inappropriate for a lab, or just too expensive.  Social scientists have a number of ways to get at causality when that’s the case.  Notably, economists use “natural experiments”: exogenous shocks to the treatment that, with some fancy math, can be used to isolate the causal mechanism from what is correlational but not causal.  Popular methods include “difference-in-differences,” which subtracts out bias by comparing two (or more) imperfect treatments (changes in state laws over time are popular), and “instrumental variables,” in which you use a Z variable that is related to your X variable but is related to your Y variable only through X, so you know that the Z part of X is causally affecting Y.  There are other techniques, such as regression discontinuity design and propensity score matching, each with its own strengths and drawbacks.
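As a toy illustration of difference-in-differences (made-up numbers, not from any real study): suppose a policy passes in one state but not another, both states share a common time trend, and the treated state has a different baseline level.  Differencing twice removes both the level gap and the trend, leaving the causal effect.

```python
import random

random.seed(0)

baseline = {"treated": 10.0, "control": 12.0}  # pre-existing level gap
time_trend = 2.0                               # hits both states equally
true_effect = 3.0                              # causal effect of the policy

def outcome(state, post):
    y = baseline[state] + time_trend * post + random.gauss(0, 0.1)
    if state == "treated" and post == 1:
        y += true_effect
    return y

def mean(xs):
    return sum(xs) / len(xs)

n = 1000
cell = {(s, p): mean([outcome(s, p) for _ in range(n)])
        for s in ("treated", "control") for p in (0, 1)}

# The control state's before/after change subtracts out the shared time
# trend; differencing within each state subtracts out the fixed level gap.
did = (cell[("treated", 1)] - cell[("treated", 0)]) \
    - (cell[("control", 1)] - cell[("control", 0)])
print(round(did, 1))  # recovers roughly 3.0, the true effect
```

Note that a naive before/after comparison in the treated state alone would give about 5.0 (trend plus effect), and a naive treated/control comparison in the post period would give about 1.0 (effect minus the level gap); only the double difference gets it right.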

It doesn’t matter if 20 published education papers find that X and Y are related and then make the claim that X causes Y.  That doesn’t mean that X causes Y.  Standards of publication for causal claims are different in different fields.  But if the same claim is published in a high quality psychology journal, then you can be pretty sure that they did a randomized controlled experiment to figure out causation, and they probably got it right, at least from an internal validity standpoint.

If the same claim is published in a high quality economics journal, then they may not have done a randomized controlled experiment, but they probably did the best that can be done with a high quality quasi-experiment or natural experiment.  (Ignoring the subset of theory papers that can prove anything and are still published in high quality journals…)  These economics findings may be more likely to be externally valid than the psychology findings, but it will depend on what kind of natural experiment the authors exploited.  If they only studied teen moms, then the findings may not be relevant to single men over the age of 50.

So just because research is mixed on a topic doesn’t mean we don’t actually know the answer.  If some of the research is crap, and some of it is good, then you can ignore the crap part and just focus on what is good.  How can you tell what is good?  Well, that’s a bit harder, but keeping in mind that correlation is not causation and looking hard for what the authors are actually measuring is a good first step.

Do you get frustrated when reporters report on research without having any idea about the quality of the research?  How do you winnow out the wheat from the chaff?

Failed Projects

As (social) scientists, sometimes we all experience a project that just never works.  Sometimes you underestimate the initial investment and it never really gets off the ground (e.g., you thought you could get certain resources but you can’t).  Sometimes you do the whole thing and it looks great, but when you try to replicate it you can’t.  Sometimes the data just don’t tell a coherent story and you need to go back and rethink your methodology, predictions, theory, etc.  I have had each of these happen to me, some more than once.

How do you recognize when a project is failing, and what do you do with it afterwards?

Possible things to do with it:  set the data aside; maybe they will be useful for something later on.  Revise your procedure and start over.  Let a student have it for a small project that you know won’t get published anyway.  Shrug and chalk it up to a learning experience, taking the long view that a career is decades long and not every study has to pan out.  These are all painful but necessary when you don’t want to lose more time throwing good effort after bad data.

One more choice:  write an article about the methodology instead of the results, giving advice to other researchers who may want to do something similar.  This approach has worked for me at least once.

Each time something fails after an initial investment of time and energy, it hurts.  It sucks.  I could always just choose to do projects with lower initial startup costs and lower risk of total failure (p.s. design your study so you will still get some usable data even if your main hypothesis is uninterpretable).  However, I don’t want to stick with the safe and easy all the time.  I did a study my first year on the tenure track that was easy, fast, data collection went great, it looked simple — but then the data analysis turned into a bear and we’ve just now gotten the manuscript out.  I want to keep trying new things.

I’m trying to believe that if you don’t fail at some projects, you aren’t trying enough things.  If you don’t get rejections on your papers, you aren’t aiming high enough.  (Thanks, CPP, for reinforcing this idea!)

I tell myself, you are allowed to suck.  Indeed, you must suck.  Get all your sucking out of the way so you can move on.  Fail better.

Any advice from the Grumpeteers on when to cut your losses?

Yes, I talk to the press

Do you?

Whenever the press calls, it gets me off my game, especially when I don’t pass the phone-interview stage.  It’s hard to get focused back on work.  I see why a lot of my colleagues in the greater academic community don’t take press calls, but I sort of feel like it’s part of my job to spread truth and light to the general public.  If only I were as good at explaining complicated things as Neil deGrasse Tyson!  Maybe after tenure I’ll polish up some soundbites and practice them like he recommends.

#2 never gets phone calls from the press, because they took her office phone away in a budget cut.

Do you talk to the press?  And how related to your area of expertise does it have to be for you to say something?

Theory of cat hair diffusion

The reason cats shed so much is that they are trying to reach cat hair equilibrium.  Cat hair equilibrium will occur when the surrounding space has the same density of cat hair as the place from which the cat hair emanates, namely, the cat.   Under this theory, the cat will eventually stop shedding once we are so deep in fur that we are unable to breathe.

A corollary to this theory is that diffusion occurs faster when the gradient between the high density fur area and the low density fur area is larger.  That is, the more you vacuum, the faster they shed.

Related:  #2 claims this furminator reduces her cat by 10% upon use.  It doesn’t seem to stop him shedding, but it does make him smaller!  Perhaps there is a complex equation about changing surface-to-volume ratio.  One time (on a previous cat) I decided that I would keep brushing until he either quit shedding, or was naked.  After an hour neither had occurred, and I gave up.

Do you all have any corroborating or denying evidence?  Alternate theories?  Stories of ginormous cat-hair tumbleweeds?

******* creationists!

DC keeps asking questions about how humans evolved… ze knows there were dinosaurs and they died out, and now there are humans, and the mammals during dinosaur time were small, but how did there get to be humans?

Did you know that none of the museums in the city nearest ours has an early humans exhibit?  From the webpage it looks like the museum in the next nearest city also does not have such an exhibit.

We’re all going to (Washington) DC this year just so we can find a museum with an early humans exhibit!  (Also there’s a conference.  Ah, conference vacations, without you we would never leave town.)

But DC still keeps asking, so I thought…well, we’ve got LOTS of science books.  Why don’t we have one for human evolution?  Hm, Scholastic (see:  Scholastic addiction) has lots of science books, but I haven’t seen any on early humans.  I wonder why that is.

Wait, I thought… maybe the fact that the museums in this specific region of the country are conspicuously missing exhibits on early humans is related to the fact that Scholastic doesn’t sell any books on, say, evolution.

Maybe there’s a large purchasing demographic they want to keep buying/donating/visiting.

DH says:  There are just some people you don’t want to tick off.  Would you want to upset anarchists?  Well…

Well, that kind of sucks.  Dratted creationists and their dratted market power.  No wonder folks don’t believe in evolution– there’s a feedback loop.

That’s my rant.  Are there things you miss because of where you’re living?  Do groups with large amounts of purchasing power ever mess you up?

Stereotype threat

Men who are secure in their masculinity are both great lovers and don’t waste their time trying to make themselves feel like real men by putting down women on the internet.  It is well known, and has been scientifically shown*, that men who spend large amounts of times posting about women’s genetic and nature-born inferiority have tiny penises and are trying to compensate for being lousy in bed.

Now, whether that’s true or not, such a statement may cause these loser-“men” to subconsciously doubt their virility and indeed perform poorly in bed (even worse than they already are!).

Stereotype threat occurs when people are aware or are made aware of a stereotype regarding their group.  When presented with this stereotype, their measured performance moves closer to the perceived mean for their group.  This effect has been shown over and over, for minorities, for women, for lower-caste Indians in India, in testing situations and in real situations.  It is a real phenomenon.  You tell someone that their group is bad at something or worse at something than another group, and their performance will suffer.  These negative stereotypes become self-fulfilling prophecies.

Stereotype threat is malicious and malignant.  When men post bogus studies about women and minorities’ supposed natural-born inferiority and complain about poor Larry Summers (who, incidentally, was pushed out of Harvard for problems with micromanagement, his abrasive administrative style, and his general disdain for any humanities or social science field that is not economics, and not for anything to do with his ignorant remarks at NBER), they are feeding stereotype threat.  On Chronicle forums, they are making academic women feel like they should and can achieve less.  On gifted forums, they’re implicitly encouraging housewives to stay home with their children, and to not expect as much from their daughters as from their sons.  Their unchecked general acceptance that people who aren’t white (or occasionally Asian) men are inferior can spread to other people who read their comments and “proofs,” who then spread the contagion to people IRL.  And such comments push out those who would argue against them by creating and promoting an unwelcoming and hostile environment for women who aren’t willing to be bullied.

Why So Slow? by Virginia Valian is a must-read.  It’s a fantastic literature review and a well-reasoned argument about exactly how many of the differences that some ascribe to genetics are actually the product of our culture.  If you are a nature-only person, this book provides convincing evidence for nurture.

Even if there are strong genetic differences in ability or whatever by gender (which there probably aren’t), that does not say much about individual people.  Imagine two overlapping normal curves, in which one is slightly shifted:

Now compare the area under the curves that is shared to the area that is not.  Individual differences will always outweigh differences between groups.  Or if you believe Pinker (and many experts do not agree with his conclusions or his methods), the two groups may have the same means, but one curve is fatter and the other taller.  Almost nobody is in the parts of the tails that aren’t shared.  Again, individual differences are always greater than differences between groups.
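You can put a number on this.  For two equal-variance normal curves, the shared area (the overlapping coefficient) has a simple closed form, and for what psychologists call a “small” group difference (Cohen’s d of 0.2, an illustrative value, not a claim about any particular trait), the curves share over 90% of their area:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def overlap(mean1, mean2, sd):
    """Overlapping coefficient of two equal-variance normal curves.
    With equal SDs the curves cross midway between the means, so the
    shared area works out to 2 * Phi(-|mean1 - mean2| / (2 * sd))."""
    d = abs(mean1 - mean2)
    return 2 * norm_cdf(-d / (2 * sd))

# A "small" group difference (Cohen's d = 0.2): the two distributions
# still share about 92% of their area, so knowing someone's group tells
# you almost nothing about that individual.
print(round(overlap(0.0, 0.2, 1.0), 2))  # 0.92
```

Identical means give an overlap of exactly 1.0, and even a “large” difference (d = 0.8) still leaves roughly two-thirds of the area shared.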

So, to summarize, if you have a tiny penis stop being an ass on the internet.  If you have to harm people by telling them that their entire group is inferior, you’re doing real harm and you’re a loser.  Real men don’t need to put women down in order to feel masculine, because they already are.

*using small-penis-man definitions of “scientifically,” not standard definitions from people who understand statistics, though no doubt there is actually a correlation.  Someone should study that.

Disclaimer:  Penis length is not a direct indicator of female satisfaction, nor does it actually have any bearing on a person’s value as a person. However, we choose this example because we believe it to be most insulting to men who constantly post negative “proof” of women’s innate inferiority (which is stupid of them).  Additionally, it is well known that ability in bed increases with one’s valuation of one’s partner ;).