Finding what interests me in a new career

One of us is job-hunting after quitting academia and moving to paradise.  I have been looking for jobs I want, but I haven’t been finding that many to apply to–I still have enough resources at this point to be able to focus my search on jobs I would like rather than taking any job.  I have applied for about 20 things and gotten 1 phone interview and no in-person interviews or offers.

What do I want?  I want something sciency and researchy, in the social sciences.  I am not a clinician and not a certified CRA.  I am not a biologist or pharmacist or engineer, and I do not use Hadoop (I could learn if I had to, but it doesn’t seem necessary now).  I don’t program (beyond several standard social science statistics packages and some dabbling in things like HTML, but nothing like C++-level programming) and I don’t want to.

I have [#2: excellent] skills in data analysis, writing, editing, literature review, and many things about the research process [#2:  I fully vouch for these– she reads every paper of mine before I send it out and she’s helped me a ton when stat-transfer fails me, and more than once she’s saved my rear end doing last second RA work when I was up against a deadline and I found a SNAFU.  I’ve also shamelessly stolen a lot of her teaching stuff, but that’s probably irrelevant since she doesn’t want to adjunct or lecture.]!  (See the second table below)  I can do tons of research.

I am not an extrovert and interacting with people most of the time drains me, but I interact quite successfully in teams and research groups.  I’m not interested in being a manager of people in a pure managerial sense, though I can do some and I am experienced supervising teams of research assistants.

Ever since I was a little kid, every “career interest” test I have ever taken has always come out that I should be a professor, and it still does.  However, nope nope nope!

I played with this online thing for scientists and it was kinda enlightening.  It tells you, among other things, what your values, skills, and interests are in a career.  Here are mine.

First, here are my values: the things that are unimportant and important to me in a new career (for these big tables, click to embiggen).  I know this is a lot to ask for, but it represents the ideal.

[Image: My values in a job]

Second, here are my scientific skills, what I think I am good and bad at:

[Image: Science skills summary]

Third, here are my interests:

[Image: Interests summary]

The jobs it suggests for me include faculty at a research university (nopenopenope) and the things I am already applying for, such as research manager stuff.  I would be happy to manage someone’s lab, although I can’t put up with a job where the ONLY thing I do is make other people’s travel arrangements.  I could do quite a good job in something like research administration, if it focused on compliance and not budgets (though I can and will do budgets so long as it isn’t the *only* thing). I am good at teaching but I will never do it ever again.  I love collaborating with other scientists but am not crazy about managing people.

I would like to work for a nonprofit or the VA (which keeps failing to hire me over and over).  I’m not against working at a for-profit company though, especially if the pay is good and the work is interesting.  “Program analyst” seems to be a title I come across a lot in job postings.  The site also suggests that I be an epidemiologist (interesting, but I’m not trained for it), a clinical diagnostician (not trained for it and don’t want to be), and teaching faculty (NOPE NOPE NOPE).  I would be fine as non-academic staff at a university.  I do not do drug testing, nor do I have any wet-lab skills.

You can be sure that my cover letter and resume are shiny, personalized, revised, and proofread by #2 [and, #2 notes, more importantly, the career office at her former grad school went over her resume when she did the change from cv to resume].

I’m not expecting to go in at the highest level, and I don’t really want to. I am definitely willing to work my way up to some extent, but not all the way from the proverbial mailroom. My retirement funds are anemic and if the job is really poorly paid, it might be more profitable to spend that time searching for a better job, rather than being tied to a job that’s both low-paying *and* boring.

Mostly I’ve been applying for jobs that I find on Indeed.com.  But I need to expand.  And yes, I know I should be networking more (and I swear I am networking!)– this post is part of that effort.  ;)

I promise I’m not as much of a special snowflake as I sound like here; I have skills that would really help an employer if only I could convince them of that [and, #2 notes, if she could find more job openings, preferably before they’re advertised…].  Help!

Grumpeteers, what say you?  How can I get a job that pays decently and is also suited to my skills, interests, knowledge, and background?  

Top 20 baby words

DC2 is getting to the age at which ze starts saying things, so I got to wondering what early words babies say.

Fortunately, there’s research on this topic.  I came across a 2008 article from some psychologists at Stanford that includes a chart titled “Rank-Ordered Top 20 Words for Children Who Can Say 1–10 Words on CDI and Percentage of Children Producing Them, by Language.”

It’s Table 4 if you click that link.  They include words from Hong Kong and Beijing as well.

Here are the words for the United States (copied from Tardif et al. 2008).
(n = 264)
1. Daddy
2. Mommy
3. BaaBaa
4. Bye
5. Hi
6. UhOh
7. Grr
8. Bottle
9. Yum Yum
10. Dog
11. No
12. Woof Woof
13. Vroom
14. Kitty
15. Ball
16. Baby
17. Duck
18. Cat
19. Ouch
20. Banana

My first word (not counting Ma’s and Da’s) was the same as my oldest’s first word, “Hi” there on the list.  DC2 hasn’t gotten to “Hi.”  Months ago DC2 was saying key (for kitty) but that seems to have dropped out of the lexicon and has been replaced with Ca (for cat).  Dog has been added.    Ze says, “Yeah,” a lot to signal agreement. Ze can make three different sounds that dogs make — “bowwow” they taught at daycare, “woof” I taught hir, and DH taught hir panting [update:  ze can also make stuffed dog make the slobbery dog kisses sound now, so that’s 4].  Occasionally we’ll hear a ba for bottle, or a bana or nana for banana.  Ze may be saying a lot more, but it’s awfully difficult to tell with the pronunciation.  I remember that DC1 was really into animal sounds, especially barn animals, when ze started to talk in earnest.

Do you have any cute baby word stories?  What was your first word?  Were your first or your children’s first (if applicable) on the list?

This is why we can’t have nice things.

This [grant thing] that [redacted] has is really stupid.  So much bad science to “further women and minorities”.  Reading through their annual report, it’s thing after thing of, “We had this workshop, but nobody came.”  They’re also not checking to see if anything works even when people do come.  There’s no data collected before and after to see if there’s even a change, much less a treatment effect.  There was one thing where they’re like, “we were going to do this survey but…”  They sent the report to me to evaluate, but the entire campus was “treated” and uh… the treatment seems to have been nothing.

Bad science makes the baby Jesus cry.  Poor baby Jesus.
They seem to have a lot of meetings too.  So basically, trying to further the careers of women and minorities at this school consists of making them go to pointless meetings.
See, this is why women and minorities can’t have nice things.

Argh!

(Note:  Some details in the above rant have been changed to protect both the stupid and our own rear ends.)
Are you ever astonished by the amount of bad science done for a good cause?  Have you ever noticed that it’s always the under-represented who have to waste time in meetings?

The negativity jar

The Imposter Syndrome and other forms of negativity can keep people, especially women, from achieving as much as they should.  If you say enough negative things about yourself, eventually other people start to believe them too.

One of the things that we did back in graduate school (during the job market) was have a big jar named the “Negativity Jar.” Anytime we said something negative about ourselves, we would have to put a quarter in the jar (we were poor graduate students– you might want to up that to $1 or $5). That forced us to restructure our statements into things that were actually true– to get at what was actually bothering us, rather than reinforcing the negative lies. It’s what cognitive behavioral therapists call “cognitive restructuring.”

After about 2 weeks there was no more money put in the jar. At the end of the year we were able to buy a little bit of chocolate, not the hard liquor we’d been planning on.

Have you ever had a problem with negative self-talk?  What have you done to address it?  Did it work?

Motherhood Online: A book review

We were sent Motherhood Online by the editor, Michelle Moravec.

This book is a scholarly academic tome, but even given that, there are only two articles in it that I would call inaccessible to non-academic readers.  (And those two articles are both short and probably inaccessible to most academic readers as well.)  Non-academic readers will find the first section just as amusing and the second and third sections just as interesting as this academic reader.

The book starts out with case studies that will be familiar to anyone who has ever been on a pregnancy or mothering forum.  It does seem that if you’ve been on one of these forums, you’ve really been on all of them: for all the differences we perceive between the mothering.coms and the babycenters of the world, the dynamics are not that different, even across forums from different countries.  Oddly, this section is titled “Theoretical perspectives” but is mostly a-theoretical, focusing for the most part on each author’s own experiences with an online parenting community.

The second section, titled “Case studies,” includes articles with a broader theory base, more formal qualitative methods, and comparisons across different cases.  This second section focuses on communities that many of us have had less experience with, but which are interesting in their own right.  I especially enjoyed the studies of teenage mothers, autistic parents, port-wine stain, stay-at-home dads, and really most of the articles in this section.  I felt like I learned something reading many of these articles.

The last section focuses on blogs and community, with the stand-out piece being one on the community of people from developed countries who use (employ?) Indian women as surrogate mothers.

Although the introduction focuses on the positives of these online communities, the articles themselves are even-handed with both the positives (community building, information sharing, support) and the negatives (conflict, incorrect information, rationalization, etc.).  The authors come from a number of different disciplines, including communication, sociology, public health, anthropology, history, and others.  These different disciplinary paths and perspectives come across in the methodology and writing.  Obviously we feel more comfortable with the social science methodologies, but the other disciplines provide for entertaining reading and discussion.

Is this worth reading?  Sure!  Especially if you’re into non-fiction and would like to think a bit about the dynamics of online communities.  The book includes a nice collection of articles that should, for the most part, be as easy to read as a Malcolm Gladwell book, but with perhaps a few more citations included.

Good vs. bad research

Just because some research says X is good and some says X is bad doesn’t mean we don’t know whether X is good or bad.

Research quality is also important.

Correlation is easy to measure.  When X and Y are related, there are many methods we can use to figure out how much they’re related, how much they covary.  Causation is not so easy.  Is it X causing Y, Y causing X, or some third factor Z that causes both?
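To see why the causal question is the hard part, here’s a toy simulation (completely made up, in Python, with hypothetical variable names): X and Y never affect each other at all, but both are driven by a hidden Z, and a sizable correlation shows up anyway.

```python
# Correlation without causation: a made-up illustration, not real data.
# A third factor Z drives both X and Y, which never affect each other.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)        # the hidden common cause
x = z + rng.normal(size=n)    # X depends on Z, not on Y
y = z + rng.normal(size=n)    # Y depends on Z, not on X

print(np.corrcoef(x, y)[0, 1])  # roughly 0.5 despite no X->Y or Y->X link
```

Nothing in that correlation coefficient tells you which of the three causal stories is the right one; only the design of the study can.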

The gold standard of getting at causation is the randomized controlled experiment.  When done well, randomized controlled experiments are internally valid.  In the setting tested, we can say that X causes Y if, when X is varied, Y varies as well.

Randomized controlled experiments may not be externally valid.  The subject pool may not act the same as all the people who aren’t undergraduate psychology majors.  The general equilibrium effects may be different if adding money for one intervention takes away money from another intervention, rather than leaving everything else the same.  Additionally, an intervention may work great on a small set of people but may flounder with a much larger set (e.g., training out-of-work people to be welders– great when it’s a small number of people, not so good when every unemployed person can now weld).

We can’t always do a randomized controlled experiment.  Sometimes the interventions would be illegal, unethical, inappropriate for a lab, or just too expensive.  Social scientists have a number of ways to get at causality when that’s the case.  Notably, economists use “natural experiments” — exogenous shocks to the treatment that, with some fancy math, can be used to isolate the causal mechanism from what is correlational but not causal.  Popular methods include something called “differences-in-differences,” which is a way to subtract out bias by using two (or more) imperfect comparisons (changing state laws over time are popular), and “instrumental variables,” in which you use a Z variable that is related to your X variable but is only related to your Y variable through X, so you know that the Z part of X is causally affecting Y.  There are other techniques, such as regression discontinuity design or propensity score matching, that have various strengths and drawbacks.
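To make differences-in-differences a little more concrete, here is a minimal sketch (a toy example in Python; the data, variable names, and regression setup are hypothetical, not from any real study). The treatment effect is the coefficient on the interaction between “in the state whose law changed” and “after the law changed,” which nets out both the fixed difference between states and the common time trend.

```python
# Toy differences-in-differences sketch (hypothetical data and names).
# Two groups, two periods; only one group's law changes in period 2.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated_state": rng.integers(0, 2, n),  # 1 = state whose law changed
    "post": rng.integers(0, 2, n),           # 1 = after the law change
})
true_effect = 3.0
# Outcome = state fixed effect + common time trend + treatment effect + noise
df["y"] = (2.0 * df["treated_state"]
           + 1.5 * df["post"]
           + true_effect * df["treated_state"] * df["post"]
           + rng.normal(0, 1, n))

# The DiD estimate is the coefficient on the interaction term.
model = smf.ols("y ~ treated_state * post", data=df).fit()
print(model.params["treated_state:post"])  # should come out near 3.0
```

Neither difference alone identifies the effect: comparing states mixes in the fixed difference between them, and comparing periods mixes in the common time trend; the interaction subtracts both out.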

It doesn’t matter if 20 published education papers find that X and Y are related and then make the claim that X causes Y.  That doesn’t mean that X causes Y.  Standards of publication for causal claims are different in different fields.  But if the same claim is published in a high quality psychology journal, then you can be pretty sure that they did a randomized controlled experiment to figure out causation, and they probably got it right, at least from an internal validity standpoint.

If the same claim is published in a high quality economics journal, then they may not have done a randomized controlled experiment, but they probably did the best that can be done with a high quality quasi-experiment or natural experiment.  (Ignoring the subset of theory papers that can prove anything and are still published in high quality journals…)  These economics findings may be more likely to be externally valid than the psychology findings, but it will depend on what kind of natural experiment the authors exploited.  If they only studied teen moms, then the findings may not be relevant to single men over the age of 50.

So just because research is mixed on a topic doesn’t mean we don’t actually know the answer.  If some of the research is crap and some of it is good, then you can ignore the crap part and just focus on what is good.  How can you tell what is good?  Well, that’s a bit harder, but keeping in mind that correlation is not causation and looking hard at what the authors are actually measuring is a good first step.

Do you get frustrated when reporters report on research without having any idea about the quality of the research?  How do you winnow out the wheat from the chaff?

Failed Projects

As (social) scientists, sometimes we all experience a project that just never works.  Sometimes I underestimate the initial investment and it never really gets off the ground (e.g., I thought I could get these resources but I can’t).  Sometimes you do the whole thing and it looks great, but when you try to replicate it you can’t.  Sometimes the data just don’t tell a coherent story and you need to go back and think about your methodology, predictions, theory, etc.  I have had each one of these happen to me, some more than once.

How do you recognize when a project is failing, and what do you do with it afterwards?

Possible things to do with it:  set the data aside; maybe they will be useful for something later on.  Revise your procedure and start over.  Let a student have it for a small project that you know won’t get published anyway.  Shrug and chalk it up to a learning experience, taking the long view that my career is decades long and not every study has to pan out.  These are all painful, but necessary when you don’t want to lose more time throwing good effort after bad data.

One more choice:  write an article about the methodology instead of the results, giving advice to other researchers who may want to do something similar.  This approach has worked for me at least once.

Each time something fails after an initial investment of time and energy, it hurts.  It sucks.  I could always just choose to do projects with lower initial startup costs and lower risk of total failure (p.s. design your study so you will still get some usable data even if your main hypothesis is uninterpretable).  However, I don’t want to stick with the safe and easy all the time.  I did a study my first year on the tenure track that was easy, fast, data collection went great, it looked simple — but then the data analysis turned into a bear and we’ve just now gotten the manuscript out.  I want to keep trying new things.

I’m trying to tell myself that if you don’t fail at some projects, you aren’t trying enough things.  If you don’t get rejections on your paper, you aren’t aiming high enough.  (Thanks, CPP, for reinforcing this idea!)

I tell myself, you are allowed to suck.  Indeed, you must suck.  Get all your sucking out of the way so you can move on.  Fail better.

Any advice from the Grumpeteers on when to cut your losses?
