Two Types of Researchers?

Last winter I gave a quick brown bag talk where I speculated about the possibility of two distinct types of researchers. I drew from a number of sources to construct my prototypes. To be clear, I do not suspect that all researchers will fall neatly into one of these two types; I suspect these are so-called "fuzzy" types. I also know that at least one of my colleagues hates this idea. Thus, I apologize in advance.

Regardless, I think there is something to my working taxonomy and I would love to get data on these issues. Absent data, this will have to remain purely hypothetical. There is of course a degree of hyperbole mixed in here as well. Enjoy (or not)!

| | Approach I | Approach II |
|---|---|---|
| Ioannidis (2008) Label | Aggressive Discoverer | Reflective Replicator |
| Abelson (1995) Label | Brash/Liberal | Stuffy/Conservative |
| Tetlock (2005) or Berlin (1953) Label | Hedgehogs | Foxes |
| Focus | Discovery | Finding Sturdy Effects |
| Preference | Novelty | Definitiveness |
| Research Materials | Private possessions | Public goods |
| Ideal Reporting Standard | Interesting findings only | Everything |
| Analytic Approach | Find results to support view | Concerned about sensitivity |
| Favorite Sections of Papers | Introduction & Discussion | Method & Results |
| Favorite Kind of Article | Splashy reports that get media coverage | Meta-analyses |
| View on Confidence Intervals | Unnecessary clutter | The smaller the better |
| Stand on the NHST Controversy | What controversy? | Jacob Cohen was a god |
| View on TED Talks | Yes. Please pick me. | Meh! |
| Greatest Fear | Getting scooped | Having findings fail to replicate |
| Orientation in the Field | Advocacy | Skepticism |
| Error Risk | Type I | Type II |

Author: mbdonnellan

Professor of Social and Personality Psychology, Texas A&M University

10 thoughts on “Two Types of Researchers?”

  1. The stuffy conservative in me wants to know if you have any actual data to support this and if so where I can download it to verify your analyses.

    The brash liberal in me just tweeted a link.

  2. If you want to collect data on this I would be happy to run your LCA. Although you may want to boil it down to a few specific behaviors and/or discrete items (e.g., "I have written a meta-analysis" or "Being scooped keeps me up at night"), that's already a lot of items. (You wouldn't want to make them forced choice or anything…wait, isn't there a really popular measure out there that uses forced choice?)

  3. Part of me wants to reflexively claim that one approach is superior to the other (and to align myself with that column). On the other hand I think that there might be more of a yin and yang thing going here or maybe a checks and balances analogy is more appropriate. Also, stop picking on TED talks. They aren’t all that bad 🙂

  4. Interesting comments! I think the field as a whole probably does well with a healthy mix of both approaches (to the extent there is any degree of validity in this idea). We need discovery and justification. In my ideal world, we need people to push forward new theories and findings and we need people to make sure those findings hold up. Likewise, we need people to evaluate boundary conditions and test how well findings generalize across settings, cultures, etc.

    My worry is that there is an imbalance between these approaches in "soft" psychology circa 2012, and that the current incentive structure tends to favor Approach I over Approach II. There are hints that the pendulum is swinging back, but I will reserve judgment for a few more years. Not to be cynical, but Cohen, Meehl, Lykken, etc. talked about many of these issues decades ago, and I suspect things are actually worse now than in 1994 or 1978 or 1968.

    I think it is an interesting question why individual researchers tend to gravitate to one approach or the other, and whether there is within-person variability. I also suspect (shockingly, I know!) that there are dispositional tendencies at play as well. I predict a mean difference in Openness across the two approaches, and even in certain facets of Conscientiousness. I guess I do need to find some data for Aidan.

    As for TED talks, I am largely indifferent but approaching a negative perspective. Dr. Phil (Z., not McGraw) epitomizes my concerns. But perhaps I am over-generalizing.

  5. How well do you think the current incentive system does in terms of valuing quality vs. quantity? Is having 5 good publications valued more highly than 10 average publications? Should it be? Obviously it would vary case-by-case and publication-by-publication, but I'm interested in the general sense of it.

  6. Joseph Tan raises a good question. Questions about quantity and quality are important and probably worthy of multiple blog posts. I have a few quick thoughts and it will become apparent that I need to think more carefully about these issues.

    1. It is much easier to measure quantity than quality. Give four members of a P & T committee or a hiring committee a set of 10 CVs, and the inter-rater agreement on number of publications will be far better than the agreement surrounding the quality of the publications. Just because something is harder to measure does not mean it should NOT be measured. However, we should expect a greater amount of measurement error and more vicious fights over the validity of the quality ratings. I suspect that big institutions weight quantity more heavily than they should because it is easier to measure. When this fact co-occurs with an increasing pressure to measure scholarly output, you can easily produce a shift in the incentive structure. Administrators appear to be concerned that we [faculty and grad students] are dutifully producing our "widgets," and they can easily look at numbers of papers and grant dollars without knowing much about the substance of the area. So I think there are incentives for quantity over quality, especially once you have an academic job.

    2. Judgments of quality get tied up in the two approaches I outlined. In other words, a quality publication might be defined differently by those who favor one approach over the other. JPSP and Psychological Science are often viewed as quality outlets. They have high impact factors, and psychological researchers across different sub-disciplines know of these journals. I think these outlets privilege Type I research over Type II research. Although it can happen, JPSP appears reluctant to publish failures to replicate. Psychological Science seems obsessed (at least to me) with splashy articles. They have recently published research with sample sizes that are shockingly low (see some of my previous blog posts). One litmus test for Type I versus Type II researchers might be their views on sample size. If you ask someone about the ideal sample size for a given study and they answer "Whatever gets me my effect," then you know they are more Approach I than Approach II. I think that many of the papers in these "quality" outlets are more interested in providing evidence that some effect is present than in making sure that effect is measured with precision or that a specific effect exactly replicates.

    3. This is now a [very] long-winded way to circle around a nasty reality — quality is a tough nut to crack (cue Zen and the Art of Motorcycle Maintenance). But I will stick my neck out here: 1) the field is tipped toward Approach I over Approach II; 2) judgments of quality differ between these two approaches; so 3) what many people think of as a quality paper embodies Approach I more than Approach II.

    1. One quick thought:
      That mostly makes sense to me, though I would intuitively think quantity incentives would pressure people to produce lots of “safe” papers that don’t push the field forward much. I’m not quite sure which approach “safe” papers fall into, mostly because I’m not sure I can fully define a “safe” paper.

    2. I agree with pretty much all the comments above (neutral on TED talks), but would like to add one more point. I think Type I researchers occupy a small ecological niche. In other words, we need such people, but we can't afford very many of them, in the same way that the savanna environment needs more wildebeests than lions. A turning point for social psychology came in the late 1970s, when the spectacular creativity and success of a few prominent researchers made everybody want to emulate them. Every paper had a new jazzy idea, with a cute title. Nobody was doing the boring stuff of developing more precise measures, establishing boundary conditions, and least of all replicating. So now we are where we are.
