Reviewing Papers in 2016

[Preface: I am a bit worried that this post might be taken the wrong way concerning my ratio of reject recommendations to total recommendations. I simply think it is useful information to know about myself. I also think that keeping more detailed records of my reviewing habits was educational and made the reviewing process even more interesting. I suspect others might have the same reaction.]

Happy 2017! I collected more detailed data on my reviewing habits in 2016. Previously, I had just kept track of the outlets and total number of reviews to report on annual evaluation documents.  In 2016, I started tracking my recommendations and the outcomes of the papers I reviewed. This was an interesting exercise and I plan to repeat it for 2017.  I also have some ideas for extensions that I will outline in this post.
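
For anyone tempted by the same exercise, here is a minimal sketch of that kind of tracking in Python, assuming a simple CSV log with hypothetical column names (date, journal, round, recommendation, outcome); the tallies are the same sort reported below:

```python
import csv
from collections import Counter

# Hypothetical log: one row per review submitted, with columns such as
# date, journal, round ("new" or "revision"), recommendation, outcome.
with open("reviews_2016.csv", newline="") as f:
    rows = list(csv.DictReader(f))

new_submissions = [r for r in rows if r["round"] == "new"]
tally = Counter(r["recommendation"] for r in new_submissions)
for rec, n in tally.most_common():
    print(f"{rec}: {n} ({100 * n / len(new_submissions):.1f}%)")
```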

Preliminary Data:

I provided 51 reviews from 1 Jan 2016 to 29 Dec 2016. Of these 51 reviews, 38 were first-time submissions (74.5%) whereas 13 (25.5%) were revisions of papers that I had previously reviewed.  For the 38 first-time submissions, I made the following recommendations to the Editor:

Decision    Frequency    Percentage
Accept      1            2.6%
R&R         13           34.2%
Reject      24           63.2%

Yikes! I don’t think of myself as a terribly harsh reviewer but it looks like I recommended “Reject” about 2 out of 3 times that I submitted reviews. (I score below the mean on measures of Agreeableness so perhaps this is consistent?)  I was curious about my base rate tendencies and now I have data. I feel a little bit guilty.

I will say that my recommendation is tailored to the journal in terms of my perception of the selectivity of the outlet. I have high expectations for papers submitted to one of the so-called top outlets, and I might have a slight bias toward saying yes for those outlets more so than for a less selective outlet (I am going to track this in 2017).  I should also note that I never say in my comments to the authors whether a paper should be accepted.  I know that can create an awkward situation for Editors (at least it does for me when I am placed in that role).

For the revisions, I made the following recommendations to the Editor:

Decision    Frequency    Percentage
Accept      9            69.2%
R&R         2            15.4%
Reject      2            15.4%

I had previously made reject recommendations on the initial submissions in those two cases, and my opinion was unchanged by the revision.  I can say that the Editor ultimately rejected those two papers and that the initial decision letter was frank about the chances of those papers.  I know we all hate having revisions rejected.

I was most interested in how often my initial recommendation predicted the ultimate outcome of a paper. Here is a crosstab for my reviews of first-time submissions:

                     Ultimate Decision
My Recommendation    Accept    Reject    Unknown    Total
Accept               1         0         0          1
R&R                  6         2         5          13
Reject               4         18        2          24
Total                11        20        7          38

Note: Unknown refers to decisions that were in progress at the end of the calendar year for 2016.

This suggests that my reject recommendations are usually consistent with the ultimate outcome for a paper at that outlet. Of my 24 reject recommendations, 22 had known outcomes by the end of the year; my recommendation was inconsistent with the ultimate decision in 4 of those 22 cases (18%) and concordant with it in the other 18.  (Yes, I know I should compute kappas here to deal with base rate differences but I am lazy.)
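
For the non-lazy, here is a minimal sketch of that kappa computation, assuming Accept and R&R are collapsed into a single “not reject” category and the 7 unknown outcomes are dropped (all counts come straight from the crosstab above):

```python
# Agreement between my recommendation (reject vs. not) and the ultimate
# decision, using only the 31 cases with known outcomes.
# Counts from the crosstab: reject->reject 18, reject->accept 4,
# not-reject->reject 2 (both R&R), not-reject->accept 7 (1 + 6).
a, b = 7, 2   # rec not-reject: outcome accept, outcome reject
c, d = 4, 18  # rec reject:     outcome accept, outcome reject

n = a + b + c + d                                        # 31 known cases
p_obs = (a + d) / n                                      # observed agreement
p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
kappa = (p_obs - p_exp) / (1 - p_exp)
print(f"observed agreement = {p_obs:.2f}, kappa = {kappa:.2f}")
# -> observed agreement = 0.81, kappa = 0.56
```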

In the end, I think this was a good exercise as it has made me slightly more aware of my recommendations and helped me gauge agreement.  As noted above, I am going to add information to the 2017 iteration of this exercise.  Foremost, I plan to track how many reviews I decline in 2017 and note my personal reasons for declining.  Categories will include: Conflict of Interest; Too Many Existing Ad Hoc Reviews (X Number on My Desk); Outside of My Area of Expertise; Issue with the Journal (e.g., I won’t review for certain outlets because of their track record of publishing papers that I do not trust); Other.  I will also track whether the submission was blinded and the number of words in my review.
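
As a sketch of what that decline bookkeeping might look like (the category labels and the helper function are hypothetical, paraphrased from the list above):

```python
from collections import Counter

# Hypothetical decline categories, paraphrased from the list above.
DECLINE_REASONS = {
    "conflict of interest",
    "too many existing ad hoc reviews",
    "outside my area of expertise",
    "issue with the journal",
    "other",
}

declines = Counter()

def log_decline(reason: str) -> None:
    """Record one declined review, restricted to the planned categories."""
    if reason not in DECLINE_REASONS:
        raise ValueError(f"unknown decline reason: {reason}")
    declines[reason] += 1

log_decline("too many existing ad hoc reviews")
print(declines.most_common())
```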

I try to accept as many reviews as I can but I sometimes feel overwhelmed by the workload. Indeed, I struggle with the right level of involvement in peer review. I believe reviewing is an important service to the field but it is time consuming. My intuition is that an academic should review a minimum of three to four times the number of papers they submit for peer review each year. I want to make sure that I meet this standard moving forward.
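
The rough arithmetic behind that intuition, sketched under the assumption that each submission draws about three reviewer reports per round (more with revisions or resubmission elsewhere):

```python
def review_debt(submissions_per_year: int, reports_per_submission: int = 3) -> int:
    """Reviews 'owed' to the field, assuming each submitted paper
    consumes roughly `reports_per_submission` reviewer reports."""
    return submissions_per_year * reports_per_submission

# E.g., someone who submits 6 papers a year should aim for ~18 reviews.
print(review_debt(6))  # -> 18
```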

Anyway, I found this a fairly interesting exercise, and I suspect others might as well.