
How not to review scientific papers

This year a scientific conference is taking place to which a number of colleagues from our department and I submitted papers. The conference uses a double-blind review process, meaning that two independent reviewers evaluate and grade each paper. In theory, this is a good process: the reviews are often helpful for the authors even when their papers are rejected, and the overall quality of the conference is somewhat ensured. Now I have received some of the worst reviews I have ever read – not because my papers were rejected (one was actually accepted) – but because of the review comments themselves. Some colleagues asked me whether to take such reviews seriously, and I asked myself the same question. I'd like to share my thoughts on this with you, and I'm pretty sure you will agree.

I was (co-)author of two different papers. Since I knew there was a high number of submissions, acceptance was not guaranteed, but I was eager to see the reviews, which have often proved very helpful at other conferences. These were the actual reviews we got:

Clearly exceeds 400-word limt for abstract.
Very interesting innovative approach on participatory planning software. Once
up and running, I would like to learn more about experiences with the
implementation. (5/10)

At least this review gave one hint that the reviewer actually read the paper. But "clearly exceeds 400-word limit"? Our abstract had 394 words, including date and title → FAIL

may be rejected (4/10)

"may be rejected"? Oh really? That's so helpful – who needs reasons anyway? My grandmother could probably have written a better review, and she certainly can't read English scientific papers → FAIL

Now to the second paper, which was actually accepted:

In order to properly compare companies as a whole on a higher level, the
authors suggest to a) aggregate key performance indicators towards a single
evaluation value and b) to assign different weights to indicators based on
stakeholder input.
The new models need to be summarized more clearly. (9/10)

Ok, this is actually the best review (which is quite sad), because it shows the reviewer at least read the paper. He or she also gives one hint, which is not really helpful (in a full paper it's much easier to present the approach clearly than in a 400-word abstract), but at least it's something. At other conferences this review would be a fail, but by now my standards are so low that I'll call it acceptable. Not to worry though, there's another review:

good paper (8/10)

Gee, thanks! Such a reasoned, well-structured review. What can I say? Oh yes → FAIL

One of the foundations of the scientific method is to have results written down in an understandable (reproducible) manner. Granted, this conference had more submissions than expected. But a review process like this is utterly pointless – you can't even tell whether the reviewers so much as looked at your paper. For what it's worth: most of the papers accepted here listed professors among their authors. This doesn't necessarily mean anything, but it hints at another big problem in current science – titles matter more than content.
Overall, this feedback leaves a very stale aftertaste and raises the question of whether the conference (or at least its reviewers) can be taken seriously. The conference hasn't even started and it already makes an unprofessional impression. Let's hope the actual presentations and final papers make up for this.

