Four ideas for changing peer-reviewed academic publishing culture
by Holger H. Hoos
This document is currently just a sketch; nevertheless, please feel free to
share or link to it as you see fit. If you do so, I kindly ask you to reference this document
as follows:
Holger H. Hoos: Four ideas for changing peer-reviewed academic publishing culture.
Working document, version 0.2 of 2013/07/20; latest version available at www.cs.ubc.ca/~hoos/publishing.html.
Here are four ideas that I believe have the potential to address many of the weaknesses
of standard peer-reviewed publishing models, in particular, the archival conference
proceedings model that predominates in most areas of CS. To the best of my knowledge,
all four ideas are new; however, similar but less ambitious concepts have, to varying
degrees, been or are being adopted in some areas of CS.
I believe that each of these ideas can be implemented independently of the others, but
some of them complement each other well, as briefly outlined in the notes below.
1. Continuous submissions
- PC appointed for 12 months, possibly in staggered terms
(including chairs; black-out/preferred periods)
- submissions continuously accepted, reviewing on timeline similar to that for
old-style conferences / journals like JAIR
- because the burden of handling big batches is lessened, summary rejects
without reviews (as practiced by many journals, e.g., Nature) can be used more
effectively
- accepted manuscripts go into conference slots, FIFO
- resubmissions possible, but only once per 12-month cycle; these will preferably
be handled by the same PC members
- there's still a deadline for each conference (in the sense that past that date,
there should be no expectation that a paper can make it in), based on logistical
constraints
- proceedings could (but don't have to) be replaced by a journal
- advance on-line publication (now practiced by many journals and, informally,
authors of papers accepted for major conferences) can be used to reduce publication delays
- type of presentation (long, short, poster) could still be decided when putting together
conference program (based on reviews, area, general interest, ...)
- would combine well with rich reviewing, reviewing token system
(which may help reduce impact on reviewer load)
- applies to conferences, not journals (which do this anyway)
[see VLDB, who appear to be doing this at least in part; ICML also has recently moved
to multiple waves of submissions, but still handles submissions in a batched manner]
Which problem does this address?
Reviewing quality; quality of published papers;
reviewer load (not obvious at first glance, but indeed an expected improvement, essentially
due to more efficient handling of "resubmissions" to the same or other conferences);
publication delays (for reasons similar to those hinted at under reviewer load)
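The FIFO assignment of accepted manuscripts to conference slots, including the logistical
cutoff past which a paper should not expect to make it in, can be sketched as follows.
All class, method, and paper names here are illustrative assumptions, not part of the
proposal itself:

```python
from collections import deque
from datetime import date

class ConferenceQueue:
    """Hypothetical sketch: accepted manuscripts queue up and fill the
    slots of the next conference whose logistical deadline they meet."""

    def __init__(self):
        self.accepted = deque()  # (paper_id, acceptance_date), in FIFO order

    def accept(self, paper_id, accepted_on):
        self.accepted.append((paper_id, accepted_on))

    def fill_program(self, slots, cutoff):
        """Assign up to `slots` papers accepted on or before `cutoff`
        (the logistical deadline) to the next conference, FIFO."""
        program = []
        while self.accepted and len(program) < slots:
            paper_id, accepted_on = self.accepted[0]
            if accepted_on > cutoff:
                break  # past the deadline: no expectation of making it in
            program.append(paper_id)
            self.accepted.popleft()
        return program

q = ConferenceQueue()
q.accept("paper-A", date(2013, 3, 1))
q.accept("paper-B", date(2013, 5, 10))
q.accept("paper-C", date(2013, 7, 19))  # accepted after the cutoff below
program = q.fill_program(slots=2, cutoff=date(2013, 6, 30))
```

Papers accepted after the cutoff simply remain in the queue for the next conference,
which is how continuous submission decouples acceptance from any single event.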
2. Rich reviewing
- reviewers have the option to share intermediate stages of a reviewing cycle
with authors (while remaining anonymous)
- authors and reviewers are notified whenever the reviewing dialogue has been updated
- authors have the option to respond to reviewer input or questions, or proactively
to provide additional comments
- reviewers are encouraged, but not obliged to take intermediate author input
into account (no obligation, in order to avoid "moving target" phenomenon)
- reviewers and authors can opt out of the rich reviewing process, on a
per-paper or permanent basis
- could be easily and efficiently supported by suitably designed submission handling
software
- would combine well with continuous submissions, reviewing token system
(which may help reduce impact on reviewer load)
- this would naturally extend to a post-acceptance commenting phase, where readers
can, anonymously or not (there are pros and cons to anonymity at that stage), comment
on or discuss the work
- could be applied to conferences or journals
[author rebuttals, now increasingly common in CS, can be seen as a very limited form of
rich reviewing]
Which problem does this address?
Reviewing quality; quality of published papers;
subjective experience of authors and reviewers;
publication delays (due to reduced incidence of poorly justified rejections)
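The reviewing dialogue described above amounts to a per-paper message thread with
anonymised senders and update notifications; a minimal sketch follows. The class and
method names are illustrative assumptions; a real submission system would persist
messages and send actual notifications:

```python
class ReviewDialogue:
    """Hypothetical sketch of a rich-reviewing dialogue: reviewers can
    share intermediate input (anonymously), authors can respond or
    proactively comment, and all parties are notified of updates."""

    def __init__(self, paper_id):
        self.paper_id = paper_id
        self.messages = []       # (sender_label, text), in posting order
        self.notifications = []  # (sender_label, notified_parties) per update

    def post(self, sender_label, text, notify):
        """Append a message from an (anonymised) sender and record which
        other parties are notified that the dialogue has been updated."""
        self.messages.append((sender_label, text))
        self.notifications.append((sender_label, tuple(notify)))

d = ReviewDialogue("sub-42")
# A reviewer shares an intermediate question, remaining anonymous:
d.post("Reviewer 2", "Is Theorem 1 restricted to finite domains?", notify=["authors"])
# The authors respond; reviewers may, but need not, take this into account:
d.post("Authors", "Yes; we will state this explicitly.", notify=["Reviewer 2"])
```

Opt-out (per paper or permanent) and the post-acceptance commenting phase would be
thin extensions of the same thread structure.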
3. Reviewer rating
- reviewers anonymously rate each other's reviews (of the same paper);
editors/program chairs can also rate review quality
- ratings get aggregated robustly (median)
- reviewer ratings can be used by editors/program chairs for making decisions on
future use of reviewers (probably using a scheme that takes into account relative
standing, i.e., statistical rank, and absolute aggregate ratings)
- consistently bad reviewers get "dropped silently"
- consistently good reviewers might get rewarded
- reviewer ratings could be used to weight reviewers' acceptance recommendations
(potential problem: noise, strategic bias)
- could be easily and efficiently supported by suitably designed submission handling
software
- would combine well with continuous submissions, reviewing token system
(where it would provide one basis for awarding tokens, see below)
- could be applied to conferences or journals
[many conferences have internal "blacklists", but in all cases I am aware of,
these are managed in a somewhat subjective and ad-hoc manner]
Which problem does this address?
Reviewing quality (and hence paper quality); frustration (and publication delays)
due to poorly justified rejections
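The robust aggregation step mentioned above (taking the median of ratings) can be
shown in a few lines; the 1-5 rating scale and the example numbers are assumptions
for illustration only:

```python
from statistics import median

def aggregate_rating(ratings):
    """Robustly aggregate the per-review ratings (median, as suggested
    above), so that a single outlier rating cannot dominate."""
    return median(ratings)

# Three co-reviewer ratings plus one editor/PC-chair rating for the
# same review; the single low outlier barely moves the aggregate:
assert aggregate_rating([4, 5, 4, 1]) == 4.0
```

A mean over the same ratings would be pulled down to 3.5 by the outlier, which is
why a robust statistic is preferable when individual ratings may be noisy or biased.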
4. Reviewing token system
- authors of submitted papers need to "pay" for reviewing using "reviewing tokens"
(1 token per review, 2-3 per submission, irrespective of number of authors)
- reviewing tokens can be contributed by any author of a paper
- reviewers earn a reviewing token for each review they provide (unless it is considered
"useless", based on reviewer rating or editor's/pc's assessment)
- editors might also get a certain number of tokens per year (to compensate them
fairly for their service)
- there is a market for reviewing tokens (they can be purchased, sold, traded)
- system could be bootstrapped by allowing token accounts to go into debt
(perhaps up to -3)
- free tokens could be awarded in case of hardship (e.g., student-only papers)
- journal / organisation (e.g., AAAI, Canadian AI Association, SAT Association)
acts as the bank for reviewing tokens
- could be applied to conferences or journals
- could be easily and efficiently supported by suitably designed submission handling
software
- would combine well with reviewer rating, rich reviewing (since it mitigates concerns
about reviewer workload arising in the context of that idea), continuous submissions
[this idea can be contrasted with open-access models, where authors pay for publication;
unlike in many other disciplines, where professional editing and typesetting still
incur a non-trivial cost to the publisher, in CS the true cost lies in reviewing]
Which problem does this address?
Overly incremental publications
(with implications on reviewing load, ...);
quality of published papers;
reviewing quality; reviewer workload
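The token bookkeeping sketched above (authors pay tokens to submit, reviewers earn
tokens for useful reviews, accounts may be bootstrapped into debt down to -3) can be
made concrete as follows. The cost of 2 tokens per submission is one point within the
2-3 range suggested above, and all names are illustrative assumptions:

```python
DEBT_LIMIT = -3       # accounts may be bootstrapped into debt this far
SUBMISSION_COST = 2   # within the 2-3 tokens per submission suggested above

class TokenBank:
    """Hypothetical sketch of the journal/organisation acting as the
    bank for reviewing tokens."""

    def __init__(self):
        self.accounts = {}  # person -> token balance

    def balance(self, person):
        return self.accounts.get(person, 0)

    def submit(self, payer):
        """Charge a submission to any one of the authors, refusing it
        if the account would fall below the debt limit."""
        if self.balance(payer) - SUBMISSION_COST < DEBT_LIMIT:
            return False
        self.accounts[payer] = self.balance(payer) - SUBMISSION_COST
        return True

    def credit_review(self, reviewer, useful=True):
        """Award one token per review, unless it was judged useless
        (based on reviewer rating or the editor's/PC's assessment)."""
        if useful:
            self.accounts[reviewer] = self.balance(reviewer) + 1

bank = TokenBank()
ok1 = bank.submit("alice")    # bootstrapping: balance goes from 0 to -2
ok2 = bank.submit("alice")    # -2 - 2 = -4 would exceed the debt limit
bank.credit_review("alice")   # a useful review brings her back to -1
```

The market for tokens (purchase, sale, trade) and hardship waivers would be further
operations on the same ledger; the debt limit is what makes bootstrapping possible
without anyone having reviewed yet.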
last update: 2013/07/20 [hh]