How to determine the quality of papers
There are different metrics to measure ‘how good’ a paper is. None is perfect or completely accurate. The best way to judge is personal, and you will develop your own preferences and ‘quality indicators’ over time.
The San Francisco Declaration on Research Assessment (DORA) opens with this statement:
There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties.
This is why some organizations (listed below) provide rankings that are meant to evaluate research fairly according to some criteria. Unfortunately, no single criterion can measure and assess quality in a coherent way across different disciplines, communities, eras, and topics.
Common rankings for references
For starters, you can follow guidelines about the quality of the venue where a work is accepted for publication.
- CORE ranking: the Computing Research and Education (CORE) Australian conference ranking portal follows DORA criteria and is my personal favorite (though it does not include many of the nice venues we have within cryptography).
- Google Scholar: has a ranking of security conferences[^1].
- I am not aware of an official ranking for venues within cryptography (if you are, please let me know!), especially within cryptographic protocols and applications. So I made a collection of venues I like and provide rankings according to my personal experience + CORE (when available).
Common metrics
Keep in mind that people like simplistic statements. This means that they will pick a metric and use it to compare people/works and to assess the quality of research. However, no matter what metric you choose, it will not be completely accurate and fair (especially if you attempt to compare researchers/papers from neighboring fields). This is because any metric compresses complex, multidimensional data (e.g., venue quality, number of citations, number of total publications, number of authors, reviewers’ expertise…) into a single value, which of course simplifies things, but at the same time loses important factors.
- Number of publications: the more the merrier! But keep in mind that having too many publications[^2] is an indication either of genius or (most commonly) that little or no contribution comes from that author in each work. Too few publications (especially per-year publications) may mean unproductivity, or simply bad luck with Reviewer 2 😑
- Number of citations: intuitively, the more a paper is cited, the better known it is in the community. One may wish this implied that highly cited works/researchers are the strongest, but keep in mind that many of the most cited papers are actually surveys, systematizations of knowledge, broken constructions, and attacks.
- h-index: defined as the maximum value of h such that the given author/journal has published at least h papers that have each been cited at least h times (see the code sketch after this list). The index is designed to improve upon simpler measures such as the total number of citations or publications. One’s h-index is important ‘in general’, but it does not always accurately capture the quality of one’s research.
- h-index for events: attempts to transfer the quality/impact of the event (conference/journal) to the individuals publishing there. Keep in mind that h-indexes tend to indicate the popularity of a topic rather than the quality of a work. Moreover, h-indexes create positive feedback loops: if a venue goes up in the ranking, more people are willing to send their good work there, which improves the quality of the accepted works and in turn increases the h-index.
- i10-index: defined as the number of publications with at least 10 citations. This very simple measure is used only by Google Scholar, to help gauge the productivity of researchers.
- Impact factor (IF): measures how frequently the average article in a journal is cited in a particular year. Concretely, the (two-year) impact factor of a journal for year Y is the number of citations received in Y by articles published in Y-1 and Y-2, divided by the number of articles the journal published in those two years.
- Corner cases: in cryptography, core theoretical results are often of higher quality than many applied security papers. But doing theory is hard, so you don’t see as much follow-up work on it (and thus fewer citations, with publications of this nature spread across the years). Sometimes a paper becomes ‘popular’ just because it was written at the right time and gained momentum (citations/visibility) quickly.
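
To make these definitions concrete, here is a minimal Python sketch of how the h-index, i10-index, and two-year impact factor are computed. The citation counts, journal numbers, and function names are made up purely for illustration; real services such as Google Scholar compute these metrics over their own publication databases.

```python
# Minimal sketch of the citation metrics above.
# All data and names are hypothetical, for illustration only.

def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i  # the i-th most cited paper still has >= i citations
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations (the Google Scholar measure)."""
    return sum(1 for c in citations if c >= 10)

def impact_factor(citations_this_year, articles_prev_two_years):
    """Two-year IF: citations received this year to articles published in
    the previous two years, divided by the number of those articles."""
    return citations_this_year / articles_prev_two_years

# Hypothetical citation counts for one author's papers:
papers = [52, 31, 12, 10, 8, 4, 2, 0]
print(h_index(papers))         # 5   (five papers have at least 5 citations each)
print(i10_index(papers))       # 4   (four papers have at least 10 citations)
print(impact_factor(210, 70))  # 3.0 for a hypothetical journal
```

Note how lossy these numbers are: the author above could be a PhD student with one breakthrough paper or a professor coasting on an old survey, and the metrics would not tell the difference.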
[^1]: The Google Scholar ranking is based on the h-index and is skewed towards security and applications rather than core crypto. As of now (July 2022), I do not agree much with this ranking for my own research field.
[^2]: How many publications is too many depends on one’s field of research and career stage. Within cryptography, I’d say that a good average for PhD students is 1-2 papers/year, for postdocs 2-4/year, and for professors 2-5/year.