How to determine quality of papers

There are different metrics to measure ‘how good’ a paper is. None is perfect or completely accurate. The best way to judge is ultimately personal: you will develop your own preferences and ‘quality indicators’ over time.

The San Francisco Declaration on Research Assessment (DORA) opens with this statement:

“There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties.”

This is why some organizations (listed below) provide rankings that are meant to evaluate research fairly according to some criteria. Unfortunately, a single criterion cannot measure and assess quality in a coherent way across different disciplines, communities, eras, and topics.

Common rankings for references

For starters, you can follow guidelines about the quality of the venue where a work is accepted for publication.

Common metrics

Keep in mind that people like simplistic statements. This means that they will pick a metric and use it to compare people/works and assess the quality of research. However, no matter what metric you choose, it will not be completely accurate and fair (especially if you attempt to compare researchers/papers from neighboring fields). This is because any metric compresses complex, multidimensional data (e.g., venue quality, number of citations, number of total publications, number of authors, reviewers’ expertise…) into a single value, which of course simplifies things, but at the same time loses important factors.
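To see this information loss concretely, consider the h-index (mentioned in the footnotes below): the largest number h such that the author has h papers with at least h citations each. The sketch below is an illustrative implementation, with made-up citation counts; it shows two authors with the same total citations but very different h-indices.

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still 'supports' an h-index of `rank`
        else:
            break
    return h

# Hypothetical authors: both have 48 total citations over 6 papers,
# yet the single-value metric tells two very different stories.
print(h_index([10, 9, 9, 8, 7, 5]))  # steady output -> 5
print(h_index([40, 4, 2, 1, 1, 0]))  # one hit paper -> 2
```

The second author’s highly cited paper is invisible once the citation list is compressed into one number, which is exactly the kind of factor a single metric loses.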


  1. The Google Scholar ranking is based on the h-index and is skewed towards security and applications rather than core crypto. As of now (July 2022), I do not agree much with this ranking for my own research field. ↩︎

  2. How many publications is too many depends on one’s field of research and career stage. Within cryptography, I’d say a good average is 1-2 papers/year for PhD students, 2-4/year for postdocs, and 2-5/year for professors. ↩︎