Monday, 13 March, 2023 | 14:00 | Applied Micro Research Seminar

Carlo Rasmus Schwarz (Bocconi University), "Measuring Science: Performance Metrics and the Allocation of Talent"

Carlo Rasmus Schwarz, Ph.D.

Bocconi University, Italy

Join online: https://call.lifesizecloud.com/17485887 (Passcode: 5996)


Authors: Carlo Schwarz, Sebastian Hager and Fabian Waldinger

Abstract: Performance measures are ubiquitous in the labor market. In the sciences, performance measures affect hiring, promotions, outside offers, research funding, and the prestige of academics. Nowadays, many performance measures, such as h-indices or impact factors, are based on citations. Although citations are now a staple of modern science, it was impossible to measure them systematically until the 1960s, when Eugene Garfield published the Science Citation Index (SCI). While scientists had always had a rough sense of the influence of scientific work, the SCI provided the first systematic tabulation of citations. For the first time, it became possible to identify the most highly cited papers and researchers in any given field. Scientists and departments therefore quickly started to use the SCI, and all leading universities purchased the SCI books. Soon after the introduction of the SCI, university administrators, academics, and sociologists of science began to use citations to assess academics' performance.

In this paper, we study how the introduction of the SCI shaped modern science. We analyze how the visibility of citations affected assortative matching between scientists and departments. We also study the effect of the SCI on career outcomes such as promotions. Finally, we document differential effects among disadvantaged groups in academia.

To quantify how the SCI affected modern science, we use data from the World of Academia Database (Iaria et al., 2022) and extract all U.S. academics for 1956 and 1969. We combine these data with information from the Clarivate Web of Science. This allows us to measure the number of publications and citations of each academic and to construct historical department rankings. Whereas today we can measure citations and publications for the entire twentieth century, contemporaries could only observe citations after the first publication of the SCI in 1961. To measure which citations were observable at the time, we hand-collect information from historical volumes of the SCI.
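As a rough illustration of this data construction, the sketch below flags each citation as visible or invisible depending on whether the citing article's journal-year was covered by the historical SCI volumes. The column names, the toy data, and the coverage table are hypothetical stand-ins, not the paper's actual data.

```python
# Minimal sketch (pandas assumed): flagging which citations were visible in the
# historical SCI. All names and values below are illustrative assumptions.
import pandas as pd

# Each row: one citation, identified by the citing article's journal and year.
citations = pd.DataFrame({
    "cited_author_id": [101, 101, 102],
    "citing_journal":  ["J. Chem. Phys.", "Obscure Bull.", "Phys. Rev."],
    "citing_year":     [1962, 1962, 1963],
})

# Hand-collected SCI coverage: journal-years whose citing articles appeared in the SCI.
sci_coverage = pd.DataFrame({
    "citing_journal": ["J. Chem. Phys.", "Phys. Rev."],
    "citing_year":    [1962, 1963],
})
sci_coverage["visible"] = True

# A citation is "visible" if the citing article's journal-year was covered by the SCI.
citations = citations.merge(sci_coverage, on=["citing_journal", "citing_year"], how="left")
citations["visible"] = citations["visible"].fillna(False).astype(bool)

# Per-academic counts of visible vs. invisible citations.
counts = citations.groupby(["cited_author_id", "visible"]).size().unstack(fill_value=0)
print(counts)
```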

In the first part of this article, we investigate assortative matching between scientists and universities. With the introduction of the SCI, the correlation between scientists' citations and the rank of their department increased by 59%. This suggests that the introduction of the SCI increased the importance of citations in the evaluation of scientists. However, this observed increase could also have been caused by other factors. For example, funding might have become more concentrated at the most highly ranked departments, causing the academics at those departments to produce the most relevant and highly cited research. To isolate the effect of the SCI, we exploit the fact that, for technical reasons, the SCI only covered citations from citing articles in a subset of journals and years. Hence, only citations from citing articles in this subset were visible to the scientific community. Other citations remained invisible because they were not covered in the SCI. Today, however, these citations can be observed with richer citation data from the Clarivate Web of Science.

We can thus estimate the effect of visible citations, relative to invisible citations, on assortative matching between academics and departments. We find that visible citations have a four times larger impact on the department rank of scientists than invisible citations do. Specifically, a scientist with a 10-percentile higher individual-level visible citation rank matched, on average, with a 2.3-percentile better department. In contrast, a scientist with a 10-percentile higher invisible citation rank matched with a 0.57-percentile better department. This pattern holds even if we control for detailed publication records, i.e., for the number of publications in each journal (e.g., two Nature, one Science, one PNAS) and year (e.g., one publication in 1956, two in 1958, and one in 1960). The fact that invisible citations have some predictive power is consistent with the notion that, even in the absence of citation data, academics had a rough sense of the importance of other research.
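A regression along these lines could, in principle, be specified as below. This is a minimal sketch under assumed variable names (dept_rank, visible_rank, invisible_rank as 0-100 percentile ranks, n_publications as a simple control), not the authors' actual code or specification; the coefficients reported above (roughly 0.23 for visible and 0.057 for invisible citations) would correspond to the slopes in such a regression.

```python
# Minimal sketch of the assortative-matching regression; variable names,
# the input file, and the control set are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("academics_1969.csv")  # hypothetical analysis file

# Department percentile rank regressed on visible and invisible citation percentile ranks.
model = smf.ols(
    "dept_rank ~ visible_rank + invisible_rank + n_publications",
    data=df,
).fit(cov_type="HC1")

print(model.summary())
# Per the text: a 10-percentile higher visible rank ~ 2.3-percentile better department
# (slope ~0.23), versus ~0.057 for the invisible citation rank.
```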

In the second part of the paper, we investigate the heterogeneous benefits of citation metrics for different groups. First, we estimate a non-parametric regression for scientists at different individual-level (visible or invisible) citation ranks. We show that scientists in higher percentiles of the individual-level citation distribution, especially those above the 90th percentile, benefited disproportionately from the availability of citation metrics. Second, we find that the availability of citation metrics disproportionately benefited highly cited academics who had originally been in lower-ranked departments; citations thus allowed the discovery of these "hidden stars." Third, we investigate whether historically disadvantaged groups benefited more from citation metrics. We find that neither female nor Asian academics benefited disproportionately from the availability of citation metrics.
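One way to let the effect vary across the citation distribution, in the spirit of this heterogeneity exercise, is to interact the visible citation rank with decile bins of that rank. The sketch below is an illustrative assumption about the specification, not the paper's non-parametric estimator, and reuses the hypothetical variable names from the previous sketch.

```python
# Minimal sketch: separate visible-citation slopes by decile of the citation rank.
# Binning by deciles is an assumption for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("academics_1969.csv")  # hypothetical analysis file
df["rank_bin"] = pd.cut(df["visible_rank"], bins=range(0, 101, 10), include_lowest=True)

# Interacting the rank with its own decile bin lets the slope differ across bins;
# if stars benefit most, the top-decile slope should be the largest.
model = smf.ols("dept_rank ~ visible_rank:C(rank_bin) + invisible_rank", data=df).fit()
print(model.params.filter(like="rank_bin"))
```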

In the last part of the article, we study the impact of the SCI on career outcomes, such as promotions. For scientists who were assistant or associate professors in 1956, we investigate whether they were promoted to full professor by 1969. We find that the promotion probability increased by 4.2 percentage points (or 5.9 percent) for scientists with a 10-percentile higher visible citation rank. This indicates that academic institutions in the United States used citations in the SCI as a metric for promotion decisions. In contrast, invisible citations did not affect promotion probabilities. Thus, scientists whose citations were not covered by the SCI were overlooked in promotion decisions.
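The promotion analysis can be illustrated with a linear probability model on the restricted 1956 sample. Again, the input file, sample-restriction variable, and outcome name are assumptions based on the description above; the reported 4.2-percentage-point effect per 10 percentiles would correspond to a coefficient of about 0.0042 on the visible citation rank.

```python
# Minimal sketch of the promotion analysis as a linear probability model.
# File, variable names, and sample restriction are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("academics_1956.csv")  # hypothetical analysis file
sample = df[df["rank_1956"].isin(["assistant", "associate"])]

model = smf.ols(
    "promoted_to_full_by_1969 ~ visible_rank + invisible_rank + n_publications",
    data=sample,
).fit(cov_type="HC1")

print(model.params[["visible_rank", "invisible_rank"]])
# Per the text: ~0.0042 on visible_rank (4.2 p.p. per 10 percentiles), ~0 on invisible_rank.
```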