Incredible insights and eureka moments don’t usually come out of thin air. Despite what The Hitchhiker’s Guide to the Galaxy would have you believe, a student left to sweep up the lab never has the sudden insight that if an infinite improbability drive is a virtual impossibility, it must have a finite improbability; that all one has to do to make one work is figure out exactly how improbable it is, feed that figure into the finite improbability generator, give it a fresh cup of really hot tea, and turn it on.
That just never happens.
Instead, it takes years of hard work and dedication: completing research studies, publishing papers in a variety of peer-reviewed journals and convincing funding agencies to give you millions of dollars to conduct your next study. Naturally, those are exactly the kinds of researchers that universities and institutes are trying to hire. Rock-star researchers who bring in funding, prestige, good colleagues and great graduate students are always in high demand.
There is, however, one small challenge. How does a school pick out the researcher who is going to be the best investment amongst the multitudes of promising young Ph.D. students?
Historically, search committees have tried to predict success by looking at a candidate’s past work, bringing them in for interviews and going with their gut. But researchers at Northwestern University are looking to the future. They’ve created an algorithm that predicts young scientists’ success with a high rate of accuracy.
The formula was developed in the lab of Konrad Kording, senior author of the Nature paper describing it and associate professor in physical medicine and rehabilitation at Northwestern University Feinberg School of Medicine. It takes into account a researcher’s h-index (a measure of the quality and quantity of papers published), the number of articles written, the years since publishing the first article, the number of distinct journals one has published in and the number of articles in high-impact journals.
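For readers unfamiliar with the h-index: it is the largest number h such that the researcher has at least h papers with at least h citations each. It can be computed from a list of per-paper citation counts; here is a minimal sketch (this is an illustration of the standard metric, not the Northwestern group’s code):

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # later papers only have fewer citations
    return h

# Five papers with these citation counts: four of them
# have at least 4 citations, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The Kording group’s insight was that this single number, while useful, is a weaker predictor on its own than a formula combining it with the other publication features listed above.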
The algorithm was put to the test on data from 3,293 scientists (3,085 neuroscientists, 57 Drosophila scientists and 151 evolutionary scientists) for whom the team constructed a publication, citation and funding history. The results proved to be twice as accurate at predicting future success as the h-index alone. What’s more, the number of articles written, the diversity of publication venues and the number of top articles over time were all particularly influential predictors of future success.
Of course, there is no substitute for carefully evaluating people in person, face-to-face. This is just another tool in the toolbox for committees to use when making important decisions about their institution’s future.
It also shows that impr