Saturday, January 12, 2008

Ranking Research(ers)

I hear about ranking all the time: ranking of universities, web search results, job candidates, peers for annual review, whatever. I have never seen a ranking that satisfies everyone.

Here is a ranking of researchers (it is keyed off DBLP and therefore has database+theory bias), part of a useful site for conference search. I don't know the underlying algorithms or premises. You can find interesting inversions in these lists and be tickled.

Instead, we can do some research. Is there an axiomatic approach to ranking? That is, can we state properties that a ranking must satisfy (beyond being a total/partial order), and see whether such rankings exist? One could ask: should there be a significant quality gap between the top 3 and those below 10? Should rankings obtained by projecting on different attributes look similar? Should a ranking be relatively stable over time? A principled study that goes beyond the machine learning formulation would be cool. It seems to me that the axioms acceptable for ranking universities are different from those acceptable for ranking, say, researchers or job candidates or web search results.

Update: People asked me about the source of the ranking above. See here for a writeup by Kuhn and Wattenhofer. PageRank is involved.

Update 2: I was wondering whether this list is similar to one you could obtain by a simple sort, such as a function of the number of publications, the degree in the publication graph, etc.
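As a toy illustration of the question in Update 2, here is a minimal sketch (in Python, on made-up data; the actual Kuhn-Wattenhofer algorithm and its parameters are not specified here) that runs a PageRank-like centrality computation on an author-conference bipartite multigraph and compares the resulting author order with a plain publication count:

```python
from collections import Counter

# Hypothetical data: one (author, conference) edge per paper.
papers = [
    ("alice", "STOC"), ("alice", "STOC"), ("alice", "FOCS"),
    ("bob", "SODA"), ("bob", "STOC"),
    ("carol", "SODA"), ("carol", "SODA"), ("carol", "SODA"),
]

nodes = sorted({a for a, _ in papers} | {c for _, c in papers})
idx = {node: i for i, node in enumerate(nodes)}
n = len(nodes)

# Degree of each node in the multigraph (parallel edges count).
deg = Counter()
for a, c in papers:
    deg[a] += 1
    deg[c] += 1

# PageRank by power iteration on the undirected bipartite multigraph:
# each node spreads its rank equally over its incident edges.
damping = 0.85
rank = [1.0 / n] * n
for _ in range(100):
    new = [(1 - damping) / n] * n
    for a, c in papers:
        new[idx[c]] += damping * rank[idx[a]] / deg[a]
        new[idx[a]] += damping * rank[idx[c]] / deg[c]
    rank = new

authors = sorted({a for a, _ in papers})
pagerank_order = sorted(authors, key=lambda a: -rank[idx[a]])
count_order = sorted(authors, key=lambda a: -deg[a])
print("PageRank-style order:", pagerank_order)
print("Raw-count order:     ", count_order)
```

On this tiny example the two orders can already diverge: alice and carol tie on raw count, but the centrality score distinguishes them by which venues their papers sit in and who else publishes there. Whether the real list separates from a simple count in the same way is exactly the open question.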

7 Comments:

Blogger artoo said...

Good achievement to be higher on the "overall" list than the "theory list"!

The list needs to be cleaned up --- Martin Farach-Colton and Lane Hemaspaandra have split personalities (duplicate entries), for example.

3:05 AM  
Blogger metoo said...

Or that one is less of a theoretician than a computer scientist. Half full or half empty? :)

9:17 AM  
Anonymous Anonymous said...

This has a bias against people with fewer co-authors.
For example, there are several people with strong single-author publications who do not make any of the STOC/FOCS/SODA lists, e.g. both best paper winners from STOC 2007.

4:34 PM  
Anonymous Anonymous said...

yoohoo, somebody owes me ten bucks!

this is the most awesome ranking function ever.

-vijay k

4:54 PM  
Blogger Rasmus Pagh said...

For more info on the background of the ranking, see this article from a recent SIGACT News. It is a very entertaining read. The author ranking measures "how central is this author," based on a PageRank-like computation. Clearly, people who publish alone are not favored by this measure.

12:17 AM  
Anonymous Anonymous said...

Judging from the SIGACT News article, it uses only the author-conference bipartite multigraph as its source of information. Achievement according to this list is attained by publishing many papers in "good" conferences, where the good conferences are the ones to which the high-ranking authors send many of their papers. So, in terms of ranking theorists, I don't think this ends up being very different from merely counting how many STOC/FOCS papers they have. It doesn't seem to take into account how highly cited the papers are, for instance.

Re the anonymous comment about bias against having few co-authors: I don't think so, as co-authorship seems not to be part of the input information either. If you have few co-authors but still publish frequently in FOCS and STOC, you should do well on this list. But the point about best paper awards is well taken; the quality of the papers is measured solely by what conference they appear in.

2:06 PM  
