H-indices and how academic publishing has changed: Feynman and Einstein just aren’t that impressive anymore

August 4, 2011 at 9:22 am

There’s an interesting list of the computer scientists with the top “h-indices.”  An h-index is a metric of a researcher’s productivity and the impact of their publications.  There are 500 computer scientists with an h-index of 40 or higher on that list.  I know that h-indices have been discussed in every promotion and tenure meeting I’ve attended for several years now.
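For readers who haven’t met the metric: an h-index is the largest number h such that h of a researcher’s papers have each been cited at least h times. A minimal sketch in Python (the citation counts in the example are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    # Sort citation counts in descending order, then find the last
    # position where the count still meets or exceeds its 1-based rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited [10, 8, 5, 4, 3] times.
# Four papers have at least 4 citations, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```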

Google now has a “Citations” feature where famous scientists’ h-indices and citation records are published.  Richard Feynman has an h-index of 37.  Albert Einstein has an h-index of 44.  A 37 might not earn Feynman a full professorship in some academic departments today.  I know lots of folks with h-indices over 44, but none of them are household names the way Einstein is.

Maybe this gap between measurable impact and perceived impact says something about just how valuable the h-index is for measuring impact and productivity.  The h-index assumes that impact can be measured in terms of (a) publishing a lot and (b) having lots of people reference what you publish.  It’s inherently incestuous: academic impact matters only in terms of academic citations.  A best-selling book that millions buy and thousands talk about does not count towards an h-index except through who cites it.  Blogs and other new media, which may have real impact (on thinking, on actions), are not included in h-index calculations at all.

Or maybe this observation says something about how academic publishing has changed.  Feynman and Einstein had a great impact, and neither (a) needed to publish as much as today’s scientists to achieve that impact nor (b) was expected to publish as much as faculty are today.  Maybe this h-index gap reflects our desire to quantify everything more than our desire to measure real and significant impact.



18 Comments

  • 1. Alan Kay  |  August 4, 2011 at 9:44 am

    Hi Mark

    One of your best posts! I don’t think the problem is a desire to quantify, but certainly a desire to avoid having to make real assessments, and possibly a desire to avoid the real nature of the bell curve in a popular and developed academic field (in the sense that people like Einstein and Feynman “just aren’t fair” to the majority, whereas anyone can publish and cite if the thresholds are set low enough).



  • 2. gasstationwithoutpumps  |  August 4, 2011 at 9:47 am

    H-index is very field-dependent. At least the Google Scholar measurement includes a wider range of publications than the more traditional citation counting in ISI, which omitted conference publications and publications in journals outside a narrow range.

    My 4th most highly cited paper is in a journal that ISI doesn’t index, and many of the citations to it are also in such journals—but even Google scholar is limited to fairly academic citations. The 266 citations there are far fewer than the 183,000 hits for a fairly specific Google web search on the same algorithm.

    Of course, it disturbs me a bit to see on the list people I know to be not all that bright. I can’t help thinking that if only I could get over my writer’s block I could push my H-index from 31 up to 40 or 50, but I’m probably a bit delusional about that.

  • 3. Mike Lutz  |  August 4, 2011 at 5:28 pm

    What was Kurt Gödel’s h-index? Turing’s?

    It looks like the h-index favors those who hit for average over those who hit infrequent but towering blasts out of the park.

    With all due respect, I’d rank Einstein and Feynman in front of all the folks on the list you cite.

  • 4. Gilbert Bernstein  |  August 4, 2011 at 7:40 pm

    Pardon my incredulity as a mere lowly grad student, but seriously? You’re telling me that the h-index (or pick your other favorite flawed measurement device) is actually considered by promotion/tenure committees? That’s really depressing if so. Do they consider Erdős number as well? Or perhaps Erdős–Bacon number for the more interdisciplinary professors?

    • 5. Mark N.  |  August 5, 2011 at 2:59 pm

      It’s really a bit baffling. A bigger scandal than the h-index, imo, is the weight given to the Thomson Impact Factor, a particularly statistically illiterate piece of nonsense with several glaring band-aids barely covering up the ugliest parts (it’s really easy to game, so Thomson manually edits it to exclude the worst gamers). That these sorts of things are done in science departments just makes it all the more baffling. It’s almost like a parody of scientism: scientists will accept obviously crap metrics, as long as they “look sciencey” in form; you know, with numbers and an equation and data and stuff.

      • 6. gasstationwithoutpumps  |  August 6, 2011 at 11:32 am

        The H-index was introduced to improve on previous practice, which was either to count the total number of citations in ISI’s index or to count the number of papers in “high impact factor” journals. Both of these metrics are cruder and even less associated with quality than the H-index is. I don’t think anyone really believes the H-index is a good measure, but it is a lot better than “I like this guy”, which is what you tend to get with purely subjective measures of quality.

        • 7. Alan Kay  |  August 6, 2011 at 11:38 am

          The question in such comparisons is not “is x better than y?” but “is x above any reasonable threshold?”. (If it isn’t, then it doesn’t matter whether “x is better than y”.)

          This mistake is made all the time by our poor human brains, to the point that we need a learned heuristic to guard against it.

          It is especially prevalent at all levels of education, and leads to rejoicing when reading (etc.) scores “improve” a little, while the students are still essentially illiterate.

          • 8. gasstationwithoutpumps  |  August 6, 2011 at 12:56 pm

            No—since decisions about tenure will be made no matter what data are available, even modest improvements in the data are useful. If H-index is positively correlated with desirable properties for granting tenure, and more so than other criteria that can be used, then it is useful, even if badly flawed. If there are several weak indicators of quality, then combining them can result in better decisions than just picking the “best” one.

            That is, the question is whether considering H-index leads to better or worse decisions on average than not considering it. Despite its serious flaws, I’m convinced that, on average, considering H-index leads to better decisions than ignoring it.

          • 9. Gilbert Bernstein  |  August 6, 2011 at 7:26 pm

            Ahh, thanks for the clarification Alan. That makes a lot more sense/seems reasonable.

  • 10. Algebra++  |  August 5, 2011 at 4:23 am

    1st guy is looking at the ground under a streetlight. 2nd guy walks up. ‘Whatcha doin’?’ 1st: ‘Looking for my keys.’ 2nd: ‘Is this where you lost them?’ 1st: ‘No, but the light is better over here.’
    I find this a far-too-common attitude among administrators and government officials, since they generally don’t have technical backgrounds. They fixate on a particular parameter, not because it’s the right one, but because it’s an easy one to watch.

  • 11. Alan Kay  |  August 6, 2011 at 1:09 pm

    To gasstationwithoutpumps | August 6, 2011 at 12:56 pm

    I suppose … if the choice is between below-average and average in actual ability and accomplishments. This seems wrong.

    And, I don’t think this is remotely right if the process excludes top of the bell curve folks (for example, Ivan Sutherland is not on the list. Nor is Bob Barton. Etc.)

    I remember Dave Evans telling me that he had to fight to get Bob Barton a good position (in the 60s yet!) because he only had a Master’s degree in Math and had only written a few papers. The fact that he was the world’s greatest computer designer and one of the great geniuses in our field had very little impact on the university even then. But Dave got his way after a lot of unnecessary work.

  • 12. Sugel  |  August 6, 2011 at 5:09 pm

    In a paper published in the November 15 issue of the Proceedings of the National Academy of Sciences, which appears this week in the journal’s early online edition, Hirsch explains that his h-index can give a reliable estimate of the importance, significance, and broad impact of a scientist’s cumulative research contributions. “For a person to have a high h-index is not an accident,” Hirsch says, after testing his method on scientists in a variety of disciplines and circulating his formula on physics bulletin boards for other scholars to test. The h-index is derived from the number of times a scientist’s publications are cited in other papers, but is calculated in a way to avoid some of the problems associated with counting large numbers of marginal papers or high-profile coauthors…

  • 13. Tony Hursh  |  August 8, 2011 at 5:40 pm

    Other notable absences: Papert, Kay :-), Cook, Codd, and Dijkstra.

  • 14. Filip  |  February 29, 2012 at 1:43 pm

    Just ran across your article, nice one.

    However, I just wanted to point out that the information about the h-indices of Feynman and Einstein has changed on Google Scholar.

    That probably means that the situation with h-indices is not that bad for great scientists after all 🙂


  • 15. Tyrone  |  December 29, 2017 at 11:43 am

    What people forget about h-indices is that your h-index can never exceed your total number of publications. So people like Feynman & Einstein, who published relatively few papers but with high impact, have relatively low h-indices.
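Tyrone’s cap follows directly from the definition: with n papers you can never have n+1 papers cited n+1 times, so the h-index is bounded by the number of publications. A toy comparison in Python (citation counts invented for illustration):

```python
def h_index(citations):
    # h = number of papers whose citation count meets or exceeds
    # their 1-based rank when sorted in descending order.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, 1) if c >= rank)

# Three towering papers: h is capped at 3, however heavily they are cited.
few_huge = [5000, 3000, 2000]
# Sixty moderately cited papers: h climbs to 45.
many_moderate = [45] * 60

print(h_index(few_huge))       # prints 3
print(h_index(many_moderate))  # prints 45
```

The sparse-but-towering publication record loses badly on this metric, which is exactly the Feynman/Einstein pattern described above.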

  • […] Attempts to balance quality and quantity of science for a given researcher via something called the h-index are similarly problematic. (See here for Philip Ball’s insightful critique of h-indices). I […]


  • […] incest” that consists of academics grading each other in a baroque procedure known as the “h-index.” It is loved by bureaucrats, but it rewards conformity and lack of […]

