In the wake of the discussion of how different fields have different measures of evaluation (a view I am not 100% on board with: if a subgroup chooses a method of evaluation antithetical to the mores of the rest of academe, don’t be surprised if the group gets little respect outside its narrow circle), it was interesting to flip through a recent issue of Nature (thanks, Ilona!). Beyond a fascinating article on the likelihood of Mercury colliding with the Earth in the next 3 billion years or so (about 1 in 2,500, if I read things correctly), I was struck by the apparently required contribution paragraph for co-authored papers:
J.L. designed the study, performed the simulations and their analysis, and wrote the paper. M.G. wrote the computer code.
(Other articles with more co-authors divvy up the work in more detail.)
We don’t do this in operations research (at least as far as I have seen). I have made a point of always going with alphabetical author listing, which generally puts me last (though I have sought out co-authors Yildiz, Yunes, and Zin recently) and which has the aura of equal participation, even in cases where the participation is not so equal. Other people try to order by contribution, though it is unclear what metric to use in such a case. In promotion and tenure cases, we typically (at our school) do not try to parse out individual contributions to papers, though we do discuss individual strengths and weaknesses.
I think this sort of paragraph would actually be a boon to our literature. It would force some people to think about why they are really part of a paper, and it would add honesty to the system. Of course, it would also add to the arguing and power struggles that can arise in research collaborations.
At my campus, we are supposed to provide this information to the campus-level committee that reviews our cases every two or three years. It doesn’t have to be at the level of detail you describe — it can be a percentage of the total effort, or “primary author”, “secondary author”, “student work under my supervision”, etc. — but they do require some assistance in sorting out what the co-authorships mean.
I suppose such a system might also cut down on meaningless co-authorships where someone is listed merely for having been present during a conversation or being the head of a lab but not actually making a measurable contribution to the paper itself. That might be a good thing.
In David’s case (UC Irvine), it would be interesting to cross-check vitae to determine whether the co-authors’ views of their roles were consistent. The Nature system forces consistency, of course.
Haven’t seen that authoring system in economics either.
“if a subgroup chooses a method of evaluation antithetical to the mores of the rest of academe, don’t be surprised if the group gets little respect outside their narrow group”
Perhaps I can give a counter-argument that may convince you of the opposite: making comparisons across fields is unfair and often produces strange outcomes.
A few years back, my school decided to classify journals as “top” journals or not. (Top researchers publish in top journals, right?)
So, how do you rank journals? Well, the committee decides to be objective and fair and to avoid personal biases, so they use the impact factor for this purpose. What is the result? We find that our Operations group is at the absolute bottom, publishing only in low-quality journals with significantly lower impact factors than the journals used by the rest of the school. Only a couple of OR journals have impact factors above 2, and many have impact factors below 1. Horrible! In contrast, Econometrica has an impact factor close to 5, and other economics journals are in the 4-5 range. Even searching across all OR journals, we could not find any with comparably high impact factors.
What does this mean? That OR researchers are of low quality and publish only in low-quality journals compared to the economists? Economists thought so.
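To make the arithmetic concrete, here is a minimal sketch in Python. The journal list and the specific impact factors are my own illustrative assumptions (only the rough magnitudes, Econometrica near 5 and OR journals below 2, come from the numbers above); it simply contrasts a flat school-wide impact-factor cutoff with a comparison of each journal against its own field.

```python
# Sketch: flat impact-factor cutoff vs. within-field comparison.
# All impact factors below are illustrative, not real data.
from statistics import mean

journals = {
    # name: (field, impact factor)
    "Econometrica":         ("economics", 5.0),
    "J. Political Economy": ("economics", 4.5),
    "Operations Research":  ("OR",        1.5),
    "Management Science":   ("OR",        1.9),
    "EJOR":                 ("OR",        0.9),
}

# Rule 1: one "top journal" cutoff applied to the raw impact factor.
CUTOFF = 3.0
for name, (field, impact) in journals.items():
    label = "top" if impact >= CUTOFF else "not top"
    print(f"{name:22s} ({field}): IF {impact:.1f} -> {label}")

# Rule 2: compare each journal with the average impact factor of its own field.
field_means = {
    field: mean(i for f, i in journals.values() if f == field)
    for field in {f for f, _ in journals.values()}
}

print()
for name, (field, impact) in journals.items():
    ratio = impact / field_means[field]
    print(f"{name:22s} ({field}): {ratio:.2f}x its field's average IF")
```

Under the flat cutoff, every OR journal comes out "not top"; compared within its own field, the same journals look perfectly respectable. The raw cross-field comparison hides exactly that difference in citation norms.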