HAL Id: hal-01771422
https://hal-univ-rennes1.archives-ouvertes.fr/hal-01771422
Submitted on 19 Apr 2018
David Moher, Florian Naudet, Ioana A Cristea, Frank Miedema, John P A Ioannidis, Steven N Goodman
To cite this version:
David Moher, Florian Naudet, Ioana A Cristea, Frank Miedema, John P A Ioannidis, et al. Assessing scientists for hiring, promotion, and tenure. PLoS Biology, Public Library of Science, 2018, 16 (3), pp. e2004089. doi:10.1371/journal.pbio.2004089. hal-01771422
Assessing scientists for hiring, promotion, and tenure
David Moher 1,2*, Florian Naudet 2,3, Ioana A. Cristea 2,4, Frank Miedema 5, John P. A. Ioannidis 2,6,7,8,9, Steven N. Goodman 2,6,7

1 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; 2 Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, United States of America; 3 INSERM CIC-P 1414, Clinical Investigation Center, CHU Rennes, Rennes 1 University, Rennes, France; 4 Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania; 5 Executive Board, UMC Utrecht, Utrecht University, Utrecht, the Netherlands; 6 Department of Medicine, Stanford University, Stanford, California, United States of America; 7 Department of Health Research and Policy, Stanford University, Stanford, California, United States of America; 8 Department of Biomedical Data Science, Stanford University, Stanford, California, United States of America; 9 Department of Statistics, Stanford University, Stanford, California, United States of America
* dmoher@ohri.ca
OPEN ACCESS

Citation: Moher D, Naudet F, Cristea IA, Miedema F, Ioannidis JPA, Goodman SN (2018) Assessing scientists for hiring, promotion, and tenure. PLoS Biol 16(3): e2004089. https://doi.org/10.1371/journal.pbio.2004089

Published: March 29, 2018

Copyright: © 2018 Moher et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: The authors received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

Abbreviations: ACUMEN, Academic Careers Understood through Measurement and Norms; CSTS, Centre for Science and Technology Studies; CV, curriculum vitae; DOI, digital object identifier; DORA, Declaration on Research Assessment; GBP, Great Britain Pound; GEP, Good Evaluation Practices; ISNI, International Standard Name Identifier; JIF, journal impact factor; METRICS, Meta-Research Innovation Center at Stanford; NAS, National Academy of Sciences; NIH, National Institutes of Health; NIHR, National Institute for Health Research; OA, open access; PQRST, Productive, high-Quality, Reproducible, Shareable, and Translatable; RCR, relative citation ratio; REF, Research Excellence Framework; REWARD, Reduce research Waste And Reward Diligence; RIAS, responsible indicator for assessing scientists; SNIP, Source Normalized Impact per Paper; SWAN, Scientific Women's Academic Network; TOP, transparency and openness promotion; UMC, Utrecht Medical Centre.

Provenance: Not commissioned; externally peer reviewed.
Abstract
Assessment of researchers is necessary for decisions of hiring, promotion, and tenure. A burgeoning number of scientific leaders believe the current system of faculty incentives and rewards is misaligned with the needs of society and disconnected from the evidence about the causes of the reproducibility crisis and suboptimal quality of the scientific publication record. To address this issue, particularly for the clinical and life sciences, we convened a 22-member expert panel workshop in Washington, DC, in January 2017. Twenty-two academic leaders, funders, and scientists participated in the meeting. As background for the meeting, we completed a selective literature review of 22 key documents critiquing the current incentive system. From each document, we extracted how the authors perceived the problems of assessing science and scientists, the unintended consequences of maintaining the status quo for assessing scientists, and details of their proposed solutions. The resulting table was used as a seed for participant discussion. This resulted in six principles for assessing scientists and associated research and policy implications. We hope the content of this paper will serve as a basis for establishing best practices and redesigning the current approaches to assessing scientists by the many players involved in that process.
Introduction
Assessing researchers is a focal point of decisions about their hiring, promotion, and tenure.
Building, writing, presenting, evaluating, prioritising, and selecting curriculum vitae (CVs) is a prolific and often time-consuming industry for grant applicants, faculty candidates, and assessment committees. Institutions need to make decisions in a constrained environment (e.g., limited time and budgets). Many assessment efforts assess primarily what is easily determined, such as the number and amount of funded grants and the number and citations of published papers.
Even for readily measurable aspects, though, the criteria used for assessment and decisions vary across settings and institutions and are not necessarily applied consistently, even within the same institution. Moreover, several institutions use metrics that are well known to be problematic [1]. For example, there is a large literature on the problems with and alternatives to the journal impact factor (JIF) for appraising citation impact. Many institutions still use it to assess faculty through the quality of the literature they publish in, or even to determine monetary rewards [2].
That faculty hiring and advancement at top institutions requires papers published in journals with the highest JIF (e.g., Nature, Science, Cell) is more than just a myth circulating among postdoctoral students [3–6]. Emphasis on JIF does not make sense when only 10%–20% of the papers published in a journal are responsible for 80%–90% of a journal's impact factor [7,8].
More importantly, other aspects of research impact and quality, for which automated indices are not available, are ignored. For example, faculty practices that make a university and its research more open and available through data sharing or education could feed into researcher assessments [9,10].
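To make the arithmetic behind this skew concrete, consider a hypothetical journal; the numbers are illustrative, not taken from the cited studies. Suppose it published P = 100 citable papers that received C = 1,000 citations in the JIF window, and the top 15 papers attracted 85% of those citations:

\[
\mathrm{JIF} = \frac{C}{P} = \frac{1000}{100} = 10, \qquad
\underbrace{\frac{850}{15} \approx 57}_{\text{top 15 papers}}, \qquad
\underbrace{\frac{150}{85} \approx 1.8}_{\text{remaining 85 papers}}.
\]

The JIF of 10 describes neither group: it understates the few highly cited papers and overstates the typical paper's citation rate by a factor of more than five.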
Few assessments of scientists focus on the use of good or bad research practices, nor do currently used measures tell us much about what researchers contribute to society, the ultimate goal of most applied research. In applied and life sciences, the reproducibility of findings by others or the productivity of a research finding is rarely systematically evaluated, in spite of documented problems with the published scientific record [11] and reproducibility across all scientific domains [12,13]. This is compounded by incomplete reporting and suboptimal transparency [14]. Too much research goes unpublished or is unavailable to interested parties [15].
Using more appropriate incentives and rewards may help improve clinical and life sciences and their impact at all levels, including their societal value. We set out to ascertain what has been proposed to improve the evaluation of ‘life and clinical’ research scientists, how a broad spectrum of stakeholders view the strengths and weaknesses of these proposals, and what new ways of assessing scientists should be considered.
Methods
To help address this goal, we convened a 1-day expert panel workshop in Washington, DC, in January 2017. Pre-existing commentaries and proposals to assess scientists were identified with snowball techniques [16] (i.e., an iterative process of selecting articles, often started with a small number of articles that meet inclusion criteria; see below) in order to ascertain what other groups are writing, proposing, or implementing in terms of how to assess scientists. We also searched the Meta-Research Innovation Center at Stanford (METRICS) research digest and reached out to content experts. Formal searching proved difficult (e.g., exp Reward/ (7,408); reward*.ti,kw. (9,708); incentiv*.ti,kw. (5,078)) and resulted in a very large number of records with low sensitivity and recall. We did not set out to conduct a formal systematic review of every article on the topic.
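For readers who want the snowballing step spelled out, below is a minimal sketch in Python of the iterative selection process described above. The function names (fetch_references, meets_criteria) are hypothetical stand-ins for a bibliographic lookup and an inclusion screen, not tools used in this study.

# Minimal sketch of snowball article selection: start from seed articles
# that meet the inclusion criteria and iteratively follow their references.
def snowball(seed_articles, fetch_references, meets_criteria, max_rounds=3):
    included = set(seed_articles)   # articles judged relevant so far
    frontier = set(seed_articles)   # articles whose references we still chase
    for _ in range(max_rounds):
        candidates = set()
        for article in frontier:
            candidates.update(fetch_references(article))
        # Screen only previously unseen articles against the criteria.
        new_hits = {a for a in candidates - included if meets_criteria(a)}
        if not new_hits:
            break                   # no new relevant articles: stop early
        included |= new_hits
        frontier = new_hits
    return included

# Toy usage with a citation graph stored as a dict:
graph = {"seed": ["a", "b"], "a": ["c"], "b": [], "c": []}
print(sorted(snowball({"seed"}, lambda x: graph.get(x, []), lambda x: True)))
# -> ['a', 'b', 'c', 'seed']

Each round widens the set until no new eligible articles appear, which is why the process can start from only a handful of seed papers.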
Broad criteria were used for article inclusion (the article focus had to be either bibliometrics, research evaluation, and/or management of scientists, and it had to be reported in English). Two of us selected the potential papers, and at least three of us reviewed and discussed each selection for its inclusion. From each included article we extracted the following information: authors, name of article/report and its geographic location, the authors' stated perspective of the problem assessing research and scientists, the authors' description of the unintended consequences of maintaining the current assessment scheme, the article's proposed solutions, and our interpretation of the potential limitations of the proposal. The resulting table (early
version of Table 1) along with a few specific publications was shared with the participants in advance of the meeting in the hopes that it would stimulate thinking on the topic and be a reference source for discussions.
We invited 23 academic leaders: deans of medicine (e.g., Oxford), public and foundation funders (e.g., National Institutes of Health [NIH]), health policy organisations (e.g., Science Policy, European Commission; Belgium), and individual scientists from several countries.
They were selected based on contributions to the literature on the topic and representation of important interests and constituencies. Twenty-two were able to participate (see S1 Table for a complete list of participants and their affiliations). Prior to the meeting, all participants were sent the results of a selected review of the literature distilled into a table (see Table 1) and several selected readings.
Table 1 served as the basis for an initial discussion about the problems of assessing scientists. This was followed by discussions of possible solutions to the problems, new approaches for promotion and tenure committees, and implementation strategies. All discussions were recorded, transcribed, and read by five coauthors. From these discussions, six general principles were derived.
This summary was then shared with all meeting participants for additional input.
Results
We included a list of 21 documents [11,17–36] in Table 1. There has been a burgeoning interest in assessing scientists in the last 5 years (publication year range: 2012–January 2017). An almost equal number of documents originated from the US and Europe (one also jointly from Canada). We divided the documents into four categories: large group efforts (e.g., the Leiden Manifesto [20]); smaller group or individual efforts (e.g., Ioannidis and Khoury's Productive, high-Quality, Reproducible, Shareable, and Translatable [PQRST] proposal [29]); journal activities (e.g., Nature [32]); and newer quantitative metric proposals (e.g., journal citation distributions [34]).
We interpreted all of the documents to describe the problems of assessing science and scientists in a similar manner. There is a misalignment between how scientists are assessed and the current problems in research: the right questions are not being asked; the research is not appropriately planned and conducted; and when the research is completed, results remain unavailable, unpublished, or selectively reported. Reproducibility is lacking, as is evidence about how scientists are incentivised and rewarded. We interpreted that several of the documents pointed to a disconnect between the production of research and the needs of society (i.e., productivity may lack translational impact and societal added value). We paraphrased the views expressed across several of the documents: 'we should be able to improve research if we reward scientists specifically for adopting behaviours that are known to improve research'. Finally, many of the documents described the JIF as an inadequate measure for assessing the impact of scientists [19,20,29].
The JIF is commonly used by academic institutions [4,5,37] to assess scientists, although there are efforts to encourage them not to do so. Not modifying the current assessment system will likely result in continued bandwagon behaviour that has not always resulted in positive societal outcomes [25,38]. Acting now to consider modifying the current assessment system might be seen as a progressive move by current scientific leaders to improve how subsequent generations of scientists are evaluated.
Large group proposals for assessing scientists
Nine large group efforts were included [11,17–24] (see Table 1), representing different stakeholder groups. The Leiden Manifesto [20] and the Declaration on Research Assessment (DORA) [19] were both developed at academic society meetings and are international in their
Table 1. A list of sources examining the problems, potential unanticipated consequences, proposed solutions, and potential limitations when assessing science and scientists.
Larger group proposals

ACUMEN (2014) [17]. Geographic region: Europe.
Stated perspective of the problem: The ACUMEN consortium was focused on 'understanding the ways in which researchers are evaluated by their peers and by institutions, and at assessing how the science system can be improved and enhanced'. They noted that, 'Currently, there is a discrepancy between the criteria used in performance assessment and the broader social and economic function of scientific and scholarly research'.
Unintended consequences of maintaining the current scheme: The report points to five problems: 1. 'Evaluation criteria are still dominated by mono-disciplinary measures, which reflect an important but limited number of dimensions of the quality and relevance of scientific and scholarly work'; 2. 'Increase the pressure on the existing forms of quality control and evaluation'; 3. 'The evaluation system has not been able to keep up sufficiently with the transformations in the way researchers create knowledge and communicate their research to colleagues and the public at large'; 4. The bibliometrics in current use 'do not produce viable results at the level of the individual researcher'; 5. The current 'scientific and scholarly system has a gender bias'.
Proposed solutions: The consortium has developed criteria and guidelines for GEP and has designed a prototype for a web-based ACUMEN performance portfolio. The GEP focuses on three indicators for academic assessment: (1) expertise, (2) outputs, and (3) impacts. The performance portfolio is divided into four parts: (1) narrative and academic age calculation, (2) expertise, (3) output, and (4) influence.
Potential limitations: The ACUMEN portfolio stresses the inclusion of an evidence-based narrative, in which researchers tell 'their story'. The risk is that researchers might be nudged to mold or distort their work and achievements to make them fit in a compelling narrative. Moreover, the already prevalent self-marketing techniques (i.e., how to best 'sell' yourself) risk becoming even more pervasive and interfering with the quality of research. Some valuable researchers might not have a coherent story to tell nor be adept narrators.

Amsterdam Call for Action on Open Science (2016) [18]. Geographic region: Europe.
Stated perspective of the problem: Participants at the Open Science (From Vision to Action) conference noted that 'Open science presents the opportunity to radically change the way we evaluate, reward, and incentivise science. Its goal is to accelerate scientific progress and enhance the impact of science for the benefit of society'.
Unintended consequences of maintaining the current scheme: Conference participants argue that maintaining the current scheme will continue to create a climate of disconnect between old-world bibliometrics and newer approaches, such as a commitment to open science: 'This emphasis does not correspond with our goals to achieve societal impact alongside scientific impact'.
Proposed solutions: The conference participants' vision is for 'New assessment, reward and evaluation systems. New systems that really deal with the core of knowledge creation and account for the impact of scientific research on science and society at large, including the economy, and incentivise citizen science'. To reach this goal, the Call for Action recommends action in four areas: (1) complete OA, (2) data sharing, (3) new ways of assessing scientists, and (4) introducing evidence to inform best practices for the first three themes. Twelve action items are proposed: to (1) change assessment, evaluation, and reward systems in science; (2) facilitate text and data mining of content; (3) improve insight into IPR and issues such as privacy; (4) create transparency on the costs and conditions of academic communication; (5) introduce FAIR and secure data principles; (6) set up common e-infrastructures; (7) adopt OA principles; (8) stimulate new publishing models for knowledge transfer; (9) stimulate evidence-based research on innovations in open science; (10) develop, implement, monitor, and refine OA plans; (11) involve researchers and new users in open science; and (12) encourage stakeholders to share expertise and information on open science.
Potential limitations: Although the Amsterdam Call provides several actionable steps stakeholders can take, some of the actions face barriers to implementation. For example, to change assessment, evaluation, and reward schemes, one action is to sign DORA, although few organizations have done so. Similarly, even if hiring, promotion, and tenure committees wanted to move away from focusing solely on JIFs, it is unclear whether they could do so without the broader research institution agreeing to such modifications in assessing scientists.

DORA (2012) [19]. Geographic region: International.
Stated perspective of the problem: At the 2012 annual meeting of the American Society for Cell Biology, a group of publishers and editors noted 'a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties'. In response, the San Francisco DORA was produced.
Unintended consequences of maintaining the current scheme: DORA points to the critical problems with using JIF as a measure of a scientist's worth: 'the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment'.
Proposed solutions: DORA has one general recommendation (do not use journal-based metrics, such as JIFs, as surrogate measures of the quality of individual research articles to assess an individual scientist's contributions in hiring, promotion, or funding decisions) and 17 specific recommendations. For researchers: (1) focus on content; (2) cite primary literature; (3) use a range of metrics to show the impact of your work; (4) change the culture. For funders: (5) state that scientific content of a paper, not the JIF of the journal in which it was published, is what matters; (6) consider value from all outputs and outcomes generated by research. For research institutions: (7) when hiring and promoting, state that scientific content of a paper, not the JIF of the journal in which it was published, is what matters; (8) consider value from all outputs and outcomes generated by research. For publishers: (9) cease to promote journals by impact factor; (10) provide an array of metrics; (11) focus on article-level metrics; (12) identify different author contributions, open the bibliometric citation data; (13) encourage primary literature citations. For organizations that supply metrics: (14) be transparent; (15) provide access to data; (16) discourage data manipulation; and (17) provide different metrics for primary literature and reviews.
Potential limitations: There is a focus on citing primary research. Within medicine, citing systematic reviews is often preferred. A clarification on this point would be useful. DORA is silent on how stakeholders should optimally implement their recommendations. Similarly, DORA does not provide guidance on whether (and how) hiring, promotion, and tenure committees should monitor adherence to and implementation of their recommendations.

The Leiden Manifesto (2015) [20]. Geographic region: International.
Stated perspective of the problem: At the 2014 International Conference on Science and Technology Indicators in Leiden, a group of scientometricians met to discuss how data are being used to govern science, including evaluating scientists. The manifesto authors observed that 'research evaluations that were once bespoke and performed by peers are now routine and reliant on metrics'.
Unintended consequences of maintaining the current scheme: 'The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied'.
Proposed solutions: The Leiden Manifesto proposes 10 best practices: (1) quantitative evaluation should support qualitative, expert assessment; (2) measure performance against the research missions of the institution, group, or researcher; (3) protect excellence in locally relevant research; (4) keep data collection and analytical processes open, transparent, and simple; (5) allow those evaluated to verify data and analysis; (6) account for variation by field in publication and citation practices; (7) base assessment of individual researchers on a qualitative judgment of their portfolio; (8) avoid misplaced concreteness and false precision; (9) recognise the systemic effects of assessment and indicators; and (10) scrutinise indicators regularly and update them. Abiding by these 10 principles, research evaluation can play an important part in the development of science and its interactions with society.
Potential limitations: The focus of the Leiden Manifesto is on research metrics. Beyond the focus on metrics, it is unclear what the broader scientific community thinks of these principles. Similarly, it is not clear how best promotion and tenure committees might implement them. For example, while allowing scientists to review and verify their evaluation data (principle 5), it is less clear as to how this could be easily monitored. For example, should there be audit and feedback about each principle?
Wilsdon (The Metric Tide) (2015) [21]. Geographic region: United Kingdom.
Stated perspective of the problem: The report was initiated to evaluate the role of metrics in research assessment and management as part of the UK's REF. The report takes a 'deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact'.
Unintended consequences of maintaining the current scheme: The report raises concerns 'that some quantitative indicators can be gamed, or can lead to unintended consequences; journal impact factors and citation counts are two prominent examples'.
Proposed solutions: The report proposes five attributes to improve the assessment of researchers: (1) robustness, (2) humility, (3) transparency, (4) diversity, and (5) reflexivity. The report also makes 20 recommendations dealing with a broad spectrum of issues related to research assessment for stakeholders to consider: (1) the research community should develop a more sophisticated and nuanced approach to the contribution and limitations of quantitative indicators; (2) at an institutional level, higher education institution leaders should develop a clear statement of principles on their approach to research management and assessment, including the role of quantitative indicators; (3) research managers and administrators should champion these principles and the use of responsible metrics within their institutions; (4) human resources managers and recruitment or promotion panels in higher education institutions should be explicit about the criteria used for academic appointment and promotion decisions; (5) individual researchers should be mindful of the limitations of particular indicators; (6) research funders should develop their own context-specific principles for the use of quantitative indicators in research assessment and management; (7) data providers, analysts, and producers of university rankings and league tables should strive for greater transparency and interoperability between different measurement systems; (8) publishers should reduce emphasis on JIFs as a promotional tool, and only use them in the context of a variety of journal-based metrics that provide a richer view of performance; (9) there is a need for greater transparency and openness in research data infrastructure; (10) a set of principles should be developed for technologies, practices, and cultures that can support open, trustworthy research information management; (11) the UK research system should take full advantage of ORCID as its preferred system of unique identifiers, and ORCID IDs should be mandatory for all researchers in the next REF; (12) identifiers are also needed for institutions, and the most likely candidate for a global solution is the ISNI, which already has good coverage of publishers, funders, and research organizations; (13) publishers should mandate ORCID IDs and ISNIs and funder grant references for article submission and retain this metadata throughout the publication lifecycle; (14) the use of DOIs should be extended to cover all research outputs; (15) further investment in research information infrastructure is required; (16) HEFCE, funders, HEIs, and Jisc should explore how to leverage data held in existing platforms to support the REF process, and vice versa; (17) BIS should identify ways of linking data gathered from research-related platforms (including Gateway to Research, Researchfish, and the REF) more directly to policy processes in BIS and other departments; in assessing outputs, we recommend that quantitative data, particularly around published outputs, continue to have a place in informing peer-review judgements of research quality; in assessing impact, we recommend that HEFCE and the UK HE funding bodies build on the analysis of the impact case studies from REF2014 to develop clear guidelines for the use of quantitative indicators in future impact case studies; in assessing the research environment, we recommend that there is scope for enhancing the use of quantitative data but that these data need to be provided with sufficient context to enable their interpretation; (18) the UK research community needs a mechanism to carry forward the agenda set out in this report; (19) the establishment of a Forum for Responsible Metrics, which would bring together research funders, HEIs and their representative bodies, publishers, data providers, and others to work on issues of data standards, interoperability, openness, and transparency; research funders need to increase investment in the science of science policy; and (20) one positive aspect of this review has been the debate it has generated. As a legacy initiative, the steering group is setting up a blog (www.ResponsibleMetrics.org) as a forum for ongoing discussion of the issues raised by this report.
Potential limitations: The Metric Tide, although independent, was commissioned by the Minister for Universities and Science in 2014, in part to inform the REF used by universities across the country. Although the recommendations are sound, some of them might be perceived as being UK-centric. To what degree the recommendations might apply and be implemented on a global scale is unclear.

NAS (2015) [22]. Geographic region: United States.
Stated perspective of the problem: The US NAS and the Annenberg Retreat at Sunnylands convened this group of senior scientists 'to examine ways to remove some of the current disincentives to high standards of integrity in science'. Incentives and rewards in academic promotion were included in this examination.
Unintended consequences of maintaining the current scheme: The authors indicate that if the current system does not evolve, there will be serious threats to the credibility of science. They state, 'If science is to enhance its capacities to improve our understanding of ourselves and our world, protect the hard-earned trust and esteem in which society holds it, and preserve its role as a driver of our economy, scientists must safeguard its rigor and reliability in the face of challenges posed by a research ecosystem that is evolving in dramatic and sometimes unsettling ways'.
Proposed solutions: The authors 'believe that incentives should be changed so that scholars are rewarded for publishing well rather than often. In tenure cases at universities, as in grant submissions, the candidate should be evaluated on the importance of a select set of work, instead of using the number of publications or impact rating of a journal as a surrogate for quality'. Incentives should also change to reward better correction (or self-correction) of science. It includes improving the peer-review process and reducing the stigma associated with retractions.
Potential limitations: Because new incentives could be potentially damaging, the authors 'urge that each be scrutinized and evaluated before being broadly implemented'.
Nuffield Council on Bioethics (2014) [23]. Geographic region: UK.
Stated perspective of the problem: The Nuffield's Culture of Scientific Research in the UK report 'aimed to inform and advance debate about the ethical consequences of the culture of scientific research in terms of encouraging good research practice and the production of high quality science'. Through a combination of surveys and researcher engagement, feedback included 'the perception that publishing in high impact-factor journals is the most important element in assessments for funding, jobs, and promotions is creating a strong pressure on scientists to publish in these journals'.
Unintended consequences of maintaining the current scheme: The feedback received cautioned maintaining the focus on JIFs: 'This is believed to be resulting in important research not being published, disincentives for multidisciplinary research, authorship issues, and a lack of recognition for non-article research outputs'.
Proposed solutions: The report suggested actions for different stakeholders (funders, publishers and editors, research institutions, researchers, and learned society and professional bodies) to consider, principally, (1) improving transparency; (2) improving the peer-review process (e.g., by training); (3) cultivating an environment based on the ethics of research; (4) assessing broadly the track records of researchers and fellow researchers; (5) involving researchers in policymaking in a dialogue with other stakeholders; and (6) promoting standards for high-quality science.
Potential limitations: While the authors suggested different actions for various stakeholders, they emphasised that 'a collective and coordinated approach is likely to be the most effective'. Such collaborative actions may be difficult to operationalise and implement.

REWARD (2014) [11]. Geographic region: Multinational.
Stated perspective of the problem: The Lancet commissioned a series, 'Increasing value: Reducing Waste', and follow-up conference to address the credibility of scientific research. The commissioning editors asked whether 'the fault lie with myopic university administrations led astray by perverse incentives or with journals that put profit and publicity above quality?'
Unintended consequences of maintaining the current scheme: If the current bibliometric system is maintained, there is a real risk that scientists will be 'judged on the basis of the impact factors of the journals in which their work is published'. Impact factors are weakly correlated with quality.
Proposed solutions: The REWARD series makes 17 recommendations covering a broad spectrum of stakeholders: (1) more research on research should be done to identify factors associated with successful replication of basic research and translation to application in health care and how to achieve the most productive ratio of basic to applied research; (2) research funders should make information available about how they decide what research to support and fund investigations of the effects of initiatives to engage potential users of research in research prioritisation; (3) research funders and regulators should demand that proposals for additional primary research are justified by systematic reviews showing what is already known, and increase funding for the required syntheses of existing evidence; (4) research funders and research regulators should strengthen and develop sources of information about research that is in progress, ensure that they are used by researchers, insist on publication of protocols at study inception, and encourage collaboration to reduce waste; (5) make publicly available the full protocols, analysis plans or sequence of analytical choices, and raw data for all designed and undertaken biomedical research; (6) maximise the effect-to-bias ratio in research through defensible design and conduct standards, a well-trained methodological research workforce, continuing professional development, and involvement of nonconflicted stakeholders; (7) reward with funding and academic or other recognition reproducibility practices and reproducible research, and enable an efficient culture for replication of research; (8) people regulating research should use their influence to reduce other causes of waste and inefficiency in research; (9) regulators and policy makers should work with researchers, patients, and health professionals to streamline and harmonise the laws, regulations, guidelines, and processes that govern whether and how research can be done, and ensure that they are proportionate to the plausible risks associated with the research; (10) researchers and research managers should increase the efficiency of recruitment, retention, data monitoring, and data sharing in research through the use of research designs known to reduce inefficiencies, and do additional research to learn how efficiency can be increased; (11) everyone, particularly individuals responsible for health care systems, can help to improve the efficiency of clinical research by promoting integration of research in everyday clinical practice; (12) institutions and funders should adopt performance metrics that recognise full dissemination of research and reuse of original datasets by external researchers; (13) investigators, funders, sponsors, regulators, research ethics committees, and journals should systematically develop and adopt standards for the content of study protocols and full study reports, and for data-sharing practices; (14) funders, sponsors, regulators, research ethics committees, journals, and legislators should endorse and enforce study registration policies, wide availability of full study information, and sharing of participant-level data for all health research; (15) funders and research institutions must shift research regulations and rewards to align with better and more complete reporting; (16) research funders should take responsibility for reporting infrastructure that supports good reporting and archiving; and (17) funders, institutions, and publishers should improve the capability and capacity of authors and reviewers in high-quality and complete reporting. There is a recognition of problems with academic reward systems that appear to focus on quantity more than quality. Part of the series includes a discussion about evaluating scientists on a set of best practices, including reproducibility of research findings, the quality of the reporting, complete dissemination of the research, and the rigor of the methods used.
Potential limitations: There is little in the series about the relationship between the trustworthiness of biomedical research and hiring, promotion, and tenure of scientists. Similarly, the series does not propose an action plan for examining hiring, promotion, and tenure practices.
REF [24]. Geographic region: UK.
Stated perspective of the problem: There is a need to go beyond traditional quantitative metrics to gain a more in-depth assessment of the value of academic institutions.
Unintended consequences of maintaining the current scheme: Not being able to identify the societal value (e.g., public funding of higher education institutions and the impact of the research conducted) of academic institutions.
Proposed solutions: The REF is a new national initiative to assess the quality of research in higher education institutions, assessing institutional outputs, impact, and environment covering 36 fields of study (e.g., law, economics, and econometrics). Outputs account for 65% of the assessment (i.e., 'are the product of any form of research, published, such as journal articles, monographs and chapters in books, as well as outputs disseminated in other ways such as designs, performances and exhibitions'). Impact accounts for 20% of the assessment (e.g., 'is any effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia'), and the environment accounts for 15% of the assessment (e.g., 'the strategy, resources and infrastructure that support research').
Potential limitations: In the absence of a set of quantifiers for assessing the impact of the environment, it remains unclear if, based on the descriptions proposed, different evaluators would reach the same conclusions. This might be the case more for some criteria and less for others. The REF might stifle innovation and decrease collegiality across universities. Other limitations have been noted [44].

Smaller group or individual proposals

Benedictus (UMC Utrecht, 2016) [25]. Geographic region: the Netherlands.
Stated perspective of the problem: The authors' view, inspired by their related initiative, Science in Transition, is that 'bibliometrics are warping science, encouraging quantity over quality'.
Unintended consequences of maintaining the current scheme: Focusing on meaningless bibliometrics is keeping scientists 'from doing what really mattered, such as strengthening contacts with patient organizations or trying to make promising treatments work in the real world'.
Proposed solutions: To move away from using bibliometrics to evaluate scientists, the authors propose five alternative behaviours to assess: (1) managerial responsibilities and academic obligations; (2) mentoring students, teaching, and additional new responsibilities; (3) if applicable, a description of clinical work; (4) participation in organising clinical trials and research into new treatments and diagnostics; and (5) entrepreneurship and community outreach.
Potential limitations: This is an intervention at the institutional level to try to change incentives and rewards. A novel set of criteria used to evaluate research and researchers was designed and has been introduced in a large academic medical centre. It needs to be studied how this new model is taken up in practice and whether it induces the intended effects.
Edwards (2017) [26]. Geographic region: US.
Stated perspective of the problem: Two engineers are concerned about the use of quantitative metrics to assess the performance of researchers.
Unintended consequences of maintaining the current scheme: The authors argue that continued reliance on quantitative metrics may lead to substantive and systemic threats to scientific integrity.
Proposed solutions: To deal with incentives and hypercompetition, the authors have proposed (1) that more data are needed to better understand the significance and extent of the problem; (2) that funding should be provided to develop best practices for assessing scientists for promotion, tenure, and hiring; (3) better education about scientific misconduct for students; (4) incorporating qualitative components, such as service to community, into PhD training programs; and (5) the need for academic institutions to reduce their reliance on quantitative metrics to assess scientists.
Potential limitations: Some of the proposals, like convening a panel of experts to develop guidelines for the evaluation of candidates or reframing the PhD as an exercise in 'character building', might be ineffective, confuse the panorama further, and not have support from a considerable part of the scientific community. Panels of experts are a notoriously unreliable and subjective source of evidence; they are exposed to groupthink, potential conflicts of interest, and reinforcing already existing biases. There is no reason to assume expertise is also associated with coming up with good practices. Given the financial hardships, gruesome work, and tough competition, the PhD program already is a Victorian exercise in 'character building'.
Ioannidis (2014) [27]. Geographic region: US.
Stated perspective of the problem: This essay focuses on developing 'effective interventions to improve the credibility and efficiency of scientific investigation'.
Unintended consequences of maintaining the current scheme: Currently, scientific publications are often 'false or grossly exaggerated, and translation of knowledge into useful applications is often slow and potentially inefficient'. Unless we develop better scientific and publication practices, much of the scientific output will remain grossly wasted.
Proposed solutions: The author proposes 12 best practices to achieve truth and credibility in science. These include (1) large-scale collaborative research; (2) adoption of a replication culture; (3) registration; (4) sharing; (5) reproducibility practices; (6) better statistical methods; (7) standardisation of definitions and analyses; (8) more appropriate (usually more stringent) statistical thresholds; (9) improvement in study design standards; (10) stronger thresholds for claims of discovery; (11) improvements in peer review, reporting, and dissemination of research; and (12) better training of the scientific workforce. These best practices could be used as research currencies for promotion and tenure. The author provides examples of how these best practices can be used in different ways as part of the reward system for evaluating scientists.
Potential limitations: The author acknowledges that 'interventions to change the current system should not be accepted without proper scrutiny, even when they are reasonable and well intended. Ideally, they should be evaluated experimentally'. Many of the research practices lack sufficient empirical evidence as to their worth.

Mazumdar (2015) [28]. Geographic region: US.
Stated perspective of the problem: This group was focused on ways to assess team science, often the role biostatisticians find themselves in. Their view is that 'those responsible for judging academic productivity, including department chairs, institutional promotion committees, provosts, and deans, must learn how to evaluate performance in this increasingly complex framework'.
Unintended consequences of maintaining the current scheme: Concentrating on traditional metrics 'can substantially devalue the contributions of a team scientist'.
Proposed solutions: To assess the research component of a biostatistician as part of a team collaboration, the authors propose a flexible quantitative and qualitative framework involving four evaluation themes that can be applied broadly to appointment, promotion, and tenure decisions. These criteria are: design activities, implementation activities, analysis activities, and manuscript reporting activities.
Potential limitations: The authors state, 'The paradigm is generalizable to other team scientists'. However, 'because team scientists come from many disciplines, including the clinical, basic, and data sciences, the same criteria cannot be applicable to all'. One limitation is the potential gaming of such a flexible scheme.

Ioannidis PQRST (2014) [29]. Geographic region: US.
Stated perspective of the problem: The authors of this paper state the problem as 'scientists are typically rewarded for publishing articles, obtaining grants, and claiming novel, significant results'.
Unintended consequences of maintaining the current scheme: The authors note, 'However, emphasis on publication can lead to least publishable units, authorship inflation, and potentially irreproducible results'. In short, this type of assessment might tarnish science and how scientists are evaluated.
Proposed solutions: To reduce our reliance on traditional quantitative metrics for assessing and rewarding research, the authors propose a best practice index, PQRST, revolving around productivity, quality, reproducibility, sharing, and translation of research. The authors also propose examples of how each item could be operationalised, e.g., for productivity, examples include number of publications in the top tier percentage of citations for the scientific field and year, proportion of funded proposals that have resulted in ≥1 published reports of the main results, and proportion of registered protocols that have been published 2 years after the completion of the studies. Similarly, one could count the proportion of publications that fulfill ≥1 quality standards; proportion of publications that are reproducible; proportion of publications that share their data, materials, and/or protocols (whichever items are relevant); and proportion of publications that have resulted in successful accomplishment of a distal translational milestone, e.g., getting promising results in human trials for interventions tested in animals or cell cultures, or licensing of intervention for clinical trials.
Potential limitations: The authors acknowledge that some indicators require building new tools to capture them reliably and systematically. For quality, one needs to select standards that may be different per field/design, and this requires some consensus within the field. There is no wide-coverage automated database currently for assessing reproducibility, sharing, and translation, but proposals are made on how this could be done systematically and who might curate such efforts. Focusing on the top, most influential publications may also help streamline the process.
Nosek (2015) [30]. Geographic region: US.
Stated perspective of the problem: The authors state the problem as truth versus publishability: 'The real problem is that the incentives for publishable results can be at odds with the incentives for accurate results. This produces a conflict of interest. The conflict may increase the likelihood of design, analysis, and reporting decisions that inflate the proportion of false results in the published literature'.
Unintended consequences of maintaining the current scheme: With the perverse 'publish or perish' mantra, the authors argue that authors may feel compelled to fabricate their results and undermine the integrity of science and scientists. 'With flexible analysis options, we are more likely to find the one that produces a more publishable pattern of results to be more reasonable and defensible than others'.
Proposed solutions: The authors propose a series of best practices that might resolve the aforementioned conflicts. These best practices include restructuring the current incentive/reward scheme for academic promotion and tenure, use of reporting guidelines, promoting better peer review, and journals devoted to publishing replications or statistically negative results.
Potential limitations: The analyses and proposed actions and interventions are very clear and generate awareness. To have even more effect, these actions should be taken up by leaders in academia and institutions. Actions by funding agencies may also be required to set proper criteria for their reviewers of grant proposals.

Journal proposals

eLife (2013) [31]. Geographic region: US.
Stated perspective of the problem: The editors of eLife are concerned about the 'widespread perception that research assessment is dominated by a single metric, the journal impact factor (JIF)'. They are interested in reforming these evaluations using different metrics.
Unintended consequences of maintaining the current scheme: The editors state, 'The focus on publication in a high impact-factor journal as the prize also distracts attention from other important responsibilities of researchers, such as teaching, mentoring and a host of other activities (including the review of manuscripts for journals!). For the sake of science, the emphasis needs to change'.
Proposed solutions: To help counter these problems, the editors discuss several options, such as repositories for sharing information: Dryad for datasets; Figshare for primary research, figures, and datasets; and Slideshare for presentations. Altmetric.com, ImpactStory, and Plum Analytics can be used to aggregate media coverage, citation numbers, and social web metrics. Taken together, such information is likely to provide a broader assessment of the impact of research well beyond that of the JIF.
Potential limitations: While exemplary, it is unclear how widespread these initiatives will become and whether there are implementation hurdles. To have broader impact, similar initiatives need to be endorsed and implemented in thousands of journals.

Nature (2016) [32]. Geographic region: UK.
Stated perspective of the problem: The journal's perspective is that 'metrics are intrinsically reductive and, as such, can be dangerous. Relying on them as a yardstick of performance, rather than as a pointer to underlying achievements and challenges, usually leads to pathological behaviour. The journal impact factor is just such a metric'.
Unintended consequences of maintaining the current scheme: Relying on the JIF will maintain the aforementioned problems.
Proposed solutions: To help combat these problems, the journal has proposed two solutions: first, 'applicants for any job, promotion or funding should be asked to include a short summary of what they consider their achievements to be, rather than just to list their publications'. Second, 'journals need to be more diverse in how they display their performance'.
Potential limitations: While the use of diverse metrics is a positive step, they are not helpful for researchers across different disciplines. For example, Altmetrics does not have field-specific scores yet. It is difficult to know what these alternative metrics mean and how they should be considered within a researcher's evaluation portfolio.

Quantitative proposals

RCR (2015) [33]. Geographic region: US.
Stated perspective of the problem: These authors were interested in developing a scientifically rigorous alternative to the current perverse prestige of the JIF for assessing the merit of publications and, by association, scientists.
Unintended consequences of maintaining the current scheme: The authors list a number of problems with maintaining current bibliometrics. Many of these are echoed in other reports/papers in this table.
Proposed solutions: The authors report on the development and validation of the RCR metric. The RCR 'is based upon the novel idea of using the co-citation network of each article to field- and time-normalize by calculating the expected citation rate from the aggregate citation behavior of a topically linked cohort'. The article citation rate is the numerator and the average citation rate is the denominator.
Potential limitations: More independent research is needed to examine the relation between the RCR and other metrics and the predictive validity of the RCR, as well as whether it probes untoward consequences, such as gaming or endowing questionable research practices. A recent paper disputes the validity of the RCR by raising several concerns regarding the calculation algorithm [51].

Journal citation distributions (2016) [34]. Geographic region: International.
Stated perspective of the problem: The JIF is a poor summary of the raw distribution of citation numbers from a given journal, because that distribution is highly skewed to high values.
Unintended consequences of maintaining the current scheme: The JIF says little about the likely citation numbers of any single paper in a journal, let alone other dimensions of quality that are poorly captured by citation counts.
Proposed solutions: The authors proposed using full journal citation distributions or nonparametric summaries (e.g., IQR) and reading of individual papers to evaluate both the papers and journals.
Potential limitations: It is not clear how to use full JCRs to evaluate either a journal or a specific paper or what summaries of the JCR are most informative. Citation counts do not capture the reason for the citation.

R-index (2015) [35]. Geographic region: Canada/Denmark.
Stated perspective of the problem: Peer review is under many threats. For one, we are fast approaching a situation in which the number of manuscripts requiring peer review will outstrip the number of available peer reviewers. Peer reviewing, while an essential part of the scientific process, remains largely undervalued when assessing scientists.
Proposed solutions: The authors have proposed a quantitative metric to gauge the contributions towards peer reviewing by researchers. 'By giving citeable academic recognition for reviewing, R-index will encourage more participation with better reviews, regardless of the career stage'. The R-index is calculated as follows: 'Each journal, j, will disclose their annual list of reviewers, i, and the number of papers they reviewed, n_j. For each kth paper in a given journal j, the total number of words, w_kj, is multiplied by the square root of the journal's impact factor, IF_j. This product is weighted by the editor's feedback on individual revisions, which is given by a score of excellence, s_kj, ranging from 0 (poor quality) to 1 (exceptionally good quality)'. Traditionally, peer review is given little weight in assessing scientists. This metric might provide an opportunity to more realistically credit the contributions of peer reviewing to the larger scientific community. This metric could be an alternative to the usual sole focus on JIF.
Potential limitations: The index is exceedingly complex; interpretation is not intuitive and requires standardised inputs that are not currently gathered at journals. It has not yet been applied to multiple journals and relies partly on IF.

S-index (2017) [36]. Geographic region: US.
Stated perspective of the problem: The current system, by not according any credit to the production of data that others then use in publications, disincentivises data sharing and the ability to assess research reproducibility or make new scientific contributions from the shared data.
Proposed solutions: The authors have proposed a metric to offer authors a measurable incentive to share their data with other researchers. The S-index is in essence an H-index for publications that use scientists' shared data but do not include those researchers as authors. It counts the number of such publications, N, that get more than N citations. Promotion and tenure committees could use this metric as a complement to direct citation counts.
Potential limitations: This has not been applied in practice. It is not clear how to assign sharing credit to initiatives with many authors. There is no established method to track use of datasets; it relies on citation counts.
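To summarise the three quantitative proposals above in one place, their definitions can be written as formulas. This is a sketch based on the prose descriptions in Table 1; the notation, and in particular the aggregation over reviewed papers in the R-index, is our reading rather than the authors' published algebra.

\[
\mathrm{RCR}_a = \frac{\text{citation rate of article } a}{\text{expected citation rate of } a\text{'s co-citation cohort}}
\]

\[
R_i = \sum_{j} \sum_{k} s_{kj}\, w_{kj} \sqrt{\mathrm{IF}_j}, \qquad s_{kj} \in [0,1],
\]

where the sums run over the papers k that reviewer i assessed at journals j, w_kj is the review's word count, and s_kj is the editor's quality score.

\[
S = \max \{\, N : N \text{ papers that reuse the scientist's shared data, without authorship, have more than } N \text{ citations} \,\}
\]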