
Changing How We Evaluate Scholarship

While major universities recognize the value of open scholarship distributed through institutional repositories, they do not go far enough in “counting” open scholarship in its various forms toward rank and advancement. In this post, I compare the recommendations of DORA, the Leiden Manifesto, and HuMetrics on how we can change the way we evaluate scholarship to be more quality-centered. Although none of these publications defines quality in scholarship, their recommendations for change overlap considerably, even though the organizations come from different disciplines. That overlap could serve as a set of general considerations for proposing a shift in how scholarship is evaluated.



1) Quality over quantity. All three publications agree on emphasizing quality as defined by experts in the field, not exclusively by journal metrics. Journal metrics are part of the picture, but not the whole picture. Institutions, funders, and researchers alike should emphasize the quality of research over the package or publication it comes in (DORA, 2014; Hicks et al., 2015). HuMetrics (2020) goes further, suggesting a shift from output to quality, since research in the humanities is often a lengthier process that traditionally yields fewer publications and citations. Article-level metrics could help emphasize the quality of a given piece of research over the rating of the journal it appears in.


2) Transparency in metrics used. There should be no “black box” metrics (Hicks et al., 2015). This responds to criticisms that it is not always clear what is being assessed by the Journal Impact Factor (JIF) or other metrics. Criteria should also be clear for funding decisions and for reappointment, promotion, and tenure (RPT) decisions. For scholars, DORA recommends “responsible authorship,” with a clear delineation of how each listed author contributed.


3) Broaden perspectives on impact. DORA suggests that acceptable research outputs expand to include software and data sets. HuMetrics extends research impact to data curation and project management, as well as other scholarly processes that surround journal publication, such as peer review and editing. By broadening our view of scholarship, we debunk the “myth of the lone scholar” (HuMetrics, 2020) and count all meaningful scholarly contributions in RPT decisions. All three mention various forms of scholarship, including open scholarship. We should use altmetrics to quantify varied types of impact, while recognizing that even altmetrics leave out quality contributions to public policy, debates, and conferences (HuMetrics, 2020).


4) Base evaluation on values. The Leiden Manifesto suggests that institutions clearly define their values and priorities and align evaluation accordingly. Scholars at a teaching institution, for example, should not be held to the same research standards as those at Research 1 institutions. HuMetrics is concerned with the culture of scholarship and recommends that scholars “establish values-based frameworks that align with, recognize, and reward personal and institutional values enacted in supportive local contexts.” This allows us to avoid corrosive, competitive cultures built on journal metrics that do not reflect the richness of scholarly work.


These four areas represent the overlapping priorities of three well-known publications recommending change in how we evaluate scholarship. Using these priorities as a guide could help departments analyze and improve their current practices for evaluating research.


Agate, N., Kennison, R., Konkiel, S., Long, C. P., Rhody, J., Sacchi, S., & Weber, P. (2020). The transformative power of values-enacted scholarship. Humanities and Social Sciences Communications, 7(1), 1-12.


Agate, N., Kennison, R., Konkiel, S., Long, C., Rhody, J., & Sacchi, S. (2017). HuMetricsHSS: towards value-based indicators in the Humanities and Social Sciences.


Bladek, M. (2014). DORA: San Francisco declaration on research assessment (May 2013). College & Research Libraries News, 75(4), 191-196.


Hicks, D., Wouters, P., Waltman, L., et al. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520, 429–431. https://doi.org/10.1038/520429a

