Meeting high expectations: In February, 160 attendees gathered at BMA House in London for the fifth Researcher to Reader conference.
While Covid-19 had been ravaging cities and regions in China for weeks, the virus was only named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) on 11 February, with the first known confirmed case in the US reported on 20 January. Other than a few speaker references and some extra time for handwashing, I do not recall any particular emphasis on it at the event, which would be my last in-person conference before the world rapidly shifted to social distancing and sheltering.
Recounting my experience at Researcher to Reader (R2R) here in late April 2020, when so much has changed forever, feels like looking back at a long-past idyllic time. As an historian, I've often wondered what people thought during times that would result in great transition. Did they understand what was happening? Sadly, I conclude that we thought we did, we tried to, but we most certainly did not. So please view this summary for what it is: a look back at an exciting and thought-provoking two days of mingling with industry colleagues and researchers, both friends and strangers, with an intense focus on issues affecting researchers and readers alike.
R2R has come to be known for some key strengths: exploring the perspectives of researchers throughout the programme, and highly interactive workshops that bring groups together three times over the course of the event to work in depth on timely issues. Attendees indicate their workshop preferences at the time of registration, and each session is capped at about 25 to ensure adequate participation for all. This year's options included: Equitable OA in Low-Income and Middle-Income Countries; Improving Peer Review Support for Researchers; Transformative Agreement Collaboration; Practicality and Purity: Commerce in the Academy; and Open Access Price Transparency.
As an example of this offering, let me say a little about the Improving Peer Review Support for Researchers workshop that Christine Tully of the University of Findlay and I facilitated.
This topic appealed to me given my increasing focus on open and community review through annotation, as well as prospects for streamlining traditional peer review through new open tools. Attendees included researchers, publishers, start-up entrepreneurs, librarians, and vendors from the US, UK, and Europe. Sessions focused on interactive exercises such as 'speedboats', in which participants brainstorm factors for accelerating or slowing down a speedboat, in this case peer review. After identifying as many as they could, ranging from training to incentives, from workflow systems to AI and other tools, attendees chose one to focus on for the second session.
We next participated in an exercise aimed at surmounting obstacles through creative thinking, called 'We can if...'. For example, an inadequate number of peer reviewers might be surmounted if reviewers were paid, or through increased outreach to early-career or non-Western reviewers, and so on. Finally, in our third gathering, every table took one challenge and plotted its ideas along impact and effort axes, to identify which low-effort activities might yield high-impact results. A voting exercise around the most promising ideas rounded out the day. Christine and I reported the overall findings in a results session on the afternoon of the second day. We do hope that some of these exciting ideas might eventually be pursued more broadly.
I was pleased to see researchers on the stage and in the audience. I was also pleasantly surprised by the cross-section of attendees: advocates for open access, open data, and open science, including publishers, librarians, and funders, as well as folks from more traditional commercial publishers. Events often draw from one group predominantly, leaving speakers from different perspectives either preaching to the choir or fighting the tide (to mix metaphors).
Suffice it to say that Jonathan Adams, chief scientist at the Institute for Scientific Information, who delivered the opening keynote 'Research Ecosystem Dynamics: Publication Adaptation, Evolution, or Extinction', and Richard Charkin, president of Bloomsbury China and president of The Book Society, were more representative of the latter group, which led to some pointed questions and animated side conversations on Twitter.
Adams presented interesting data on shifting global output and co-authorship, and offered theories on why and when notions of 'evidence of excellence' have changed. He detailed changes in research assessment that have turned research into a strategically managed enterprise. Charkin, who stepped in at the last minute, gave an entertaining retrospective of his wide-ranging career, but struck some as glossing over large challenges we face as an industry, such as diversity and pay inequity.
A definite highlight of the event was a structured debate, moderated by Rick Anderson, Dean at Marriott Library, University of Utah, around the proposition: the venue of its publication tells us nothing useful about the quality of a paper. Arguing in favour were Toby Green, managing director at Coherent Digital, and Mike Taylor from IndexData. Pushing back were Pippa Smart, editor-in-chief of Learned Publishing and a publishing consultant, and Niall Boyce, editor of The Lancet Psychiatry. An audience poll was taken before the debate, and the winner would be whichever side had moved more votes into its column by a final poll at the end. The 'pro' team cited highly-cited but later retracted papers, such as the flawed study linking the MMR vaccine to autism; a weak correlation between citations and Impact Factor; and an increasing reliance on preprint servers to disseminate early research. The 'con' team touched on definitions of quality and quality assurance. Smart emphasised the role of peer review, editorial judgement, and the mission and vision of a journal in filtering content. Ultimately, the 'pro' team shifted more votes, unhindered by Rick's strict time-keeping and their failure to 'edit for length', as Boyce joked.
With regard to the many other informative sessions, I'll steal some organisational structure from Mark Allin's brief summary, focusing on 'reasons to be concerned' and 'reasons to be optimistic', bearing in mind, of course, that each speaker touched upon challenges and solutions. In the concern column, new open access funder mandates like Plan S challenge societies.
Tasha Mellins-Cohen provided background and rationale for the Microbiology Society's selection of open and alternative models, based upon data from across its journals portfolio. Researchers from the Global South continue to struggle for access to content and opportunities to publish. Solomon Derese, presenting remotely from his office 4,237 miles away at the University of Nairobi, detailed the impact of Research4Life on access to e-resources in Africa. Access to 85,000 journals through Research4Life has been transformative for African researchers, contributing to an overall increase in research output that doubled Africa's share of world publications between 2005 and 2016.
Women continue to be underrepresented in peer review and in publishing overall. Laura Fogg-Rogers of the University of the West of England presented suggestions for improvement, including the success of the Athena SWAN Charter programme and tackling structural barriers in peer review. Finally, our impact measurements are inadequate and oversimplified, leading to perverse incentives, as recounted by Sabine Hossenfelder of the Frankfurt Institute for Advanced Studies. She argued instead for customisable measures via a tool called Scimeter.
On the side of optimism, there is new promise around FAIR data and reproducibility. Rebecca Grant from Springer Nature noted the steady growth in researchers sharing their data; currently, 119 organisations endorse the FAIR data principles (findable, accessible, interoperable, reusable). Elsevier's Catriona Fennell presented a Manifesto for Reproducible Science, advocating investment in diverse and innovative journals, new article types that reward sharing data and software, inviting replication studies in mainstream journals, and using CRediT (the Contributor Roles Taxonomy).
Artificial intelligence may finally be showing promise in scholarly communications, according to Olly Rickard of HighWire, including advances in machine learning, natural language processing, and speech recognition. Diving deeper, Michael Upshall from UNSILO addressed the thus far limited uptake of AI in publishing, advising publishers to start with a business case, choose the most appropriate tool, identify metrics, and seek advice on how to use and evaluate it, including checking for bias.
Harnessing the creativity of the attendees and the wide-ranging perspectives of the speakers, so many of them researchers, made the event both energising and informative. By providing a forum to explore ideas as researchers and readers, and in the many roles in between, Researcher to Reader met my high expectations and then some. Now, with so much riding on efficient and accurate research results, from the health and safety of our essential workers and those they care for to our own individual health, we must continue to support the research lifecycle through future gatherings, whether online or in person.
Heather Staines is head of partnerships at Knowledge Futures Group.
Article type: Conference news
Date: Jun 1, 2020