OAI 10

From 21 to 23 June, the biennial Open Archives Initiative conference (in full: the CERN – UNIGE Workshop on Innovations in Scholarly Communication) took place in Geneva. Below is a report of what struck me most.

DSUG

Preceding the conference there is always a DSpace User Group meeting. Since DSpace is one of our most important systems, I attended it. The makers of DSpace called on the community to help think about the requirements for DSpace 7, a completely redesigned DSpace due out next year; see the Outreach Group's page. This is a good opportunity to propose new functionality.

They also presented a list of new features they want to implement, though they added that not all of it will make it into DSpace 7. Support for creating relationships between objects, for example, will only arrive in DSpace 8. There is no definitive list yet, but the use cases defined so far are on the DSpace wiki.

Interoperability workshop

While Bianca Kramer and Jeroen Bosman gave a workshop on Open Science workflows, I took part in a workshop by Herbert Van de Sompel on Signposting and ResourceSync. It was a technical workshop, but in short: Signposting is meant to help robots navigate your site, for instance by indicating that this metadata record describes that PDF. ResourceSync is aimed at harvesters and offers more capabilities than OAI-PMH (which is currently used, for example, to get items into Narcis).
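As an aside (this example is mine, not from the workshop): a minimal sketch of what Signposting discovery can look like for a robot, assuming a hypothetical repository URL. Real repositories advertise typed links such as rel="describedby" (the metadata record) and rel="item" (the PDF) in the HTTP Link header.

```python
# Minimal Signposting sketch; the landing-page URL is hypothetical.
import requests

resp = requests.head("https://repository.example.org/handle/123", allow_redirects=True)

# Signposting puts typed links in the HTTP Link header, e.g.
# rel="describedby" pointing to metadata and rel="item" pointing to the PDF.
for link in requests.utils.parse_header_links(resp.headers.get("Link", "")):
    print(link.get("rel"), "->", link.get("url"))
```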

A nice example of ResourceSync in use is CORE. They harvest as much Open Access material as possible from publishers' sites. Because every publisher offers that material in its own way, with its own metadata format, CORE “translates” it into ResourceSync, so that others (such as the Open Access Button) can use it as well.
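To give an idea of what a ResourceSync consumer deals with, here is a minimal sketch; the resource-list URL is hypothetical, and CORE's real endpoints may differ. ResourceSync reuses the Sitemap XML format, so each entry is a &lt;url&gt; element with a &lt;loc&gt; and, optionally, a &lt;lastmod&gt; the harvester can use for incremental synchronisation.

```python
# Minimal ResourceSync resource-list reader; the URL is hypothetical.
import requests
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

body = requests.get("https://aggregator.example.org/resourcelist.xml").text
for entry in ET.fromstring(body).findall("sm:url", NS):
    loc = entry.findtext("sm:loc", namespaces=NS)          # the resource to fetch
    lastmod = entry.findtext("sm:lastmod", namespaces=NS)  # for incremental syncs
    print(loc, lastmod)
```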

Of course we would all like to get started with these techniques right away, but for repositories it is better to wait until the repository software (in our case DSpace) supports them, because building this in ourselves is not easy.


Technical sessions

The technical sessions also dealt with forms of interoperability. Two of them (by Mark Wilkinson and Phil Archer) looked at ways to describe your data such that others can actually do something with it. A researcher can then ask a system “do these datasets contain this identifier for a gene?” to find out which of those datasets are interesting to her, without first downloading them all.
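As an illustration of that kind of query (my sketch, not the systems the speakers showed): a SPARQL ASK against a hypothetical endpoint, asking whether a dataset mentions a given gene identifier anywhere. The endpoint URL is made up; the gene URI uses the real identifiers.org scheme.

```python
# Hypothetical "does this dataset mention gene X?" check via SPARQL ASK.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://data.example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    ASK { ?record ?p <http://identifiers.org/ncbigene/7157> }
""")
sparql.setReturnFormat(JSON)

# True if any triple in the dataset points at the gene identifier.
print(sparql.query().convert()["boolean"])
```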

There was also a presentation by Sarven Capadisli on Linked Research, on which he gave a more extensive explanation later in the week to a small group of interested people. Here, too, the point is to describe your research, publications and data as well as possible, using standards. But it is just as important to have good tools that can handle those standards. Then you are not dependent on what a vendor makes possible, but can, if need be, build your own application that communicates with all those other tools.


Open Access outside academia

It is often said that Open Access is also, or especially, important for people not affiliated with a university. But is that actually the case?

ElHassan El Sabry has researched this. Unfortunately, he has not yet found evidence that Open Access is considered important outside the academic world; only patient advocacy groups really make use of it.

Nilam Ashra-McGrath looked at it from the perspective of the non-academic citizen instead. NGOs and journalists, certainly in the non-Western world, are definitely interested in Open Access, because they have no money to buy articles (let alone licences).

Finally, Alasdair Rae gave a nice example of what can happen when someone makes their research data openly available. Various people, up to and including the mainstream media, used his data for analyses and visualisations. His research thus reached a much larger audience than he had ever expected.


Social Media Research

Peter Bath and Wasim Ahmed use social media as a source for research, but also concern themselves with the ethical questions this raises. After all, tweets and posts can be traced back to individuals, and it is usually not feasible to ask everyone for explicit permission to use what they have written.

Rodrigo Costas observed that on Twitter people relatively often tweet about reviews of and discussions about research, or about details from an article, but not about the research or the article itself. This means that altmetrics can give a distorted picture of how important a publication is: a funny detail may be tweeted about massively without anyone engaging with the substance of the article.


Future of Repositories

The presentations on the future of repositories also brought up the problems of interoperability again.

Petr Knoth noted that retrieving a full text is usually even more complicated than retrieving metadata, because a robot often cannot tell which link it should follow. He also remarked that it is frustrating that altmetrics cannot handle multiple locations for one article, so that, for example, the preprint version in a repository and the publisher's version are counted separately.

Leslie Chan drew attention to what he called “the missing glue”. Making interoperability technically possible is not enough: how do you make sure something is then actually done with it? Which institutions, repositories and publishers take part? How do you arrange sustainable collaboration? And also: how do you convince management that it is worth committing resources to?



During all this, Geneva was groaning under a heat wave, so we were very happy that the conference dinner was held on a boat making a round trip on the lake.


Defining Open Science Definitions

by Jeroen Bosman & Bianca Kramer

[updated 20170328 to include the Open Science definition from Foster]

These days there seems to be a real surge in all things open science; at least, it's the talk of the town. Having visited the Open Science Conference 2017 in Berlin and witnessed the many discussions on Twitter at the venue, it seems clear that ideas on what Open Science is (or could be) vary tremendously. Rather than seeing this as a problematic lack of focus, or as a sign that it is too early to define what open science is, we'd argue that the scope of open science and the variety of actors involved make it unrealistic, and even counterproductive, to expect one definition of open science, now or in the future, that fits all. To get things done, no single researcher or other stakeholder can act on all aspects of open science simultaneously, so choices will be made. The outcomes of these choices will differ between people, disciplines, organizations and countries.

However, it is good to be aware of the different opinions people have on what open science is and could be, both to facilitate understanding each other’s actions and motivations, and to foster productive collaboration towards better scholarly communication at an individual, institutional or (inter)national level.

The choices people and organizations make about what they see as most important, and perhaps also as most realistic, determine how they define open science. Broadly, we can distinguish five types of definitions.

Before we turn to these different types, it's useful to look at some early definitions, including the one by Open Knowledge International (OKFN), which in 2005 issued the Open Definition: “Open means anyone can freely access, use, modify, and share for any purpose (subject, at most, to requirements that preserve provenance and openness).” The Wikipedia article on Open Science, started on August 5, 2006 by Aaron Swartz, now uses this definition: “Open science is the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open notebook science, and generally making it easier to publish and communicate scientific knowledge.” Historically, open science proponents arose from the open source movement, which already had its Free Software definition (1986) and Open Source definition (1998).


The Open Definition from OKFN is available in many languages

Broad definitions with selective interpretations can be very strong and are often used to get multiple stakeholders to agree on action. An example of this is the Dutch National Plan Open Science, which focuses on open access, open data and alternatives for research evaluation. The Dutch national plan uses the EU definition from the report Open innovation, open science, open to the world: “Open Science represents a new approach to the scientific process based on cooperative work and new ways of diffusing knowledge by using digital technologies and new collaborative tools. The idea captures a systemic change to the way science and research have been carried out for the last fifty years: shifting from the standard practices of publishing research results in scientific publications towards sharing and using all available knowledge at an earlier stage in the research process.” While also making choices, the European Union seems to take a much broader approach, for instance by explicitly including the promotion of citizen science and of outreach and engagement initiatives. That is something our own university seems to go for as well. The Open Science Monitor recently launched by the European Union also shows that the EU takes a broader perspective.


European Union Open Science Monitor.

In Making open science a reality (2015), the OECD takes a similar approach: a broad basic definition combined with a focus on a limited number of issues, namely access, data and collaboration. Though many will agree that the topics selected by the EU, the OECD and the Dutch National Plan are important, others may argue that without the context of inclusivity (especially incorporating issues and discussions in the Global South) their potential value remains limited.

Maximal definitions accommodate all developments that could be relevant to making science and knowledge more open. Their goal is not so much to define as to inventory, link and organize ideas and developments. The often-used taxonomy by Foster (an EU project that uses “Open science is the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society” as its definition) plays this role. The text version of Foster's Open Science definition is: “Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods.” Note that this definition is somewhat narrower than the Foster taxonomy, leaving out the explicit mention of evaluation.


The Foster taxonomy.


An alternative taxonomy is the list we created ourselves. Our list looks at aspects of open science within an open knowledge framework, because Open Science does not develop in a vacuum and is part of a broader movement towards open knowledge. It partially overlaps with developments towards open source, open hardware and open educational resources. And open access and open data, considered by many to be integral to Open Science, are also movements in their own right, for instance pushing for open government data. One might say that overall there are six SHADES of open.



Six SHADES of Open

Thus, for us, now, Open Knowledge, and within that Open Science, consists of:

  1. Open Science, that is:
    1. Open to participation, by:
      • Being inclusive, meaning
        • No barriers based on race, gender, income, status
      • Being open to public participation, in the form of
        • Involvement of societal partners in research priority setting
        • Evaluations that include societal relevance
        • Citizen science
    2. Open to use, check, modify, re-use and redistribute, by offering:
      • Open Access, for people and machines, to:
        • Proposals and applications
        • Data
        • Code
        • Early versions, preprints, working papers
        • Papers and books
        • Reviews and comments
        • Posters and presentations
      • Open, non-proprietary standards
      • Open licences
      • Full documentation of process
    3. Open to the world, so offering:
      • Translations
      • Plain language explanations
      • Outreach beyond academia
      • Open to questions from outside academia
      • Curation and annotation of non-scholarly information
      • Actionable formats
      • Participation in public debate
  2. Open educational resources
  3. Open source software
  4. Open hardware

The list aims at broad coverage, without considering barriers to implementation. It highlights the priorities of the Dutch National Plan Open Science (in red) and the things that, on top of that, Utrecht University will probably also cover in its Open Science ambitions (in blue), as apparent from its Strategic Plan 2016-2020.

There are also practical definitions that, while denoting the core of Open Science, still leave room for many different implementations. A nice example is the definition by Egon Willighagen, which echoes the Open Definition.


Then we get to the personal definitions. Everyone involved will have a favourite definition, perhaps combining what one would love to see happen with practices one thinks can realistically be implemented and adopted. Last year we asked fellow librarians and others on Twitter what their definition was. You can still see those, and add yours, using the #openscienceis hashtag. You could also share your idea of what Open Science is the way Eva Mendéz did…


Open Science as an organism, by Eva Mendéz. This image is shared under a CC BY-NC-SA license.

A final group is the catchphrase definitions, which aim to sell the idea more than strictly define it. They often make great discussion starters. There is a nice collection of Open Science catchphrases. One example we would like to share here is: “Open Science = Science done according to its definition” by Graham Steel.

If you have a good answer to the question of what open science is, we encourage you to add it to this wonderful Open Science Q&A website, set up at Bielefeld to be used by anyone. It currently has three relevant questions: (1) What does open mean in open science?, (2) What criteria does a research project need to match to be called open science?, and (3) the Open Science catchphrases we already mentioned. The question “What is Open Science?” is not yet listed…

We definitely do not want to say which kind of definition is better or more useful. Let's keep this discussion alive and see which interpretations bring broad and speedy progress without harming those who have a different interpretation.

One way to do this is by exploring and discussing practices and tools that can be used and adopted to make aspects of open science a reality. From our own research in the 101 Innovations in Scholarly Communication project, and from the many workshops and meetings with researchers and other stakeholders, we learned that some developments run smoothly while others go slowly.

In the end it is perhaps more important to deal with the motivations and barriers that determine the speed of developments towards Open Science than to worry about its exact definition.


Open Science at different speeds

This work is licensed under a Creative Commons Attribution 4.0 International License, excepting the image by Eva Mendéz.



NWO – the impact factor paradox

[January 11, 2017: update added at the end of the post]

While there are many reasons not to use the journal impact factor (IF) for the assessment of individual research articles, researchers or research groups, its use for this purpose is still widespread, particularly in the sciences. Members of all parties involved (researchers, research institutions and funders) profess to want to move away from using the IF for assessment purposes (see the DORA signatories, which include the VSNU). In practice, however, they often hold each other hostage to a system in which publishing in ‘high-impact journals’ is in itself considered a mark of excellence*. To break free from this pattern, someone has to make the first move.

In the Netherlands, the government funding agency NWO can be seen as holding a key position in this regard. As long as researchers expect NWO to use the IF as an important assessment criterion, the majority of them will play along, for fear of diminishing their funding chances. So what, exactly, does NWO ask for in this respect?

Somewhat surprisingly, three of the main NWO funding programmes, the VENI, VIDI and VICI schemes, do not specifically ask applicants to list IFs for their published papers, though all three leave this as an option. The VIDI and VICI programmes do require listing the median IF for the applicant's field, but only for applications from the Medical Sciences (VIDI) and the broader domain of the Life Sciences (VICI). Inclusion of this metric in other disciplines is voluntary, but required if applicants decide to include IFs for individual publications (Figure 1).

As for other NWO funding programmes, there is at least one (Rubicon) for which inclusion of IFs for the applicant's journal articles is compulsory (in the 2016 round), but also many for which it isn't. Applications for these programmes generally ask for a list of selected publications, but do not specifically mention the IF.


Figure 1. Mention of impact factors (highlighted in yellow) in the most recent NWO-application forms for VENI, VIDI and VICI grants

The question, then, is: do applicants decide to list IFs anyway because they think it is to their benefit, or even expected, even when it is not compulsory? And the other side of the coin: do referees and members of selection committees take journal status into account even when IFs are not required in the application?

It would be interesting to survey prior applicants to gain insight into the first question. As to the second, there are indications that the assumptions made here hold in at least some cases. In 2014-2015, Volkskrant journalist Martijn van Calmthout was allowed to follow the assessment of applications for the VIDI programme for Earth and Life Sciences (at that time, including IFs in the application was not compulsory there either). He was given access to the normally confidential deliberations of the selection committee. In the resulting newspaper article, Van Calmthout reports remarks on the weight of publication lists and the value of publications in specific ‘high-ranking’ journals. Committee members also commented on a candidate's H-index, even though that, too, was not a metric requested in the application.

Even though the extent to which IFs weigh into NWO's assessment of candidates is unknown (both for programmes that require listing IFs and for those that don't), the perception that it is an important criterion is certainly alive and well. One example is the remark by Junior Minister Sander Dekker in a parliamentary meeting on national research policy: “(…) if you apply for NWO funding, you are mainly judged on the length of your publication list and the impact factor of the journals you published in” (Figure 2).


Figure 2. Remark of Junior Minister Dekker in parliamentary meeting on national research policy (AO Wetenschapsbeleid, April 20 2016)

In the same meeting, Sander Dekker mentioned that NWO is internally discussing novel ideas for research assessment. He committed to asking the funding agency to share these discussions with the academic community (AO Wetenschapsbeleid, April 20 2016, p. 43-44). One of the options apparently discussed was the suggestion by Spinoza laureate Marten Scheffer of Wageningen University & Research for a model in which researchers receive baseline non-competitive funding, with the stipulation that they distribute part of that funding among other researchers (for more information on this idea, see e.g. Volkskrant, May 22, 2016, in Dutch).

In a plenary parliamentary debate on May 23, 2016, MP Eppo Bruins tabled a motion to experiment with innovative methods for distributing research funding (or, in other words, ex ante research assessment) (Figure 3A). The motion was accepted by Sander Dekker, and thus did not need to be voted on by parliament. On September 23, Minister Jet Bussemaker announced that, following the motion, NWO had been asked to take action on this idea, if possible in collaboration with the Netherlands Bureau for Economic Policy Analysis (CPB) (Figure 3B).


Figure 3. Parliamentary motion by MP Eppo Bruins, June 23, 2016 (A); follow-up by Minister Bussemaker in letter to parliament, September 23, 2016 (B)

As of January 2017, it is unclear what the status of these experiments is. It would be welcome if NWO publicly shared any concrete plans it has in this regard and, more broadly, its thoughts and ideas about good criteria for research assessment, as Sander Dekker promised to request of the funding agency.

Changes in funding instruments are expected following the recent reorganization of NWO, which aims to harmonize funding instruments across domains. This looks like a good opportunity to implement and clearly communicate a vision on indicators for research assessment, signalling a move away from the IF and stimulating applicants to focus on other ways of demonstrating research impact.

And will we then see NWO sign DORA, too?


[update 20170111: On January 10, NWO announced a ‘national working conference’ on ex ante research assessment. This conference will take place on April 4, 2017 in Amsterdam. Applications to attend can be made by expressing interest through this form, after which a selection will be made. Though NWO stresses knowledge of existing procedures as a condition to apply, I sincerely hope they will also include people who have experience and ideas *outside* current procedures, as this will increase both innovative thinking and broader support.

Following the national conference, NWO will also organize an international conference on this theme, aimed at European research councils and other granting organizations. In this regard, it would be interesting to look at the recently formed Open Research Funders Group, a US-based initiative launched in collaboration with SPARC (Scholarly Publishing and Academic Resources Coalition).]


*For a good critique of the concept of excellence in research culture, see Excellence R Us, an article by Samuel Moore, Cameron Neylon, Martin Paul Eve, Daniel O'Donnell and Damian Pattinson, shared on Figshare.


Full speed ahead with open science

Jeroen Bosman & Bianca Kramer

Crick Institute – October 19, London. Some 35 researchers and 20 publishers, funders and librarians gather at the magnificent brand-new Francis Crick Institute premises, between the equally magnificent St. Pancras railway station and the British Library. The Crick Institute is a partnership between the Medical Research Council, Cancer Research UK, the Wellcome Trust and three London universities: Imperial College London, King's College London and University College London.

Open Research London – The meeting is organised by Open Research London (@OpenResLDN), an irregularly recurring event originally started by Ross Mounce, Jon Tennant and Torsten Reimer and now coordinated by Frank Norman of the Crick Institute. Open Research London (ORL) is an informal group formed to promote sharing and collaboration in research. This specific ORL meeting focused on Open Science tools and workflows. There were presentations on bioRxiv and Wellcome Open Research, and we moderated a one-hour mini-workshop.

bioRxiv – John Inglis, executive director of Cold Spring Harbor Laboratory Press and co-founder of bioRxiv, told us how this preprint archive, launched in November 2013, was modeled on arXiv, the preprint platform that over the last two decades has gained a dominant position in the sharing of papers in physics, astronomy and mathematics. Cold Spring Harbor Laboratory Press manages bioRxiv, which is aimed at the broad spectrum of the life sciences, from computational biology to paleontology. So far, over 6,200 papers from over 30,000 authors have been accepted, with current submission levels at ~450/month.


bioRxiv has rejected less than 10% of submissions, e.g. because they were off-topic or contained plagiarism. First indications are that some 60% of the papers shared through bioRxiv are published within two years. The initial success of bioRxiv is at least partially ascribed to its high-level support, with for instance Anurag Acharya (Google Scholar) and Paul Ginsparg (arXiv) on the advisory board. bioRxiv now seems to be past the experimental phase, as it is accepted by researchers and many journals. Still, it is working on further improvements, together with HighWire, which hosts the archive. It is also trying to find solutions for issues such as what to do with a preprint when the related journal article is retracted.

Wellcome Open Research – Equally interesting is the experimental online journal Wellcome Open Research, presented by Robert Kiley (Head of Digital Services at the Wellcome Library). While its setup, with fast publishing, post-publication peer review and a focus on sharing all kinds of research outputs, is already innovative, the real experiment lies in ownership and author selection. This is a funder's journal (anyone know of other examples?), and submissions are restricted to papers, datasets etc. with at least one Wellcome-funded (co-)author. The journal, hosted and operated by F1000, will be fully open, with expected (but internally paid) APCs of between 150 and 900 US$.

Open Science workflow – We were granted the second half of the evening and led the group in a pursuit of Open Science workflows. For that we built on the insights and material developed in the 101 Innovations in Scholarly Communication project. After a short introduction on the concept of workflows, illustrated by some hypothetical workflow examples, the participants got into action.
First, in pairs, they briefly discussed the function and use of specific scholarly communication tools, which had been given to them as small button-like plastic circles. Next, people had to match ‘their’ tool to one of the 120+ cards with research practices hanging on the wall, organized by research activity. Of course, people could also create additional cards and circles with practices and tools.

Then came the most exciting part: we jointly composed an open science workflow by taking cards with research practices that should be part of such a workflow from the wall and hanging them together on an empty canvas with seven research phases. In what little time remained we annotated parts of the resulting workflow with suns and clouds, indicating specific enabling and constraining factors. The resulting workflow consisted of 69 practices, of which 31 were supported by some 50 tools.


The whole process was actually less chaotic and messy than expected, though it would have been good to discuss the resulting workflow in more depth. Is it not too crowded? Are all practices relevant for every researcher? Why were some practices included and others left out? Which tools work well together and which don't? And what about the practices with no supporting tool attached to them: was that simply due to lack of time during this session?

Though there was indeed not enough time to dive into those questions, the participants seemed quite interested in the final result and keen to work on this together. And if we left any pieces of tape etc. on the wall, we sincerely apologize to the Crick ;-)

For us, the meeting was a very good end to three intensive days that also included a workshop and a presentation at Internet Librarian International on research support by libraries. And one of the great pleasures was meeting @DVDGC13!


Pay It Forward: solid research into the sustainability of the Gold Open Access model

The University of California Libraries have carried out a large-scale study into the sustainability of a fully Gold Open Access model. The Scholarly Kitchen reacted somewhat sourly that the study only demonstrated what we had known for years.

One interesting element of the report is the possible balance between funding APCs from the institutional budget and from the budgets of research funders. The researchers looked at the “Library Break Even Point”: if the library budget for subscriptions is converted into an OA fund for APCs and all journal articles are published Gold OA, what can the average APC then be?

As expected, that average APC differs strongly per university: after all, one university produces more publications than another. But the researchers sketch several scenarios. If the university has to bear all the costs of open access publishing, it cannot afford the APCs currently charged. But in a second scenario, in which research funders bear the publication costs for “their” research, universities can indeed afford the current APCs. At all but the most research-intensive universities, the break-even point is even well above the average APC currently charged.
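To make the break-even logic concrete, here is a minimal sketch with made-up numbers; the budget, article count and funder-covered share are illustrative, not figures from the report.

```python
# Library break-even APC, with illustrative numbers (not from the report).
subscription_budget = 5_000_000  # annual subscription spend converted to an APC fund, USD
articles_per_year = 2_500        # articles with a corresponding author at the university

# Scenario 1: the university pays the APC for every article.
print(subscription_budget / articles_per_year)  # 2000.0 USD break-even APC

# Scenario 2: research funders pay the APCs for funded projects,
# leaving fewer articles to be covered by the library budget.
funder_covered_share = 0.4
remaining = articles_per_year * (1 - funder_covered_share)
print(subscription_budget / remaining)          # ~3333.33 USD break-even APC
```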

This second scenario matches practice in the Netherlands, with the EU and NWO funding open access. That universities can bear the costs of open access in this scenario does not, of course, change the fact that research funders do face extra costs.

See pages 95-99 of the report.

Another element I found interesting was the calculation of costs per article (p. 83-90): what does it cost a publisher to produce an article, and what would (therefore) be a possible/justifiable/expected APC?

In the workshop on open access funding that I have given, I refer to an article in Nature which calculates that publishers generate an average of USD 5,000 in revenue per article, and that (at an estimated profit margin of 20 to 30%) this means producing an article must cost them on average USD 3,500-4,000.
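That back-of-the-envelope calculation in code form, using the figures as cited from the Nature article:

```python
# USD 5,000 revenue per article minus an estimated 20-30% profit margin.
revenue = 5000
print([revenue * (1 - margin) for margin in (0.20, 0.30)])  # [4000.0, 3500.0]
```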

This study from California arrives at different figures. The researchers tried to establish those costs in five different ways, ranging from asking the publishers to interpreting public tax records. The resulting average cost price varies from USD 960 to USD 2,508.

The researchers themselves indicate that their approach “is not meant to replicate current operations for established subscription-based publishers”. These are not the costs the big publishers incur to produce a single article (the USD 3,500-4,000 from the Nature article), but the APCs at which a (fully open access, not-for-profit) publisher can operate soundly.

I have only reached page 99 of the report, so perhaps more later.
