Systematic literature review of “Teaching Open Science”

In my opinion, we do not pay enough attention to teaching Open Science in higher education. I therefore designed a seminar that teaches students the practices of Open Science through doing qualitative research, and described it in the article “Teaching Open Science and qualitative methods”. For that article, I began reviewing the literature on “Teaching Open Science”. The result of this review is that certain aspects of Open Science are used in teaching. However, Open Science in its full range (Open Access, Open Data, Open Methodology, Open Science Evaluation and Open Science Tools) is not an issue in publications about teaching.

Based on this insight, I have started a systematic literature review. I quickly realized that I need help to analyse and interpret the articles and to evaluate my preliminary findings. The different disciplinary cultures of teaching the various aspects of Open Science are especially challenging, because as a social scientist I do not have enough insight into other fields to interpret the results correctly. Therefore, I would like to invite you to participate in this research project!

I am now looking for people who would like to join a collaborative process to further explore and write the systematic literature review on “Teaching Open Science”, because I want to turn this project into a Massively Open Online Paper (MOOP). According to the ten simple rules of Tennant et al. (2019) on MOOPs, it is crucial to find a core group that is enthusiastic about the topic. I am therefore looking for people who are interested in creating the structure of the paper and writing it together with me, as well as people who want to search for and review further literature or evaluate the literature I have already found. Together with the interested persons, I would then define the rules for the project (cf. Tennant et al. 2019). So if you are interested in contributing to the further search for articles and/or to the interpretation and writing up of results, please get in touch. For everyone interested in contributing, the list of articles collected so far is freely accessible in a Zotero group: https://www.zotero.org/groups/2359061/teaching_open_science. The figure shown below provides a first overview of my ongoing work. I created it with the free software yEd and uploaded the file to Zenodo, so everyone can download and work with it: https://zenodo.org/record/3371415.

To make transparent what I have done so far, I will first introduce what a systematic literature review is. Second, I will describe the decisions I made in starting the review. Third, I will present the preliminary results.

Systematic literature review – an introduction

Systematic literature reviews “are a method of mapping out areas of uncertainty, and identifying where little or no relevant research has been done” (Petticrew and Roberts 2008: 2). Fink defines the systematic literature review as a “systematic, explicit, and reproducible method for identifying, evaluating, and synthesizing the existing body of completed and recorded work produced by researchers, scholars, and practitioners” (Fink 2019: 6). The aim of a systematic literature review is to overcome the subjectivity of a researcher’s search for literature. However, there can never be a fully objective selection of articles, because the researcher has already made a preselection, for example by deciding on search strings such as “Teaching Open Science”. In this respect, transparency is the core criterion for a high-quality review.

In order to achieve high quality and transparency, Fink (2019: 6-7) proposes the following seven steps:

  1. Selecting a research question.
  2. Selecting the bibliographic database.
  3. Choosing the search terms.
  4. Applying practical screening criteria.
  5. Applying methodological screening criteria.
  6. Doing the review.
  7. Synthesizing the results.

I have adapted these steps for the “Teaching Open Science” systematic literature review. In the following, I will present the decisions I have made.

Systematic literature review – decisions I made

  1. Research question: I am interested in the following research questions: How is Open Science taught in higher education? Is Open Science taught in its full range, with all aspects such as Open Access, Open Data, Open Methodology, Open Science Evaluation and Open Science Tools? Which aspects are taught? Are there disciplinary differences as to which aspects are taught and, if so, why?
  2. Databases: I started my search at the Directory of Open Access Journals (DOAJ). “DOAJ is a community-curated online directory that indexes and provides access to high quality, open access, peer-reviewed journals.” (https://doaj.org/) Secondly, I used the Bielefeld Academic Search Engine (BASE). BASE is operated by Bielefeld University Library and is “one of the world’s most voluminous search engines especially for academic web resources” (base-search.net). Both platforms are non-commercial and focus on Open Access publications, and thus differ from commercial publication databases such as Web of Science and Scopus. For this project, I deliberately decided against commercial providers and against restricting the search to indexed journals, because my explicit aim was to find articles that are open in the spirit of Open Science.
  3. Search terms: To identify articles about teaching Open Science, I used the following search strings: “teaching open science” OR teaching “open science” OR teach “open science”. The topic search looked for these strings in the title, abstract and keywords of articles. Since these are very narrow search terms, I decided to broaden the method: I searched the reference lists of all articles found this way for further relevant literature, and I used Google Scholar to check which other authors cited the articles in the sample. If the articles found in this way met my methodological criteria, I included them in the sample and in turn went through their reference lists and Google Scholar citations. This process has not yet been completed. (A sketch of how such a search could be scripted is shown below this list.)
  4. Practical screening criteria: I have included English and German articles in the sample, as I speak these languages (articles in other languages are very welcome, if there are people who can interpret them!). Only journal articles, articles in edited volumes, working papers and conference papers from proceedings were included in the sample. I checked whether the journals were predatory journals; such articles were not included. I did not include blog posts, books or newspaper articles. I only included articles whose full texts are accessible via my institution (University of Kassel). As a result, articles recently published by Elsevier could not be included, because of the special situation in Germany regarding Projekt DEAL (https://www.projekt-deal.de/about-deal/). For articles that are not freely accessible, I checked whether an accessible version exists in a repository or whether a preprint is available. If this was not the case, the article was not included. I started the analysis in May 2019.
  5. Methodological criteria: The method of checking reference lists described above carries the problem of subjectivity. Therefore, I hope that other people will be interested in this project and evaluate my decisions. I have used the following criteria as the basis for my decisions: First, the articles must focus on teaching. This means, for example, that articles must describe how a course was designed and carried out. Second, at least one aspect of Open Science has to be addressed. The aspects can be very diverse (FOSS, repositories, wikis, data management, etc.) but have to comply with the principles of openness. For example, I included an article if it deals with the use of FOSS in class and addresses the openness of FOSS. I did not include articles in which the authors describe the use of a particular free and open source software for teaching but do not address the principles of openness or re-use. (The criteria of steps 4 and 5 are summarised as a checklist in the second sketch below this list.)
  6. Doing the review: Thanks to the methodical approach of going through the reference lists, it is possible to create a map of how the articles relate to each other. This results in thematic clusters and connections between clusters. The starting point for the map were four articles (Cook et al. 2018; Marsden, Thompson, and Plonsky 2017; Petras et al. 2015; Toelch and Ostwald 2018) that I found using the databases and criteria described above. I used yEd to generate the network: “yEd is a powerful desktop application that can be used to quickly and effectively generate high-quality diagrams.” (https://www.yworks.com/products/yed) In the network, arrows show which articles an article cites and which articles are themselves cited by others. In addition, I made an initial rough classification of the content using colours, based on the contents mentioned in each article’s title and abstract. This rough classification requires a more exact, i.e. content-based, subdivision and evaluation by others who are experts in the respective fields/disciplines. (The third sketch below shows how such a network can be written to a file that yEd reads.)
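
To make the search step concrete, here is a minimal sketch of how the three search strings could be run against DOAJ’s public article-search API and merged into one sample, de-duplicated by DOI. The endpoint path, parameter names and result structure are assumptions based on the DOAJ API documentation, so please verify them against the current docs; BASE also offers an HTTP interface, but it requires registration and is omitted here.

```python
# Sketch: run the three search strings against DOAJ's article-search API
# and merge the hits into one sample, de-duplicated by DOI.
# Endpoint path, query parameters, and the result structure (bibjson)
# are assumptions based on the DOAJ API docs -- verify before use.
from urllib.parse import quote

import requests

SEARCH_STRINGS = [
    '"teaching open science"',
    'teaching "open science"',
    'teach "open science"',
]


def doaj_search(query: str, page_size: int = 50) -> list:
    """Return the first page of DOAJ article hits for one search string."""
    url = f"https://doaj.org/api/search/articles/{quote(query)}"
    response = requests.get(url, params={"pageSize": page_size}, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])


seen, sample = set(), []
for search_string in SEARCH_STRINGS:
    for hit in doaj_search(search_string):
        identifiers = hit.get("bibjson", {}).get("identifier", [])
        doi = next((i["id"] for i in identifiers if i.get("type") == "doi"), None)
        if doi and doi not in seen:
            seen.add(doi)
            sample.append(hit)

print(f"{len(sample)} unique articles across all search strings")
```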
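
The practical and methodological screening criteria (steps 4 and 5) can also be written down as an explicit checklist, which makes the inclusion decisions easier for others to evaluate and reproduce. All field names of the article record below are illustrative assumptions, not a real database schema:

```python
# Sketch: the inclusion criteria from steps 4 and 5 as one explicit check.
# All field names of the `article` dict are illustrative assumptions.
ALLOWED_LANGUAGES = {"en", "de"}
ALLOWED_TYPES = {
    "journal article", "edited volume chapter",
    "working paper", "conference paper",
}


def passes_screening(article: dict) -> bool:
    """An article enters the sample only if every criterion is met."""
    return (
        article["language"] in ALLOWED_LANGUAGES
        and article["type"] in ALLOWED_TYPES
        and not article["predatory_journal"]
        # accessible via the institution, a repository, or as a preprint
        and article["fulltext_accessible"]
        # must describe how a course was designed and carried out
        and article["focuses_on_teaching"]
        # at least one Open Science aspect, addressed as openness/re-use
        and article["addresses_openness"]
    )
```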
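
Finally, the citation network does not have to be drawn entirely by hand: yEd’s native file format is GraphML, so the articles and citation links can be collected in a script, written to GraphML, and then laid out and coloured in yEd (e.g. via its Properties Mapper). Below is a minimal sketch using the Python library networkx; the cluster labels are my reading of the four seed articles’ titles, and the single edge is illustrative only:

```python
# Sketch: collect the citation network and write it as GraphML for yEd.
# Cluster labels reflect my reading of the four seed articles' titles;
# the example edge is illustrative, not a verified citation.
import networkx as nx

G = nx.DiGraph()  # edge direction: citing article -> cited article

seed_articles = {
    "Cook et al. 2018": "pink/red (data management / digital literacy)",
    "Marsden, Thompson, and Plonsky 2017": "green (repositories)",
    "Petras et al. 2015": "orange (geospatial FOSS)",
    "Toelch and Ostwald 2018": "blue (reproducible workflows)",
}
for article, cluster in seed_articles.items():
    G.add_node(article, cluster=cluster)

# Edges found while going through reference lists would be added like this:
G.add_edge("Toelch and Ostwald 2018", "Marsden, Thompson, and Plonsky 2017")

nx.write_graphml(G, "teaching_open_science.graphml")  # open this file in yEd
```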

Preliminary results

The following map presents an overview of the articles I reviewed. The articles highlighted in grey are work in progress, as I have not yet gone through their bibliographies or checked in Google Scholar who cites them. Accordingly, the literature search is not finished. So far, I have identified five thematic clusters and two solitary articles:

  1. Brown Cluster: Use of free and open source software in Information and Computer Sciences.
  2. Orange Cluster: Use of geospatial free and open source software in teaching.
  3. Turquoise Cluster: Use of wikis in teaching.
  4. Pink/Red Cluster: Teaching data management and achieving data/digital literacy.
  5. Blue Cluster: Teaching reproducible science workflows and replication studies.
  6. Green Articles (the two solitary articles): Use of repositories in teaching.

As the map shows, the biggest cluster (so far) is the brown cluster, whose articles are strongly interlinked. The same may turn out to be true for the turquoise cluster, which is the one I have worked on least so far. Certain articles connect the orange, the brown and the turquoise clusters; the common basis of these three clusters is the use of free and open source software in teaching. The pink/red cluster and the blue cluster, by contrast, are not (yet) connected with the other clusters. The map also contains two articles highlighted in green that discuss how repositories can be used in teaching. One of the green articles is in the centre of the brown cluster.

I am eagerly looking forward to your feedback, collaboration and contributions!

Figure: Map of the clusters of “Teaching Open Science”, showing the five clusters described above.

Literature

  • Cook, Katherine, Canan Çakirlar, Timothy Goddard, Robert Carl DeMuth, and Joshua Wells. 2018. ‘Teaching Open Science: Published Data and Digital Literacy in Archaeology Classrooms’. Advances in Archaeological Practice 6 (02): 144–56. https://doi.org/10.1017/aap.2018.5.
  • Fink, Arlene. 2019. Conducting Research Literature Reviews: From the Internet to Paper. SAGE Publications.
  • Marsden, Emma, Sophie Thompson, and Luke Plonsky. 2017. ‘Open Science in Second Language Acquisition Research: The IRIS Repository of Research Materials and Data’. Edited by C. Granget, M.-A. Dat, D. Guedat-Bittighoffer, and C. Cuet. SHS Web of Conferences 38: 1–5. https://doi.org/10.1051/shsconf/20173800013.
  • Petras, Vaclav, Anna Petrasova, Brendan Harmon, Ross Meentemeyer, and Helena Mitasova. 2015. ‘Integrating Free and Open Source Solutions into Geospatial Science Education’. ISPRS International Journal of Geo-Information 4 (2): 942–56. https://doi.org/10.3390/ijgi4020942.
  • Petticrew, Mark, and Helen Roberts. 2008. Systematic Reviews in the Social Sciences: A Practical Guide. John Wiley & Sons.
  • Tennant, Jonathan, Natalia Z Bielczyk, Veronika Cheplygina, Bastian Greshake Tzovaras, Chris Hubertus Joseph Hartgerink, Johanna Havemann, Paola Masuzzo, and Tobias Steiner. 2019. ‘Ten Simple Rules for Researchers Collaborating on Massively Open Online Papers (MOOPs)’. Preprint. MetaArXiv. https://doi.org/10.31222/osf.io/et8ak.
  • Toelch, Ulf, and Dirk Ostwald. 2018. ‘Digital Open Science—Teaching Digital Tools for Reproducible and Transparent Research’. PLOS Biology 16 (7): e2006022. https://doi.org/10.1371/journal.pbio.2006022.


Cite this blog post:
Isabel Steinhardt (2019, August 20). Systematic literature review of “Teaching Open Science”. Sozialwissenschaftliche Methodenberatung. https://doi.org/10.58079/uiq0

5 thoughts on “Systematic literature review of ‘Teaching Open Science’”

  1. Thank you for this brief introduction to your planned MOOP.
    The outline is well defined and there are some interesting clusters.
    The search engines are interesting in that you remain only within open searches. However, do you know how well/deep these engines carry out their searches? Also, are there links with institutional libraries? If I am at a university carrying out the search, it may be different from doing it at home without a VPN.
    I’ll be happy to chat further

    1. Dear Kwok,
      thank you very much for this interesting question. At the moment I do not have an answer, but I will have a look and try to find out. I used a VPN client during the research process.
      Best Isabel

  2. I’d like to suggest a slightly broader approach regarding databases, search terms, and search strategy.

    1. Databases
    Concentrating on open instruments sadly narrows the results. Maybe using The Lens is a viable alternative to Scopus or Web of Science. About The Lens: https://about.lens.org/.

    2. Search terms
    I skimmed through the Zotero library and felt there was a lot missing. Since you are interested in teaching, it seems appropriate to include search terms like “curriculum”.

    3. Search strategy
    It seems wise to use wildcards and boolean operators to build a search, like (teach* OR curricu* OR learn*) AND “open science”.

    Using this search phrase and your selection of document types, I find ~1000 works in The Lens:
    https://www.lens.org/lens/scholar/search/results?q=(teach*%20OR%20curricu*%20OR%20learn*)%20AND%20%22open%20science%22&page=0&limit=10&orderBy=%2Bscore&filterMap=%7B%22publication_type%22:%7B%22journal%20article%22:true,%22conference%20proceedings%22:true,%22book%20chapter%22:true,%22book%22:true,%22conference%20proceedings%20article%22:true%7D%7D&dateFilterField=year_published&previewType=SCHOLAR_ANALYSIS&preview=true&regexEnabled=false&useAuthorId=false

    This is a lot. Only open access publications (891):

    https://www.lens.org/lens/scholar/search/results?q=(teach*%20OR%20curricu*%20OR%20learn*)%20AND%20%22open%20science%22&page=0&limit=10&orderBy=%2Bscore&filterMap=%7B%22publication_type%22:%7B%22journal%20article%22:true,%22conference%20proceedings%22:true,%22book%20chapter%22:true,%22book%22:true,%22conference%20proceedings%20article%22:true%7D,%22is_open_access%22:true%7D&dateFilterField=year_published&previewType=SCHOLAR_ANALYSIS&preview=true&regexEnabled=false&useAuthorId=false

    Learn* seems to unnecessarily bloat the results. Without it, we get much more reasonable numbers (326):
    https://www.lens.org/lens/scholar/search/results?q=(teach*%20%20OR%20curricu*)%20%20AND%20%22open%20science%22&page=0&limit=10&orderBy=%2Bscore&filterMap=%7B%22publication_type%22:%7B%22journal%20article%22:true,%22conference%20proceedings%22:true,%22book%20chapter%22:true,%22book%22:true,%22conference%20proceedings%20article%22:true%7D%7D&dateFilterField=year_published&previewType=SCHOLAR_ANALYSIS&preview=true&regexEnabled=false&useAuthorId=false

    From there on, it seems viable to sort out the noise.

    1. Dear CH,
      thank you so much for this comment and all the time you invested! Your comment gives me the chance to make my concern even clearer. What you are suggesting is a “classic” literature search with search strings, designed to generate as many hits as possible and then sort out the noise. I have already used this procedure (“Mapping the quality assurance of teaching and learning in higher education: the emergence of a specialty?”), but decided against it here. The reason is that I was interested in which aspects of Open Science are taught, which I may not know in advance. It is a qualitative, reconstructive approach, which may be unusual for a systematic literature review. That is why I created the graphics (the network) by hand and not via, e.g., VOSviewer using algorithms. We must certainly formulate the criteria even more clearly as to why each article was included in the sample! This is not defined well enough yet…
      But it would be exciting to compare the results of this project with those of a second project using a classical mapping procedure. It seems to me that a second project is being created here?!
      Best Isabel
