• Open access
  • Published: 14 August 2018

Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies

  • Chris Cooper   ORCID: orcid.org/0000-0003-0864-5607 1 ,
  • Andrew Booth 2 ,
  • Jo Varley-Campbell 1 ,
  • Nicky Britten 3 &
  • Ruth Garside 4  

BMC Medical Research Methodology volume 18, Article number: 85 (2018)


Background

Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving readers clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before.

The purpose of this review is to determine if a shared model of the literature searching process can be detected across systematic review guidance documents and, if so, how this process is reported in the guidance and supported by published studies.

Method

A literature review.

Two types of literature were reviewed: guidance and published studies. Nine guidance documents were identified, including the Cochrane and Campbell Handbooks. Published studies were identified through ‘pearl growing’, citation chasing, a search of PubMed using the systematic review methods filter, and the authors’ topic knowledge.

The relevant sections within each guidance document were then read and re-read, with the aim of determining key methodological stages. Methodological stages were identified and defined. These data were reviewed to identify agreements and areas of unique guidance between guidance documents. Consensus across multiple guidance documents was used to inform the selection of ‘key stages’ in the process of literature searching.

Results

Eight key stages were determined relating specifically to literature searching in systematic reviews. They were: who should undertake the literature search, the aims and purpose of literature searching, preparation, the search strategy, searching databases, supplementary searching, managing references, and reporting the search process.

Conclusions

Eight key stages to the process of literature searching in systematic reviews were identified. These key stages are consistently reported in the nine guidance documents, suggesting consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews. Further research to determine the suitability of using the same process of literature searching for all types of systematic review is indicated.


Background

Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving review stakeholders clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before. This is in contrast to the information science literature, which has developed information processing models as an explicit basis for dialogue and empirical testing. Without an explicit model, research into the process of systematic literature searching will remain immature and potentially uneven, and the development of shared information models will be assumed but never articulated.

One way of developing such a conceptual model is by formally examining the implicit “programme theory” as embodied in key methodological texts. The aim of this review is therefore to determine if a shared model of the literature searching process in systematic reviews can be detected across guidance documents and, if so, how this process is reported and supported.

Identifying guidance

Key texts (henceforth referred to as “guidance”) were identified based upon their accessibility to, and prominence within, United Kingdom systematic reviewing practice. The United Kingdom occupies a prominent position in the science of health information retrieval, as quantified by such objective measures as the authorship of papers, the number of Cochrane groups based in the UK, membership and leadership of groups such as the Cochrane Information Retrieval Methods Group and the HTA-I Information Specialists’ Group, and historic association with such centres as the UK Cochrane Centre, the NHS Centre for Reviews and Dissemination, the Centre for Evidence Based Medicine and the National Institute for Clinical Excellence (NICE). Coupled with the linguistic dominance of English within medical and health science and the science of systematic reviews more generally, this offers a justification for a purposive sample that favours UK, European and Australian guidance documents.

Nine guidance documents were identified. These documents provide guidance for different types of reviews, namely: reviews of interventions, reviews of health technologies, reviews of qualitative research studies, reviews of social science topics, and reviews to inform guidance.

Whilst these guidance documents occasionally offer additional guidance on other types of systematic reviews, we have focused on the core and stated aims of these documents as they relate to literature searching. Table 1 sets out the guidance documents, the version audited, their core stated focus, and a bibliographical pointer to the main guidance relating to literature searching.

Once a list of key guidance documents was determined, it was checked by six senior information professionals based in the UK for relevance to current literature searching in systematic reviews.

Identifying supporting studies

In addition to identifying guidance, the authors sought to populate an evidence base of supporting studies (henceforth referred to as “studies”) that contribute to existing search practice. Studies were first identified by the authors from their knowledge of this topic area and, subsequently, through systematic citation chasing of key studies (‘pearls’ [1]) located within each key stage of the search process. These studies are identified in Additional file 1: Appendix Table 1. Citation chasing was conducted by analysing the bibliography of references for each study (backwards citation chasing) and through Google Scholar (forward citation chasing). A search of PubMed using the systematic review methods filter was undertaken in August 2017 (see Additional file 1). The search terms used were (literature search*[Title/Abstract]) AND sysrev_methods[sb], and 586 results were returned. These results were sifted for relevance to the key stages in Fig. 1 by CC.
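To illustrate how a search like this can be re-run programmatically, the sketch below (our illustration, not the method used in the review) submits the same query string to PubMed via Biopython’s Entrez interface; the contact email is a placeholder, and the live result count will differ from the 586 records retrieved in August 2017.

```python
# Sketch: re-running the PubMed search reported above via the NCBI E-utilities.
# Requires Biopython (pip install biopython); the email address is a placeholder.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks for a contact address

# Title/abstract phrase combined with the systematic review methods filter,
# exactly as reported in the text.
query = "(literature search*[Title/Abstract]) AND sysrev_methods[sb]"

handle = Entrez.esearch(db="pubmed", term=query, retmax=1000)
record = Entrez.read(handle)
handle.close()

print("Records found:", record["Count"])
print("First PMIDs:", record["IdList"][:10])
```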

Fig. 1 The key stages of literature search guidance as identified from nine key texts

Extracting the data

To reveal the implicit process of literature searching within each guidance document, the relevant sections (chapters) on literature searching were read and re-read, with the aim of determining key methodological stages. We defined a key methodological stage as a distinct step in the overall process for which specific guidance is reported and action is taken, and which, taken together with the other stages, would result in a completed literature search.

The chapter or section sub-heading for each methodological stage was extracted into a table using the exact language as reported in each guidance document. The lead author (CC) then read and re-read these data, and the paragraphs of the document to which the headings referred, summarising section details. This table was then reviewed, using comparison and contrast to identify agreements and areas of unique guidance. Consensus across multiple guidelines was used to inform selection of ‘key stages’ in the process of literature searching.

Having determined the key stages to literature searching, we then read and re-read the sections relating to literature searching again, extracting specific detail relating to the methodological process of literature searching within each key stage. Again, the guidance was then read and re-read, first on a document-by-document basis and, secondly, across all the documents, to identify both commonalities and areas of unique guidance.

Results and discussion

Our findings

We were able to identify consensus across the guidance on literature searching for systematic reviews suggesting a shared implicit model within the information retrieval community. Whilst the structure of the guidance varies between documents, the same key stages are reported, even where the core focus of each document is different. We were able to identify specific areas of unique guidance, where a document reported guidance not summarised in other documents, together with areas of consensus across guidance.

Unique guidance

Only one document provided guidance on the topic of when to stop searching [ 2 ]. This guidance from 2005 anticipates a topic of increasing importance with the current interest in time-limited (i.e. “rapid”) reviews. Quality assurance (or peer review) of literature searches was only covered in two guidance documents [ 3 , 4 ]. This topic has emerged as increasingly important as indicated by the development of the PRESS instrument [ 5 ]. Text mining was discussed in four guidance documents [ 4 , 6 , 7 , 8 ] where the automation of some manual review work may offer efficiencies in literature searching [ 8 ].

Agreement between guidance: Defining the key stages of literature searching

Where there was agreement on the process, we determined that this constituted a key stage in the process of literature searching to inform systematic reviews.

From the guidance, we determined eight key stages that relate specifically to literature searching in systematic reviews. These are summarised in Fig. 1. The data extraction table informing Fig. 1 is reported in Table 2. Table 2 reports the areas of common agreement and it demonstrates that the language used to describe key stages and processes varies significantly between guidance documents.

For each key stage, we set out the specific guidance, followed by discussion on how this guidance is situated within the wider literature.

Key stage one: Deciding who should undertake the literature search

The guidance.

Eight documents provided guidance on who should undertake literature searching in systematic reviews [2, 4, 6, 7, 8, 9, 10, 11]. The guidance affirms that people with relevant expertise in literature searching should ‘ideally’ be included within the review team [6]. Information specialists (or information scientists), librarians or trial search co-ordinators (TSCs) are indicated as appropriate researchers in six guidance documents [2, 7, 8, 9, 10, 11].

How the guidance corresponds to the published studies

The guidance is consistent with studies that call for the involvement of information specialists and librarians in systematic reviews [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26] and which demonstrate how their training as ‘expert searchers’ and ‘analysers and organisers of data’ can be put to good use [13] in a variety of roles [12, 16, 20, 21, 24, 25, 26]. These arguments make sense in the context of the aims and purposes of literature searching in systematic reviews, explored below. The need for ‘thorough’ and ‘replicable’ literature searches was fundamental to the guidance and recurs in key stage two. Studies have found poor reporting, and a lack of replicable literature searches, to be a weakness in systematic reviews [17, 18, 27, 28], and they argue that involvement of information specialists/librarians would be associated with better reporting and better quality literature searching. Indeed, Meert et al. [29] demonstrated that involving a librarian as a co-author correlated with a higher score in the literature searching component of a systematic review [29]. As ‘new styles’ of rapid and scoping reviews emerge, where decisions on how to search are more iterative and creative, a clear role is identified here too [30].

Knowing where to search for studies was noted as important in the guidance, with no agreement as to the appropriate number of databases to be searched [ 2 , 6 ]. Database (and resource selection more broadly) is acknowledged as a relevant key skill of information specialists and librarians [ 12 , 15 , 16 , 31 ].

Whilst arguments for including information specialists and librarians in the process of systematic review might be considered self-evident, Koffel and Rethlefsen [31] have questioned whether the necessary involvement is actually happening.

Key stage two: Determining the aim and purpose of a literature search

The aim: Five of the nine guidance documents use adjectives such as ‘thorough’, ‘comprehensive’, ‘transparent’ and ‘reproducible’ to define the aim of literature searching [ 6 , 7 , 8 , 9 , 10 ]. Analogous phrases were present in a further three guidance documents, namely: ‘to identify the best available evidence’ [ 4 ] or ‘the aim of the literature search is not to retrieve everything. It is to retrieve everything of relevance’ [ 2 ] or ‘A systematic literature search aims to identify all publications relevant to the particular research question’ [ 3 ]. The Joanna Briggs Institute reviewers’ manual was the only guidance document where a clear statement on the aim of literature searching could not be identified. The purpose of literature searching was defined in three guidance documents, namely to minimise bias in the resultant review [ 6 , 8 , 10 ]. Accordingly, eight of nine documents clearly asserted that thorough and comprehensive literature searches are required as a potential mechanism for minimising bias.

The need for thorough and comprehensive literature searches appears uniform within the eight guidance documents that describe approaches to literature searching in systematic reviews of effectiveness. Reviews of effectiveness (of intervention or cost), accuracy and prognosis require thorough and comprehensive literature searches to transparently produce a reliable estimate of intervention effect. The belief that all relevant studies have been ‘comprehensively’ identified, and that this process has been ‘transparently’ reported, increases confidence in the estimate of effect and the conclusions that can be drawn [32]. The supporting literature exploring the need for comprehensive literature searches focuses almost exclusively on reviews of intervention effectiveness and meta-analysis. Different ‘styles’ of review may have different standards, however; the alternative, offered by purposive sampling, has been suggested in the specific context of qualitative evidence syntheses [33].

What is a comprehensive literature search?

Whilst the guidance calls for thorough and comprehensive literature searches, it lacks clarity on what constitutes a thorough and comprehensive literature search, beyond the implication that all of the literature search methods in Table 2 should be used to identify studies. Egger et al. [ 34 ], in an empirical study evaluating the importance of comprehensive literature searches for trials in systematic reviews, defined a comprehensive search for trials as:

a search not restricted to English language;

where Cochrane CENTRAL or at least two other electronic databases had been searched (such as MEDLINE or EMBASE); and

at least one of the following search methods had been used to identify unpublished trials: searches for (i) conference abstracts, (ii) theses, (iii) trials registers, and (iv) contacts with experts in the field [34].

Tricco et al. (2008) used a similar threshold of bibliographic database searching AND a supplementary search method in a review when examining the risk of bias in systematic reviews. Their criteria were: one database (limited using the Cochrane Highly Sensitive Search Strategy (HSSS)) and handsearching [ 35 ].

Together with the guidance, this would suggest that comprehensive literature searching requires the use of BOTH bibliographic database searching AND supplementary search methods.

Comprehensiveness in literature searching, in the sense of how much searching should be undertaken, remains unclear. Egger et al. recommend that ‘investigators should consider the type of literature search and degree of comprehension that is appropriate for the review in question, taking into account budget and time constraints’ [ 34 ]. This view tallies with the Cochrane Handbook, which stipulates clearly, that study identification should be undertaken ‘within resource limits’ [ 9 ]. This would suggest that the limitations to comprehension are recognised but it raises questions on how this is decided and reported [ 36 ].

What is the point of comprehensive literature searching?

The purpose of thorough and comprehensive literature searches is to avoid missing key studies and to minimize bias [6, 8, 10, 34, 37, 38, 39] since a systematic review based only on published (or easily accessible) studies may have an exaggerated effect size [35]. Felson (1992) sets out potential biases that could affect the estimate of effect in a meta-analysis [40] and Tricco et al. summarize the evidence concerning bias and confounding in systematic reviews [35]. Egger et al. point to non-publication of studies, publication bias, language bias and MEDLINE bias as key biases [34, 35, 40, 41, 42, 43, 44, 45, 46]. Comprehensive searches are not the sole factor to mitigate these biases but their contribution is thought to be significant [2, 32, 34]. Fehrmann (2011) suggests that describing the search process in detail, and indicating where standard comprehensive search techniques have been applied, increases confidence in the search results [32].

Does comprehensive literature searching work?

Egger et al., and other study authors, have demonstrated a change in the estimate of intervention effectiveness where relevant studies were excluded from meta-analysis [34, 47]. This would suggest that missing studies in literature searching alters the reliability of effectiveness estimates. This is an argument for comprehensive literature searching. Conversely, Egger et al. found that ‘comprehensive’ searches still missed studies and that comprehensive searches could, in fact, introduce bias into a review rather than preventing it, through the identification of low quality studies then being included in the meta-analysis [34]. Studies query whether identifying and including low quality or grey literature studies changes the estimate of effect [43, 48] and question whether time is better invested updating systematic reviews rather than searching for unpublished studies [49], or mapping studies for review as opposed to aiming for high sensitivity in literature searching [50].

Aim and purpose beyond reviews of effectiveness

The need for comprehensive literature searches is less certain in reviews of qualitative studies, and for reviews where a comprehensive identification of studies is difficult to achieve (for example, in public health) [33, 51, 52, 53, 54, 55]. Literature searching for qualitative studies, and in public health topics, typically generates a greater number of studies to sift than in reviews of effectiveness [39] and demonstrating the ‘value’ of studies identified or missed is harder [56], since the study data do not typically support meta-analysis. Nussbaumer-Streit et al. (2016) have registered a review protocol to assess whether abbreviated literature searches (as opposed to comprehensive literature searches) have an impact on conclusions across multiple bodies of evidence, not only on effect estimates [57], which may develop this understanding. It may be that decision makers and users of systematic reviews are willing to trade the certainty from a comprehensive literature search and systematic review in exchange for different approaches to evidence synthesis [58], and that comprehensive literature searches are not necessarily a marker of literature search quality, as previously thought [36]. Different approaches to literature searching [37, 38, 59, 60, 61, 62] and developing the concept of when to stop searching are important areas for further study [36, 59].

The study by Nussbaumer-Streit et al. has been published since the submission of this literature review [ 63 ]. Nussbaumer-Streit et al. (2018) conclude that abbreviated literature searches are viable options for rapid evidence syntheses, if decision-makers are willing to trade the certainty from a comprehensive literature search and systematic review, but that decision-making which demands detailed scrutiny should still be based on comprehensive literature searches [ 63 ].

Key stage three: Preparing for the literature search

Six documents provided guidance on preparing for a literature search [ 2 , 3 , 6 , 7 , 9 , 10 ]. The Cochrane Handbook clearly stated that Cochrane authors (i.e. researchers) should seek advice from a trial search co-ordinator (i.e. a person with specific skills in literature searching) ‘before’ starting a literature search [ 9 ].

Two key tasks were perceptible in preparing for a literature search [2, 6, 7, 10, 11]. First, to determine if there are any existing or on-going reviews, or if a new review is justified [6, 11]; and, secondly, to develop an initial literature search strategy to estimate the volume of relevant literature (and the quality of a small sample of relevant studies [10]) and to indicate the resources required for literature searching and the review of the studies that follows [7, 10].

Three documents summarised guidance on where to search to determine if a new review was justified [2, 6, 11]. These focused on searching databases of systematic reviews (The Cochrane Database of Systematic Reviews (CDSR) and the Database of Abstracts of Reviews of Effects (DARE)), institutional registries (including PROSPERO), and MEDLINE [6, 11]. It is worth noting, however, that as of 2015, DARE (and NHS EED) are no longer being updated and so the relevance of these resources will diminish over time [64]. One guidance document, ‘Systematic reviews in the Social Sciences’, noted, however, that databases are not the only source of information and that unpublished reports, conference proceedings and grey literature may also be required, depending on the nature of the review question [2].

Two documents reported clearly that this preparation (or ‘scoping’) exercise should be undertaken before the actual search strategy is developed [7, 10].

The guidance offers the best available source on preparing the literature search, since the published studies do not typically report how their scoping informed the development of their search strategies, nor how their search approaches were developed. Text mining has been proposed as a technique to develop search strategies in the scoping stages of a review although this work is still exploratory [65]. ‘Clustering documents’ and word frequency analysis have also been tested to identify search terms and studies for review [66, 67]. Preparing for literature searches and scoping constitutes an area for future research.

Key stage four: Designing the search strategy

The Population, Intervention, Comparator, Outcome (PICO) structure was the most commonly reported structure promoted for designing a literature search strategy. Five documents suggested that the eligibility criteria or review question will determine which concepts of PICO will be populated to develop the search strategy [1, 4, 7, 8, 9]. The NICE handbook promoted multiple structures, namely PICO, SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) and multi-stranded approaches [4].

With the exception of the Joanna Briggs Institute reviewers’ manual, the guidance offered detail on selecting key search terms, synonyms, Boolean language, selecting database indexing terms and combining search terms. The CEE handbook suggested that ‘search terms may be compiled with the help of the commissioning organisation and stakeholders’ [10].
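To make the combination step concrete, the following sketch (with invented example terms, not terms drawn from any guidance document) shows the usual pattern: synonyms within each PICO concept are OR-ed together, and the concept groups are then AND-ed into a single strategy line.

```python
# Sketch: combining PICO concept groups into a Boolean search string.
# The terms below are invented examples for illustration only.
pico = {
    "population": ["adolescent*", "teenager*", "young person*"],
    "intervention": ["exercise", "physical activity"],
    "outcome": ["depression", "depressive symptom*"],
}

def or_group(terms):
    """OR the synonyms within one concept and wrap them in brackets."""
    return "(" + " OR ".join(terms) + ")"

# Concept groups are combined with AND; a comparator group is often omitted
# where it would over-restrict the search.
strategy = " AND ".join(or_group(terms) for terms in pico.values())
print(strategy)
# (adolescent* OR teenager* OR young person*) AND (exercise OR physical activity) AND (depression OR depressive symptom*)
```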

The use of limits, such as language or date limits, was discussed in all documents [2, 3, 4, 6, 7, 8, 9, 10, 11].

Search strategy structure

The guidance typically relates to reviews of intervention effectiveness so PICO – with its focus on intervention and comparator – is the dominant model used to structure literature search strategies [68]. PICOS – where the S denotes study design – is also commonly used in effectiveness reviews [6, 68]. As the NICE handbook notes, alternative models to structure literature search strategies have been developed and tested. Booth provides an overview of formulating questions for evidence based practice [69] and has developed a number of alternatives to the PICO structure, namely: BeHEMoTh (Behaviour of interest; Health context; Exclusions; Models or Theories) for use when systematically identifying theory [55]; SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) for identification of social science and evaluation studies [69]; and, working with Cooke and colleagues, SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) [70]. SPIDER has been compared to PICO and PICOS in a study by Methley et al. [68].

The NICE handbook also suggests the use of multi-stranded approaches to developing literature search strategies [4]. Glanville developed this idea in a study by Whiting et al. [71] and a worked example of this approach is included in the development of a search filter by Cooper et al. [72].

Writing search strategies: Conceptual and objective approaches

Hausner et al. [ 73 ] provide guidance on writing literature search strategies, delineating between conceptually and objectively derived approaches. The conceptual approach, advocated by and explained in the guidance documents, relies on the expertise of the literature searcher to identify key search terms and then develop key terms to include synonyms and controlled syntax. Hausner and colleagues set out the objective approach [ 73 ] and describe what may be done to validate it [ 74 ].

The use of limits

The guidance documents offer direction on the use of limits within a literature search. Limits can be used to focus literature searching on specific study designs, or by other markers (such as date), which restricts the number of studies returned by a literature search. The use of limits should be described and their implications explored [34] since limiting literature searching can introduce bias (explored above). Craven et al. have suggested the use of a supporting narrative to explain decisions made in the process of developing literature searches and this advice would usefully capture decisions on the use of search limits [75].

Key stage five: Determining the process of literature searching and deciding where to search (bibliographic database searching)

Table 2 summarises the process of literature searching as reported in each guidance document. Searching bibliographic databases was consistently reported as the ‘first step’ to literature searching in all nine guidance documents.

Three documents reported specific guidance on where to search, in each case specific to the type of review their guidance informed, and as a minimum requirement [ 4 , 9 , 11 ]. Seven of the key guidance documents suggest that the selection of bibliographic databases depends on the topic of review [ 2 , 3 , 4 , 6 , 7 , 8 , 10 ], with two documents noting the absence of an agreed standard on what constitutes an acceptable number of databases searched [ 2 , 6 ].

The guidance documents summarise ‘how to’ search bibliographic databases in detail and this guidance is further contextualised above in terms of developing the search strategy. The documents provide guidance on selecting bibliographic databases, in some cases stating acceptable minima (i.e. the Cochrane Handbook states Cochrane CENTRAL, MEDLINE and EMBASE), and in other cases simply listing the bibliographic databases available to search. Studies have explored the value of searching specific bibliographic databases, with Wright et al. (2015) noting the contribution of CINAHL in identifying qualitative studies [76], Beckles et al. (2013) questioning the contribution of CINAHL to identifying clinical studies for guideline development [77], and Cooper et al. (2015) exploring the role of UK-focused bibliographic databases to identify UK-relevant studies [78]. The host of the database (e.g. OVID or ProQuest) has also been shown to alter the search returns offered: Younger and Boddy [79] report differing search returns from the same database (AMED) where the ‘host’ was different.

The average number of bibliographic databases searched in systematic reviews has risen in the period 1994–2014 (from 1 to 4) [80] but there remains (as attested to by the guidance) no consensus on what constitutes an acceptable number of databases searched [48]. This is perhaps because thinking about the number of databases searched is the wrong question; researchers should instead focus on which databases were searched and why, and which databases were not searched and why. The discussion should re-orientate to the differential value of sources, but researchers need to think about how to report this in studies to allow findings to be generalised. Bethel (2017) has proposed ‘search summaries’, completed by the literature searcher, to record where included studies were identified, whether from databases (and which databases specifically) or supplementary search methods [81]. Search summaries document both the yield and accuracy of searches, which could prospectively inform resource use and decisions to search or not to search specific databases in topic areas. The prospective use of such data presupposes, however, that past searches are a potential predictor of future search performance (i.e. that each topic is to be considered representative and not unique). In offering a body of practice, these data would be of greater practical use than current studies, which are considered as little more than individual case studies [82, 83, 84, 85, 86, 87, 88, 89, 90].
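As a rough sketch of the kind of ‘search summary’ Bethel proposes, the fragment below (with hypothetical counts) tallies, for each source searched, how many records were retrieved and how many of the review’s included studies that source contributed, giving a crude per-source precision figure.

```python
# Sketch: a per-source search summary with hypothetical counts.
# 'retrieved' = records returned by the search in that source;
# 'included'  = how many of the review's included studies were found there.
sources = {
    "MEDLINE (Ovid)":   {"retrieved": 1250, "included": 18},
    "Embase (Ovid)":    {"retrieved": 1480, "included": 16},
    "CINAHL (EBSCO)":   {"retrieved": 640,  "included": 5},
    "Citation chasing": {"retrieved": 85,   "included": 4},
}

print(f"{'Source':<18}{'Retrieved':>10}{'Included':>10}{'Precision %':>13}")
for name, counts in sources.items():
    precision = 100 * counts["included"] / counts["retrieved"]
    print(f"{name:<18}{counts['retrieved']:>10}{counts['included']:>10}{precision:>12.1f}%")
```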

When to search each database is another question posed in the literature. Beyer et al. [91] report that databases can be prioritised for literature searching which, whilst not addressing the question of which databases to search, may at least bring clarity as to which databases to search first [91]. Paradoxically, this links to studies that suggest PubMed should be searched in addition to MEDLINE (OVID interface) since this improves the currency of systematic reviews [92, 93]. Cooper et al. (2017) have tested the idea of database searching not as a primary search method (as suggested in the guidance) but as a supplementary search method, in order to manage the volume of studies identified for an environmental effectiveness systematic review. Their case study compared the effectiveness of database searching versus a protocol using supplementary search methods and found that the latter identified more relevant studies for review than searching bibliographic databases [94].

Key stage six: Determining the process of literature searching and deciding where to search (supplementary search methods)

Table 2 also summarises the process of literature searching which follows bibliographic database searching. As Table 2 sets out, guidance that supplementary literature search methods should be used in systematic reviews recurs across documents, but the order in which these methods are used, and the extent to which they are used, varies. We noted inconsistency in the labelling of supplementary search methods between guidance documents.

Rather than focus on the guidance on how to use the methods (which has been summarised in a recent review [ 95 ]), we focus on the aim or purpose of supplementary search methods.

The Cochrane Handbook reported that ‘efforts’ to identify unpublished studies should be made [9]. Four guidance documents [2, 3, 6, 9] acknowledged that searching beyond bibliographic databases was necessary since ‘databases are not the only source of literature’ [2]. Only one document reported any guidance on determining when to use supplementary methods. The IQWiG handbook reported that the use of handsearching (in their example) could be determined on a ‘case-by-case basis’ [3], which implies that the use of these methods is optional rather than mandatory. This is in contrast to the guidance (above) on bibliographic database searching.

The issue for supplementary search methods is similar in many ways to the issue of searching bibliographic databases: demonstrating value. The purpose and contribution of supplementary search methods in systematic reviews is increasingly acknowledged [37, 61, 62, 96, 97, 98, 99, 100, 101] but the value of these search methods for identifying studies and data remains unclear. In a recently published review, Cooper et al. (2017) reviewed the literature on supplementary search methods to determine the advantages, disadvantages and resource implications of using supplementary search methods [95]. This review also summarises the key guidance and empirical studies and seeks to address the question of when to use these search methods and when not to [95]. The guidance is limited in this regard and, as Table 2 demonstrates, offers conflicting advice on the order of searching, and on the extent to which these search methods should be used in systematic reviews.

Key stage seven: Managing the references

Five of the documents provided guidance on managing references, for example downloading, de-duplicating and managing the output of literature searches [ 2 , 4 , 6 , 8 , 10 ]. This guidance typically itemised available bibliographic management tools rather than offering guidance on how to use them specifically [ 2 , 4 , 6 , 8 ]. The CEE handbook provided guidance on importing data where no direct export option is available (e.g. web-searching) [ 10 ].

The literature on using bibliographic management tools is not large, in contrast to the number of ‘how to’ videos on platforms such as YouTube (see, for example, [102]). These YouTube videos confirm the overall lack of published ‘how to’ guidance identified in this study and offer useful instruction on managing references. Bramer et al. set out methods for de-duplicating data and reviewing references in EndNote [103, 104] and Gall tests the direct search function within EndNote to access databases such as PubMed, finding a number of limitations [105]. Coar et al. and Ahmed et al. consider the role of the free-source tool, Zotero [106, 107]. Managing references is a key administrative function in the process of review, particularly for documenting searches as required by PRISMA guidance.
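The guidance and studies above describe de-duplication within tools such as EndNote; purely to illustrate the underlying step (and not as a description of Bramer et al.’s EndNote method), the sketch below removes duplicate records exported from two hypothetical database searches by matching on DOI where present, otherwise on a normalised title.

```python
# Sketch: naive de-duplication of exported references on DOI or normalised title.
# The records are hypothetical; real exports (e.g. RIS files) would first be
# parsed into dictionaries like these.
import re

def norm_title(title):
    """Lower-case a title and strip punctuation and extra spaces for matching."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or norm_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

medline = [{"title": "Exercise and depression in adolescents.", "doi": "10.1000/x1"}]
embase = [
    {"title": "Exercise and Depression in Adolescents", "doi": "10.1000/x1"},
    {"title": "A second, unrelated study", "doi": None},
]

print(len(deduplicate(medline + embase)))  # -> 2 unique records remain
```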

Key stage eight: Documenting the search

The Cochrane Handbook was the only guidance document to recommend a specific reporting guideline: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [9]. Six documents provided guidance on reporting the process of literature searching with specific criteria to report [3, 4, 6, 8, 9, 10]. There was consensus on reporting: the databases searched (and the host via which they were searched), the search strategies used, and any use of limits (e.g. date, language, or search filters; the CRD handbook called for these limits to be justified [6]). Three guidance documents reported that the number of studies identified should be recorded [3, 6, 10]. The number of duplicates identified [10], the screening decisions [3], a comprehensive list of grey literature sources searched (and full detail for other supplementary search methods) [8], and an annotation of search terms tested but not used [4] were identified as unique items in four documents.
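A minimal sketch of how these consensus reporting items might be captured in a structured record is shown below; the field names are our own illustration and are not taken from any guidance document or from PRISMA.

```python
# Sketch: a structured record of one database search, covering the items on
# which the guidance documents agree (database and host, strategy, limits,
# and the number of records retrieved). Field names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SearchRecord:
    database: str        # e.g. "MEDLINE"
    host: str            # e.g. "Ovid"
    date_searched: str   # date the search was run
    strategy: List[str]  # full line-by-line search strategy
    limits: List[str] = field(default_factory=list)  # date, language, filters
    records_retrieved: int = 0

example = SearchRecord(
    database="MEDLINE",
    host="Ovid",
    date_searched="2017-08-01",
    strategy=["1. exp Depression/", "2. exercise.ti,ab.", "3. 1 AND 2"],
    limits=["English language", "2000-current"],
    records_retrieved=1250,
)
print(example)
```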

The Cochrane Handbook was the only guidance document to note that the full search strategies for each database should be included in an appendix of the review [9].

All guidance documents should ultimately deliver completed systematic reviews that fulfil the requirements of the PRISMA reporting guidelines [ 108 ]. The guidance broadly requires the reporting of data that corresponds with the requirements of the PRISMA statement although documents typically ask for diverse and additional items [ 108 ]. In 2008, Sampson et al. observed a lack of consensus on reporting search methods in systematic reviews [ 109 ] and this remains the case as of 2017, as evidenced in the guidance documents, and in spite of the publication of the PRISMA guidelines in 2009 [ 110 ]. It is unclear why the collective guidance does not more explicitly endorse adherence to the PRISMA guidance.

Reporting of literature searching is a key area in systematic reviews since it sets out clearly what was done and how the conclusions of the review can be believed [52, 109]. Despite strong endorsement in the guidance documents, specifically supported in PRISMA guidance, and in other related reporting standards too (such as ENTREQ for qualitative evidence synthesis and STROBE for reviews of observational studies), authors still highlight the prevalence of poor standards of literature search reporting [31, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119]. To explore issues experienced by authors in reporting literature searches, and to look at the uptake of PRISMA, Radar et al. [120] surveyed over 260 review authors to determine common problems; their work summarises the practical aspects of reporting literature searching [120]. Atkinson et al. [121] have also analysed reporting standards for literature searching, summarising recommendations and gaps for reporting search strategies [121].

One area that is less well covered by the guidance, but nevertheless appears in this literature, is the quality appraisal or peer review of literature search strategies. The PRESS checklist is the most prominent; it aims to develop evidence-based guidelines for the peer review of electronic search strategies [5, 122, 123]. A corresponding guideline for documentation of supplementary search methods does not yet exist, although this idea is currently being explored.

How the reporting of the literature searching process corresponds to critical appraisal tools is an area for further research. In the survey undertaken by Radar et al. (2014), 86% of survey respondents (153/178) identified a need for further guidance on what aspects of the literature search process to report [ 120 ]. The PRISMA statement offers a brief summary of what to report but little practical guidance on how to report it [ 108 ]. Critical appraisal tools for systematic reviews, such as AMSTAR 2 (Shea et al. [ 124 ]) and ROBIS (Whiting et al. [ 125 ]), can usefully be read alongside PRISMA guidance, since they offer greater detail on how the reporting of the literature search will be appraised and, therefore, they offer a proxy on what to report [ 124 , 125 ]. Further research in the form of a study which undertakes a comparison between PRISMA and quality appraisal checklists for systematic reviews would seem to begin addressing the call, identified by Radar et al., for further guidance on what to report [ 120 ].

Limitations

Other handbooks exist.

A potential limitation of this literature review is the focus on guidance produced in Europe (the UK specifically) and Australia. We justify our selection of the nine guidance documents reviewed in this literature review in the section “Identifying guidance”. In brief, these nine guidance documents were selected as the most relevant health care guidance that informs UK systematic reviewing practice, given that the UK occupies a prominent position in the science of health information retrieval. We acknowledge the existence of other guidance documents, such as those from North America (e.g. the Agency for Healthcare Research and Quality (AHRQ) [126], the Institute of Medicine [127] and the guidance and resources produced by the Canadian Agency for Drugs and Technologies in Health (CADTH) [128]). We comment further on this directly below.

The handbooks are potentially linked to one another

What is not clear is the extent to which the guidance documents inter-relate or provide guidance uniquely. The Cochrane Handbook, first published in 1994, is notably a key source of reference in guidance and systematic reviews beyond Cochrane reviews. It is not clear to what extent broadening the sample of guidance handbooks to include North American handbooks, and guidance handbooks from other relevant countries, would alter the findings of this literature review or develop further support for the process model. Since we cannot be clear, we raise this as a potential limitation of this literature review. On our initial review of a sample of North American, and other, guidance documents (before selecting the guidance documents considered in this review), however, we do not consider that the inclusion of these further handbooks would significantly alter the findings of this literature review.

This is a literature review

A further limitation of this review was that the review of published studies is not a systematic review of the evidence for each key stage. It is possible that other relevant studies could help contribute to the exploration and development of the key stages identified in this review.

Conclusions

This literature review would appear to demonstrate the existence of a shared model of the literature searching process in systematic reviews. We call this model ‘the conventional approach’, since it appears to be common convention in nine different guidance documents.

The findings reported above reveal eight key stages in the process of literature searching for systematic reviews. These key stages are consistently reported in the nine guidance documents which suggests consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews.

In Table 2 , we demonstrate consensus regarding the application of literature search methods. All guidance documents distinguish between primary and supplementary search methods. Bibliographic database searching is consistently the first method of literature searching referenced in each guidance document. Whilst the guidance uniformly supports the use of supplementary search methods, there is little evidence for a consistent process with diverse guidance across documents. This may reflect differences in the core focus across each document, linked to differences in identifying effectiveness studies or qualitative studies, for instance.

Eight of the nine guidance documents reported on the aims of literature searching. The shared understanding was that literature searching should be thorough and comprehensive in its aim and that this process should be reported transparently so that it could be reproduced. Whilst only three documents explicitly link this understanding to minimising bias, it is clear that comprehensive literature searching is implicitly linked to ‘not missing relevant studies’, which is approximately the same point.

Defining the key stages in this review helps categorise the scholarship available, and it prioritises areas for development or further study. The supporting studies on preparing for literature searching (key stage three, ‘preparation’) were, for example, comparatively few, and yet this key stage represents a decisive moment in literature searching for systematic reviews. It is where search strategy structure is determined, search terms are chosen or discarded, and the resources to be searched are selected. Information specialists, librarians and researchers are well placed to develop these and other areas within the key stages we identify.

This review calls for further research to determine the suitability of using the conventional approach. The publication dates of the guidance documents which underpin the conventional approach may raise questions as to whether the process which they each report remains valid for current systematic literature searching. In addition, it may be useful to test whether it is desirable to use the same process model of literature searching for qualitative evidence synthesis as that for reviews of intervention effectiveness, which this literature review demonstrates is presently recommended best practice.

Abbreviations

BeHEMoTh: Behaviour of interest; Health context; Exclusions; Models or Theories

CDSR: Cochrane Database of Systematic Reviews

CENTRAL: The Cochrane Central Register of Controlled Trials

DARE: Database of Abstracts of Reviews of Effects

ENTREQ: Enhancing transparency in reporting the synthesis of qualitative research

IQWiG: Institute for Quality and Efficiency in Healthcare

NICE: National Institute for Clinical Excellence

PICO: Population, Intervention, Comparator, Outcome

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SPICE: Setting, Perspective, Intervention, Comparison, Evaluation

SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type

STROBE: STrengthening the Reporting of OBservational studies in Epidemiology

TSC: Trial Search Co-ordinators

References

Booth A. Unpacking your literature search toolbox: on search styles and tactics. Health Information & Libraries Journal. 2008;25(4):313–7.


Petticrew M, Roberts H. Systematic reviews in the social sciences: a practical guide. Oxford: Blackwell Publishing Ltd; 2006.


Institute for Quality and Efficiency in Health Care (IQWiG). IQWiG Methods Resources. 7 Information retrieval. 2014. Available from: https://www.ncbi.nlm.nih.gov/books/NBK385787/

NICE: National Institute for Health and Care Excellence. Developing NICE guidelines: the manual 2014. Available from: https://www.nice.org.uk/media/default/about/what-we-do/our-programmes/developing-nice-guidelines-the-manual.pdf .

Sampson M, McGowan J, Lefebvre C, Moher D, Grimshaw J. Peer Review of Electronic Search Strategies: PRESS; 2008.


Centre for Reviews & Dissemination. Systematic reviews – CRD’s guidance for undertaking reviews in healthcare. York: Centre for Reviews and Dissemination, University of York; 2009.

EUnetHTA: European Network for Health Technology Assessment. Process of information retrieval for systematic reviews and health technology assessments on clinical effectiveness. 2016. Available from: http://www.eunethta.eu/sites/default/files/Guideline_Information_Retrieval_V1-1.pdf

Kugley S, Wade A, Thomas J, Mahood Q, Jørgensen AMK, Hammerstrøm K, Sathe N. Searching for studies: a guide to information retrieval for Campbell systematic reviews. Oslo: Campbell Collaboration; 2017. Available from: https://www.campbellcollaboration.org/library/searching-for-studies-information-retrieval-guide-campbell-reviews.html

Lefebvre C, Manheimer E, Glanville J. Chapter 6: searching for studies. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions; 2011.

Collaboration for Environmental Evidence. Guidelines for Systematic Review and Evidence Synthesis in Environmental Management. Environmental Evidence; 2013. Available from: http://www.environmentalevidence.org/wp-content/uploads/2017/01/Review-guidelines-version-4.2-final-update.pdf

The Joanna Briggs Institute. Joanna Briggs Institute reviewers’ manual. 2014 edition. The Joanna Briggs Institute; 2014. Available from: https://joannabriggs.org/assets/docs/sumari/ReviewersManual-2014.pdf

Beverley CA, Booth A, Bath PA. The role of the information specialist in the systematic review process: a health information case study. Health Inf Libr J. 2003;20(2):65–74.


Harris MR. The librarian's roles in the systematic review process: a case study. Journal of the Medical Library Association. 2005;93(1):81–7.


Koffel JB. Use of recommended search strategies in systematic reviews and the impact of librarian involvement: a cross-sectional survey of recent authors. PLoS One. 2015;10(5):e0125931.

Li L, Tian J, Tian H, Moher D, Liang F, Jiang T, et al. Network meta-analyses could be improved by searching more sources and by involving a librarian. J Clin Epidemiol. 2014;67(9):1001–7.


McGowan J, Sampson M. Systematic reviews need systematic searchers. J Med Libr Assoc. 2005;93(1):74–80.

Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol. 2015;68(6):617–26.

Weller AC. Mounting evidence that librarians are essential for comprehensive literature searches for meta-analyses and Cochrane reports. J Med Libr Assoc. 2004;92(2):163–4.

Swinkels A, Briddon J, Hall J. Two physiotherapists, one librarian and a systematic literature review: collaboration in action. Health Info Libr J. 2006;23(4):248–56.

Foster M. An overview of the role of librarians in systematic reviews: from expert search to project manager. EAHIL. 2015;11(3):3–7.

Lawson L. Operating outside library walls. 2004.

Vassar M, Yerokhin V, Sinnett PM, Weiher M, Muckelrath H, Carr B, et al. Database selection in systematic reviews: an insight through clinical neurology. Health Inf Libr J. 2017;34(2):156–64.

Townsend WA, Anderson PF, Ginier EC, MacEachern MP, Saylor KM, Shipman BL, et al. A competency framework for librarians involved in systematic reviews. Journal of the Medical Library Association : JMLA. 2017;105(3):268–75.

Cooper ID, Crum JA. New activities and changing roles of health sciences librarians: a systematic review, 1990-2012. Journal of the Medical Library Association : JMLA. 2013;101(4):268–77.

Crum JA, Cooper ID. Emerging roles for biomedical librarians: a survey of current practice, challenges, and changes. Journal of the Medical Library Association : JMLA. 2013;101(4):278–86.

Dudden RF, Protzko SL. The systematic review team: contributions of the health sciences librarian. Med Ref Serv Q. 2011;30(3):301–15.

Golder S, Loke Y, McIntosh HM. Poor reporting and inadequate searches were apparent in systematic reviews of adverse effects. J Clin Epidemiol. 2008;61(5):440–8.

Maggio LA, Tannery NH, Kanter SL. Reproducibility of literature search reporting in medical education reviews. Academic medicine : journal of the Association of American Medical Colleges. 2011;86(8):1049–54.

Meert D, Torabi N, Costella J. Impact of librarians on reporting of the literature searching component of pediatric systematic reviews. Journal of the Medical Library Association : JMLA. 2016;104(4):267–77.

Morris M, Boruff JT, Gore GC. Scoping reviews: establishing the role of the librarian. Journal of the Medical Library Association : JMLA. 2016;104(4):346–54.

Koffel JB, Rethlefsen ML. Reproducibility of search strategies is poor in systematic reviews published in high-impact pediatrics, cardiology and surgery journals: a cross-sectional study. PLoS One. 2016;11(9):e0163309.


Fehrmann P, Thomas J. Comprehensive computer searches and reporting in systematic reviews. Research Synthesis Methods. 2011;2(1):15–32.

Booth A. Searching for qualitative research for inclusion in systematic reviews: a structured methodological review. Systematic Reviews. 2016;5(1):74.


Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health technology assessment (Winchester, England). 2003;7(1):1–76.

Tricco AC, Tetzlaff J, Sampson M, Fergusson D, Cogo E, Horsley T, et al. Few systematic reviews exist documenting the extent of bias: a systematic review. J Clin Epidemiol. 2008;61(5):422–34.

Booth A. How much searching is enough? Comprehensive versus optimal retrieval for technology assessments. Int J Technol Assess Health Care. 2010;26(4):431–5.

Papaioannou D, Sutton A, Carroll C, Booth A, Wong R. Literature searching for social science systematic reviews: consideration of a range of search techniques. Health Inf Libr J. 2010;27(2):114–22.

Petticrew M. Time to rethink the systematic review catechism? Moving from ‘what works’ to ‘what happens’. Systematic Reviews. 2015;4(1):36.

Betrán AP, Say L, Gülmezoglu AM, Allen T, Hampson L. Effectiveness of different databases in identifying studies for systematic reviews: experience from the WHO systematic review of maternal morbidity and mortality. BMC Med Res Methodol. 2005;5

Felson DT. Bias in meta-analytic research. J Clin Epidemiol. 1992;45(8):885–92.


Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: unlocking the file drawer. Science. 2014;345(6203):1502–5.

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. Grey literature in systematic reviews: a cross-sectional study of the contribution of non-English reports, unpublished studies and dissertations to the results of meta-analyses in child-relevant reviews. BMC Med Res Methodol. 2017;17(1):64.

Schmucker CM, Blümle A, Schell LK, Schwarzer G, Oeller P, Cabrera L, et al. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research. PLoS One. 2017;12(4):e0176210.

Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, Antes G. Language bias in randomised controlled trials published in English and German. Lancet (London, England). 1997;350(9074):326–9.

Moher D, Pham B, Lawson ML, Klassen TP. The inclusion of reports of randomised trials published in languages other than English in systematic reviews. Health technology assessment (Winchester, England). 2003;7(41):1–90.

Pham B, Klassen TP, Lawson ML, Moher D. Language of publication restrictions in systematic reviews gave different results depending on whether the intervention was conventional or complementary. J Clin Epidemiol. 2005;58(8):769–76.

Mills EJ, Kanters S, Thorlund K, Chaimani A, Veroniki A-A, Ioannidis JPA. The effects of excluding treatments from network meta-analyses: survey. BMJ : British Medical Journal. 2013;347

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. The contribution of databases to the results of systematic reviews: a cross-sectional study. BMC Med Res Methodol. 2016;16(1):127.

van Driel ML, De Sutter A, De Maeseneer J, Christiaens T. Searching for unpublished trials in Cochrane reviews may not be worth the effort. J Clin Epidemiol. 2009;62(8):838–44.e3.

Buchberger B, Krabbe L, Lux B, Mattivi JT. Evidence mapping for decision making: feasibility versus accuracy - when to abandon high sensitivity in electronic searches. German medical science : GMS e-journal. 2016;14:Doc09.

Lorenc T, Pearson M, Jamal F, Cooper C, Garside R. The role of systematic reviews of qualitative evidence in evaluating interventions: a case study. Research Synthesis Methods. 2012;3(1):1–10.

Gough D. Weight of evidence: a framework for the appraisal of the quality and relevance of evidence. Res Pap Educ. 2007;22(2):213–28.

Barroso J, Gollop CJ, Sandelowski M, Meynell J, Pearce PF, Collins LJ. The challenges of searching for and retrieving qualitative studies. West J Nurs Res. 2003;25(2):153–78.

Britten N, Garside R, Pope C, Frost J, Cooper C. Asking more of qualitative synthesis: a response to Sally Thorne. Qual Health Res. 2017;27(9):1370–6.

Booth A, Carroll C. Systematic searching for theory to inform systematic reviews: is it feasible? Is it desirable? Health Info Libr J. 2015;32(3):220–35.

Kwon Y, Powelson SE, Wong H, Ghali WA, Conly JM. An assessment of the efficacy of searching in biomedical databases beyond MEDLINE in identifying studies for a systematic review on ward closures as an infection control intervention to control outbreaks. Syst Rev. 2014;3:135.

Nussbaumer-Streit B, Klerings I, Wagner G, Titscher V, Gartlehner G. Assessing the validity of abbreviated literature searches for rapid reviews: protocol of a non-inferiority and meta-epidemiologic study. Systematic Reviews. 2016;5:197.

Wagner G, Nussbaumer-Streit B, Greimel J, Ciapponi A, Gartlehner G. Trading certainty for speed - how much uncertainty are decisionmakers and guideline developers willing to accept when using rapid reviews: an international survey. BMC Med Res Methodol. 2017;17(1):121.

Ogilvie D, Hamilton V, Egan M, Petticrew M. Systematic reviews of health effects of social interventions: 1. Finding the evidence: how far should you go? J Epidemiol Community Health. 2005;59(9):804–8.

Royle P, Milne R. Literature searching for randomized controlled trials used in Cochrane reviews: rapid versus exhaustive searches. Int J Technol Assess Health Care. 2003;19(4):591–603.

Pearson M, Moxham T, Ashton K. Effectiveness of search strategies for qualitative research about barriers and facilitators of program delivery. Eval Health Prof. 2011;34(3):297–308.

Levay P, Raynor M, Tuvey D. The contributions of MEDLINE, other bibliographic databases and various search techniques to NICE public health guidance. Evid Based Libr Inf Pract. 2015;10(1):19.

Nussbaumer-Streit B, Klerings I, Wagner G, Heise TL, Dobrescu AI, Armijo-Olivo S, et al. Abbreviated literature searches were viable alternatives to comprehensive searches: a meta-epidemiological study. J Clin Epidemiol. 2018;102:1–11.

Briscoe S, Cooper C, Glanville J, Lefebvre C. The loss of the NHS EED and DARE databases and the effect on evidence synthesis and evaluation. Res Synth Methods. 2017;8(3):256–7.

Stansfield C, O'Mara-Eves A, Thomas J. Text mining for search term development in systematic reviewing: a discussion of some methods and challenges. Research Synthesis Methods.

Petrova M, Sutcliffe P, Fulford KW, Dale J. Search terms and a validated brief search filter to retrieve publications on health-related values in Medline: a word frequency analysis study. Journal of the American Medical Informatics Association : JAMIA. 2012;19(3):479–88.

Stansfield C, Thomas J, Kavanagh J. 'Clustering' documents automatically to support scoping reviews of research: a case study. Res Synth Methods. 2013;4(3):230–41.


Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14:579.

Booth A. Clear and present questions: formulating questions for evidence based practice. Library Hi Tech. 2006;24(3):355–68.

Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qual Health Res. 2012;22(10):1435–43.

Whiting P, Westwood M, Bojke L, Palmer S, Richardson G, Cooper J, et al. Clinical effectiveness and cost-effectiveness of tests for the diagnosis and investigation of urinary tract infection in children: a systematic review and economic model. Health technology assessment (Winchester, England). 2006;10(36):iii-iv, xi-xiii, 1–154.

Cooper C, Levay P, Lorenc T, Craig GM. A population search filter for hard-to-reach populations increased search efficiency for a systematic review. J Clin Epidemiol. 2014;67(5):554–9.

Hausner E, Waffenschmidt S, Kaiser T, Simon M. Routine development of objectively derived search strategies. Systematic Reviews. 2012;1(1):19.

Hausner E, Guddat C, Hermanns T, Lampert U, Waffenschmidt S. Prospective comparison of search strategies for systematic reviews: an objective approach yielded higher sensitivity than a conceptual one. J Clin Epidemiol. 2016;77:118–24.

Craven J, Levay P. Recording database searches for systematic reviews - what is the value of adding a narrative to peer-review checklists? A case study of nice interventional procedures guidance. Evid Based Libr Inf Pract. 2011;6(4):72–87.

Wright K, Golder S, Lewis-Light K. What value is the CINAHL database when searching for systematic reviews of qualitative studies? Syst Rev. 2015;4:104.

Beckles Z, Glover S, Ashe J, Stockton S, Boynton J, Lai R, et al. Searching CINAHL did not add value to clinical questions posed in NICE guidelines. J Clin Epidemiol. 2013;66(9):1051–7.

Cooper C, Rogers M, Bethel A, Briscoe S, Lowe J. A mapping review of the literature on UK-focused health and social care databases. Health Inf Libr J. 2015;32(1):5–22.

Younger P, Boddy K. When is a search not a search? A comparison of searching the AMED complementary health database via EBSCOhost, OVID and DIALOG. Health Inf Libr J. 2009;26(2):126–35.

Lam MT, McDiarmid M. Increasing number of databases searched in systematic reviews and meta-analyses between 1994 and 2014. Journal of the Medical Library Association : JMLA. 2016;104(4):284–9.

Bethel A. Search summary tables for systematic reviews: results and findings. Paper presented at: HLC Conference; 2017.

Aagaard T, Lund H, Juhl C. Optimizing literature search in systematic reviews - are MEDLINE, EMBASE and CENTRAL enough for identifying effect studies within the area of musculoskeletal disorders? BMC Med Res Methodol. 2016;16(1):161.

Adams CE, Frederick K. An investigation of the adequacy of MEDLINE searches for randomized controlled trials (RCTs) of the effects of mental health care. Psychol Med. 1994;24(3):741–8.

Kelly L, St Pierre-Hansen N. So many databases, such little clarity: searching the literature for the topic aboriginal. Canadian family physician Medecin de famille canadien. 2008;54(11):1572–3.

Lawrence DW. What is lost when searching only one literature database for articles relevant to injury prevention and safety promotion? Injury Prevention. 2008;14(6):401–4.

Lemeshow AR, Blum RE, Berlin JA, Stoto MA, Colditz GA. Searching one or two databases was insufficient for meta-analysis of observational studies. J Clin Epidemiol. 2005;58(9):867–73.

Sampson M, Barrowman NJ, Moher D, Klassen TP, Pham B, Platt R, et al. Should meta-analysts search Embase in addition to Medline? J Clin Epidemiol. 2003;56(10):943–55.

Stevinson C, Lawlor DA. Searching multiple databases for systematic reviews: added value or diminishing returns? Complementary Therapies in Medicine. 2004;12(4):228–32.

Suarez-Almazor ME, Belseck E, Homik J, Dorgan M, Ramos-Remus C. Identifying clinical trials in the medical literature with electronic databases: MEDLINE alone is not enough. Control Clin Trials. 2000;21(5):476–87.

Taylor B, Wylie E, Dempster M, Donnelly M. Systematically retrieving research: a case study evaluating seven databases. Res Soc Work Pract. 2007;17(6):697–706.

Beyer FR, Wright K. Can we prioritise which databases to search? A case study using a systematic review of frozen shoulder management. Health Info Libr J. 2013;30(1):49–58.

Duffy S, de Kock S, Misso K, Noake C, Ross J, Stirk L. Supplementary searches of PubMed to improve currency of MEDLINE and MEDLINE in-process searches via Ovid. Journal of the Medical Library Association : JMLA. 2016;104(4):309–12.

Katchamart W, Faulkner A, Feldman B, Tomlinson G, Bombardier C. PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews. J Clin Epidemiol. 2011;64(7):805–7.

Cooper C, Lovell R, Husk K, Booth A, Garside R. Supplementary search methods were more effective and offered better value than bibliographic database searching: a case study from public health and environmental enhancement (in press). Research Synthesis Methods. 2017.

Cooper C, Booth A, Britten N, Garside R. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review (in press). Systematic Reviews. 2017.

Greenhalgh T, Peacock R. Effectiveness and efficiency of search methods in systematic reviews of complex evidence: audit of primary sources. BMJ (Clinical research ed). 2005;331(7524):1064–5.


Hinde S, Spackman E. Bidirectional citation searching to completion: an exploration of literature searching methods. PharmacoEconomics. 2015;33(1):5–11.

Levay P, Ainsworth N, Kettle R, Morgan A. Identifying evidence for public health guidance: a comparison of citation searching with web of science and Google scholar. Res Synth Methods. 2016;7(1):34–45.

McManus RJ, Wilson S, Delaney BC, Fitzmaurice DA, Hyde CJ, Tobias RS, et al. Review of the usefulness of contacting other experts when conducting a literature search for systematic reviews. BMJ (Clinical research ed). 1998;317(7172):1562–3.

Westphal A, Kriston L, Holzel LP, Harter M, von Wolff A. Efficiency and contribution of strategies for finding randomized controlled trials: a case study from a systematic review on therapeutic interventions of chronic depression. Journal of public health research. 2014;3(2):177.

Matthews EJ, Edwards AG, Barker J, Bloor M, Covey J, Hood K, et al. Efficient literature searching in diffuse topics: lessons from a systematic review of research on communicating risk to patients in primary care. Health Libr Rev. 1999;16(2):112–20.

Bethel A. EndNote training (YouTube videos). 2017. Available from: http://medicine.exeter.ac.uk/esmi/workstreams/informationscience/is_resources,_guidance_&_advice/ .

Bramer WM, Giustini D, de Jonge GB, Holland L, Bekhuis T. De-duplication of database search results for systematic reviews in EndNote. Journal of the Medical Library Association : JMLA. 2016;104(3):240–3.

Bramer WM, Milic J, Mast F. Reviewing retrieved references for inclusion in systematic reviews using EndNote. Journal of the Medical Library Association : JMLA. 2017;105(1):84–7.

Gall C, Brahmi FA. Retrieval comparison of EndNote to search MEDLINE (Ovid and PubMed) versus searching them directly. Medical reference services quarterly. 2004;23(3):25–32.

Ahmed KK, Al Dhubaib BE. Zotero: a bibliographic assistant to researcher. J Pharmacol Pharmacother. 2011;2(4):303–5.

Coar JT, Sewell JP. Zotero: harnessing the power of a personal bibliographic manager. Nurse Educ. 2010;35(5):205–7.

Moher D, Liberati A, Tetzlaff J, Altman DG, the PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

Sampson M, McGowan J, Tetzlaff J, Cogo E, Moher D. No consensus exists on search reporting methods for systematic reviews. J Clin Epidemiol. 2008;61(8):748–54.

Toews LC. Compliance of systematic reviews in veterinary journals with preferred reporting items for systematic reviews and meta-analysis (PRISMA) literature search reporting guidelines. Journal of the Medical Library Association : JMLA. 2017;105(3):233–9.

Booth A. "Brimful of STARLITE": toward standards for reporting literature searches. Journal of the Medical Library Association: JMLA. 2006;94(4):421–9.

Faggion CM Jr, Wu YC, Tu YK, Wasiak J. Quality of search strategies reported in systematic reviews published in stereotactic radiosurgery. Br J Radiol. 2016;89(1062):20150878.

Mullins MM, DeLuca JB, Crepaz N, Lyles CM. Reporting quality of search methods in systematic reviews of HIV behavioral interventions (2000–2010): are the searches clearly explained, systematic and reproducible? Research Synthesis Methods. 2014;5(2):116–30.

Yoshii A, Plaut DA, McGraw KA, Anderson MJ, Wellik KE. Analysis of the reporting of search strategies in Cochrane systematic reviews. Journal of the Medical Library Association : JMLA. 2009;97(1):21–9.

Bigna JJ, Um LN, Nansseu JR. A comparison of quality of abstracts of systematic reviews including meta-analysis of randomized controlled trials in high-impact general medicine journals before and after the publication of PRISMA extension for abstracts: a systematic review and meta-analysis. Syst Rev. 2016;5(1):174.

Akhigbe T, Zolnourian A, Bulters D. Compliance of systematic reviews articles in brain arteriovenous malformation with PRISMA statement guidelines: review of literature. Journal of clinical neuroscience : official journal of the Neurosurgical Society of Australasia. 2017;39:45–8.

Tao KM, Li XQ, Zhou QH, Moher D, Ling CQ, Yu WF. From QUOROM to PRISMA: a survey of high-impact medical journals' instructions to authors and a review of systematic reviews in anesthesia literature. PLoS One. 2011;6(11):e27611.

Wasiak J, Tyack Z, Ware R, Goodwin N, Faggion CM Jr. Poor methodological quality and reporting standards of systematic reviews in burn care management. Int Wound J. 2016.

Tam WW, Lo KK, Khalechelvam P. Endorsement of PRISMA statement and quality of systematic reviews and meta-analyses published in nursing journals: a cross-sectional study. BMJ Open. 2017;7(2):e013905.

Rader T, Mann M, Stansfield C, Cooper C, Sampson M. Methods for documenting systematic review searches: a discussion of common issues. Res Synth Methods. 2014;5(2):98–115.

Atkinson KM, Koenka AC, Sanchez CE, Moshontz H, Cooper H. Reporting standards for literature searches and report inclusion criteria: making research syntheses more transparent and easy to replicate. Res Synth Methods. 2015;6(1):87–95.

McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6.

Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C. An evidence-based practice guideline for the peer review of electronic search strategies. J Clin Epidemiol. 2009;62(9):944–52.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ (Clinical research ed). 2017;358.

Whiting P, Savović J, Higgins JPT, Caldwell DM, Reeves BC, Shea B, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

Relevo R, Balshem H. Finding evidence for comparing medical interventions: AHRQ and the effective health care program. J Clin Epidemiol. 2011;64(11):1168–77.

Institute of Medicine. Standards for Systematic Reviews. 2011. Available from: http://www.nationalacademies.org/hmd/Reports/2011/Finding-What-Works-in-Health-Care-Standards-for-Systematic-Reviews/Standards.aspx .

CADTH: Resources 2018.


Acknowledgements

CC acknowledges the supervision offered by Professor Chris Hyde.

This publication forms a part of CC’s PhD. CC’s PhD was funded through the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme (Project Number 16/54/11). The open access fee for this publication was paid for by Exeter Medical School.

RG and NB were partially supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care South West Peninsula.

The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

Author information

Authors and Affiliations

Institute of Health Research, University of Exeter Medical School, Exeter, UK

Chris Cooper & Jo Varley-Campbell

HEDS, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Andrew Booth

Nicky Britten

European Centre for Environment and Human Health, University of Exeter Medical School, Truro, UK

Ruth Garside


Contributions

CC conceived the idea for this study and wrote the first draft of the manuscript. CC discussed this publication in PhD supervision with AB and separately with JVC. CC revised the publication with input and comments from AB, JVC, RG and NB. All authors revised the manuscript prior to submission. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Chris Cooper.

Ethics declarations

Ethics approval and consent to participate; Consent for publication; Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1: Appendix tables and PubMed search strategy. Key studies used for pearl growing per key stage, working data extraction tables and the PubMed search strategy. (DOCX 30 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

Reprints and permissions

About this article

Cite this article.

Cooper, C., Booth, A., Varley-Campbell, J. et al. Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies. BMC Med Res Methodol 18 , 85 (2018). https://doi.org/10.1186/s12874-018-0545-3


Received : 20 September 2017

Accepted : 06 August 2018

Published : 14 August 2018

DOI : https://doi.org/10.1186/s12874-018-0545-3


Keywords

  • Literature Search Process
  • Citation Chasing
  • Tacit Models
  • Unique Guidance
  • Information Specialists

BMC Medical Research Methodology

ISSN: 1471-2288


How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn't just summarize sources—it analyzes, synthesizes, and critically evaluates to give a clear picture of the state of knowledge on the subject.


Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review's structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions
When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.


Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines at the links below.

Download Word doc Download Google doc

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use Boolean operators (AND, OR, NOT) to help narrow down your search.

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you're thinking about using.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism. It can be helpful to make an annotated bibliography, where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you've finished writing and revising your literature review, don't forget to proofread thoroughly before submitting.

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.


If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

 Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

Cite this Scribbr article


McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved February 19, 2024, from https://www.scribbr.com/dissertation/literature-review/


Rapid reviews methods series: Guidance on literature search

BMJ Evidence-Based Medicine, Volume 28, Issue 6

  • http://orcid.org/0000-0001-6644-9845 Irma Klerings 1 ,
  • Shannon Robalino 2 ,
  • http://orcid.org/0000-0003-4808-3880 Andrew Booth 3 ,
  • http://orcid.org/0000-0002-2903-6870 Camila Micaela Escobar-Liquitay 4 ,
  • Isolde Sommer 1 ,
  • http://orcid.org/0000-0001-5531-3678 Gerald Gartlehner 1 , 5 ,
  • Declan Devane 6 , 7 ,
  • Siw Waffenschmidt 8
  • On behalf of the Cochrane Rapid Reviews Methods Group
  • 1 Department for Evidence-Based Medicine and Evaluation , University of Krems (Danube University Krems) , Krems , Niederösterreich , Austria
  • 2 Center for Evidence-based Policy , Oregon Health & Science University , Portland , Oregon , USA
  • 3 School of Health and Related Research (ScHARR) , The University of Sheffield , Sheffield , UK
  • 4 Research Department, Associate Cochrane Centre , Instituto Universitario Escuela de Medicina del Hospital Italiano de Buenos Aires , Buenos Aires , Argentina
  • 5 RTI-UNC Evidence-based Practice Center , RTI International , Research Triangle Park , North Carolina , USA
  • 6 School of Nursing & Midwifery, HRB TMRN , National University of Ireland Galway , Galway , Ireland
  • 7 Evidence Synthesis Ireland & Cochrane Ireland , University of Galway , Galway , Ireland
  • 8 Information Management Department , Institute for Quality and Efficiency in Healthcare , Cologne , Germany
  • Correspondence to Irma Klerings, Department for Evidence-based Medicine and Evaluation, Danube University Krems, Krems, Niederösterreich, Austria; irma.klerings@donau-uni.ac.at

This paper is part of a series of methodological guidance from the Cochrane Rapid Reviews Methods Group. Rapid reviews (RR) use modified systematic review methods to accelerate the review process while maintaining systematic, transparent and reproducible methods. In this paper, we address considerations for RR searches. We cover the main areas relevant to the search process: preparation and planning, information sources and search methods, search strategy development, quality assurance, reporting, and record management. Two options exist for abbreviating the search process: (1) reducing time spent on conducting searches and (2) reducing the size of the search result. Because screening search results is usually more resource-intensive than conducting the search, we suggest investing time upfront in planning and optimising the search to save time by reducing the literature screening workload. To achieve this goal, RR teams should work with an information specialist. They should select a small number of relevant information sources (eg, databases) and use search methods that are highly likely to identify relevant literature for their topic. Database search strategies should aim to optimise both precision and sensitivity, and quality assurance measures (peer review and validation of search strategies) should be applied to minimise errors.

  • Evidence-Based Practice
  • Systematic Reviews as Topic
  • Information Science

Data availability statement

No data are available.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjebm-2022-112079


WHAT IS ALREADY KNOWN ON THIS TOPIC

Compared with systematic reviews, rapid reviews (RR) often abbreviate or limit the literature search in some way to accelerate review production. However, RR guidance rarely specifies how to select topic-appropriate search approaches.

WHAT THIS STUDY ADDS

This paper presents an overview of considerations and recommendations for RR searching, covering the complete search process from the planning stage to record management. We also provide extensive appendices with practical examples, useful sources and a glossary of terms.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

There is no one-size-fits-all solution for RR literature searching: review teams should consider what search approaches best fit their RR project.

Introduction

This paper is part of a series from the Cochrane Rapid Reviews Methods Group (RRMG) providing methodological guidance for rapid reviews (RRs). 1–3 While the RRMG’s guidance 4 5 on Cochrane RR production includes brief advice on literature searching, we aim to provide in-depth recommendations for the entire search process.

Literature searching is the foundation for all reviews; therefore, it is important to understand the goals of a specific RR. The scope of RRs varies considerably (from focused questions to overviews of broad topics). 6 As with conventional systematic reviews (SRs), there is not a one-size-fits-all approach for RR literature searches. We aim to support RR teams in choosing methods that best fit their project while understanding the limitations of modified search methods. Our recommendations derive from current systematic search guidance, evidence on modified search methods and practical experience conducting RRs.

This paper presents considerations and recommendations, described briefly in table 1. The table also includes a comparison to the SR search process based on common recommendations. 7–10 We provide supplemental materials, including a list of additional resources, further details of our recommendations, practical examples, and a glossary (explaining the terms written in italics) in online supplemental appendices A–C.

Table 1: Recommendations for rapid review literature searching

Preparation and planning

Given that the results of systematic literature searches underpin a review, planning the searches is integral to the overall RR preparation. The RR search process follows the same steps as an SR search; therefore, RR teams must be familiar with the general standards of systematic searching. Templates (see online supplemental appendix B) and reporting guidance 11 for SR searches can also be adapted to structure the RR search process.

Developing a plan for the literature search forms part of protocol development and should involve an information specialist (eg, librarian). Information specialists can assist in refining the research question, selecting appropriate search methods and resources, designing and executing search strategies, and reporting the search methods. At minimum, specialist input should include assessing information sources and methods and providing feedback on the primary database search strategy.

Two options exist for abbreviating the search process: (1) reducing time spent on conducting searches (eg, using automation tools, reusing existing search strategies, omitting planning or quality assurance steps) and (2) reducing the size of the search result (eg, limiting the number of information sources, increasing the precision of search strategies, using study design filters). Study selection (ie, screening search results) is usually more resource-intensive than searching, 12 particularly for topics with complex or broad concepts or diffuse terminology; thus, the second option may be more efficient for the entire RR. Investing time upfront in optimising search sensitivity (ie, completeness) and precision (ie, positive predictive value) can save time in the long run by reducing the screening and selection workload.

Preliminary or scoping searches are critical to this process. They inform the choice of search methods and identify potentially relevant literature. Texts identified through preliminary searching serve as known relevant records that can be used throughout the search development process (see sections on database selection, development and validation of search strategies).

In addition to planning the search itself, the review team should factor in time for quality assurance steps (eg, search strategy peer review) and the management of search results (eg, deduplication, full-text retrieval).

Information sources and methods

To optimise the balance of search sensitivity and precision, RR teams should prioritise the most relevant information sources for the topic and the type of evidence required. These can include bibliographic databases (eg, MEDLINE/PubMed), grey literature sources and targeted supplementary search methods. Note that this approach differs from the Methodological Expectations of Cochrane Intervention Reviews Standards 9 where the same core set of information sources is required for every review and further supplemented by additional topic-specific and evidence-specific sources.

Choosing bibliographic databases

For many review topics, most evidence is found in peer-reviewed journal articles, making bibliographic databases the main resource of systematic searching. Limiting the number of databases searched can be a viable option in RRs, but it is important to prioritise topic-appropriate databases.

MEDLINE has been found to have high coverage for studies included in SRs 13 14 and is an appealing database choice because access is free via PubMed. However, coverage varies depending on topics and relevant study designs. 15 16 Additionally, even if all eligible studies for a topic were available in MEDLINE, search strategies will usually miss some eligible studies because search sensitivity is lower than database coverage. 13 17 This means searching MEDLINE alone is not a viable option, and additional information sources or search methods are required. Known relevant records can be used to help assess the coverage of selected databases (see also online supplemental appendix C).

Further information sources and search techniques

Supplementary systematic search methods have three main goals, to identify (1) grey literature, (2) published literature not covered by the selected bibliographic databases and (3) database-covered literature that was not retrieved by the database searches.

When RRs search only a small number of databases, supplementary searches can be particularly important to pick up eligible studies not identified via database searching. While supplementary methods might increase the time spent on searching, they sometimes better optimise search sensitivity and precision, saving time in the long run. 18 Depending on the topic and relevant evidence, such methods can offer an alternative to adding additional specialised database searches. To decide if and what supplementary searches are helpful, it is important to evaluate what literature might be missed by the database searches and how this might affect the specific RR.

Study registries and other grey literature

Some studies indicate that the omission of grey literature searches rarely affects review conclusions. 17 19 However, the relevance of study registries and other grey literature sources is topic-dependent. 16 19–21 For example, randomised controlled trials (RCTs) on newly approved drugs are typically identified in ClinicalTrials.gov. 20 For rapidly evolving topics such as COVID-19, preprints are an important source. 21 For public health interventions, various types of grey literature may be important (eg, evaluations conducted by local public health agencies). 22

Further supplementary search methods

Other supplementary techniques (eg, checking reference lists, reviewing specific websites or electronic table of contents, contacting experts) may identify additional studies not retrieved by database searches. 23 One of the most common approaches involves checking reference lists of included studies and relevant reviews. This method may identify studies missed by limited database searches. 12 Another promising citation-based approach is using the ‘similar articles’ option in PubMed, although research has focused on updating existing SRs. 24 25

Considerations for RRs of RCTs

Databases and search methods to identify RCTs have been particularly well researched. 17 20 24 26 27 For this reason, it is possible to give more precise recommendations for RRs based on RCTs than for other types of review. Table 2 provides an overview of the most important considerations; additional information can be found in online supplemental appendix C .

Table 2: Information sources for identification of randomised controlled trials (RCTs)

Search strategies

We define ‘search strategy’ as a Boolean search query in a specific database (eg, MEDLINE) using a specific interface (eg, Ovid). When several databases are searched, this query is usually developed in a primary database and interface (eg, Ovid MEDLINE) and translated to other databases.

Developing search strategies

Optimising search strategy precision while aiming for high sensitivity is critical in reducing the number of records retrieved. Preliminary searches provide crucial information to aid efficient search strategy development. Reviewing the abstracts and subject headings used in known relevant records will assist in identifying appropriate search terms. Text analysis tools can also be used to support this process, 28 29 for example, to develop ‘objectively derived’ search strategies. 30

Reusing or adapting complete search strategies (eg, from SRs identified by the preliminary searches) or selecting elements of search strategies for reuse can accelerate search strategy development. Additionally, validated search filters (eg, for study design) can be used to reduce the size of the search result without compromising the sensitivity of a search strategy. 31 However, quality assurance measures are necessary whether the search strategy is purpose-built, reused or adapted (see the ‘Quality assurance’ section).

Database-specific and interface-specific functionalities can also be used to improve searches’ precision and reduce the search result’s size. Some options are: restricting to records where subject terms have been assigned as the major focus of an article (eg, major descriptors in MeSH), using proximity operators (ie, terms adjacent or within a set number of words), frequency operators (ie, terms have to appear a minimum number of times in an abstract) or restricting search terms to the article title. 32–34

Automated syntax translation can save time and reduce errors when translating a primary search strategy to different databases. 35 36 However, manual adjustments will usually be necessary.

The time taken to learn how to use supporting technologies (eg, text analysis, syntax translation) proficiently should not be underestimated. The time investment is most likely to pay off for frequent searchers. A later paper in this series will address supporting software for the entire review process.

Limits and restrictions

Limits and restrictions (eg, publication dates, language) are another way to reduce the number of records retrieved but should be tailored to the topic and applied with caution. For example, if most studies about an intervention were published 10 years ago, then an arbitrary cut-off of ‘the last 5 years’ will miss many relevant studies. 37 Similarly, limiting to ‘English only’ is acceptable for most cases, but early in the COVID-19 pandemic, a quarter of available research articles were written in Chinese. 38 Depending on the RR topic, certain document types (eg, conference abstracts, dissertations) might be excluded if not considered relevant to the research question.

Note also that preset limiting functions in search interfaces (eg, limit to humans) often rely on subject headings (eg, MeSH) alone. They will miss eligible studies that lack or have incomplete subject indexing. Using (validated) search filters 31 is preferable.

Updating existing reviews

One approach to RR production involves updating an existing SR. In this case, preliminary searches should be used to check if new evidence is available. If the review team decide to update the review, they should assess the original search methods and adapt these as necessary.

One option is to identify the minimum set of databases required to retrieve all the original included studies. 39 Any reused search strategies should be validated and peer-reviewed (see below) and optimised for precision and/or sensitivity.

Additionally, it is important to assess whether the topic terminology or the relevant databases have changed since the original SR search.

In some cases, designing a new search process may be more efficient than reproducing the original search.

Quality assurance and search strategy peer review

Errors in search strategies are common and can impact the sensitivity and comprehensiveness of the search result. 40 If an RR search uses a small number of information sources, such errors could affect the identification of relevant studies.

Validation of search strategies

The primary database search strategy should be validated using known relevant records (if available). This means testing if the primary search strategy retrieves eligible studies found through preliminary searching. If some known studies are not identified, the searcher assesses the reasons and decides if revisions are necessary. Even a precision-focused systematic search should identify the majority—we suggest at least 80%–90%—of known studies. This is based on benchmarks for sensitivity-precision-maximising search filters 41 and assumes that the set of known studies is representative of the whole of relevant studies.

Peer review of search strategies

Ideally, an information specialist should review the planned information sources and search methods and use the PRESS (Peer Review of Electronic Search Strategies) checklist 42 to assess the primary search strategy. Turnaround time has to be factored into the process from the outset (eg, waiting for feedback, revising the search strategy). PRESS recommends a maximum turnaround time of five working days for feedback, but in-house peer review often takes only a few hours.

If the overall RR time plan does not allow for a full peer review of the search strategy, a review team member with search experience should check the search strategy for spelling errors and correct use of Boolean operators (AND, OR, NOT) at a minimum.

Reporting and record management

Record management requirements of RRs are largely identical to SRs and have to be factored into the time plan. Teams should develop a data management plan and review the relevant reporting standards at the project’s outset. PRISMA-S (Preferred Reporting Items for Systematic Reviews and Meta-Analyses literature search extension) 11 is a reporting standard for SR searches that can be adapted for RRs.

Reference management software (eg, EndNote, 43 Zotero 44 ) should be used to track search results, including deduplication. Note that record management for database searches is less time-consuming than for many supplementary or grey literature searches, which often require manual entry into reference management software. 12

Additionally, software platforms for SR production (eg, Covidence, 45 EPPI-Reviewer, 46 Systematic Review Data Repository Plus 47 ) can provide a unified way to keep track of records throughout the whole review process, which can improve management and save time. These platforms and other dedicated tools (eg, SRA Deduplicator) 48 also offer automated deduplication. However, the time and cost investment necessary to appropriately use these tools have to be considered.

Decisions about search methods for an RR need to consider where time can be most usefully invested and processes accelerated. The literature search should be considered in the context of the entire review process, for example, protocol development and literature screening: findings of preliminary searches often affect the development and refinement of the research question and the review's eligibility criteria. In turn, they affect the number of records retrieved by the searches and therefore the time needed for literature selection.

For this reason, focusing only on reducing time spent on designing and conducting searches can be a false economy when seeking to speed up review production. While some approaches (eg, text analysis, automated syntax translation) may save time without negatively affecting search validity, others (eg, skipping quality assurance steps, using convenient information sources without considering their topic appropriateness) may harm the entire review. Information specialists can provide crucial aid concerning the appropriate design of search strategies, choice of methods and information sources.

For this reason, we consider that investing time at the outset of the review to carefully choose a small number of highly appropriate search methods and optimise search sensitivity and precision likely leads to better and more manageable results.

Ethics statements

Patient consent for publication.

Not applicable.

References

  • Gartlehner G,
  • Nussbaumer-Streit B ,
  • Nussbaumer Streit B ,
  • Garritty C ,
  • Tricco AC ,
  • Nussbaumer-Streit B , et al
  • Trivella M ,
  • Hamel C , et al
  • Hartling L ,
  • Guise J-M ,
  • Kato E , et al
  • Lefebvre C ,
  • Glanville J ,
  • Briscoe S , et al
  • Higgins JPT ,
  • Lasserson T ,
  • Chandler J , et al
  • European network for Health Technology Assessment (EUnetHTA)
  • Rethlefsen ML ,
  • Kirtley S ,
  • Waffenschmidt S , et al
  • Klerings I , et al
  • Bramer WM ,
  • Giustini D ,
  • Halladay CW ,
  • Trikalinos TA ,
  • Schmid IT , et al
  • Frandsen TF ,
  • Eriksen MB ,
  • Hammer DMG , et al
  • Klerings I ,
  • Wagner G , et al
  • Husk K , et al
  • Featherstone R ,
  • Nuspl M , et al
  • Knelangen M ,
  • Hausner E ,
  • Metzendorf M-I , et al
  • Gianola S ,
  • Bargeri S , et al
  • Hillier-Brown FC ,
  • Moore HJ , et al
  • Varley-Campbell J , et al
  • Sampson M ,
  • de Bruijn B ,
  • Urquhart C , et al
  • Fitzpatrick-Lewis D , et al
  • Affengruber L ,
  • Waffenschmidt S ,
  • Kaiser T , et al
  • The InterTASC Information Specialists’ Sub-Group
  • Kleijnen J , et al
  • Jacob C , et al
  • Kaunelis D ,
  • Mensinkai S , et al
  • Mast F , et al
  • Sanders S ,
  • Carter M , et al
  • Marshall IJ ,
  • Marshall R ,
  • Wallace BC , et al
  • Fidahic M ,
  • Runjic R , et al
  • Hopewell S ,
  • Salvador-Oliván JA ,
  • Marco-Cuenca G ,
  • Arquero-Avilés R
  • Navarro-Ruan T ,
  • Hobson N , et al
  • McGowan J ,
  • Salzwedel DM , et al
  • Clarivate Analytics
  • Corporation for Digital Scholarship
  • Veritas Health Innovation Ltd
  • Graziosi S ,
  • Brunton J , et al
  • Agency for Healthcare Research and Quality
  • Institute for Evidence-Based Healthcare

Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

Twitter @micaelaescb

Collaborators On behalf of the Cochrane Rapid Reviews Methods Group: Declan Devane, Gerald Gartlehner, Isolde Sommer.

Contributors IK, SR, AB, CME-L and SW contributed to the conceptualisation of this paper. IK, AB and CME-L wrote the first draft of the manuscript. All authors critically reviewed and revised the manuscript. IK is responsible for the overall content.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests AB is co-convenor of the Cochrane Qualitative and Implementation Methods Group. In the last 36 months, he received royalties from Systematic Approaches To a Successful Literature Review (Sage 3rd edn), payment or honoraria from the Agency for Healthcare Research and Quality, and travel support from the WHO. DD works part time for Cochrane Ireland and Evidence Synthesis Ireland, which are funded within the University of Ireland Galway (Ireland) by the Health Research Board (HRB) and the Health and Social Care, Research and Development (HSC R&D) Division of the Public Health Agency in Northern Ireland.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

Linked Articles

  • Research methods and reporting Rapid reviews methods series: Guidance on team considerations, study selection, data extraction and risk of bias assessment Barbara Nussbaumer-Streit Isolde Sommer Candyce Hamel Declan Devane Anna Noel-Storr Livia Puljak Marialena Trivella Gerald Gartlehner BMJ Evidence-Based Medicine 2023; 28 418-423 Published Online First: 19 Apr 2023. doi: 10.1136/bmjebm-2022-112185
  • Research methods and reporting Rapid reviews methods series: Guidance on assessing the certainty of evidence Gerald Gartlehner Barbara Nussbaumer-Streit Declan Devane Leila Kahwati Meera Viswanathan Valerie J King Amir Qaseem Elie Akl Holger J Schuenemann BMJ Evidence-Based Medicine 2023; 29 50-54 Published Online First: 19 Apr 2023. doi: 10.1136/bmjebm-2022-112111
  • Research methods and reporting Rapid Reviews Methods Series: Involving patient and public partners, healthcare providers and policymakers as knowledge users Chantelle Garritty Andrea C Tricco Maureen Smith Danielle Pollock Chris Kamel Valerie J King BMJ Evidence-Based Medicine 2023; 29 55-61 Published Online First: 19 Apr 2023. doi: 10.1136/bmjebm-2022-112070

Read the full text or download the PDF:

Charles Sturt University

Literature Review: Developing a search strategy


From research question to search strategy

Keeping a record of your search activity

Good search practice could involve keeping a search diary or document detailing your search activities (Phelps et al. 2007, pp. 128-149), so that you can keep track of effective search terms and help others to reproduce your steps and get the same results.

This record could be a document, table or spreadsheet with:

  • the names of the sources you searched and the provider you accessed them through, e.g. Medline (Ovid), Web of Science (Thomson Reuters), plus any other literature sources you used
  • how you searched (keywords and/or subject headings)
  • which search terms you used (which words and phrases)
  • any search techniques you employed (truncation, adjacency, etc.)
  • how you combined your search terms (AND/OR) - check out the Database Help guide for more tips on Boolean searching
  • the number of search results from each source and each strategy used - this can be the evidence you need to prove a gap in the literature, and confirms the importance of your research question
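
If you prefer to keep this record programmatically rather than in a document or spreadsheet, the short Python sketch below appends one log entry per search to a CSV file. The file name, field names and result count are illustrative only, not part of the guide.

import csv, os
from datetime import date

LOG_FILE = "search_log.csv"   # hypothetical file name
FIELDS = ["source", "date_searched", "search_type",
          "search_terms", "techniques", "results"]

entry = {
    "source": "Medline (Ovid)",
    "date_searched": date.today().isoformat(),
    "search_type": "keyword and subject headings",
    "search_terms": '(adolescent OR teenager) AND "physical activity"',
    "techniques": "truncation, phrase searching",
    "results": 1243,  # illustrative count only
}

new_file = not os.path.exists(LOG_FILE)
with open(LOG_FILE, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if new_file:
        writer.writeheader()   # write the header row once, for a new file
    writer.writerow(entry)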

A search planner may help you to organise your thoughts prior to conducting your search. If you have any problems organising your thoughts before, during or after searching, please contact your Library Faculty Team for individual help.

  • Literature search - a librarian's handout to introduce tools, terms and techniques Created by Elsevier librarian, Katy Kavanagh Web, this document outlines tools, terms and techniques to think about when conducting a literature search.
  • Search planner

Literature search cycle

Diagram text description

This diagram illustrates the literature search cycle as a circle divided into quarters. Top left: identify the main concepts, by listing controlled vocabulary terms, synonyms, keywords and spelling variations. Top right: select library resources to search, such as the library catalogue, relevant journal databases and other resources. Bottom right: search the resources, considering Boolean, proximity and truncation searching techniques. Bottom left: review and refine the results, by evaluating results, rethinking keywords and creating alerts.

Have a search framework

Search frameworks are mnemonics which can help you focus your research question. They are also useful in helping you to identify the concepts and terms you will use in your literature search.

PICO is a search framework commonly used in the health sciences to focus clinical questions. As an example, you work in an aged care facility and are interested in whether cranberry juice might help reduce the common occurrence of urinary tract infections. The PICO framework would look like this:

  • P (Population): people living in aged care facilities
  • I (Intervention): cranberry juice
  • C (Comparison): no cranberry juice
  • O (Outcome): reduced occurrence of urinary tract infections

Now that the issue has been broken down into its elements, it is easier to turn it into an answerable research question: “Does cranberry juice help reduce urinary tract infections in people living in aged care facilities?”

Other frameworks may be helpful, depending on your question and your field of interest. PICO can be adapted to PICOT (which adds Time), PICOS (which adds Study design), or PICOC (which adds Context).

For qualitative questions you could use:

  • SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type

For questions about causes or risk:

  • PEO: Population, Exposure, Outcomes

For evaluations of interventions or policies:

  • SPICE: Setting, Population or Perspective, Intervention, Comparison, Evaluation, or
  • ECLIPSE: Expectation, Client group, Location, Impact, Professionals, Service

See the University of Notre Dame Australia’s examples of some of these frameworks. 

You can also try some PICO examples in the National Library of Medicine's PubMed training site: Using PICO to frame clinical questions.



How to undertake a literature search: a step-by-step guide

Affiliation.

  • 1 Literature Search Specialist, Library and Archive Service, Royal College of Nursing, London.
  • PMID: 32279549
  • DOI: 10.12968/bjon.2020.29.7.431

Undertaking a literature search can be a daunting prospect. Breaking the exercise down into smaller steps will make the process more manageable. This article suggests 10 steps that will help readers complete this task, from identifying key concepts to choosing databases for the search and saving the results and search strategy. It discusses each of the steps in a little more detail, with examples and suggestions on where to get help. This structured approach will help readers obtain a more focused set of results and, ultimately, save time and effort.

Keywords: Databases; Literature review; Literature search; Reference management software; Research questions; Search strategy.

  • Databases, Bibliographic*
  • Information Storage and Retrieval / methods*
  • Nursing Research
  • Review Literature as Topic*


UOW Library

Literature Review

How to search effectively.


1. Identify search words

Analyse your research topic or question.

  • What are the main ideas?
  • What concepts or theories have you already covered?
  • Write down your main ideas, synonyms, related words and phrases.
  • If you're looking for particular types of research, you can use these as search words. E.g. qualitative, quantitative, methodology, review, survey, test, trend (and more).
  • Be mindful of UK and US spelling variations. E.g. organisation OR organization, ageing OR aging.
  • Interactive Keyword Builder
  • Identifying effective keywords

2. Connect your search words

Find results with one or more search words.

Use OR between words that mean the same thing.

E.g.  adolescent  OR  teenager

This search will find results with either (or both) of the search words.

Find results with two search words

Use AND between words which represent the main ideas in the question.

E.g. adolescent AND “physical activity”

This will find results with both of the search words.

Exclude search words

Use NOT to exclude words that you don’t want in your search results.

E.g. (adolescent OR teenager) NOT “young adult”
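
As a rough illustration of how these operators fit together, the Python sketch below assembles a single search string from lists of synonyms. The function and term lists are illustrative, not part of the guide.

# Build a Boolean search string from lists of synonyms:
# OR within a concept, AND between concepts, NOT for exclusions.
def or_group(terms):
    # quote multi-word phrases and join synonyms with OR
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

population = ["adolescent", "teenager"]
topic = ["physical activity", "exercise"]
exclude = ["young adult"]

query = f"{or_group(population)} AND {or_group(topic)} NOT {or_group(exclude)}"
print(query)
# (adolescent OR teenager) AND ("physical activity" OR exercise) NOT ("young adult")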

3. Use search tricks

Search for different word endings.

Truncation *

The asterisk symbol * will help you search for different word endings.

E.g. teen* will find results with the words: teen, teens, teenager, teenagers

Specific truncation symbols will vary. Check the 'Help' section of the database you are searching.

Search for common phrases

Phrase searching “...........”

Double quotation marks help you search for common phrases and make your results more relevant.

E.g. “physical activity” will find results with the words physical activity together as a phrase.

Search for spelling variations within related terms

Wildcards ?

Wildcard symbols allow you to search for spelling variations within the same or related terms.

E.g. wom?n will find results with women OR woman

Specific wild card symbols will vary. Check the 'Help' section of the database you are searching.

Search terms within specific ranges of each other

Proximity  w/#

Proximity searching allows you to specify where your search terms will appear in relation to each other.

E.g.  pain w/10 morphine will search for pain within ten words of morphine

Specific proximity symbols will vary. Check the 'Help' section of the database you are searching.

4. Improve your search results

All library databases are different and you can't always search and refine in the same way. Try to be consistent when transferring your search between the library databases you have chosen.

Narrow and refine your search results by:

  • year of publication or date range (for recent or historical research)
  • document or source type (e.g. article, review or book)
  • subject or keyword (for relevance). Try repeating your search using the 'subject' headings or 'keywords' field to focus your search
  • searching in particular fields, i.e. citation and abstract. Explore the available dropdown menus to change the fields to be searched.

When searching, remember to:

Adapt your search and keep trying.

Searching for information is a process and you won't always get it right the first time. Improve your results by changing your search and trying again until you're happy with what you have found.

Keep track of your searches

Keeping track of searches saves time as you can rerun them, store references, and set up regular alerts for new research relevant to your topic.

Most library databases allow you to register with a personal account. Look for a 'log in', 'sign in' or 'register' button to get started.

  • Literature review search tracker (Excel spreadsheet)

Manage your references

There are free and subscription reference management programs available on the web or to download on your computer.

  • EndNote - The University has a license for EndNote. It is available for all students and staff, although it is recommended for postgraduates and academic staff.
  • Zotero - Free software recommended for undergraduate students.


Systematic Reviews: Step 3: Conduct Literature Searches

Created by health science librarians.


About Step 3: Conduct Literature Searches

This step covers: partnering with a librarian; the systematic searching process; choosing a few databases; searching with controlled vocabulary and keywords; acknowledging outdated or offensive terminology; building your search with nesting, Boolean operators, and field tags; translating the search to other databases and using other searching methods; documenting the search; and updating your review.


In Step 3, you will design a search strategy to find all of the articles related to your research question. You will:

  • Define the main concepts of your topic
  • Choose which databases you want to search
  • List terms to describe each concept
  • Add terms from controlled vocabulary like MeSH
  • Use field tags to tell the database where to search for terms
  • Combine terms and concepts with Boolean operators AND and OR
  • Translate your search strategy to match the format standards for each database
  • Save a copy of your search strategy and details about your search

There are many factors to think about when building a strong search strategy for systematic reviews. Librarians are available to provide support with this step of the process.

The sections below describe how reporting with PRISMA, managing your review with Covidence, and working with a librarian apply to Step 3: Conduct Literature Searches.

Reporting your review with PRISMA

For PRISMA, there are specific items you will want to report from your search.  For this step, review the PRISMA-S checklist.

  • PRISMA-S for Searching
  • Specify all databases, registers, websites, organizations, reference lists, and other sources searched or consulted to identify studies. Specify the date when each source was last searched or consulted. Present the full search strategies for all databases, registers and websites, including any filters and limits used.
  • For information on how to document database searches and other search methods on your PRISMA flow diagram, visit our FAQs "How do I document database searches on my PRISMA flow diagram?" and "How do I document a grey literature search for my PRISMA flow diagram?"

Managing your review with Covidence

For this step of the review, in Covidence you can:

  • Document searches in Covidence review settings so all team members can view
  • Add keywords from your search to be highlighted in green or red while your team screens articles in your review settings

How a librarian can help with Step 3

When designing and conducting literature searches, a librarian can advise you on :

  • How to create a search strategy with Boolean operators, database-specific syntax, subject headings, and appropriate keywords 
  • How to apply previously published systematic review search strategies to your current search
  • How to test your search strategy's performance 
  • How to translate a search strategy from one database's preferred structure to another

The goal of a systematic review search is to find all results that are relevant to your topic. Because systematic review searches can be quite extensive and retrieve large numbers of results, an important aspect of systematic searching is limiting the number of irrelevant results that need to be screened. Librarians are experts trained in literature searching and systematic review methodology. Ask us a question or partner with a librarian to save time and improve the quality of your review. Our comparison chart detailing two tiers of partnership provides more information on how librarians can collaborate with and contribute to systematic review teams.


Search Process

  • Use controlled vocabulary, if applicable
  • Include synonyms/keyword terms
  • Choose databases, websites, and/or registries to search
  • Translate to other databases
  • Search using other methods (e.g. hand searching)
  • Validate and peer review the search

Databases can be multidisciplinary or subject specific. Choose the best databases for your research question. Databases index various journals, so in order to be comprehensive, it is important to search multiple databases when conducting a systematic review. Consider searching databases with more diverse or global coverage (e.g., Global Index Medicus) when appropriate. A list of frequently used databases is provided below. You can access UNC Libraries' full listing of databases on the HSL website (arranged alphabetically or by subject).

Generally speaking, when literature searching, you are not searching the full-text article. Instead, you are searching certain citation data fields, like title, abstract, keyword, controlled vocabulary terms, and more. When developing a literature search, a good place to start is to identify searchable concepts of the research question, and then expand by adding other terms to describe those concepts. Read below for more information and examples on how to develop a literature search, as well as find tips and tricks for developing more comprehensive searches.

Identify search concepts and terms for each

Start by identifying the main concepts of your research question. If unsure, try using a question framework to help identify the main searchable concepts. PICO is one example of a question framework and is used specifically for clinical questions. If your research question doesn't fit into the PICO model well, view other examples of question frameworks and try another!

Our example in PICO format:

  • P (Population): patients 65 years and older
  • I (Intervention): influenza vaccine
  • C (Comparison): no influenza vaccine
  • O (Outcome): future risk of pneumonia

Question: For patients 65 years and older, does an influenza vaccine reduce the future risk of pneumonia?

Controlled vocabulary

Controlled vocabulary is a set of terminology assigned to citations to describe the content of each reference. Searching with controlled vocabulary can improve the relevancy of search results. Many databases assign controlled vocabulary terms to citations, but their naming schema is often specific to each database. For example, the controlled vocabulary system searchable via PubMed is MeSH, or Medical Subject Headings. More information on searching MeSH can be found here .

Note: Controlled vocabulary may be outdated, and some databases allow users to submit requests to update terminology.

As mentioned above, databases with controlled vocabulary often use their own unique system.

Keyword Terms

Not all citations are indexed with controlled vocabulary terms, however, so it is important to combine controlled vocabulary searches with keyword, or text word, searches. 

Authors often write about the same topic in varied ways and it is important to add these terms to your search in order to capture most of the literature. For example, consider these elements when developing a list of keyword terms for each concept:

  • American versus British spelling
  • hyphenated terms
  • synonyms and related terms (e.g., quality of life, satisfaction)
  • broader and narrower terms (e.g., vaccination, influenza vaccination)

There are several resources to consider when searching for synonyms. Scan the results of preliminary searches to identify additional terms. Look for synonyms, word variations, and other possibilities in Wikipedia, other encyclopedias or dictionaries, and databases. For example, PubChem lists additional drug names and chemical compounds.


Combining controlled vocabulary and text words in PubMed would look like this:

"Influenza Vaccines"[Mesh] OR "influenza vaccine" OR "influenza vaccines" OR "flu vaccine" OR "flu vaccines" OR "flu shot" OR "flu shots" OR "influenza virus vaccine" OR "influenza virus vaccines"

Social and cultural norms have been rapidly changing around the world. This has led to changes in the vocabulary used, such as when describing people or populations. Library and research terminology changes more slowly, and therefore can be considered outdated, unacceptable, or overly clinical for use in conversation or writing.

For our example with people 65 years and older, APA Style Guidelines recommend that researchers use terms like “older adults” and “older persons” and forgo terms like “senior citizens” and “elderly” that connote stereotypes. While these are current recommendations, researchers will recognize that terms like “elderly” have previously been used in the literature. Therefore, removing these terms from the search strategy may result in missed relevant articles. 

Research teams need to discuss current and outdated terminology and decide which terms to include in the search to be as comprehensive as possible. The research team or a librarian can search for currently preferred terms in glossaries, dictionaries, published guidelines, and governmental or organizational websites. The University of Michigan Library provides suggested wording to use in the methods section when antiquated, non-standard, exclusionary, or potentially offensive terms are included in the search.

Check the methods sections or supplementary materials of published systematic reviews for search strategies to see what terminology they used. This can help inform your search strategy by using MeSH terms or keywords you may not have thought of. However, be aware that search strategies will differ in their comprehensiveness.

You can also run a preliminary search for your topic, sort the results by Relevance or Best Match, and skim through titles and abstracts to identify terminology from relevant articles that you should include in your search strategy.

Nesting is a term that describes organizing search terms inside parentheses. This is important because, just like their function in math, commands inside a set of parentheses occur first. Parentheses let the database know in which order terms should be combined. 

Always combine terms for a single concept inside a parentheses set. For example: 

( "Influenza Vaccines"[Mesh] OR "influenza vaccine" OR "influenza vaccines" OR "flu vaccine" OR "flu vaccines" OR "flu shot" OR "flu shots" OR "influenza virus vaccine" OR "influenza virus vaccines" )

Additionally, you may nest a subset of terms for a concept inside a larger parentheses set, as seen below. Pay careful attention to the number of parenthesis sets and ensure they are matched, meaning that for every opening parenthesis there is a closing one.

( "Influenza Vaccines"[Mesh] OR "influenza vaccine" OR "influenza vaccines" OR "flu vaccine" OR "flu vaccines" OR "flu shot" OR "flu shots" OR "influenza virus vaccine" OR "influenza virus vaccines" OR   (( flu OR influenza ) AND ( vaccine OR vaccines OR vaccination OR immunization )))

Boolean operators

Boolean operators are used to combine terms in literature searches. Searches are typically organized using the Boolean operators OR or AND. OR is used to combine search terms for the same concept (i.e., influenza vaccine). AND is used to combine different concepts (i.e., influenza vaccine AND older adults AND pneumonia). An example of how Boolean operators can affect search retrieval is shown below. Using AND to combine the three concepts will only retrieve results where all are present. Using OR to combine the concepts will retrieve results that use all separately or together. It is important to note that, generally speaking, when you are performing a literature search you are only searching the title, abstract, keywords and other citation data. You are not searching the full-text of the articles.


The last major element to consider when building systematic literature searches is field tags. Field tags tell the database exactly where to search. For example, you can use a field tag to tell a database to search for a term in just the title, the title and abstract, and more. Just like with controlled vocabulary, field tag commands are different for every database.

If you do not manually apply field tags to your search, most databases will automatically search a default set of citation data fields. Databases may also expand or remap your search with their own algorithms (for example, automatic term mapping) if you do not apply field tags. For systematic review searching, best practice is to apply field tags to each term for reproducibility.

For example:

("Influenza Vaccines"[Mesh] OR "influenza vaccine"[tw] OR "influenza vaccines"[tw] OR "flu vaccine"[tw] OR "flu vaccines"[tw] OR "flu shot"[tw] OR "flu shots"[tw] OR "influenza virus vaccine"[tw] OR "influenza virus vaccines"[tw] OR ((flu[tw] OR influenza[tw]) AND (vaccine[tw] OR vaccines[tw] OR vaccination[tw] OR immunization[tw])))


For more information about how to use a variety of databases, check out our guides on searching.

  • Searching PubMed guide Guide to searching Medline via the PubMed database
  • Searching Embase guide Guide to searching Embase via embase.com
  • Searching Scopus guide Guide to searching Scopus via scopus.com
  • Searching EBSCO Databases guide Guide to searching CINAHL, PsycInfo, Global Health, & other databases via EBSCO

Combining search elements together

Organizational structure of literature searches is very important. Specifically, how terms are grouped (or nested) and combined with Boolean operators will drastically impact search results. These commands tell databases exactly how to combine terms together, and if done incorrectly or inefficiently, search results returned may be too broad or irrelevant.

For example, in PubMed:

(influenza OR flu) AND vaccine is a properly combined search and it produces around 50,000 results.

influenza OR flu AND vaccine is not properly combined.  Databases may read it as everything about influenza OR everything about (flu AND vaccine), which would produce more results than needed.
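
If you want to see the effect of nesting for yourself, the sketch below compares hit counts for the two forms via NCBI's E-utilities esearch endpoint. It assumes the third-party requests package is installed, and the counts it prints will change over time as PubMed is updated.

# Compare PubMed hit counts for a properly nested and an un-nested query.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(query):
    params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": 0}
    data = requests.get(ESEARCH, params=params).json()
    return int(data["esearchresult"]["count"])

print(pubmed_count("(influenza OR flu) AND vaccine"))   # properly nested
print(pubmed_count("influenza OR flu AND vaccine"))     # AND binds before OR, so results differ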

We recommend one or more of the following:

  • put all your synonyms together inside a set of parentheses, then put AND between the closing parenthesis of one set and the opening parenthesis of the next set
  • use a separate search box for each set of synonyms
  • run each set of synonyms as a separate search, and then combine all your searches
  • ask a librarian if your search produces too many or too few results

Translating search strategies to other databases

Databases often use their own set of terminology and syntax. When searching multiple databases, you need to adjust the search slightly to retrieve comparable results. Our sections on Controlled Vocabulary and Field Tags have information on how to build searches in different databases.  Resources to help with this process are listed below.

  • Polyglot search A tool to translate a PubMed or Ovid search to other databases
  • Search Translation Resources (Cornell) A listing of resources for search translation from Cornell University
  • Advanced Searching Techniques (King's College London) A collection of advanced searching techniques from King's College London

Other searching methods

Hand searching.

Literature searches can be supplemented by hand searching. One of the most popular ways this is done with systematic reviews is by searching the reference list and citing articles of studies included in the review. Another method is manually browsing key journals in your field to make sure no relevant articles were missed. Other sources that may be considered for hand searching include: clinical trial registries, white papers and other reports, pharmaceutical or other corporate reports, conference proceedings, theses and dissertations, or professional association guidelines.

Searching grey literature

Grey literature typically refers to literature not published in a traditional manner and often not retrievable through large databases and other popular resources. Grey literature should be searched for inclusion in systematic reviews in order to reduce bias and increase thoroughness. There are several databases specific to grey literature that can be searched.

  • Open Grey Grey literature for Europe
  • OAIster A union catalog of millions of records representing open access resources from collections worldwide
  • Grey Matters: a practical tool for searching health-related grey literature (CADTH) From CADTH, the Canadian Agency for Drugs and Technologies in Health, Grey Matters is a practical tool for searching health-related grey literature. The MS Word document covers a grey literature checklist, including national and international health technology assessment (HTA) web sites, drug and device regulatory agencies, clinical trial registries, health economics resources, Canadian health prevalence or incidence databases, and drug formulary web sites.
  • Duke Medical Center Library: Searching for Grey Literature A good online compilation of resources by the Duke Medical Center Library.

Systematic review quality is highly dependent on the literature search(es) used to identify studies. To follow best practices for reporting search strategies, as well as to increase reproducibility and transparency, document the various elements of the literature search for your review. To make this process clearer, a statement and checklist for reporting literature searches has been developed and can be found below.

  • PRISMA-S: Reporting Literature Searches in Systematic Reviews
  • Section 4.5 Cochrane Handbook - Documenting and reporting the search process

At a minimum, document and report certain elements, such as databases searched, including name (i.e., Scopus) and platform (i.e. Elsevier), websites, registries, and grey literature searched. In addition, this also may include citation searching and reaching out to experts in the field. Search strategies used in each database or source should be documented, along with any filters or limits, and dates searched. If a search has been updated or was built upon previous work, that should be noted as well. It is also helpful to document which search terms have been tested and decisions made for term inclusion or exclusion by the team. Last, any peer review process should be stated as well as the total number of records identified from each source and how deduplication was handled. 

If you have a librarian on your team who is creating and running the searches, they will handle the search documentation.

You can document search strategies in word processing software you are familiar with like Microsoft Word or Excel, or Google Docs or Sheets. A template, and separate example file, is provided below for convenience. 

  • Search Strategy Documentation Template
  • Search Strategy Documentation Example
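
As a minimal sketch of one documentation element mentioned above, the Python snippet below tallies the number of records identified per source and the total after deduplication, assuming each source's export has already been reduced to a list of identifiers. The source names and record IDs shown are placeholders.

# Count records per source and the total after deduplication by identifier.
results_by_source = {
    "PubMed (NLM)": ["10.1000/a1", "10.1000/a2", "10.1000/a3"],
    "Embase (Elsevier)": ["10.1000/a2", "10.1000/a4"],
    "Scopus (Elsevier)": ["10.1000/a3", "10.1000/a4", "10.1000/a5"],
}

for source, ids in results_by_source.items():
    print(f"{source}: {len(ids)} records")

unique_ids = set().union(*results_by_source.values())
print(f"Total after deduplication: {len(unique_ids)} records")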

*Some databases like PubMed are being continually updated with new technology and algorithms. This means that searches may retrieve different results than when originally run, even with the same filters, date limits, etc.

When you decide to update a systematic review search, there are two ways of identifying new articles:  

1. Rerun the original search strategy without any changes.

Rerun the original search strategy without making any changes.  Import the results into your citation manager, and remove all articles duplicated from the original set of search results.

2. Rerun the original search strategy and add an entry date filter.

Rerun the original search strategy and add a date filter for when the article was added to the database ( not the publication date).  An entry date filter will find any articles added to the results since you last ran the search, unlike a publication date filter, which would only find more recent articles.

Some examples of entry date filters for articles entered since December 31, 2021 are:

  • PubMed:   AND ("2021/12/31"[EDAT] : "3000"[EDAT])
  • Embase: AND [31-12-2021]/sd
  • CINAHL:   AND EM 20211231-20231231
  • PsycInfo: AND RD 20211231-20231231
  • Scopus:   AND LOAD-DATE AFT 20211231  
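
Whichever option you choose, records already screened in the original search can be removed by comparing identifiers exported from both runs. A minimal sketch, assuming plain-text files of PMIDs (file names are hypothetical):

# Keep only records not seen in the original search, by PMID set difference.
def load_pmids(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

original = load_pmids("original_search_pmids.txt")
update = load_pmids("updated_search_pmids.txt")

new_records = update - original
print(f"{len(new_records)} records to screen in the update")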

Your PRISMA flow diagram

For more information about updating the PRISMA flow diagram for your systematic review, see the information on filling out a PRISMA flow diagram for review updates on the Step 8: Write the Review page of the guide.

  • Open access
  • Published: 06 December 2017

Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study

  • Wichor M. Bramer 1 ,
  • Melissa L. Rethlefsen 2 ,
  • Jos Kleijnen 3 , 4 &
  • Oscar H. Franco 5  

Systematic Reviews volume  6 , Article number:  245 ( 2017 ) Cite this article

139k Accesses

807 Citations

88 Altmetric

Metrics details

Within systematic reviews, when searching for relevant references, it is advisable to use multiple databases. However, searching databases is laborious and time-consuming, as the syntax of search strategies is database specific. We aimed to determine the optimal combination of databases needed to conduct efficient searches in systematic reviews and whether the current practice in published reviews is appropriate. While previous studies determined the coverage of databases, we analyzed the actual retrieval from the original searches for systematic reviews.

Since May 2013, the first author prospectively recorded results from systematic review searches that he performed at his institution. PubMed was used to identify systematic reviews published using our search strategy results. For each published systematic review, we extracted the references of the included studies. Using the prospectively recorded results and the studies included in the publications, we calculated recall, precision, and number needed to read for single databases and databases in combination. We assessed the frequency at which databases and combinations would achieve varying levels of recall (i.e., 95%). For a sample of 200 recently published systematic reviews, we calculated how many had used enough databases to ensure 95% recall.

A total of 58 published systematic reviews were included, totaling 1746 relevant references identified by our database searches, while 84 included references had been retrieved by other search methods. Sixteen percent of the included references (291 articles) were only found in a single database; Embase produced the most unique references (n = 132). The combination of Embase, MEDLINE, Web of Science Core Collection, and Google Scholar performed best, achieving an overall recall of 98.3% and 100% recall in 72% of systematic reviews. We estimate that 60% of published systematic reviews do not retrieve 95% of all available relevant references as many fail to search important databases. Other specialized databases, such as CINAHL or PsycINFO, add unique references to some reviews where the topic of the review is related to the focus of the database.

Conclusions

Optimal searches in systematic reviews should search at least Embase, MEDLINE, Web of Science, and Google Scholar as a minimum requirement to guarantee adequate and efficient coverage.

Peer Review reports

Investigators and information specialists searching for relevant references for a systematic review (SR) are generally advised to search multiple databases and to use additional methods to be able to adequately identify all literature related to the topic of interest [ 1 , 2 , 3 , 4 , 5 , 6 ]. The Cochrane Handbook, for example, recommends the use of at least MEDLINE and Cochrane Central and, when available, Embase for identifying reports of randomized controlled trials [ 7 ]. There are disadvantages to using multiple databases. It is laborious for searchers to translate a search strategy into multiple interfaces and search syntaxes, as field codes and proximity operators differ between interfaces. Differences in thesaurus terms between databases add another significant burden for translation. Furthermore, it is time-consuming for reviewers who have to screen more, and likely irrelevant, titles and abstracts. Lastly, access to databases is often limited and only available on subscription basis.

Previous studies have investigated the added value of different databases on different topics [ 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 ]. Some concluded that searching only one database can be sufficient as searching other databases has no effect on the outcome [ 16 , 17 ]. Nevertheless others have concluded that a single database is not sufficient to retrieve all references for systematic reviews [ 18 , 19 ]. Most articles on this topic draw their conclusions based on the coverage of databases [ 14 ]. A recent paper tried to find an acceptable number needed to read for adding an additional database; sadly, however, no true conclusion could be drawn [ 20 ]. However, whether an article is present in a database may not translate to being found by a search in that database. Because of this major limitation, the question of which databases are necessary to retrieve all relevant references for a systematic review remains unanswered. Therefore, we research the probability that single or various combinations of databases retrieve the most relevant references in a systematic review by studying actual retrieval in various databases.

The aim of our research is to determine the combination of databases needed for systematic review searches to provide efficient results (i.e., to minimize the burden for the investigators without reducing the validity of the research by missing relevant references). A secondary aim is to investigate the current practice of databases searched for published reviews. Are included references being missed because the review authors failed to search a certain database?

Development of search strategies

At Erasmus MC, search strategies for systematic reviews are often designed via a librarian-mediated search service. The information specialists of Erasmus MC developed an efficient method that helps them perform searches in many databases in a much shorter time than other methods. This method of literature searching and a pragmatic evaluation thereof are published in separate journal articles [ 21 , 22 ]. In short, the method consists of an efficient way to combine thesaurus terms and title/abstract terms into a single line search strategy. This search is then optimized. Articles that are indexed with a set of identified thesaurus terms, but do not contain the current search terms in title or abstract, are screened to discover potential new terms. New candidate terms are added to the basic search and evaluated. Once optimal recall is achieved, macros are used to translate the search syntaxes between databases, though manual adaptation of the thesaurus terms is still necessary.

Review projects at Erasmus MC cover a wide range of medical topics, from therapeutic effectiveness and diagnostic accuracy to ethics and public health. In general, searches are developed in MEDLINE in Ovid (Ovid MEDLINE® In-Process & Other Non-Indexed Citations, Ovid MEDLINE® Daily and Ovid MEDLINE®, from 1946); Embase.com (searching both Embase and MEDLINE records, with full coverage including Embase Classic); the Cochrane Central Register of Controlled Trials (CENTRAL) via the Wiley Interface; Web of Science Core Collection (hereafter called Web of Science); PubMed restricting to records in the subset “as supplied by publisher” to find references that are not yet indexed in MEDLINE (using the syntax publisher [sb]); and Google Scholar. In general, we use the first 200 references as sorted in the relevance ranking of Google Scholar. When the number of references from other databases was low, we expected the total number of potential relevant references to be low. In this case, the number of hits from Google Scholar was limited to 100. When the overall number of hits was low, we additionally searched Scopus, and when appropriate for the topic, we included CINAHL (EBSCOhost), PsycINFO (Ovid), and SportDiscus (EBSCOhost) in our search.

Beginning in May 2013, the number of records retrieved from each search for each database was recorded at the moment of searching. The complete results from all databases used for each of the systematic reviews were imported into a unique EndNote library upon search completion and saved without deduplication for this research. The researchers that requested the search received a deduplicated EndNote file from which they selected the references relevant for inclusion in their systematic review. All searches in this study were developed and executed by W.M.B.

Determining relevant references of published reviews

We searched PubMed in July 2016 for all reviews published since 2014 where first authors were affiliated to Erasmus MC, Rotterdam, the Netherlands, and matched those with search registrations performed by the medical library of Erasmus MC. This search was used in earlier research [ 21 ]. Published reviews were included if the search strategies and results had been documented at the time of the last update and if, at minimum, the databases Embase, MEDLINE, Cochrane CENTRAL, Web of Science, and Google Scholar had been used in the review. From the published journal article, we extracted the list of final included references. We documented the department of the first author. To categorize the types of patient/population and intervention, we identified broad MeSH terms relating to the most important disease and intervention discussed in the article. We copied from the MeSH tree the top MeSH term directly below the disease category or, in the case of the intervention, directly below the therapeutics MeSH term. We selected the domain from a pre-defined set of broad domains, including therapy, etiology, epidemiology, diagnosis, management, and prognosis. Lastly, we checked whether the reviews described limiting their included references to a particular study design.

To identify whether our searches had found the included references, and if so, from which database(s) that citation was retrieved, each included reference was located in the original corresponding EndNote library using the first author name combined with the publication year as a search term for each specific relevant publication. If this resulted in extraneous results, the search was subsequently limited using a distinct part of the title or a second author name. Based on the record numbers of the search results in EndNote, we determined from which database these references came. If an included reference was not found in the EndNote file, we presumed the authors used an alternative method of identifying the reference (e.g., examining cited references, contacting prominent authors, or searching gray literature), and we did not include it in our analysis.
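
A simplified sketch of this matching step is given below. The record structure, names and values are illustrative only, not the authors' actual EndNote workflow; the idea is simply to match each included reference by first-author surname and year (with a title fragment as a tiebreaker) and record which database(s) retrieved it.

# Match included references to database records by author, year and title fragment.
records = [  # one entry per record in the combined, non-deduplicated export
    {"first_author": "Smith", "year": 2015, "title": "Statins and stroke risk", "database": "Embase"},
    {"first_author": "Smith", "year": 2015, "title": "Statins and stroke risk", "database": "MEDLINE"},
    {"first_author": "Jones", "year": 2014, "title": "Diet in older adults", "database": "Google Scholar"},
]

included = [{"first_author": "Smith", "year": 2015, "title_fragment": "statins"}]

for ref in included:
    sources = {r["database"] for r in records
               if r["first_author"] == ref["first_author"]
               and r["year"] == ref["year"]
               and ref["title_fragment"].lower() in r["title"].lower()}
    print(ref["first_author"], ref["year"], "->", sources or "retrieved by another search method")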

Data analysis

We determined the databases that contributed most to the reviews by the number of unique references retrieved by each database used in the reviews. Unique references were included articles that had been found by only one database search. Those databases that contributed the most unique included references were then considered candidate databases to determine the most optimal combination of databases in the further analyses.

In Excel, we calculated the performance of each individual database and various combinations. Performance was measured using recall, precision, and number needed to read. See Table  1 for definitions of these measures. These values were calculated both for all reviews combined and per individual review.

Performance of a search can be expressed in different ways. Depending on the goal of the search, different measures may be optimized. In the case of a clinical question, precision is most important, as a practicing clinician does not have a lot of time to read through many articles in a clinical setting. When searching for a systematic review, recall is the most important aspect, as the researcher does not want to miss any relevant references. As our research is performed on systematic reviews, the main performance measure is recall.

We identified all included references that were uniquely identified by a single database. For the databases that retrieved the most unique included references, we calculated the number of references retrieved (after deduplication) and the number of included references that had been retrieved by all possible combinations of these databases, in total and per review. For all individual reviews, we determined the median recall, the minimum recall, and the percentage of reviews for which each single database or combination retrieved 100% recall.
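
Since the definitions referred to in Table 1 follow the standard formulas (recall = included references retrieved divided by all included references; precision = included references retrieved divided by all records retrieved; number needed to read = 1/precision), the calculation can be sketched in a few lines of Python. The record identifiers and sets below are illustrative, not data from the study.

# Compute recall, precision and number needed to read for database combinations.
def performance(retrieved_ids, included_ids):
    hits = retrieved_ids & included_ids
    recall = len(hits) / len(included_ids)
    precision = len(hits) / len(retrieved_ids)
    nnr = (1 / precision) if precision else float("inf")
    return recall, precision, nnr

embase = {"r1", "r2", "r3", "r4", "r5"}
medline = {"r2", "r3", "r6"}
included = {"r1", "r3", "r6"}

for name, ids in {"Embase": embase, "Embase + MEDLINE": embase | medline}.items():
    recall, precision, nnr = performance(ids, included)
    print(f"{name}: recall={recall:.0%}, precision={precision:.0%}, NNR={nnr:.1f}")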

For each review that we investigated, we determined what the recall was for all possible different database combinations of the most important databases. Based on these, we determined the percentage of reviews where that database combination had achieved 100% recall, more than 95%, more than 90%, and more than 80%. Based on the number of results per database both before and after deduplication as recorded at the time of searching, we calculated the ratio between the total number of results and the number of results for each database and combination.

Improvement of precision was calculated as the ratio between the original precision from the searches in all databases and the precision for each database and combination.

To compare our practice of database usage in systematic reviews against current practice as evidenced in the literature, we analyzed a set of 200 recent systematic reviews from PubMed. On 5 January 2017, we searched PubMed for articles with the phrase “systematic review” in the title. Starting with the most recent articles, we determined the databases searched either from the abstract or from the full text until we had data for 200 reviews. For the individual databases and combinations that were used in those reviews, we multiplied the frequency of occurrence in that set of 200 with the probability that the database or combination would lead to an acceptable recall (which we defined at 95%) that we had measured in our own data.

Our earlier research had resulted in 206 systematic reviews published between 2014 and July 2016, in which the first author was affiliated with Erasmus MC [ 21 ]. In 73 of these, the searches and results had been documented by the first author of this article at the time of the last search. Of those, 15 could not be included in this research, since they had not searched all databases we investigated here. Therefore, for this research, a total of 58 systematic reviews were analyzed. The references to these reviews can be found in Additional file 1 . An overview of the broad topical categories covered in these reviews is given in Table  2 . Many of the reviews were initiated by members of the departments of surgery and epidemiology. The reviews covered a wide variety of diseases, none of which was present in more than 12% of the reviews. The interventions were mostly from the chemicals and drugs category, or surgical procedures. Over a third of the reviews were therapeutic, while slightly under a quarter answered an etiological question. Most reviews did not limit to certain study designs; 9% limited to RCTs only, and another 9% limited to other study types.

Together, these reviews included a total of 1830 references. Of these, 84 references (4.6%) had not been retrieved by our database searches and were not included in our analysis, leaving in total 1746 references. In our analyses, we combined the results from MEDLINE in Ovid and PubMed (the subset as supplied by publisher) into one database labeled MEDLINE.

Unique references per database

A total of 292 (17%) references were found by only one database. Table  3 displays the number of unique results retrieved for each single database. Embase retrieved the most unique included references, followed by MEDLINE, Web of Science, and Google Scholar. Cochrane CENTRAL is absent from the table, as for the five reviews limited to randomized trials, it did not add any unique included references. Subject-specific databases such as CINAHL, PsycINFO, and SportDiscus only retrieved additional included references when the topic of the review was directly related to their special content, respectively nursing, psychiatry, and sports medicine.

Overall performance

The four databases that had retrieved the most unique references (Embase, MEDLINE, Web of Science, and Google Scholar) were investigated individually and in all possible combinations (see Table  4 ). Of the individual databases, Embase had the highest overall recall (85.9%). Of the combinations of two databases, Embase and MEDLINE had the best results (92.8%). Embase and MEDLINE combined with either Google Scholar or Web of Science scored similarly well on overall recall (95.9%). However, the combination with Google Scholar had a higher precision and higher median recall, a higher minimum recall, and a higher proportion of reviews that retrieved all included references. Using both Web of Science and Google Scholar in addition to MEDLINE and Embase increased the overall recall to 98.3%. The higher recall from adding extra databases came at a cost in number needed to read (NNR). Searching only Embase produced an NNR of 57 on average, whereas, for the optimal combination of four databases, the NNR was 73.

Probability of appropriate recall

We calculated the recall for individual databases and databases in all possible combination for all reviews included in the research. Figure  1 shows the percentages of reviews where a certain database combination led to a certain recall. For example, in 48% of all systematic reviews, the combination of Embase and MEDLINE (with or without Cochrane CENTRAL; Cochrane CENTRAL did not add unique relevant references) reaches a recall of at least 95%. In 72% of studied systematic reviews, the combination of Embase, MEDLINE, Web of Science, and Google Scholar retrieved all included references. In the top bar, we present the results of the complete database searches relative to the total number of included references. This shows that many database searches missed relevant references.

Percentage of systematic reviews for which a certain database combination reached a certain recall. The X -axis represents the percentage of reviews for which a specific combination of databases, as shown on the y -axis, reached a certain recall (represented with bar colors). Abbreviations: EM Embase, ML MEDLINE, WoS Web of Science, GS Google Scholar. Asterisk indicates that the recall of all databases has been calculated over all included references. The recall of the database combinations was calculated over all included references retrieved by any database

Differences between domains of reviews

We analyzed whether the added value of Web of Science and Google Scholar was dependent of the domain of the review. For 55 reviews, we determined the domain. See Fig.  2 for the comparison of the recall of Embase, MEDLINE, and Cochrane CENTRAL per review for all identified domains. For all but one domain, the traditional combination of Embase, MEDLINE, and Cochrane CENTRAL did not retrieve enough included references. For four out of five systematic reviews that limited to randomized controlled trials (RCTs) only, the traditional combination retrieved 100% of all included references. However, for one review of this domain, the recall was 82%. Of the 11 references included in this review, one was found only in Google Scholar and one only in Web of Science.

Percentage of systematic reviews of a certain domain for which the combination Embase, MEDLINE and Cochrane CENTRAL reached a certain recall

Reduction in number of results

We calculated the ratio between the number of results found when searching all databases, including databases not included in our analyses, such as Scopus, PsycINFO, and CINAHL, and the number of results found searching a selection of databases. See Fig.  3 for the legend of the plots in Figs.  4 and 5 . Figure  4 shows the distribution of this value for individual reviews. The database combinations with the highest recall did not reduce the total number of results by large margins. Moreover, in combinations where the number of results was greatly reduced, the recall of included references was lower.

Legend of Figs. 4 and 5

The ratio between number of results per database combination and the total number of results for all databases

The ratio between precision per database combination and the total precision for all databases

Improvement of precision

To determine how searching multiple databases affected precision, we calculated for each combination the ratio between the original precision, observed when all databases were searched, and the precision calculated for different database combinations. Figure  5 shows the improvement of precision for 15 databases and database combinations. Because precision is defined as the number of relevant references divided by the number of total results, we see a strong correlation with the total number of results.

Status of current practice of database selection

From a set of 200 recent SRs identified via PubMed, we analyzed the databases that had been searched. Almost all reviews (97%) reported a search in MEDLINE. Other databases that we identified as essential for good recall were searched much less frequently; Embase was searched in 61% and Web of Science in 35%, and Google Scholar was only used in 10% of all reviews. For all individual databases or combinations of the four important databases from our research (MEDLINE, Embase, Web of Science, and Google Scholar), we multiplied the frequency of occurrence of that combination in the random set, with the probability we found in our research that this combination would lead to an acceptable recall of 95%. The calculation is shown in Table  5 . For example, around a third of the reviews (37%) relied on the combination of MEDLINE and Embase. Based on our findings, this combination achieves acceptable recall about half the time (47%). This implies that 17% of the reviews in the PubMed sample would have achieved an acceptable recall of 95%. The sum of all these values is the total probability of acceptable recall in the random sample. Based on these calculations, we estimate that the probability that this random set of reviews retrieved more than 95% of all possible included references was 40%. Using similar calculations, also shown in Table  5 , we estimated the probability that 100% of relevant references were retrieved is 23%.
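
The weighted calculation behind this estimate can be sketched in a few lines of Python. Only the MEDLINE plus Embase figures are quoted explicitly in the text above; the remaining rows would be filled in from Table 5, so the placeholder entry below is illustrative.

# Weighted probability of acceptable (>= 95%) recall across the 200-review sample.
usage_and_probability = {
    "MEDLINE + Embase": (0.37, 0.47),  # 37% of sampled reviews, 47% chance of >= 95% recall
    # "MEDLINE only": (share, probability),  # remaining rows taken from Table 5
}

overall = sum(share * prob for share, prob in usage_and_probability.values())
print(f"Estimated probability of >= 95% recall: {overall:.0%}")  # about 17% from this row alone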

Our study shows that, to reach maximum recall, searches in systematic reviews ought to include a combination of databases. To ensure adequate performance in searches (i.e., recall, precision, and number needed to read), we find that literature searches for a systematic review should, at minimum, be performed in the combination of the following four databases: Embase, MEDLINE (including Epub ahead of print), Web of Science Core Collection, and Google Scholar. Using that combination, 93% of the systematic reviews in our study obtained levels of recall that could be considered acceptable (> 95%). Unique results from specialized databases that closely match systematic review topics, such as PsycINFO for reviews in the fields of behavioral sciences and mental health or CINAHL for reviews on the topics of nursing or allied health, indicate that specialized databases should be used additionally when appropriate.

We find that Embase is critical for acceptable recall and should always be searched for medically oriented systematic reviews. However, Embase is only accessible via a paid subscription, which puts it out of reach of many review teams not affiliated with academic medical centers. The highest scoring database combination without Embase is MEDLINE, Web of Science, and Google Scholar, but that combination reaches satisfactory recall for only 39% of the investigated systematic reviews, while still requiring a paid subscription to Web of Science. Of the five reviews that included only RCTs, four reached 100% recall when the combination of MEDLINE, Web of Science, and Google Scholar was complemented with Cochrane CENTRAL.

The Cochrane Handbook recommends searching MEDLINE, Cochrane CENTRAL, and Embase for systematic reviews of RCTs. For the reviews in our study that included only RCTs, this recommendation was indeed sufficient for four of the five (80%). The one review where it was insufficient was about alternative medicine, specifically meditation and relaxation therapy, where one of the missed studies was published in the Indian Journal of Positive Psychology. The other missed study, from the Journal of Advanced Nursing, is indexed in MEDLINE and Embase but was only retrieved because of the addition of KeyWords Plus in Web of Science. We estimate that more than 50% of reviews that include study types beyond RCTs would miss more than 5% of included references if only the traditional combination of MEDLINE, Embase, and Cochrane CENTRAL were searched.

We are aware that the Cochrane Handbook [ 7 ] recommends more than only these databases, but further recommendations focus on regional and specialized databases. Though we occasionally used the regional databases LILACS and SciELO in our reviews, they did not provide unique references in our study. Subject-specific databases like PsycINFO only added unique references to a small percentage of systematic reviews when they had been used for the search. The third key database we identified in this research, Web of Science, is only mentioned as a citation index in the Cochrane Handbook, not as a bibliographic database. To our surprise, Cochrane CENTRAL did not identify any unique included studies that had not been retrieved by the other databases, not even for the five reviews focusing entirely on RCTs. If Erasmus MC authors had conducted more reviews that included only RCTs, Cochrane CENTRAL might have added more unique references.

MEDLINE did find unique references that had not been found in Embase, although our searches in Embase included all MEDLINE records. This is likely caused by differences in the thesaurus terms added in each database, but further analysis would be required to determine why these MEDLINE records were not found in Embase. Although Embase covers MEDLINE, it apparently does not index every article from MEDLINE: thirty-seven references were found in MEDLINE (Ovid) but were not available in Embase.com. These are mostly unique PubMed references, which are not assigned MeSH terms and are often freely available via PubMed Central.

Google Scholar adds relevant articles not found in the other databases, possibly because it indexes the full text of all articles. It therefore finds articles in which the topic of research is not mentioned in title, abstract, or thesaurus terms, but where the concepts are only discussed in the full text. Searching Google Scholar is challenging as it lacks basic functionality of traditional bibliographic databases, such as truncation (word stemming), proximity operators, the use of parentheses, and a search history. Additionally, search strategies are limited to a maximum of 256 characters, which means that creating a thorough search strategy can be laborious.

Whether Embase and Web of Science can be replaced by Scopus remains uncertain, as we have not yet gathered enough data to make a full comparison between Embase and Scopus. Scopus was searched in 23 of the reviews included in this research. In 12 of those reviews (52%), Scopus retrieved 100% of the included references retrieved by Embase or Web of Science; in the other 48%, the recall of Scopus was suboptimal, on one occasion as low as 38%.

Of the reviews in which we searched CINAHL and PsycINFO, unique references were found for 6% and 9% of reviews, respectively, and in one case for each database those unique references were relevant (included) references. In both of those reviews, the topic was closely related to the focus of the database. Although we did not use these special topic databases in all of our reviews, given the low number of reviews in which they added relevant references, and given the special topics of those reviews, we suggest that these subject databases will only add value if the review topic is related to the focus of the database.

Many articles on this topic have calculated the overall recall across a set of reviews combined, instead of the effect on each individual review. Researchers planning a systematic review generally perform one review, and they need to estimate the probability that their search will miss relevant articles. When looking at overall recall, the combination of Embase and MEDLINE with either Google Scholar or Web of Science could be regarded as sufficient, with 96% recall. That number, however, does not answer the question of a researcher performing a systematic review, namely which databases should be searched: a researcher wants to estimate the chance that his or her current project will miss a relevant reference. When looking at individual reviews, the probability of missing more than 5% of included references found through database searching is 33% when Google Scholar is used together with Embase and MEDLINE, and 30% for the Web of Science, Embase, and MEDLINE combination. What counts as acceptable recall for systematic review searches is open to debate and can differ between individuals and groups. Some reviewers might accept a potential loss of 5% of relevant references; others will want to pursue 100% recall, whatever the cost. Using the results of this research, review teams can decide which databases to include in their searches based on their idea of acceptable recall and the probability of reaching it.

Strengths and limitations

We did not investigate whether the loss of certain references had resulted in changes to the conclusions of the reviews. Of course, losing a minor non-randomized included study whose findings follow the systematic review’s conclusions would not be as problematic as losing a major included randomized controlled trial with contradictory results. However, the wide range of scope, topic, and criteria across systematic reviews and related review types makes it very hard to answer this question.

We found that two databases previously not recommended as essential for systematic review searching, Web of Science and Google Scholar, were key to improving recall in the reviews we investigated. Because this is a novel finding, we cannot conclude whether it is due to our dataset or to a generalizable principle. It is likely that topical differences in systematic reviews may impact whether databases such as Web of Science and Google Scholar add value to the review. One explanation for our finding may be that if the research question is very specific, the topic of research might not always be mentioned in the title and/or abstract. In that case, Google Scholar might add value by searching the full text of articles. If the research question is more interdisciplinary, a broader science database such as Web of Science is likely to add value. The topics of the reviews studied here may simply have fallen into those categories, though the diversity of the included reviews may point to a more universal applicability.

Although we searched PubMed as supplied by publisher separately from MEDLINE in Ovid, we combined the included references of these databases into one measurement in our analysis. Until 2016, the most complete MEDLINE selection in Ovid still lacked the electronic publications that were already available in PubMed; these could be retrieved by searching PubMed with the subset as supplied by publisher. Since the introduction of the more complete Ovid MEDLINE collection (Epub Ahead of Print, In-Process & Other Non-Indexed Citations, and Ovid MEDLINE®), the need to separately search PubMed as supplied by publisher has disappeared. According to our data, PubMed’s “as supplied by publisher” subset retrieved 12 unique included references, making it the most important addition, in terms of relevant references, to the four major databases. It is therefore important to search MEDLINE including the “Epub Ahead of Print, In-Process, and Other Non-Indexed Citations” references.

These results may not be generalizable to other studies for other reasons. The skills and experience of the searcher are among the most important factors in the effectiveness of systematic review search strategies [ 23 , 24 , 25 ]. The searcher for all 58 systematic reviews in this study was an experienced biomedical information specialist. We suspect that searchers who are not information specialists or librarians are more likely to construct less sensitive searches with lower recall, and even highly trained searchers differ in their approaches to searching. For this study, we searched to achieve as high a recall as possible, though our search strategies, like any other, still missed some relevant references because relevant terms had not been used in the search. We are not implying that a combined search of the four recommended databases will never miss relevant references, rather that failing to search any one of these four databases will likely lead to relevant references being missed. Our experience in this study shows that additional efforts, such as hand searching, reference checking, and contacting key players, should be made to retrieve additional potentially eligible studies.

Based on our calculations from the random set of systematic reviews in PubMed, we estimate that 60% of these reviews are likely to have missed more than 5% of relevant references solely because of the combination of databases used, and that is under the generous assumption that the searches in those databases were designed sensitively enough. Even when taking into account that many searchers regard Scopus as a replacement for Embase, and allowing for the large overlap between Scopus and Web of Science, this estimate remains similar. Moreover, while the assumptions we made about Scopus and Web of Science may hold for coverage, they are likely very different for recall, as Scopus does not allow full use of a thesaurus. We see that reviewers rarely use Web of Science and especially Google Scholar in their searches, though these databases retrieved a great number of unique references in our reviews. Systematic review searchers should consider using these databases if they are available to them; if their institution lacks access, they should ask other institutions to cooperate on their systematic review searches.

The major strength of our paper is that it is the first large-scale study we know of to assess database performance for systematic reviews using prospectively collected data. Prior research on database importance for systematic reviews has looked primarily at whether included references could have theoretically been found in a certain database, but most have been unable to ascertain whether the researchers actually found the articles in those databases [ 10 , 12 , 16 , 17 , 26 ]. Whether a reference is available in a database is important, but whether the article can be found in a precise search with reasonable recall is not only impacted by the database’s coverage. Our experience has shown us that it is also impacted by the ability of the searcher, the accuracy of indexing of the database, and the complexity of terminology in a particular field. Because these studies based on retrospective analysis of database coverage do not account for the searchers’ abilities, the actual findings from the searches performed, and the indexing for particular articles, their conclusions lack immediate translatability into practice. This research goes beyond retrospectively assessed coverage to investigate real search performance in databases. Many of the articles reporting on previous research concluded that one database was able to retrieve most included references. Halladay et al. [ 10 ] and van Enst et al. [ 16 ] concluded that databases other than MEDLINE/PubMed did not change the outcomes of the review, while Rice et al. [ 17 ] found the added value of other databases only for newer, non-indexed references. In addition, Michaleff et al. [ 26 ] found that Cochrane CENTRAL included 95% of all RCTs included in the reviews investigated. Our conclusion that Web of Science and Google Scholar are needed for completeness has not been shared by previous research. Most of the previous studies did not include these two databases in their research.

We recommend that, regardless of their topic, searches for biomedical systematic reviews should combine, at minimum, Embase, MEDLINE (including electronic publications ahead of print), Web of Science (Core Collection), and Google Scholar (the first 200 relevant references). Special topic databases such as CINAHL and PsycINFO should be added if the topic of the review directly touches the primary focus of a specialized subject database, such as CINAHL for nursing and allied health or PsycINFO for behavioral sciences and mental health. For reviews where RCTs are the desired study design, Cochrane CENTRAL may be similarly useful. Ignoring one or more of the four key databases we identified will result in more precise searches with a lower number of results, but researchers should decide whether that is worth the increased probability of losing relevant references. This study also highlights, once more, that searching databases alone is not enough to retrieve all relevant references.

Future research should continue to investigate the recall of actual searches beyond the coverage of databases and should focus on optimal database combinations rather than single databases.

Levay P, Raynor M, Tuvey D. The contributions of MEDLINE, other bibliographic databases and various search techniques to NICE public health guidance. Evid Based Libr Inf Pract. 2015;10:50–68.


Stevinson C, Lawlor DA. Searching multiple databases for systematic reviews: added value or diminishing returns? Complement Ther Med. 2004;12:228–32.


Lawrence DW. What is lost when searching only one literature database for articles relevant to injury prevention and safety promotion? Inj Prev. 2008;14:401–4.

Lemeshow AR, Blum RE, Berlin JA, Stoto MA, Colditz GA. Searching one or two databases was insufficient for meta-analysis of observational studies. J Clin Epidemiol. 2005;58:867–73.


Zheng MH, Zhang X, Ye Q, Chen YP. Searching additional databases except PubMed are necessary for a systematic review. Stroke. 2008;39:e139. author reply e140

Beyer FR, Wright K. Can we prioritise which databases to search? A case study using a systematic review of frozen shoulder management. Health Inf Libr J. 2013;30:49–58.

Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. London: The Cochrane Collaboration; 2011.

Wright K, Golder S, Lewis-Light K. What value is the CINAHL database when searching for systematic reviews of qualitative studies? Syst Rev. 2015;4:104.


Wilkins T, Gillies RA, Davies K. EMBASE versus MEDLINE for family medicine searches: can MEDLINE searches find the forest or a tree? Can Fam Physician. 2005;51:848–9.


Halladay CW, Trikalinos TA, Schmid IT, Schmid CH, Dahabreh IJ. Using data sources beyond PubMed has a modest impact on the results of systematic reviews of therapeutic interventions. J Clin Epidemiol. 2015;68:1076–84.

Ahmadi M, Ershad-Sarabi R, Jamshidiorak R, Bahaodini K. Comparison of bibliographic databases in retrieving information on telemedicine. J Kerman Univ Med Sci. 2014;21:343–54.


Lorenzetti DL, Topfer L-A, Dennett L, Clement F. Value of databases other than MEDLINE for rapid health technology assessments. Int J Technol Assess Health Care. 2014;30:173–8.

Beckles Z, Glover S, Ashe J, Stockton S, Boynton J, Lai R, Alderson P. Searching CINAHL did not add value to clinical questions posed in NICE guidelines. J Clin Epidemiol. 2013;66:1051–7.

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. The contribution of databases to the results of systematic reviews: a cross-sectional study. BMC Med Res Methodol. 2016;16:1–13.

Aagaard T, Lund H, Juhl C. Optimizing literature search in systematic reviews—are MEDLINE, EMBASE and CENTRAL enough for identifying effect studies within the area of musculoskeletal disorders? BMC Med Res Methodol. 2016;16:161.

van Enst WA, Scholten RJ, Whiting P, Zwinderman AH, Hooft L. Meta-epidemiologic analysis indicates that MEDLINE searches are sufficient for diagnostic test accuracy systematic reviews. J Clin Epidemiol. 2014;67:1192–9.

Rice DB, Kloda LA, Levis B, Qi B, Kingsland E, Thombs BD. Are MEDLINE searches sufficient for systematic reviews and meta-analyses of the diagnostic accuracy of depression screening tools? A review of meta-analyses. J Psychosom Res. 2016;87:7–13.

Bramer WM, Giustini D, Kramer BM, Anderson PF. The comparative recall of Google Scholar versus PubMed in identical searches for biomedical systematic reviews: a review of searches used in systematic reviews. Syst Rev. 2013;2:115.

Bramer WM, Giustini D, Kramer BMR. Comparing the coverage, recall, and precision of searches for 120 systematic reviews in Embase, MEDLINE, and Google Scholar: a prospective study. Syst Rev. 2016;5:39.

Ross-White A, Godfrey C. Is there an optimum number needed to retrieve to justify inclusion of a database in a systematic review search? Health Inf Libr J. 2017;33:217–24.

Bramer WM, Rethlefsen ML, Mast F, Kleijnen J. A pragmatic evaluation of a new method for librarian-mediated literature searches for systematic reviews. Res Synth Methods. 2017. doi: 10.1002/jrsm.1279 .

Bramer WM, de Jonge GB, Rethlefsen ML, Mast F, Kleijnen J. A systematic approach to searching: how to perform high quality literature searches more efficiently. J Med Libr Assoc. 2018.

Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol. 2015;68:617–26.

McGowan J, Sampson M. Systematic reviews need systematic searchers. J Med Libr Assoc. 2005;93:74–80.


McKibbon KA, Haynes RB, Dilks CJW, Ramsden MF, Ryan NC, Baker L, Flemming T, Fitzgerald D. How good are clinical MEDLINE searches? A comparative study of clinical end-user and librarian searches. Comput Biomed Res. 1990;23:583–93.

Michaleff ZA, Costa LO, Moseley AM, Maher CG, Elkins MR, Herbert RD, Sherrington C. CENTRAL, PEDro, PubMed, and EMBASE are the most comprehensive databases indexing randomized controlled trials of physical therapy interventions. Phys Ther. 2011;91:190–7.


Acknowledgements

Not applicable

Funding

Melissa Rethlefsen receives funding in part from the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR001067. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Availability of data and materials

The datasets generated and/or analyzed during the current study are available from the corresponding author on a reasonable request.

Author information

Authors and affiliations.

Medical Library, Erasmus MC, Erasmus University Medical Centre Rotterdam, 3000 CS, Rotterdam, the Netherlands

Wichor M. Bramer

Spencer S. Eccles Health Sciences Library, University of Utah, Salt Lake City, Utah, USA

Melissa L. Rethlefsen

Kleijnen Systematic Reviews Ltd., York, UK

Jos Kleijnen

School for Public Health and Primary Care (CAPHRI), Maastricht University, Maastricht, the Netherlands

Department of Epidemiology, Erasmus MC, Erasmus University Medical Centre Rotterdam, Rotterdam, the Netherlands

Oscar H. Franco


Contributions

WB, JK, and OF designed the study. WB designed the searches used in this study and gathered the data. WB and ML analyzed the data. WB drafted the first manuscript, which was revised critically by the other authors. All authors have approved the final manuscript.

Corresponding author

Correspondence to Wichor M. Bramer.

Ethics declarations

Competing interests

WB has received a travel allowance from Embase for giving a presentation at a conference. The other authors declare that they have no competing interests.


Additional files

Additional file 1:.

Reviews included in the research . References to the systematic reviews published by Erasmus MC authors that were included in the research. (DOCX 19 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


Cite this article

Bramer, W.M., Rethlefsen, M.L., Kleijnen, J. et al. Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study. Syst Rev 6 , 245 (2017). https://doi.org/10.1186/s13643-017-0644-y


Received : 21 August 2017

Accepted : 24 November 2017

Published : 06 December 2017

DOI : https://doi.org/10.1186/s13643-017-0644-y


Keywords

  • Databases, bibliographic
  • Review literature as topic
  • Sensitivity and specificity
  • Information storage and retrieval



Encyclopedia of Evidence in Pharmaceutical Public Health and Health Services Research in Pharmacy, pp 1–15

Methodological Approaches to Literature Review

  • Dennis Thomas 2 ,
  • Elida Zairina 3 &
  • Johnson George 4  
  • Living reference work entry
  • First Online: 09 May 2023


The literature review can serve various functions in the contexts of education and research. It aids in identifying knowledge gaps, informing research methodology, and developing a theoretical framework during the planning stages of a research study or project, as well as reporting of review findings in the context of the existing literature. This chapter discusses the methodological approaches to conducting a literature review and offers an overview of different types of reviews. There are various types of reviews, including narrative reviews, scoping reviews, and systematic reviews with reporting strategies such as meta-analysis and meta-synthesis. Review authors should consider the scope of the literature review when selecting a type and method. Being focused is essential for a successful review; however, this must be balanced against the relevance of the review to a broad audience.

Keywords

  • Literature review
  • Systematic review
  • Meta-analysis
  • Scoping review
  • Research methodology


Akobeng AK. Principles of evidence based medicine. Arch Dis Child. 2005;90(8):837–40.


Alharbi A, Stevenson M. Refining Boolean queries to identify relevant studies for systematic review updates. J Am Med Inform Assoc. 2020;27(11):1658–66.


Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.


Aromataris E, Munn Z, editors. JBI manual for evidence synthesis. JBI; 2020.


Aromataris E, Pearson A. The systematic review: an overview. Am J Nurs. 2014;114(3):53–8.


Aromataris E, Riitano D. Constructing a search strategy and searching for evidence. A guide to the literature search for a systematic review. Am J Nurs. 2014;114(5):49–56.

Babineau J. Product review: covidence (systematic review software). J Canad Health Libr Assoc Canada. 2014;35(2):68–71.

Baker JD. The purpose, process, and methods of writing a literature review. AORN J. 2016;103(3):265–9.

Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7(9):e1000326.

Bramer WM, Rethlefsen ML, Kleijnen J, Franco OH. Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study. Syst Rev. 2017;6(1):1–12.

Brown D. A review of the PubMed PICO tool: using evidence-based practice in health education. Health Promot Pract. 2020;21(4):496–8.

Cargo M, Harris J, Pantoja T, et al. Cochrane qualitative and implementation methods group guidance series – paper 4: methods for assessing evidence on intervention implementation. J Clin Epidemiol. 2018;97:59–69.

Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med. 1997;126(5):376–80.


Counsell C. Formulating questions and locating primary studies for inclusion in systematic reviews. Ann Intern Med. 1997;127(5):380–7.

Cummings SR, Browner WS, Hulley SB. Conceiving the research question and developing the study plan. In: Cummings SR, Browner WS, Hulley SB, editors. Designing Clinical Research: An Epidemiological Approach. 4th ed. Philadelphia (PA): P Lippincott Williams & Wilkins; 2007. p. 14–22.

Eriksen MB, Frandsen TF. The impact of patient, intervention, comparison, outcome (PICO) as a search strategy tool on literature search quality: a systematic review. JMLA. 2018;106(4):420.

Ferrari R. Writing narrative style literature reviews. Medical Writing. 2015;24(4):230–5.

Flemming K, Booth A, Hannes K, Cargo M, Noyes J. Cochrane qualitative and implementation methods group guidance series – paper 6: reporting guidelines for qualitative, implementation, and process evaluation evidence syntheses. J Clin Epidemiol. 2018;97:79–85.

Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Inf Libr J. 2009;26(2):91–108.

Green BN, Johnson CD, Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. J Chiropr Med. 2006;5(3):101–17.

Gregory AT, Denniss AR. An introduction to writing narrative and systematic reviews; tasks, tips and traps for aspiring authors. Heart Lung Circ. 2018;27(7):893–8.

Harden A, Thomas J, Cargo M, et al. Cochrane qualitative and implementation methods group guidance series – paper 5: methods for integrating qualitative and implementation evidence within intervention effectiveness reviews. J Clin Epidemiol. 2018;97:70–8.

Harris JL, Booth A, Cargo M, et al. Cochrane qualitative and implementation methods group guidance series – paper 2: methods for question formulation, searching, and protocol development for qualitative evidence synthesis. J Clin Epidemiol. 2018;97:39–48.

Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane handbook for systematic reviews of interventions, version 6.3 (updated February 2022). Cochrane; 2022. Available from: www.training.cochrane.org/handbook

International prospective register of systematic reviews (PROSPERO). Available from https://www.crd.york.ac.uk/prospero/ .

Khan KS, Kunz R, Kleijnen J, Antes G. Five steps to conducting a systematic review. J R Soc Med. 2003;96(3):118–21.

Landhuis E. Scientific literature: information overload. Nature. 2016;535(7612):457–8.

Lockwood C, Porritt K, Munn Z, Rittenmeyer L, Salmond S, Bjerrum M, Loveday H, Carrier J, Stannard D. Chapter 2: Systematic reviews of qualitative evidence. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI; 2020. Available from https://synthesismanual.jbi.global . https://doi.org/10.46658/JBIMES-20-03 .


Lorenzetti DL, Topfer L-A, Dennett L, Clement F. Value of databases other than medline for rapid health technology assessments. Int J Technol Assess Health Care. 2014;30(2):173–8.

Moher D, Liberati A, Tetzlaff J, Altman DG, the PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9.

Mulrow CD. Systematic reviews: rationale for systematic reviews. BMJ. 1994;309(6954):597–9.

Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18(1):143.

Munthe-Kaas HM, Glenton C, Booth A, Noyes J, Lewin S. Systematic mapping of existing tools to appraise methodological strengths and limitations of qualitative research: first stage in the development of the CAMELOT tool. BMC Med Res Methodol. 2019;19(1):1–13.

Murphy CM. Writing an effective review article. J Med Toxicol. 2012;8(2):89–90.

NHMRC. Guidelines for guidelines: assessing risk of bias. Available at https://nhmrc.gov.au/guidelinesforguidelines/develop/assessing-risk-bias . Last published 29 August 2019. Accessed 29 Aug 2022.

Noyes J, Booth A, Cargo M, et al. Cochrane qualitative and implementation methods group guidance series – paper 1: introduction. J Clin Epidemiol. 2018b;97:35–8.

Noyes J, Booth A, Flemming K, et al. Cochrane qualitative and implementation methods group guidance series – paper 3: methods for assessing methodological limitations, data extraction and synthesis, and confidence in synthesized qualitative findings. J Clin Epidemiol. 2018a;97:49–58.

Noyes J, Booth A, Moore G, Flemming K, Tunçalp Ö, Shakibazadeh E. Synthesising quantitative and qualitative evidence to inform guidelines on complex interventions: clarifying the purposes, designs and outlining some methods. BMJ Glob Health. 2019;4(Suppl 1):e000893.

Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Healthcare. 2015;13(3):141–6.

Polanin JR, Pigott TD, Espelage DL, Grotpeter JK. Best practice guidelines for abstract screening large-evidence systematic reviews and meta-analyses. Res Synth Methods. 2019;10(3):330–42.


Shea BJ, Grimshaw JM, Wells GA, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7(1):1–7.

Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Brit Med J. 2017;358

Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. Br Med J. 2016;355

Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. JAMA. 2000;283(15):2008–12.

Tawfik GM, Dila KAS, Mohamed MYF, et al. A step by step guide for conducting a systematic review and meta-analysis with simulation data. Trop Med Health. 2019;47(1):1–9.

The Critical Appraisal Program. Critical appraisal skills program. Available at https://casp-uk.net/ . 2022. Accessed 29 Aug 2022.

The University of Melbourne. Writing a literature review in Research Techniques 2022. Available at https://students.unimelb.edu.au/academic-skills/explore-our-resources/research-techniques/reviewing-the-literature . Accessed 29 Aug 2022.

The Writing Center University of Winconsin-Madison. Learn how to write a literature review in The Writer’s Handbook – Academic Professional Writing. 2022. Available at https://writing.wisc.edu/handbook/assignments/reviewofliterature/ . Accessed 29 Aug 2022.

Thompson SG, Sharp SJ. Explaining heterogeneity in meta-analysis: a comparison of methods. Stat Med. 1999;18(20):2693–708.

Tricco AC, Lillie E, Zarin W, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16(1):15.

Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

Yoneoka D, Henmi M. Clinical heterogeneity in random-effect meta-analysis: between-study boundary estimate problem. Stat Med. 2019;38(21):4131–45.

Yuan Y, Hunt RH. Systematic reviews: the good, the bad, and the ugly. Am J Gastroenterol. 2009;104(5):1086–92.


Author information

Authors and affiliations.

Centre of Excellence in Treatable Traits, College of Health, Medicine and Wellbeing, University of Newcastle, Hunter Medical Research Institute Asthma and Breathing Programme, Newcastle, NSW, Australia

Dennis Thomas

Department of Pharmacy Practice, Faculty of Pharmacy, Universitas Airlangga, Surabaya, Indonesia

Elida Zairina

Centre for Medicine Use and Safety, Monash Institute of Pharmaceutical Sciences, Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, Parkville, VIC, Australia

Johnson George


Corresponding author

Correspondence to Johnson George.

Section Editor information

College of Pharmacy, Qatar University, Doha, Qatar

Derek Charles Stewart

Department of Pharmacy, University of Huddersfield, Huddersfield, United Kingdom

Zaheer-Ud-Din Babar


Copyright information

© 2023 Springer Nature Switzerland AG

Cite this entry

Thomas, D., Zairina, E., George, J. (2023). Methodological Approaches to Literature Review. In: Encyclopedia of Evidence in Pharmaceutical Public Health and Health Services Research in Pharmacy. Springer, Cham. https://doi.org/10.1007/978-3-030-50247-8_57-1


DOI : https://doi.org/10.1007/978-3-030-50247-8_57-1

Received : 22 February 2023

Accepted : 22 February 2023

Published : 09 May 2023

Publisher Name : Springer, Cham

Print ISBN : 978-3-030-50247-8

Online ISBN : 978-3-030-50247-8



The Literature Review: 3. Methods for Searching the Literature


1. Tasks Involved in a Literature Review

There are two major tasks involved in a literature review:

  • Identifying and selecting literature
  • Writing about the literature

2. Skills Required for Conducting a Literature Search

  • Information seeking skills
  • Ability to use manual and electronic methods to identify useful resources
  • Ability to conduct extensive bibliographic searches
  • Critical appraisal skills
  • Ability to describe, critique, and relate each source to the topic
  • Ability to identify areas of controversy in the literature
  • Organizational skills
  • Ability to organize the literature collected around your topic
  • Ability to present the review logically

3. Searching Techniques

Scan the literature for various types of content, including:

  • theoretical foundations and definitions
  • discussion and debate
  • current issues

Skim potential works to select materials for inclusion

  • decide whether to include or exclude a work from the review

4. Sorting the Literature

For each article identified for possible inclusion in the literature review, you need to:

1. read the abstract

  • decide whether to read the entire article

2. read the introduction

  • explains why the study is important
  • provides a review and evaluation of relevant literature

3. read Methods section critically

  • focus on participants and methodology

4. evaluate results

  • are the conclusions logical?
  • is there evidence of bias?

5. Notetaking

  • Take notes as you read through each paper that you will include in the review
  • Purpose of study - research aims or hypotheses
  • Research design and methodology
  • Data analysis
  • Summary of findings

Part of the task in taking notes is to begin the process of sifting and arranging ideas

6. Questions to Keep in Mind

  • What are the key sources of information on this topic?
  • What are the major issues and debates on this topic?
  • What are the key theories, concepts, and ideas on this topic?
  • What are the main questions and problems that have been addressed so far?
  • What are the strengths and weaknesses of the various arguments on the topic?
  • Who are the significant research personalities in this area?

Literature Reviews


On This Page:

  • Your Search Strategy
  • Search Strategy Resources
  • Search Techniques
  • Documenting & Organizing Your Search
  • References & Further Reading

Since you are situating your research within the larger scholarly conversation rather than summarizing everything that has been written about a topic, here are some questions that can help you decide what to include in your literature review:

  • Authors who wrote about your topic or a similar topic
  • Most frequently cited works/authors on your topic
  • Who identified the research gap that you seek to address?
  • What conflicts are there about your topic?
  • What has been most recently written about your topic?

Strategies for identifying important authors on a topic include:

  • Looking up encyclopedia entries on your topic. Encyclopedias generally reach out to topic experts, inviting them to write the relevant chapter. Thus, both the entry authors and the citations they list are great starting points for your research.
  • Look up your topic in a citation tracking database such as Web of Science and see which articles and authors are cited the most on your topic. Also take a look at who has cited those articles/authors to make sure they are citing them for positive reasons.

Both systematic and semi-systematic literature reviews require you to establish a search methodology before conducting your literature review. You will need to identify:

  • Where you will search (which databases, journals, other resources)
  • How you will search (search terms, search strategies)
  • What you will include/exclude from your results (specific criteria)

You will also want to take detailed notes as you search. See Library Books and Documenting and Organizing for resources to help guide you in your search.

It is integral to spend time honing and defining your research question before searching the literature. Here are a couple of tools used by particular science and social science disciplines to help you define your research question:

  • PICOT  / PICO  (quantitative evidence-based research/synthesis) and
  • SPIDER  (qualitative evidence-based research/synthesis) 

PICO (Quantitative) and SPIDER (Qualitative)

A comparison of the PICO (quantitative) and SPIDER (qualitative) frameworks is given in Cooke, Smith, & Booth (2012). The "T" in PICOT, which is left out of that study, represents Time, or the duration of data collection (Riva, Malik, Burnie, Endicott, & Busse, 2012).

Engineering PICO*

P = Population, Problem, Process

I = Intervention, Inquiry, Investigation, Improvement

C = Comparison (current practice or opposing viewpoints)

O = Outcomes (measuring what worked best)

*Read more about it on Arizona State University Library's "Engineering -- Formulating questions w/PICO" guide:  https://libguides.asu.edu/engineering/PICO

Your research topic and type of literature review will help you determine where to look.

For literature reviews within a paper, you will likely at least want to search an important subject database and a citation tracking database.

  • Subject Research Guides can help you identity important subject databases
  • Web of Science and Google Scholar are citation tracking databases

For systematic/semi-systematic literature reviews, you will likely be more comprehensive in your search. In addition to the databases mentioned above, you may want to:

  • Use the Library Search and "Expand Beyond Library" to search everything indexed by UND library databases and additional sources, such as open access materials
  • Search WorldCat or/and Google Books, particularly for humanities disciplines
  • Search government documents or other gray literature resources relevant for your discipline

Dissertations and Theses can also help you with a literature review, as these tend to include thorough literature reviews on a topic. Take a look at their literature review section and citations.

  • CFL Research Guides: identifies research starting points for different subjects.
  • Dissertations and theses database (restricted to UND affiliates: students, faculty, and staff): dissertations from 1861 to present and master's theses from 1988 to present, including full text from both UND and external dissertations and theses. Hosted by ProQuest.

Search Strategies

Conventional subject searching in databases.

Subject database searching generally includes developing a search strategy around subject terms, reflecting aspects of the research question. You may want to use Booleans (AND/OR/NOT) and wild card operators (*/!) to help you create a thorough and precise search strategy. Searches are often restricted by language and date, and sometimes geographic region, through the use of database  limiters .

Example research question and search strategy

Research question: Is there a correlation between fast food advertising and childhood obesity?

Prelude to developing a search strategy: How could that correlation be shown? Perhaps the number of ads by fast food companies over time and childhood obesity over time? How can I tell whether those ads target children? Perhaps if the ads include cartoons or toys or character mascots they can be considered to target children; perhaps previous research will help me identify additional methods, as well. What words could be used to describe "fast food," "advertising," "children" and "obesity"?

Initial search strategy: (kid* or child*) AND (market* OR advertis*) AND "fast food" AND (obesity OR weight OR fat)

Updated search strategy after initial search: (kid* or child*) AND (market* OR advertis*) AND ("fast food" OR "quick service")
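A short sketch of how such a strategy can be assembled from concept blocks follows; the concept names and term lists simply restate the example above, and the output string is database-agnostic (individual interfaces differ in their truncation and phrase syntax).

```python
# A minimal sketch: build a Boolean strategy from synonym blocks so individual
# concepts can be revised after test searches without rewriting the whole query.
concepts = {
    "children":    ["kid*", "child*"],
    "advertising": ["market*", "advertis*"],
    "fast food":   ['"fast food"', '"quick service"'],
    "obesity":     ["obesity", "weight", "fat"],
}

def or_block(terms):
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_block(terms) for terms in concepts.values())
print(query)
# (kid* OR child*) AND (market* OR advertis*) AND ("fast food" OR "quick service") AND (obesity OR weight OR fat)
```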

Citation Chaining & Citation Searching (Backward & Forward Snowballing)

These techniques refer to checking reference lists and citing articles (articles that have cited the article that you are currently looking at). Citation chaining involves checking references on all included papers identified by various search methods so that relevant references not yet identified can be added to the pool of included studies. It also includes checking articles that cited an included paper. Many research databases link citing articles to each article record. Databases that are useful for citation searching include Google Scholar, Web of Science, CINAHL Complete, Wiley Online, and others. Access Chester Fritz Library's most used databases by visiting our  home page   and clicking on QuickLinks or the complete list by visiting our  A-Z Databases page .
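The sketch below shows one way to automate part of this chaining using the free OpenAlex API; the work ID is purely illustrative, and the `referenced_works` field and `cites:` filter are used on the assumption that they behave as documented by OpenAlex. Results from any automated chaining step should still be screened like any other search output.

```python
# A sketch of backward (reference list) and forward (citing works) chaining
# against the OpenAlex API; requires the `requests` package and network access.
import requests

work_id = "W2741809807"  # illustrative OpenAlex work ID for a "pearl" article

# Backward chaining: works the pearl cites
work = requests.get(f"https://api.openalex.org/works/{work_id}").json()
backward = work.get("referenced_works", [])

# Forward chaining: works that cite the pearl
resp = requests.get(
    "https://api.openalex.org/works",
    params={"filter": f"cites:{work_id}", "per-page": 25},
).json()
forward = [w["id"] for w in resp.get("results", [])]

print(f"{len(backward)} cited references (backward); "
      f"{len(forward)} citing works on the first page (forward)")
```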

Traditional vs. Comprehensive Pearl Growing

Traditional Pearl Growing (TPG) begins with one or more target articles, judged to be such due to their relevancy to the research topic. The target article is called a pearl. It's a step beyond the citation chaining and searching methods. The researcher then identifies keywords to add to their search from aspects of the article (e.g., abstract, subject terms, author, etc.). Hawkins and Wagers (1982) coined this process as "growing more pearls" (as cited in Schlosser, Wendt, Bhavnani, & Nail‐Chiwetalu, 2006).

Comprehensive Pearl Growing (CPG) involves the following process: (1) start with a compilation of studies from a relevant review or a topical bibliography; (2) determine relevant databases for these studies; (3) determine how these studies are indexed in database 1 in terms of keywords and quality filters; (4) find other relevant articles in database 1 (and in as many further databases as are relevant) using the index terms in a Building Block query; and (5) end when the articles retrieved provide diminishing relevance. Thus, rather than beginning with only one pearl, CPG requires the searcher to begin with a compilation of studies from a relevant narrative review or a topical bibliography. Like TPG, CPG makes use of existing studies to determine the keywords and quality filters under which they are indexed in order to retrieve more articles of the same kind (Schlosser, Wendt, Bhavnani, & Nail‐Chiwetalu, 2006).

Although pearl growing techniques are effective across disciplines, they may be particularly strategic for interdisciplinary research questions in which multiple controlled vocabularies (e.g., thesauri, database subject terms, discipline-specific terminology) are integral to pulling together sources across research databases (Schlosser, Wendt, Bhavnani, & Nail‐Chiwetalu, 2006).

Text-Mining

In Software Engineering, various text-mining (TM) techniques are increasingly used to implement systematic literature review processes; however, further research is needed. See Feng, Chiam, and Lo (2017) for more information.

Document Your Literature Search

Use paper and pen, the spreadsheet template below, or online tools and applications like Trello to set up a system for documenting your search strategy. This contributes to research transparency and gives you a mechanism to provide quick and accurate documentation of your search strategies when pre-registering a systematic review protocol or when being asked how you searched the literature (and what you may have missed) by supervisors, colleagues, or reviewers.

  • Search Strategy Documentation Template Be systematic by documenting your search strategy (keywords, databases, etc.). This helps you to remember what you have done before and provides documentation for research transparency.
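If you prefer to keep the log programmatically, the sketch below appends one row per executed search to a CSV file; the column names are illustrative rather than a required standard, and a spreadsheet template serves the same purpose.

```python
# A minimal search-log sketch: one row per search, recorded exactly as executed.
import csv
from datetime import date
from pathlib import Path

LOG = Path("search_log.csv")
FIELDS = ["date", "database", "interface", "search_string", "limits", "results"]

def log_search(database, interface, search_string, limits, results):
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "database": database,
            "interface": interface,
            "search_string": search_string,
            "limits": limits,
            "results": results,
        })

log_search("MEDLINE", "Ovid",
           '(kid* or child*) AND (market* OR advertis*) AND "fast food"',
           "English language; 2000-current", 1023)
```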

For systematic reviews or meta-analyses, use the PRISMA or MOOSE checklists to evaluate each included resource for inclusion.

  • PRISMA Checklist Transparent Reporting of Systematic Reviews and Meta-analyses.
  • MOOSE Guidelines for Meta-Analyses and Systematic Reviews of Observational Studies* *Modified from Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA 2000;283:2008–12.

Organize Your Literature

  • Citation Managers A research guide providing information and resources for a handful of the most popular citation managers.
  • EndNote @ UND Page of the Citation Managers research guide discussing UND's EndNote subscription and use.
  • Mendeley Mendeley (Elsevier) is a free reference manager and an academic social network. Manage your research, showcase your work, connect and collaborate.
  • Zotero Zotero is a free, easy-to-use citation management tool to help you collect, organize, cite, and share research.

Where Will You Look?

The literature you gather greatly depends upon the sources that you look in. Studies appearing in peer-reviewed journals are easy to locate but will likely over-represent significant and novel results, while certain types of grey literature (e.g., dissertations and theses; self-published manuscripts; unpublished studies; conference abstracts, presentations, and proceedings; regulatory data; unpublished trial data; government publications; reports such as white papers, working papers, and internal documentation; patents; and policies & procedures) may be more difficult to find and access in full text--for example, you may need to contact authors or organizations directly. It is good practice to use listserv and distribution lists for this type of material along with direct personal contacts, keeping in mind that the latter may bias the results towards those in support of a particular contact's central beliefs and research results (Cooper, 2010).

This means that limiting your search to journals indexed in databases may skew results toward statistically significant findings, biasing your pool of studies, which would then be short on null or inconclusive results. You can also search for grey literature in institutional repositories like UND Scholarly Commons, government/professional organizations and conference websites, Open Data Repositories, open preprint repositories, theses and dissertation databases, online Researcher Communities, and journals that publish Registered Reports or null and inconclusive findings like PLOS ONE.

Author's Versions & Grey Literature Database Examples:


  • Open Science Framework Search: search OSF projects and data files. OSF is a free and open source project management repository that supports researchers across their entire project lifecycle.


  • McGill Library

Systematic Reviews, Scoping Reviews, and other Knowledge Syntheses


Documenting and reporting the search methods

Important information to document for the search


The PRISMA-S Group has developed an extension to PRISMA to assist researchers in documenting their literature searches for systematic reviews and other knowledge syntheses. The checklist identifies what to document in terms of information sources and methods, search strategies, peer review, and records management.

This checklist (table 1) as well as explanations and elaborations are provided in the following article:

Rethlefsen ML, Kirtley S, Waffenschmidt S, Ayala AP, Moher D, Page MJ, et al. PRISMA-S: An extension to the PRISMA Statement for reporting literature searches in systematic reviews. Syst Rev. 2021;10(1):39.  https://dx.doi.org/10.1186/s13643-020-01542-z


The PRISMA Flow Diagram

The  PRISMA flow diagram  is used to illustrate the flow of records through the search and screening process. 

Record the information for each database and document the number of records before and after removing duplicates. For example: total number of records before removing duplicates: 1023 + 2046 = 3069; total number of records after removing duplicates: 2056. Also record the original searches, copied and pasted exactly as executed, for each source (e.g., Ovid MEDLINE ALL (R)).
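A minimal sketch of this bookkeeping follows; the records, database names, and the DOI/title deduplication rule are hypothetical, and dedicated reference managers or deduplication tools are normally used for this step.

```python
# Count records per source, remove duplicates (naively, by DOI or normalized title),
# and report the before/after totals needed for the PRISMA flow diagram.
records = {
    "Ovid MEDLINE ALL": [
        {"doi": "10.1000/a1", "title": "Study One"},
        {"doi": "10.1000/a2", "title": "Study Two"},
    ],
    "Embase.com": [
        {"doi": "10.1000/a1", "title": "Study One"},   # duplicate of a MEDLINE record
        {"doi": None,         "title": "Study Three"},
    ],
}

before = sum(len(recs) for recs in records.values())
seen, unique = set(), []
for source, recs in records.items():
    for rec in recs:
        key = rec["doi"] or rec["title"].casefold()
        if key not in seen:
            seen.add(key)
            unique.append(rec)

print(f"Records before removing duplicates: {before}")
print(f"Records after removing duplicates:  {len(unique)}")
print(f"Duplicates removed:                 {before - len(unique)}")
```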

Addressing outdated language

Creating comprehensive searches on certain topics might require you to search antiquated, non-standard, exclusionary, and potentially offensive terms in order to capture older literature. Colleagues at the University of Michigan provide suggested processes and wording for addressing this reality.

Addressing antiquated, non-standard, exclusionary, and potentially offensive terms in evidence syntheses and systematic searches. Townsend, Whitney; Anderson, Patricia; Capellari, Emily; Haines, Kate; Hansen, Sam; James, LaTeesa; MacEachern, Mark; Rana, Gurpreet; Saylor, Kate. 2022-09-09. https://dx.doi.org/10.7302/6408



Harvey Cushing/John Hay Whitney Medical Library


YSN Doctoral Programs: Steps in Conducting a Literature Review


What is a literature review?

A literature review is an integrated analysis, not just a summary, of scholarly writings and other relevant evidence related directly to your research question. That is, it represents a synthesis of the evidence that provides background information on your topic and shows an association between the evidence and your research question.

A literature review may be a stand-alone work or the introduction to a larger research paper, depending on the assignment. Rely heavily on the guidelines your instructor has given you.

Why is it important?

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Discovers relationships between research studies/ideas.
  • Identifies major themes, concepts, and researchers on a topic.
  • Identifies critical gaps and points of disagreement.
  • Discusses further research questions that logically come out of the previous studies.


1. Choose a topic. Define your research question.

Your literature review should be guided by your central research question.  The literature represents background and research developments related to a specific research question, interpreted and analyzed by you in a synthesized way.

  • Make sure your research question is not too broad or too narrow.  Is it manageable?
  • Begin writing down terms that are related to your question. These will be useful for searches later.
  • If you have the opportunity, discuss your topic with your professor and your classmates.

2. Decide on the scope of your review

How many studies do you need to look at? How comprehensive should it be? How many years should it cover? 

  • This may depend on your assignment.  How many sources does the assignment require?

3. Select the databases you will use to conduct your searches.

Make a list of the databases you will search. 

Where to find databases:

  • use the tabs on this guide
  • Find other databases in the Nursing Information Resources web page
  • More on the Medical Library web page
  • ... and more on the Yale University Library web page

4. Conduct your searches to find the evidence. Keep track of your searches.

  • Use the key words in your question, as well as synonyms for those words, as terms in your search (see the sketch after this list). Use the database tutorials for help.
  • Save the searches in the databases. This saves time when you want to redo, or modify, the searches. It is also helpful to use them as a guide if the searches are not finding any useful results.
  • Review the abstracts of research studies carefully. This will save you time.
  • Use the bibliographies and references of research studies you find to locate others.
  • Check with your professor, or a subject expert in the field, if you are missing any key works in the field.
  • Ask your librarian for help at any time.
  • Use a citation manager, such as EndNote as the repository for your citations. See the EndNote tutorials for help.
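
The advice above about keywords and synonyms can be made concrete with a small sketch. The concepts and terms below are hypothetical examples, and exact field codes and wildcard characters vary by database, so treat this only as a starting point; it combines synonyms with OR within each concept and joins the concepts with AND.

```python
# Minimal sketch: build a Boolean search string from keywords and their synonyms.
# The concepts and terms are hypothetical; adapt them to your own research question.

concepts = {
    "smoking cessation": ["smoking cessation", "quit smoking", "tobacco cessation"],
    "adolescents": ["adolescent*", "teenager*", "young people"],
}

def build_query(concepts):
    """OR the synonyms within each concept, then AND the concept blocks together."""
    blocks = []
    for synonyms in concepts.values():
        # Quote multi-word phrases so they are searched as phrases.
        terms = [f'"{term}"' if " " in term else term for term in synonyms]
        blocks.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(blocks)

print(build_query(concepts))
# ("smoking cessation" OR "quit smoking" OR "tobacco cessation") AND (adolescent* OR teenager* OR "young people")
```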

Review the literature

Some questions to help you analyze the research:

  • What was the research question of the study you are reviewing? What were the authors trying to discover?
  • Was the research funded by a source that could influence the findings?
  • What were the research methodologies? Analyze the study's literature review, the samples and variables used, the results, and the conclusions.
  • Does the research seem to be complete? Could it have been conducted more soundly? What further questions does it raise?
  • If there are conflicting studies, why do you think that is?
  • How are the authors viewed in the field? Has this study been cited? If so, how has it been analyzed?

Tips: 

  • Review the abstracts carefully.  
  • Keep careful notes so that you may track your thought processes during the research process.
  • Create a matrix of the studies for easy analysis and synthesis across all of the studies (see the sketch below).
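
The matrix mentioned in the last tip can live in any spreadsheet. As a minimal sketch, the Python snippet below writes a study matrix to a CSV file; the column names and the single example row are hypothetical and should be adapted to your own review.

```python
# Minimal sketch: write a literature review matrix (one row per study) to CSV.
import csv

columns = ["Citation", "Research question", "Design", "Sample", "Key findings", "Limitations", "Notes"]
rows = [
    # Hypothetical example row; replace with your own studies.
    ["Author (2020)", "Does X improve Y?", "RCT", "120 adults, single site", "Modest improvement in Y", "Short follow-up", "Compare with Author (2021)"],
]

with open("literature_matrix.csv", "w", newline="") as handle:
    writer = csv.writer(handle)
    writer.writerow(columns)
    writer.writerows(rows)
```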

Duke University Libraries

Literature Reviews

What is a literature review?


Definition: A literature review is a systematic examination and synthesis of existing scholarly research on a specific topic or subject.

Purpose: It serves to provide a comprehensive overview of the current state of knowledge within a particular field.

Analysis: Involves critically evaluating and summarizing key findings, methodologies, and debates found in academic literature.

Identifying Gaps: Aims to pinpoint areas where there is a lack of research or unresolved questions, highlighting opportunities for further investigation.

Contextualization: Enables researchers to understand how their work fits into the broader academic conversation and contributes to the existing body of knowledge.

tl;dr  A literature review critically examines and synthesizes existing scholarly research and publications on a specific topic to provide a comprehensive understanding of the current state of knowledge in the field.

What is a literature review NOT?

❌ An annotated bibliography

❌ Original research

❌ A summary

❌ Something to be conducted at the end of your research

❌ An opinion piece

❌ A chronological compilation of studies


Literature Reviews: An Overview for Graduate Students

While this 9-minute video from NCSU is geared toward graduate students, it is useful for anyone conducting a literature review.

Check out these books:

  • Writing the literature review: A practical guide
  • Writing literature reviews: A guide for students of the social and behavioral sciences
  • So, you have to write a literature review: A guided workbook for engineers
  • Telling a research story: Writing a literature review
  • The literature review: Six steps to success
  • Systematic approaches to a successful literature review
  • Doing a systematic review: A student's guide


SMU Libraries


Evidence Syntheses and Systematic Reviews: Overview


What is evidence synthesis?

Evidence Synthesis: a general term used to refer to any method of identifying, selecting, and combining results from multiple studies. Several types of review fall under this term.

General steps for conducting systematic reviews.

The number of steps for conducting evidence synthesis varies a little, depending on the source that one consults. However, the following steps are generally accepted in how systematic reviews are done:

  1. Identify a gap in the literature and form a well-developed and answerable research question, which will form the basis of your search.
  2. Select a framework that will help guide the type of study you're undertaking.
  3. Write a protocol. A protocol is a detailed plan for the project; different guidelines are used for documenting and reporting it before the review is conducted, and after it is written it should be registered with an appropriate registry.
  4. Select databases and grey literature sources. It is advisable to consult a librarian before embarking on this phase of the review process; they can recommend databases and other sources to use and even help design complex searches.
  5. Search databases and other sources. Not all databases use the same search syntax, so when searching multiple databases, adapt the strategy to the syntax of each database (see the sketch after this list). Use a citation management tool to store and organize your citations during the review process; it is a great help when de-duplicating your results.
  6. Screen the results against the inclusion and exclusion criteria you developed earlier to remove articles that are not relevant to your topic.
  7. Assess the quality of the included studies to identify bias in either the design of a study or in its results and conclusions (generally not done outside of systematic reviews).
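
As a rough illustration of the syntax point in step 5, the sketch below records how a single hypothetical concept might be expressed in three common interfaces. The strings are indicative only; always verify the exact syntax against each database's own documentation or with a librarian.

```python
# Minimal sketch: the same concept expressed in different database interfaces.
# The exact syntax below is illustrative and should be checked before use.

concept = "physical activity"

per_database = {
    # Ovid-style interfaces combine exploded subject headings with title/abstract terms.
    "MEDLINE (Ovid)": 'exp Exercise/ OR "physical activity".ti,ab.',
    # PubMed uses square-bracket field tags.
    "PubMed": '"exercise"[mh] OR "physical activity"[tiab]',
    # EBSCO interfaces such as CINAHL use MH, TI and AB field codes.
    "CINAHL (EBSCO)": '(MH "Exercise+") OR TI "physical activity" OR AB "physical activity"',
}

for database, line in per_database.items():
    print(f"{database}: {line}")
```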

Extract and Synthesize

  • Extract the data from the studies that remain after screening
  • Extraction tools are used to get data from individual studies that will be analyzed or summarized. 
  • Synthesize the main findings of your research

Report Findings

Report the results using a statistical approach or in a narrative form.
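
One common statistical approach is an inverse-variance (fixed-effect) meta-analysis. The sketch below uses invented effect estimates and standard errors purely to show the arithmetic; a real synthesis should use an established package and consider heterogeneity and random-effects models.

```python
# Minimal sketch: fixed-effect (inverse-variance) pooling of study effect estimates.
# The effects and standard errors below are made up for illustration only.
import math

studies = [
    {"effect": 0.30, "se": 0.10},
    {"effect": 0.10, "se": 0.15},
    {"effect": 0.25, "se": 0.08},
]

weights = [1 / s["se"] ** 2 for s in studies]  # w_i = 1 / SE_i^2
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```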

Need More Help?

Librarians can:

  • Provide guidance on which methodology best suits your goals
  • Recommend databases and other information sources for searching
  • Design and implement comprehensive and reproducible database-specific search strategies 
  • Recommend software for article screening
  • Assist with the use of citation management
  • Offer best practices on documentation of searches

Related Guides

  • Literature Reviews
  • Choose a Citation Manager
  • Project Management

Steps of a Systematic Review - Video


University of Michigan Library

Literature Reviews

Concept Mapping

Additional Tools

Google Slides

GSlides can create concept maps using their Diagram feature. Insert > Diagram > Hierarchy will give you some editable templates to use.

Tutorial on diagrams in GSlides.

Microsoft Word

MS Word can create concept maps using Insert > SmartArt Graphic. Select Process, Cycle, Hierarchy, or Relationship to see templates.

NVivo

NVivo is software for qualitative analysis that has a concept map feature. Zotero libraries can be uploaded using RIS files.

A concept map or mind map is a visual representation of knowledge that illustrates relationships between concepts or ideas. It is a tool for organizing and representing information in a hierarchical and interconnected manner. At its core, a concept map consists of nodes, which represent individual concepts or ideas, and links, which depict the relationships between these concepts.
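
Because a concept map is just nodes and links, it can also be generated from a script rather than drawn by hand in one of the tools below. The sketch uses hypothetical nodes and links and writes them out as Graphviz DOT text, which any Graphviz-compatible tool can render.

```python
# Minimal sketch: express a concept map (nodes and labelled links) as Graphviz DOT.
# The nodes and links below are hypothetical examples.

nodes = ["research question", "literature review", "databases", "grey literature", "synthesis"]
links = [
    ("research question", "literature review", "guides"),
    ("databases", "literature review", "supplies studies to"),
    ("grey literature", "literature review", "supplements"),
    ("literature review", "synthesis", "feeds into"),
]

lines = ["digraph concept_map {"]
for node in nodes:
    lines.append(f'  "{node}";')
for source, target, label in links:
    lines.append(f'  "{source}" -> "{target}" [label="{label}"];')
lines.append("}")

with open("concept_map.dot", "w") as handle:
    handle.write("\n".join(lines))
```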

Below is a non-exhaustive list of tools that can facilitate the creation of concept maps.

www.canva.com

Canva is a user-friendly graphic design platform that enables individuals to create visual content quickly and easily. It offers a diverse array of customizable templates, design elements, and tools, making it accessible to users with varying levels of design experience. 

Pros: comes with many pre-made concept map templates to get you started

Cons: not all features are available in the free version

Explore Canva concept map templates here.

Note: Although Canva advertises an "education" option, this is for K-12 only and does not apply to university users.

www.lucidchart.com

Lucid has two tools that can create mind maps (what they're called inside Lucid): Lucidchart is the place to build, document, and diagram, and Lucidspark is the place to ideate, connect, and plan.

Lucidchart is a collaborative online diagramming and visualization tool that allows users to create a wide range of diagrams, including flowcharts, org charts, wireframes, and mind maps. Its mind-mapping feature provides a structured framework for brainstorming ideas, organizing thoughts, and visualizing relationships between concepts. 

Lucidspark works as a virtual whiteboard. Here, you can add sticky notes, develop ideas through freehand drawing, and collaborate with your teammates. It has only one template for mind mapping.

Explore Lucid mind map creation here.

How to create mind maps using LucidSpark: 

Note: U-M students have access to Lucid through ITS. Choose the "Login w Google" option, use your @umich.edu account, and access should happen automatically.

www.figma.com

Figma is a cloud-based design tool that enables collaborative interface design and prototyping. It's widely used by UI/UX designers to create, prototype, and iterate on digital designs. Figma is the main design tool, and FigJam is their virtual whiteboard:

Figma is a comprehensive design tool that enables designers to create and prototype high-fidelity designs.

FigJam focuses on collaboration and brainstorming, providing a virtual whiteboard-like experience, best for concept maps

Explore FigJam concept maps here.

Note: There is a "Figma for Education" version for students that will provide access. Choose the "Login w Google" option, use your @umich.edu account, and access should happen automatically.

www.mindmeister.com

MindMeister is an online mind mapping tool that allows users to visually organize their thoughts, ideas, and information in a structured and hierarchical format. It provides a digital canvas where users can create and manipulate nodes representing concepts or topics, and connect them with lines to show relationships and associations.

Features: collaborative, permits multiple co-authors, and multiple export formats. The free version allows up to 3 mind maps.

Explore MindMeister templates here.



Literature searching methods or guidance and their application to public health topics: A narrative review

Andrea Heath

1 Information Services, National Institute for Health and Care Excellence (NICE), London UK

2 Information Services, National Institute for Health and Care Excellence (NICE), Manchester UK

Daniel Tuvey

Information specialists conducting searches for systematic reviews need to consider key questions around which and how many sources to search. This is particularly important for public health topics where evidence may be found in diverse sources.

The objective of this review is to give an overview of recent studies on information retrieval guidance and methods that could be applied to public health evidence and used to guide future searches.

A literature search was performed in core databases and supplemented by browsing health information journals and citation searching. Results were sifted and reviewed.

Seventy‐two papers were found and grouped into themes covering sources and search techniques. Public health topics were poorly covered in this literature.

Many researchers follow the recommendations to search multiple databases. The review topic influences decisions about sources. Additional sources covering grey literature help reduce bias but are time‐consuming and difficult to search systematically. Public health searching is complex, often requiring searches in multidisciplinary sources and using additional methods.

Conclusions

Search planning is advisable to enable decisions about which and how many sources to search. This could improve with more work on modelling search scenarios, particularly in public health topics, to examine where publications were found and guide future research.

Key messages

  • Key questions for information specialists of how many databases and which databases to search cannot be answered with a “one size fits all” approach.
  • Advice from the Cochrane Handbook is to search medline , embase and central as a minimum, and this advice is often but not always followed.
  • Combining database searching with additional techniques including website and grey literature searching reduces bias but is time‐consuming and will not necessarily produce valuable results.
  • Pre‐planning is important, particularly for complex topics, and consideration needs to be given to the topic, type of intervention and type of study required.
  • Sources, information type and volume associated with public health searching mean that planning and taking iterative search steps can be beneficial.

The National Institute for Health and Care Excellence (NICE) was established in 1999 with the aim of providing national guidance and advice to improve health and care in the United Kingdom (UK). NICE public health guidance has been published since 2006, covering key areas of public health such as smoking cessation, obesity and physical activity. NICE methods, originally developed for clinical topics, were adapted to suit the different needs of public health guidance. NICE guideline recommendations are based on a review of the best available evidence. Systematic and reproducible methods are used to create evidence reviews, on which these recommendations are based.

When starting a systematic search on any topic, there are key questions, including “which sources should be searched?” and “how many sources should be searched?” The NICE methods manual advises searchers to include “a mix of databases, websites and other sources” (NICE, 2018 ). However, the number or identity of databases, websites or sources is not specified. The manual goes on to state that the sources will depend on “the subject of the review question and the type of evidence sought.” Suggestions are offered for sources based on the type of question being searched. This paper is interested in systematic reviews and other evidence synthesis that can support public health recommendations.

Handbooks and methods manuals from other organisations supply some answers to the key questions. The Methodological Expectations of Cochrane Intervention Reviews (MECIR) standards suggest that, to identify as many relevant references as possible and to minimise bias in systematic reviews of health interventions, it should be mandatory to search Central Register of Controlled Trials (CENTRAL), embase and medline (Higgins et al., 2020 ). Generally, review methodology handbooks do not itemise exactly which databases they recommend. The SIGN Handbook (Scottish Intercollegiate Guidelines Network (SIGN), 2019 ) lists the core databases plus “Internet sites relevant to the topic” and World Health Organization International Clinical Trials Registry Platform. The Campbell Collaboration explain that the decision about which topic‐specific databases to search in addition to those routinely searched (e.g., medline and embase ) is influenced by “the topic of the review, access to specific databases and budget considerations” (Kugley et al., 2017 ).

The key search questions need to be reassessed in the context of public health, which is a growing discipline in evidence‐based practice that requires a different approach to clinical review questions. The Centre for Reviews and Dissemination guidance (Centre for Reviews & Dissemination, 2009 ) for undertaking systematic reviews points out that public health topics require a wider range of databases to be searched than a clinical review. This is due to the different searching needs of public health topics compared with clinical topics. Public health topics do not necessarily look for evidence of effectiveness only. Interventions may be complex, and therefore, it is advisable to additionally search for evidence on “processes, mechanisms and theory” (Thomas et al., 2019 ). Searching for public health topics involves an understanding of “subject breadth and technical demands of the databases to be searched, the fluidity and lack of standardization of the vocabulary, and the relative scarcity of high‐quality investigations at the appropriate level of geographic specificity” (Alpi, 2005 ).

This narrative review has been undertaken alongside a NICE research project examining which sources identified included publications for public health topics at NICE. It is also a follow up and extension of an earlier paper (Levay et al., 2015 ) that retrospectively assessed the contribution of medline and other key sources to public health evidence reviews at NICE. Levay et al. ( 2015 ) explored two core themes of public health literature searching: the variety of databases needed to cover a multidisciplinary evidence base and the range of search techniques required to find different types of evidence. It confirmed that there is no “one size fits all” solution and recommended pre‐project planning, testing the appropriateness of sources, the value of topic‐specific databases and the efficiency and suitability of non‐database search methods. This review explores if these findings have been confirmed by later literature.

Aim and objectives

The aim was to give an overview of studies published between 2015 and March 2021 on literature searching guidance and methods that could be applied to searching systematically for reviews of public health evidence.

The objectives were to identify:

  • studies describing theories and concepts relating to search methods, sources, systems and techniques;
  • studies assessing the impact of which sources were searched and how many sources were chosen;
  • how the retrieved studies could be applied to reviews of public health evidence;
  • key lessons from the literature to guide searching for public health topics; and
  • key gaps in the evidence on searching for public health topics.

The Faculty of Public Health's definition of public health was used, to be consistent with Levay et al. ( 2015 ). Public health means “promoting and protecting health and well‐being, preventing ill health and prolonging life through the organised efforts of society,” which incorporates three key domains of health improvement, improving services and health protection (Faculty of Public Health, 2016 ).

A literature search was performed in May 2017 and updated in February 2019 in a range of bibliographic databases. Additional abbreviated updates were performed in December 2019 and March 2021. The strategies were developed by an information specialist at NICE and peer reviewed by another information specialist. See Appendix A for details of the search strategy. The databases were searched using a combination of subject headings and free‐text terms in the title and abstract fields to describe search methods, strategies, techniques, approaches and databases. For practical purposes, the search strategies were limited to English language as no resources were available to translate papers in other languages. The searches were limited to “2015 to current,” because the search was intended to be a follow up to Levay et al. ( 2015 ).

The search strategy was developed in the medline bibliographic database (Ovid interface, 1946 to February Week 2 2019) and adapted as appropriate for the following databases:

  • Applied Social Sciences Index & Abstracts (ASSIA)—ProQuest—to present;
  • embase —Ovid—1974 to 2019 week 06;
  • Library & Information Science Abstracts (LISA)—ProQuest—to present;
  • Library, Information Science & Technology Abstracts (LISTA)—EBSCO Host—to present;
  • medline Epub ahead of print—Ovid—12 February 2019; and
  • medline ‐in‐Process—Ovid—13 February 2019.
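
The review itself ran these searches through the subscription interfaces listed above. Purely as an open-access illustration of a date-limited bibliographic search, the sketch below sends a hypothetical query to PubMed via the NCBI E-utilities esearch endpoint; the query string and limits are examples only and do not reproduce the review's strategy.

```python
# Illustrative sketch only: a date-limited PubMed search via NCBI E-utilities esearch.
# The query string is hypothetical and not the strategy used in this review.
import json
import urllib.parse
import urllib.request

query = '("literature searching"[tiab] OR "search strategies"[tiab]) AND "systematic reviews"[tiab]'
params = {
    "db": "pubmed",
    "term": query,
    "datetype": "pdat",   # limit by publication date
    "mindate": "2015",
    "maxdate": "2021",
    "retmax": "20",
    "retmode": "json",
}

url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as response:
    result = json.load(response)

print("Records found:", result["esearchresult"]["count"])
print("First PMIDs:", result["esearchresult"]["idlist"])
```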

The database searches were supplemented in February 2019 by browsing the tables of contents on the websites of the following information science journals:

  • Health Information and Libraries Journal;
  • IFLA Journal;
  • Information Retrieval Journal;
  • Journal of the Canadian Medical Libraries Association;
  • Journal of European Association of Health Information and Libraries (EAHIL);
  • Journal of Information Science; and
  • Journal of the Medical Library Association.

The Web of Science (WoS) Core Collection was also searched to check the references of the key papers (backwards citation searching) and for later papers citing these key papers (forwards citation searching). This incorporated:

  • Science Citation Index Expanded (1990 to present);
  • Social Sciences Citation Index (1990 to present);
  • Arts & Humanities Citation Index (1990 to present); and
  • Emerging Sources Citation Index (2015 to present).
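
Citation searching of this kind can also be illustrated with open data. The sketch below is not the Web of Science process used in the review; it assumes the OpenAlex API, which exposes a work's reference list (backward citation searching) and the works that cite it (forward citation searching), and it uses a placeholder DOI.

```python
# Illustrative sketch: backward and forward citation searching via the OpenAlex API.
# The DOI is a placeholder; replace it with a real one before running.
import json
import urllib.request

doi = "10.1000/example-doi"  # hypothetical placeholder

with urllib.request.urlopen(f"https://api.openalex.org/works/doi:{doi}") as response:
    work = json.load(response)

# Backward citation searching: the works this paper cites.
references = work.get("referenced_works", [])

# Forward citation searching: works that cite this paper (first page of results).
openalex_id = work["id"].rsplit("/", 1)[-1]
with urllib.request.urlopen(f"https://api.openalex.org/works?filter=cites:{openalex_id}") as response:
    citing = json.load(response).get("results", [])

print(f"{len(references)} references; {len(citing)} citing works on the first page")
```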

The abbreviated updates were performed by browsing an in‐house information science current awareness bulletin, papers discussed in a team journal club and an in‐house tool aiming to keep team members up to date. Papers in the current awareness bulletin are found through searches in embase , lisa and lista and browsing the information science journals named above. Papers to be discussed in the team journal club are found by manual scanning of Tables of Contents from a range of journals on public health, information science and research methods. The in‐house tool sources papers in a few ways including monitoring of email lists, social media and library and information conferences. These sources were browsed from the date of the February 2019 update.

The 7569 results from the searches were downloaded to EndNote for initial processing, and after removing duplicates, there were 5587 remaining results in February 2019. One information specialist screened the titles and abstracts of the results and selected abstracts to consider, conferring with two other information specialists to make a final decision about which publications to order as full text. See Appendix B for the screening criteria. In total, 122 publications were obtained at full text, and one information specialist reviewed them. A formal quality assessment of papers was not performed. After screening, 72 of the publications were deemed relevant for inclusion in this review.
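
De-duplication in the review was handled in EndNote. The sketch below is only a generic illustration of that step, collapsing made-up records that share a normalised DOI or, failing that, a normalised title.

```python
# Generic sketch of de-duplicating search results; the records are invented examples.
import re

records = [
    {"title": "Searching for studies: a guide", "doi": "10.1000/abc"},
    {"title": "Searching for Studies: A Guide.", "doi": "10.1000/ABC"},   # duplicate of the first
    {"title": "Grey literature in public health reviews", "doi": ""},
]

def dedup_key(record):
    """Prefer a normalised DOI; fall back to a normalised title."""
    doi = record.get("doi", "").strip().lower()
    if doi:
        return ("doi", doi)
    title = re.sub(r"[^a-z0-9]+", " ", record["title"].lower()).strip()
    return ("title", title)

seen = set()
unique = []
for record in records:
    key = dedup_key(record)
    if key not in seen:
        seen.add(key)
        unique.append(record)

print(f"{len(records)} records, {len(unique)} after de-duplication")
```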

RESULTS AND DISCUSSION

There is a growing literature on the two key questions of “which sources” and “how many.” Two main themes have emerged: papers that focus on issues regarding the contribution of specific databases or sources and papers examining the guidance behind the search techniques required for a search. Searching for public health topics has not been covered as extensively as clinical topics and lessons learnt when searching for clinical topics are not necessarily applicable in other subject areas.

Theme one—Database choice

Database choice—Number of databases

The number of databases searched for systematic reviews and meta‐analysis has increased since the 1990s. An analysis (Lam & McDiarmid, 2016 ) of the number of databases searched in 1994, 2004 and 2014 reports that the mean number of databases searched grew from one in 1994 to four by 2014. It is not surprising therefore that the subject of how many and which databases to search is a key theme.

Database choice—MECIR compliance

The Cochrane Handbook advises that the most important databases to search for reviews of interventions are medline , embase and central , as a minimum, if searching for reports of trials (Higgins et al., 2020 ). This is to reduce the likelihood of bias. In practice, this advice is not always strictly followed. A recent sample of systematic reviews suggested that only 10% of them conducted a comprehensive search using a range of sources (de Kock et al., 2020 ).

Halladay et al. ( 2015 ) examined 50 Cochrane Reviews of therapeutic interventions that searched pubmed and embase and found that the benefit of searching embase was “modest” compared with pubmed . central is not mentioned in the study.

Another paper found that pubmed contained 70.9% of included publications from a selection of Cochrane Reviews from different Cochrane groups, reinforcing that it is important not to rely on one database. However, it also highlighted that 70.9% is the upper limit of what could be found and a poor search strategy may retrieve less (Frandsen, Eriksen, et al., 2019 ). In a related paper by the same authors, it was found that searching embase and pubmed in the same selection of Cochrane reviews increased the coverage of included publications slightly, depending on the Cochrane group (Frandsen et al., 2021 ). Searching both databases still did not retrieve all relevant publications, illustrating that it is important to consider supplementing searches with additional sources.

This point is reflected by the Vassar et al. ( 2017 ) paper examining neurology systematic reviews and meta‐analyses. They reflected that only searching medline and embase could lead to bias in the sample of primary studies used to make recommendations or summaries of effects.

In a paper looking at the optimum search strategy for core outcome sets, Gargon et al. ( 2015 ) discovered that 97% of included studies were indexed in medline . However, the search strategy only found 87% of included records, demonstrating that it cannot be assumed that a search strategy will pick up each relevant result indexed in the database.

Nussbaumer‐Streit et al. ( 2018 ) and Ewald et al. ( 2020 ), in a two‐part project, experimented with the MECIR recommended databases by rerunning 60 Cochrane Review searches with combinations of medline , embase and central . In part one, they found that, with the abbreviated search approaches, in 8% to 27% of the Cochrane Reviews, there would be a change of conclusion, and in 2% to 5%, the opposite conclusion would be reached. In 5% to 12%, it would have been impossible to draw a conclusion. In part two, looking at treatment effect estimates, they found an abbreviated search approach gave identical or similar treatment effect estimates in 47 of the 60 Cochrane Reviews. However, in 6% to 13% of the Cochrane reviews, relevant differences occurred. This highlights that abbreviated searches make a different impact depending on which facet of a systematic review is being considered. Ultimately, to make conclusions with the greatest possible certainty, a comprehensive search should be performed, and the authors recommend this should include specialised databases. They acknowledge that some of the abbreviated literature searches could be an acceptable option for rapid evidence synthesis, if the “decision‐makers are willing to accept less certainty” (p. 1; Nussbaumer‐Streit et al., 2018 ).

These conclusions are reinforced by a later paper published by some of the same authors looking at three case studies of rapid reviews (Affengruber et al., 2020 ). For each of three Cochrane Reviews (two clinical and one public health), an abbreviated literature search was performed, replacing the comprehensive literature search. In this instance, the conclusions of the two clinical topics would have been unchanged if a rapid review abbreviated search had been followed. However, the third case study was a public health topic, and the abbreviated search found substantially less relevant results than the comprehensive search. This would have resulted in the Cochrane Review authors being unable to draw a conclusion anymore. This leads to a reflection that, although a rapid review approach may work for some clinical topics, an abbreviated literature search may not be adequate in public health.

Three agri‐food public health case studies have a similar finding that cross‐cutting topics do not benefit from abbreviated searches. In their case studies, “methodological shortcuts” (searching one database only or only searching bibliographic databases) resulted in relevant results being omitted (Pham et al., 2016 ).

Some papers found that searching the minimum medline , embase and central is not enough to find all relevant studies (Aagaard et al., 2016 ). However, searching additional databases is not necessarily the answer. A conference presentation (Posey et al., 2016 ) analysed 97 systematic reviews of clinical topics and found that an average of four or five databases were searched per review but that 95%–100% of included publications could be found in a combination of three databases (one medical, one general and one topic‐specific database). Searching additional databases increased volume without the reward of additional included publications. Both Aagaard et al. ( 2016 ) and Posey et al. ( 2016 ) recommend that instead of searching additional databases, time would be better spent using additional search methods like reference checking and citation searching.

Database choice—Topic, intervention and study

The choice of databases can depend on the topic of the review (Hartling et al., 2016 ). If the topic is multidisciplinary, multiple databases will need to be searched in order to find studies from each discipline involved (Harari et al., 2020 ). National, regional and subject‐specific databases may be relevant (Whaley et al., 2020 ). The type of intervention and the study type can affect the appropriate databases to choose, as well as the topic (Goossen et al., 2018 ; Wood et al., 2017 ). In the case of economic evaluations, the authors note that the majority of searching is done in medline with only some searches in embase and other databases. The key specialist database for this area was NHS Economic Evaluation Database, but with its demise in 2015, there is a need for “methodologically appropriate strategies” to be used in searching medline and embase (Arber et al., 2018 ).

The importance of a robust search strategy is highlighted when searching for qualitative reviews (Wright et al., 2015 ). The Cumulative Index of Nursing and Allied Health Literature (CINAHL) is a good source of qualitative studies, with Rogers et al. ( 2018 ) noting that this is because it has the best controlled vocabulary for qualitative research. Rogers et al. ( 2018 ) conclude that for qualitative dementia research, if CINAHL and PsycINFO are searched, then medline and embase are not required. Frandsen, Gildberg, et al. ( 2019 ) were looking at “a wide range of topics within health research” and concluded that CINAHL, along with Scopus and ProQuest Dissertations and Theses Global, provided the greatest retrieval for qualitative reviews. This illustrates the importance of considering the type of research being undertaken and the type of evidence being sought when deciding on which databases to search.

Database choice—Currency

It is important to select databases that provide the most current results. Two papers, Duffy et al. ( 2016 ) and Thompson et al. ( 2016 ), compare medline and pubmed and illustrate that the choice of source affected comprehensiveness. This has changed since 2016, and medline ALL now “covers all of the available content and metadata in pubmed with a delay of one day” (Lefebvre et al., 2019 ). Although the databases now provide equivalent content, it is still important that searchers check the currency of the databases they search.

Database choice—Guidance versus practice

There has been some work comparing guidance to practice in database choice. Cooper, Booth, et al. ( 2018 ) performed a literature review to see if there is a consensus between guidance documents (e.g., Cochrane Handbook and NICE manual) and published studies on literature searching methods. They found that there was not a consensus on an approved number of databases to search, leading them to state that “researchers should be focused on which databases were searched and why, and which databases were not searched and why.” This means that databases should be searched if they have a demonstrable value to the review, rather than because there is an optimal number to use.

Wood et al. ( 2017 ) compared resources searched in practice with recommendations from NICE and SuRe Info on how to conduct economic evaluations. As mentioned above, they found that although most systematic reviews conformed with NICE and SuRe recommendations to search medline , only some searched embase , and little searching was done on specialist economics databases. Reviews that do not follow recommended practice are at risk of publication bias and missing relevant studies.

Database choice—Metrics

Some papers have approached the subject of database choice in an empirical way by using metrics to assess which databases provide the most value and how much work needs to be done to find relevant studies. Demonstrating value is key to Ross‐White and Godfrey ( 2017 ), where they suggest using the number‐needed‐to‐retrieve to decide on the value of a database. This method was found to be a valuable way to measure how much effort is needed to retrieve an included publication.

Cooper, Lovell, et al. ( 2018 ) and Cooper, Varley‐Campbell, et al. ( 2018 ) found that “Capture‐recapture” is a useful method for planning work at the beginning of a review because it can be used to estimate the potential number of studies likely to be found.
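
Both metrics are simple to compute, as the sketch below shows with invented numbers: number-needed-to-retrieve is the number of records screened per included study, and capture-recapture, in its simplest two-source (Lincoln-Petersen) form, estimates the total number of relevant studies from the overlap between two searches.

```python
# Illustrative sketch with made-up numbers for the two metrics discussed above.

# Number-needed-to-retrieve: records screened per included study for a given source.
screened = 1200
included = 15
print(f"Number needed to retrieve: {screened / included:.0f}")

# Capture-recapture (two-source Lincoln-Petersen estimate) of the total number of
# relevant studies, based on how much two independent searches overlap.
found_by_search_1 = 40
found_by_search_2 = 35
found_by_both = 25
estimated_total = (found_by_search_1 * found_by_search_2) / found_by_both
print(f"Estimated total relevant studies: {estimated_total:.0f}")
```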

Theme two—Supplementary search sources

Supplementary search sources—Search techniques

The second theme identified in the results was the role of supplementary search techniques, including the addition of grey literature and website searching. Booth ( 2016b ) has suggested that for qualitative research, these techniques should be the focus, rather than searching a large number of bibliographic databases. Delaney and Tamas ( 2018 ) also question reliance on databases as the principal source of evidence, finding fault with database indexing, particularly for cross‐cutting topics, and the potential bias caused by only looking at the published studies found in databases. They suggest researchers consider alternative sources and think critically about their information retrieval options. Echoing this, Boulos et al. ( 2021 ) find that a range of supplementary sources need to be searched in combination with databases to find all relevant prognostic factor studies.

Cooper, Lovell, et al. ( 2018 ) and Cooper, Varley‐Campbell, et al. ( 2018 ) provide an example of a review where supplementary searching was useful compared with bibliographic databases. They found, in a combined environmental and public health review, that the databases contributed only two minimally useful included publications out of around 21,000 results. However, the supplementary search methods retrieved just 453 references for screening, and this produced nine studies to include, of which four made unique contributions to the quantitative and qualitative synthesis.

Although investigating an approach for a systematic search for epidemiologic publications, Waffenschmidt et al. ( 2017 ) found that around 14% of publications could only be found by handsearching meeting websites and regional journals that would not have been indexed in bibliographic databases.

Supplementary search sources—Value of grey literature and unpublished data

Grey literature has been defined and redefined over time, but the general consensus is that it can include publication types such as theses, government documents, research and project reports that have not been published by a mainstream publisher (Farace & Schöpfel, 2010 ). It can also include unpublished data, such as clinical trials in ongoing research registries. These publication types can be found in some specialised bibliographic databases, but coverage can be sporadic, and they should not be relied on as the main source of grey literature. Although being unpublished can be what makes searchers reluctant to include this data, it should be included if it meets the objectives of a specific systematic review (Whaley et al., 2020 ).

Grey literature and unpublished data help to avoid publication bias, because searching sources that only cover published results may just return more of the same evidence. By contrast, grey literature searches and searches of clinical trial registries may reduce bias by retrieving evidence from a more diverse range of sources (Pradhan et al., 2018 ). This could influence the conclusions and consequently health care decisions (Halfpenny et al., 2016 ). In some cases, not including the results of unpublished trials could mean that the effects of treatments are overestimated (Bagg et al., 2020 ). Despite this, clinical trials registries are not a commonly searched source. Gray et al. ( 2019 ) found that for surgery reviews, registries were used in 79.2% of Cochrane Reviews compared with 6.4% of reviews in high‐impact journals, even though they contained at least one additional relevant study.

Similar to database choice, the decision to search grey literature and unpublished studies may be guided by the type of research being undertaken. Farrah and Mierzwinski‐Urban ( 2019 ) illustrate this when looking specifically at non‐drug health technologies, finding that in horizon scanning reports for new and emerging technologies, almost half of the studies cited were grey literature and that clinicaltrials.gov was one of the most frequently cited sources.

Grey literature may also be a useful source in fields where it is challenging to generate evidence because the population is vulnerable or it is difficult for services to access or engage with them. Enticott et al. ( 2018 ) found that useful research had been done by agencies who had access to refugees and asylum seekers but that this work is generally disseminated via grey literature instead of in journals. A search focussed on peer‐reviewed literature would probably miss this valuable source of evidence.

Similarly, for environmental evidence reviews, Konno and Pullin ( 2020 ) warn of “unrepresentative samples of studies and biased estimates of true effects” in reviews that do not search multiple platforms or supplementary sources.

Coleman et al. ( 2020 ) discuss this in relation to searching for evidence on programme theories, a challenging area to search with evidence found in sources including websites, blogs and newspaper articles. They conclude that the optimal search for their programme theory topic involves databases like medline , embase and cinahl combined with searches of Google and Google Scholar.

Supplementary search sources—Challenges of grey literature and unpublished data

Being systematic and reproducible are key tenets of evidence‐based medicine reviews, and it can be difficult to uphold these principles in grey literature searches. For example, the nature of grey literature means that it is generally web based, not necessarily indexed in a standard way, and does not usually have a standard vocabulary. Even the titles can be misleading (Godin et al., 2015 ). Hanneke and Young ( 2017 ) comment on the lack of detailed information often given about grey literature in search histories. Some reviews will state that grey literature has been searched without providing detail of which database, website or search engine has been used, which means it is difficult to judge how systematic they have been or to reproduce their searches. Unlike records found in a peer reviewed source, records found in a grey literature source may also lack information on publication characteristics like date and contributor (Godin et al., 2015 ).

Even in a discipline like public health, where evaluation of interventions may not be reported in journal articles, the challenges of grey literature searching are noted (Adams et al., 2016 ). After looking at three public health case studies, the authors reflect on the importance of search methods, search efficiency and the challenges introduced by grey literature searching like replicability of searches and time needed to perform them.

The additional time required for grey literature searching may not be rewarded by finding relevant results (Halfpenny et al., 2016 ). Hartling et al. ( 2017 ) explored how often systematic reviews search for unpublished literature, dissertations and non‐English reports in two clinical and one psychosocial reviews. Their results show that, although most in their sample search for it, few included publications are found, and those that are retrieved do not usually have any impact on the review findings. This conclusion is echoed by Schmucker et al. ( 2017 ) and Wilson et al. ( 2017 ) who both found that searching for unpublished data made a difference in only a small number of reviews and did not necessarily change the conclusions or strength of evidence of recommendations.

Supplementary search sources—Website searching

Website searching may also be considered as a supplementary technique. However, it can be difficult to distinguish it from grey literature searching. On one hand, the aim of a web search is often to find grey literature (Briscoe et al., 2020 ), and it may be the route to finding unpublished reports, government papers and trials from ongoing registries. However, website search results also cover published literature like journal articles. The two can therefore be considered separately.

The growth of the web in recent years means that a far greater amount of literature can be found than in the past (Briscoe, 2016 ). Google and Google Scholar are the most popular search engines. Typically, web searches are cut down versions of bibliographic database searches (Briscoe, Nunns, et al., 2020 ).

Like grey literature searching, website searching can present the challenges of poor website functionality, not knowing which to search and the possibility of not finding unique relevant results (Stansfield et al., 2016 ).

Website searching can also introduce bias to a search, if not planned effectively. Curkovic and Kosec ( 2018 ) warn of the risk of a “bubble effect” in which the algorithms used in some commercial search engines to personalise results mean that many internet search results are biased and can affect the validity of reviews. Even geographical location can affect both the results and ranking of results in Google searches (Cooper et al., 2021 ) and Google Scholar searches (Pozsgai et al., 2020 ). Similarly, using language restrictions can introduce bias, particularly in topics where the evidence base is mainly in a non‐English setting (Chaabna et al., 2020 ).

Reproducibility of searches, a key feature of systematic searching, can also be challenging because of the transient and changing nature of the internet and website domains. Curkovic and Kosec ( 2018 ) suggest that internet searches should therefore be used only as a supplementary source for scoping out systematic review strategies.

Google Scholar is a free web search engine that can be used to find academic literature, and the subject of its suitability for this task is discussed in recent papers. Halevi et al. ( 2017 ) focus specifically on whether Google Scholar is suitable as a source of scientific information. They acknowledge that it is “essentially an enormous web crawler” and therefore has deeper coverage than WoS and Scopus. Bramer et al. ( 2017 ) explain that this is likely to be because it indexes the full text of articles, meaning that it can find studies where the context of the subject is described in the full text but not the abstract or subject terms. There is agreement, however, that Google Scholar should only be used as a supplement to other sources. This is based on “lack of quality assurance and lack of transparency about the resources it covers” and search shortcomings like a 256 character limit for search terms (Halevi et al., 2017 ; Harari et al., 2020 ).
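
The practical effect of such limits can be shown with a small sketch: a hypothetical full strategy is cut down to the one or two strongest terms per concept and checked against the 256-character limit noted above. The terms, and the assumption that adjacent blocks are treated as an implicit AND, are illustrative only.

```python
# Illustrative sketch: derive a cut-down web query from a fuller strategy and check
# it against Google Scholar's reported 256-character limit. Terms are hypothetical.

full_strategy = {
    "obesity": ["obesity", "overweight", "body mass index", "weight gain"],
    "prevention": ["prevention", "health promotion", "intervention"],
    "policy": ["policy", "legislation", "regulation"],
}

CHARACTER_LIMIT = 256

def cut_down(strategy, terms_per_concept=2):
    blocks = []
    for synonyms in strategy.values():
        kept = [f'"{t}"' if " " in t else t for t in synonyms[:terms_per_concept]]
        blocks.append("(" + " OR ".join(kept) + ")")
    return " ".join(blocks)  # adjacent blocks are treated as an implicit AND

query = cut_down(full_strategy)
print(query)
print(f"{len(query)} characters; within the limit: {len(query) <= CHARACTER_LIMIT}")
```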

These issues are acceptable if the researcher is only interested in a specific result but not for reproducible searches. Gusenbauer and Haddaway ( 2019 ) criticise Google Scholar as being “user friendly at any cost” and explain its popularity with users as being due to convenience and lack of awareness of its shortcomings. They go on to say in a later paper (Gusenbauer & Haddaway, 2021 ) that researchers should know what can and cannot be done depending on the functional capabilities of any individual search system. They cite Google Scholar as an example of “how a system can be perfectly suited for one type of search, while failing miserably for another” (p. 3). This conveys the message that it is important to understand the strengths and limitations of each resource when planning which resources to search.

There are also practical issues to consider in using websites efficiently. Levay et al. ( 2016 ) compared Google Scholar with WoS for citation searching for public health reviews. They recommend using WoS instead of Google Scholar, based on the reliability of WoS, and the ease of searching it and downloading results. Despite Google Scholar being free to use and WoS requiring a subscription, the time spent on finding and downloading results made WoS more cost effective. This could be why a paper examining the use of citation searching in Cochrane Reviews found that Google Scholar was the least popular source (Briscoe et al., 2020 ).

Supplementary search techniques—Citation searching

Supplementary search techniques and their benefits are discussed in recent papers. Their importance in finding additional relevant results leads Booth ( 2016a ) to recommend that reference checking (i.e., backwards citation searching) should be standard practice rather than regarded as a supplementary technique.

Citation searching is particularly useful in scenarios where core concepts are hard to find using keywords (Briscoe, Bethel, et al., 2020 ) or where topics are broad or ill defined (Rogers et al., 2020 ). An example of its value is illustrated by Bethel et al. ( 2021 ) when they describe included references from medline and embase that were missed in database searches but retrieved by citation searching. In this case, it “reaffirmed the purpose of supplementary searching” and illustrates a place in finding results not only unavailable in bibliographic databases but also missed by these searches.

The Cochrane Handbook lists searching reference lists as mandatory (Lefebvre et al., 2019 ), recommending that “review authors should use included studies and any relevant systematic reviews when conducting backward citation searching” (Briscoe, Nunns, et al., 2020 , p. 171). Goossen et al. ( 2018 ) support this idea in their study of systematic reviews in surgery, in which screening citation lists in WoS and searching citation lists of related reviews contributed substantially. However, Rogers et al. ( 2020 ) introduce a caveat in their study looking at citation searching for implementation studies for dementia care. Although both Scopus and WoS found relevant studies missed by database searches, the paper noted that being able to locate a record in Scopus or WoS did not mean it would be retrieved by the search strategy. This echoes the same finding in papers discussing bibliographic databases.

There are additional benefits to citation searching as part of a systematic search. Levay et al. ( 2016 ) examined public health literature searching and found that “citation searches can be developed in a series of focussed steps that avoid unnecessary amounts of results.” Unlike web searches and grey literature, a systematic approach can be maintained and recorded to aid reproducibility, as long as the required information is retained at the time of searching. Citation searching also has the potential to reveal parallel topics of interest to the research, which may not be identified by a traditional keyword search (Hinde & Spackman, 2015 ). Citation searching and other supplementary techniques may identify potentially relevant studies, but their value can be affected by the effort involved, and it is unclear if it will lead to additional studies if it is done after extensive database searches (Wright et al., 2015 ).

Supplementary search techniques—Guidance versus practice

As with database choice, there has been some work comparing guidance that has been set out in handbooks versus practice in supplementary searching. Cooper et al. ( 2017 ) identified five search methods from the methodology handbooks (contacting study authors or experts, citation chasing, handsearching, trial register searching and web searching) and examined how these were applied in practice. They found that, although studies do generally follow recommended best practice, further research is needed to help understand how and when to use supplementary search strategies.

Alternative approaches to traditional searching

There are also papers that focus on alternative models of searching for topics that do not fit easily into the Patient population, Intervention, Comparison and Outcome (PICO) structure. They are useful for subjects with complex interventions or concepts where either appropriate indexing terms are not available or they cannot be expressed in a series of well‐defined subject headings. It is important to select a search approach that is appropriate to the type of review being done, the type of evidence required and the subject area. Savolainen's theoretical paper explains “exploratory search,” discussing two frameworks: the berrypicking model and information foraging theory. Both frameworks involve exploratory browsing and focussed searching (Savolainen, 2018 ). Searching behaviour needs to be “open‐ended, dynamic and multi‐faceted” in these approaches, meaning that both frameworks provide a “different but complementary” image of the exploratory search process as a combination of focused searching and exploratory browsing.

A few papers have approached complex topic areas by taking an iterative or “stepped” approach to searching. For this approach, “searching is done in several stages, with each search taking into account the evidence that has already been retrieved” (NICE, 2018 ). Public health is an example of a subject area where search questions are often highly complex and do not necessarily lend themselves to methods that work for clinical reviews, meaning that alternative search methods may be particularly useful (Mathes et al., 2017 ).

Enticott et al. ( 2018 ) used this approach for their systematic search for grey literature on refugees and asylum seekers. They started by looking at the included studies from an initial search of academic literature and advice from experts to inform an initial grey literature search. This was followed by a targeted search for grey literature from 20 countries that resettle refugees, supplemented with further Google and Google Scholar searches. This targeted and stepped approach led to the discovery of almost double the “eligible results” of the initial grey literature search.

Palliative care is another example of a challenging topic area that has concepts and terms that are heterogeneous, poorly indexed and non‐standardised. Zwakman et al. ( 2018 ) test an approach to this challenge, describing “PALETTE,” an iterative search method that involves doing an initial literature search to develop an understanding of the topic, gaining expert opinion and doing other exploratory work. The search strategy is built using “golden bullets” (key studies), which are analysed to mine key indexing terms and free text to use in the primary literature search in key databases, which is followed by citation tracking.

The search should be appropriate to the type of review being conducted, and this may require alternative approaches. Booth et al. ( 2019 ) give a framework for conducting literature searches for realist reviews. Realist reviews offer a theory‐driven method to evidence synthesis and “explore how a complex intervention works, for whom and under what circumstances” (p. 2). A realist literature search needs to be iterative and may include a scoping search, using grey literature sources and supplementary search methods. These realist approaches are interesting for public health reviews because they also consider complex interventions and how they might be applied across large populations.

Application to public health evidence

Coverage in public health versus clinical topics

The aim of this narrative review was to give an overview of recent studies on information retrieval guidance and methods that could be applied to public health evidence. The largest proportion of results examined clinical topics with fewer focussing on public health topics. This could be because evidence synthesis is more likely to be done on clinical topics. A survey on characteristics of Health Technology Assessment (HTA) in five countries found that, although over 80% were on drugs, devices, surgery or other clinical programmes, only 5% were on public health (Lavis et al., 2010 ). If most HTA topics are clinical, it is not surprising that the majority of studies on information retrieval are also on clinical subjects.

Public health research synthesis is complex and challenging compared with clinical research synthesis for a variety of reasons. Although clinical medicine can prioritise randomised controlled trials (RCTs) to provide evidence, the nature of public health interventions means that it may not be possible or ethical to conduct an RCT. Public health reviews may therefore have to rely on other study designs, such as observational studies (Frieden, 2017 ; Mathes et al., 2017 ). Evidence reviews in public health may need to search for a wider range of study types and be less able to use search filters to narrow the results. Managing this additional volume is a key task to consider in planning an evidence review for a public health topic.

Additionally, outcomes for public health interventions “do not always occur at the same operational level as the intervention,” meaning that interventions at a personal level may have outcomes at population level and vice versa (Kelly et al., 2010 ). This means that the search needs to be planned and scoped in the initial stages of the review, so that the right decisions can be made about where to search and which methods to employ. The complex relationships between these factors mean that it can be difficult to understand them all at the beginning of the review and the search needs to build up a picture of the area, using alternative approaches, such as the stepped methods described above.

Terminology is complicated in public health literature searching, as the concepts can be hard to define and have multiple meanings according to the context in which they are being used. This is problematic for searching because it is time‐consuming to construct a search, it can be difficult to be comprehensive and it could lead to large volumes of results. This is illustrated by the search on exercise by Grande Antonio et al. ( 2015 ) who note the relevance of exercise to many disciplines. This could result in extra time spent finding all indexing terms on exercise and assessing which are appropriate for any individual search. The authors also report on changing definitions for exercise which could add to the relevant free‐text terms used and therefore impact on time spent constructing strategies and volume of results found.

The multidisciplinary nature of public health means that all relevant evidence may not be found in one location or type of source. The search is more likely to return a higher volume of results, as it will need to be run in multiple databases (Hanneke & Young, 2017 ). Hanneke and Young ( 2017 ) examined information sources for obesity prevention policy and recommended searching PubMed, multidisciplinary and economics databases and grey literature, plus citation searching and handsearching. This illustrates the need to tailor search resources to the topic and, to be comprehensive, to search a range of sources using different search techniques.

Search scenarios

Although many of the studies found in the literature investigate how the publications included in a review were actually retrieved, some papers have taken this analysis a step further to model scenarios of searching a reduced number of databases without missing any of the included publications. For example, Aagaard et al. ( 2016 ) present tables on the cumulative effects of searching up to five databases, discussing the combined recall of MEDLINE, Embase and CENTRAL on reviews of musculoskeletal topics. For this specific clinical area, they recommend that an optimal literature search for RCTs should include the three core databases plus two additional databases and other search techniques like grey literature and citation searching.
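
This kind of retrospective modelling amounts to calculating, for each candidate combination of databases, the proportion of included studies that at least one database in the combination retrieved. A minimal sketch of that calculation, using invented data rather than the Aagaard et al. ( 2016 ) dataset, might look like this:

```python
from itertools import combinations

# Hypothetical record of which databases retrieved each included study.
retrieved_by = {
    "study_01": {"MEDLINE", "Embase"},
    "study_02": {"Embase"},
    "study_03": {"CENTRAL", "MEDLINE"},
    "study_04": {"CINAHL"},
    "study_05": {"MEDLINE", "Embase", "CENTRAL"},
}
databases = {"MEDLINE", "Embase", "CENTRAL", "CINAHL"}
total = len(retrieved_by)

def recall(combo):
    """Proportion of included studies found by at least one database in the combination."""
    found = sum(1 for sources in retrieved_by.values() if sources & set(combo))
    return found / total

# Recall for every combination of up to three databases.
for size in (1, 2, 3):
    for combo in combinations(sorted(databases), size):
        print(f"{' + '.join(combo):30s} recall = {recall(combo):.0%}")
```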

Bramer et al. ( 2017 ) retrospectively checked the source of each included publication in 58 systematic reviews from clinical and public health topics to discover where they were found and which resources retrieved unique included publications. The best combination of databases in terms of recall was Embase, MEDLINE, Web of Science and Google Scholar, although it was recommended to search subject‐specific databases when relevant to the topic.

Urhan et al. ( 2019 ) evaluated the source of included publications in food science reviews. They too looked for the best combination of resources and found this to be Web of Science, two specialised databases and reference checking. The specialised databases and reference checking found included publications that were present in the other databases but had been missed by the strategies. As other authors have commented, this illustrates the importance of recognising that no search is 100% sensitive and it is worth investing time in making decisions about the best sources to search and techniques to use to maximise retrieval of relevant evidence.

Goossen et al. ( 2020 ) focused on systematic reviews in a sample of 86 Overviews of Reviews and also reported on the value of reference checking to complement database searching. In their sample, MEDLINE, Epistemonikos and reference checking were the optimal combination of sources to find the majority of systematic reviews.

All these papers echo the findings of other authors in this review: a mix of core databases and supplementary sources or techniques is needed for optimal results, with topic‐specific resources added where relevant.

An aid to facilitate the modelling of search sources is provided by Bethel et al. ( 2021 ) in their paper describing a search summary table. They developed the table to aid decisions around which databases and supplementary search sources to search based on where evidence has been found in previous searches. The benefit of using this search summary table to work out the optimal search strategy for a topic is illustrated by Coleman et al. ( 2020 ). They use it to determine the minimum set of resources needed to find all the primary publications for their topic on programme theories on pressure ulcers.
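
One way to read a search summary table computationally is as a mapping from each source to the included publications it found, from which a small covering set of sources can then be chosen, for example with a greedy heuristic. The sketch below is illustrative only; the sources and publication identifiers are invented, and this is not the procedure published by Bethel et al. ( 2021 ) or Coleman et al. ( 2020 ).

```python
# Hypothetical search summary data: source -> set of included publications it found.
sources = {
    "MEDLINE":            {"p1", "p2", "p3", "p5"},
    "Embase":             {"p1", "p2", "p5"},
    "Google Scholar":     {"p4", "p6"},
    "Reference checking": {"p3", "p6", "p7"},
}
all_pubs = set().union(*sources.values())

# Greedy set cover: repeatedly pick the source adding the most uncovered publications.
uncovered, chosen = set(all_pubs), []
while uncovered:
    best = max(sources, key=lambda s: len(sources[s] & uncovered))
    chosen.append(best)
    uncovered -= sources[best]

print("Sources covering all included publications:", chosen)
```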

Implications of the results

The results of the narrative review illustrate that researchers have, in many cases, been following recommendations from the Cochrane Handbook and other methods manuals to search several databases, including core databases such as MEDLINE and Embase. There cannot be an exact recommended number of databases or a defined list to search, because the optimal choice depends on the topic, the type of interventions being searched for and the type of study required. The papers reviewed indicate that it is important to consider at the outset what kind of research is being done and to let this inform decisions about where and what to search.

Additional search techniques (e.g., citation searching) and sources (e.g., grey literature) have been shown to increase the likelihood of finding more relevant studies. In some cases, these are publications that would influence decisions about the effectiveness of an intervention. However, using additional techniques or sources can also increase the time spent on a search without producing anything valuable. The additional sources or search techniques may find additional relevant papers, but these will not necessarily change recommendations or conclusions. This calls into question whether the additional time and effort spent on this work is always of value. It also suggests that it is worthwhile to do some testing in the scoping stage of a review, to estimate whether additional sources will retrieve results that will reward the extra time and effort required. This would involve checking a small sample of results to see if they were relevant for the review. Cooper, Lovell, et al. ( 2018 ), Cooper, Varley‐Campbell, et al. ( 2018 ) and Bethel et al. ( 2021 ) have all discussed methods for this kind of testing work.
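
Such scoping-stage testing could be as simple as screening a random sample from a candidate source and projecting the likely yield. A minimal sketch, with invented records and a deliberately crude relevance check, is shown below; it is not a method prescribed by the cited authors.

```python
import random

def projected_yield(results, sample_size, is_relevant, seed=0):
    """Estimate how many relevant records a source might contribute,
    based on screening a random sample of its results."""
    random.seed(seed)
    sample = random.sample(results, min(sample_size, len(results)))
    relevant = sum(1 for r in sample if is_relevant(r))
    precision = relevant / len(sample)
    return precision, precision * len(results)

# Hypothetical example: 800 grey-literature results, screen 40, judge relevance by a keyword.
results = [f"record about topic {i}" if i % 20 == 0 else f"record {i}" for i in range(800)]
precision, estimate = projected_yield(results, 40, lambda r: "topic" in r)
print(f"Sampled precision ~{precision:.0%}; projected relevant records ~{estimate:.0f} of 800")
```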

Limiting bias is seen as a key reason for searching beyond one or two bibliographic databases and for searching additional sources or techniques. In the case of grey literature, it can potentially find unpublished results, meaning that conclusions and recommendations are not solely based on published research.

Some sources are, by their nature, difficult to search systematically, and it is challenging to reproduce results. Web searching, particularly a source like Google Scholar, is criticised for a lack of transparency and consistency of results. Interestingly, although researchers may search additional sources to minimise bias, they may inadvertently introduce bias, unless they consider how some search engines personalise results.

A few papers made observations about the importance of the quality of search strategies, for example, Arber et al. ( 2018 ), Frandsen, Gildberg, et al. ( 2019 ) and Wright et al. ( 2015 ). A study could be indexed in a database but not found by the search strategy, so the construction of a robust search strategy is key to a good literature search. The discussion highlighted the difficulty of producing a comprehensive search strategy in public health reviews, given that the multidisciplinary nature of the search often leads to a higher volume of results.

Key gaps in the evidence on searching for public health topics

There is less direct evidence on public health literature searching compared with clinical topic literature searching. Although there are a few papers exploring the challenges and complexities of public health literature searching, there is room for more studies describing or comparing approaches to searching for public health topics.

Most of the existing studies on both clinical and public health topics are descriptive studies looking at one example. It would be interesting to see more studies covering multiple examples which could be synthesised to provide stronger recommendations.

It would also be useful to see more work on scenarios examining where included publications have been found in previous reviews, to inform future searches in similar topics. Although there has been some work done on this generally, there has been a lack of work specifically on public health reviews in recent years. This type of modelling could be useful for helping inform decisions about how to approach searches for public health topics.

Key lessons from the literature to guide searching for public health topics

Public health literature searches need to be managed effectively, according to the time and resources available to the review. The groundwork needs to be given sufficient time to explore the topic and the different contexts in which an intervention might have been used. This leads to questions about the suitability of databases and the identification of resources applicable to the topic. Once the sources have been identified, they must be searched efficiently without creating unmanageable volumes of results. This can be difficult to achieve in a multidisciplinary topic where terminology has multiple and competing meanings. The search must be planned to achieve a good balance of resources. These are the kinds of decision on which information specialists are well placed to advise.

However, adding more databases with similar coverage may not be effective. It can be more helpful to spend time on other techniques to find other types of evidence. The searches for grey literature and unpublished data can be time‐consuming, given the difficulties of doing them in a transparent and reproducible way. Therefore, search planning and iterative steps are required to understand the evidence base in order to plan the optimal approach to that review and the types of evidence it requires. Realist review literature search methods may be a good model to follow, because of their exploratory approach to searching.

There are some key lessons from the literature that can be applied to public health reviews. In public health topics as in other fields, there is no set number or list of sources that should be searched. Public health is also a discipline that benefits, perhaps more than clinical disciplines, from pre‐search planning and consideration of the type of information being sought. Searching additional sources to retrieve grey literature may be particularly rewarding when seeking evidence on populations or interventions that are harder to find in journals. The additional sources are, however, not guaranteed to retrieve unique papers, and they need to be searched carefully to avoid introducing new sources of bias.

CONFLICT OF INTEREST

Andrea Heath has no interests to declare. Paul Levay has no interests to declare. Daniel Tuvey has no interests to declare.

ACKNOWLEDGEMENTS

The authors would like to thank Marion Spring, Liz Walton, Nicola Walsh and Lynda Ayiku.

APPENDIX A. 

Search strategy.

The search strategy was developed in the MEDLINE bibliographic database (Ovid interface, 1946 to February Week 2 2019) and adapted as appropriate.

APPENDIX B. 

Inclusion/exclusion criteria.

Heath, A., Levay, P., & Tuvey, D. (2022). Literature searching methods or guidance and their application to public health topics: A narrative review. Health Information & Libraries Journal, 39, 6–21. 10.1111/hir.12414

This study was conducted as part of NICE methods development, and no additional funding was received.

  • Aagaard, T. , Lund, H. , & Juhl, C. (2016). Optimizing literature search in systematic reviews—Are MEDLINE, EMBASE and CENTRAL enough for identifying effect studies within the area of musculoskeletal disorders? BMC Medical Research Methodology , 16 , 161. 10.1186/s12874-016-0264-6 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Adams, J. , Hillier‐Brown, F. C. , Moore, H. J. , Lake, A. A. , Araujo‐Soares, V. , White, M. , & Summerbell, C. (2016). Searching and synthesising ‘grey literature’ and ‘grey information’ in public health: Critical reflections on three case studies . Systematic Reviews , 5 ( 1 ), 164. 10.1186/s13643-016-0337-y [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Affengruber, L. , Wagner, G. , Waffenschmidt, S. , Lhachimi, S. K. , Nussbaumer‐Streit, B. , Thaler, K. , Griebler, U. , Klerings, I. , & Gartlehner, G. (2020). Combining abbreviated literature searches with single‐reviewer screening: Three case studies of rapid reviews . Systematic Reviews , 9 ( 1 ), 162. 10.1186/s13643-020-01413-7 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Alpi, K. M. (2005). Expert searching in public health . Journal of the Medical Library Association , 93 ( 1 ), 97–103. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC545128/ [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Arber, M. , Glanville, J. , Isojarvi, J. , Baragula, E. , Edwards, M. , Shaw, A. , & Wood, H. (2018). Which databases should be used to identify studies for systematic reviews of economic evaluations? International Journal of Technology Assessment in Health Care , 34 ( 6 ), 547–554. 10.1017/s0266462318000636 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bagg, M. K. , O’Hagan, E. , Zahara, P. , Wand, B. M. , Hübscher, M. , Moseley, G. L. , & McAuley, J. H. (2020). Systematic reviews that include only published data may overestimate the effectiveness of analgesic medicines for low back pain: A systematic review and meta‐analysis . Journal of Clinical Epidemiology , 124 , 149–159. 10.1016/j.jclinepi.2019.12.006 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bethel, A. , Rogers, M. , & Abbott, R. (2021). Use of a search summary table to improve systematic review search methods, results, and efficiency . Journal of the Medical Library Association , 109 ( 1 ), 97–106. 10.5195/jmla.2021.809 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Booth, A. (2016a). Over 85% of included studies in systematic reviews are on MEDLINE . Journal of Clinical Epidemiology , 79 , 165–166. 10.1016/j.jclinepi.2016.04.002 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Booth, A. (2016b). Searching for qualitative research for inclusion in systematic reviews: A structured methodological review . Systematic Reviews , 5 , 74. 10.1186/s13643-016-0249-x [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Booth, A. , Briscoe, S. , & Wright, J. M. (2019). The “Realist Search”: A systematic scoping review of current practice and reporting . Research Synthesis Methods , 11 ( 1 ), 14–35. 10.1002/jrsm.1386 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Boulos, L. , Ogilvie, R. , & Hayden, J. A. (2021). Search methods for prognostic factor systematic reviews: A methodologic investigation . Journal of the Medical Library Association , 109 ( 1 ), 23–32. 10.5195/jmla.2021.939 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bramer, W. M. , Rethlefsen, M. L. , Kleijnen, J. , & Franco, O. H. (2017). Optimal database combinations for literature searches in systematic reviews: A prospective exploratory study . Systematic Reviews , 6 ( 1 ), 245. 10.1186/s13643-017-0644-y [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Briscoe, S. (2016). Eysenbach, Tuische and Diepgen’s evaluation of web searching for identifying unpublished studies for systematic reviews: An innovative study which is still relevant today . Evidence Based Library and Information Practice , 11 , 108. 10.18438/B8F049 [ CrossRef ] [ Google Scholar ]
  • Briscoe, S. , Bethel, A. , & Rogers, M. (2020). Conduct and reporting of citation searching in Cochrane systematic reviews: A cross‐sectional study . Research Synthesis Methods , 11 ( 2 ), 169–180. 10.1002/jrsm.1355 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Briscoe, S. , Nunns, M. , & Shaw, L. (2020). How do Cochrane authors conduct web searching to identify studies? Findings from a cross‐sectional sample of Cochrane Reviews . Health Information & Libraries Journal , 37 ( 4 ), 293–318. 10.1111/hir.12313 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Centre for Reviews and Dissemination . (2009). Systematic reviews: CRD’s guidance for undertaking reviews in health care . University of York Centre for Reviews and Dissemination. https://www.york.ac.uk/media/crd/Systematic_Reviews.pdf [ Google Scholar ]
  • Chaabna, K. , Cheema, S. , Abraham, A. , & Mamtani, R. (2020). Strengthening literature search strategies for systematic reviews reporting population health in the Middle East and North Africa: A meta‐research study . Journal of Evidence‐Based Medicine , 13 ( 3 ), 192–198. 10.1111/jebm.12394 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Coleman, S. , Wright, J. M. , Nixon, J. , Schoonhoven, L. , Twiddy, M. , & Greenhalgh, J. (2020). Searching for programme theories for a realist evaluation: A case study comparing an academic database search and a simple Google search . BMC Medical Research Methodology , 20 ( 1 ), 217. 10.1186/s12874-020-01084-x [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cooper, C. , Booth, A. , Britten, N. , & Garside, R. (2017). A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: A methodological review . Systematic Reviews , 6 ( 1 ), 234. 10.1186/s13643-017-0625-1 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cooper, C. , Booth, A. , Varley‐Campbell, J. , Britten, N. , & Garside, R. (2018). Defining the process to literature searching in systematic reviews: A literature review of guidance and supporting studies . BMC Medical Research Methodology , 18 ( 1 ), 85. 10.1186/s12874-018-0545-3 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cooper, C. , Lorenc, T. , & Schauberger, U. (2021). What you see depends on where you sit: The effect of geographical location on web‐searching for systematic reviews: A case study . Research Synthesis Methods , 12 ( 4 ), 557–570. 10.1002/jrsm.1485 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cooper, C. , Lovell, R. , Husk, K. , Booth, A. , & Garside, R. (2018). Supplementary search methods were more effective and offered better value than bibliographic database searching: A case study from public health and environmental enhancement . Research Synthesis Methods , 9 ( 2 ), 195–223. 10.1002/jrsm.1286 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cooper, C. , Varley‐Campbell, J. , Booth, A. , Britten, N. , & Garside, R. (2018). Systematic review identifies six metrics and one method for assessing literature search effectiveness but no consensus on appropriate use . Journal of Clinical Epidemiology , 99 , 53–63. 10.1016/j.jclinepi.2018.02.025 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Curkovic, M. , & Kosec, A. (2018). Bubble effect: Including internet search engines in systematic reviews introduces selection bias and impedes scientific reproducibility . BMC Medical Research Methodology , 18 , 130. 10.1186/s12874-018-0599-2 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • de Kock, S. , Stirk, L. , Ross, J. , Duffy, S. , Noake, C. , & Misso, K. (2020). Systematic review search methods evaluated using the preferred reporting of items for systematic reviews and meta‐analyses and the risk of bias in systematic reviews tool . International Journal of Technology Assessment in Health Care , 37 , 1–5. 10.1017/S0266462320002135 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Delaney, A. , & Tamas, P. A. (2018). Searching for evidence or approval? A commentary on database search in systematic reviews and alternative information retrieval methodologies . Research Synthesis Methods , 9 ( 1 ), 124–131. 10.1002/jrsm.1282 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Duffy, S. , de Kock, S. , Misso, K. , Noake, C. , Ross, J. , & Stirk, L. (2016). Supplementary searches of PubMed to improve currency of MEDLINE and MEDLINE in‐process searches via Ovid . Journal of the Medical Library Association , 104 , 309–312. 10.3163/1536-5050.104.4.011 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Enticott, J. , Buck, K. , & Shawyer, F. (2018). Finding “hard to find” literature on hard to find groups: A novel technique to search grey literature on refugees and asylum seekers . International Journal of Methods in Psychiatric Research , 27 ( 1 ), e1580. 10.1002/mpr.1580 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ewald, H. , Klerings, I. , Wagner, G. , Heise, T. L. , Dobrescu, A. I. , Armijo‐Olivo, S. , Stratil, J. M. , Lhachimi, S. K. , Mittermayr, T. , Gartlehner, G. , Nussbaumer‐Streit, B. , & Hemkens, L. G. (2020). Abbreviated and comprehensive literature searches led to identical or very similar effect estimates: A meta‐epidemiological study . Journal of Clinical Epidemiology , 128 , 1–12. 10.1016/j.jclinepi.2020.08.002 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Faculty of Public Health . (2016). Good Public Health Practice framework 2016 . https://www.fph.org.uk/media/1304/good‐public‐health‐practice‐framework_‐2016_final.pdf [ Google Scholar ]
  • Farace, D. , & Schöpfel, J. (2010). Grey literature in library and information studies. In Bates M. J. & Maack M. N., Encyclopedia of library and information sciences , (3rd ed., pp. 2029–2039). CRC Press. https://hal.archives‐ouvertes.fr/hal‐01981296/document [ Google Scholar ]
  • Farrah, K. A. , & Mierzwinski‐Urban, M. (2019). Almost half of references in reports on new and emerging nondrug health technologies are grey literature . Journal of the Medical Library Association , 107 ( 1 ), 43–48. 10.5195/jmla.2019.539 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Frandsen, T. F. , Eriksen, M. B. , Hammer, D. M. G. , & Christensen, J. B. (2019). PubMed coverage varied across specialties and over time: A large‐scale study of included studies in Cochrane reviews . Journal of Clinical Epidemiology , 112 , 59–66. 10.1016/j.jclinepi.2019.04.015 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Frandsen, T. F. , Eriksen, M. B. , Hammer, D. M. G. , Christensen, J. B. , & Wallin, J. A. (2021). Using Embase as a supplement to PubMed in Cochrane reviews differed across fields . Journal of Clinical Epidemiology , 133 , 24–31. 10.1016/j.jclinepi.2020.12.022 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Frandsen, T. F. , Gildberg, F. A. , & Tingleff, E. B. (2019). Searching for qualitative health research required several databases and alternative search strategies: A study of coverage in bibliographic databases . Journal of Clinical Epidemiology , 114 , 118–124. 10.1016/j.jclinepi.2019.06.013 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Frieden, T. R. (2017). Evidence for health decision making—Beyond randomized, controlled trials . The New England Journal of Medicine , 377 ( 5 ), 465–475. 10.1056/NEJMra1614394 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gargon, E. , Williamson, P. R. , & Clarke, M. (2015). Collating the knowledge base for core outcome set development: Developing and appraising the search strategy for a systematic review . BMC Medical Research Methodology , 15 ( 1 ), 26. 10.1186/s12874-015-0019-9 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Godin, K. , Stapleton, J. , Kirkpatrick Sharon, I. , Hanning Rhona, M. , & Leatherdale Scott, T. (2015). Applying systematic review search methods to the grey literature: A case study examining guidelines for school‐based breakfast programs in Canada . Systematic Reviews , 4 , 138. 10.1186/s13643-015-0125-0 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Goossen, K. , Hess, S. , Lunny, C. , & Pieper, D. (2020). Database combinations to retrieve systematic reviews in overviews of reviews: A methodological study . BMC Medical Research Methodology , 20 ( 1 ), 138. 10.1186/s12874-020-00983-3 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Goossen, K. , Tenckhoff, S. , Probst, P. , Grummich, K. , Mihaljevic, A. L. , Buchler, M. W. , & Diener, M. K. (2018). Optimal literature search for systematic reviews in surgery . Langenbeck’s Archives of Surgery , 403 ( 1 ), 119–129. 10.1007/s00423-017-1646-x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grande Antonio, J. , Hoffmann, T. , & Glasziou, P. (2015). Searching for randomized controlled trials and systematic reviews on exercise. A descriptive study . Sao Paulo Medical Journal , 133 , 109–114. 10.1590/1516-3180.2013.8040011 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gray, H. M. , Simpson, A. , Bowers, A. , Johnson, A. L. , & Vassar, M. (2019). Trial registry use in surgery systematic reviews: A cross‐sectional study . Journal of Surgical Research , 247 , 323–331. 10.1016/j.jss.2019.09.067 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gusenbauer, M. , & Haddaway, N. R. (2019). Which academic search systems are suitable for systematic reviews or meta‐analyses? Evaluating retrieval qualities of Google Scholar, PubMed and 26 other resources . Research Synthesis Methods , 11 , 181–217. 10.1002/jrsm.1378 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gusenbauer, M. , & Haddaway, N. R. (2021). What every researcher should know about searching—Clarified concepts, search advice, and an agenda to improve finding in academia . Research Synthesis Methods , 12 ( 2 ), 136–147. 10.1002/jrsm.1457 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Halevi, G. , Moed, H. , & Bar‐Ilan, J. (2017). Suitability of Google Scholar as a source of scientific information and as a source of data for scientific evaluation‐review of the Literature . Journal of Informetrics , 11 ( 3 ), 823–834. 10.1016/j.joi.2017.06.005 [ CrossRef ] [ Google Scholar ]
  • Halfpenny, N. J. A. , Quigley, J. M. , Thompson, J. C. , & Scott, D. A. (2016). Value and usability of unpublished data sources for systematic reviews and network meta‐analyses . Evidence Based Medicine , 21 ( 6 ), 208–213. 10.1136/ebmed-2016-110494 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Halladay, C. W. , Trikalinos, A. , Schmid, T. , Schmid, H. , & Dahabreh Issa, J. (2015). Using data sources beyond PubMed has a modest impact on the results of systematic reviews of therapeutic interventions . Journal of Clinical Epidemiology , 68 , 1076–1084. 10.1016/j.jclinepi.2014.12.017 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hanneke, R. , & Young, S. K. (2017). Information sources for obesity prevention policy research: A review of systematic reviews . Systematic Reviews , 6 ( 1 ), 156. 10.1186/s13643-017-0543-2 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Harari, M. B. , Parola, H. R. , Hartwell, C. J. , & Riegelman, A. (2020). Literature searches in systematic reviews and meta‐analyses: A review, evaluation, and recommendations . Journal of Vocational Behavior , 118 , 103377. 10.1016/j.jvb.2020.103377 [ CrossRef ] [ Google Scholar ]
  • Hartling, L. , Featherstone, R. , Nuspl, M. , Shave, K. , Dryden Donna, M. , & Vandermeer, B. (2016). The contribution of databases to the results of systematic reviews: A cross‐sectional study . BMC Medical Research Methodology , 16 , 127. 10.1186/s12874-016-0232-1 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hartling, L. , Featherstone, R. , Nuspl, M. , Shave, K. , Dryden Donna, M. , & Vandermeer, B. (2017). Grey literature in systematic reviews: A cross‐sectional study of the contribution of non‐English reports, unpublished studies and dissertations to the results of meta‐analyses in child‐relevant reviews . BMC Medical Research Methodology , 17 , 64. 10.1186/s12874-017-0347-z [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Higgins, J. P. T. , Lasserson, T. , Chandler, J. , Tovey, D. , Thomas, J. , & Flemyng, E. C. R. (2020). Methodological expectations of Cochrane intervention reviews . Cochrane. https://community.cochrane.org/mecir‐manual [ Google Scholar ]
  • Hinde, S. , & Spackman, E. (2015). Bidirectional citation searching to completion: An exploration of literature searching methods . Pharmacoeconomics , 33 , 5–11. 10.1007/s40273-014-0205-3 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kelly, M. , Morgan, A. , Ellis, S. , Younger, T. , Huntley, J. , & Swann, C. (2010). Evidence based public health: A review of the experience of the National Institute of Health and Clinical Excellence (NICE) of developing public health guidance in England . Social Science & Medicine , 71 ( 6 ), 1056–1062. 10.1016/j.socscimed.2010.06.032 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Konno, K. , & Pullin, A. S. (2020). Assessing the risk of bias in choice of search sources for environmental meta‐analyses . Research Synthesis Methods , 11 , 698–713. 10.1002/jrsm.1433 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kugley, S. , Wade, A. , Thomas, J. , Mahood, Q. , Jørgensen, A.‐M.‐K. , Hammerstrøm, K. , & Sathe, N. (2017). Searching for studies: A guide to information retrieval for Campbell systematic reviews . Campbell Systematic Reviews , 13 ( 1 ), 1–73. 10.4073/cmg.2016.1 [ CrossRef ] [ Google Scholar ]
  • Lam, M. T. , & McDiarmid, M. (2016). Increasing number of databases searched in systematic reviews and meta‐analyses between 1994 and 2014 . Journal of the Medical Library Association , 104 , 284–289. 10.3163/1536-5050.104.4.006 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lavis, J. N. , Wilson, M. G. , Grimshaw, J. M. , Haynes, R. B. , Ouimet, M. , Raina, P. , Gruen, R. L. , & Graham, I. D. (2010). Supporting the use of health technology assessments in policy making about health systems . International Journal of Technology Assessment in Health Care , 26 ( 4 ), 405–414. 10.1017/S026646231000108X [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lefebvre, C. , Glanville, J. , Briscoe, S. , Littlewood, A. , Marshall, C. , Metzendorf, M.‐I. , Noel‐Storr, A. , Rader, T. , Shokraneh, F. , Thomas, J. , & Wieland, L. (2019). Chapter 4: Searching for and selecting studies. In Higgins J., Thomas J., Chandler J., Cumpston M., Li T., Page M., & Welch V. (Eds.), Cochrane handbook for systematic reviews of interventions (6.0) . Cochrane John Wiley & Son. [ Google Scholar ]
  • Levay, P. , Ainsworth, N. , Kettle, R. , & Morgan, A. (2016). Identifying evidence for public health guidance: A comparison of citation searching with Web of Science and Google Scholar . Research Synthesis Methods , 7 , 34–45. 10.1002/jrsm.1158 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Levay, P. , Raynor, M. , & Tuvey, D. (2015). The contributions of MEDLINE, other bibliographic databases and various search techniques to NICE Public Health Guidance . Evidence Based Library and Information Practice , 10 ( 1 ), 50. 10.18438/B82P55 [ CrossRef ] [ Google Scholar ]
  • Mathes, T. , Antoine, S.‐L. , Prengel, P. , Buhn, S. , Polus, S. , & Pieper, D. (2017). Health technology assessment of public health interventions: A synthesis of methodological guidance . International Journal of Technology Assessment in Health Care , 33 ( 2 ), 135–146. 10.1017/s0266462317000228 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • NICE . (2018). Developing NICE guidelines: The manual . https://www.nice.org.uk/process/pmg20/chapter/identifying‐the‐evidence‐literature‐searching‐and‐evidence‐submission [ PubMed ] [ Google Scholar ]
  • Nussbaumer‐Streit, B. , Klerings, I. , Wagner, G. , Heise, T. L. , Dobrescu, A. I. , Armijo‐Olivo, S. , Stratil, J. M. , Persad, E. , Lhachimi, S. K. , Van Noord, M. G. , Mittermayr, T. , Zeeb, H. , Hemkens, L. , & Gartlehner, G. (2018). Abbreviated literature searches were viable alternatives to comprehensive searches: A meta‐epidemiological study . Journal of Clinical Epidemiology , 102 , 1–11. 10.1016/j.jclinepi.2018.05.022 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pham, M. T. , Waddell, L. , Rajić, A. , Sargeant, J. M. , Papadopoulos, A. , & McEwen, S. A. (2016). Implications of applying methodological shortcuts to expedite systematic reviews: Three case studies using systematic reviews from agri‐food public health . Research Synthesis Methods , 7 ( 4 ), 433–446. 10.1002/jrsm.1215 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Posey, R. , Walker, J. , & Crowell, K. (2016). Knowing When to stop: Final results vs. work involved in systematic review database searching . Medical Library Association. Annual Meeting. https://cdr.lib.unc.edu/indexablecontent/uuid:6f8509fa‐140c‐4a5a‐9e0e‐cee42cbe9436 [ Google Scholar ]
  • Pozsgai, G. , Lovei, G. L. , Vasseur, L. , Gurr, G. , Batary, P. , Korponai, J. , Littlewood, N. A. , Liu, J. , Mora, A. , Obrycki, J. , Reynolds, O. , Stockan, J. A. , VanVolkenburg, H. , Zhang, J. , Zhou, W. , & You, M. (2020). A comparative analysis reveals irreproducibility in searches of scientific literature . BioRxiv , 20200320997783. 10.1101/2020.03.20.997783 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pradhan, R. , Garnick, K. , Barkondaj, B. , Jordan, H. S. , Ash, A. , & Yu, H. (2018). Inadequate diversity of information resources searched in US‐affiliated systematic reviews and meta‐analyses: 2005–2016 . Journal of Clinical Epidemiology , 102 , 50–62. 10.1016/j.jclinepi.2018.05.024 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Rogers, M. , Bethel, A. , & Abbott, R. (2018). Locating qualitative studies in dementia on MEDLINE, EMBASE, CINAHL, and PsycINFO: A comparison of search strategies . Research Synthesis Methods , 9 ( 4 ), 579–586. 10.1002/jrsm.1280 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Rogers, M. , Bethel, A. , & Briscoe, S. (2020). Resources for forwards citation searching for implementation studies in dementia care: A case study comparing Web of Science and Scopus . Research Synthesis Methods , 11 ( 3 ), 379–386. 10.1002/jrsm.1400 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ross‐White, A. , & Godfrey, C. (2017). Is there an optimum number needed to retrieve to justify inclusion of a database in a systematic review search? Health Information and Libraries Journal , 34 ( 3 ), 217–224. 10.1111/hir.12185 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Savolainen, R. (2018). Berrypicking and information foraging: Comparison of two theoretical frameworks for studying exploratory search . Journal of Information Science , 44 ( 5 ), 580–593. 10.1177/0165551517713168 [ CrossRef ] [ Google Scholar ]
  • Schmucker, C. M. , Blumle, A. , Schell, L. K. , Schwarzer, G. , Oeller, P. , Cabrera, L. , Von Elm, E. , Briel, M. , & Meerpohl, J. J. (2017). Systematic review finds that study data not published in full text articles have unclear impact on meta‐analyses results in medical research . PLoS One , 12 ( 4 ), e0176210. 10.1371/journal.pone.0176210 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Scottish Intercollegiate Guidelines Network (SIGN) . (2019). A guideline developer’s handbook (SIGN publication no. 50). SIGN. http://www.sign.ac.uk [ Google Scholar ]
  • Stansfield, C. , Dickson, K. , & Bangpan, M. (2016). Exploring issues in the conduct of website searching and other online sources for systematic reviews: How can we be systematic? Systematic Reviews , 5 , 191. 10.1186/s13643-016-0371-9 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Thomas, J. , Petticrew, M. , Noyes, J. , Chandler, J. , Rehfuess, E. , Tugwell, P. , & Welch, V. A. (2019). Chapter 17: Intervention complexity. In Higgins J. P. T., Thomas J., Chandler J., Cumpston M., Li T., Page M. J., & Welch V. A. (Eds.), Cochrane Handbook for Systematic Reviews of Interventions version 6.0 (updated July 2019) . Cochrane. https://training.cochrane.org/handbook/current/chapter‐17#section‐17‐3 [ Google Scholar ]
  • Thompson, J. C. , Quigley, J. M. , Halfpenny, N. J. A. , Scott, D. A. , & Hawkins, N. S. (2016). Importance and methods of searching for E‐publications ahead of print in systematic reviews . Evidence‐Based Medicine , 21 ( 2 ), 55–59. 10.1136/ebmed-2015-110374 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Urhan, T. K. , Rempel, H. G. , Meunier‐Goddik, L. , & Penner, M. H. (2019). Information retrieval in food science research II: Accounting for relevance when evaluating database performance . Journal of Food Science , 84 ( 10 ), 2729–2735. 10.1111/1750-3841.14769 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Vassar, M. , Yerokhin, V. , Sinnett, P. M. , Weiher, M. , Muckelrath, H. , Carr, B. , Varney, L. , & Cook, G. (2017). Database selection in systematic reviews: An insight through clinical neurology . Health Information and Libraries Journal , 34 ( 2 ), 156–164. 10.1111/hir.12176 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Waffenschmidt, S. , Hermanns, T. , Gerber‐Grote, A. , & Mostardt, S. (2017). No suitable precise or optimized epidemiologic search filters were available for bibliographic databases . Journal of Clinical Epidemiology , 82 , 112–118. 10.1016/j.jclinepi.2016.08.008 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Whaley, P. , Aiassa, E. , Beausoleil, C. , Beronius, A. , Bilotta, G. , Boobis, A. , de Vries, R. , Hanberg, A. , Hoffmann, S. , Hunt, N. , Kwiatkowski, C. F. , Lam, J. , Lipworth, S. , Martin, O. , Randall, N. , Rhomberg, L. , Rooney, A. A. , Schünemann, H. J. , Wikoff, D. , … Halsall, C. (2020). Recommendations for the conduct of systematic reviews in toxicology and environmental health research (COSTER) . Environment International , 143 , 105926. 10.1016/j.envint.2020.105926 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wilson, L. M. , Sharma, R. , Dy, S. M. , Waldfogel, J. M. , & Robinson, K. A. (2017). Searching ClinicalTrials.gov did not change the conclusions of a systematic review . Journal of Clinical Epidemiology , 90 , 127–135. 10.1016/j.jclinepi.2017.07.009 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wood, H. , Arber, M. , & Glanville, J. M. (2017). Systematic reviews of economic evaluations: How extensive are their searches? International Journal of Technology Assessment in Health Care , 33 ( 1 ), 25–31. 10.1017/s0266462316000660 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wright, K. , Golder, S. , & Lewis‐Light, K. (2015). What value is the CINAHL database when searching for systematic reviews of qualitative studies? Systematic Reviews , 4 , 104. 10.1186/s13643-015-0069-4 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zwakman, M. , Verberne, L. M. , Kars, M. C. , Hooft, L. , van Delden, J. J. M. , & Spijker, R. (2018). Introducing PALETTE: An iterative method for conducting a literature search for a review in palliative care . BMC Palliative Care , 17 ( 1 ), 82. 10.1186/s12904-018-0335-z [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Open access
  • Published: 14 February 2024

Citizens’ perspectives on relocating care: a scoping review

  • L. J. Damen 1 ,
  • L. H. D. Van Tuyl 1 ,
  • J. C. Korevaar 1 , 3 ,
  • B. J. Knottnerus 1 &
  • J. D. De Jong 1 , 2  

BMC Health Services Research volume  24 , Article number:  202 ( 2024 ) Cite this article

104 Accesses

6 Altmetric

Metrics details

Healthcare systems around the world are facing large challenges. Demands and costs are increasing while, at the same time, the health workforce is diminishing. Without reform, healthcare systems are unsustainable. Relocating care, for example from hospitals to sites closer to patients’ homes, is expected to make a key contribution to keeping healthcare sustainable. Given the significant impact of this initiative on citizens, we conducted a scoping review to provide insight into the factors that influence citizens’ attitudes towards relocating care.

A scoping review was conducted. The search was performed in the following databases: PubMed, Embase, CINAHL, and Scopus. Articles had to address the relocation of healthcare and citizens’ perspectives on this topic, and had to concern a European country with a strong primary care system. After applying the inclusion and exclusion criteria, 70 articles remained.

Factors positively influencing citizens’ attitudes towards relocating care included: convenience, familiarity, accessibility, patients having more control over their disease, and privacy. Factors influencing negative attitudes included: concerns about the quality of care, familiarity, the lack of physical examination, contact with others, convenience, and privacy. Furthermore, in general, most citizens preferred to relocate care in the studies we found, especially from the hospital to care provided at home.

Several factors influencing the attitude of citizens towards relocating care were found. These factors are very important when determining citizens’ preferences for the location of their healthcare. The majority of studies in this review reported that citizens are in favour of relocating care. In general, citizens’ perspectives on relocating care are very often missing from articles. It was notable that very few studies on relocation from the hospital to the general practitioner were identified.

Peer Review reports

Introduction

Demand for healthcare is increasing across the world due to a number of developments including populations ageing, technical advances in medical care, and rising incomes [ 1 , 2 , 3 ]. With an increase in demand, costs will also rise, while at the same time the health workforce is diminishing [ 1 , 2 , 3 , 4 , 5 ]. Consequently, reforms within the healthcare system will be necessary in order to control increasing healthcare costs and staff shortages [ 1 , 2 , 3 ]. It is assumed that reforming healthcare systems with a view to making better use of resources will make a key contribution to keeping healthcare sustainable. Estimates suggest that one fifth of health spending could be channelled towards better use, thus improving healthcare efficiency [ 6 ]. Increased efficiency could be accomplished in several ways. These may include: reducing the number of patients who receive low-value or unnecessary care; providing the same care with fewer resources, for instance by providing care in more cost-effective settings rather than in hospitals; or reducing administrative processes that add no value [ 6 ]. This article focuses on providing care with fewer resources by relocating it to more cost-effective settings. This, in the first instance, would mean from secondary care to primary care. The thought behind this is that general practitioners (GPs) can generally provide care at less expense than hospitals for certain procedures that do not need hospital staff or a hospital environment [ 6 ]. These may include minor interventions, such as the placement of an intra-uterine device (IUD), or follow-up care, such as yearly blood tests and ultrasounds, for patients who have been treated for cancer [ 6 , 7 , 8 , 9 ]. Relocating care to control costs could also include relocating care from secondary care to homecare, self-care or eHealth [ 10 ]. Delivering care digitally can prevent a patient from having to go to the hospital. For example, an app could be used to monitor a patient receiving oxygen at home. Care commonly provided by the GP could also be relocated to self-care, eHealth or other healthcare providers (HCPs), such as a physiotherapist or dietitian. This could result in more time for the GP to take on other secondary or primary care tasks.

For relocating care to succeed, it is important to gain insight into the perspectives and needs of healthcare providers and citizens. Although involving citizens is a very important aspect of policy-making processes, it is an often overlooked form of evidence according to the World Health Organization (WHO) [ 11 ]. Citizen engagement will strengthen societal trust, lead to more effective public policies and improve the quality of care. Furthermore, citizen engagement is essential because healthcare systems are transitioning towards a patient-centered approach, where citizens' perspectives on quality are inherently meaningful and should be a primary focus within healthcare systems [ 12 ]. Extensive research has already been undertaken regarding the perspective of healthcare providers [ 9 , 13 , 14 , 15 , 16 ], the quality and outcomes of care [ 17 , 18 , 19 , 20 ] and the cost perspectives [ 10 , 17 , 18 , 20 , 21 ], but not regarding the citizens' perspective on relocating care. To our knowledge, a review about citizens’ perspectives on relocating care does not yet exist. We have, therefore, conducted a scoping review with the goal of describing the findings and range of research concerning citizens’ perspectives on relocating care in more detail. A strong primary care system is required to make relocating care possible [ 6 ]. We, therefore, searched for studies that were undertaken in European countries with a strong primary care system [ 22 ]. Table 1 describes the characteristics of countries with strong primary care. The research questions answered in this review are: (1) Which factors influence citizens’ attitudes towards relocating care? (2) What are citizens’ preferences towards the location of care?

The aim of this review is to understand citizens’ attitudes and preferences towards relocating care. As this topic is quite broad and may be studied using many different study designs, and considering that we are not aware of any prior synthesis on this topic, a scoping review rather than a systematic review was conducted. This scoping review was carried out on the basis of the guideline by Arksey and O’Malley [ 23 ]. The review includes the following key phases: 1) identifying the research question; 2) identifying relevant studies; 3) study selection; 4) charting the data, and; 5) collating, summarising, and reporting the results.

The search strategy and selection of literature

An initial broad search of the literature was undertaken by the first author in order to identify relevant articles that could be used for designing a search strategy. During this search, 18 key articles were identified, covering citizens, preference, and relocating care; these three terms formed the basis of our search strategy. A qualified medical information specialist was consulted in order to design and execute a sensitive search strategy. The medical information specialist also advised on which databases were most likely to contain the type of studies we were seeking, and an initial search strategy was constructed. This was refined several times after consultation. The final version was first used on the PubMed database and then converted for each of the subsequent databases: Embase, CINAHL, and Scopus. The final search strategy, shown in Appendix A , was able to find 16 out of the 18 key articles identified. In total, it identified 19,587 articles. Duplicate references were removed, leaving 11,080 unique references. The most recent search was executed on 5 July 2022.
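
Deduplication of this kind is normally handled in reference-management software; purely as an illustration of the underlying idea, the sketch below matches records on DOI or on a normalised title. The records and field names are invented.

```python
import re

def normalise(title):
    """Lower-case a title and strip punctuation/whitespace so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        keys = {normalise(rec["title"])}
        if rec.get("doi"):
            keys.add(rec["doi"].lower())
        if keys & seen:
            continue  # at least one key already encountered -> duplicate
        seen |= keys
        unique.append(rec)
    return unique

records = [
    {"title": "Citizens' views on relocating care", "doi": "10.1000/example1"},
    {"title": "Citizens’ Views on Relocating Care.", "doi": None},  # duplicate of the first by title
    {"title": "Self-testing at home: a survey", "doi": "10.1000/example2"},
]
print(len(deduplicate(records)), "unique references")
```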

The selection process was performed by all authors. First, inclusion and exclusion criteria were developed. There were several inclusion criteria for this scoping review. The topic of the articles had to be citizens’ perspectives on relocating care. Only articles related to European countries with strong primary care systems were included, as a strong primary care system is required to make relocating care possible [ 6 ]. These countries were: the Netherlands, the United Kingdom, Belgium, Spain, Portugal, Finland, Estonia, Lithuania, Denmark, and Slovenia [ 22 ]. Only articles written in English, Dutch, or German were included, as these were languages sufficiently mastered by the authors. In addition, all study designs were included. An overview of the inclusion and exclusion criteria is shown in Table  2 . In order to calibrate the inclusion process, the researchers independently applied the inclusion and exclusion criteria to a selection of three hundred articles. The task was to include, or exclude, articles based on the title alone. The results were discussed by the researchers to check whether disagreement stayed within a maximum margin of 10%, a threshold agreed in advance. During this process, the inclusion and exclusion criteria were further refined (see Table  2 ). As disagreement exceeded this margin, a second round of calibration was performed on 50 articles, including both titles and abstracts. The disagreement rate was now only 4% and therefore all the remaining articles were distributed among the reviewers to be scored based on the title and abstract. After screening on title and abstract, 167 references remained and two key articles that were not found with the search were added. These articles were distributed among the researchers once more in order to read the full text. While reading the full texts, another three relevant articles were identified through the references and added too. This resulted in a total of 172 full text articles. Results from included articles were charted in a spreadsheet, which was tested by the researchers before use. When one of the reviewers had doubts about an article, it was read by a second reviewer and the outcomes were discussed until the two researchers reached agreement.
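
The 4% figure is simply the proportion of screening decisions on which the two reviewers differed. As an illustration only, with invented decisions rather than the review's actual screening data:

```python
# Hypothetical include/exclude decisions from two reviewers on the same 50 titles/abstracts.
reviewer_a = ["include" if i % 5 == 0 else "exclude" for i in range(50)]
reviewer_b = list(reviewer_a)
reviewer_b[3] = "include"   # two articles on which the reviewers disagree
reviewer_b[17] = "include"

disagreements = sum(a != b for a, b in zip(reviewer_a, reviewer_b))
rate = disagreements / len(reviewer_a)
print(f"Disagreement rate: {rate:.0%}")   # 2 of 50 -> 4%
```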

Data extraction

A spreadsheet was created to categorise the information that contributed to answering the research questions.

The information extracted from the articles was structured according to the type of relocation, including: relocating from the hospital to the GP, to care at home, to self-care, or to eHealth, and relocating from the GP to self-care, to care at home, or to eHealth. The difference between self-care and care at home is that self-care does not involve a healthcare provider, unlike care at home. Neither form involves eHealth. When an article was about eHealth, it was catalogued in the eHealth category. Articles that remained, of which there was only one, were placed within the category ‘other’.

The information extracted included factors that determined citizens’ attitudes towards relocating care. All of these factors were coded by highlighter and categorised. The categories were discussed within the research team. Subsequently, we identified the three most frequently occurring factors for each form of relocation.
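
Tallying the most frequent factors per form of relocation is a straightforward counting exercise; the sketch below illustrates one way it could be done, using invented extractions rather than the data charted in this review.

```python
from collections import Counter, defaultdict

# Hypothetical coded extractions: (relocation type, factor mentioned in an article).
extractions = [
    ("hospital to eHealth", "convenience"),
    ("hospital to eHealth", "quality of care concerns"),
    ("hospital to eHealth", "convenience"),
    ("hospital to self-care", "privacy"),
    ("hospital to self-care", "convenience"),
    ("hospital to self-care", "accessibility"),
    ("hospital to self-care", "privacy"),
]

counts = defaultdict(Counter)
for relocation, factor in extractions:
    counts[relocation][factor] += 1

# Report the top three factors per form of relocation.
for relocation, factor_counts in counts.items():
    print(relocation, "->", factor_counts.most_common(3))
```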

Furthermore, we extracted information regarding preferences for healthcare location in the articles. Citizens could have a preference for keeping care in its current location, relocating care, or a combination of both, suggesting that citizens may prefer a hybrid approach where some aspects of healthcare are relocated, while others remain in their current location. Citizens could also express equal preferences for both locations. In addition, we compared the outcomes of the one-armed, the two-armed, and the hypothetical studies, to see if there were major differences in preferences for healthcare location resulting from their methodological approaches. In the one-armed studies, care was relocated for all participants in the study [ 24 ]. In the two-armed studies, one group of participants had their care relocated while another group received care as usual; the outcomes of the two groups were then compared. Hypothetical studies presented scenarios without actual choices, asking citizens how they would feel if care were relocated. Two-armed studies are generally considered of higher quality than one-armed and hypothetical studies, due to the presence of both an experimental group and a control group, which increases their internal validity [ 25 ].

Search flow

A total of 19,587 references were identified from the databases, of which 8,507 were duplicates, as shown in Fig.  1 . At the end of the selection process, 70 full text articles were included. The characteristics of these studies are shown in Table  3 .

figure 1

Flowchart of the review process

The majority of studies of citizens’ perspectives on relocating care took place in the UK ( N  = 44), followed by the Netherlands ( N  = 13), and Denmark ( N  = 11). One study is from Spain and one from Estonia. Most studies are one-armed ( N  = 42), followed by two-armed ( n  = 19), and nine studies were hypothetical. While eight studies are from 2013, most studies were published quite recently in 2019 ( N  = 7), 2020 ( N  = 6), 2021 ( N  = 16), and 2022 ( N  = 9). Relocating care from the hospital to eHealth is the form of relocating that is most often examined within the studies identified ( N  = 28) [ 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 ]. This is followed by relocating from the hospital to self-care ( N  = 15) [ 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 ] and care at home ( N  = 13) [ 30 , 68 , 69 , 70 , 71 , 72 , 73 , 74 , 75 , 76 , 77 , 78 , 79 , 80 ]. Forms of relocating care that are not frequently studied include relocating from the hospital to the GP ( N  = 7) [ 16 , 69 , 81 , 82 , 83 , 84 , 85 ] and from the GP to self-care ( N  = 4) [ 86 , 87 , 88 , 89 ]. Five more forms of relocating are listed under the heading “other”. These include: relocating from the hospital to a community-based clinic [ 90 ]; from outpatient visits to a one-stop clinic [ 91 ]; nurse home visits that were replaced by eHealth [ 92 ]; hospital care relocated to a mobile chemotherapy unit [ 93 ]; and, care relocated from the GP to eHealth [ 94 ]. Most studies are about the relocation of care for oncology patients ( N  = 19), followed by citizens in general ( N  = 10), and cardiology patients ( N  = 8).

Which factors influence citizens’ attitudes towards relocating care?

Convenience.

The most frequently cited factors influencing citizens’ attitudes towards relocating care are shown in Table  4 . Convenience was most often reported, from the citizens’ perspective, as an advantage of relocating care. This was true for all forms of relocation [ 27 , 28 , 29 , 30 , 32 , 33 , 34 , 38 , 41 , 42 , 45 , 47 , 49 , 52 , 53 , 54 , 58 , 59 , 60 , 65 , 66 , 67 , 69 , 70 , 73 , 78 , 82 , 84 , 85 , 86 , 88 , 90 , 93 , 94 ]. Citizens think of relocating as convenient because in most cases it saves travel time [ 26 , 29 , 53 ]. It saves costs [ 26 , 69 ]. It avoids stress due to factors such as transport problems, busy traffic, travelling while you are sick, or long sojourns in waiting rooms [ 26 , 53 , 73 , 93 ]. When relocating to self-care it was very often mentioned that it is an advantage to have more flexibility [ 30 , 86 ]. Citizens can do a self-test whenever and wherever they want, without having to consider opening hours, for example [ 59 , 66 , 67 ]. Convenience was also mentioned as a reason for not wanting to relocate care. This factor was especially mentioned when relocating from the hospital or GP to self-care [ 59 , 60 , 86 ]. With regard to home dialysis, some citizens said that they did not have the space at home to do this. It was, therefore, not convenient [ 60 ]. In addition, for citizens living close to the hospital, self-care was sometimes more expensive and did not save time [ 59 , 86 ].

Familiarity

Familiarity was another factor which was reported as important to citizens regarding their attitude towards relocating care [ 29 , 30 , 31 , 32 , 33 , 58 , 61 , 67 , 68 , 69 , 70 , 73 , 74 , 77 , 83 , 84 , 85 , 86 , 90 , 94 ]. Some citizens feel more familiar with their GP than with a hospital specialist and would, therefore, want to relocate care [ 83 , 84 ]. Other citizens experience a sense of familiarity due to the environment in which care is provided. When receiving care at home, citizens feel more familiar, because they are in their own environment with their own support system [ 29 , 30 , 50 , 58 , 70 , 77 ]. In addition, when receiving care at home, the HCP enters the personal space of the patient. This, according to some of the patients, provided a better and more personal connection with the HCP. As shown in Table  4 , familiarity is also named as a reason not to want to relocate. While some citizens said that they had a better relationship with their GP, others said they were more familiar with the specialist so they would rather go there [ 85 ]. Some citizens thought that personal contact was reduced when using eHealth. They felt that it was more distant [ 31 , 33 , 36 , 47 , 51 ]. In addition, during telephone consultations, citizens did not feel a sense of familiarity if they had never seen the HCP before and therefore could not picture the face belonging to the voice. [ 29 ]. With regard to self-care, some citizens did not feel a sense of familiarity because this care is usually performed alone, while they preferred to have the support of a HCP [ 60 , 63 ].

Accessibility

The third most frequently mentioned factor influencing citizens’ perceptions of relocating care was accessibility. Citizens were more willing to relocate care when it shortened waiting times and thus improved accessibility [28, 29, 30, 45, 49, 54, 58, 82, 83, 84, 88, 90, 91, 93], for example when relocating from the hospital to the GP [82, 83, 84]. Regarding self-tests, citizens mentioned that access was very rapid: they can pick up a test and apply it directly, without having to make an appointment with an HCP, who is often not immediately available [30, 54, 55, 58]. In addition, a self-test often gives the result without delay [55, 59]. With regard to eHealth, citizens said that access to the HCP improved because they could contact them easily when they had questions [28, 49].

Patients have more control

Another advantage of relocating care mentioned by citizens is being more in control, especially when relocating care from the hospital to eHealth, self-care, or care at home [30, 54, 58, 60, 70, 73]. The sense of increased control can stem from two primary factors. Firstly, patients become more actively engaged in their healthcare, leading to a better understanding of their diagnoses and, consequently, greater control over their condition [38, 49, 53, 59, 86]. Secondly, citizens felt more involved in decision making about their healthcare, giving them the ability to influence what happens and when [49, 50, 59, 74]. This gave them the feeling of having more control over their lives.

Privacy

The last factor named as an advantage, but also as a disadvantage, of relocating care is privacy. Citizens who saw it as an advantage mentioned that there is more privacy at home, using eHealth or self-care, than in a hospital [53, 54, 55, 58, 60, 66, 69, 70, 74]. With regard to self-care, many of the articles concern self-tests for sexually transmitted infections or the self-administration of drugs at home in order to induce an abortion. Citizens indicated that having such tests carried out at a clinic can cause considerable embarrassment [54]; they might, for example, run into acquaintances [67]. Self-care, on the other hand, is more anonymous and thus offers more privacy [55]. However, privacy was also named as a disadvantage. Regarding eHealth, some citizens were concerned about whether the privacy of their data could be guaranteed [33]. In addition, some citizens said that it was hard to find a private space in their house during the COVID-19 crisis [30]. Furthermore, when care is given at home, some citizens disliked the fact that other family members might witness them being treated [69] or that caregivers have to enter their home, thus intruding on their privacy [70].

Quality of care

The most frequently mentioned reason for a negative attitude towards relocating care is that citizens have concerns about the quality of care once it is relocated, due to lower expertise of the HCP or insufficient quality of the instrument or self-test involved at the new location [28, 32, 33, 34, 36, 47, 51, 54, 55, 59, 60, 63, 65, 67, 69, 70, 73, 77, 82, 85, 86, 87, 90, 94]. Regarding relocating care to eHealth or self-care, a lack of trust in the eHealth technology [33, 34, 36, 47] or in a particular self-care device [54, 55, 59, 60, 63, 65, 67] was reported very often. Citizens fear technical problems or that important factors might be overlooked. Some citizens also do not feel that they have the right skills to use the new eHealth technology [36] or to perform self-care in the right way [54, 60, 65, 67]. Regarding care at home, citizens were concerned about the absence of constant surveillance and reduced contact with the doctor; moreover, they felt that the hospital is better equipped [77]. With regard to relocating from the hospital to the GP, some citizens thought that the specialist had more expertise, which was a reason for not wanting to relocate [82, 85].

No physical examination

Another reason for not wanting to relocate care is the absence of a physical examination. This reason was named many times with regard to relocating care from the hospital to eHealth [27, 29, 31, 34, 47, 51, 52] and from the GP to self-care [86, 89]. With regard to eHealth, some citizens said that they found it difficult because they were not able to demonstrate physical symptoms and found it hard to describe problems without seeing the HCP [31, 33].

Contact with others

The last factor frequently mentioned as a disadvantage of relocating care is less contact with peers. This aspect was mentioned most often with regard to relocating from the hospital to care at home [69, 70, 73]. Some citizens enjoyed going to the hospital because of the social interaction with other citizens and were afraid of social isolation [60].

What are citizens’ preferences regarding the location of care?

A total of 49 articles investigated citizens’ preferences regarding the location of healthcare. Their location preferences for each form of relocating care will be discussed below and are shown in Table  5 .

Within the articles about relocating from the hospital to eHealth, 23 out of 28 reported respondents’ preferences regarding the location of care. In ten articles there was a preference for eHealth [28, 32, 33, 34, 42, 44, 45, 46, 50, 53] and in six a preference for the hospital [26, 31, 36, 39, 43, 48]. In four articles, citizens expressed a wish for a combination of eHealth and face-to-face contact [37, 47, 49, 52]. In the remaining three articles, the preference for the hospital and for eHealth was equal [35, 41, 51].

Eight out of 15 articles about relocating from the hospital to self-care investigated citizens’ preferences for the location of care. In five articles citizens showed a preference for self-care [56, 57, 61, 64, 66] and in three for the hospital [55, 60, 65].

With regard to articles about relocating from the hospital to care at home, ten out of 13 articles investigated a preference for healthcare location. In eight articles, the participants had a preference for care at home [ 68 , 69 , 72 , 74 , 75 , 78 , 79 , 80 ]. In two articles, preferences for care at home and the hospital were equal [ 71 , 76 ]. There were no articles with a preference for the hospital.

Regarding relocating from the hospital to the GP, five out of seven articles investigated citizens’ preferences regarding healthcare location. In two articles, participants preferred the hospital over the GP [81, 85]. In one they preferred the GP [84], and in another, preferences were equal [16]. In the fifth study, citizens could choose between three locations: the hospital, the GP, or care at home. Here they preferred care at home, followed by care at the general practice [69].

Two out of four articles about relocating from the GP to self-care investigated a preference for a healthcare location. In one article, citizens preferred self-care [ 86 ], and in the other, they preferred the GP [ 89 ].

Within the category “other”, there were two articles which investigated a preference for a healthcare location. In the article about relocating from the hospital to one-and-a-half line care, citizens preferred one-and-a-half line care [ 91 ]. The last article was about nurse home visits that were relocated to eHealth. Here, citizens preferred eHealth over the nurse visits [ 93 ].

Most articles adopted a one-armed approach. Since two-armed articles are often of higher quality, we compared the results of the one-armed and the two-armed articles. In total there were 19 two-armed articles, of which 14 investigated a preference for healthcare location. In nine of these 14 articles citizens preferred relocating healthcare, in two they did not, and in the remaining three preferences were equal. Of the 35 one-armed articles that investigated healthcare preferences, 18 reported a preference for relocating healthcare. Thus, in both cases, just over half of the articles show a preference for relocating care. The hypothetical studies (N = 10) show a different picture: of the seven that investigated a preference, five found no preference for relocating care.

Discussion

This scoping review was conducted to provide insight into the factors that influence citizens’ attitudes towards relocating care. Seventy articles were included, and most of those found were about relocating care from the hospital to eHealth. Most of these eHealth articles were published in 2020 or later (N = 20); only eight were published in 2019 or earlier. This is likely due to COVID-19, which reached Europe in 2020 and required healthcare providers in many places to offer care online.

The first research question concerned which factors influence citizens’ attitudes towards relocating care. The most frequently reported factor for a positive attitude towards relocating care, according to citizens, is “convenience”, followed by “familiarity”. Other factors among the top three reasons for a positive attitude were “accessibility”, “patients have more control”, and “privacy”. The positive drivers for relocating care are almost the same for all forms of relocation. The two most mentioned factors for a negative attitude towards relocating care are, firstly, citizens’ concerns about the quality of care and, secondly, citizens feeling less familiar when care is relocated. Other reasons for a negative attitude towards relocating are “the lack of physical examination”, “contact with others”, “convenience”, and “privacy”.

The second research question concerned citizens’ preferences for healthcare location. In general, for the conditions and treatments covered in the articles, most citizens favoured relocating healthcare. With regard to care at home in particular, no articles were found in which citizens preferred the hospital over care at home. Moreover, eHealth and self-care are also carried out from home. Citizens thus prefer receiving care at home.

Not all articles investigated preferences for the location of healthcare, and of those which did, most were one-armed. However, there were no major differences found when comparing the outcomes of the one-armed and two-armed studies. This contrasted with the hypothetical studies, where citizens did not prefer relocating care in the majority of cases. This may be due to the fact that citizens are familiar with the current situation and do not know, or find it difficult to imagine, what a new situation will look like. Citizens may not want to relocate because familiarity is an important aspect of healthcare, as described earlier.

The articles found included a wide variety of conditions and phases of treatment. We would have preferred to distinguish between different conditions and treatment phases, as these aspects may determine the preference for healthcare location. For example, it might be the case that citizens would like to relocate follow-up cancer care to care at home, while keeping the treatment itself in the hospital. However, the large variation in conditions and phases of treatment resulted in a small N per condition or phase of treatment and this hampered further in-depth analysis.

Relocating care often involves not only a change of location but also changes in other aspects. For instance, the care provider may change, for example a telephone consultation with a nurse instead of a face-to-face appointment with the specialist in the hospital [32, 53]. In some cases, the purpose of the treatment changed as well, for example a telephone consultation intended to provide information and support patients, whereas a face-to-face consultation was more focused on looking for signs of recurrent disease [29]. All of these aspects together determine the preference for healthcare location; citizens do not base their preference on the location alone. It is, therefore, important to take all aspects into account, not only the geography, when investigating preferences for healthcare location.

Strengths and limitations

A strength of this scoping review is its broad search strategy, developed together with a medical information specialist. This resulted in over 11,000 references, all of which were assessed. However, the search strategy may not have been broad enough, as some articles were missed, including two of the 18 key articles. This was known beforehand, and we therefore investigated why the two key articles were not found. One key article was not found because we did not use the word “experience” [16], while the other focused on the terms “breast cancer”, “follow-up care”, and “healthcare models” [81], which we did not use in our search strategy. The words used in these two articles were not words we saw repeated in other relevant articles, and adding any of them yielded about 5,800 additional results in PubMed alone. We therefore chose to add the key articles manually and to leave these words out of the search string. All statements made in this article are based on the conditions and forms of care that recurred in the studies we found. There may be other forms of care that could be relocated that have not been discussed in this article.
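To illustrate this kind of trade-off, the sketch below estimates how many extra PubMed records a candidate term would add to an existing query before deciding whether to include it. It is a minimal example using Biopython’s Entrez interface; the query strings, candidate terms, and e-mail address are placeholders and are not the actual search string used in this review (that is given in Additional file 1).

```python
from Bio import Entrez

Entrez.email = "researcher@example.org"  # hypothetical contact address; NCBI requires one


def hit_count(query: str) -> int:
    """Return the number of PubMed records matching a query."""
    handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])


# Hypothetical base query and candidate terms, not the review's actual search string.
base_query = '("relocating care" OR "substitution of care") AND (perspective OR preference)'
candidate_terms = ['experience', '"follow-up care"']

baseline = hit_count(base_query)
for term in candidate_terms:
    widened = f"({base_query}) OR {term}"
    extra = hit_count(widened) - baseline
    print(f"Adding {term}: about {extra} additional records to screen")
```

A large jump in the count for little expected gain in relevant hits is the kind of evidence that can justify handling individual key articles manually instead of widening the string.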

Another limitation of this study is that the articles were not double-reviewed, because of the large number of references found. However, to calibrate the inclusion process, the researchers applied the inclusion and exclusion criteria to a selection of 350 articles. In addition, screening started at the abstract level rather than with titles, which is the usual procedure [23].

A limitation of a scoping review is that it analyses studies that use a range of data collection methods and techniques, which makes it more difficult to synthesise their results [23]. A strong point of this review is the comparison between one-armed and two-armed articles, which yielded approximately the same results.

Research implications

A knowledge gap we identified is that citizens’ perspectives on relocating care have received relatively little attention in the current literature. In particular, we found limited literature focusing on citizens’ perspectives on relocating care from the hospital to the GP. This gap is significant, because this is one of the forms of relocating that governments consider first in order to limit healthcare costs [6, 7, 8]. There are several studies on this subject, but they do not involve the citizens’ perspective. Despite the importance of including citizens’ perspectives in policy-making processes, this perspective often remains underrepresented in the literature [11]. The World Health Organization (WHO) emphasizes that citizen engagement can enhance societal trust and lead to more effective public policies.

Another knowledge gap we identified is that insufficient research has been done, with regard to citizens’ perspectives on relocating care, on different treatment phases and conditions. To fill this gap, future research should examine in more depth how the factors behind particular attitudes towards relocating care, and preferences for the location of care, relate to different conditions and treatment phases, including diagnosis, treatment, and aftercare.

Our study also has practical implications that can inform healthcare policy and decision-making. Firstly, the factors we have identified can serve as conditions that governments can use to improve acceptance among citizens regarding healthcare location: conditions that have to be met and that can be used to direct citizens to a particular location. Secondly, it is evident from our findings that citizens generally prefer receiving care at home. This preference presents an opportunity for governments to invest in home-based healthcare services, potentially leading to higher citizen satisfaction and more cost-effective healthcare delivery.

Conclusions

Positive factors influencing citizens’ attitudes towards relocating care are almost the same for all forms of this development, with convenience being the most important. The most often reported reason for a negative attitude towards relocating care is concern about the quality of care. The factors found are very important when determining a citizen’s preference for a particular healthcare location. The majority of studies in this review reported that citizens are in favour of relocating care, especially to care at home. Several knowledge gaps were identified; strikingly, very few studies on relocation from the hospital to the GP were found.

Availability of data and materials

Not applicable. The studies we used are accessible to everyone. All studies used are included in the references.

Abbreviations

GP: General practitioner

HCP: Healthcare provider

IUD: Intra-uterine device

Organisation for Economic Co-operation and Development. OECD work on health. Paris: OECD; 2021. p. 1–44.


Rechel B, Doyle Y, Grundy E, McKee M. How can health systems respond to population ageing? Technical report. Copenhagen: World Health Organization; 2009. Report No.: 10.

Rudnicka E, Napierała P, Podfigurna A, Męczekalski B, Smolarczyk R, Grymowicz M. The World Health Organization (WHO) approach to healthy ageing. Maturitas. 2020;139:6–11.


Liu JX, Goryakin Y, Maeda A, Bruckner T, Scheffler R. Global health workforce labor market projections for 2030. Hum Resour Health. 2017;15(1):11.

Boniol M, Kunjumen T, Nair TS, Siyam A, Campbell J, Diallo K. The global health workforce stock and distribution in 2020 and 2030: a threat to equity and ‘universal’ health coverage? BMJ Glob Health. 2022;7(6):e009316.

Organisation for Economic Co-operation and Development. Tackling wasteful spending on Health. Paris: OECD; 2017.

Taskforce De Juiste Zorg Op de Juiste Plek. De Juiste zorg op de juiste plek rapport: Wie durft? Den Haag: Ministerie van Volksgezondheid, Welzijn en Sport; 2018.

Sibbald B, McDonald R, Roland M. Shifting care from hospitals to the community: a review of the evidence on quality and efficiency. J Health Serv Res Policy. 2007;12(2):110–7.


Liemburg GB, Korevaar JC, van Zomeren WT, Berendsen AJ, Brandenbarg D. Follow-up of curatively treated cancer in primary care: a qualitative study of the views of Dutch GPs. Br J Gen Pract. 2022;72(721):e592–600.

Gupta Strategists. No place like home. An analysis of the growing movement away from hospitals towards providing medical care to patients in their own homes. Amsterdam: Gupta Strategists; 2016.

World Health Organization. Implementing citizen engagement within evidence-informed policy-making: an overview of purpose and methods. Geneva: World Health Organization; 2022.

Sofaer S, Firminger K. Patient perceptions of the quality of health services. Annu Rev Public Health. 2005;26:513–59.

Van Hoof SJM, Kroese MEAL, Spreeuwenberg MD, Elissen AMJ, Meerlo RJ, Hanraets MMH, et al. Substitution of hospital care with primary care: defining the conditions of primary Care Plus. Int J Integr Care. 2016;16(1):12.

Firet L, de Bree C, Verhoeks CM, Teunissen DA, Lagro-Janssen AL. Mixed feelings: general practitioners’ attitudes towards eHealth for stress urinary incontinence-a qualitative study. BMC Fam Pract. 2019;20:1–8.


Noels EC, Wakkee M, van den Bos RR, Bindels PJE, Nijsten T, Lugtenberg M. Substitution of low-risk skin cancer hospital care towards primary care: a qualitative study on views of general practitioners and dermatologists. Plos One. 2019;14(3): e0213595.


Van Hoof SJ, Spreeuwenberg MD, Kroese ME, Steevens J, Meerlo RJ, Hanraets MM, et al. Substitution of outpatient care with primary care: a feasibility study on the experiences among general practitioners, medical specialists and patients. BMC Fam Pract. 2016;17(1):1–9.

Crawford DC, Li CS, Sprague S, Bhandari M. Clinical and cost implications of inpatient versus outpatient orthopedic surgeries: a systematic review of the published literature. Orthop Rev (Pavia). 2015;7(4):6177.


Calkins TE, Mosher ZA, Throckmorton TW, Brolin TJ. Safety and cost effectiveness of outpatient total shoulder Arthroplasty: a systematic review. J Am Acad Orthop Surg. 2022;30(2):e233–41.

Qin C, Dekker RG, Blough JT, Kadakia AR. Safety and outcomes of inpatient compared with outpatient surgical procedures for ankle fractures. J Bone Joint Surg Am. 2016;98(20):1699–705.

van Hoof SJM, Quanjel TCC, Kroese M, Spreeuwenberg MD, Ruwaard D. Substitution of outpatient hospital care with specialist care in the primary care setting: a systematic review on quality of care, health and costs. Plos One. 2019;14(8):e0219957.


Cryer L, Shannon SB, Van Amsterdam M, Leff B. Costs for “hospital at home” patients were 19 percent lower, with equal or better outcomes compared to similar inpatients. Health Aff (Millwood). 2012;31(6):1237–43.

Kringos D, Boerma W, Bourgueil Y, Cartier T, Dedeu T, Hasvold T, et al. The strength of primary care in Europe: an international comparative study. Br J Gen Pract. 2013;63(616):e742–50.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Evans SR. Clinical trial structures. J Exp Stroke Transl Med. 2010;3(1):8–18.

Malay S, Chung KC. The choice of controls for providing validity and evidence in clinical research. Plast Reconstr Surg. 2012;130(4):959–65.

Abdelmotagly Y, Noureldin M, Paramore L, Kummar R, Nedas T, Hindley R, et al. The impact of remote urology outpatient clinics during the COVID-19 pandemic. J Endoluminal Endourol. 2021;4(3):e17–25.

Barsom EZ, Jansen M, Tanis PJ, van de Ven AWH, van Blussé M, Buskens CJ, et al. Video consultation during follow up care: effect on quality of care and patient- and provider attitude in patients with colorectal cancer. Surg Endosc. 2021;35(3):1278–87.

Bager P, Hentze R, Nairn C. Outpatients with inflammatory bowel disease (IBD) strongly prefer annual telephone calls from an IBD nurse instead of outpatient visits. Gastroenterol Nurs. 2013;36(2):92–6.

Beaver K, Williamson S, Chalmers K. Telephone follow-up after treatment for breast cancer: views and experiences of patients and specialist breast care nurses. J Clin Nurs. 2010;19(19–20):2916–24.

Boydell N, Reynolds-Wright JJ, Cameron ST, Harden J. Women’s experiences of a telemedicine abortion service (up to 12 weeks) implemented during the coronavirus (COVID‐19) pandemic: a qualitative evaluation. BJOG: Int J Obstet Gynecol. 2021;128(11):1752–61.


Brewer A, Coleman V. Adaptation of a stoma care pathway and use of telephone clinics during the pandemic: patient experience survey. Br J Nurs. 2022;31(1):8–14.

Casey R, Powell L, Braithwaite M, Booth C, Sizer B, Corr J. Nurse-led phone call follow-up clinics are effective for patients with prostate cancer. J Patient Exp. 2017;4(3):114–20.

Damery S, Jones J, O’Connell Francischetto E, Jolly K, Lilford R, Ferguson J. Remote consultations versus standard face-to-face appointments for liver transplant patients in routine hospital care: feasibility randomized controlled trial of myVideoClinic. J Med Internet Res. 2021;23(9):e19232.

Duncan H, Russell RK. Role for structured telephone clinics in paediatric gastroenterology: reflections, lessons and patient feedback. BMJ Open Gastroenterol. 2019;6(1):e000245.

Hansen JB, Sørensen JF, Glassou EN, Homilius M, Hansen TB. Reducing patient–staff contact in fast-track total hip arthroplasty has no effect on patient-reported outcomes, but decreases satisfaction amongst patients with self-perceived complications: analysis of 211 patients. Acta Orthop. 2022;93:264.

Heeno E, Biesenbach I, Englund C, Lund M, Toft A, Lund L. Patient perspective on telemedicine replacing physical consultations in urology during the COVID-19 lockdown in Denmark. Scandinavian J Urol. 2021;55(3):177–83.

Jones MT, Arif R, Rai A. Patient experiences with telemedicine in a national health service rheumatology outpatient department during coronavirus disease-19. J Patient Exp. 2021;8:23743735211034972.


Khan Z, Kershaw V, Madhuvrata P, Radley S, Connor M. Patient experience of telephone consultations in gynaecology: a service evaluation. BJOG: Int J Obstet Gynecol. 2021;128(12):1958–65.

Kimman ML, Dellaert BG, Boersma LJ, Lambin P, Dirksen CD. Follow-up after treatment for breast cancer: one strategy fits all? An investigation of patient preferences using a discrete choice experiment. Acta Oncol. 2010;49(3):328–37.

Kjeldsted E, Lindblad KV, Bødtcher H, Sørensen DM, Rosted E, Christensen HG, et al. A population-based survey of patients’ experiences with teleconsultations in cancer care in Denmark during the COVID-19 pandemic. Acta Oncol. 2021;60(10):1352–60.


Knudsen LR, de Thurah A, Lomborg K. Experiences with telehealth followup in patients with rheumatoid arthritis: a qualitative interview study. Arthritis Care Res. 2018;70(9):1366–72.

Lee J, Hynes C, Humphries G, Thumbikat P. Pilot study to explore the use of video consultation for outpatient follow up of spinal cord injury (SCI) patients. Clin Rehabil. 2017;31(12):1690.

Lim K, Neal-Smith G, Mitchell C, Xerri J, Chuanromanee P. Perceptions of the use of artificial intelligence in the diagnosis of skin cancer: an outpatient survey. Clin Exp Dermatol. 2022;47(3):542–6.

Lo WB, Herbert K, Rodrigues D. Clinical effectiveness of and family experience with telephone consultation in a regional pediatric neurosurgery center in the United Kingdom. J Neurosurg Pediatr. 2021;28(4):483–9.

Patel S, Douglas-Moore J. A reflection on an adapted approach from face‐to‐face to telephone consultations in our Urology Outpatient Department during the COVID‐19 pandemic–a pathway for change to future practice? BJU Int. 2020;126(3):339–41.

Rovira A, Brar S, Munroe-Gray T, Ofo E, Rodriguez C, Kim D. Telephone consultation for two-week-wait ENT and head and neck cancer referrals: initial evaluation including patient satisfaction. J Laryngology Otology. 2022;136(7):615–21.

Singh N, Datta M. Single-centre telephone survey on patients’ perspectives regarding remote paediatric outpatient consultations in a district general hospital. BMJ Paediatrics Open. 2020;4(1):e000885.

Stavrou M, Lioutas E, Lioutas J, Davenport RJ. Experiences of remote consulting for patients and neurologists during the COVID-19 pandemic in Scotland. BMJ Neurol Open. 2021;3(2):e000173.

Trace S, Collinson A, Searle A, Lithander F. Using videoconsultations to deliver dietary advice to children with chronic kidney disease: a qualitative study of parent and child perspectives. J Hum Nutr Dietetics. 2020;33(6):881–9.

Tyler JM, Pratt AC, Wooster J, Vasilakis C, Wood RM. The impact of increased outpatient telehealth during COVID-19: retrospective analysis of patient survey and routine activity data from a major healthcare system in England. Int J Health Plann Manag. 2021;36(4):1338–45.

Van Erkel FM, Pet MJ, Bossink EH, Van de Graaf CF, Hodes MT, Van Ogtrop SN, et al. Experiences of patients and health care professionals on the quality of telephone follow-up care during the COVID-19 pandemic: a large qualitative study in a multidisciplinary academic setting. BMJ Open. 2022;12(3):e058361.

Watters C, Miller B, Kelly M, Burnay V, Karagama Y, Chevretton E. Virtual voice clinics in the COVID-19 era: have they been helpful? Eur Arch Otorhinolaryngol. 2021;278:4113–8.

Williamson S, Chalmers K, Beaver K. Patient experiences of nurse-led telephone follow-up following treatment for colorectal cancer. Eur J Oncol Nurs. 2015;19(3):237–43.

Aicken CR, Fuller SS, Sutcliffe LJ, Estcourt CS, Gkatzidou V, Oakeshott P, et al. Young people’s perceptions of smartphone-enabled self-testing and online care for sexually transmitted infections: qualitative interview study. BMC Public Health. 2016;16(1):1–11.

Baraitser P, Brown KC, Gleisner Z, Pearce V, Kumar U, Brady M. ‘Do it yourself’ sexual health care: the user experience. Sex Health. 2011;8(1):23–9.

Boons CC, Timmers L, Janssen JJ, Swart EL, Hugtenburg JG, Hendrikse NH. Feasibility of and patients’ perspective on nilotinib dried blood spot self-sampling. Eur J Clin Pharmacol. 2019;75:825–9.

Bundgaard JS, Raaschou-Pedersen DT, Todsen T, Ringgaard A, Torp-Pedersen C, Von Buchwald C, et al. Danish citizens’ preferences for at-home oropharyngeal/nasal SARS-CoV-2 specimen collection. Int J Infect Dis. 2021;109:195–8.

Cameron S, Glasier A, Dewart H, Johnstone A. Women’s experiences of the final stage of early medical abortion at home: results of a pilot survey. BMJ Sex Reprod Health. 2010;36(4):213–6.

Grogan A, Coughlan M, Prizeman G, O’Connell N, O’Mahony N, Quinn K, et al. The patients’ perspective of international normalized ratio self-testing, remote communication of test results and confidence to move to self-management. J Clin Nurs. 2017;26(23–24):4379–89.

Haroon S, Griva K, Davenport A. Factors affecting uptake of home hemodialysis among self-care dialysis unit patients. Hemodial Int. 2020;24(4):460–9.

Hope J. A patient perspective on the barriers to home dialysis. J Ren care. 2013;39(S1):3–8.


Hoyos J, Maté T, Guerras J-M, Donat M, Agustí C, Kuske M, et al. Preference towards HIV Self-Testing above other testing options in a sample of men who have sex with men from five European countries. Int J Environ Res Public Health. 2021;18(9):4804.

Den Oudendammer WM, Broerse JE. Towards a decision aid for self-tests: users’ experiences in the Netherlands. Health Expect. 2019;22(5):983–92.

Tompson AC, Ward AM, McManus RJ, Perera R, Thompson MJ, Heneghan CJ, et al. Acceptability and psychological impact of out-of-office monitoring to diagnose hypertension: an evaluation of survey data from primary care patients. Br J Gen Pract. 2019;69(683):e389–97.

Tonna A, Anthony G, Tonna I, Paudyal V, Forbes-McKay K, Laing R, et al. Home self-administration of intravenous antibiotics as part of an outpatient parenteral antibiotic therapy service: a qualitative study of the perspectives of patients who do not self-administer. BMJ Open. 2019;9(1):e027475.

Veerus P, Hallik R, Jänes J, Jõers K, Paapsi K, Laidra K, et al. Human papillomavirus self-sampling for long-term non-attenders in cervical cancer screening: a randomised feasibility study in Estonia. J Med Screen. 2022;29(1):53–60.

Witzel T, Bourne A, Burns F, Rodger A, McCabe L, Gabriel M, et al. HIV self-testing intervention experiences and kit usability: results from a qualitative study among men who have sex with men in the SELPHI (Self‐Testing Public Health Intervention) randomized controlled trial in England and Wales. HIV Med. 2020;21(3):189–97.

Bendien S, van Leeuwen M, Lau H, Ten Brinke A, Visser L, de Koning E, et al. Home-based intravenous treatment with reslizumab for severe asthma in the Netherlands–An evaluation. Respir Med. 2022;194:106776.

Corrie P, Moody A, Armstrong G, Nolasco S, Lao-Sirieix S, Bavister L, et al. Is community treatment best? A randomised trial comparing delivery of cancer treatment in the hospital, home and GP surgery. Br J Cancer. 2013;109(6):1549–55.

Dismore LL, Echevarria C, Van Wersch A, Gibson J, Bourke S. What are the positive drivers and potential barriers to implementation of hospital at home selected by low-risk DECAF score in the UK: a qualitative study embedded within a randomised controlled trial. BMJ Open. 2019;9(4):e026609.

Goossens LM, Utens CM, Smeenk FW, Donkers B, van Schayck OC, Rutten-van Mölken MP. Should I stay or should I go home? A latent class analysis of a discrete choice experiment on hospital-at-home. Value Health. 2014;17(5):588–96.

Hansson H, Kjaergaard H, Johansen C, Hallström I, Christensen J, Madsen M, et al. Hospital-based home care for children with cancer: feasibility and psychosocial impact on children and their families. Pediatr Blood Cancer. 2013;60(5):865–72.

Hansson H, Kjaergaard H, Schmiegelow K, Hallström I. Hospital-based home care for children with cancer: a qualitative exploration of family members’ experiences in Denmark. Eur J Cancer Care. 2012;21(1):59–66.

Jepsen LØ, Høybye MT, Hansen DG, Marcher CW, Friis LS. Outpatient management of intensively treated acute leukemia patients—the patients’ perspective. Support Care Cancer. 2016;24:2111–8.

Lohr PA, Wade J, Riley L, Fitzgibbon A, Furedi A. Women’s opinions on the home management of early medical abortion in the UK. BMJ Sex Reprod Health. 2010;36(1):21–5.

Rosted E, Aabom B, Hølge-Hazelton B, Raunkiær M. Comparing two models of outpatient specialised palliative care. BMC Palliat Care. 2021;20(1):1–13.

Schiff R, Oyston M, Quinn M, Walters S, McEnhill P, Collins M. Hospital at home: another piece of the armoury against COVID-19. Future Healthc J. 2022;9(1):90–5.

Uitdehaag MJ, Van Putten PG, Van Eijck CH, Verschuur EM, Van der Gaast A, Pek CJ, et al. Nurse-led follow-up at home vs. conventional medical outpatient clinic follow-up in patients with incurable upper gastrointestinal cancer: a randomized study. J Pain Symptom Manag. 2014;47(3):518–30.

Utens CM, Goossens LM, Van Schayck OC, Rutten-van Mölken MP, Van Litsenburg W, Janssen A, et al. Patient preference and satisfaction in hospital-at-home and usual hospital care for COPD exacerbations: results of a randomised controlled trial. Int J Nurs Stud. 2013;50(11):1537–49.

Van Ramshorst J, Duffels M, De Boer S, Bos-Schaap A, Drexhage O, Walburg S, et al. Connected care for endocarditis and heart failure patients: a hospital-at-home programme. Neth Heart J. 2022;30(6):319–27.

Baena-Cañada JM, Ramirez-Daffos P, Cortes-Carmona C, Rosado-Varela P, Nieto-Vera J, Benitez-Rodriguez E. Follow-up of long-term survivors of breast cancer in primary care versus specialist attention. Fam Pract. 2013;30(5):525–32.

Van Bodegom-Vos L, De Jong JD, Spreeuwenberg P, Curfs EC, Groenewegen PP. Are patients’ preferences for shifting services from medical specialists to general practitioners related to the type of medical intervention? Qual Prim Care. 2013;21(2):81–95.

Pollard L, Rogers S, Shribman J, Sprigings D, Sinfield P. A study of role expansion: a new GP role in cardiology care. BMC Health Serv Res. 2014;14(1):1–10.

Milosevic S, Joseph-Williams N, Pell B, Cain E, Hackett R, Murdoch F, et al. Managing lower urinary tract symptoms in primary care: qualitative study of GPs’ and patients’ experiences. Br J Gen Pract. 2021;71(710):e685–92.

Wildeboer JA, Van de Ven ART, De Boer D. Substitution of care for chronic heart failure from the hospital to the general practice: patients’ perspectives. BMC Fam Pract. 2018;19(1):8.

Cottrell E, McMillan K, Chambers R. A cross-sectional survey and service evaluation of simple telehealth in primary care: what do patients think? BMJ Open. 2012;2(6):e001392.

Scott A, Jones C. An exploration of the attitudes and perceptions of the UK public towards self-care for minor ailments. Br J Nurs. 2020;29(1):44–9.

McAteer A, Yi D, Watson V, Norwood P, Ryan M, Hannaford PC, et al. Exploring preferences for symptom management in primary care: a discrete choice experiment using a questionnaire survey. Br J Gen Pract. 2015;65(636):e478–88.

Fletcher B, Hinton L, McManus R, Rivero-Arias O. Patient preferences for management of high blood pressure in the UK: a discrete choice experiment. Br J Gen Pract. 2019;69(686):e629–37.

Heath G, Greenfield S, Redwood S. The meaning of ‘place’in families’ lived experiences of paediatric outpatient care in different settings: a descriptive phenomenological study. Health Place. 2015;31:46–53.

King KE. Patient satisfaction in a one-stop Haematuria clinic and urology outpatients: a comparison of clinics. Int J Urol Nurs. 2016;10(3):127–36.

Fitzsimmons DA, Thompson J, Bentley CL, Mountain GA. Comparison of patient perceptions of Telehealth-supported and specialist nursing interventions for early stage COPD: a qualitative study. BMC Health Serv Res. 2016;16:1–12.

Mitchell T. Patients’ experiences of receiving chemotherapy in outpatient clinic and/or onboard a unique nurse-led mobile chemotherapy unit: a qualitative study. Eur J Cancer Care. 2013;22(4):430–9.

Cook EJ, Randhawa G, Large S, Guppy A, Chater AM, Ali N. Barriers and facilitators to using NHS Direct: a qualitative study of users’ and non-users’. BMC Health Serv Res. 2014;14(1):1–12.


Acknowledgements

We would like to thank Linda Schoonmade, medical information specialist, for her contribution to this research in helping develop the search strategy.

Funding

The authors received no specific funding for this work.

Author information

Authors and Affiliations

Nivel, Netherlands Institute for Health Services Research, Utrecht, the Netherlands

L. J. Damen, L. H. D. Van Tuyl, J. C. Korevaar, B. J. Knottnerus & J. D. De Jong

CAPHRI, Maastricht University, PO Box 616, 6200 MD Maastricht, Maastricht, the Netherlands

J. D. De Jong

The Hague University of Applied Sciences, The Hague, the Netherlands

J. C. Korevaar


Contributions

The selection process of articles was performed by all authors. L.D. wrote the main manuscript text. All authors reviewed and edited the manuscript. L.T., J.J., B.K. and J.K. supervised.

Corresponding author

Correspondence to L. J. Damen.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix A.

Search string PubMed.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Damen, L.J., Van Tuyl, L.H.D., Korevaar, J.C. et al. Citizens’ perspectives on relocating care: a scoping review. BMC Health Serv Res 24, 202 (2024). https://doi.org/10.1186/s12913-024-10671-3


Received: 16 June 2023

Accepted: 01 February 2024

Published: 14 February 2024

DOI: https://doi.org/10.1186/s12913-024-10671-3


Keywords

  • Relocating care
  • Citizens’ perspectives
  • Primary care
  • Health policy



  • Systematic review
  • Open access
  • Published: 12 February 2024

Exploring the role of professional identity in the implementation of clinical decision support systems—a narrative review

  • Sophia Ackerhans   ORCID: orcid.org/0009-0005-9269-6854 1 ,
  • Thomas Huynh 1 ,
  • Carsten Kaiser 1 &
  • Carsten Schultz 1  

Implementation Science volume  19 , Article number:  11 ( 2024 ) Cite this article

516 Accesses

3 Altmetric

Metrics details

Clinical decision support systems (CDSSs) have the potential to improve quality of care, patient safety, and efficiency because of their ability to perform medical tasks in a more data-driven, evidence-based, and semi-autonomous way. However, CDSSs may also affect the professional identity of health professionals. Some professionals might experience these systems as a threat to their professional identity, as CDSSs could partially substitute clinical competencies, autonomy, or control over the care process. Other professionals may experience an empowerment of their role in the medical system. The purpose of this study is to uncover the role of professional identity in CDSS implementation and to identify core human, technological, and organizational factors that may determine the effect of CDSSs on professional identity.

We conducted a systematic literature review and included peer-reviewed empirical studies from two electronic databases (PubMed, Web of Science) that reported on key factors to CDSS implementation and were published between 2010 and 2023. Our explorative, inductive thematic analysis assessed the antecedents of professional identity-related mechanisms from the perspective of different health care professionals (i.e., physicians, residents, nurse practitioners, pharmacists).

One hundred thirty-one qualitative, quantitative, or mixed-method studies from over 60 journals were included in this review. The thematic analysis found three dimensions of professional identity-related mechanisms that influence CDSS implementation success: perceived threat or enhancement of professional control and autonomy, perceived threat or enhancement of professional skills and expertise, and perceived loss or gain of control over patient relationships. At the technological level, the most common issues were the system’s ability to fit into existing clinical workflows and organizational structures, and its ability to meet user needs. At the organizational level, time pressure and tension, as well as internal communication and involvement of end users, were most frequently reported. At the human level, individual attitudes and emotional responses, as well as familiarity with the system, most often influenced the CDSS implementation. Our results show that professional identity-related mechanisms are driven by these factors and influence CDSS implementation success. The perception of the change of professional identity is influenced by the user’s professional status and expertise and improves over the course of implementation.

This review highlights the need for health care managers to evaluate perceived professional identity threats to health care professionals across all implementation phases when introducing a CDSS and to consider their varying manifestations among different health care professionals. Moreover, it highlights the importance of innovation and change management approaches, such as involving health professionals in the design and implementation process to mitigate threat perceptions. We provide future areas of research for the evaluation of the professional identity construct within health care.

Peer Review reports

Contributions to the literature

We provide a comprehensive literature review and narrative synthesis of the role of professional identity in CDSS implementation among diverse health care professionals and identify human, technological, and organizational determinants that influence professional identity and implementation.

The review shows that a perceived threat to professional identity plays a significant role in explaining failures of CDSS implementation. As such, our study highlights the need to recognize significant challenges related to professional identity in the implementation of CDSS and similar technologies. A better understanding and awareness of individual barriers to CDSS implementation among health professionals can promote the diffusion of such data-driven tools in health care.

This narrative synthesis maps, interconnects, and reinterprets existing empirical research and provides a foundation for further research to explore the complex interrelationships and influences of perceived professional identity-related mechanisms among health care professionals in the context of CDSS implementations.

Health care organizations increasingly implement clinical decision support systems (CDSSs) due to rising treatment costs and health care professional staff shortages [ 1 , 2 ]. CDSSs provide passive and active referential information, computer-based order sets, reminders, alerts, and patient-specific data to health care professionals at the point of care by matching patient characteristics to a computerized knowledge base [ 1 , 3 , 4 ]. These systems complement existing electronic health record (EHR) systems [ 5 ] and support various functional areas of medical care, such as preventative health, diagnosis, therapy, and medication [ 6 , 7 ]. Research has shown that CDSSs can improve patient safety and quality of care [ 8 , 9 , 10 ] by preventing medication errors and enhancing decision-making quality [ 11 ]. However, despite their potential benefits, their successful implementation into the clinical workflow remains low [ 1 , 12 ]. To facilitate CDSS acceptance and minimize user resistance, it is crucial to understand the factors affecting implementation success and identify the sources of resistance among the users [ 1 , 13 , 14 ].

In the health care innovation management and implementation science literature, a range of theoretical approaches have been used to examine the implementation and diffusion of health care information technologies. Technology acceptance theories focus on key determinants of individual technology adoption, such as ease of use, perceived usefulness, or performance expectancy of the technology itself [15, 16, 17]. Organizational theories emphasize the importance of moving beyond an exclusive focus on the acceptance of technology by individuals. Instead, they advocate for examining behaviors and decisions with a focus on organizational structures and processes, cultural and professional norms, and social and political factors such as policies, laws, and regulations [18, 19]. Other studies analyze the implementation of new technologies in health care from a behavioral theory perspective [20] and propose frameworks to explain how and why resistance emerges among users, which may have cognitive, affective, social, or environmental origins [13, 21, 22]. For example, the Theoretical Domains Framework has been applied to the behavior of health care professionals and serves as the basis for studies identifying influences on the implementation of new medical technologies, processes, or guidelines [21, 23]. Other, more holistic implementation frameworks, such as the Nonadoption, Abandonment, Scale-up, Spread and Sustainability framework, identify determinants as part of a complex system to facilitate CDSS implementation efforts across health care settings [13].

However, these theoretical approaches do not sufficiently take into account the unique organizational and social system in hospitals, which is characterized by strong hierarchies and the socialization of physicians into isolated structures and processes, making CDSS implementation particularly difficult [5, 24, 25]. Health care professionals are considered to have an entrenched professional identity, characterized by the acquisition of a high level of expertise and knowledge over a long period of time, as well as by their decision-making authority and autonomy in clinical interventions. Defined roles and structures of the different professional groups in medical organizations help to manage the multitude of tasks under high time pressure [26]. In addition, health care professionals bear a high degree of responsibility for ensuring medical quality and patient well-being [27]. Changing their professional identity is particularly difficult, as they work in organizational contexts with high levels of inertia and long-lived core values based on established practices and routines [27]. This resilience of health care professionals’ identity makes it particularly difficult to implement new technologies into everyday medical practice [28].

By integrating existing evidence into an individual physician’s decision-making processes, CDSSs carry the disruptive potential to undermine existing, highly formalized clinical knowledge and expertise and professional decision-making autonomy [ 5 , 24 , 29 , 30 ]. Research has shown that health professionals may perceive new technologies, such as CDSSs, as a threat to their professional identity and draw potential consequences for themselves and their professional community, such as the change of established organizational hierarchies, loss of control, power, status, and prestige [ 31 , 32 , 33 ]. Nevertheless, other studies have shown that health professionals view CDSSs as tools that increase their autonomy over clinical decisions and improve their relationship with patients [ 34 , 35 ]. In addition, these consequences may vary widely by country, professional status, and medical setting. As a result, the use and efficacy of CDSSs differ around the world [ 24 ]. We therefore suggest that a better understanding of the identity-undermining or identity-enhancing consequences of CDSSs is needed. Despite growing academic interest, there is surprisingly scant research on the role of perceived identity threats and enhancements across different professional hierarchies during CDSS implementation and how they relate to other human, technological, and organizational influencing factors [ 5 , 36 , 37 ].

Therefore, the purpose of this narrative review is to analyze the state of knowledge on the individual, technological, and organizational circumstances that lead various health professionals to perceive CDSSs as a threat to or an enhancement of their professional identity. In doing so, this study takes an exploratory approach and determines human, organizational, and technological factors for the successful implementation of CDSSs. Our study extends the current knowledge of CDSS implementation by deconstructing professional identity-related mechanisms and identifying the antecedents of these perceived threats and enhancements. It addresses calls for research to explore identity theory and social evaluations in the context of new system implementation [5, 38, 39] by aiming to answer the following research questions: What are the human, technological, and organizational factors that lead different health care professionals to perceive a CDSS as a threat to or an enhancement of their professional identity? And how do perceptions of threat to and enhancement of professional identity influence CDSS implementation?

This study is designed to guide medical practice, health IT providers, and health policy in their understanding of the mechanisms that lead to conflicts between health professionals’ identity and CDSS implementation. It is intended to identify practices that may support the implementation and long-term use of CDSSs. By narratively merging insights and underlying concepts from existing literature on innovation management, implementation science, and identity theory with the findings of the empirical studies included in this review, we aim to provide a comprehensive framework that can effectively guide further research on the implementation of CDSSs.

Understanding professional identity

Following recent literature, professional identity refers to an individual’s self-perception and experiences as a member of a profession and plays a central role in how professionals interpret and act in their work situations [25, 37, 40, 41, 42]. It is closely tied to a sense of belonging to a professional group and to identification with the roles and responsibilities associated with that occupation. Professionals typically adhere to a set of ethical principles and values that are integral to their professional identity and guide their behavior and decision-making. They are expected to have specialized knowledge and expertise in their field. In return, they are granted a high degree of self-efficacy, autonomy, and ability to act in carrying out their tasks [25, 43]. In addition, professionals make active use of their identities in order to define and change situations. Self-continuity and self-esteem encourage these professionals to align their standards of identification with the perceptions of others and themselves [44]. Many professions have formal organizations or associations that promote and regulate their shared professional identity [45]. Membership in these associations, and adherence to their standards and to a shared culture within the field, including common rituals, practices, and traditions, may reinforce professional identity [33, 36, 45].

Studies in the field of health care innovation management and implementation science have reported a number of professional identity conflicts that shape individual behavioral responses to change and innovation [5, 24, 33, 36, 45, 46]. The first set of conflicts relates to individual factors and expectations, such as personality traits, cognitive style, demographics, and education. For example, user perception of a new technology can be influenced by professional self-efficacy, which can be described as a perceived feeling of competence, control, and ability to perform [47]. Studies have shown that innovations with a negative impact on an individual’s sense of efficacy tend to be perceived as threatening, resulting in a lower likelihood of successful implementation. Users who did not believe in their ability to use the new system felt uncomfortable and unconfident in the workplace and were more likely to resist it [48, 49].

The second set of studies relates professional identity to sense-making, which involves the active process of acquiring knowledge and comprehending change based on existing professional identities as frames of reference [50]. For example, Jensen and Aanestad [51] showed that health care professionals endorsed the implementation of an EHR system only if it was perceived to be congruent with their own role and the physician’s practice, rather than focusing on functional improvements that the system could have provided. Bernardi and Exworthy [52] found that health care professionals with hybrid roles, bearing both clinical and managerial responsibilities, use their social position to convince health care professionals to adopt medical technologies only when they address the concerns of health care professionals.

The final set of studies addresses struggles related to a disruption of structures and processes that leads to the reorganization of the health professions [53, 54] and the introduction of new professional logics [55]. These can result in threat perceptions among health professionals regarding their competence, autonomy, and control over clinical decisions and outcomes. Accordingly, the perception of new systems not only influences their use or non-use, but also implies a dynamic interaction with the professional identity of the users [56]. CDSSs may be perceived as deskilling or as skill-enhancing, by reducing or empowering the responsibilities of users, and thereby as compromising or enhancing the professional role, autonomy, and status.

Taking the classical theoretical frameworks for the evaluation of health information systems [ 57 ] and this understanding of professional identity as a starting point, our narrative review identifies, reinterprets, and interconnects the key factors to CDSS implementation related to threats or enhancement of health professionals’ identity in different health care settings.

We conducted a comprehensive search of the Web of Science and PubMed databases to identify peer-reviewed studies on CDSS implementations published between January 2010 and September 2023. An initial review of the literature, including previous related literature reviews, yielded the key terms used in designing the search strings [1, 49]. We searched for English articles whose titles, abstracts, or keywords contained at least one of the search terms, such as “clinical decision support system,” “computer physician order entry,” “electronic prescribing,” or “expert system.” To ensure that the identified studies relate to CDSS implementation, usage, or adoption from the perspective of health care organizations and health care professionals, we included, for example, the words “hospital,” “clinic,” “medical,” and “health.” The final search strings are provided in Table S1 (Additional file 1). We obtained a total of 6212 articles. From this initial list, we removed 1461 duplicates, 6 non-retrievable studies, and 1 non-English article. This left us with a total of 4744 articles for the screening of titles, abstracts, and full texts. Three authors independently reviewed these articles to identify empirical papers which met the following inclusion criteria: (a) evaluated a CDSS as a study object, (b) examined facilitating factors or barriers impacting CDSS adoption, use, or implementation, (c) took the perspective of health care professionals or medical facilities, and (d) represented an empirical study. We identified 220 studies that met our inclusion criteria. The three authors independently assessed the methodological quality of these 220 selected studies using the Mixed Methods Appraisal Tool (MMAT), version 2018 [58]. The MMAT can be used for the qualitative evaluation of five different study designs spanning qualitative, quantitative, and mixed methods approaches. It is a qualitative scale that evaluates the aim of a study, its adequacy to the research question, the methodology used, the study design, participant recruitment, data collection, data analysis, presentation of findings, and the discussion and conclusion sections of the article [59]. One hundred thirty-one studies were included in the review after excluding studies based on the MMAT criteria, primarily due to a lack of a defined research question or a mismatch between the research question and the data collected [58]. Any disagreement about the inclusion of a publication between the authors was resolved through internal discussion. Figure 1 summarizes our complete screening process.

figure 1

Overview of article screening process
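For readers who want to verify the reported screening flow, the following minimal Python sketch reproduces the count bookkeeping stated above; the variable names are illustrative, and the number excluded at quality appraisal is derived from the reported totals rather than stated explicitly in the text.

```python
# Screening-flow bookkeeping as reported in the text (counts only).
records_identified = 6212   # Web of Science + PubMed hits
duplicates = 1461
non_retrievable = 6
non_english = 1

records_screened = records_identified - duplicates - non_retrievable - non_english
assert records_screened == 4744  # titles, abstracts, and full texts screened

met_inclusion_criteria = 220   # empirical papers meeting criteria (a)-(d)
included_after_mmat = 131      # retained after MMAT quality appraisal
excluded_by_mmat = met_inclusion_criteria - included_after_mmat  # 89 (derived)

print(f"Screened: {records_screened}, included: {included_after_mmat}, "
      f"excluded at appraisal: {excluded_by_mmat}")
```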

The studies included in the review were then subjected to a qualitative content analysis procedure [ 60 , 61 ] using MAXQDA, version 2020. For data analysis, we initially followed the principle of “open coding” [ 62 ]. We divided the studies equally among the three authors and, through an initial, first-order exploratory analysis, identified numerous codes, which were labeled with key terms from the studies. Based on a preliminary literature review, we then developed a reference guide with the main categories of classic theoretical frameworks for health information systems implementation (human, technology, organization) [ 57 ] and further characteristics of the studies. Second-order categories were obtained through axial coding [ 62 ], which reduced the number of initial codes but also revealed concepts that could not be mapped to these three categories (i.e., perceived threat to professional autonomy and control). This allowed us to identify concepts related to professional identity. Subsequently, a subset of 10% of the studies was randomly selected and coded by a second coder independently of the first coder [ 63 ]. An inter-coder reliability analysis was then performed between the codings of coder 1 and coder 2. For this purpose, Cohen’s kappa, a measure of agreement between two independent categorical ratings, was calculated and indicated high agreement in coding (κ = 0.8) [ 64 ]. We coded for the following aspects: human, organizational, technological, and professional identity factor conceptualizations, dependent variables, study type and type of data, time frame, clinician type sample, description of the CDSS, implementation phase [ 65 ], target area of medical care [ 7 ], and applied medical specialty. Tables 2 , 3 , 4 , 5 , 6  and 7 and Table S 2 provide detailed data as per the key coding categories.
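As an illustration of the inter-coder reliability step, the sketch below computes Cohen’s kappa from two coders’ category assignments; the category labels and example codings are hypothetical and are not data from the review.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders' categorical codings of the same segments."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: share of segments coded identically by both coders.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each coder's marginal category frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten study segments by two independent coders.
coder1 = ["human", "technological", "organizational", "identity", "technological",
          "human", "organizational", "identity", "technological", "human"]
coder2 = ["human", "technological", "organizational", "identity", "organizational",
          "human", "organizational", "identity", "technological", "human"]
print(f"kappa = {cohens_kappa(coder1, coder2):.2f}")  # ~0.87 for this toy example
```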

Descriptive analysis

A total of 131 studies were included in our review. In line with recent reviews of CDSS implementation research [ 6 , 14 , 57 ], the reviewed articles are distributed widely across journals (Table  1 ).

The examined articles were drawn from 69 journals, 55 of which contributed only one article each. BMC Medical Informatics and Decision Making and the International Journal of Medical Informatics together published nearly a third of the included studies; 67 articles overall appeared in medical informatics journals. There are additional clusters in medical specialty-related (33), health services, public health, or health care management-related (12), and implementation science-related (2) journals. The journals’ 5-year impact factors, measured in 2022, ranged between 2.9 and 9.7. Of our included articles, 67 were published between 2010 and 2016, while 64 were published between 2017 and 2023.

The review includes a mixture of qualitative ( n  = 61), quantitative ( n  = 40), and mixed methods ( n  = 30) studies. Unless otherwise noted, studies indicated as qualitative in Table S 2 involved interviews and quantitative studies involved surveys. Data were collected through interviews with individual health care professionals ( n  = 38), surveys ( n  = 58), and focus group interviews ( n  = 25). Most of the interviews were conducted with physicians ( n  = 60) and nursing professionals ( n  = 23). The studies were performed at various sites and specialties, with primary care settings ( n  = 35), emergency ( n  = 11), and pediatric ( n  = 6) departments represented most frequently. Forty-five articles researched physicians exclusively, and 10 included nurse practitioners as respondents in their sample. Four studies surveyed pharmacists, one study surveyed medical residents as a single target group, and 20 articles included clinical leaders in addition to clinicians in their sample. Twenty-eight studies were longitudinal; however, studying system implementation at a single point in time cannot sufficiently explain the expected impact of the novel system on, e.g., organizational performance outcomes over time [ 67 ]. The studies collected data in 29 different countries, the most common being the USA ( n  = 41), the UK ( n  = 18), and the Netherlands ( n  = 11).

Included studies were additionally coded according to the implementation phase in which the study was conducted (i.e., exploration, adoption/preparation, implementation, or sustainment phase) [ 65 ]. In 43 of the included studies, the analysis was conducted during the exploration phase, i.e., during a clinical trial or an exploration of the functionality and applicability of a CDSS. Nineteen studies were conducted in the active implementation phase, 15 studies in an adoption or preparation phase, and 46 studies in a sustainment phase (i.e., implementation completed and long-term system use). The remaining studies involved investigation across multiple implementation phases.

Following Berner’s study [ 7 ], we classified the examined CDSSs of the included studies according to specific target areas of care. In 93 articles, CDSSs for planning or implementing treatment were studied. Thirty-seven studies examined CDSSs whose goal was prevention or preventive care screening. In 31 studies, the functional focus of the CDSSs was to provide specific suggestions for potential diagnoses that match a patient’s symptoms. Seventeen of the examined CDSSs focused on follow-up management, 15 studies examined CDSSs for hospital and provider efficiency care plans, and 12 focused on cost reduction and improved patient convenience (e.g., through duplicate testing alerts). Most CDSSs supported medication-related decisions and processes, such as prescribing, administration, and monitoring for effectiveness and adverse effects ( n  = 30). An overview of the characteristics of the included studies can be found in Table S 2 .

In the 131 included studies, we identified 1219 factors, which we categorized into human, technological, organizational, and professional identity threat- and enhancement-related factors to implementation (Table  2 ). The total number of factors is reported in Table  2 for each of our framework’s dimensions and for each of our inferred factor sub-categories. The following section delves into the elements of our framework (Fig.  2 ), starting with the most commonly identified factors. Finally, the CDSS implementation outcomes are described.

Technological factors

At the technological level, perceptions of threat to professional identity were associated with factors related to the nature of the clinical purpose of the CDSS and system quality, such as compatibility of the CDSS with current clinical workflows [ 68 , 69 , 70 ], customization flexibility, intuitive navigation [ 71 , 72 , 126 ], and the scientific evidence for and transparency of the decision outcome [ 73 , 74 , 191 ]. A total of 532 technological factors in 125 included studies were identified. In 21 studies, technological factors were related to study participants’ perceptions of professional identity threat, while in 9 studies these factors were related to perceived professional identity enhancements (Table  3 ). Exemplary quotes were chosen based on their clarity and representativeness with respect to the overall themes.

The reviewed studies focused primarily on medication-oriented CDSSs. The relevance, accuracy, and transparency of the recommendations and their underlying scientific evidence were found to be crucial for acceptance and use. “ Irrelevant, inaccurate, excessive, and misleading alerts ” were associated with alert fatigue and lack of trust [ 72 , 75 , 76 , 127 , 144 ]. Some senior physicians preferred the provision of evidence-based guidelines that would reinforce their knowledge, while others advised junior physicians to override the CDSS recommendations in favor of their own instructions. However, residents tended to follow CDSS recommendations and used them to enhance their confidence about a clinical decision [ 69 , 77 , 128 ]. Physicians had diverse perceptions of the scientific evidence supporting the CDSS recommendations. Some regarded it as abstract or useless information that was not applicable to clinical decision making in practice. These physicians preferred a more conventional approach of learning from the “eminences” of their discipline while pragmatically engaging in the “art and craft” of medicine. CDSSs were perceived as increasingly undermining clinical work and expertise among health professionals [ 24 ]. In some studies examining AI (artificial intelligence)-based CDSSs, explainability and transparency of the CDSS recommendations played a major role in maintaining control over the therapeutic process [ 78 , 129 ].

Many studies indicated that the introduction of a CDSS was perceived as a disruptive change to established clinical workflows and practices [ 12 , 79 , 80 , 81 , 167 ]. The fit of the CDSS with standardized clinical workflows was seen as critical to its implementation. Senior clinicians preferred their own workflows and protocols for complex patient cases [ 82 ]. Geriatricians, for example, considered CDSS recommendations inappropriate for their clinical workflows because geriatric patients are typically multi-morbid and require individualized care [ 77 ]. Intuitiveness and interactivity of the CDSS were found to reduce the perceived threat to professional identity [ 5 ], and customization and adjustment of alerts based on specialty and individual preferences were perceived to increase competence [ 10 , 127 , 130 ]. Physicians considered that successful implementation of the CDSS depends on its integration with existing clinical processes and routine activities and requires collaboration as well as knowledge sharing among experienced professionals [ 24 ].

Organizational factors

A total of 287 organizational factors in 104 included studies were identified. In 17 studies, organizational factors were related to study participants’ perceptions of professional identity threat, while in 7 studies these factors were related to perceived professional identity enhancements (Table  4 ). In the included studies, organizational factors influencing professionals’ perceived threat to their identity have been studied from multiple perspectives, such as internal collaboration and communication [ 145 , 178 ], (top) managers’ leadership and support [ 79 , 83 ], innovation culture and psychological safety [ 24 ], organizational silos and hierarchical boundaries [ 69 , 70 ], and the relevance of social norms and endorsement of professional peers [ 161 ].

The empirical studies showed that the innovation culture plays a critical role in driving change in health care organizations. In this regard, resistance to the implementation of CDSSs may be due to a lack of organizational support as well as physicians’ desire to maintain the status quo in health care delivery [ 24 , 70 , 75 ]. Several key factors influenced the implementation in this regard. These included appropriate timing of the implementation project, user involvement, and dissemination of understandable information through appropriate communication channels [ 70 ]. Some studies showed that an innovation culture characterized by interdependence and cooperation promotes social interaction (i.e., a psychologically safe environment ), which in turn facilitates problem-solving and learning related to CDSS use [ 193 , 194 ]. For example, nursing practitioners recognized the potential of CDSSs for collaboration in complex cases, which had a positive impact on team and organizational culture development [ 24 ].

Supportive leadership (e.g., by department leaders) was found to be critical to successful CDSS implementation. This includes providing the necessary resources, such as time and space for training, technical support, and user involvement in the implementation process, all of which were negatively associated with perceived loss of control and autonomy [ 11 , 69 , 79 , 83 , 84 , 145 , 174 ]. Involving not only senior physicians but also nursing and paramedical leaders increased the legitimacy of CDSSs throughout the professional hierarchy and helped to overcome the negative effect of low status on psychological safety by flattening hierarchical distances [ 24 , 70 , 72 ]. In contrast, imposing a CDSS on users led to resistance. Some physicians and nurses felt that the use of the CDSS was not under their voluntary control (i.e., “we have no choice”, “it’s not an option to not use it”) because these systems have become “as essential as … carrying a pen and a stethoscope,” with physicians feeling that they now “are reliant on the CDSS” [ 10 ]. In other cases, top-down decisions led to the resolution of initial resistance toward the CDSS [ 167 ]. Overall, committed leadership that involved users and transcended professional silos and hierarchies was critical to successful CDSS implementation. In this context, an established hierarchy and culture of physician autonomy impeded communication, collaboration, and learning across professional and disciplinary boundaries [ 54 , 195 , 196 ]. A well-designed CDSS minimized professional boundaries by, for example, empowering nurses and paramedics to make independent treatment decisions [ 8 , 180 ]. CDSSs thus provided structured means for nonmedical professionals to receive support in their clinical decision-making that was otherwise reserved for professionals with higher authority [ 34 ]. Since CDSSs allow widespread access to scientific evidence, they often led to nursing practitioners’ control or oversight of medical decisions, putting junior physicians in an inferior position and thus providing an occasion to renegotiate professional boundaries and to dispute the distribution of power [ 24 , 77 ].

In addition, the provision of sufficient training and technical support was essential to ensure that physicians and nursing practitioners felt confident in using the CDSS and increased their satisfaction with the system [ 77 , 85 ]. Embedding new CDSSs into routine practice required communication and collaboration among professionals with clinical expertise and those with IT expertise [ 86 , 145 , 178 ]. Involving physicians and nursing practitioners in decision-making processes increased their willingness to change their long-standing practice patterns and embrace the newly introduced CDSS [ 5 , 10 ]. Facilitating CDSS uptake therefore required legitimization of the system’s designers and of the data sources used [ 24 ]. Similarly, the success or failure of CDSS implementation depended on the ability of the new system to align with existing clinical processes and routine activities. Successful adoption was often at risk when the implementation was too far removed from the reality of clinical practice because those responsible for designing the CDSS poorly understood the rationale for designing the system in a particular way [ 145 ].

In addition, some studies indicated that resistance was overcome by communicating the benefits of the CDSS through contextual activities and providing opportunities to experience the system firsthand. Sharing positive implementation experiences and fostering discussions among actual and potential users could bridge the gap between perceptions and actual use [ 145 , 146 ]. In this regard, endorsement from “ respected ” and “ passionate ” internal change promoters , such as expert peers, was seen as key to overcoming user resistance [ 82 ]. Confirmation from clinical experts that the new system improves efficiency and quality of care was essential for the general system acceptance [ 154 ]. Thus, social influence played an important role, especially in the initial phase of system use, while this influence decreased as users gained experience with the CDSS [ 182 ].

Human factors

A total of 197 human factors in 99 included studies were identified. In 17 studies, human factors were related to study participants’ perceptions of professional identity threat, while in 6 studies these factors were related to perceived professional identity enhancements. Table 5 summarizes the key findings from the included articles, which relate to three factors: individual attitudes and emotional responses, experience and familiarization with the CDSS , and trust in the CDSS and its underlying source.

The empirical studies report that physicians often failed to fully utilize the features of CDSSs, such as protocols, reminders, and charting templates, because they lacked experience and familiarization with the CDSS [ 3 , 79 , 87 , 127 ]. In addition to insufficient training and time constraints, limited IT skills were reported as the main reasons [ 83 , 87 , 147 , 185 ]. As a result, users interacted with the CDSS in unintended ways, leading to data entry errors and potential security concerns [ 88 ]. According to Mozaffar et al. [ 131 ], this includes physicians’ tendency to enter incorrect data or select the wrong medication due to misleading data presentations in the system. Inadequate IT skills and lack of user training also contributed to a limited understanding of the full functionality of CDSSs. As such, physicians interviewed in one study cited a lack of knowledge about basic features of a CDSS, including alerts, feedback, and customization options, as a major implementation barrier [ 127 ]. Some studies reported that the lack of system customization to meet the personal preferences of users and the lack of system training weakened users’ confidence in the system and compromised their clinical decision-making autonomy [ 10 , 83 , 89 , 90 , 127 , 183 ].

Some studies indicated that there were trust issues among physicians and nursing practitioners regarding the credibility of the decision-making outcome [ 132 , 154 ], the accuracy of the algorithm underlying the CDSS recommendations [ 146 ], and the timeliness of medical guidelines in the CDSS [ 127 ]. Senior physicians appreciated medication-related alerts but felt that their own decision-making autonomy regarding drug selection and dosing was compromised by the CDSS [ 74 ]. However, they tended to use the CDSS as a teaching tool for their junior colleagues, advising them to consult it when in doubt [ 77 , 128 ]. In some cases, this led to junior physicians accepting CDSS suggestions, such as computer-generated dosages, without independent verification [ 128 , 144 , 154 ].

Several studies indicated that the CDSS introduction elicited different individual attitudes and emotional responses . More tenured health care professionals were “ frightened ” when confronted with a new CDSS. Others perceived the CDSS as a “ necessary evil ” or “ unwelcome disruption ” [ 81 ], leading to skepticism, despair, and anxiety [ 3 , 145 , 167 ]. Younger physicians, on the other hand, tended to be “ thrilled ” and embraced the technology’s benefits [ 84 , 147 , 167 ]. Motivation, enthusiasm, and a “can do” attitude toward learning and skill development positively influenced engagement with the CDSS [ 11 , 83 , 84 , 145 , 184 ].

The role of professional identity threat and enhancement perceptions in CDSS implementation

Overall, we found 90 factors in 65 included studies related to perceptions of professional identity threat among the study participants. Forty-four factors in 34 included studies were associated with perceived professional identity enhancements. We identified three key dimensions of professional identity threat and enhancement perceptions among health care professionals impacting CDSS implementation along different implementation phases [ 197 ]. Table 6 contains exemplary quotes illustrating the findings.

A number of physicians perceived CDSSs as an ultimate threat to professional control and autonomy , leading to a potential deterioration of professional clinical judgment [ 30 , 69 , 77 , 154 , 155 ]. Most nurse practitioners, on the other hand, experienced a shift in decision-making power, providing an occasion to renegotiate professional boundaries in favor of health care professionals with lower levels of expertise [ 24 ]. Thus, nurses associated the implementation of a CDSS with enhanced professional control and autonomy in the performance of tasks [ 34 , 155 , 169 ]. Pharmacists often advocated for medication-related CDSSs, which in turn increased physician dependency and resistance to new tasks [ 12 , 84 , 178 ]. The latter was a consequence of physicians’ increasing reliance on pharmacists for complex drug therapies, as physicians had to relinquish some decision-making authority to pharmacists through the restructuring of decision-making processes [ 74 ].

Senior physicians frequently expressed concerns about overreliance on CDSS and potential erosion of expertise , which they believed led to patient safety risks [ 10 , 24 , 75 , 89 , 155 ]. They complained that overreliance on CDSS recommendations interfered with their cognitive processes. For example, in medication-related CDSSs, clinical data such as treatment duration, units of measure, or usual doses are often based on pharmacy defaults that may not be appropriate for certain patients. According to these physicians, their junior colleagues might not double-check recommended medication doses and treatment activities, leading to increased patient safety risk [ 131 ]. In another study, general practitioners expressed concerns about the deskilling of future physicians through CDSSs. Some CDSSs required a high level of clinical expertise, skill, and knowledge regarding the correct entry of clinical information (e.g., symptoms) for proper support in clinical decisions. Many physicians feared that the use of CDSSs would erode this knowledge and thus allow the CDSS recommendations to lead to incorrect decisions [ 30 ]. This potential loss of skills and expertise was seen as particularly problematic in situations where decision support for medications and e-prescriptions varied from facility to facility. Physicians working at multiple institutions who relied on the medication treatment support of the CDSS used at one institution reported difficulties making the correct clinical decisions at another institution [ 154 ]. From the reviewed articles, it appeared that senior physicians perceived CDSSs as an intrusion into their professional role and objected to their expertise and time being misused for “ data entry work ” [ 10 ]. They enjoyed the freedom to decide what to prescribe, when to prescribe it, and whether or not to receive more information about it [ 77 ] and were determined not to “ surrender ” and “ be made to use [the CDSS] ” [ 82 ].

In line with the increasing dependence of physicians on pharmacists when using a CDSS for medication treatment, pharmacists used the CDSS to demonstrate their professional skills and to further develop their professional role [ 178 ]. Nurse practitioners were empowered by CDSS guidance to systematically update medications and measurements during their hectic daily clinic routine [ 24 , 91 ], to independently manage more complicated scenarios [ 8 ], and to facilitate their decision-making [ 92 ]. Some physicians stated that CDSS recommendations prompted them to reflect on the medication more critically than usual and facilitated more conscious decisions [ 133 ]. Perceived professional identity enhancement in terms of skills and expertise was thus often associated with technological factors such as enhanced patient safety, improved efficiency, and quality of care [ 9 ].

Furthermore, physicians strongly associated their professional identity with their central role in the quality of patient care, based on a high level of empathy and trust between physician and patient [ 45 , 195 ]. Their perceived threat to professional identity led to a sense of loss of clinical professionalism and of control over patient relationships [ 162 , 170 ]. CDSS usage was perceived as unprofessional or as disruptive to the power dynamic between them and their patients [ 89 , 93 , 171 ]. As a result, they indicated that established personal patient relationships were affected by imposed CDSS use [ 81 ]. Other physicians saw CDSSs as having the potential to enhance patient relationships by providing them with more control over the system and treatment time, facilitating information and knowledge sharing with patients, and building trust between patients and physicians [ 35 , 94 ].

Mapping the perceptions of threat and enhancement of professional identity among physicians and other health care professionals identified in each study to implementation phases allowed for an examination of the evolution of identity perceptions in CDSS implementations. Table 7 assigns the identity perceptions among physicians and other health care professionals to the different implementation phases. The findings illustrate that threat perceptions were predominantly perceived before and at the beginning of implementation. With steady training, use and familiarization with the CDSS, the perceived threat to professional identity slightly decreased in the sustainment phase, compared to the pre-implementation phase, while perceptions of enhancement of professional identity increased. During the exploration phase, physicians in particular perceived the CDSS as undermining their professional identity, and this perception remained relatively constant through the sustainment phase. Other health care professionals, such as nurse practitioners and pharmacists often changed their perspective over the course of the implementation phases and perceived the CDSS as supporting their control, autonomy, and skill enhancement at work.

CDSS implementation outcomes

In total, we identified 93 benefits related to CDSS implementation in the reviewed studies (Table  2 ). The most commonly evaluated benefits were improvements in work efficiency and effectiveness through the use of CDSSs, improvements in patient safety, and improvements in the quality of care . Prevention of prescription and treatment errors was also frequently mentioned. The included studies measured CDSS implementation in various ways, which we classified into seven groups (Table  8 ). Most studies measured or evaluated self-reported interest in using the system or intention, willingness to use, or adoption , followed by self-reported attitude toward CDSSs , and both self-reported and objective measures of implementation success . Objective actual use was evaluated in only 10 studies, while self-reported use was measured in seven studies, and self-reported satisfaction and performance of the system was measured in five studies. Both self-reported and objective measures of usefulness and usability were assessed in one study.

Although we included 40 quantitative studies in our review, only a few of these empirically measured the direct effect of professional identity threat or related organizational consequences on the implementation, adoption, or use of CDSSs. Two studies empirically demonstrated a direct, significant negative relationship between perceived threat to professional autonomy and the intention to use a CDSS [ 5 , 48 ]. Another four studies found empirical evidence of an indirect negative association between threats to professional identity and actual CDSS use. Physicians disagreed with the CDSS recommendation because they perceived insufficient control and autonomy over clinical decision making [ 79 , 88 ] and lacked confidence in the quality of the CDSS and its scientific evidence [ 154 ].

Main findings

The purpose of this narrative review was to identify, reinterpret, and interconnect existing empirical evidence to highlight individual, technological, and organizational factors that contribute to professional identity threat and enhancement perceptions among clinicians and their implications for CDSS implementation in health care organizations. Using evidence from 131 reviewed empirical studies, we developed a framework for the engagement of health care professionals by deconstructing the antecedents of professional identity threats and enhancements (Fig.  2 ). Our proposed framework highlights the role of cognitive perceptions and response mechanisms, arising from professional identity struggles or reinforcements among individual health care professionals, in the implementation of CDSSs. Our work therefore contributes to the growing literature on perceived identity deteriorations with insights into how knowledge-intensive organizations may cope with these threats [ 37 , 45 , 46 ]. We categorized clinicians’ professional identity perceptions into three dimensions: (1) perceived threat and enhancement of professional control and autonomy , (2) perceived threat and enhancement of professional skills and expertise , and (3) perceived loss and gain of control over patient relationships . These dimensions influenced CDSS implementation depending on the end user’s change of status and expertise over the course of different implementation phases. While senior physicians tended to perceive CDSSs as undermining their professional identity across all implementation stages, nurse practitioners, pharmacists, and junior physicians increasingly perceived CDSSs as enhancing their control, autonomy, and clinical expertise. Some physicians, on the other hand, were positive about the support provided by the CDSS in terms of better control of the physician–patient relationship. In most studies, professional identity incongruence was associated with technological factors, particularly the lack of adaptation of the system to existing clinical workflows and organizational structures (i.e., process routines), and the fact that CDSS functionalities have to meet the needs of users. The absence or presence of system usability and an intuitive workflow design was also frequently cited as an antecedent of professional identity loss. The other dimensions (i.e., human and organizational factors) were encountered less often in relation to professional identity mechanisms among health care professionals. Only six studies found empirical evidence of an indirect or direct negative relationship between health professionals’ perceived threats to professional identity and outcomes of CDSS implementation, whereas no study explicitly analyzed the relationship between dimensions of professional identity enhancement and outcomes of CDSS adoption and implementation.

figure 2

A framework for the role of professional identity in CDSS implementation

Interpretations, implications and applicability to implementation strategies

The results indicate that healthcare professionals may perceive CDSSs as valuable tools for their daily clinical decision-making, which can improve their competence, autonomy, and control over the relationship with the patient and their course of treatment. These benefits are realized when the system is optimally integrated into the clinical workflow, meets users’ needs, and delivers high quality results. Involving users in design processes, usability testing, and pre-implementation training and monitoring can increase user confidence and trust in the system early in implementation and lead to greater adoption of the CDSS [ 146 ]. To address trust issues in the underlying algorithm of the CDSS, direct and open communication, transparency in decision-making values, and clinical evidence validation of the CDSS are crucial [ 154 ]. CDSS reminders and alerts should be designed to be unobtrusive to minimize the perceived loss of autonomy over clinical decisions [ 77 ].

In contrast, the implementation of a CDSS often led to substantial changes in professional identity and was thereby often associated with fear and anxiety. A sense of loss of autonomy and control was linked to lower adoption rates and thus implementation failure. Cognitive styles, which may be expressed in emotional reactions of users toward the CDSS, reinforced reluctance to implement and use the system [ 145 , 167 ]. This underscores the importance of finding expert peers and professionals who are motivated and positive toward CDSS adoption and use, and who can communicate and promote the professional appropriateness and benefits of the CDSS to their colleagues [ 82 , 83 , 184 ]. This promotes a focus on the improvement and benefits of the CDSS while maintaining the integrity, perceived autonomy, control, and expertise of physicians and nurses.

Accordingly, the included studies show that health professionals respond to the professional identity threat triggered by the CDSS implementation by actively maintaining, claiming, or completely changing their identity [ 39 ], which is consistent with previous studies elaborating on the self-verification of professionals [ 44 ]. For example, physicians delegated routine tasks to other actors to maintain control over the delivery of services and thereby enhance their professional status [ 201 ]. Pharmacists used the introduction of CDSSs for drug treatment to demonstrate their skills to physicians and to further develop their professional role [ 178 ]. Maintaining authority over the clinical workflow without the need for additional relational work with lower-status professionals was seen as one of the main factors for health care professionals’ CDSS acceptance in our findings [ 10 , 12 , 84 , 178 ]. Physicians influenced change processes, such as the implementation of CDSSs, in ways that preserved the status quo of physicians’ responsibilities and practices. They often stated their objective to avoid increasing dependence on lower-status professionals, such as nurses or pharmacists, who were gaining control by using the new CDSS. In addition, CDSS users frequently criticized the system’s lack of fit with clinical work processes and the fact that the systems were not able to replace clinical expertise and knowledge [ 12 , 34 , 77 , 82 ]. The loss of control over the patient-physician relationship also represented a key component of identity undermining through the introduction of CDSSs. Many physicians expressed that their trust-building interaction with patients was eroded by the functionalities of the CDSS [ 81 , 170 ]. The fact that the use of CDSSs saves time in patient therapy and treatment, freeing up time for their patients, was rarely expressed [ 12 , 147 ]. This underscores the need to cope with physicians’ strong identification with their professional role, their tendency to preserve the status quo, and their self-defense against technological change during the implementation of CDSSs.

Furthermore, the reviewed studies emphasized the importance of both inter- and intra-professional involvement, collaboration, and communication in health care organizations, during the CDSS implementation, suggesting that these mechanisms influence the extent and quality of cooperative behavior, psychologically safe environments, and role adaptation of different professional groups [ 26 , 54 , 55 , 202 ]. Among the studies we reviewed, managerial support and collaboration influenced coordination during CDSS implementation [ 82 , 83 , 174 ], such as by providing usability testing and time for efforts to change the understanding of why and how health care professionals should modify their routine practices [ 74 , 95 ].

Overall, the review shows that the consideration of perceived professional identity mechanisms among health care professionals plays an important role when implementing new CDSSs in health care organizations. Additionally, perceived threats to and enhancements of professional identity should be considered and regularly assessed in long-term oriented implementation strategies. These strategies often include methods or techniques to improve the adoption, implementation, and sustainability of a clinical program or practice [ 203 ] and may span from planning (e.g., conducting a local needs assessment, developing a formal implementation plan) to educating (e.g., conducting educational meetings, distributing educational materials) to restructuring professional roles to managing quality (e.g., providing clinical supervision, audit, and feedback) [ 204 , 205 ]. To ensure successful implementation, health care professionals at all hierarchical levels should be involved in the planning and decision-making processes related to CDSS implementation. Continuous feedback loops between health care professionals, IT staff, and implementation managers can help identify unforeseen threats to professional identity and necessary adjustments to the implementation plan. The review found that perceived identity threats particularly need to be addressed among highly specialized physicians to account for their knowledge-intensive skills, expertise, and clinical workflows [ 24 , 96 ]. In addition, the purpose of CDSS implementation and information about how it aligns with organizational strategic goals and individual professional development should be clearly and continuously communicated at all stages of implementation.

Our review also confirms that health care professionals’ perceptions of the effectiveness of CDSSs reinforce the impact of organizational readiness for the ongoing and required transformation of healthcare [ 17 ]. Comprehensive assessments of the suitability of the system for established or changing clinical workflows and the technical quality of the CDSS should be prioritized at the beginning of the implementation. Training programs should be developed to help professionals adapt to the new medical systems and allay fears of a loss of competence or relevance. To mitigate threats to professional identity in the long term, it is necessary to foster an organizational culture of adaptability, learning, and psychological safety, in which it is acceptable to make mistakes and learn from them. In addition, ongoing leadership support and professional development opportunities are critical to ensure that health care professionals continue to adapt their roles and keep pace with technological developments [ 79 , 84 ].

Limitations

A literature review of a large sample of empirical studies has many advantages [ 206 ]. However, some limitations arise from the study design. First, our included studies were mainly conducted in the USA or UK (see Table S 2 ). The dominance of these two countries may pose a potential bias, as different cultures may have different implications for CDSS implementation and threat perceptions among health care professionals. Therefore, there is a need for caution in generalizing the findings on the impact of human, technological, and organizational factors on professional identity perceptions among professionals across different cultures. More studies are needed to provide a nuanced understanding of professional identity mechanisms among health care professionals across a broader range of cultures and countries.

Second, broad search terms were used to identify a larger number of articles in the literature review and to infer professional identity mechanisms from the implementation and adoption factors reported in the included studies from the perspective of health professionals, even where these factors were not explicitly framed as threats to or enhancements of professional identity. This could also be considered a methodological strength, as the review combines findings from qualitative, quantitative, and mixed methods studies on this construct from a large and diverse field of research on CDSS implementation. However, non-English-language articles or articles that did not pass the MMAT assessment may have been overlooked; these might have provided valuable information on further barriers and facilitators (e.g., threats to professional identity in different cultures), which limits the rigor of this study.

Third, most of the studies reviewed captured CDSSs for use in primary care settings. CDSSs in highly specialized specialties or those that frequently treat multi-morbid patients, such as cardiology and geriatrics, require features that allow for detailed workflow customization. In such specialties, even more attention needs to be paid to balancing provider autonomy and workflow standardization [ 97 ]. As such, future research should provide the missing evidence in such complex settings.

Fourth, we were only able to identify a limited number of studies that empirically analyzed the causal relationships included in our framework. There is a lack of studies that use longitudinal research designs, quantitative data, or experimental study designs. Therefore, the identified effects of technological, organizational, and human factors on professional identity and consequently on implementation success need to be interpreted with caution. Future research should test whether the determinants and effects of professional identity mechanisms among healthcare professionals can be observed in real-world settings.

Professional identity threat is a key cognitive state that impedes CDSS implementation among various health care professionals and along all implementation phases [ 31 , 45 ]. Health care managers need to engage in supportive leadership behaviors, communicate the benefits of CDSSs, and leverage supportive organizational practices to mitigate the perception and effect of professional identity threat. An innovation culture needs to support the use of CDSSs and top management commitment should reduce uncertainty about why a new CDSS is needed [ 24 ]. Therefore, leaders should raise awareness of the relevant CDSS functionalities and communicate the terms and conditions of use. It is crucial to involve clinicians in updating CDSS features and developing new ones to ensure that CDSSs can be quickly updated to reflect rapid developments in guideline development [ 195 ]. One way to achieve this is to engage proactive, respected, and passionate individuals who can train colleagues to use the CDSS and promote the potential benefits of the system [ 70 , 82 ].

Our framework presented in this study provides a relevant foundation for further research on the complex relationship between human, technological, and organizational implementation factors and professional identity among different health care professionals. The findings also guide health care management experts and IT system developers in designing new CDSSs and implementation strategies by considering the ingrained norms and cognitions of health care professionals. As suggested above, more research is needed to determine whether some barriers or facilitators are universal across all types of CDSSs or whether there are domain-dependent patterns. In this context, research that explicitly focuses on AI-based CDSSs becomes increasingly important as they become more relevant in medical practice. In fact, five of the studies included in our research, conducted over the last 3 years, examined factors related to the adoption and implementation of AI-based CDSS [ 73 , 74 , 96 , 205 , 206 ]. AI-based CDSSs extend to full automation and can discover new relationships and make predictions based on learned patterns [ 97 ]. However, with their opaque and automated decision-making processes, AI-based systems may increasingly challenge professional identity as they increasingly disrupt traditional practices and hierarchies within healthcare organizations, posing a threat to professional expertise and autonomy [ 156 ]. This may further hinder the implementation and sustainable use of these systems compared to non-AI-based systems. Future research could examine overlaps in barriers and facilitators between CDSSs and AI-based systems, which are of relevance for professional identity threat perceptions among health care professionals, and assess the reasons behind these differences. In addition, translating the findings for different medical contexts may provide valuable insights. This can eventually lead to guidelines for the development of CDSS for different specialties.

Some factors were found less frequently in our analysis, in particular communication of the benefits of a CDSS to users, the importance of trust across different hierarchies and among staff involved in implementation, and government-level factors related to the environment. While the former factors are important for psychological safety and acceptance of the CDSS, the environmental level appears to play a minor role in the perception of professional identity. Future research is needed, however, to determine whether all of these factors play an important role in CDSS implementation. Furthermore, future research could explore the role of middle managers and team managers in health care organizations, rather than the role of senior management, in managing professional identity threats when leading change. Our narrative review found that clinical middle managers may have a special role in legitimizing CDSSs [ 156 ]. In addition, a future research opportunity arises from the perceived role and identity enhancement through new technologies and their consequences for social evaluation in hierarchical healthcare organizations [ 35 , 132 , 155 ].

Overall, the findings of this review are particularly relevant for managers of CDSS implementation projects. Thoughtful management of professional identity threat factors identified in this review can help overcome barriers and facilitate the implementation of CDSSs. By addressing practical implications and research gaps, future studies can contribute to a deeper understanding of the threat to professional identity and provide evidence for effective implementation strategies of CDSSs and thus for a higher quality and efficiency in the increasingly overburdened health care system.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AI:  Artificial intelligence

CDSS:  Clinical decision support system

EHR:  Electronic health record

MMAT:  Mixed Methods Appraisal Tool

Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. Npj Digit Med. 2020;3:1–10.

Antoniadi AM, Du Y, Guendouz Y, Wei L, Mazo C, Becker BA, et al. Current challenges and future opportunities for xai in machine learning-based clinical decision support systems: A systematic review. Appl Sci. 2021;11:5088.

Ash JS, Sittig DF, Wright A, Mcmullen C, Shapiro M, Bunce A, et al. Clinical decision support in small community practice settings: A case study. J Am Med Informatics Assoc. 2011;18:879–82.

Prakash AV, Das S. Medical practitioner’s adoption of intelligent clinical diagnostic decision support systems: A mixed-methods study. Inf Manag. 2021;58:103524.

Esmaeilzadeh P, Sambasivan M, Kumar N, Nezakati H. Adoption of clinical decision support systems in a developing country: Antecedents and outcomes of physician’s threat to perceived professional autonomy. Int J Med Inform. 2015;84:548–60.

Westerbeek L, Ploegmakers KJ, de Bruijn GJ, Linn AJ, van Weert JCM, Daams JG, et al. Barriers and facilitators influencing medication-related CDSS acceptance according to clinicians: A systematic review. Int J Med Inform. 2021;152:104506.

Berner ES. Clinical decision support systems: state of the art. AHRQ Publication No. 09-0069-EF. Rockville: Agency for Healthcare Research and Quality; 2009.

Usmanova G, Gresh A, Cohen MA, Kim Y, Srivastava A, Joshi CS, et al. Acceptability and barriers to use of the ASMAN provider-facing electronic platform for Peripartum Care in Public Facilities in Madhya Pradesh and Rajasthan, India: a qualitative study using the technology acceptance Model-3. Int J Environ Res Public Health. 2020;17:8333.

Singh K, Johnson L, Devarajan R, Shivashankar R, Sharma P, Kondal D, et al. Acceptability of a decision-support electronic health record system and its impact on diabetes care goals in South Asia: a mixed-methods evaluation of the CARRS trial. Diabet Med. 2018;35:1644–54.

Holden RJ. Physicians’ beliefs about using EMR and CPOE: In pursuit of a contextualized understanding of health it use behavior. Int J Med Inform. 2010;79:71–80.

Devine EB, Williams EC, Martin DP, Sittig DF, Tarczy-Hornoch P, Payne TH, et al. Prescriber and staff perceptions of an electronic prescribing system in primary care: A qualitative assessment. BMC Med Inform Decis Mak. 2010;10:72.

Shibl R, Lawley M, Debuse J. Factors influencing decision support system acceptance. Decis Support Syst. 2013;54:953–61.

Abell B, Naicker S, Rodwell D, Donovan T, Tariq A, Baysari M, et al. Identifying barriers and facilitators to successful implementation of computerized clinical decision support systems in hospitals: a NASSS framework-informed scoping review. Implement Sci. 2023;18:32.

Kilsdonk E, Peute LW, Jaspers MWM. Factors influencing implementation success of guideline-based clinical decision support systems: A systematic review and gaps analysis. Int J Med Inform. 2017;98:56–64.

Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13:319–40.

Venkatesh V, Davis FD. Theoretical extension of the Technology Acceptance Model: Four longitudinal field studies. Manage Sci. 2000;46:186–204.

Söling S, Demirer I, Köberlein-Neu J, Hower KI, Müller BS, Pfaff H, et al. Complex implementation mechanisms in primary care: do physicians’ beliefs about the effectiveness of innovation play a mediating role? Applying a realist inquiry and structural equation modeling approach in a formative evaluation study. BMC Prim Care. 2023;24:1–14.

Birken SA, Bunger AC, Powell BJ, Turner K, Clary AS, Klaman SL, et al. Organizational theory for dissemination and implementation research. Implement Sci. 2017;12:1–15.

Vance Wilson E, Lankton NK. Modeling patients’ acceptance of provider-delivered E-health. J Am Med Informatics Assoc. 2004;11:241–8.

Bandura A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol Rev. 1977;84:191–215.

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12:1–18.

Eccles M, Grimshaw J, Walker A, Johnston M, Pitts N. Changing the behavior of healthcare professionals: The use of theory in promoting the uptake of research findings. J Clin Epidemiol. 2005;58:107–12.

Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7:1–17.

Liberati EG, Ruggiero F, Galuppo L, Gorli M, González-Lorenzo M, Maraldi M, et al. What hinders the uptake of computerized decision support systems in hospitals? A qualitative study and framework for implementation. Implement Sci. 2017;12:1–13.

Pratt MG, Rockmann KW, Kaufmann JB. Constructing professional identity: The role of work and identity learning cycles in the customization of identity among medical residents. Acad Manag J. 2006;49:235–62.

Beane M, Orlikowski WJ. What difference does a robot make? The material enactment of distributed coordination. Organ Sci. 2015;26:1553–73.

Reay T, Goodrick E, Waldorff SB, Casebeer A. Getting leopards to change their spots: Co-creating a new professional role identity. Acad Manag J. 2017;60(3):1043–70.

Hu PJH, Chau PYK, Liu Sheng OR. Adoption of telemedicine technology by health care organizations: An exploratory study. J Organ Comput Electron Commer. 2002;12:197–221.

Alohali M, Carton F, O’Connor Y. Investigating the antecedents of perceived threats and user resistance to health information technology: a case study of a public hospital. J Decis Syst. 2020;29:27–52.

McParland CR, Cooper MA, Johnston B. Differential Diagnosis Decision Support Systems in Primary and Out-of-Hours Care: A Qualitative Analysis of the Needs of Key Stakeholders in Scotland. J Prim Care Community Heal. 2019;10:57–61.

Lapointe L, Rivard S. A multilevel model of resistance to information technology implementation. MIS Q. 2005;29:461–91.

Craig K, Thatcher JB, Grover V. The IT identity threat: A conceptual definition and operational measure. J Manag Inf Syst. 2019;36:259–88.

Jussupow E, Spohrer K, Heinzl A. Identity Threats as a Reason for Resistance to Artificial Intelligence: Survey Study With Medical Students and Professionals. JMIR Form Res. 2022;6(3):e28750.

Jeffery AD, Novak LL, Kennedy B, Dietrich MS, Mion LC. Participatory design of probability-based decision support tools for in-hospital nurses. J Am Med Informatics Assoc. 2017;24:1102–10.

Richardson JE, Ash JS. A clinical decision support needs assessment of community-based physicians. J Am Med Informatics Assoc. 2011;18:28–35.

Walter Z, Lopez MS. Physician acceptance of information technologies: Role of perceived threat to professional autonomy. Decis Support Syst. 2008;46:206–15.

Jussupow E, Spohrer K, Heinzl A, Link C. I am; We are - Conceptualizing Professional Identity Threats from Information Technology. 2019.

Karunakaran A. Status-Authority Asymmetry between Professions: The Case of 911 Dispatchers and Police Officers. Adm Sci Q. 2022;67:423–68.

Tripsas M. Technology, identity, and inertia through the lens of “The Digital Photography Company.” Organ Sci. 2009;20:441–60.

Chreim S, Williams BE, Hinings CR. Interlevel influences on the reconstruction of professional role identity. Acad Manag J. 2007;50:1515–39.

Ibarra H. Provisional selves: Experimenting with image and identity in professional adaptation. Adm Sci Q. 1999;44:764–91.

Jussupow E, Spohrer K, Dibbern J, Heinzl A. AI changes who we are-Doesn’t IT? Intelligent decision support and physicians’ professional identity. In: Proceedings of the Twenty-Sixth European Conference on Information Systems, Portsmouth, UK, 2018. pp. 1–11.

Freidson E. The Reorganization of the Medical Profession. Med Care Res Rev. 1985;42:11–35.

Burke PJ, Stets JE. Trust and Commitment through Self-Verification. Soc Psychol Q. 1999;62:347–66.

Mishra AN, Anderson C, Angst CM, Agarwal R. Electronic Health Records Assimilation and Physician Identity Evolution: An Identity Theory Perspective. Inf Syst Res. 2012;23:738–60.

Mirbabaie M, Brünker F, Möllmann Frick NRJ, Stieglitz S. The rise of artificial intelligence – understanding the AI identity threat at the workplace. Electron Mark. 2022;32:73–99.

Klaus T, Blanton JE. User resistance determinants and the psychological contract in enterprise system implementations. Eur J Inf Syst. 2010;19:625–36.

Sambasivan M, Esmaeilzadeh P, Kumar N, Nezakati H. Intention to adopt clinical decision support systems in a developing country: Effect of Physician’s perceived professional autonomy, involvement and belief: A cross-sectional study. BMC Med Inform Decis Mak. 2012;12:1–8.

Elsbach KD. Relating physical environment to self-categorizations: Identity threat and affirmation in a non-territorial office space. Adm Sci Q. 2003;48(4):622–54.

Carter M, Grover V. Me, my self, and I(T). MIS Quart. 2015;39:931–58.

Jensen TB, Aanestad M. Hospitality and hostility in hospitals: A case study of an EPR adoption among surgeons. Eur J Inf Syst. 2007;16:672–80.

Bernardi R, Exworthy M. Clinical managers’ identity at the crossroad of multiple institutional logics in it innovation: The case study of a health care organization in England. Inf Syst J. 2020;30:566–95.

Doolin B. Power and resistance in the implementation of a medical management information system. Inf Syst J. 2004;14:343–62.

Barrett M, Oborn E, Orlikowski WJ, Yates J. Reconfiguring Boundary Relations: Robotic Innovations in Pharmacy Work. Organ Sci. 2012;23:1448–66.

Kellogg KC. Subordinate Activation Tactics: Semi-professionals and Micro-level Institutional Change in Professional Organizations. Adm Sci Q. 2019;64:928–75.

Pratt MG. The good, the bad, and the ambivalent: Managing identification among Amway distributors. Adm Sci Q. 2000;45:456–93.

Yusof MM, Kuljis J, Papazafeiropoulou A, Stergioulas LK. An evaluation framework for Health Information Systems: human, organization and technology-fit factors (HOT-fit). Int J Med Inform. 2008;77:386–98.

Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ Inf. 2018;34:285–91.

Abdellatif A, Bouaud J, Lafuente-Lafuente C, Belmin J, Séroussi B. Computerized Decision Support Systems for Nursing Homes: A Scoping Review. J Am Med Dir Assoc. 2021;22:984–94.

Gioia DA, Chittipeddi K. Sensemaking and sensegiving in strategic change initiation. Strateg Manag J. 1991;12:433–48.

Gioia DA, Corley KG, Hamilton AL. Seeking Qualitative Rigor in Inductive Research: Notes on the Gioia Methodology. Organ Res Methods. 2012;16:15–31.

Corbin JM, Strauss AL. Basics of qualitative research. Techniques and procedures for developing grounded theory. Los Angeles: Sage; 2015.

O’Connor C, Joffe H. Intercoder Reliability in Qualitative Research: Debates and Practical Guidelines. Int J Qual Methods. 2020;19:1–13.

Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Can J Stat. 1999;27:3–23.

Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Heal Ment Heal Serv Res. 2011;38:4–23.

Clarivate. 2022 Journal Impact Factor, Journal Citation Reports. 2023.

Damanpour F, Walker RM, Avellaneda CN. Combinative effects of innovation types and organizational performance: A longitudinal study of service organizations. J Manag Stud. 2009;46:650–75.

Ploegmakers KJ, Medlock S, Linn AJ, Lin Y, Seppälä LJ, Petrovic M, et al. Barriers and facilitators in using a Clinical Decision Support System for fall risk management for older people: a European survey. Eur Geriatr Med. 2022;13:395–405.

Laka M, Milazzo A, Merlin T. Factors that impact the adoption of clinical decision support systems (Cdss) for antibiotic management. Int J Environ Res Public Health. 2021;18:1–14.

Masterson Creber RM, Dayan PS, Kuppermann N, Ballard DW, Tzimenatos L, Alessandrini E, et al. Applying the RE-AIM Framework for the Evaluation of a Clinical Decision Support Tool for Pediatric Head Trauma: A Mixed-Methods Study. Appl Clin Inform. 2018;9:693–703.

de Watteville A, Pielmeier U, Graf S, Siegenthaler N, Plockyn B, Andreassen S, et al. Usability study of a new tool for nutritional and glycemic management in adult intensive care: Glucosafe 2. J Clin Monit Comput. 2021;35:525–35.

Feldstein AC, Schneider JL, Unitan R, Perrin NA, Smith DH, Nichols GA, et al. Health care worker perspectives inform optimization of patient panel-support tools: A qualitative study. Popul Health Manag. 2013;16:107–19.

Jansen-Kosterink S, van Velsen L, Cabrita M. Clinician acceptance of complex clinical decision support systems for treatment allocation of patients with chronic low back pain. BMC Med Inform Decis Mak. 2021;21:137.

Russ AL, Zillich AJ, McManus MS, Doebbeling BN, Saleem JJ. Prescribers’ interactions with medication alerts at the point of prescribing: A multi-method, in situ investigation of the human-computer interaction. Int J Med Inform. 2012;81:232–43.

Cresswell K, Callaghan M, Mozaffar H, Sheikh A. NHS Scotland’s Decision Support Platform: A formative qualitative evaluation. BMJ Heal Care Informatics. 2019;26:1–9.

Harry ML, Truitt AR, Saman DM, Henzler-Buckingham HA, Allen CI, Walton KM, et al. Barriers and facilitators to implementing cancer prevention clinical decision support in primary care: a qualitative study. BMC Health Serv Res. 2019;19:534.

Catho G, Centemero NS, Catho H, Ranzani A, Balmelli C, Landelle C, et al. Factors determining the adherence to antimicrobial guidelines and the adoption of computerised decision support systems by physicians: A qualitative study in three European hospitals. Int J Med Inform. 2020;141:104233.

Liu X, Barreto EF, Dong Y, Liu C, Gao X, Tootooni MS, et al. Discrepancy between perceptions and acceptance of clinical decision support Systems: implementation of artificial intelligence for vancomycin dosing. BMC Med Inform Decis Mak. 2023;23:1–9.

Zaidi STR, Marriott JL. Barriers and facilitators to adoption of a web-based antibiotic decision support system. South Med Rev. 2012;5:42–9.

Singh D, Spiers S, Beasley BW. Characteristics of CPOE systems and obstacles to implementation that physicians believe will affect adoption. South Med J. 2011;104:418–21.

Agarwal R, Angst CM, DesRoches CM, Fischer MA. Technological viewpoints (frames) about electronic prescribing in physician practices. J Am Med Informatics Assoc. 2010;17:425–31.

Hains IM, Ward RL, Pearson SA. Implementing a web-based oncology protocol system in Australia: Evaluation of the first 3 years of operation. Intern Med J. 2012;42:57–64.

Fossum M, Ehnfors M, Fruhling A, Ehrenberg A. An evaluation of the usability of a computerized decision support system for nursing homes. Appl Clin Inform. 2011;2:420–36.

Sukums F, Mensah N, Mpembeni R, Massawe S, Duysburgh E, Williams A, et al. Promising adoption of an electronic clinical decision support system for antenatal and intrapartum care in rural primary healthcare facilities in sub-Saharan Africa: The QUALMAT experience. Int J Med Inform. 2015;84:647–57.

Cracknell AV. Healthcare professionals’ attitudes of implementing a chemotherapy electronic prescribing system: A mixed methods study. J Oncol Pharm Pract. 2020;26:1164–71.

Peute LW, Aarts J, Bakker PJM, Jaspers MWM. Anatomy of a failure: A sociotechnical evaluation of a laboratory physician order entry system implementation. Int J Med Inform. 2010;79:e58-70.

Sedlmayr B, Patapovas A, Kirchner M, Sonst A, Müller F, Pfistermeister B, et al. Comparative evaluation of different medication safety measures for the emergency department: Physicians’ usage and acceptance of training, poster, checklist and computerized decision support. BMC Med Inform Decis Mak. 2013;13.

Noormohammad SF, Mamlin BW, Biondich PG, McKown B, Kimaiyo SN, Were MC. Changing course to make clinical decision support work in an HIV clinic in Kenya. Int J Med Inform. 2010;79(3):204–10.

Hsu WWQ, Chan EWY, Zhang ZJ, Lin ZX, Bian ZX, Wong ICK. Chinese medicine students’ views on electronic prescribing: A survey in Hong Kong. Eur J Integr Med. 2015;7:47–54.

Kortteisto T, Komulainen J, Mäkelä M, Kunnamo I, Kaila M. Clinical decision support must be useful, functional is not enough: A qualitative study of computer-based clinical decision support in primary care. BMC Health Serv Res. 2012;12.

Koskela T, Sandström S, Mäkinen J, Liira H. User perspectives on an electronic decision-support tool performing comprehensive medication reviews - A focus group study with physicians and nurses. BMC Med Inform Decis Mak. 2016;16:1–9.

Zhai Y, Yu Z, Zhang Q, Qin W, Yang C, Zhang Y. Transition to a new nursing information system embedded with clinical decision support: a mixed-method study using the HOT-fit framework. BMC Med Inform Decis Mak. 2022;22:1–20.

Abidi S, Vallis M, Piccinini-Vallis H, Imran SA, Abidi SSR. Diabetes-related behavior change knowledge transfer to primary care practitioners and patients: Implementation and evaluation of a digital health platform. JMIR Med Informatics. 2018;6:e9629.

Greenberg JK, Otun A, Nasraddin A, Brownson RC, Kuppermann N, Limbrick DD, et al. Electronic clinical decision support for children with minor head trauma and intracranial injuries: a sociotechnical analysis. BMC Med Inform Decis Mak. 2021;21:1–11.

Trafton J, Martins S, Michel M, Lewis E, Wang D, Combs A, et al. Evaluation of the acceptability and usability of a decision support system to encourage safe and effective use of opioid therapy for chronic, noncancer pain by primary care providers. Pain Med. 2010;11:575–85.

Berge GT, Granmo OC, Tveit TO, Munkvold BE, Ruthjersen AL, Sharma J. Machine learning-driven clinical decision support system for concept-based searching: a field trial in a Norwegian hospital. BMC Med Inform Decis Mak. 2023;23:1–15.

Chung P, Scandlyn J, Dayan PS, Mistry RD. Working at the intersection of context, culture, and technology: Provider perspectives on antimicrobial stewardship in the emergency department using electronic health record clinical decision support. Am J Infect Control. 2017;45:1198–202.

Arts DL, Medlock SK, Van Weert HCPM, Wyatt JC, Abu-Hanna A. Acceptance and barriers pertaining to a general practice decision support system for multiple clinical conditions: A mixed methods evaluation. PLoS ONE. 2018;13:e0193187.

Ash JS, Chase D, Baron S, Filios MS, Shiffman RN, Marovich S, et al. Clinical Decision Support for Worker Health: A Five-Site Qualitative Needs Assessment in Primary Care Settings. Appl Clin Inform. 2020;11:635–43.

English D, Ankem K, English K. Acceptance of clinical decision support surveillance technology in the clinical pharmacy. Informatics Heal Soc Care. 2017;42:135–52.

Gezer M, Hunter B, Hocking JS, Manski-Nankervis JA, Goller JL. Informing the design of a digital intervention to support sexually transmissible infection care in general practice: a qualitative study exploring the views of clinicians. Sex Health 2023.

Helldén A, Al-Aieshy F, Bastholm-Rahmner P, Bergman U, Gustafsson LL, Höök H, et al. Development of a computerised decisions support system for renal risk drugs targeting primary healthcare. BMJ Open. 2015;5:1–9.

Hinderer M, Boeker M, Wagner SA, Binder H, Ückert F, Newe S, et al. The experience of physicians in pharmacogenomic clinical decision support within eight German university hospitals. Pharmacogenomics. 2017;18:773–85.

Jeffries M, Salema NE, Laing L, Shamsuddin A, Sheikh A, Avery A, et al. The implementation, use and sustainability of a clinical decision support system for medication optimisation in primary care: A qualitative evaluation. PLoS ONE. 2021;16:e0250946.

Kanagasundaram NS, Bevan MT, Sims AJ, Heed A, Price DA, Sheerin NS. Computerized clinical decision support for the early recognition and management of acute kidney injury: A qualitative evaluation of end-user experience. Clin Kidney J. 2016;9:57–62.

Kastner M, Li J, Lottridge D, Marquez C, Newton D, Straus SE. Development of a prototype clinical decision support tool for osteoporosis disease management: A qualitative study of focus groups. BMC Med Inform Decis Mak. 2010;10:1–15.

Khajouei R, Wierenga PC, Hasman A, Jaspers MW. Clinicians satisfaction with CPOE ease of use and effect on clinicians’ workflow, efficiency and medication safety. Int J Med Inform. 2011;80(5):297–309.

Langton JM, Blanch B, Pesa N, Park JM, Pearson SA. How do medical doctors use a web-based oncology protocol system? A comparison of Australian doctors at different levels of medical training using logfile analysis and an online survey. BMC Med Inform Decis Mak. 2013;13:1–11.

Litvin CB, Ornstein SM, Wessell AM, Nemeth LS, Nietert PJ. Adoption of a clinical decision support system to promote judicious use of antibiotics for acute respiratory infections in primary care. Int J Med Inform. 2012;81:521–6.

Pratt R, Saman DM, Allen C, Crabtree B, Ohnsorg K, Sperl-Hillen JAM, et al. Assessing the implementation of a clinical decision support tool in primary care for diabetes prevention: a qualitative interview study using the Consolidated Framework for Implementation Science. BMC Med Inform Decis Mak. 2022;22:1–9.

Robertson J, Moxey AJ, Newby DA, Gillies MB, Williamson M, Pearson SA. Electronic information and clinical decision support for prescribing: State of play in Australian general practice. Fam Pract. 2011;28:93–101.

Rock C, Abosi O, Bleasdale S, Colligan E, Diekema DJ, Dullabh P, et al. Clinical Decision Support Systems to Reduce Unnecessary Clostridioides difficile Testing Across Multiple Hospitals. Clin Infect Dis. 2022;75:1187–93.

Roebroek LO, Bruins J, Delespaul P, Boonstra A, Castelein S. Qualitative analysis of clinicians’ perspectives on the use of a computerized decision aid in the treatment of psychotic disorders. BMC Med Inform Decis Mak. 2020;20(1):1–12.

Salwei ME, Carayon P, Hoonakker PLT, Hundt AS, Wiegmann D, Pulia M, et al. Workflow integration analysis of a human factors-based clinical decision support in the emergency department. Appl Ergon. 2021;97:103498.

Sayood SJ, Botros M, Suda KJ, Foraker R, Durkin MJ. Attitudes toward using clinical decision support in community pharmacies to promote antibiotic stewardship. J Am Pharm Assoc. 2021;61(5):565–71.

Seliaman ME, Albahly MS. The Reasons for Physicians and Pharmacists’ Acceptance of Clinical Support Systems in Saudi Arabia. Int J Environ Res Public Health. 2023;20.

Sheehan B, Nigrovic LE, Dayan PS, Kuppermann N, Ballard DW, Alessandrini E, et al. Informing the design of clinical decision support services for evaluation of children with minor blunt head trauma in the emergency department: A sociotechnical analysis. J Biomed Inform. 2013;46:905–13.

Shi Y, Amill-Rosario A, Rudin RS, Fischer SH, Shekelle P, Scanlon DP, et al. Barriers to using clinical decision support in ambulatory care: Do clinics in health systems fare better? J Am Med Informatics Assoc. 2021;28:1667–75.

Snyder ME, Adeoye-Olatunde OA, Gernant SA, DiIulio J, Jaynes HA, Doucette WR, et al. A user-centered evaluation of medication therapy management alerts for community pharmacists: Recommendations to improve usability and usefulness. Res Soc Adm Pharm. 2021;17:1433–43.

Van Biesen W, Van Cauwenberge D, Decruyenaere J, Leune T, Sterckx S. An exploration of expectations and perceptions of practicing physicians on the implementation of computerized clinical decision support systems using a Qsort approach. BMC Med Inform Decis Mak. 2022;22:1–10.

Vandenberg AE, Vaughan CP, Stevens M, Hastings SN, Powers J, Markland A, et al. Improving geriatric prescribing in the ED: a qualitative study of facilitators and barriers to clinical decision support tool use. Int J Qual Heal Care. 2017;29:117–23.

Westerbeek L, de Bruijn GJ, van Weert HC, Abu-Hanna A, Medlock S, van Weert JCM. General Practitioners’ needs and wishes for clinical decision support Systems: A focus group study. Int J Med Inform. 2022;168: 104901.

Cranfield S, Hendy J, Reeves B, Hutchings A, Collin S, Fulop N. Investigating healthcare IT innovations: a “conceptual blending” approach. J Health Organ Manag. 2015;29:1131–48.

Jeon J, Taneva S, Kukreti V, Trbovich P, Easty AC, Rossos PG, Cafazzo JA. Toward successful migration to computerized physician order entry for chemotherapy. Curr Oncol. 2014;21(2):221–8.

Patel VL, Shortliffe EH, Stefanelli M, Szolovits P, Berthold MR, Bellazzi R, et al. The Coming of Age of Artificial Intelligence in Medicine. Artif Intell Med. 2009;46:5–17.

Finley EP, Schneegans S, Tami C, Pugh MJ, McGeary D, Penney L, Sharpe Potter J. Implementing prescription drug monitoring and other clinical decision support for opioid risk mitigation in a military health care setting: a qualitative feasibility study. J Am Med Inform Assoc. 2018;25(5):515–22.

Lugtenberg M, Weenink JW, Van Der Weijden T, Westert GP, Kool RB. Implementation of multiple-domain covering computerized decision support systems in primary care: A focus group study on perceived barriers. BMC Med Inform Decis Mak. 2015;15:1–11.

Chow A, Lye DCB, Arah OA. Psychosocial determinants of physicians’ acceptance of recommendations by antibiotic computerised decision support systems: A mixed methods study. Int J Antimicrob Agents. 2015;45:295–304.

Ford E, Edelman N, Somers L, Shrewsbury D, Lopez Levy M, van Marwijk H, et al. Barriers and facilitators to the adoption of electronic clinical decision support systems: a qualitative interview study with UK general practitioners. BMC Med Inform Decis Mak. 2021;21:1–13.

Jung SY, Hwang H, Lee K, Lee HY, Kim E, Kim M, et al. Barriers and facilitators to implementation of medication decision support systems in electronic medical records: Mixed methods approach based on structural equation modeling and qualitative analysis. JMIR Med Informatics. 2020;8:1–14.

Mozaffar H, Cresswell K, Williams R, Bates DW, Sheikh A. Exploring the roots of unintended safety threats associated with the introduction of hospital ePrescribing systems and candidate avoidance and/or mitigation strategies: A qualitative study. BMJ Qual Saf. 2017;26:722–33.

McDermott L, Yardley L, Little P, Ashworth M, Gulliford M. Developing a computer delivered, theory based intervention for guideline implementation in general practice. BMC Fam Pract. 2010;11.

Rieckert A, Teichmann AL, Drewelow E, Kriechmayr C, Piccoliori G, Woodham A, et al. Reduction of inappropriate medication in older populations by electronic decision support (the PRIMA-eDS project): A survey of general practitioners’ experiences. J Am Med Informatics Assoc. 2019;26:1323–32.

Anderson JA, Godwin KM, Saleem JJ, Russell S, Robinson JJ, Kimmel B. Accessibility, usability, and usefulness of a Web-based clinical decision support tool to enhance provider-patient communication around Self-management to Prevent (STOP) Stroke. Health Informatics J. 2014;20:261–74.

Carayon P, Cartmill R, Blosky MA, Brown R, Hackenberg M, Hoonakker P, et al. ICU nurses’ acceptance of electronic health records. J Am Med Informatics Assoc. 2011;18:812–9.

Garabedian PM, Gannon MP, Aaron S, Wu E, Burns Z, Samal L. Human-centered design of clinical decision support for management of hypertension with chronic kidney disease. BMC Med Inform Decis Mak. 2022;22:1–12.

Jeffries M, Salema NE, Laing L, Shamsuddin A, Sheikh A, Avery T, Keers RN. Using sociotechnical theory to understand medication safety work in primary care and prescribers’ use of clinical decision support: a qualitative study. BMJ Open. 2023;13(4):e068798.

Lugtenberg M, Pasveer D, van der Weijden T, Westert GP, Kool RB. Exposure to and experiences with a computerized decision support intervention in primary care: results from a process evaluation. BMC Fam Pract. 2015;16(1):1–10.

Mozaffar H, Cresswell KM, Lee L, Williams R, Sheikh A; NIHR ePrescribing Programme Team. Taxonomy of delays in the implementation of hospital computerized physician order entry and clinical decision support systems for prescribing: A longitudinal qualitative study. BMC Med Inform Decis Mak. 2016;16:1–14.

Tabla S, Calafiore M, Legrand B, Descamps A, Andre C, Rochoy M, et al. Artificial Intelligence and Clinical Decision Support Systems or Automated Interpreters: What Characteristics Are Expected by French General Practitioners? Stud Health Technol Inform. 2022;290:887–91.

Thomas CP, Kim M, McDonald A, Kreiner P, Kelleher SJ, Blackman MB, et al. Prescribers’ expectations and barriers to electronic prescribing of controlled substances. J Am Med Informatics Assoc. 2012;19:375–81.

Yui BH, Jim WT, Chen M, Hsu JM, Liu CY, Lee TT. Evaluation of computerized physician order entry system- A satisfaction survey in Taiwan. J Med Syst. 2012;36:3817–24.

Zha H, Liu K, Tang T, Yin Y-H, Dou B, Jiang L, et al. Acceptance of clinical decision support system to prevent venous thromboembolism among nurses: an extension of the UTAUT model. BMC Med Inform Decis Mak. 2022;22:1–12.

Abramson EL, Patel V, Malhotra S, Pfoh ER, Nena Osorio S, Cheriff A, et al. Physician experiences transitioning between an older versus newer electronic health record for electronic prescribing. Int J Med Inform. 2012;81:539–48.

Cresswell K, Lee L, Mozaffar H, Williams R, Sheikh A, Robertson A, et al. Sustained User Engagement in Health Information Technology: The Long Road from Implementation to System Optimization of Computerized Physician Order Entry and Clinical Decision Support Systems for Prescribing in Hospitals in England. Health Serv Res. 2017;52:1928–57.

Klarenbeek SE, Schuurbiers-Siebers OCJ, van den Heuvel MM, Prokop M, Tummers M. Barriers and facilitators for implementation of a computerized clinical decision support system in lung cancer multidisciplinary team meetings—a qualitative assessment. Biology (Basel). 2021;10:1–15.

Abdel-Qader DH, Cantrill JA, Tully MP. Satisfaction predictors and attitudes towards electronic prescribing systems in three UK hospitals. Pharm World Sci. 2010;32:581–93.

Ballard DW, Rauchwerger AS, Reed ME, Vinson DR, Mark DG, Offerman SR, et al. Emergency physicians’ knowledge and attitudes of clinical decision support in the electronic health record: A survey-based study. Acad Emerg Med. 2013;20:352–60.

Buenestado D, Elorz J, Pérez-Yarza EG, Iruetaguena A, Segundo U, Barrena R, et al. Evaluating acceptance and user experience of a guideline-based clinical decision support system execution platform. J Med Syst. 2013;37:1–9.

Huguet N, Ezekiel-Herrera D, Gunn R, Pierce A, O’Malley J, Jones M, Gold R. Uptake of a Cervical Cancer Clinical Decision Support Tool: A Mixed-Methods Study. Appl Clin Inform. 2023;14(03):594–9.

Paulsen MM, Varsi C, Paur I, Tangvik RJ, Andersen LF. Barriers and facilitators for implementing a decision support system to prevent and treat disease-related malnutrition in a hospital setting: Qualitative study. JMIR Form Res. 2019;3.

Varsi C, Andersen LF, Koksvik GT, Severinsen F, Paulsen MM. Intervention-related, contextual and personal factors affecting the implementation of an evidence-based digital system for prevention and treatment of malnutrition in elderly institutionalized patients: a qualitative study. BMC Health Serv Res. 2023;23:1–12.

Zhai Y, Yu Z, Zhang Q, Zhang YX. Barriers and facilitators to implementing a nursing clinical decision support system in a tertiary hospital setting: A qualitative study using the FITT framework. Int J Med Inform. 2022;166:104841.

Carland JE, Elhage T, Baysari MT, Stocker SL, Marriott DJE, Taylor N, et al. Would they trust it? An exploration of psychosocial and environmental factors affecting prescriber acceptance of computerised dose-recommendation software. Br J Clin Pharmacol. 2021;87:1215–33.

Wannheden C, Hvitfeldt-Forsberg H, Eftimovska E, Westling K, Ellenius J. Boosting Quality Registries with Clinical Decision Support Functionality. Methods Inf Med. 2017;56:339–43.

Hsiao JL, Wu WC, Chen RF. Factors of accepting pain management decision support systems by nurse anesthetists. BMC Med Inform Decis Mak. 2013;13.

Jeng DJF, Tzeng GH. Social influence on the use of Clinical Decision Support Systems: Revisiting the Unified Theory of Acceptance and Use of Technology by the fuzzy DEMATEL technique. Comput Ind Eng. 2012;62:819–28.

Liu Y, Hao H, Sharma MM, Harris Y, Scofi J, Trepp R, et al. Clinician Acceptance of Order Sets for Pain Management: A Survey in Two Urban Hospitals. Appl Clin Inform. 2022;13:447–55.

Zakane SA, Gustafsson LL, Tomson G, Loukanova S, Sié A, Nasiell J, et al. Guidelines for maternal and neonatal “point of care”: Needs of and attitudes towards a computerized clinical decision support system in rural Burkina Faso. Int J Med Inform. 2014;83:459–69.

Mertz E, Bolarinwa O, Wides C, Gregorich S, Simmons K, Vaderhobli R, et al. Provider Attitudes Toward the Implementation of Clinical Decision Support Tools in Dental Practice. J Evid Based Dent Pract. 2015;15:152–63.

De Vries AE, Van Der Wal MHL, Nieuwenhuis MMW, De Jong RM, Van Dijk RB, Jaarsma T, et al. Perceived barriers of heart failure nurses and cardiologists in using clinical decision support systems in the treatment of heart failure patients. BMC Med Inform Decis Mak. 2013;13.

Ahmad N, Du S, Ahmed F, ul Amin N, Yi X. Healthcare professionals satisfaction and AI-based clinical decision support system in public sector hospitals during health crises: a cross-sectional study. Inf Technol Manag. 2023:1–13.

Van Cauwenberge D, Van Biesen W, Decruyenaere J, Leune T, Sterckx S. “Many roads lead to Rome and the Artificial Intelligence only shows me one road”: an interview study on physician attitudes regarding the implementation of computerised clinical decision support systems. BMC Med Ethics. 2022;23:1–14.

Wijnhoven F. Organizational Learning for Intelligence Amplification Adoption: Lessons from a Clinical Decision Support System Adoption Project. Inf Syst Front. 2022;24:731–44.

Sittig DF, Wright A, Simonaitis L, Carpenter JD, Allen GO, Doebbeling BN, et al. The state of the art in clinical knowledge management: An inventory of tools and techniques. Int J Med Inform. 2010;79:44–57.

Simon SR, Keohane CA, Amato M, Coffey M, Cadet B, Zimlichman E, et al. Lessons learned from implementation of computerized provider order entry in 5 community hospitals: A qualitative study. BMC Med Inform Decis Mak. 2013;13.

Hor CP, O’Donnell JM, Murphy AW, O’Brien T, Kropmans TJB. General practitioners’ attitudes and preparedness towards Clinical Decision Support in e-Prescribing (CDS-eP) adoption in the West of Ireland: a cross sectional study. BMC Med Inform Decis Mak. 2010;10:2.

Abejirinde IOO, Zweekhorst M, Bardají A, Abugnaba-Abanga R, Apentibadek N, De Brouwere V, et al. Unveiling the black box of diagnostic and clinical decision support systems for antenatal care: Realist evaluation. JMIR MHealth UHealth. 2018;6:e11468.

Charani E, Kyratsis Y, Lawson W, Wickens H, Brannigan ET, Moore LSP, et al. An analysis of the development and implementation of a smartphone application for the delivery of antimicrobial prescribing policy: Lessons learnt. J Antimicrob Chemother. 2013;68:960–7.

Patel R, Green W, Shahzad MW, Larkin C. Use of mobile clinical decision support software by junior doctors at a UK Teaching Hospital: Identification and evaluation of barriers to engagement. JMIR Mhealth Uhealth. 2015;3(3):e4388.

Hsiao JL, Chen RF. Critical factors influencing physicians’ intention to use computerized clinical practice guidelines: An integrative model of activity theory and the technology acceptance model. BMC Med Inform Decis Mak. 2015;16:1–15.

Khan S, McCullagh L, Press A, Kharche M, Schachter A, Pardo S, et al. Formative assessment and design of a complex clinical decision support tool for pulmonary embolism. Evid Based Med. 2016;21:7–13.

Randell R, Dowding D. Organisational influences on nurses’ use of clinical decision support systems. Int J Med Inform. 2010;79:412–21.

Frisinger A, Papachristou P. The voice of healthcare: introducing digital decision support systems into clinical practice-a qualitative study. BMC Prim Care. 2023;24(1):67.

Ifinedo P. Using an Extended Theory of Planned Behavior to Study Nurses’ Adoption of Healthcare Information Systems in Nova Scotia. Int J Technol Diffus. 2017;8:1–17.

Maslej MM, Kloiber S, Ghassemi M, Yu J, Hill SL. Out with AI, in with the psychiatrist: a preference for human-derived clinical decision support in depression care. Transl Psychiatry. 2023;13:1–9.

Jeffries M, Keers RN, Phipps DL, Williams R, Brown B, Avery AJ, et al. Developing a learning health system: Insights from a qualitative process evaluation of a pharmacist-led electronic audit and feedback intervention to improve medication safety in primary care. PLoS ONE. 2018;13:1–16.

Malo C, Neveu X, Archambault PM, Émond M, Gagnon MP. Exploring nurses’ intention to use a computerized platform in the resuscitation unit: Development and validation of a questionnaire based on the theory of planned behavior. J Med Internet Res. 2012;14.

Porter A, Dale J, Foster T, Logan P, Wells B, Snooks H. Implementation and use of computerised clinical decision support (CCDS) in emergency pre-hospital care: A qualitative study of paramedic views and experience using Strong Structuration Theory. Implement Sci. 2018;13:1–10.

Teferi GH, Wonde TE, Tadele MM, Assaye BT, Hordofa ZR, Ahmed MH, et al. Perception of physicians towards electronic prescription system and associated factors at resource limited setting 2021: Cross sectional study. PLoS ONE. 2022;17:1–11.

Wrzosek N, Zimmermann A, Balwicki Ł. Doctors’ perceptions of e-prescribing upon its mandatory adoption in Poland, using the unified theory of acceptance and use of technology method. Healthcare. 2020;8.

O’Sullivan D, Doyle J, Michalowski W, Wilk S, Thomas R, Farion K. Expanding usability analysis with intrinsic motivation concepts to learn about CDSS adoption: A case study. Health Policy Technol. 2014;3(2):113–25.

Wickström H, Tuvesson H, Öien R, Midlöv P, Fagerström C. Health Care Staff’s Experiences of Engagement When Introducing a Digital Decision Support System for Wound Management: Qualitative Study. JMIR Hum Factors. 2020;7:1–10.

Overby CL, Erwin AL, Abul-Husn NS, Ellis SB, Scott SA, Obeng AO, et al. Physician attitudes toward adopting genome-guided prescribing through clinical decision support. J Pers Med. 2014;4:35–49.

Elnahal SM, Joynt KE, Bristol SJ, Jha AK. Electronic health record functions differ between best and worst hospitals. Am J Manag Care. 2011;17(4):e121.

Grout RW, Cheng ER, Carroll AE, Bauer NS, Downs SM. A six-year repeated evaluation of computerized clinical decision support system user acceptability. Int J Med Inform. 2018;112:74–81.

Sicotte C, Taylor L, Tamblyn R. Predicting the use of electronic prescribing among early adopters in primary care. 2013;59.

Pevnick JM, Asch SM, Adams JL, Mattke S, Patel MH, Ettner SL, et al. Adoption and use of stand-alone electronic prescribing in a health plan-sponsored initiative. Am J Manag Care. 2010;16:182–9.

Meulendijk M, Spruit M, Drenth-Van Maanen C, Numans M, Brinkkemper S, Jansen P. General practitioners’ attitudes towards decision-supported prescribing: An analysis of the Dutch primary care sector. Health Informatics J. 2013;19:247–63.

Holden RJ, Karsh BT. The Technology Acceptance Model: Its past and its future in health care. J Biomed Inform. 2010;43:159–72.

Edmondson AC. Speaking up in the operating room: How team leaders promote learning in interdisciplinary action teams. J Manag Stud. 2003;40:1419–52.

Heinze KL, Heinze JE. Individual innovation adoption and the role of organizational culture. Rev Manag Sci. 2020;14:561–86.

Nembhard IM, Edmondson AC. Making it safe: The effects of leader inclusiveness and professional status on psychological safety and improvement efforts in health care teams. J Organ Behav Int J Ind Occup Organ Psychol Behav. 2006;27:941–66.

Singer SJ, Hayes JE, Gray GC, Kiang MV. Making time for learning-oriented leadership in multidisciplinary hospital management groups. Health Care Manage Rev. 2015;40:300–12.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Heal Ment Heal Serv Res. 2011;38:65–76.

Liang H, Xue Y, Ke W, Wei KK. Understanding the influence of team climate on it use. J Assoc Inf Syst. 2010;11:414–32.

Holden RJ, Brown RL, Scanlon MC, Karsh BT. Modeling nurses’ acceptance of bar coded medication administration technology at a pediatric hospital. J Am Med Informatics Assoc. 2012;19:1050–8.

Liu C, Zhu Q, Holroyd KA, Seng EK. Status and trends of mobile-health applications for iOS devices: a developer’s perspective. J Syst Softw. 2011;84:2022–33.

Currie G, Lockett A, Finn R, Martin G, Waring J. Institutional Work to Maintain Professional Power: Recreating the Model of Medical Professionalism. Organ Stud. 2012;33:937–62.

DiBenigno J, Kellogg KC. Beyond Occupational Differences: The Importance of Cross-cutting demographics and dyadic toolkits for collaboration in a US hospital. Adm Sci Q. 2014;59(3):375–408.

Curran GM, Landes SJ, Arrossi S, Paolino M, Orellana L, Thouyaret L, et al. Mixed-methods approach to evaluate an mHealth intervention to increase adherence to triage of human papillomavirus-positive women who have performed self-collection (the ATICA study): Study protocol for a hybrid type i cluster randomized effectiveness-imp. Trials. 2004;20:1–12.

Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69:123–57.

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: Recommendations for specifying and reporting. Implement Sci. 2013;8:1–11.

Baumeister RF, Leary MR. Writing narrative literature reviews. Rev Gen Psychol. 1997;1:311–20.

Funding

Open Access funding enabled and organized by Projekt DEAL. Parts of the study were supported by a research grant from the German Bundesministerium für Bildung und Forschung (BMBF), Augmented Auditive Intelligence (A2I), reference 16SV8599.

Author information

Authors and Affiliations

Kiel Institute for Responsible Innovation, University of Kiel, Westring 425, 24118, Kiel, Germany

Sophia Ackerhans, Thomas Huynh, Carsten Kaiser & Carsten Schultz

Contributions

SA conceived the study, developed the literature search, screened citation titles, abstracts, and full-text articles, conducted the MMAT screening, cleaned, coded, analyzed, and interpreted one third of the data, and conceptualized and wrote the sections of the manuscript. TH conceived the study, developed the literature search, screened citation titles, abstracts, and full-text articles, conducted the MMAT screening, cleaned, coded, analyzed, and interpreted one third of the data, and edited the sections of the manuscript. CK screened citation titles, abstracts, and full-text articles, conducted the MMAT screening, cleaned, coded, analyzed, and interpreted one third of the data, and revised the manuscript. CS planned and coordinated the study and edited the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Sophia Ackerhans.

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Table S1.

Final search strings used to identify articles for the review. Table S2. Characteristics of included studies.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Ackerhans, S., Huynh, T., Kaiser, C. et al. Exploring the role of professional identity in the implementation of clinical decision support systems—a narrative review. Implementation Sci 19, 11 (2024). https://doi.org/10.1186/s13012-024-01339-x

Received: 08 August 2023

Accepted: 09 January 2024

Published: 12 February 2024

DOI: https://doi.org/10.1186/s13012-024-01339-x

Keywords

  • Professional identity
  • Identity threat
  • Health care
  • Implementation
