
The top list of academic search engines


1. Google Scholar


Academic search engines have become the number one resource for finding research papers and other scholarly sources. While classic academic databases like Web of Science and Scopus are locked behind paywalls, Google Scholar and others can be accessed free of charge. To help you get your research done fast, we have compiled this list of the top free academic search engines.

Google Scholar is the clear number one when it comes to academic search engines. It's the power of Google searches applied to research papers and patents. It not only lets you find research papers for all academic disciplines for free but also often provides links to full-text PDF files.

  • Coverage: approx. 200 million articles
  • Abstracts: only a snippet of the abstract is available
  • Related articles: ✔
  • References: ✔
  • Cited by: ✔
  • Links to full text: ✔
  • Export formats: APA, MLA, Chicago, Harvard, Vancouver, RIS, BibTeX

Search interface of Google Scholar
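For illustration, a reference exported in BibTeX format looks roughly like this (a hypothetical record; the exact fields Google Scholar emits vary by source):

```bibtex
@article{doe2020example,
  title   = {An example article title},
  author  = {Doe, Jane and Smith, John},
  journal = {Journal of Illustrative Examples},
  volume  = {12},
  number  = {3},
  pages   = {45--67},
  year    = {2020}
}
```

RIS records carry the same information as tagged lines (TY, AU, TI, PY, and so on) and are read by most reference managers.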

2. BASE (Bielefeld Academic Search Engine)

BASE is hosted at Bielefeld University in Germany, which is also where its name stems from (Bielefeld Academic Search Engine).

  • Coverage: approx. 136 million articles (contains duplicates)
  • Abstracts: ✔
  • Related articles: ✘
  • References: ✘
  • Cited by: ✘
  • Export formats: RIS, BibTeX

Search interface of Bielefeld Academic Search Engine aka BASE

3. CORE

CORE is an academic search engine dedicated to open-access research papers. For each search result, a link to the full-text PDF or full-text web page is provided.

  • Coverage: approx. 136 million articles
  • Links to full text: ✔ (all articles in CORE are open access)
  • Export formats: BibTeX

Search interface of the CORE academic search engine

4. Science.gov

Science.gov is a fantastic resource, as it bundles and offers free access to search results from more than 15 U.S. federal agencies. There is no need anymore to query all those resources separately!

  • Coverage: approx. 200 million articles and reports
  • Links to full text: ✔ (available for some databases)
  • Export formats: APA, MLA, RIS, BibTeX (available for some databases)

Search interface of Science.gov

5. Semantic Scholar

Semantic Scholar is the new kid on the block. Developed at the Allen Institute for AI and publicly released in 2015, its mission is to provide more relevant and impactful search results, using AI-powered algorithms and natural language processing to find hidden connections and links between research topics.

  • Coverage: approx. 40 million articles
  • Export formats: APA, MLA, Chicago, BibTeX

Search interface of Semantic Scholar

6. Baidu Scholar

Although Baidu Scholar's interface is in Chinese, its index contains research papers in English as well as Chinese.

  • Coverage: no detailed statistics available, approx. 100 million articles
  • Abstracts: only snippets of the abstract are available
  • Export formats: APA, MLA, RIS, BibTeX

Search interface of Baidu Scholar

7. RefSeek

RefSeek searches more than one billion documents from academic and organizational websites. Its clean interface makes it especially easy to use for students and new researchers.

  • Coverage: no detailed statistics available, approx. 1 billion documents
  • Abstracts: only snippets of the article are available
  • Export formats: not available

Search interface of RefSeek

Consider using a reference manager like Paperpile to save, organize, and cite your references. Paperpile integrates with Google Scholar and many popular databases, so you can save references and PDFs directly to your library using the Paperpile buttons.



28 Best Academic Search Engines That Make Your Research Easier


If you’re a researcher or scholar, you know that conducting effective online research is a critical part of your job. And if you’re like most people, you’re always on the lookout for new and better ways to do it. 

I’m sure you are familiar with some research databases. But top researchers keep an open mind and are always looking for inspiration in unexpected places.

This article aims to give you an edge over researchers who rely mainly on Google for their entire research process.

Our list of 28 academic search engines starts with the more familiar ones and moves on to the less familiar.


#1. Google Scholar


Google Scholar is an academic search engine that indexes the full text or metadata of scholarly literature across an array of publishing formats and disciplines.

Great for academic research, you can use Google Scholar to find articles from academic journals, conference proceedings, theses, and dissertations. The results returned by Google Scholar are typically more relevant and reliable than those from regular search engines like Google.

Tip: You can restrict your results to peer-reviewed articles only by clicking on the “Scholarly” tab.

  • Scholarly results are typically more relevant and reliable than those from regular search engines like Google.
  • You can restrict your results to peer-reviewed articles only by clicking on the “Scholarly” tab.
  • Google Scholar’s database coverage is extensive, with approx. 200 million articles indexed.
  • Abstracts are available for most articles.
  • Related articles are shown, as well as the number of times an article has been cited.
  • Links to full text are available for many articles.
  • Only a snippet of the abstract is shown, so you might need to do additional searching to get the full information you need.
  • Not all articles are available in full text.

Google Scholar is completely free.

#2. ERIC (Education Resources Information Center) 


ERIC (short for Education Resources Information Center) is a great academic search engine that focuses on education-related literature. It is sponsored by the U.S. Department of Education and produced by the Institute of Education Sciences.

ERIC indexes over a million articles, reports, conference papers, and other resources on all aspects of education, from early childhood to higher education. Search results on ERIC are therefore more relevant to education.

  • Extensive coverage: ERIC indexes over a million articles, reports, and other resources on all aspects of education from early childhood to higher education.
  • You can limit your results to peer-reviewed journals by clicking on the “Peer-Reviewed” tab.
  • Great search engine for educators, as abstracts are available for most articles.

ERIC is a free online database of education-related literature. 


#3. Wolfram Alpha


Wolfram Alpha is a “computational knowledge engine” that can answer factual questions posed in natural language. It can be a useful search tool. 

Type in a question like “What is the square root of 64?” or “What is the boiling point of water?” and Wolfram Alpha will give you an answer.

Wolfram Alpha can also be used to find academic articles. Just type in your keywords and Wolfram Alpha will generate a list of academic articles that match your query.

Tip: You can restrict your results to peer-reviewed journals by clicking on the “Scholarly” tab.

  • Can answer factual questions posed in natural language.
  • Can be used to find academic articles.
  • Results are ranked by relevance.
  • Results can be overwhelming, so it’s important to narrow down your search criteria as much as possible.
  • The experience feels a bit more structured, but it could also be a bit restrictive.

Wolfram Alpha offers a few pricing options, including a “Pro” subscription that gives you access to additional features, such as the ability to create custom reports. You can also purchase individual articles or download them for offline use.

Pro costs $5.49 and Pro Premium costs $9.99.

#4. iSEEK Education 


iSEEK is a search engine targeting students, teachers, administrators, and caregivers. It’s designed to be safe, with editor-reviewed content.

iSEEK Education also includes a “Cited by” feature which shows you how often an article has been cited by other researchers.

  • Editor-reviewed content.
  • “Cited by” feature shows how often an article has been cited by other researchers.
  • Limited to academic content.
  • Doesn’t have the breadth of coverage that some of the other academic search engines have.

iSEEK Education is free to use.

#5. BASE (Bielefeld Academic Search Engine)


BASE is hosted at Bielefeld University in Germany, and that’s where its name stems from (Bielefeld Academic Search Engine).

Known as “one of the most comprehensive academic web search engines,” it contains over 100 million documents from 4,000 different sources. 

Users can narrow their search using the advanced search option, so regardless of whether you need a book, a review, a lecture, a video or a thesis, BASE has what you need.

BASE indexes academic articles from a variety of disciplines, including the arts, humanities, social sciences, and natural sciences.

  • One of the world’s most voluminous search engines.
  • Indexes academic articles from a variety of disciplines, with a focus on academic web resources.
  • Includes an “Advanced Search” feature that lets you restrict your results to peer-reviewed journals.
  • Doesn’t include abstracts for most articles.
  • Doesn’t have related articles, references, or cited-by features.

BASE is free to use.


#6. CORE

CORE is an academic search engine that focuses on open access research papers. A link to the full-text PDF or full-text web page is supplied for each search result.

  • Focused on open access research papers.
  • Links to full text PDF or complete text web page are supplied for each search result.
  • Export formats include BibTeX, Endnote, RefWorks, Zotero.
  • Coverage is limited to open access research papers.
  • No abstracts are available for most articles.
  • No related articles, references, or cited by features.

CORE is free to use.


#7. Science.gov


Science.gov is a search engine developed and managed by the United States government. It includes results from a variety of scientific databases, including NASA, EPA, USGS, and NIST. 

US students are more likely to have early exposure to this tool for scholarly research. 

  • Coverage from a variety of scientific databases (200 million articles and reports).
  • Links to full text are available for some articles.

Science.gov is free to use.


#8. Semantic Scholar


Semantic Scholar is a recent entrant to the field. Its goal is to provide more relevant and effective search results via artificial intelligence-powered methods that detect hidden relationships and connections between research topics.

  • Powered by artificial intelligence, which enhances search results.
  • Covers a large number of academic articles (approx. 40 million).
  • Related articles, references, and cited by features are all included.
  • Links to full text are available for most articles.

Semantic Scholar is free to use.
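Semantic Scholar also exposes a free public API for developers. As a minimal sketch (assuming the Graph API’s paper-search endpoint and its query, fields, and limit parameters as publicly documented; check the current docs before relying on it), a keyword search could look like this:

```python
import requests

# Search the Semantic Scholar Graph API for papers matching a keyword
# and print a few details. Endpoint and parameter names follow the
# public documentation at api.semanticscholar.org and may change.
resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={
        "query": "systematic review search strategies",
        "fields": "title,year,citationCount",  # metadata fields to return
        "limit": 5,                            # number of results
    },
    timeout=30,
)
resp.raise_for_status()

for paper in resp.json().get("data", []):
    print(paper.get("year"), "-", paper.get("title"),
          f'({paper.get("citationCount", 0)} citations)')
```

Unauthenticated requests are rate-limited, so heavier use may require requesting an API key.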


#9. RefSeek


RefSeek searches more than five billion documents, including web pages, books, encyclopedias, journals, and newspapers.

This is one of the free search engines that feels like Yahoo, with a massive directory. It can be good when you are just looking for research ideas from unexpected angles, and it can lead you to databases you might not know, such as the CIA’s The World Factbook, which is a great reference tool.

  • Searches more than five billion documents.
  • The Documents tab is very focused on research papers and easy to use.
  • Results can be filtered by date, type of document, and language.
  • Good source for free academic articles, open access journals, and technical reports.
  • The navigation and user experience feel very dated, even to millennials.
  • It takes more than three clicks to dig up interesting references (which is also how it can lead you to something beyond the first page of Google results).
  • The top of the results page is all ads (well… it’s free to use).

RefSeek is free to use.

#10. ResearchGate 


A mixture of social networking site + forum + content databases where researchers can build their profile, share research papers, and interact with one another.

Although it is not an academic search engine that goes beyond its own site, ResearchGate’s library of works offers an excellent choice for any curious scholar.

There are more than 100 million publications available on the site from over 11 million researchers. It is possible to search by publication, data, and author, as well as to ask the researchers questions. 

  • A great place to find research papers and researchers.
  • Can follow other researchers and get updates when they share new papers or make changes to their profile.
  • The network effect can be helpful in finding people who have expertise in a particular topic.
  • The interface is not as user-friendly as other platforms.
  • Can be overwhelming when trying to find relevant papers.
  • Some papers are behind a paywall.

ResearchGate is free to use.


#11. DataONE Search (formerly CiteULike) 


A social networking site for academics who want to share and discover academic articles and papers.

  • A great place to find academic papers that have been shared by other academics.
  • Some papers are behind a paywall

CiteULike is free to use.

#12. DataElixir 


DataElixir is designed to help you find, understand, and use data. It includes a curated list of the best open datasets, tools, and resources for data science.

  • Dedicated resource for finding open data sets, tools, and resources for data science.
  • The website is easy to navigate.
  • The content is updated regularly.
  • The resources are grouped by category.
  • Not all of the resources are applicable to academic research.
  • Some of the content is outdated.

DataElixir is free to use.

#13. LazyScholar – browser extension


LazyScholar is a free browser plugin that helps you discover free academic full texts, metrics, and instant citation and sharing links. LazyScholar was created by Colby Vorland, a postdoctoral fellow at Indiana University.

  • It can integrate with your library to find full texts even when you’re off-campus.
  • Saves your history and provides an interface to find it.
  • A pre-formed citation is available in over 900 citation styles.
  • Can recommend topics and scans new PubMed listings to suggest new papers.
  • Results can be a bit hit or miss.

LazyScholar is free to use.

#14. CiteseerX – digital library from Penn State


CiteseerX is a digital library that stores and indexes research articles in computer science and related fields. The site has a robust search engine that allows you to filter results by date and author.

  • Searches a large number of academic papers.
  • Results can be filtered by date, author, and topic.
  • The website is easy to use.
  • You can create an account and save your searches for future reference.

CiteseerX is free to use.


#15. The Lens – patents search 

The Lens, or the Patent Lens, is an online patent and scholarly literature search facility provided by Cambia, an Australia-based non-profit organization.


  • Searches for a large number of academic papers.

Pricing ranges from free for non-profit use to $5,000 for commercial enterprises.

#16. Fatcat – wiki for bibliographic catalog 


Fatcat is an open bibliographic catalog of written works. The scope of works is somewhat flexible, with a focus on published research outputs like journal articles, pre-prints, and conference proceedings. Records are collaboratively editable, versioned, available in bulk form, and include URL-agnostic file-level metadata.

  • Open source and collaborative
  • You can be part of the community that is very focused on its mission
  • The archival file-level metadata (verified digests and long-term copies) is a great feature.
  • Could prove to be another rabbit hole
  • People either love or hate the text-only interface

#17. Lexis Web – Legal database


Are you researching legal topics? You can turn to Lexis Web for any law-related questions you may have. The results are drawn from legal sites and can be filtered based on criteria such as news, blogs, government, and commercial. Additionally, users can filter results by jurisdiction, practice area, source and file format.

  • Results are drawn from legal sites.
  • Filters are available based on criteria such as news, blogs, government, and commercial.
  • Users can filter results by jurisdiction, practice area, source and file format.
  • Not all law-related questions will be answered by this search engine.
  • Coverage is limited to legal sites only.

Lexis Web is free for up to three searches per day. After that, a subscription is required.

#18. Infotopia – part of the VLRC family


Infotopia touts itself as an “alternative to Google safe search.” Scholarly book results are curated by librarians, teachers, and other educational workers. Users can select from a range of topics such as art, health, and science and technology, and then see a list of resources pertaining to the topic. 

Consequently, if you aren’t able to find what you are looking for within Infotopia’s pages, you will probably find it on one of its many suggested websites.

#19. Virtual Learning Resources Center


Virtual Learning Resources Center (VLRC) is an academic search engine that features thousands of academic sites chosen by educators and librarians worldwide. Using an index generated from a research portal, university, and library internet subject guides, students and instructors can find current, authoritative information for school.

  • Indexes thousands of academic information websites. You can also get more refined results with its custom Google search, which speeds up your research.
  • Many people consider the VLRC one of the best free search engines for starting to look for research material.
  • TeachThought rated the Virtual LRC #3 in its list of 100 Search Engines For Academic Research.
  • More relevant to education 
  • More relevant to students

#20. Jurn

Powered by Google Custom Search Engine (CSE), Jurn is a free online search engine for accessing and downloading free full-text scholarly papers. It was created by David Haden in a public open beta version in February 2009, initially for locating open access electronic journal articles in the arts and humanities.

After the indexing process was completed, a website containing additional public directories of web links to indexed publications was introduced in mid-2009. The Jurn search service and directory has been regularly modified and cleaned since then.

  • A great resource for finding free full-text scholarly papers, especially in the arts and humanities.
  • The content is updated regularly.

Jurn is free to use.

#21. WorldWideScience


The Office of Scientific and Technical Information—a branch of the Office of Science within the U.S. Department of Energy—hosts the portal WorldWideScience, which has dubbed itself “The Global Science Gateway.”

The portal draws on databases from over 70 countries. When a user enters a query, it searches databases from all across the world and shows results from both English-language and translated journals and academic resources.

  • Results can be filtered by language and type of resource
  • Interface is easy to use
  • Contains both academic journal articles and translated academic resources 
  • The website can be difficult to navigate.

WorldWideScience is free to use.

#22. Google Books


On Google Books, you can browse thousands of books, from popular titles to older ones, to find pages that include your search terms. Once you find a book you are interested in, you can look through its pages, read online reviews, and find out where to buy a hard copy.

#23. DOAJ (Directory of Open Access Journals)


DOAJ is a free search engine for scientific and scholarly materials. It is a searchable database of more than 8,000 peer-reviewed, open access journals organized by subject, making it one of the most comprehensive directories of scholarly resources on a wide variety of themes.

#24. Baidu Scholar


Baidu Xueshu (Academic) is the Chinese counterpart to Google Scholar. Baidu Scholar indexes academic papers from a variety of disciplines in both Chinese and English.

  • Articles are available in full text PDF.
  • Covers a variety of academic disciplines.
  • No abstracts are available for most articles, but summaries are provided for some.
  • A great portal that takes you to different specialized research platforms.
  • You need to be able to read Chinese to use the site.
  • Since 2021, the site has placed a growing focus on China and the Chinese Communist Party.

Baidu Scholar is free to use.

#25. PubMed Central


PubMed is a free search engine that provides references and abstracts for medical, life sciences, and biomedical topics.

If you’re studying anything related to healthcare or science, this site is perfect. PubMed Central is operated by the National Center for Biotechnology Information, a division of the U.S. National Library of Medicine. It contains more than 3 million full-text journal articles.

It’s similar to PubMed Health, which focuses on health-related research and includes abstracts and citations to over 26 million articles.

#26. MEDLINE®


MEDLINE® is a paid subscription database for life sciences and biomedicine that includes more than 28 million citations to journal articles. For finding reliable, carefully chosen health information, MedlinePlus provides a powerful search tool and even a dictionary.

  • A great database for life sciences and biomedicine.
  • Contains more than 28 million references to journal articles.
  • References can be filtered by date, type of document, and language.
  • The database is expensive to access.
  • Some people find it difficult to navigate and find what they are looking for.

MEDLINE is not free to use; pricing information is available from the vendor.

Defunct Academic Search Engines 

#27. Microsoft Academic

Microsoft Academic Search seemed to be a failure from the beginning. It ended in 2012, then re-launched in 2016 as Microsoft Academic, giving researchers the opportunity to search academic publications.

Microsoft Academic used to be the second-largest academic search engine after Google Scholar and provided a wealth of data for free, but Microsoft announced that it would shut the service down at the end of 2021.

#28. Scizzle


Scizzle was designed to help researchers stay on top of the literature by setting up email alerts, based on key terms, for new papers.

Unfortunately, academic search engines come and go. These are two that are no longer available.

Final Thoughts

There are many academic search engines that can help researchers and scholars find the information they need. This list provides a variety of options, starting with more familiar engines and moving on to less well-known ones. 

Keeping an open mind and exploring different sources is essential for conducting effective online research. With so much information at our fingertips, it’s important to make sure we’re using the best tools available to us.

Tell us in the comments below: which of these academic search engines had you not heard of? Which database do you think we should add? Which databases do your professional societies use? And what are the most useful academic websites for research, in your opinion?





Literature Search: Databases and Gray Literature

The Literature Search

  • A systematic review search includes a search of databases, gray literature, personal communications, and a handsearch of high-impact journals in the related field. See our list of recommended databases and gray literature sources on this page.
  • A comprehensive literature search cannot depend on a single database, nor on bibliographic databases only.
  • Inclusion of multiple databases helps avoid publication bias (geographic bias or bias against publication of negative results).
  • The Cochrane Collaboration recommends PubMed, Embase, and the Cochrane Central Register of Controlled Trials (CENTRAL) at a minimum.
  • NOTE: The Cochrane Collaboration and the IOM recommend that the literature search be conducted by librarians or persons with extensive literature search experience. Please contact the NIH Librarians for assistance with the literature search component of your systematic review.

Cochrane Library

A collection of six databases that contain different types of high-quality, independent evidence to inform healthcare decision-making, including the Cochrane Central Register of Controlled Trials (CENTRAL).

Embase

European database of biomedical and pharmacologic literature.

PubMed

PubMed comprises more than 21 million citations for biomedical literature from MEDLINE, life science journals, and online books.

Scopus

Largest abstract and citation database of peer-reviewed literature and quality web sources. Contains conference papers.

Web of Science

World's leading citation databases. Covers over 12,000 of the highest impact journals worldwide, including Open Access journals and over 150,000 conference proceedings. Coverage in the sciences, social sciences, arts, and humanities, with coverage to 1900.

Subject Specific Databases

APA PsycINFO

Over 4.5 million abstracts of peer-reviewed literature in the behavioral and social sciences. Includes conference papers, book chapters, psychological tests, scales and measurement tools.

CINAHL Plus

Comprehensive journal index to nursing and allied health literature, includes books, nursing dissertations, conference proceedings, practice standards and book chapters.

LILACS

Latin American and Caribbean health sciences literature database.

Gray Literature

  • Gray literature is the term for information that falls outside the mainstream of published journal and monograph literature and is not controlled by commercial publishers. Sources include:
  • hard-to-find studies, reports, or dissertations
  • conference abstracts or papers
  • governmental or private sector research
  • clinical trials – ongoing or unpublished
  • experts and researchers in the field
  • Library catalogs
  • Professional association websites
  • Google Scholar  - Search scholarly literature across many disciplines and sources, including theses, books, abstracts and articles.
  • Dissertation Abstracts - dissertations and theses database - NIH Library biomedical librarians can access and search for you.
  • NTIS  - central resource for government-funded scientific, technical, engineering, and business related information.
  • AHRQ  - agency for healthcare research and quality
  • Open Grey  - system for information on grey literature in Europe. Open access to 700,000 references to the grey literature.
  • World Health Organization  - providing leadership on global health matters, shaping the health research agenda, setting norms and standards, articulating evidence-based policy options, providing technical support to countries and monitoring and assessing health trends.
  • New York Academy of Medicine Grey Literature Report  - a bimonthly publication of The New York Academy of Medicine (NYAM) alerting readers to new gray literature publications in health services research and selected public health topics. NOTE: Discontinued as of Jan 2017, but resources are still accessible.
  • Gray Source Index
  • OpenDOAR - directory of academic repositories
  • International Clinical Trials Registry Platform - from the World Health Organization
  • Australian New Zealand Clinical Trials Registry
  • Brazilian Clinical Trials Registry
  • Chinese Clinical Trial Registry
  • ClinicalTrials.gov   - U.S.  and international federally and privately supported clinical trials registry and results database
  • Clinical Trials Registry  - India
  • EU clinical Trials Register
  • Japan Primary Registries Network  
  • Pan African Clinical Trials Registry

Literature searches: what databases are available?

Posted on 6th April 2021 by Izabel de Oliveira

""

Many types of research require a search of the medical literature as part of the process of understanding the current evidence or knowledge base. This can be done using one or more biomedical bibliographic databases [1].

Bibliographic databases make the information contained in the papers more visible to the scientific community and facilitate locating the desired literature.

This blog describes some of the main bibliographic databases which index medical journals.

PubMed

PubMed was launched in 1996 and, since June 1997, has provided free and unlimited access for all users through the internet. The PubMed database contains more than 30 million references to biomedical literature from approximately 7,000 journals. The largest percentage of records in PubMed comes from MEDLINE (95%), which contains 25 million records from over 5,600 journals. Other records derive from sources such as in-process citations, ‘Ahead of Print’ citations, the NCBI Bookshelf, etc.

The second largest component of PubMed is PubMed Central (PMC). Launched in 2000, PMC is a permanent collection of full-text life sciences and biomedical journal articles. PMC also includes articles deposited by journal publishers, as well as author manuscripts: published articles that are submitted in compliance with the public access policies of the National Institutes of Health (NIH) and other research funding agencies. PMC contains approximately 4.5 million articles.

Some National Library of Medicine (NLM) resources associated with PubMed are the NLM Catalog and MedlinePlus. The NLM Catalog contains bibliographic records for over 1.4 million journals, books, audiovisuals, electronic resources, and other materials. It also includes detailed indexing information for journals in PubMed and other NCBI databases, although not all materials in the NLM Catalog are part of NLM’s collection. MedlinePlus is a consumer health website providing information on various health topics, drugs, dietary supplements, and health tools.

MeSH (Medical Subject Headings) is the NLM controlled vocabulary used for indexing articles in PubMed. It is used by indexers who analyze and maintain the PubMed database to reflect the subject content of journal articles as they are published. Indexers typically select 10–12 MeSH terms to describe every paper.
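As a small illustration (a hypothetical query; the field tags follow PubMed’s documented syntax), MeSH terms can be combined with free-text fields directly in the PubMed search box:

```
("myocardial infarction"[MeSH Terms] OR "heart attack"[Title/Abstract]) AND aspirin[Title/Abstract]
```

Because MeSH terms are assigned by indexers, the [MeSH Terms] clause also retrieves articles whose titles and abstracts use different wording for the same concept.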

Embase

Embase is considered the second most popular database after MEDLINE. Embase is estimated to contain more than 32 million records from over 8,200 journals from more than 95 countries, plus ‘grey literature’ from over 2.4 million conference abstracts.

Embase contains subtopics in health care such as complementary and alternative medicine, prognostic studies, telemedicine, psychiatry, and health technology. Besides that, it is also widely used for research on drug-related topics as it offers better coverage than MEDLINE on pharmaceutics-related literature.

In 2010, Embase began to include all MEDLINE citations. MEDLINE records are delivered to Elsevier daily and are incorporated into Embase after de-duplication with records already indexed by Elsevier to produce ‘MEDLINE-unique’ records. These MEDLINE-unique records are not re-indexed by Elsevier. However, their indexing is mapped to Emtree terms used in Embase to ensure that Emtree terminology can be used to search all Embase records, including those originally derived from MEDLINE.

Since this coverage expansion—at least in theory and without taking into consideration the different indexing practices of the two databases—a search in Embase alone should cover every record in both Embase and MEDLINE, making Embase a possible “one-stop” search engine for medical research [1].

Emtree is a hierarchically structured, controlled vocabulary for biomedicine and the related life sciences. It includes a whole range of terms for drugs, diseases, medical devices, and essential life science concepts. Emtree is used to index all of the Embase content. This process includes full-text indexing of journal articles, which is done by experts.

LILACS

The most important index of technical-scientific literature in Latin America and the Caribbean, LILACS was created in 1985 to record scientific and technical production in health. It has been maintained and updated by a network of more than 600 education, government, and health research institutions, coordinated by the Latin American and Caribbean Center on Health Sciences Information (BIREME), the Pan American Health Organization (PAHO), and the World Health Organization (WHO).

LILACS contains scientific and technical literature from over 908 journals from 26 countries in Latin America and the Caribbean, with free access. It holds about 900,000 records of peer-reviewed articles, theses and dissertations, government documents, conference proceedings, and books; more than 480,000 of these include an open access full-text link.

The LILACS Methodology is a set of standards, manuals, guides, and applications in continuous development, intended for the collection, selection, description, indexing of documents, and generation of databases. This centralised methodology enables the cooperation between Latin American and Caribbean countries to create local and national databases, all feeding into the LILACS database.  Currently, the databases LILACS, BBO, BDENF, MEDCARIB, and national databases of the countries of Latin America are part of the LILACS System.

Health Sciences Descriptors (DeCS) is the multilingual and structured vocabulary created by BIREME to serve as a unique language in indexing articles from scientific journals, books, congress proceedings, technical reports, and other types of materials, and also for searching and retrieving subjects from scientific literature from information sources available on the Virtual Health Library (VHL) such as LILACS, MEDLINE, and others. It was developed from the MeSH with the purpose of permitting the use of common terminology for searching in multiple languages, and providing a consistent and unique environment for the retrieval of information. DeCS vocabulary is dynamic and totals 34,118 descriptors and qualifiers, of which 29,716 come from MeSH, and 4,402 are exclusive.

Cochrane CENTRAL

The Cochrane Central Register of Controlled Trials (CENTRAL) is a database of reports of randomized and quasi-randomized controlled trials. Most records are obtained from the bibliographic databases PubMed and Embase, with additional records from the published and unpublished sources of CINAHL, ClinicalTrials.gov, and the WHO’s International Clinical Trials Registry Platform.

Although CENTRAL first began publication in 1996, records are included irrespective of the date of publication, and the language of publication is not a restriction either. You won’t find the full text of an article on CENTRAL, but there is often a summary of the article, in addition to the standard details of author, source, and year.

Within CENTRAL, there are ‘Specialized Registers’ which are collected and maintained by Cochrane Review Groups (plus a few Cochrane Fields), which include reports of controlled trials relevant to their area of interest. Some Cochrane Centres search the general healthcare literature of their countries or regions in order to contribute records to CENTRAL.

ScienceDirect

ScienceDirect is Elsevier’s most important peer-reviewed academic literature platform. It was launched in 1997 and contains 16 million records from over 2,500 journals, including over 250 open access publications, such as Cell Reports and The Lancet Global Health, as well as 39,000 eBooks.

ScienceDirect topics include:

  • health sciences;
  • life sciences;
  • physical sciences;
  • engineering;
  • social sciences; and
  • humanities.

Web of Science

Web of Science (previously Web of Knowledge) is an online scientific citation indexing service created in 1997 by the Institute for Scientific Information (ISI), and currently maintained by Clarivate Analytics.

Web of Science covers several fields of the sciences, social sciences, and arts and humanities. Its main resource is the Web of Science Core Collection which includes over 1 billion cited references dating back to 1900, indexed from 21,100 peer-reviewed journals, including Open Access journals, books and proceedings.

Web of Science also offers regional databases which cover:

  • Latin America (SciELO Citation Index);
  • China (Chinese Science Citation Database);
  • Korea (Korea Citation Index);
  • Russia (Russian Science Citation Index).

Boolean operators

To make a search more precise, we can use Boolean operators between our keywords in databases.

We use Boolean operators to focus on a topic, particularly when the topic contains multiple search terms, and to connect various pieces of information in order to find exactly what we are looking for.

Boolean operators connect the search words to either narrow or broaden the set of results. The three basic Boolean operators are AND, OR, and NOT; a worked example follows the list below.

  • AND narrows a search by telling the database that all keywords used must be found in the article in order for it to appear in our results.
  • OR broadens a search by telling the database that any of the words it connects are acceptable (this is useful when we are searching for synonyms).
  • NOT narrows the search by telling the database to eliminate all terms that follow it from our search results (this is helpful when we are interested in a specific aspect of a topic or when we want to exclude a type of article).
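For example, the following query (an illustrative search string; adapt the keywords and exact syntax to your database) combines all three operators:

```
(teenager OR adolescent) AND "physical activity" NOT obesity
```

Here OR gathers synonyms for the population, AND requires the second concept to appear as well, and NOT removes records about an aspect we want to exclude.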

References (pdf)




Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies

Chris Cooper, Andrew Booth, Jo Varley-Campbell, Nicky Britten & Ruth Garside

BMC Medical Research Methodology, volume 18, article number 85 (2018). Open access; published 14 August 2018.

Background

Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving readers clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before.

The purpose of this review is to determine if a shared model of the literature searching process can be detected across systematic review guidance documents and, if so, how this process is reported in the guidance and supported by published studies.

Method

A literature review.

Two types of literature were reviewed: guidance and published studies. Nine guidance documents were identified, including the Cochrane and Campbell Handbooks. Published studies were identified through ‘pearl growing’, citation chasing, a search of PubMed using the systematic review methods filter, and the authors’ topic knowledge.

The relevant sections within each guidance document were then read and re-read, with the aim of determining key methodological stages. Methodological stages were identified and defined. This data was reviewed to identify agreements and areas of unique guidance between guidance documents. Consensus across multiple guidance documents was used to inform selection of ‘key stages’ in the process of literature searching.

Results

Eight key stages were determined relating specifically to literature searching in systematic reviews. They were: who should literature search; aims and purpose of literature searching; preparation; the search strategy; searching databases; supplementary searching; managing references; and reporting the search process.

Conclusions

Eight key stages to the process of literature searching in systematic reviews were identified. These key stages are consistently reported in the nine guidance documents, suggesting consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews. Further research to determine the suitability of using the same process of literature searching for all types of systematic review is indicated.

Background

Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving review stakeholders clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before. This is in contrast to the information science literature, which has developed information processing models as an explicit basis for dialogue and empirical testing. Without an explicit model, research in the process of systematic literature searching will remain immature and potentially uneven, and the development of shared information models will be assumed but never articulated.

One way of developing such a conceptual model is by formally examining the implicit “programme theory” as embodied in key methodological texts. The aim of this review is therefore to determine if a shared model of the literature searching process in systematic reviews can be detected across guidance documents and, if so, how this process is reported and supported.

Identifying guidance

Key texts (henceforth referred to as “guidance”) were identified based upon their accessibility to, and prominence within, United Kingdom systematic reviewing practice. The United Kingdom occupies a prominent position in the science of health information retrieval, as quantified by such objective measures as the authorship of papers, the number of Cochrane groups based in the UK, membership and leadership of groups such as the Cochrane Information Retrieval Methods Group, the HTA-I Information Specialists’ Group and historic association with such centres as the UK Cochrane Centre, the NHS Centre for Reviews and Dissemination, the Centre for Evidence Based Medicine and the National Institute for Clinical Excellence (NICE). Coupled with the linguistic dominance of English within medical and health science and the science of systematic reviews more generally, this offers a justification for a purposive sample that favours UK, European and Australian guidance documents.

Nine guidance documents were identified. These documents provide guidance for different types of reviews, namely: reviews of interventions, reviews of health technologies, reviews of qualitative research studies, reviews of social science topics, and reviews to inform guidance.

Whilst these guidance documents occasionally offer additional guidance on other types of systematic reviews, we have focused on the core and stated aims of these documents as they relate to literature searching. Table 1 sets out the guidance document, the version audited, its core stated focus, and a bibliographical pointer to the main guidance relating to literature searching.

Once a list of key guidance documents was determined, it was checked by six senior information professionals based in the UK for relevance to current literature searching in systematic reviews.

Identifying supporting studies

In addition to identifying guidance, the authors sought to populate an evidence base of supporting studies (henceforth referred to as “studies”) that contribute to existing search practice. Studies were first identified by the authors from their knowledge of this topic area and, subsequently, through systematic citation chasing of key studies (‘pearls’ [1]) located within each key stage of the search process. These studies are identified in Additional file 1: Appendix Table 1. Citation chasing was conducted by analysing the bibliography of references for each study (backwards citation chasing) and through Google Scholar (forward citation chasing). A search of PubMed using the systematic review methods filter was undertaken in August 2017 (see Additional file 1). The search terms used were (literature search*[Title/Abstract]) AND sysrev_methods[sb], and 586 results were returned. These results were sifted for relevance to the key stages in Fig. 1 by CC.
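For readers who want to re-run a query like this programmatically, NCBI’s E-utilities expose PubMed searching over HTTP. The sketch below is illustrative only, not part of the authors’ method; it assumes the documented esearch endpoint and JSON response shape, and the hit count will differ from the 586 results reported in 2017:

```python
import requests

# Re-run the reported PubMed query via NCBI E-utilities (esearch).
# The term combines the title/abstract search with the systematic
# review methods filter (sysrev_methods[sb]) cited in the paper.
resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={
        "db": "pubmed",
        "term": "(literature search*[Title/Abstract]) AND sysrev_methods[sb]",
        "retmode": "json",  # return JSON instead of the default XML
        "retmax": 20,       # cap the number of PMIDs returned
    },
    timeout=30,
)
resp.raise_for_status()

result = resp.json()["esearchresult"]
print("Total hits:", result["count"])
print("First PMIDs:", result["idlist"])
```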

Figure 1: The key stages of literature search guidance as identified from nine key texts.

Extracting the data

To reveal the implicit process of literature searching within each guidance document, the relevant sections (chapters) on literature searching were read and re-read, with the aim of determining key methodological stages. We defined a key methodological stage as a distinct step in the overall process for which specific guidance is reported, and action is taken, that collectively would result in a completed literature search.

The chapter or section sub-heading for each methodological stage was extracted into a table using the exact language as reported in each guidance document. The lead author (CC) then read and re-read these data, and the paragraphs of the document to which the headings referred, summarising section details. This table was then reviewed, using comparison and contrast to identify agreements and areas of unique guidance. Consensus across multiple guidelines was used to inform selection of ‘key stages’ in the process of literature searching.

Having determined the key stages to literature searching, we then read and re-read the sections relating to literature searching again, extracting specific detail relating to the methodological process of literature searching within each key stage. Again, the guidance was then read and re-read, first on a document-by-document-basis and, secondly, across all the documents above, to identify both commonalities and areas of unique guidance.

Results and discussion

Our findings

We were able to identify consensus across the guidance on literature searching for systematic reviews suggesting a shared implicit model within the information retrieval community. Whilst the structure of the guidance varies between documents, the same key stages are reported, even where the core focus of each document is different. We were able to identify specific areas of unique guidance, where a document reported guidance not summarised in other documents, together with areas of consensus across guidance.

Unique guidance

Only one document provided guidance on the topic of when to stop searching [2]. This guidance from 2005 anticipates a topic of increasing importance with the current interest in time-limited (i.e. “rapid”) reviews. Quality assurance (or peer review) of literature searches was only covered in two guidance documents [3, 4]. This topic has emerged as increasingly important, as indicated by the development of the PRESS instrument [5]. Text mining was discussed in four guidance documents [4, 6, 7, 8], where the automation of some manual review work may offer efficiencies in literature searching [8].

Agreement between guidance: Defining the key stages of literature searching

Where there was agreement on the process, we determined that this constituted a key stage in the process of literature searching to inform systematic reviews.

From the guidance, we determined eight key stages that relate specifically to literature searching in systematic reviews. These are summarised in Fig. 1. The data extraction table informing Fig. 1 is reported in Table 2. Table 2 reports the areas of common agreement and demonstrates that the language used to describe key stages and processes varies significantly between guidance documents.

For each key stage, we set out the specific guidance, followed by discussion on how this guidance is situated within the wider literature.

Key stage one: Deciding who should undertake the literature search

The guidance.

Eight documents provided guidance on who should undertake literature searching in systematic reviews [2, 4, 6, 7, 8, 9, 10, 11]. The guidance affirms that people with relevant expertise in literature searching should ‘ideally’ be included within the review team [6]. Information specialists (or information scientists), librarians or trial search co-ordinators (TSCs) are indicated as appropriate researchers in six guidance documents [2, 7, 8, 9, 10, 11].

How the guidance corresponds to the published studies

The guidance is consistent with studies that call for the involvement of information specialists and librarians in systematic reviews [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26] and which demonstrate how their training as ‘expert searchers’ and ‘analysers and organisers of data’ can be put to good use [13] in a variety of roles [12, 16, 20, 21, 24, 25, 26]. These arguments make sense in the context of the aims and purposes of literature searching in systematic reviews, explored below. The need for ‘thorough’ and ‘replicable’ literature searches was fundamental to the guidance and recurs in key stage two. Studies have found poor reporting, and a lack of replicable literature searches, to be a weakness in systematic reviews [17, 18, 27, 28], and they argue that the involvement of information specialists/librarians would be associated with better reporting and better quality literature searching. Indeed, Meert et al. [29] demonstrated that involving a librarian as a co-author of a systematic review correlated with a higher score in the literature searching component of the review. As ‘new styles’ of rapid and scoping reviews emerge, where decisions on how to search are more iterative and creative, a clear role for these specialists emerges here too [30].

Knowing where to search for studies was noted as important in the guidance, although there was no agreement as to the appropriate number of databases to be searched [2, 6]. Database selection (and resource selection more broadly) is acknowledged as a relevant key skill of information specialists and librarians [12, 15, 16, 31].

Whilst arguments for including information specialists and librarians in the process of systematic review might be considered self-evident, Koffel and Rethlefsen [31] have questioned whether the necessary involvement is actually happening.

Key stage two: Determining the aim and purpose of a literature search

The aim: Five of the nine guidance documents use adjectives such as ‘thorough’, ‘comprehensive’, ‘transparent’ and ‘reproducible’ to define the aim of literature searching [6, 7, 8, 9, 10]. Analogous phrases were present in a further three guidance documents, namely: ‘to identify the best available evidence’ [4], ‘the aim of the literature search is not to retrieve everything. It is to retrieve everything of relevance’ [2], and ‘a systematic literature search aims to identify all publications relevant to the particular research question’ [3]. The Joanna Briggs Institute reviewers’ manual was the only guidance document in which a clear statement on the aim of literature searching could not be identified. The purpose of literature searching was defined in three guidance documents, namely to minimise bias in the resultant review [6, 8, 10]. Accordingly, eight of the nine documents clearly asserted that thorough and comprehensive literature searches are required as a potential mechanism for minimising bias.

The need for thorough and comprehensive literature searches appears uniform within the eight guidance documents that describe approaches to literature searching in systematic reviews of effectiveness. Reviews of effectiveness (of intervention or cost), accuracy and prognosis require thorough and comprehensive literature searches to transparently produce a reliable estimate of intervention effect. The belief that all relevant studies have been ‘comprehensively’ identified, and that this process has been ‘transparently’ reported, increases confidence in the estimate of effect and the conclusions that can be drawn [32]. The supporting literature exploring the need for comprehensive literature searches focuses almost exclusively on reviews of intervention effectiveness and meta-analysis. Different ‘styles’ of review may have different standards, however; purposive sampling has been suggested as an alternative in the specific context of qualitative evidence syntheses [33].

What is a comprehensive literature search?

Whilst the guidance calls for thorough and comprehensive literature searches, it lacks clarity on what constitutes such a search, beyond the implication that all of the literature search methods in Table 2 should be used to identify studies. Egger et al. [34], in an empirical study evaluating the importance of comprehensive literature searches for trials in systematic reviews, defined a comprehensive search for trials as:

a search not restricted to English language;

where Cochrane CENTRAL or at least two other electronic databases had been searched (such as MEDLINE or EMBASE); and

at least one of the following search methods has been used to identify unpublished trials: searches for (i) conference abstracts, (ii) theses, (iii) trials registers; and (iv) contacts with experts in the field [34].

Tricco et al. (2008) used a similar threshold of bibliographic database searching AND a supplementary search method when examining the risk of bias in systematic reviews. Their criteria were: one database (limited using the Cochrane Highly Sensitive Search Strategy (HSSS)) and handsearching [35].

Together with the guidance, this would suggest that comprehensive literature searching requires the use of BOTH bibliographic database searching AND supplementary search methods.
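To make these thresholds concrete, the sketch below encodes Egger et al.'s three criteria, together with the combined ‘databases AND supplementary methods’ reading, as a simple check over a description of a search. The record structure and field names are our own illustrative assumptions; neither study publishes such a schema.

```python
# Illustrative only: Egger et al.'s comprehensiveness criteria expressed as
# a check over a hypothetical description of a literature search. The field
# names ('databases', 'english_only', 'unpublished_methods') are assumptions.
UNPUBLISHED_METHODS = {"conference_abstracts", "theses", "trial_registers",
                       "expert_contacts"}

def is_comprehensive(search: dict) -> bool:
    databases = set(search.get("databases", []))
    no_language_limit = not search.get("english_only", False)
    # Simplification of 'CENTRAL or at least two other electronic databases'.
    enough_databases = "CENTRAL" in databases or len(databases) >= 2
    has_unpublished = bool(UNPUBLISHED_METHODS
                           & set(search.get("unpublished_methods", [])))
    return no_language_limit and enough_databases and has_unpublished

example = {"databases": ["MEDLINE", "EMBASE"],
           "english_only": False,
           "unpublished_methods": ["trial_registers"]}
print(is_comprehensive(example))  # True
```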

Comprehensiveness in literature searching, in the sense of how much searching should be undertaken, remains unclear. Egger et al. recommend that ‘investigators should consider the type of literature search and degree of comprehension that is appropriate for the review in question, taking into account budget and time constraints’ [34]. This view tallies with the Cochrane Handbook, which stipulates clearly that study identification should be undertaken ‘within resource limits’ [9]. This would suggest that the limits to comprehensiveness are recognised, but it raises questions about how this is decided and reported [36].

What is the point of comprehensive literature searching?

The purpose of thorough and comprehensive literature searches is to avoid missing key studies and to minimise bias [6, 8, 10, 34, 37, 38, 39], since a systematic review based only on published (or easily accessible) studies may report an exaggerated effect size [35]. Felson (1992) sets out potential biases that could affect the estimate of effect in a meta-analysis [40], and Tricco et al. summarise the evidence concerning bias and confounding in systematic reviews [35]. Egger et al. point to non-publication of studies, publication bias, language bias and MEDLINE bias as key biases [34, 35, 40, 41, 42, 43, 44, 45, 46]. Comprehensive searches are not the sole factor mitigating these biases, but their contribution is thought to be significant [2, 32, 34]. Fehrmann (2011) suggests that describing the search process in detail, and applying standard comprehensive search techniques, increases confidence in the search results [32].

Does comprehensive literature searching work?

Egger et al., and other study authors, have demonstrated a change in the estimate of intervention effectiveness where relevant studies were excluded from meta-analysis [34, 47]. This would suggest that missing studies in literature searching alters the reliability of effectiveness estimates, which is an argument for comprehensive literature searching. Conversely, Egger et al. found that ‘comprehensive’ searches still missed studies, and that comprehensive searches could, in fact, introduce bias into a review rather than preventing it, through the identification and inclusion of low quality studies in the meta-analysis [34]. Studies query whether identifying and including low quality or grey literature studies changes the estimate of effect [43, 48], and question whether time is better invested in updating systematic reviews rather than searching for unpublished studies [49], or in mapping studies for review as opposed to aiming for high sensitivity in literature searching [50].

Aim and purpose beyond reviews of effectiveness

The need for comprehensive literature searches is less certain in reviews of qualitative studies, and in reviews where a comprehensive identification of studies is difficult to achieve (for example, in public health) [33, 51, 52, 53, 54, 55]. Literature searching for qualitative studies, and in public health topics, typically generates a greater number of studies to sift than in reviews of effectiveness [39], and demonstrating the ‘value’ of studies identified or missed is harder [56], since the study data do not typically support meta-analysis. Nussbaumer-Streit et al. (2016) have registered a review protocol to assess whether abbreviated literature searches (as opposed to comprehensive literature searches) have an impact on conclusions across multiple bodies of evidence, not only on effect estimates [57], which may develop this understanding. It may be that decision makers and users of systematic reviews are willing to trade the certainty from a comprehensive literature search and systematic review in exchange for different approaches to evidence synthesis [58], and that comprehensive literature searches are not necessarily a marker of literature search quality, as previously thought [36]. Different approaches to literature searching [37, 38, 59, 60, 61, 62] and developing the concept of when to stop searching are important areas for further study [36, 59].

The study by Nussbaumer-Streit et al. has been published since the submission of this literature review [63]. Nussbaumer-Streit et al. (2018) conclude that abbreviated literature searches are viable options for rapid evidence syntheses, if decision-makers are willing to trade the certainty from a comprehensive literature search and systematic review, but that decision-making which demands detailed scrutiny should still be based on comprehensive literature searches [63].

Key stage three: Preparing for the literature search

Six documents provided guidance on preparing for a literature search [2, 3, 6, 7, 9, 10]. The Cochrane Handbook clearly stated that Cochrane authors (i.e. researchers) should seek advice from a trial search co-ordinator (i.e. a person with specific skills in literature searching) ‘before’ starting a literature search [9].

Two key tasks were perceptible in preparing for a literature search [2, 6, 7, 10, 11]: first, to determine if there are any existing or on-going reviews, or if a new review is justified [6, 11]; and, secondly, to develop an initial literature search strategy to estimate the volume of relevant literature (and the quality of a small sample of relevant studies [10]) and to indicate the resources required for literature searching and the review of the studies that follows [7, 10].

Three documents summarised guidance on where to search to determine if a new review was justified [2, 6, 11]. These focused on searching databases of systematic reviews (the Cochrane Database of Systematic Reviews (CDSR) and the Database of Abstracts of Reviews of Effects (DARE)), institutional registries (including PROSPERO), and MEDLINE [6, 11]. It is worth noting, however, that as of 2015, DARE (and NHS EED) are no longer being updated, so the relevance of these resources will diminish over time [64]. One guidance document, ‘Systematic reviews in the social sciences’, noted, however, that databases are not the only source of information, and unpublished reports, conference proceedings and grey literature may also be required, depending on the nature of the review question [2].

Two documents reported clearly that this preparation (or ‘scoping’) exercise should be undertaken before the actual search strategy is developed [7, 10].

The guidance offers the best available source on preparing the literature search, since published studies do not typically report how scoping informed the development of their search strategies, nor how their search approaches were developed. Text mining has been proposed as a technique to develop search strategies in the scoping stages of a review, although this work is still exploratory [65]. ‘Clustering documents’ and word frequency analysis have also been tested to identify search terms and studies for review [66, 67]. Preparing for literature searches and scoping constitutes an area for future research.
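As a concrete illustration of the word frequency analysis mentioned above, the sketch below counts content words in the titles returned by a pilot (scoping) search, surfacing candidate terms for the full strategy. It is a deliberate simplification of the published approaches [66, 67]; the stop-word list and sample titles are invented.

```python
# Illustrative word-frequency analysis for scoping: count title words from a
# pilot search to surface candidate search terms. Sample data are invented.
import re
from collections import Counter

STOP_WORDS = {"a", "an", "and", "for", "in", "of", "on", "the", "to", "with"}

titles = [
    "Systematic searching for qualitative evidence in public health",
    "Search strategies for qualitative systematic reviews",
    "Identifying qualitative research: search filters and databases",
]

counts = Counter(
    word
    for title in titles
    for word in re.findall(r"[a-z]+", title.lower())
    if word not in STOP_WORDS
)

# The most frequent content words are candidate terms for the full strategy.
print(counts.most_common(5))
```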

Key stage four: Designing the search strategy

The Population, Intervention, Comparator, Outcome (PICO) structure was the most commonly reported structure promoted to design a literature search strategy. Five documents suggested that the eligibility criteria or review question will determine which concepts of PICO are populated to develop the search strategy [1, 4, 7, 8, 9]. The NICE handbook promoted multiple structures, namely PICO, SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) and multi-stranded approaches [4].

With the exception of the Joanna Briggs Institute reviewers’ manual, the guidance offered detail on selecting key search terms, synonyms, Boolean language, selecting database indexing terms and combining search terms. The CEE handbook suggested that ‘search terms may be compiled with the help of the commissioning organisation and stakeholders’ [10].

The use of limits, such as language or date limits, was discussed in all documents [2, 3, 4, 6, 7, 8, 9, 10, 11].
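A minimal sketch of how these building blocks fit together: synonyms within each PICO concept are combined with OR, the concepts are combined with AND, and any limits are appended at the end. The concept lists and the generic field syntax are invented for illustration; real strategies would use database-specific field codes and indexing terms (e.g. MeSH in MEDLINE).

```python
# Illustrative only: assembling a Boolean search string from PICO concept
# groups. Terms and the generic limit syntax are invented; real databases
# need their own field codes and controlled vocabulary.
pico = {
    "population": ["adolescent*", "teenager*", "young people"],
    "intervention": ["exercise", "physical activity"],
    "outcome": ["depression", "depressive symptom*"],
}
limits = ["english[la]", "2000:2017[dp]"]

concept_blocks = [
    "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"
    for terms in pico.values()  # OR within a concept, AND between concepts
]
strategy = " AND ".join(concept_blocks + limits)
print(strategy)
```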

Search strategy structure

The guidance typically relates to reviews of intervention effectiveness, so PICO – with its focus on intervention and comparator – is the dominant model used to structure literature search strategies [68]. PICOS – where the S denotes study design – is also commonly used in effectiveness reviews [6, 68]. As the NICE handbook notes, alternative models to structure literature search strategies have been developed and tested. Booth provides an overview on formulating questions for evidence based practice [69] and has developed a number of alternatives to the PICO structure, namely: BeHEMoTh (Behaviour of interest; Health context; Exclusions; Models or Theories) for use when systematically identifying theory [55]; SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) for identification of social science and evaluation studies [69]; and, working with Cooke and colleagues, SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) [70]. SPIDER has been compared to PICO and PICOS in a study by Methley et al. [68].

The NICE handbook also suggests the use of multi-stranded approaches to developing literature search strategies [4]. Glanville developed this idea in a study by Whiting et al. [71], and a worked example of this approach is included in the development of a search filter by Cooper et al. [72].

Writing search strategies: Conceptual and objective approaches

Hausner et al. [73] provide guidance on writing literature search strategies, delineating between conceptually and objectively derived approaches. The conceptual approach, advocated by and explained in the guidance documents, relies on the expertise of the literature searcher to identify key search terms and then to develop those terms to include synonyms and controlled vocabulary. Hausner and colleagues set out the objective approach [73] and describe what may be done to validate it [74].

The use of limits

The guidance documents offer direction on the use of limits within a literature search. Limits can be used to restrict literature searching to specific study designs or other markers (such as date), which reduces the number of studies returned by a literature search. The use of limits should be described and their implications explored [34], since limiting literature searching can introduce bias (explored above). Craven et al. have suggested the use of a supporting narrative to explain decisions made in the process of developing literature searches, and this advice would usefully capture decisions on the use of search limits [75].

Key stage five: Determining the process of literature searching and deciding where to search (bibliographic database searching)

Table 2 summarises the process of literature searching as reported in each guidance document. Searching bibliographic databases was consistently reported as the ‘first step’ to literature searching in all nine guidance documents.

Three documents reported specific guidance on where to search, in each case specific to the type of review their guidance informed, and as a minimum requirement [4, 9, 11]. Seven of the key guidance documents suggest that the selection of bibliographic databases depends on the topic of review [2, 3, 4, 6, 7, 8, 10], with two documents noting the absence of an agreed standard on what constitutes an acceptable number of databases searched [2, 6].

The guidance documents summarise ‘how to’ search bibliographic databases in detail, and this guidance is further contextualised above in terms of developing the search strategy. The documents provide guidance on selecting bibliographic databases, in some cases stating acceptable minima (i.e. the Cochrane Handbook states Cochrane CENTRAL, MEDLINE and EMBASE), and in other cases simply listing the bibliographic databases available to search. Studies have explored the value of searching specific bibliographic databases, with Wright et al. (2015) noting the contribution of CINAHL in identifying qualitative studies [76], Beckles et al. (2013) questioning the contribution of CINAHL to identifying clinical studies for guideline development [77], and Cooper et al. (2015) exploring the role of UK-focused bibliographic databases in identifying UK-relevant studies [78]. The host of the database (e.g. OVID or ProQuest) has been shown to alter the search returns offered: Younger and Boddy [79] report differing search returns from the same database (AMED) where the ‘host’ was different.

The average number of bibliographic databases searched in systematic reviews rose from one to four over the period 1994–2014 [80], but there remains (as attested to by the guidance) no consensus on what constitutes an acceptable number of databases searched [48]. This is perhaps because the number of databases searched is the wrong question; researchers should instead focus on which databases were searched and why, and which databases were not searched and why. The discussion should re-orientate to the differential value of sources, but researchers need to think about how to report this in studies to allow findings to be generalised. Bethel (2017) has proposed ‘search summaries’, completed by the literature searcher, to record where included studies were identified, whether from databases (and which databases specifically) or from supplementary search methods [81]. Search summaries document both the yield and the accuracy of searches, which could prospectively inform resource use and decisions to search, or not to search, specific databases in given topic areas. The prospective use of such data presupposes, however, that past searches are a potential predictor of future search performance (i.e. that each topic is to be considered representative and not unique). In offering a body of practice, these data would be of greater practical use than current studies, which are considered little more than individual case studies [82, 83, 84, 85, 86, 87, 88, 89, 90].
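A sketch of the kind of search summary Bethel proposes is given below: given the records each database returned and the studies finally included, it computes each database's yield (included studies found) and precision. The record identifiers and field names are invented for illustration, not a published schema.

```python
# Illustrative 'search summary': per-database yield and precision, computed
# from invented record identifiers.
results_by_database = {
    "MEDLINE": {"r1", "r2", "r3", "r4"},
    "EMBASE": {"r2", "r3", "r5"},
    "CINAHL": {"r6"},
}
included_studies = {"r2", "r3", "r6"}

for database, retrieved in results_by_database.items():
    hits = retrieved & included_studies     # included studies this database found
    precision = len(hits) / len(retrieved)  # accuracy of the search
    print(f"{database}: {len(hits)} included studies, precision {precision:.0%}")
```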

When to search each database is another question posed in the literature. Beyer et al. [91] report that databases can be prioritised for literature searching which, whilst not addressing the question of which databases to search, may at least bring clarity as to which databases to search first. Paradoxically, this links to studies that suggest PubMed should be searched in addition to MEDLINE (OVID interface), since this improves the currency of systematic reviews [92, 93]. Cooper et al. (2017) have tested the idea of database searching not as a primary search method (as suggested in the guidance) but as a supplementary search method, in order to manage the volume of studies identified for an environmental effectiveness systematic review. Their case study compared the effectiveness of database searching against a protocol using supplementary search methods and found that the latter identified more relevant studies for review than searching bibliographic databases [94].

Key stage six: Determining the process of literature searching and deciding where to search (supplementary search methods)

Table 2 also summarises the process of literature searching which follows bibliographic database searching. As Table 2 sets out, guidance that supplementary literature search methods should be used in systematic reviews recurs across documents, but the order in which these methods are used, and the extent to which they are used, varies. We noted inconsistency in the labelling of supplementary search methods between guidance documents.

Rather than focus on the guidance on how to use the methods (which has been summarised in a recent review [95]), we focus on the aim or purpose of supplementary search methods.

The Cochrane Handbook reported that ‘efforts’ to identify unpublished studies should be made [9]. Four guidance documents [2, 3, 6, 9] acknowledged that searching beyond bibliographic databases was necessary, since ‘databases are not the only source of literature’ [2]. Only one document reported any guidance on determining when to use supplementary methods: the IQWiG handbook reported that the use of handsearching (in their example) could be determined on a ‘case-by-case basis’, which implies that the use of these methods is optional rather than mandatory. This is in contrast to the guidance (above) on bibliographic database searching.

The issue for supplementary search methods is similar in many ways to the issue of searching bibliographic databases: demonstrating value. The purpose and contribution of supplementary search methods in systematic reviews is increasingly acknowledged [37, 61, 62, 96, 97, 98, 99, 100, 101], but the value of these search methods for identifying studies and data remains unclear. In a recently published review, Cooper et al. (2017) reviewed the literature on supplementary search methods to determine the advantages, disadvantages and resource implications of using them [95]. This review also summarises the key guidance and empirical studies and seeks to address the question of when to use these search methods and when not to [95]. The guidance is limited in this regard and, as Table 2 demonstrates, offers conflicting advice on the order of searching and the extent to which these search methods should be used in systematic reviews.

Key stage seven: Managing the references

Five of the documents provided guidance on managing references, for example downloading, de-duplicating and managing the output of literature searches [2, 4, 6, 8, 10]. This guidance typically itemised available bibliographic management tools rather than offering guidance on how to use them specifically [2, 4, 6, 8]. The CEE handbook provided guidance on importing data where no direct export option is available (e.g. web-searching) [10].

The literature on using bibliographic management tools is not large relative to the number of ‘how to’ videos on platforms such as YouTube (see, for example, [102]). These videos confirm the overall lack of ‘how to’ guidance identified in this study and offer useful instruction on managing references. Bramer et al. set out methods for de-duplicating data and reviewing references in EndNote [103, 104], and Gall tests the direct search function within EndNote to access databases such as PubMed, finding a number of limitations [105]. Coar et al. and Ahmed et al. consider the role of the free-to-use tool Zotero [106, 107]. Managing references is a key administrative function in the process of review, particularly for documenting searches as required by PRISMA guidance.
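The de-duplication step mentioned above can be illustrated with a simple pass that keys each record on its DOI where present, falling back on a normalised title. This is a toy version of the EndNote methods Bramer et al. describe [103]; the record fields and matching rules are our own simplifications.

```python
# Toy de-duplication of search results: prefer the DOI as a key, fall back
# on a normalised title. Record fields and matching rules are assumptions.
import re

def dedupe_key(record: dict) -> str:
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return f"doi:{doi}"
    title = re.sub(r"[^a-z0-9]+", " ", record.get("title", "").lower()).strip()
    return f"title:{title}"

def dedupe(records: list) -> list:
    seen, unique = set(), []
    for record in records:
        key = dedupe_key(record)
        if key not in seen:  # keep the first occurrence of each record
            seen.add(key)
            unique.append(record)
    return unique

records = [
    {"title": "Systematic searching: a review", "doi": "10.1000/x1"},
    {"title": "Systematic Searching - A Review.", "doi": "10.1000/x1"},
    {"title": "Another study", "doi": ""},
]
print(len(dedupe(records)))  # 2
```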

Key stage eight: Documenting the search

The Cochrane Handbook was the only guidance document to recommend a specific reporting guideline: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [9]. Six documents provided guidance on reporting the process of literature searching, with specific criteria to report [3, 4, 6, 8, 9, 10]. There was consensus on reporting: the databases searched (and the host via which they were searched), the search strategies used, and any use of limits (e.g. date, language, or search filters; the CRD handbook called for these limits to be justified [6]). Three guidance documents reported that the number of studies identified should be recorded [3, 6, 10]. The number of duplicates identified [10], the screening decisions [3], a comprehensive list of grey literature sources searched (and full detail for other supplementary search methods) [8], and an annotation of search terms tested but not used [4] were identified as unique items in four documents.

The Cochrane Handbook was the only guidance document to note that the full search strategies for each database should be included in Additional file 1 of the review [9].
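The consensus reporting items above translate naturally into a structured search log kept per database as the search is run. The sketch below shows one possible record; the fields mirror the consensus items (database, host, date searched, strategy, limits, numbers retrieved), but the schema itself is our own illustration, not a published standard.

```python
# One possible structure for logging a search, mirroring the consensus
# reporting items. The schema and the example values are illustrative.
from dataclasses import dataclass, field

@dataclass
class SearchRecord:
    database: str
    host: str
    date_searched: str                      # ISO date, e.g. "2017-08-01"
    strategy: str                           # the full search strategy
    limits: list = field(default_factory=list)
    records_retrieved: int = 0

log = [
    SearchRecord(
        database="MEDLINE",
        host="Ovid",
        date_searched="2017-08-01",
        strategy="1. exp Review Literature as Topic/\n2. literature search*.ti,ab.",
        limits=["English language", "2000-current"],
        records_retrieved=1234,             # invented figure
    )
]
print(log[0].database, log[0].records_retrieved)
```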

All guidance documents should ultimately deliver completed systematic reviews that fulfil the requirements of the PRISMA reporting guidelines [108]. The guidance broadly requires the reporting of data that corresponds with the requirements of the PRISMA statement, although documents typically ask for diverse and additional items [108]. In 2008, Sampson et al. observed a lack of consensus on reporting search methods in systematic reviews [109], and this remains the case as of 2017, as evidenced in the guidance documents, in spite of the publication of the PRISMA guidelines in 2009 [110]. It is unclear why the collective guidance does not more explicitly endorse adherence to the PRISMA guidance.

Reporting of literature searching is a key area in systematic reviews, since it sets out clearly what was done and, therefore, the basis on which the conclusions of the review can be believed [52, 109]. Despite strong endorsement in the guidance documents, specifically supported in PRISMA guidance, and in other related reporting standards too (such as ENTREQ for qualitative evidence synthesis and STROBE for reviews of observational studies), authors still highlight the prevalence of poor standards of literature search reporting [31, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119]. To explore issues experienced by authors in reporting literature searches, and to look at the uptake of PRISMA, Rader et al. [120] surveyed over 260 review authors to determine common problems, and their work summarises the practical aspects of reporting literature searching. Atkinson et al. [121] have also analysed reporting standards for literature searching, summarising recommendations and gaps for reporting search strategies.

One area that is less well covered by the guidance, but which nevertheless appears in this literature, is the quality appraisal or peer review of literature search strategies. The PRESS checklist is the most prominent, and it aims to develop evidence-based guidelines for the peer review of electronic search strategies [5, 122, 123]. A corresponding guideline for the documentation of supplementary search methods does not yet exist, although this idea is currently being explored.

How the reporting of the literature searching process corresponds to critical appraisal tools is an area for further research. In the survey undertaken by Rader et al. (2014), 86% of respondents (153/178) identified a need for further guidance on what aspects of the literature search process to report [120]. The PRISMA statement offers a brief summary of what to report but little practical guidance on how to report it [108]. Critical appraisal tools for systematic reviews, such as AMSTAR 2 (Shea et al. [124]) and ROBIS (Whiting et al. [125]), can usefully be read alongside PRISMA guidance, since they offer greater detail on how the reporting of the literature search will be appraised and, therefore, a proxy for what to report [124, 125]. A study comparing PRISMA with quality appraisal checklists for systematic reviews would begin to address the call, identified by Rader et al., for further guidance on what to report [120].

Limitations

Other handbooks exist.

A potential limitation of this literature review is the focus on guidance produced in Europe (the UK specifically) and Australia. We justify our selection of the nine guidance documents reviewed in this literature review in the section “Identifying guidance”. In brief, these nine guidance documents were selected as the most relevant health care guidance informing UK systematic reviewing practice, given that the UK occupies a prominent position in the science of health information retrieval. We acknowledge the existence of other guidance documents, such as those from North America (e.g. the Agency for Healthcare Research and Quality (AHRQ) [126], the Institute of Medicine [127] and the guidance and resources produced by the Canadian Agency for Drugs and Technologies in Health (CADTH) [128]). We comment further on this directly below.

The handbooks are potentially linked to one another

What is not clear is the extent to which the guidance documents inter-relate or provide guidance uniquely. The Cochrane Handbook, first published in 1994, is notably a key source of reference in guidance and in systematic reviews beyond Cochrane reviews. It is not clear to what extent broadening the sample of guidance handbooks to include North American handbooks, and guidance handbooks from other relevant countries, would alter the findings of this literature review or develop further support for the process model. Since we cannot be certain, we raise this as a potential limitation of this literature review. On our initial review of a sample of North American and other guidance documents (before selecting the guidance documents considered in this review), however, we do not consider that the inclusion of these further handbooks would significantly alter the findings of this literature review.

This is a literature review

A further limitation of this review is that the review of published studies is not a systematic review of the evidence for each key stage. It is possible that other relevant studies could contribute to the exploration and development of the key stages identified in this review.

Conclusion

This literature review would appear to demonstrate the existence of a shared model of the literature searching process in systematic reviews. We call this model ‘the conventional approach’, since it appears to be common convention in nine different guidance documents.

The findings reported above reveal eight key stages in the process of literature searching for systematic reviews. These key stages are consistently reported in the nine guidance documents, which suggests consensus on the key stages of literature searching, and therefore on the process of literature searching as a whole, in systematic reviews.

In Table 2, we demonstrate consensus regarding the application of literature search methods. All guidance documents distinguish between primary and supplementary search methods. Bibliographic database searching is consistently the first method of literature searching referenced in each guidance document. Whilst the guidance uniformly supports the use of supplementary search methods, there is little evidence for a consistent process, with diverse guidance across documents. This may reflect differences in the core focus of each document, linked to differences in identifying effectiveness studies or qualitative studies, for instance.

Eight of the nine guidance documents reported on the aims of literature searching. The shared understanding was that literature searching should be thorough and comprehensive in its aim, and that this process should be reported transparently so that it could be reproduced. Whilst only three documents explicitly link this understanding to minimising bias, it is clear that comprehensive literature searching is implicitly linked to ‘not missing relevant studies’, which amounts to the same point.

Defining the key stages in this review helps categorise the scholarship available, and it prioritises areas for development or further study. The supporting studies on preparing for literature searching (key stage three, ‘preparation’) were, for example, comparatively few, and yet this key stage represents a decisive moment in literature searching for systematic reviews. It is where the search strategy structure is determined, search terms are chosen or discarded, and the resources to be searched are selected. Information specialists, librarians and researchers are well placed to develop these and other areas within the key stages we identify.

This review calls for further research to determine the suitability of using the conventional approach. The publication dates of the guidance documents which underpin the conventional approach may raise questions as to whether the process which they each report remains valid for current systematic literature searching. In addition, it may be useful to test whether it is desirable to use the same process model of literature searching for qualitative evidence synthesis as that for reviews of intervention effectiveness, which this literature review demonstrates is presently recommended best practice.

Abbreviations

BeHEMoTh: Behaviour of interest; Health context; Exclusions; Models or Theories

CDSR: Cochrane Database of Systematic Reviews

CENTRAL: The Cochrane Central Register of Controlled Trials

DARE: Database of Abstracts of Reviews of Effects

ENTREQ: Enhancing transparency in reporting the synthesis of qualitative research

IQWiG: Institute for Quality and Efficiency in Healthcare

NICE: National Institute for Clinical Excellence

PICO: Population, Intervention, Comparator, Outcome

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SPICE: Setting, Perspective, Intervention, Comparison, Evaluation

SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type

STROBE: STrengthening the Reporting of OBservational studies in Epidemiology

TSC: Trial Search Co-ordinator

References

Booth A. Unpacking your literature search toolbox: on search styles and tactics. Health Information & Libraries Journal. 2008;25(4):313–7.

Petticrew M, Roberts H. Systematic reviews in the social sciences: a practical guide. Oxford: Blackwell Publishing Ltd; 2006.

Institute for Quality and Efficiency in Health Care (IQWiG). IQWiG Methods Resources. 7 Information retrieval; 2014. Available from: https://www.ncbi.nlm.nih.gov/books/NBK385787/.

NICE: National Institute for Health and Care Excellence. Developing NICE guidelines: the manual; 2014. Available from: https://www.nice.org.uk/media/default/about/what-we-do/our-programmes/developing-nice-guidelines-the-manual.pdf.

Sampson M, McGowan J, Lefebvre C, Moher D, Grimshaw J. Peer Review of Electronic Search Strategies: PRESS; 2008.

Centre for Reviews & Dissemination. Systematic reviews – CRD’s guidance for undertaking reviews in healthcare. York: Centre for Reviews and Dissemination, University of York; 2009.

EUnetHTA: European Network for Health Technology Assessment. Process of information retrieval for systematic reviews and health technology assessments on clinical effectiveness; 2016. Available from: http://www.eunethta.eu/sites/default/files/Guideline_Information_Retrieval_V1-1.pdf.

Kugley S, Wade A, Thomas J, Mahood Q, Jørgensen AMK, Hammerstrøm K, Sathe N. Searching for studies: a guide to information retrieval for Campbell systematic reviews. Oslo: The Campbell Collaboration; 2017. Available from: https://www.campbellcollaboration.org/library/searching-for-studies-information-retrieval-guide-campbell-reviews.html

Lefebvre C, Manheimer E, Glanville J. Chapter 6: searching for studies. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions; 2011.

Collaboration for Environmental Evidence. Guidelines for systematic review and evidence synthesis in environmental management. Environmental Evidence; 2013. Available from: http://www.environmentalevidence.org/wp-content/uploads/2017/01/Review-guidelines-version-4.2-final-update.pdf.

The Joanna Briggs Institute. Joanna Briggs Institute reviewers’ manual: 2014 edition. The Joanna Briggs Institute; 2014. Available from: https://joannabriggs.org/assets/docs/sumari/ReviewersManual-2014.pdf

Beverley CA, Booth A, Bath PA. The role of the information specialist in the systematic review process: a health information case study. Health Inf Libr J. 2003;20(2):65–74.

Harris MR. The librarian's roles in the systematic review process: a case study. Journal of the Medical Library Association. 2005;93(1):81–7.

Koffel JB. Use of recommended search strategies in systematic reviews and the impact of librarian involvement: a cross-sectional survey of recent authors. PLoS One. 2015;10(5):e0125931.

Li L, Tian J, Tian H, Moher D, Liang F, Jiang T, et al. Network meta-analyses could be improved by searching more sources and by involving a librarian. J Clin Epidemiol. 2014;67(9):1001–7.

McGowan J, Sampson M. Systematic reviews need systematic searchers. J Med Libr Assoc. 2005;93(1):74–80.

Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol. 2015;68(6):617–26.

Weller AC. Mounting evidence that librarians are essential for comprehensive literature searches for meta-analyses and Cochrane reports. J Med Libr Assoc. 2004;92(2):163–4.

Swinkels A, Briddon J, Hall J. Two physiotherapists, one librarian and a systematic literature review: collaboration in action. Health Info Libr J. 2006;23(4):248–56.

Foster M. An overview of the role of librarians in systematic reviews: from expert search to project manager. EAHIL. 2015;11(3):3–7.

Lawson L. Operating outside library walls; 2004.

Vassar M, Yerokhin V, Sinnett PM, Weiher M, Muckelrath H, Carr B, et al. Database selection in systematic reviews: an insight through clinical neurology. Health Inf Libr J. 2017;34(2):156–64.

Townsend WA, Anderson PF, Ginier EC, MacEachern MP, Saylor KM, Shipman BL, et al. A competency framework for librarians involved in systematic reviews. Journal of the Medical Library Association : JMLA. 2017;105(3):268–75.

Cooper ID, Crum JA. New activities and changing roles of health sciences librarians: a systematic review, 1990-2012. Journal of the Medical Library Association : JMLA. 2013;101(4):268–77.

Crum JA, Cooper ID. Emerging roles for biomedical librarians: a survey of current practice, challenges, and changes. Journal of the Medical Library Association : JMLA. 2013;101(4):278–86.

Dudden RF, Protzko SL. The systematic review team: contributions of the health sciences librarian. Med Ref Serv Q. 2011;30(3):301–15.

Golder S, Loke Y, McIntosh HM. Poor reporting and inadequate searches were apparent in systematic reviews of adverse effects. J Clin Epidemiol. 2008;61(5):440–8.

Maggio LA, Tannery NH, Kanter SL. Reproducibility of literature search reporting in medical education reviews. Academic medicine : journal of the Association of American Medical Colleges. 2011;86(8):1049–54.

Meert D, Torabi N, Costella J. Impact of librarians on reporting of the literature searching component of pediatric systematic reviews. Journal of the Medical Library Association : JMLA. 2016;104(4):267–77.

Morris M, Boruff JT, Gore GC. Scoping reviews: establishing the role of the librarian. Journal of the Medical Library Association : JMLA. 2016;104(4):346–54.

Koffel JB, Rethlefsen ML. Reproducibility of search strategies is poor in systematic reviews published in high-impact pediatrics, cardiology and surgery journals: a cross-sectional study. PLoS One. 2016;11(9):e0163309.

Fehrmann P, Thomas J. Comprehensive computer searches and reporting in systematic reviews. Research Synthesis Methods. 2011;2(1):15–32.

Booth A. Searching for qualitative research for inclusion in systematic reviews: a structured methodological review. Systematic Reviews. 2016;5(1):74.

Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health technology assessment (Winchester, England). 2003;7(1):1–76.

Tricco AC, Tetzlaff J, Sampson M, Fergusson D, Cogo E, Horsley T, et al. Few systematic reviews exist documenting the extent of bias: a systematic review. J Clin Epidemiol. 2008;61(5):422–34.

Booth A. How much searching is enough? Comprehensive versus optimal retrieval for technology assessments. Int J Technol Assess Health Care. 2010;26(4):431–5.

Papaioannou D, Sutton A, Carroll C, Booth A, Wong R. Literature searching for social science systematic reviews: consideration of a range of search techniques. Health Inf Libr J. 2010;27(2):114–22.

Petticrew M. Time to rethink the systematic review catechism? Moving from ‘what works’ to ‘what happens’. Systematic Reviews. 2015;4(1):36.

Betrán AP, Say L, Gülmezoglu AM, Allen T, Hampson L. Effectiveness of different databases in identifying studies for systematic reviews: experience from the WHO systematic review of maternal morbidity and mortality. BMC Med Res Methodol. 2005;5

Felson DT. Bias in meta-analytic research. J Clin Epidemiol. 1992;45(8):885–92.

Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: unlocking the file drawer. Science. 2014;345(6203):1502–5.

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. Grey literature in systematic reviews: a cross-sectional study of the contribution of non-English reports, unpublished studies and dissertations to the results of meta-analyses in child-relevant reviews. BMC Med Res Methodol. 2017;17(1):64.

Schmucker CM, Blümle A, Schell LK, Schwarzer G, Oeller P, Cabrera L, et al. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research. PLoS One. 2017;12(4):e0176210.

Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, Antes G. Language bias in randomised controlled trials published in English and German. Lancet (London, England). 1997;350(9074):326–9.

Moher D, Pham B, Lawson ML, Klassen TP. The inclusion of reports of randomised trials published in languages other than English in systematic reviews. Health technology assessment (Winchester, England). 2003;7(41):1–90.

Pham B, Klassen TP, Lawson ML, Moher D. Language of publication restrictions in systematic reviews gave different results depending on whether the intervention was conventional or complementary. J Clin Epidemiol. 2005;58(8):769–76.

Mills EJ, Kanters S, Thorlund K, Chaimani A, Veroniki A-A, Ioannidis JPA. The effects of excluding treatments from network meta-analyses: survey. BMJ : British Medical Journal. 2013;347

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. The contribution of databases to the results of systematic reviews: a cross-sectional study. BMC Med Res Methodol. 2016;16(1):127.

van Driel ML, De Sutter A, De Maeseneer J, Christiaens T. Searching for unpublished trials in Cochrane reviews may not be worth the effort. J Clin Epidemiol. 2009;62(8):838–44.e3.

Buchberger B, Krabbe L, Lux B, Mattivi JT. Evidence mapping for decision making: feasibility versus accuracy - when to abandon high sensitivity in electronic searches. German medical science : GMS e-journal. 2016;14:Doc09.

Lorenc T, Pearson M, Jamal F, Cooper C, Garside R. The role of systematic reviews of qualitative evidence in evaluating interventions: a case study. Research Synthesis Methods. 2012;3(1):1–10.

Gough D. Weight of evidence: a framework for the appraisal of the quality and relevance of evidence. Res Pap Educ. 2007;22(2):213–28.

Barroso J, Gollop CJ, Sandelowski M, Meynell J, Pearce PF, Collins LJ. The challenges of searching for and retrieving qualitative studies. West J Nurs Res. 2003;25(2):153–78.

Britten N, Garside R, Pope C, Frost J, Cooper C. Asking more of qualitative synthesis: a response to Sally Thorne. Qual Health Res. 2017;27(9):1370–6.

Booth A, Carroll C. Systematic searching for theory to inform systematic reviews: is it feasible? Is it desirable? Health Info Libr J. 2015;32(3):220–35.

Kwon Y, Powelson SE, Wong H, Ghali WA, Conly JM. An assessment of the efficacy of searching in biomedical databases beyond MEDLINE in identifying studies for a systematic review on ward closures as an infection control intervention to control outbreaks. Syst Rev. 2014;3:135.

Nussbaumer-Streit B, Klerings I, Wagner G, Titscher V, Gartlehner G. Assessing the validity of abbreviated literature searches for rapid reviews: protocol of a non-inferiority and meta-epidemiologic study. Systematic Reviews. 2016;5:197.

Wagner G, Nussbaumer-Streit B, Greimel J, Ciapponi A, Gartlehner G. Trading certainty for speed - how much uncertainty are decisionmakers and guideline developers willing to accept when using rapid reviews: an international survey. BMC Med Res Methodol. 2017;17(1):121.

Ogilvie D, Hamilton V, Egan M, Petticrew M. Systematic reviews of health effects of social interventions: 1. Finding the evidence: how far should you go? J Epidemiol Community Health. 2005;59(9):804–8.

Royle P, Milne R. Literature searching for randomized controlled trials used in Cochrane reviews: rapid versus exhaustive searches. Int J Technol Assess Health Care. 2003;19(4):591–603.

Pearson M, Moxham T, Ashton K. Effectiveness of search strategies for qualitative research about barriers and facilitators of program delivery. Eval Health Prof. 2011;34(3):297–308.

Levay P, Raynor M, Tuvey D. The contributions of MEDLINE, other bibliographic databases and various search techniques to NICE public health guidance. Evid Based Libr Inf Pract. 2015;10(1):19.

Nussbaumer-Streit B, Klerings I, Wagner G, Heise TL, Dobrescu AI, Armijo-Olivo S, et al. Abbreviated literature searches were viable alternatives to comprehensive searches: a meta-epidemiological study. J Clin Epidemiol. 2018;102:1–11.

Briscoe S, Cooper C, Glanville J, Lefebvre C. The loss of the NHS EED and DARE databases and the effect on evidence synthesis and evaluation. Res Synth Methods. 2017;8(3):256–7.

Stansfield C, O'Mara-Eves A, Thomas J. Text mining for search term development in systematic reviewing: a discussion of some methods and challenges. Research Synthesis Methods.

Petrova M, Sutcliffe P, Fulford KW, Dale J. Search terms and a validated brief search filter to retrieve publications on health-related values in Medline: a word frequency analysis study. Journal of the American Medical Informatics Association : JAMIA. 2012;19(3):479–88.

Stansfield C, Thomas J, Kavanagh J. 'Clustering' documents automatically to support scoping reviews of research: a case study. Res Synth Methods. 2013;4(3):230–41.

Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14:579.

Booth A. Clear and present questions: formulating questions for evidence based practice. Library Hi Tech. 2006;24(3):355–68.

Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qual Health Res. 2012;22(10):1435–43.

Whiting P, Westwood M, Bojke L, Palmer S, Richardson G, Cooper J, et al. Clinical effectiveness and cost-effectiveness of tests for the diagnosis and investigation of urinary tract infection in children: a systematic review and economic model. Health technology assessment (Winchester, England). 2006;10(36):iii-iv, xi-xiii, 1–154.

Cooper C, Levay P, Lorenc T, Craig GM. A population search filter for hard-to-reach populations increased search efficiency for a systematic review. J Clin Epidemiol. 2014;67(5):554–9.

Hausner E, Waffenschmidt S, Kaiser T, Simon M. Routine development of objectively derived search strategies. Systematic Reviews. 2012;1(1):19.

Hausner E, Guddat C, Hermanns T, Lampert U, Waffenschmidt S. Prospective comparison of search strategies for systematic reviews: an objective approach yielded higher sensitivity than a conceptual one. J Clin Epidemiol. 2016;77:118–24.

Craven J, Levay P. Recording database searches for systematic reviews - what is the value of adding a narrative to peer-review checklists? A case study of nice interventional procedures guidance. Evid Based Libr Inf Pract. 2011;6(4):72–87.

Wright K, Golder S, Lewis-Light K. What value is the CINAHL database when searching for systematic reviews of qualitative studies? Syst Rev. 2015;4:104.

Beckles Z, Glover S, Ashe J, Stockton S, Boynton J, Lai R, et al. Searching CINAHL did not add value to clinical questions posed in NICE guidelines. J Clin Epidemiol. 2013;66(9):1051–7.

Cooper C, Rogers M, Bethel A, Briscoe S, Lowe J. A mapping review of the literature on UK-focused health and social care databases. Health Inf Libr J. 2015;32(1):5–22.

Younger P, Boddy K. When is a search not a search? A comparison of searching the AMED complementary health database via EBSCOhost, OVID and DIALOG. Health Inf Libr J. 2009;26(2):126–35.

Lam MT, McDiarmid M. Increasing number of databases searched in systematic reviews and meta-analyses between 1994 and 2014. Journal of the Medical Library Association : JMLA. 2016;104(4):284–9.

Bethel A. Search summary tables for systematic reviews: results and findings. HLC Conference; 2017.

Aagaard T, Lund H, Juhl C. Optimizing literature search in systematic reviews - are MEDLINE, EMBASE and CENTRAL enough for identifying effect studies within the area of musculoskeletal disorders? BMC Med Res Methodol. 2016;16(1):161.

Adams CE, Frederick K. An investigation of the adequacy of MEDLINE searches for randomized controlled trials (RCTs) of the effects of mental health care. Psychol Med. 1994;24(3):741–8.

Kelly L, St Pierre-Hansen N. So many databases, such little clarity: searching the literature for the topic aboriginal. Canadian family physician Medecin de famille canadien. 2008;54(11):1572–3.

Lawrence DW. What is lost when searching only one literature database for articles relevant to injury prevention and safety promotion? Injury Prevention. 2008;14(6):401–4.

Lemeshow AR, Blum RE, Berlin JA, Stoto MA, Colditz GA. Searching one or two databases was insufficient for meta-analysis of observational studies. J Clin Epidemiol. 2005;58(9):867–73.

Sampson M, Barrowman NJ, Moher D, Klassen TP, Pham B, Platt R, et al. Should meta-analysts search Embase in addition to Medline? J Clin Epidemiol. 2003;56(10):943–55.

Stevinson C, Lawlor DA. Searching multiple databases for systematic reviews: added value or diminishing returns? Complementary Therapies in Medicine. 2004;12(4):228–32.

Suarez-Almazor ME, Belseck E, Homik J, Dorgan M, Ramos-Remus C. Identifying clinical trials in the medical literature with electronic databases: MEDLINE alone is not enough. Control Clin Trials. 2000;21(5):476–87.

Taylor B, Wylie E, Dempster M, Donnelly M. Systematically retrieving research: a case study evaluating seven databases. Res Soc Work Pract. 2007;17(6):697–706.

Beyer FR, Wright K. Can we prioritise which databases to search? A case study using a systematic review of frozen shoulder management. Health Info Libr J. 2013;30(1):49–58.

Duffy S, de Kock S, Misso K, Noake C, Ross J, Stirk L. Supplementary searches of PubMed to improve currency of MEDLINE and MEDLINE in-process searches via Ovid. Journal of the Medical Library Association : JMLA. 2016;104(4):309–12.

Katchamart W, Faulkner A, Feldman B, Tomlinson G, Bombardier C. PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews. J Clin Epidemiol. 2011;64(7):805–7.

Cooper C, Lovell R, Husk K, Booth A, Garside R. Supplementary search methods were more effective and offered better value than bibliographic database searching: a case study from public health and environmental enhancement (in press). Research Synthesis Methods. 2017.

Cooper C, Booth A, Britten N, Garside R. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review (in press). Systematic Reviews. 2017.

Greenhalgh T, Peacock R. Effectiveness and efficiency of search methods in systematic reviews of complex evidence: audit of primary sources. BMJ (Clinical research ed). 2005;331(7524):1064–5.

Hinde S, Spackman E. Bidirectional citation searching to completion: an exploration of literature searching methods. PharmacoEconomics. 2015;33(1):5–11.

Levay P, Ainsworth N, Kettle R, Morgan A. Identifying evidence for public health guidance: a comparison of citation searching with web of science and Google scholar. Res Synth Methods. 2016;7(1):34–45.

McManus RJ, Wilson S, Delaney BC, Fitzmaurice DA, Hyde CJ, Tobias RS, et al. Review of the usefulness of contacting other experts when conducting a literature search for systematic reviews. BMJ (Clinical research ed). 1998;317(7172):1562–3.

Westphal A, Kriston L, Holzel LP, Harter M, von Wolff A. Efficiency and contribution of strategies for finding randomized controlled trials: a case study from a systematic review on therapeutic interventions of chronic depression. Journal of public health research. 2014;3(2):177.

Matthews EJ, Edwards AG, Barker J, Bloor M, Covey J, Hood K, et al. Efficient literature searching in diffuse topics: lessons from a systematic review of research on communicating risk to patients in primary care. Health Libr Rev. 1999;16(2):112–20.

Bethel A. EndNote training (YouTube videos); 2017. Available from: http://medicine.exeter.ac.uk/esmi/workstreams/informationscience/is_resources,_guidance_&_advice/.

Bramer WM, Giustini D, de Jonge GB, Holland L, Bekhuis T. De-duplication of database search results for systematic reviews in EndNote. Journal of the Medical Library Association : JMLA. 2016;104(3):240–3.

Bramer WM, Milic J, Mast F. Reviewing retrieved references for inclusion in systematic reviews using EndNote. Journal of the Medical Library Association : JMLA. 2017;105(1):84–7.

Gall C, Brahmi FA. Retrieval comparison of EndNote to search MEDLINE (Ovid and PubMed) versus searching them directly. Medical reference services quarterly. 2004;23(3):25–32.

Ahmed KK, Al Dhubaib BE. Zotero: a bibliographic assistant to researcher. J Pharmacol Pharmacother. 2011;2(4):303–5.

Coar JT, Sewell JP. Zotero: harnessing the power of a personal bibliographic manager. Nurse Educ. 2010;35(5):205–7.

Moher D, Liberati A, Tetzlaff J, Altman DG, The PG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

Sampson M, McGowan J, Tetzlaff J, Cogo E, Moher D. No consensus exists on search reporting methods for systematic reviews. J Clin Epidemiol. 2008;61(8):748–54.

Toews LC. Compliance of systematic reviews in veterinary journals with preferred reporting items for systematic reviews and meta-analysis (PRISMA) literature search reporting guidelines. Journal of the Medical Library Association : JMLA. 2017;105(3):233–9.

Booth A. "brimful of STARLITE": toward standards for reporting literature searches. Journal of the Medical Library Association : JMLA. 2006;94(4):421–9. e205

Faggion CM Jr, Wu YC, Tu YK, Wasiak J. Quality of search strategies reported in systematic reviews published in stereotactic radiosurgery. Br J Radiol. 2016;89(1062):20150878.

Mullins MM, DeLuca JB, Crepaz N, Lyles CM. Reporting quality of search methods in systematic reviews of HIV behavioral interventions (2000–2010): are the searches clearly explained, systematic and reproducible? Research Synthesis Methods. 2014;5(2):116–30.

Yoshii A, Plaut DA, McGraw KA, Anderson MJ, Wellik KE. Analysis of the reporting of search strategies in Cochrane systematic reviews. Journal of the Medical Library Association : JMLA. 2009;97(1):21–9.

Bigna JJ, Um LN, Nansseu JR. A comparison of quality of abstracts of systematic reviews including meta-analysis of randomized controlled trials in high-impact general medicine journals before and after the publication of PRISMA extension for abstracts: a systematic review and meta-analysis. Syst Rev. 2016;5(1):174.

Akhigbe T, Zolnourian A, Bulters D. Compliance of systematic reviews articles in brain arteriovenous malformation with PRISMA statement guidelines: review of literature. Journal of clinical neuroscience : official journal of the Neurosurgical Society of Australasia. 2017;39:45–8.

Tao KM, Li XQ, Zhou QH, Moher D, Ling CQ, Yu WF. From QUOROM to PRISMA: a survey of high-impact medical journals' instructions to authors and a review of systematic reviews in anesthesia literature. PLoS One. 2011;6(11):e27611.

Wasiak J, Tyack Z, Ware R. Goodwin N. Jr. Poor methodological quality and reporting standards of systematic reviews in burn care management. International wound journal: Faggion CM; 2016.

Tam WW, Lo KK, Khalechelvam P. Endorsement of PRISMA statement and quality of systematic reviews and meta-analyses published in nursing journals: a cross-sectional study. BMJ Open. 2017;7(2):e013905.

Rader T, Mann M, Stansfield C, Cooper C, Sampson M. Methods for documenting systematic review searches: a discussion of common issues. Res Synth Methods. 2014;5(2):98–115.

Atkinson KM, Koenka AC, Sanchez CE, Moshontz H, Cooper H. Reporting standards for literature searches and report inclusion criteria: making research syntheses more transparent and easy to replicate. Res Synth Methods. 2015;6(1):87–95.

McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6.

Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C. An evidence-based practice guideline for the peer review of electronic search strategies. J Clin Epidemiol. 2009;62(9):944–52.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ (Clinical research ed). 2017;358.

Whiting P, Savović J, Higgins JPT, Caldwell DM, Reeves BC, Shea B, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

Relevo R, Balshem H. Finding evidence for comparing medical interventions: AHRQ and the effective health care program. J Clin Epidemiol. 2011;64(11):1168–77.

Medicine Io. Standards for Systematic Reviews 2011 [Available from: http://www.nationalacademies.org/hmd/Reports/2011/Finding-What-Works-in-Health-Care-Standards-for-Systematic-Reviews/Standards.aspx .

CADTH: Resources 2018.

Download references

Acknowledgements

CC acknowledges the supervision offered by Professor Chris Hyde.

This publication forms a part of CC’s PhD. CC’s PhD was funded through the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme (Project Number 16/54/11). The open access fee for this publication was paid for by Exeter Medical School.

RG and NB were partially supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care South West Peninsula.

The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

Author information

Authors and Affiliations

Institute of Health Research, University of Exeter Medical School, Exeter, UK

Chris Cooper & Jo Varley-Campbell

HEDS, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Andrew Booth

Nicky Britten

European Centre for Environment and Human Health, University of Exeter Medical School, Truro, UK

Ruth Garside


Contributions

CC conceived the idea for this study and wrote the first draft of the manuscript. CC discussed this publication in PhD supervision with AB and separately with JVC. CC revised the publication with input and comments from AB, JVC, RG and NB. All authors revised the manuscript prior to submission. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Chris Cooper.

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Appendix tables and PubMed search strategy. Key studies used for pearl growing per key stage, working data extraction tables and the PubMed search strategy. (DOCX 30 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Cooper, C., Booth, A., Varley-Campbell, J. et al. Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies. BMC Med Res Methodol 18, 85 (2018). https://doi.org/10.1186/s12874-018-0545-3


Received: 20 September 2017

Accepted: 06 August 2018

Published: 14 August 2018

DOI: https://doi.org/10.1186/s12874-018-0545-3


Keywords

  • Literature Search Process
  • Citation Chasing
  • Tacit Models
  • Unique Guidance
  • Information Specialists



Best Practice for Literature Searching


Where to search

To conduct a successful literature review, you need to carry out a comprehensive search so that you can be confident you have found all the relevant literature on the topic you are investigating. You can put together an excellent search, but if you are not looking in the right places, you will not find the literature you need.


Access means getting the full text of research. Discovery means finding out about the existence of research. It is a very common mistake to use tools that are better for access for the task of discovery. Doing this means you are doing your searching backwards!

The first step of your search must be discovery; only after you know what is out there do you need to start acquiring the full text of the articles you need. Reversing the order wastes time and hurts the quality of your literature review, because you will be much less likely to find all the research you should be finding about your question.

Search options

  • Search engines
  • Academic search engines
  • Researcher platforms
  • Publisher platforms
  • Library discovery services

The best tools for conducting your literature searches comprehensively and systematically are subject-specific databases.

Databases abstract and index the content of academic journals from multiple publishers and, when appropriate, other publication types such as trade journals, standards, reports, conference papers and patents. They are designed for discovery: finding out that a piece of research exists and giving you the bibliographic details you need to locate it.

The difference between subject-specific databases and general databases

Databases can be either multidisciplinary or focused on a discipline such as chemistry, language and literature, or the sciences of food and health. The focus defines the database's scope (what information is included within it) and how you find that information.

A subject-focused database is usually built around a thesaurus of subject terms based in its discipline. Because of their breadth of coverage, multidisciplinary databases don't have thesauri, which means they are far more likely to return what are called false hits, or noise: results where the term you searched is not used in the sense you need.

Even when a database does have a thesaurus, if that thesaurus is focused on a different discipline than the one your topic falls within, you can still get false hits for a term. Information retrieval in the area of food is complex because of the broadness of the field. For example:

  • Searching pig in a general database will bring back everything from content where the animal has been used in preclinical trials, to livestock research, to the use of pork in food.
  • In a health-focused database, the search options and filters will have been developed for the human health field, which may not be helpful for searching food science topics not related to human health.
  • Searching spirits in PubMed (which has a thesaurus focused on biomedical terms) or in the multidisciplinary databases Web of Science or Scopus (which do not have thesauri) brings back moods and the supernatural mixed in with research focused on alcoholic beverages.

FSTA, focused on food science, not only doesn't bring back supernatural false hits for spirits, it brings back many more relevant results about distilled alcoholic beverages, because each record has been tagged, or indexed, with the subject-specific term spirits even when that term does not appear in an article's title or abstract.

What is indexing and why is it helpful in searching?


For example, in FSTA, if you search the thesaurus term aroma, it pulls together all the results where the authors used the word aroma to describe a central element of the research, but also works by authors who used the words odor, odour or smell.

Similarly, research about Baijiu, Luzhou-flavor liquors, Luzhou-flavour liquors, Moutai liquors, and Moutai-flavor liquors is all gathered under the thesaurus subject heading Chinese liquors.

Some databases rely on machine learning to do the indexing, while others like FSTA have editorial teams of experts who do the work more accurately.  
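To make the idea concrete, here is a minimal sketch in Python of how a controlled vocabulary works: one subject heading expands to every variant term an author might have used. The dictionary entries below simply mirror the examples above; they are illustrative stand-ins, not FSTA's actual thesaurus.

```python
# Illustrative only: a toy controlled vocabulary mapping a subject heading to
# the variant terms authors actually use. FSTA's real thesaurus is maintained
# editorially; these entries just mirror the aroma and Chinese liquors examples.
THESAURUS = {
    "aroma": ["aroma", "odor", "odour", "smell"],
    "chinese liquors": ["Chinese liquors", "Baijiu", "Luzhou-flavor liquors",
                        "Luzhou-flavour liquors", "Moutai liquors",
                        "Moutai-flavor liquors"],
}

def expand(heading: str) -> str:
    """Expand a subject heading into an OR-joined free-text query string."""
    variants = THESAURUS.get(heading.lower(), [heading])
    return " OR ".join(f'"{v}"' for v in variants)

print(expand("aroma"))
# "aroma" OR "odor" OR "odour" OR "smell"
```

Searching the single heading retrieves records tagged with it regardless of the authors' wording, which is exactly what a free-text search cannot do.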

Search engines like Google allow you to find all sorts of information on the internet, but they are not designed specifically for finding scholarly information, so they are terrible for literature searches.

However, they are good for finding governmental information like U.S. Department of Agriculture research funding instructions, scientific reports from the UK Food Standards Agency or the European Food Safety Authority, or guidelines from organizations like the World Health Organization.  Academic search engines and most databases do not include this type of document.  The database FSTA is an exception, since it indexes legislation, standards and reports (but not funding instructions). 

Unlike general search engines, academic search engines like Google Scholar do focus on scholarly information, but they:

  • DO NOT exercise editorial standards about the content's quality, which results in the inclusion of predatory journal articles as well as distracting, sometimes silly, completely irrelevant citations, such as lunch menus indexed as though they were academic papers.


  • DO NOT allow precise control over searches, even with advanced search options.
  • DO NOT use indexing, which means you only find results written in the language in which you are searching.
  • DO cover all disciplines, which means that your searches are likely to bring back lots of false hits.

Search engines can be useful for accessing the full text of articles and patents, but using them for discovery is an inefficient (and potentially hazardous) way to research.

Platforms like Academia.edu and ResearchGate allow researchers to create profiles to showcase their work and share their articles. Both can be useful for acquiring full-text articles; however, because researchers create and maintain their own profiles, searching these platforms will not give you a comprehensive overview of a field—you’ll only find the work of researchers who have chosen to participate.

Don’t confuse these platforms with discovery services, such as databases, which are specifically designed to be comprehensive in the subject area they cover in order to help researchers find relevant information.

Some tools might seem to be full text databases but are actually publisher-specific platforms. ScienceDirect, the subscription platform hosting Elsevier’s journal articles, is a notable example. ScienceDirect makes it very easy to access Elsevier content, but only about 20% of food science research is published in Elsevier journals. Using that platform or any other single publisher platform to search for content will drastically limit your search.

Library discovery services are designed for both discovery and access. They are intended to make it easy for a user to search in one place to find everything in the library's collections: print and e-books, articles, and more. They make it easy to access the full text of everything the library owns or subscribes to, or link to inter-library loan forms for borrowing material from other libraries. The disadvantages of using them for the discovery process are that:

  • They can make it seem as though you are seeing everything the library has in its collections when, for a variety of behind-the-scenes reasons, this is almost always not true.
  • They are interdisciplinary by nature, because the library's collections span many subjects, which means they lack the subject-specific features that help build targeted searches.

BEST PRACTICE RECOMMENDATION: Look at the library discovery page for a link to the subject-specific databases. You can often see a list ordered by subject, alphabetically, or both.


Best practice!

BEST PRACTICE RECOMMENDATION: familiarise yourself with the databases you have access to, including subscription databases. Get to know their scope (what content they index) and how to search them, including using thesaurus functions if available, so that you can use each to its full capacity.

BEST PRACTICE RECOMMENDATION: remember that research for a literature review is a two-step process: first discovery of research, then accessing the research you've determined you need. Don't switch the order of the steps! If you limit your search to the research outputs you think you have easy access to, you will almost certainly end up with a biased review that is neither systematic nor comprehensive.



Making Literature Reviews Work: A Multidisciplinary Guide to Systematic Approaches, pp 145–200

Search Strategies for [Systematic] Literature Reviews

  • Rob Dekkers 4 ,
  • Lindsey Carey 5 &
  • Peter Langhorne 6  
  • First Online: 11 August 2022


After setting review questions as discussed in the previous chapter, the search for relevant publications is the next step of a literature review.


JEL is the abbreviation of the ‘Journal of Economic Literature’, published by the American Economic Association, which launched this coding system.

Actually, Schlosser et al. (2006, p. 571 ff.) call it ‘traditional pearl growing.’ The term ‘classical’ pearl growing has been adopted to ensure consistency throughout the book.

The wording ‘topical bibliography’ by Schlosser et al. (2006, p. 574) has been replaced with ‘topical survey’ in order to connect better to the terminology in this book.

Webster and Watson (2002, p. xvi) call it forward searching and backward searching rather than snowballing. See Table 5.3 for the nomenclature used in the book for search strategies.

Aguillo IF (2012) Is Google Scholar useful for bibliometrics? A webometric analysis. Scientometrics 91(2):343–351. https://doi.org/10.1007/s11192-011-0582-8

Bardia A, Wahner-Roedler DL, Erwin PL, Sood A (2006) Search strategies for retrieving complementary and alternative medicine clinical trials in oncology. Integr Cancer Ther 5(3):202–205. https://doi.org/10.1177/1534735406292146

Bates MJ (1989) The design of browsing and berrypicking techniques for the online search interface. Online Rev 13(5):407–424


Bates MJ (2007) What is browsing—really? A model drawing from behavioural science research. Inform Res 12(4). http://informationr.net/ir/12-4/paper330.html

Benzies KM, Premji S, Hayden KA, Serrett K (2006) State-of-the-evidence reviews: advantages and challenges of including grey literature. Worldviews Evid Based Nurs 3(2):55–61. https://doi.org/10.1111/j.1741-6787.2006.00051.x

Bernardo M, Simon A, Tarí JJ, Molina-Azorín JF (2015) Benefits of management systems integration: a literature review. J Clean Prod 94:260–267. https://doi.org/10.1016/j.jclepro.2015.01.075

Beynon R, Leeflang MM, McDonald S, Eisinga A, Mitchell RL, Whiting P, Glanville JM (2013) Search strategies to identify diagnostic accuracy studies in MEDLINE and EMBASE. Cochrane Database Syst Rev (9). https://doi.org/10.1002/14651858.MR000022.pub3

Bolton JE (1971) Small firms—report of the committee of inquiry on small firms (4811). London

Boluyt N, Tjosvold L, Lefebvre C, Klassen TP, Offringa M (2008) Usefulness of systematic review search strategies in finding child health systematic reviews in MEDLINE. Arch Pediatr Adolesc Med 162(2):111–116. https://doi.org/10.1001/archpediatrics.2007.40

Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt GJ, Rehfuess E (2018) Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches. J Clinic Epidemiol 99:41–52. https://doi.org/10.1016/j.jclinepi.2018.03.003

Chesbrough H (2012) Open innovation: where we’ve been and where we’re going. Res Technol Manag 55(4):20–27. https://doi.org/10.5437/08956308X5504085

Chesbrough HW (2003) Open innovation: the new imperative for creating and profiting from technology. Harvard Business School Press, Boston

Conn VS, Valentine JC, Cooper HM, Rantz MJ (2003) Grey literature in meta-analyses. Nurs Res 52(4):256–261

de la Torre Díez I, Cosgaya HM, Garcia-Zapirain B, López-Coronado M (2016) Big data in health: a literature review from the year 2005. J Med Syst 40(9):209. https://doi.org/10.1007/s10916-016-0565-7

Dekkers R, Hicks C (2019) How many cases do you need for studies into operations management? Guidance based on saturation. In: Paper presented at the 26th EurOMA conference, Helsinki

Dekkers R, Koukou MI, Mitchell S, Sinclair S (2019) Engaging with open innovation: a Scottish perspective on its opportunities, challenges and risks. J Innov Econ Manag 28(1):193–226. https://doi.org/10.3917/jie.028.0187

Dieste O, Grimán A, Juristo N (2009) Developing search strategies for detecting relevant experiments. Empir Softw Eng 14(5):513–539. https://doi.org/10.1007/s10664-008-9091-7

Eady AM, Wilczynski NL, Haynes RB (2008) PsycINFO search strategies identified methodologically sound therapy studies and review articles for use by clinicians and researchers. J Clin Epidemiol 61(1):34–40. https://doi.org/10.1016/j.jclinepi.2006.09.016

Egger M, Zellweger-Zähner T, Schneider M, Junker C, Lengeler C, Antes G (1997) Language bias in randomised controlled trials published in English and German. The Lancet 350(9074):326–329. https://doi.org/10.1016/S0140-6736(97)02419-7

Eisenhardt KM (1989) Agency theory: an assessment and review. Acad Manag Rev 14(1):57–74. https://doi.org/10.5465/AMR.1989.4279003

Finfgeld-Connett D, Johnson ED (2013) Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. J Adv Nurs 69(1):194–204. https://doi.org/10.1111/j.1365-2648.2012.06037.x

Frederick JT, Steinman LE, Prohaska T, Satariano WA, Bruce M, Bryant L, Snowden M (2007) Community-based treatment of late life depression: an expert panel-informed literature review. Am J Prev Med 33(3):222–249. https://doi.org/10.1016/j.amepre.2007.04.035

Glanville J, Kaunelis D, Mensinkai S (2009) How well do search filters perform in identifying economic evaluations in MEDLINE and EMBASE. Int J Technol Assess Health Care 25(4):522–529. https://doi.org/10.1017/S0266462309990523

Godin K, Stapleton J, Kirkpatrick SI, Hanning RM, Leatherdale ST (2015) Applying systematic review search methods to the grey literature: a case study examining guidelines for school-based breakfast programs in Canada. Syst Rev 4(1):138. https://doi.org/10.1186/s13643-015-0125-0

Green BN, Johnson CD, Adams A (2006) Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. J Chiropr Med 5(3):101–117. https://doi.org/10.1016/S0899-3467(07)60142-6

Greenhalgh T, Peacock R (2005) Effectiveness and efficiency of search methods in systematic reviews of complex evidence: audit of primary sources. BMJ 331(7524):1064–1065. https://doi.org/10.1136/bmj.38636.593461.68

Grégoire G, Derderian F, le Lorier J (1995) Selecting the language of the publications included in a meta-analysis: is there a Tower of Babel bias? J Clin Epidemiol 48(1):159–163

Gross T, Taylor AG, Joudrey DN (2015) Still a lot to lose: the role of controlled vocabulary in keyword searching. Catalog Classific Q 53(1):1–39. https://doi.org/10.1080/01639374.2014.917447

Grosso G, Godos J, Galvano F, Giovannucci EL (2017) Coffee, caffeine, and health outcomes: an umbrella review. Annu Rev Nutr 37(1):131–156. https://doi.org/10.1146/annurev-nutr-071816-064941

Gusenbauer M, Haddaway NR (2020) Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res Synthesis Methods 11(2):181–217. https://doi.org/10.1002/jrsm.1378

Haddaway NR, Bayliss HR (2015) Shades of grey: two forms of grey literature important for reviews in conservation. Biol Cons 191:827–829. https://doi.org/10.1016/j.biocon.2015.08.018

Haddaway NR, Collins AM, Coughlin D, Kirk S (2015) The role of Google Scholar in evidence reviews and its applicability to grey literature searching. PLoS One 10(9):e0138237. https://doi.org/10.1371/journal.pone.0138237

Harari MB, Parola HR, Hartwell CJ, Riegelman A (2020) Literature searches in systematic reviews and meta-analyses: a review, evaluation, and recommendations. J Vocat Behav 118:103377. https://doi.org/10.1016/j.jvb.2020.103377

Harzing A-WK, van der Wal R (2008) Google Scholar as a new source for citation analysis. Ethics Sci Environ Politics 8(1):61–73. https://doi.org/10.3354/esep00076

Hausner E, Guddat C, Hermanns T, Lampert U, Waffenschmidt S (2016) Prospective comparison of search strategies for systematic reviews: an objective approach yielded higher sensitivity than a conceptual one. J Clin Epidemiol 77:118–124. https://doi.org/10.1016/j.jclinepi.2016.05.002

Hausner E, Waffenschmidt S, Kaiser T, Simon M (2012) Routine development of objectively derived search strategies. Syst Rev 1(1):19. https://doi.org/10.1186/2046-4053-1-19

Havill NL, Leeman J, Shaw-Kokot J, Knafl K, Crandell J, Sandelowski M (2014) Managing large-volume literature searches in research synthesis studies. Nurs Outlook 62(2):112–118. https://doi.org/10.1016/j.outlook.2013.11.002

Hildebrand AM, Iansavichus AV, Haynes RB, Wilczynski NL, Mehta RL, Parikh CR, Garg AX (2014) High-performance information search filters for acute kidney injury content in PubMed, Ovid Medline and Embase. Nephrol Dial Transplant 29(4):823–832. https://doi.org/10.1093/ndt/gft531

Hinckeldeyn J, Dekkers R, Kreutzfeldt J (2015) Productivity of product design and engineering processes—unexplored territory for production management techniques? Int J Oper Prod Manag 35(4):458–486. https://doi.org/10.1108/IJOPM-03-2013-0101

Hopewell S, Clarke M, Lefebvre C, Scherer R (2007) Handsearching versus electronic searching to identify reports of randomized trials. Cochrane Database Syst Rev (2):MR000001. https://doi.org/10.1002/14651858.mr000001.pub2

Isckia T, Lescop D (2011) Une analyse critique des fondements de l’innovation ouverte. Rev Fr Gest 210(1):87–98

Jackson JL, Kuriyama A (2019) How often do systematic reviews exclude articles not published in English? J Gen Intern Med 34(8):1388–1389. https://doi.org/10.1007/s11606-019-04976-x

Jennex ME (2015) Literature reviews and the review process: an editor-in-chief’s perspective. Commun Assoc Inf Syst 36:139–146. https://doi.org/10.17705/1CAIS.03608

Jensen MC, Meckling WH (1976) Theory of the firm: managerial behavior, agency costs and ownership structure. J Financ Econ 3(4):305–360. https://doi.org/10.1016/0304-405X(76)90026-X

Jüni P, Holenstein F, Sterne J, Bartlett C, Egger M (2002) Direction and impact of language bias in meta-analyses of controlled trials: empirical study. Int J Epidemiol 31(1):115–123. https://doi.org/10.1093/ije/31.1.115

Kennedy MM (2007) Defining a literature. Educ Res 36(3):139. https://doi.org/10.3102/0013189x07299197

Koffel JB (2015) Use of recommended search strategies in systematic reviews and the impact of librarian involvement: a cross-sectional survey of recent authors. PLoS One 10(5):e0125931. https://doi.org/10.1371/journal.pone.0125931

Koffel JB, Rethlefsen ML (2016) Reproducibility of search strategies is poor in systematic reviews published in high-impact pediatrics, cardiology and surgery journals: a cross-sectional study. PLoS One 11(9):e0163309. https://doi.org/10.1371/journal.pone.0163309

Lawal AK, Rotter T, Kinsman L, Sari N, Harrison L, Jeffery C, Flynn R (2014) Lean management in health care: definition, concepts, methodology and effects reported (systematic review protocol). Syst Rev 3(1):103. https://doi.org/10.1186/2046-4053-3-103

Levay P, Ainsworth N, Kettle R, Morgan A (2016) Identifying evidence for public health guidance: a comparison of citation searching with Web of Science and Google Scholar. Res Synthesis Methods 7(1):34–45. https://doi.org/10.1002/jrsm.1158

Li L, Smith HE, Atun R, Tudor Car L (2019) Search strategies to identify observational studies in MEDLINE and Embase. Cochrane Database Syst Rev (3). https://doi.org/10.1002/14651858.MR000041.pub2

Linton JD, Thongpapanl NT (2004) Ranking the technology innovation management journals. J Prod Innov Manag 21(2):123–139. https://doi.org/10.1111/j.0737-6782.2004.00062.x

Lokker C, McKibbon KA, Wilczynski NL, Haynes RB, Ciliska D, Dobbins M, Straus SE (2010) Finding knowledge translation articles in CINAHL. Studies Health Technol Inform 160(2):1179–1183

Lu Z (2011) PubMed and beyond: a survey of web tools for searching biomedical literature. Database. https://doi.org/10.1093/database/baq036

MacSuga-Gage AS, Simonsen B (2015) Examining the effects of teacher—directed opportunities to respond on student outcomes: a systematic review of the literature. Educ Treat Child 38(2):211–239. https://doi.org/10.1353/etc.2015.0009

Mahood Q, Van Eerd D, Irvin E (2014) Searching for grey literature for systematic reviews: challenges and benefits. Res Synthesis Methods 5(3):221–234. https://doi.org/10.1002/jrsm.1106

Marangunić N, Granić A (2015) Technology acceptance model: a literature review from 1986 to 2013. Univ Access Inf Soc 14(1):81–95. https://doi.org/10.1007/s10209-014-0348-1

Mc Elhinney H, Taylor B, Sinclair M, Holman MR (2016) Sensitivity and specificity of electronic databases: the example of searching for evidence on child protection issues related to pregnant women. Evid Based Midwifery 14(1):29–34

McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C (2016) PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol 75:40–46. https://doi.org/10.1016/j.jclinepi.2016.01.021

McManus RJ, Wilson S, Delaney BC, Fitzmaurice DA, Hyde CJ, Tobias S, Hobbs FDR (1998) Review of the usefulness of contacting other experts when conducting a literature search for systematic reviews. Br Med J 317(7172):1562–1563. https://doi.org/10.1136/bmj.317.7172.1562

Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S (2014) PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res 14(1):579. https://doi.org/10.1186/s12913-014-0579-0

Mitnick BM (1973) Fiduciary rationality and public policy: the theory of agency and some consequences. In: Paper presented at the annual meeting of the American political science association, New Orleans, LA

Morrison A, Polisena J, Husereau D, Moulton K, Clark M, Fiander M, Rabb D (2012) The effect of English-language restriction on systematic review-based meta-analyses: a systematic review of empirical studies. Int J Technol Assess Health Care 28(2):138–144. https://doi.org/10.1017/S0266462312000086

Neuhaus C, Daniel HD (2008) Data sources for performing citation analysis: an overview. J Document 64(2):193–210. https://doi.org/10.1108/00220410810858010

O’Mara-Eves A, Thomas J, McNaught J, Miwa M, Ananiadou S (2015) Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev 4(1):5. https://doi.org/10.1186/2046-4053-4-5

Ogilvie D, Foster CE, Rothnie H, Cavill N, Hamilton V, Fitzsimons CF, Mutrie N (2007) Interventions to promote walking: systematic review. BMJ 334(7605):1204. https://doi.org/10.1136/bmj.39198.722720.BE

Onetti A (2019) Turning open innovation into practice: trends in European corporates. J Bus Strateg 42(1):51–58. https://doi.org/10.1108/JBS-07-2019-0138

Papaioannou D, Sutton A, Carroll C, Booth A, Wong R (2010) Literature searching for social science systematic reviews: consideration of a range of search techniques. Health Info Libr J 27(2):114–122. https://doi.org/10.1111/j.1471-1842.2009.00863.x

Pappas C, Williams I (2011) Grey literature: its emerging importance. J Hosp Librariansh 11(3):228–234. https://doi.org/10.1080/15323269.2011.587100

Pearson M, Moxham T, Ashton K (2011) Effectiveness of search strategies for qualitative research about barriers and facilitators of program delivery. Eval Health Prof 34(3):297–308. https://doi.org/10.1177/0163278710388029

Piggott-McKellar AE, McNamara KE, Nunn PD, Watson JEM (2019) What are the barriers to successful community-based climate change adaptation? A review of grey literature. Local Environ 24(4):374–390. https://doi.org/10.1080/13549839.2019.1580688

Piller F, West J (2014) Firms, users, and innovations: an interactive model of coupled innovation. In: Chesbrough HW, Vanhaverbeke W, West J (eds) New frontiers in open innovation. Oxford University Press, Oxford, pp 29–49

Poole R, Kennedy OJ, Roderick P, Fallowfield JA, Hayes PC, Parkes J (2017) Coffee consumption and health: umbrella review of meta-analyses of multiple health outcomes. BMJ 359:j5024. https://doi.org/10.1136/bmj.j5024

Priem RL, Butler JE (2001) Is the resource-based “view” a useful perspective for strategic management research? Acad Manag Rev 26(1):22–40. https://doi.org/10.5465/amr.2001.4011928

Relevo R (2012) Effective search strategies for systematic reviews of medical tests. J Gener Internal Med 27(1):S28–S32. https://doi.org/10.1007/s11606-011-1873-8

Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ (2015) Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol 68(6):617–626. https://doi.org/10.1016/j.jclinepi.2014.11.025

Rewhorn S (2018) Writing your successful literature review. J Geogr High Educ 42(1):143–147. https://doi.org/10.1080/03098265.2017.1337732

Rietjens JA, Bramer WM, Geijteman EC, van der Heide A, Oldenmenger WH (2019) Development and validation of search filters to find articles on palliative care in bibliographic databases. Palliat Med 33(4):470–474. https://doi.org/10.1177/0269216318824275

Rogers M, Bethel A, Abbott R (2018) Locating qualitative studies in dementia on MEDLINE, EMBASE, CINAHL, and PsycINFO: a comparison of search strategies. Res Synthesis Methods 9(4):579–586. https://doi.org/10.1002/jrsm.1280

Rosenstock TS, Lamanna C, Chesterman S, Bell P, Arslan A, Richards M, Zhou W (2016) The scientific basis of climate-smart agriculture: a systematic review protocol. CGIAR, Copenhagen

Ross SA (1973) The economic theory of agency: the principal’s problem. Am Econ Rev 63(2):134–139

Rosumeck S, Wagner M, Wallraf S, Euler U (2020) A validation study revealed differences in design and performance of search filters for qualitative research in PsycINFO and CINAHL. J Clin Epidemiol 128:101–108. https://doi.org/10.1016/j.jclinepi.2020.09.031

Rowley J, Slack F (2004) Conducting a literature review. Manag Res News 27(6):31–39. https://doi.org/10.1108/01409170410784185

Rudestam K, Newton R (1992) Surviving your dissertation: a comprehensive guide to content and process. Sage, London

Salgado EG, Dekkers R (2018) Lean product development: nothing new under the sun? Int J Manag Rev 20(4):903–933. https://doi.org/10.1111/ijmr.12169

Savoie I, Helmer D, Green CJ, Kazanjian A (2003) BEYOND MEDLINE: reducing bias through extended systematic review search. Int J Technol Assess Health Care 19(1):168–178. https://doi.org/10.1017/S0266462303000163

Schlosser RW, Wendt O, Bhavnani S et al (2006) Use of information-seeking strategies for developing systematic reviews and engaging in evidence-based practice: the application of traditional and comprehensive Pearl growing. A review. Int J Language Commun Disorders 41(5):567–582. https://doi.org/10.1080/13682820600742190

Schryen G (2015) Writing qualitative IS literature reviews—guidelines for synthesis, interpretation, and guidance of research. Commun Assoc Inf Syst 34:286–325. https://doi.org/10.17705/1CAIS.03712

Shishank S, Dekkers R (2013) Outsourcing: decision-making methods and criteria during design and engineering. Prod Plan Control 24(4–5):318–336. https://doi.org/10.1080/09537287.2011.648544

Silagy CA, Middleton P, Hopewell S (2002) Publishing protocols of systematic reviews comparing what was done to what was planned. JAMA 287(21):2831–2834. https://doi.org/10.1001/jama.287.21.2831

Soldani J, Tamburri DA, Van Den Heuvel W-J (2018) The pains and gains of microservices: a systematic grey literature review. J Syst Softw 146:215–232. https://doi.org/10.1016/j.jss.2018.09.082

Stevinson C, Lawlor DA (2004) Searching multiple databases for systematic reviews: added value or diminishing returns? Complement Ther Med 12(4):228–232. https://doi.org/10.1016/j.ctim.2004.09.003

Swift JK, Wampold BE (2018) Inclusion and exclusion strategies for conducting meta-analyses. Psychother Res 28(3):356–366. https://doi.org/10.1080/10503307.2017.1405169

Swift JK, Callahan JL, Cooper M, Parkin SR (2018) The impact of accommodating client preference in psychotherapy: a meta-analysis. J Clin Psychol 74(11):1924–1937. https://doi.org/10.1002/jclp.22680

Tanon AA, Champagne F, Contandriopoulos A-P, Pomey M-P, Vadeboncoeur A, Nguyen H (2010) Patient safety and systematic reviews: finding papers indexed in MEDLINE, EMBASE and CINAHL. Qual Saf Health Care 19(5):452–461. https://doi.org/10.1136/qshc.2008.031401

Tillett S, Newbold E (2006) Grey literature at the British library: revealing a hidden resource. Interlend Document Supply 34(2):70–73. https://doi.org/10.1108/02641610610669769

Trott P, Hartmann D (2009) Why ‘open innovation’ is old wine in new bottles. Int J Innov Manag 13(4):715–736. https://doi.org/10.1142/S1363919609002509

vom Brocke J, Simons A, Riemer K, Niehaves B, Plattfaut R, Cleven A (2015) Standing on the shoulders of giants: challenges and recommendations of literature search in information systems research. Commun Assoc Inf Syst 37:205–224. https://doi.org/10.17705/1CAIS.03709

Webster J, Watson RT (2002) Analyzing the past to prepare for the future: writing a literature review. MIS Q 26(2):xiii–xxiii

Wellington JJ, Bathmaker A, Hunt C, McCulloch G, Sikes P (2005) Succeeding with your doctorate. Sage, Thousand Oaks

Wilczynski NL, Haynes RB (2007) EMBASE search strategies achieved high sensitivity and specificity for retrieving methodologically sound systematic reviews. J Clin Epidemiol 60(1):29–33. https://doi.org/10.1016/j.jclinepi.2006.04.001

Wilczynski NL, Marks S, Haynes RB (2007) Search strategies for identifying qualitative studies in CINAHL. Qual Health Res 17(5):705–710. https://doi.org/10.1177/1049732306294515

Wohlin C, Prikladnicki R (2013) Systematic literature reviews in software engineering. Inf Softw Technol 55(6):919–920. https://doi.org/10.1016/j.infsof.2013.02.002

Wong SS-L, Wilczynski NL, Haynes RB (2006) Optimal CINAHL search strategies for identifying therapy studies and review articles. J Nurs Scholarsh 38(2):194–199. https://doi.org/10.1111/j.1547-5069.2006.00100.x

Zhang L, Ajiferuke I, Sampson M (2006) Optimizing search strategies to identify randomized controlled trials in MEDLINE. BMC Med Res Methodol 6(1):23. https://doi.org/10.1186/1471-2288-6-23


Author information

Authors and Affiliations

University of Glasgow, Glasgow, UK

Rob Dekkers

Glasgow Caledonian University, Glasgow, UK

Lindsey Carey

Prof. Peter Langhorne



Copyright information

© 2022 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Dekkers, R., Carey, L., Langhorne, P. (2022). Search Strategies for [Systematic] Literature Reviews. In: Making Literature Reviews Work: A Multidisciplinary Guide to Systematic Approaches. Springer, Cham. https://doi.org/10.1007/978-3-030-90025-0_5


DOI: https://doi.org/10.1007/978-3-030-90025-0_5

Published: 11 August 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-90024-3

Online ISBN: 978-3-030-90025-0

eBook Packages: Education (R0)


Open access | Published: 06 December 2017

Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study

  • Wichor M. Bramer 1 ,
  • Melissa L. Rethlefsen 2 ,
  • Jos Kleijnen 3 , 4 &
  • Oscar H. Franco 5  

Systematic Reviews volume 6, Article number: 245 (2017)


Background

Within systematic reviews, when searching for relevant references, it is advisable to use multiple databases. However, searching databases is laborious and time-consuming, as the syntax of search strategies is database-specific. We aimed to determine the optimal combination of databases needed to conduct efficient searches in systematic reviews and whether current practice in published reviews is appropriate. While previous studies have determined the coverage of databases, we analyzed the actual retrieval from the original searches for systematic reviews.

Methods

Since May 2013, the first author prospectively recorded results from systematic review searches that he performed at his institution. PubMed was searched to identify published systematic reviews that had used our search strategy results. For each published systematic review, we extracted the references of the included studies. Using the prospectively recorded results and the studies included in the publications, we calculated recall, precision, and number needed to read for single databases and for databases in combination. We assessed the frequency with which databases and combinations achieved varying levels of recall (e.g., 95%). For a sample of 200 recently published systematic reviews, we calculated how many had used enough databases to ensure 95% recall.

Results

A total of 58 published systematic reviews were included, totaling 1746 relevant references identified by our database searches; a further 84 included references had been retrieved by other search methods. Sixteen percent of the included references (291 articles) were found in only a single database; Embase produced the most unique references (n = 132). The combination of Embase, MEDLINE, Web of Science Core Collection, and Google Scholar performed best, achieving an overall recall of 98.3% and 100% recall in 72% of systematic reviews. We estimate that 60% of published systematic reviews do not retrieve 95% of all available relevant references, as many fail to search important databases. Other specialized databases, such as CINAHL or PsycINFO, add unique references to some reviews where the topic of the review is related to the focus of the database.

Conclusions

Optimal searches in systematic reviews should search at least Embase, MEDLINE, Web of Science, and Google Scholar as a minimum requirement to guarantee adequate and efficient coverage.


Investigators and information specialists searching for relevant references for a systematic review (SR) are generally advised to search multiple databases and to use additional methods to be able to adequately identify all literature related to the topic of interest [1, 2, 3, 4, 5, 6]. The Cochrane Handbook, for example, recommends the use of at least MEDLINE and Cochrane Central and, when available, Embase for identifying reports of randomized controlled trials [7]. There are disadvantages to using multiple databases. It is laborious for searchers to translate a search strategy into multiple interfaces and search syntaxes, as field codes and proximity operators differ between interfaces. Differences in thesaurus terms between databases add another significant burden for translation. Furthermore, it is time-consuming for reviewers, who have to screen more, and likely irrelevant, titles and abstracts. Lastly, access to databases is often limited and only available on a subscription basis.

Previous studies have investigated the added value of different databases on different topics [8, 9, 10, 11, 12, 13, 14, 15]. Some concluded that searching only one database can be sufficient, as searching other databases has no effect on the outcome [16, 17]. Nevertheless, others have concluded that a single database is not sufficient to retrieve all references for systematic reviews [18, 19]. Most articles on this topic draw their conclusions based on the coverage of databases [14]. A recent paper tried to find an acceptable number needed to read for adding an additional database, but no firm conclusion could be drawn [20]. However, the fact that an article is present in a database does not mean it will be found by a search in that database. Because of this major limitation, the question of which databases are necessary to retrieve all relevant references for a systematic review remains unanswered. Therefore, we studied the probability that single databases, or various combinations of them, retrieve the most relevant references for a systematic review, by examining actual retrieval in various databases.

The aim of our research is to determine the combination of databases needed for systematic review searches to provide efficient results (i.e., to minimize the burden for the investigators without reducing the validity of the research by missing relevant references). A secondary aim is to investigate the current practice of databases searched for published reviews. Are included references being missed because the review authors failed to search a certain database?

Development of search strategies

At Erasmus MC, search strategies for systematic reviews are often designed via a librarian-mediated search service. The information specialists of Erasmus MC developed an efficient method that helps them perform searches in many databases in a much shorter time than other methods. This method of literature searching, and a pragmatic evaluation thereof, are published in separate journal articles [21, 22]. In short, the method consists of an efficient way to combine thesaurus terms and title/abstract terms into a single-line search strategy. This search is then optimized: articles that are indexed with a set of identified thesaurus terms, but do not contain the current search terms in title or abstract, are screened to discover potential new terms. New candidate terms are added to the basic search and evaluated. Once optimal recall is achieved, macros are used to translate the search syntaxes between databases, though manual adaptation of the thesaurus terms is still necessary.
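As a rough illustration, the optimization step just described can be sketched in a few lines of Python. This is a simplification under assumed record structures (a `mesh` field and a `title_abstract` field), not the authors' actual tooling: records retrieved only via the thesaurus part of the search are mined for frequent words that the current title/abstract terms miss.

```python
# Sketch of the search-optimization loop: find frequent words in records that
# match the thesaurus terms but contain none of the current title/abstract
# terms; such words are candidates for adding to the search strategy.
# Record layout and the crude word tokenizer are illustrative assumptions.
from collections import Counter
import re

def candidate_terms(records, thesaurus_terms, tiab_terms, top_n=10):
    counts = Counter()
    for rec in records:
        if not thesaurus_terms & set(rec["mesh"]):
            continue  # not retrieved via the thesaurus part of the search
        text = rec["title_abstract"].lower()
        if any(t in text for t in tiab_terms):
            continue  # already retrievable via title/abstract terms
        counts.update(re.findall(r"[a-z]{4,}", text))
    return [word for word, _ in counts.most_common(top_n)]

records = [
    {"mesh": ["Hypertension"], "title_abstract": "Elevated blood pressure in adults"},
    {"mesh": ["Hypertension"], "title_abstract": "Hypertension management trial"},
]
print(candidate_terms(records, {"Hypertension"}, ["hypertension"]))
# ['elevated', 'blood', 'pressure', 'adults']
```

In this toy run, "elevated blood pressure" surfaces as candidate wording that the free-text term "hypertension" alone would miss.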

Review projects at Erasmus MC cover a wide range of medical topics, from therapeutic effectiveness and diagnostic accuracy to ethics and public health. In general, searches are developed in MEDLINE in Ovid (Ovid MEDLINE® In-Process & Other Non-Indexed Citations, Ovid MEDLINE® Daily and Ovid MEDLINE®, from 1946); Embase.com (searching both Embase and MEDLINE records, with full coverage including Embase Classic); the Cochrane Central Register of Controlled Trials (CENTRAL) via the Wiley interface; Web of Science Core Collection (hereafter called Web of Science); PubMed, restricted to records in the subset "as supplied by publisher" to find references that are not yet indexed in MEDLINE (using the syntax publisher [sb]); and Google Scholar. In general, we use the first 200 references as sorted in the relevance ranking of Google Scholar. When the number of references from other databases was low, we expected the total number of potentially relevant references to be low, and in this case the number of hits taken from Google Scholar was limited to 100. When the overall number of hits was low, we additionally searched Scopus, and, when appropriate for the topic, we included CINAHL (EBSCOhost), PsycINFO (Ovid), and SportDiscus (EBSCOhost) in our search.
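For readers unfamiliar with the publisher subset mentioned above, it can be tried directly against NCBI's public E-utilities API. The sketch below is a hedged example: the topic terms are invented for illustration, and only the `publisher [sb]` filter comes from the text.

```python
# Query PubMed's publisher-supplied subset via NCBI E-utilities (esearch).
# The topical query is hypothetical; `publisher [sb]` restricts results to
# publisher-supplied records not yet indexed in MEDLINE.
import json
import urllib.parse
import urllib.request

term = "(exercise[tiab] AND depression[tiab]) AND publisher [sb]"
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urllib.parse.urlencode({"db": "pubmed", "term": term,
                                 "retmode": "json", "retmax": "20"}))
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

print(result["count"])       # number of matching publisher-supplied records
print(result["idlist"][:5])  # first few PubMed IDs
```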

Beginning in May 2013, the number of records retrieved from each search for each database was recorded at the moment of searching. The complete results from all databases used for each of the systematic reviews were imported into a unique EndNote library upon search completion and saved without deduplication for this research. The researchers that requested the search received a deduplicated EndNote file from which they selected the references relevant for inclusion in their systematic review. All searches in this study were developed and executed by W.M.B.

Determining relevant references of published reviews

We searched PubMed in July 2016 for all reviews published since 2014 whose first authors were affiliated with Erasmus MC, Rotterdam, the Netherlands, and matched those with search registrations performed by the medical library of Erasmus MC. This search was used in earlier research [21]. Published reviews were included if the search strategies and results had been documented at the time of the last update and if, at minimum, the databases Embase, MEDLINE, Cochrane CENTRAL, Web of Science, and Google Scholar had been used in the review. From the published journal article, we extracted the list of final included references. We documented the department of the first author. To categorize the types of patient/population and intervention, we identified broad MeSH terms relating to the most important disease and intervention discussed in the article. We copied from the MeSH tree the top MeSH term directly below the disease category or, in the case of the intervention, directly below the therapeutics MeSH term. We selected the domain from a pre-defined set of broad domains, including therapy, etiology, epidemiology, diagnosis, management, and prognosis. Lastly, we checked whether the reviews described limiting their included references to a particular study design.

To identify whether our searches had found the included references, and if so, from which database(s) that citation was retrieved, each included reference was located in the original corresponding EndNote library using the first author name combined with the publication year as a search term for each specific relevant publication. If this resulted in extraneous results, the search was subsequently limited using a distinct part of the title or a second author name. Based on the record numbers of the search results in EndNote, we determined from which database these references came. If an included reference was not found in the EndNote file, we presumed the authors used an alternative method of identifying the reference (e.g., examining cited references, contacting prominent authors, or searching gray literature), and we did not include it in our analysis.
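A simplified sketch of that matching step follows. The authors worked in EndNote rather than code, so the field names and record layout here are assumptions made for illustration, not EndNote's actual export schema.

```python
# Match each included reference to the saved (non-deduplicated) database
# records by first-author surname plus publication year, narrowing by a title
# fragment when that pair alone is ambiguous. Field names are assumptions.

def find_source_databases(included_ref, records):
    """Return the set of databases whose search retrieved this reference."""
    hits = [r for r in records
            if r["first_author"] == included_ref["first_author"]
            and r["year"] == included_ref["year"]]
    if len({r["title"] for r in hits}) > 1:  # ambiguous: refine by title
        fragment = included_ref["title"][:30].lower()
        hits = [r for r in hits if fragment in r["title"].lower()]
    return {r["database"] for r in hits}

records = [
    {"first_author": "Smith", "year": 2015, "title": "A trial of X", "database": "Embase"},
    {"first_author": "Smith", "year": 2015, "title": "A trial of X", "database": "MEDLINE"},
]
ref = {"first_author": "Smith", "year": 2015, "title": "A trial of X"}
print(find_source_databases(ref, records))  # {'Embase', 'MEDLINE'} (order may vary)
```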

Data analysis

We determined the databases that contributed most to the reviews by counting the number of unique references retrieved by each database used in the reviews. Unique references were included articles that had been found by only one database search. The databases that contributed the most unique included references were then considered candidate databases for determining the optimal combination of databases in the further analyses.

In Excel, we calculated the performance of each individual database and of various combinations. Performance was measured using recall, precision, and number needed to read; see Table 1 for definitions of these measures. These values were calculated both for all reviews combined and per individual review.

Performance of a search can be expressed in different ways. Depending on the goal of the search, different measures may be optimized. In the case of a clinical question, precision is most important, as a practicing clinician does not have a lot of time to read through many articles in a clinical setting. When searching for a systematic review, recall is the most important aspect, as the researcher does not want to miss any relevant references. As our research is performed on systematic reviews, the main performance measure is recall.
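Table 1 is not reproduced here, but the three measures follow from their standard definitions. A minimal sketch, with invented example numbers (the paper's exact operational details may differ):

```python
# Standard definitions of the three performance measures:
# recall    = relevant references retrieved / all relevant references
# precision = relevant references retrieved / all references retrieved
# NNR       = 1 / precision (records to read per relevant reference found)

def recall(relevant_retrieved: int, relevant_total: int) -> float:
    return relevant_retrieved / relevant_total

def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    return relevant_retrieved / total_retrieved

def number_needed_to_read(relevant_retrieved: int, total_retrieved: int) -> float:
    return total_retrieved / relevant_retrieved  # equivalently 1 / precision

# Illustrative numbers only: a search retrieving 7300 records, 100 of which
# are included in the review, out of 110 included references overall.
print(round(recall(100, 110), 3))                  # 0.909
print(round(number_needed_to_read(100, 7300), 1))  # 73.0
```

Read this way, the NNR of 73 reported below for the four-database combination means roughly 73 titles/abstracts screened per included reference found.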

We identified all included references that were uniquely identified by a single database. For the databases that retrieved the most unique included references, we calculated the number of references retrieved (after deduplication) and the number of included references that had been retrieved by all possible combinations of these databases, in total and per review. For all individual reviews, we determined the median recall, the minimum recall, and the percentage of reviews for which each single database or combination retrieved 100% recall.

For each review that we investigated, we determined what the recall was for all possible different database combinations of the most important databases. Based on these, we determined the percentage of reviews where that database combination had achieved 100% recall, more than 95%, more than 90%, and more than 80%. Based on the number of results per database both before and after deduplication as recorded at the time of searching, we calculated the ratio between the total number of results and the number of results for each database and combination.

Improvement of precision was calculated as the ratio between the original precision from the searches in all databases and the precision for each database and combination.

To compare our practice of database usage in systematic reviews against current practice as evidenced in the literature, we analyzed a set of 200 recent systematic reviews from PubMed. On 5 January 2017, we searched PubMed for articles with the phrase "systematic review" in the title. Starting with the most recent articles, we determined the databases searched, either from the abstract or from the full text, until we had data for 200 reviews. For the individual databases and combinations used in those reviews, we multiplied their frequency of occurrence in that set of 200 by the probability, measured in our own data, that the database or combination would lead to an acceptable recall (which we defined as 95%).
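The estimate works like a frequency-weighted average. A sketch with invented frequencies follows; only the 48% probability for Embase plus MEDLINE, reported later in the Results, is taken from the text.

```python
# Expected share of published reviews reaching >=95% recall: a weighted sum,
# over observed database combinations, of the probability that the combination
# achieves >=95% recall. All frequencies below are invented for illustration.

freq = {                 # how often each combination appeared in the 200 reviews
    ("MEDLINE",): 60,
    ("Embase", "MEDLINE"): 90,
    ("Embase", "MEDLINE", "WoS", "GS"): 50,
}
p_recall_95 = {          # probability the combination reaches >=95% recall
    ("MEDLINE",): 0.10,                          # invented
    ("Embase", "MEDLINE"): 0.48,                 # value reported in the Results
    ("Embase", "MEDLINE", "WoS", "GS"): 0.90,    # invented
}

expected = sum(n * p_recall_95[c] for c, n in freq.items()) / sum(freq.values())
print(f"{expected:.0%} of reviews expected to reach 95% recall")  # 47%
```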

Results

Our earlier research had resulted in 206 systematic reviews published between 2014 and July 2016 in which the first author was affiliated with Erasmus MC [21]. In 73 of these, the searches and results had been documented by the first author of this article at the time of the last search. Of those, 15 could not be included in this research, since they had not searched all the databases investigated here. Therefore, a total of 58 systematic reviews were analyzed. The references to these reviews can be found in Additional file 1. An overview of the broad topical categories covered in these reviews is given in Table 2. Many of the reviews were initiated by members of the departments of surgery and epidemiology. The reviews covered a wide variety of diseases, none of which was present in more than 12% of the reviews. The interventions were mostly from the chemicals and drugs category or were surgical procedures. Over a third of the reviews were therapeutic, while slightly under a quarter answered an etiological question. Most reviews did not limit to certain study designs; 9% limited to RCTs only, and another 9% limited to other study types.

Together, these reviews included a total of 1830 references. Of these, 84 references (4.6%) had not been retrieved by our database searches and were not included in our analysis, leaving in total 1746 references. In our analyses, we combined the results from MEDLINE in Ovid and PubMed (the subset as supplied by publisher) into one database labeled MEDLINE.

Unique references per database

A total of 292 (17%) references were found by only one database. Table  3 displays the number of unique results retrieved for each single database. Embase retrieved the most unique included references, followed by MEDLINE, Web of Science, and Google Scholar. Cochrane CENTRAL is absent from the table, as for the five reviews limited to randomized trials, it did not add any unique included references. Subject-specific databases such as CINAHL, PsycINFO, and SportDiscus only retrieved additional included references when the topic of the review was directly related to their special content, respectively nursing, psychiatry, and sports medicine.

Overall performance

The four databases that had retrieved the most unique references (Embase, MEDLINE, Web of Science, and Google Scholar) were investigated individually and in all possible combinations (see Table  4 ). Of the individual databases, Embase had the highest overall recall (85.9%). Of the combinations of two databases, Embase and MEDLINE had the best results (92.8%). Embase and MEDLINE combined with either Google Scholar or Web of Science scored similarly well on overall recall (95.9%). However, the combination with Google Scholar had a higher precision and higher median recall, a higher minimum recall, and a higher proportion of reviews that retrieved all included references. Using both Web of Science and Google Scholar in addition to MEDLINE and Embase increased the overall recall to 98.3%. The higher recall from adding extra databases came at a cost in number needed to read (NNR). Searching only Embase produced an NNR of 57 on average, whereas, for the optimal combination of four databases, the NNR was 73.

Probability of appropriate recall

We calculated the recall for individual databases and for databases in all possible combinations for all reviews included in the research. Figure 1 shows the percentage of reviews in which a certain database combination led to a certain recall. For example, in 48% of all systematic reviews, the combination of Embase and MEDLINE (with or without Cochrane CENTRAL, which did not add unique relevant references) reached a recall of at least 95%. In 72% of the studied systematic reviews, the combination of Embase, MEDLINE, Web of Science, and Google Scholar retrieved all included references. In the top bar, we present the results of the complete database searches relative to the total number of included references. This shows that many database searches missed relevant references.

Percentage of systematic reviews for which a certain database combination reached a certain recall. The x-axis represents the percentage of reviews for which a specific combination of databases, shown on the y-axis, reached a certain recall (represented by bar colors). Abbreviations: EM Embase, ML MEDLINE, WoS Web of Science, GS Google Scholar. Asterisk indicates that the recall of all databases has been calculated over all included references. The recall of the database combinations was calculated over all included references retrieved by any database

Differences between domains of reviews

We analyzed whether the added value of Web of Science and Google Scholar was dependent on the domain of the review. For 55 reviews, we determined the domain. See Fig. 2 for the comparison of the recall of Embase, MEDLINE, and Cochrane CENTRAL per review for all identified domains. For all but one domain, the traditional combination of Embase, MEDLINE, and Cochrane CENTRAL did not retrieve enough included references. For four of the five systematic reviews limited to randomized controlled trials (RCTs), the traditional combination retrieved 100% of all included references. However, for the remaining review in this domain, the recall was 82%. Of the 11 references included in that review, one was found only in Google Scholar and one only in Web of Science.

Percentage of systematic reviews of a certain domain for which the combination of Embase, MEDLINE, and Cochrane CENTRAL reached a certain recall

Reduction in number of results

We calculated the ratio between the number of results found when searching all databases, including databases not included in our analyses, such as Scopus, PsycINFO, and CINAHL, and the number of results found searching a selection of databases. See Fig.  3 for the legend of the plots in Figs.  4 and 5 . Figure  4 shows the distribution of this value for individual reviews. The database combinations with the highest recall did not reduce the total number of results by large margins. Moreover, in combinations where the number of results was greatly reduced, the recall of included references was lower.

Legend of Figs. 4 and 5

The ratio between number of results per database combination and the total number of results for all databases

The ratio between precision per database combination and the total precision for all databases

Improvement of precision

To determine how searching multiple databases affected precision, we calculated for each combination the ratio between the original precision, observed when all databases were searched, and the precision calculated for different database combinations. Figure  5 shows the improvement of precision for 15 databases and database combinations. Because precision is defined as the number of relevant references divided by the number of total results, we see a strong correlation with the total number of results.

Status of current practice of database selection

From a set of 200 recent systematic reviews identified via PubMed, we analyzed the databases that had been searched. Almost all reviews (97%) reported a search in MEDLINE. Other databases that we identified as essential for good recall were searched much less frequently: Embase was searched in 61%, Web of Science in 35%, and Google Scholar in only 10% of the reviews. For all individual databases and combinations of the four important databases from our research (MEDLINE, Embase, Web of Science, and Google Scholar), we multiplied the frequency of occurrence of that combination in the random set by the probability, found in our research, that the combination would lead to an acceptable recall of 95%. The calculation is shown in Table 5. For example, 37% of the reviews relied on the combination of MEDLINE and Embase. Based on our findings, this combination achieves acceptable recall about half the time (47%). This implies that 17% of the reviews in the PubMed sample would have achieved an acceptable recall of 95%. The sum of all these values is the total probability of acceptable recall in the random sample. Based on these calculations, we estimate that the probability that this random set of reviews retrieved more than 95% of all possible included references was 40%. Using similar calculations, also shown in Table 5, we estimated that the probability that 100% of relevant references were retrieved is 23%.
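For clarity, the estimate just described is a weighted sum over database combinations. A minimal sketch of the arithmetic, using only the two figures quoted above (the full calculation in Table 5 includes every combination):

    # Each combination contributes (share of PubMed sample) x P(recall >= 95%).
    combos = {
        "MEDLINE + Embase": (0.37, 0.47),   # figures quoted in the text
        # ... the remaining combinations from Table 5 would be listed here ...
    }
    total = sum(share * p for share, p in combos.values())
    print(f"{total:.0%}")   # MEDLINE + Embase alone contributes ~17%;
                            # summed over all combinations, the estimate is 40%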

Our study shows that, to reach maximum recall, searches in systematic reviews ought to include a combination of databases. To ensure adequate performance in searches (i.e., recall, precision, and number needed to read), we find that literature searches for a systematic review should, at minimum, be performed in the combination of the following four databases: Embase, MEDLINE (including Epub ahead of print), Web of Science Core Collection, and Google Scholar. Using that combination, 93% of the systematic reviews in our study obtained levels of recall that could be considered acceptable (> 95%). Unique results from specialized databases that closely match systematic review topics, such as PsycINFO for reviews in the fields of behavioral sciences and mental health or CINAHL for reviews on the topics of nursing or allied health, indicate that specialized databases should be used additionally when appropriate.

We find that Embase is critical for acceptable recall in a review and should always be searched for medically oriented systematic reviews. However, Embase is only accessible via a paid subscription, which generally makes it challenging for review teams not affiliated with academic medical centers to access. The highest scoring database combination without Embase is a combination of MEDLINE, Web of Science, and Google Scholar, but that reaches satisfactory recall for only 39% of all investigated systematic reviews, while still requiring a paid subscription to Web of Science. Of the five reviews that included only RCTs, four reached 100% recall if MEDLINE, Web of Science, and Google Scholar combined were complemented with Cochrane CENTRAL.

The Cochrane Handbook recommends searching MEDLINE, Cochrane CENTRAL, and Embase for systematic reviews of RCTs. For the reviews in our study that included RCTs only, this recommendation was indeed sufficient for four (80%) of the reviews. The one review where it was insufficient was about alternative medicine, specifically meditation and relaxation therapy; one of the missed studies was published in the Indian Journal of Positive Psychology. The other missed study, from the Journal of Advanced Nursing, is indexed in MEDLINE and Embase but was only retrieved because of the addition of KeyWords Plus in Web of Science. We estimate that more than 50% of reviews that include more study types than RCTs would miss more than 5% of included references if only the traditional combination of MEDLINE, Embase, and Cochrane CENTRAL were searched.

We are aware that the Cochrane Handbook [ 7 ] recommends more than only these databases, but further recommendations focus on regional and specialized databases. Though we occasionally used the regional databases LILACS and SciELO in our reviews, they did not provide unique references in our study. Subject-specific databases like PsycINFO only added unique references to a small percentage of systematic reviews when they had been used for the search. The third key database we identified in this research, Web of Science, is only mentioned as a citation index in the Cochrane Handbook, not as a bibliographic database. To our surprise, Cochrane CENTRAL did not identify any unique included studies that had not been retrieved by the other databases, not even for the five reviews focusing entirely on RCTs. If Erasmus MC authors had conducted more reviews that included only RCTs, Cochrane CENTRAL might have added more unique references.

MEDLINE did find unique references that had not been found in Embase, although our searches in Embase included all MEDLINE records. This is likely caused by differences in the thesaurus terms that were added, but further analysis would be required to determine the reasons for not finding the MEDLINE records in Embase. Although Embase covers MEDLINE, it apparently does not index every article from MEDLINE. Thirty-seven references were found in MEDLINE (Ovid) but were not available in Embase.com. These are mostly unique PubMed references, which are not assigned MeSH terms, and are often freely available via PubMed Central.

Google Scholar adds relevant articles not found in the other databases, possibly because it indexes the full text of all articles. It therefore finds articles in which the topic of research is not mentioned in title, abstract, or thesaurus terms, but where the concepts are only discussed in the full text. Searching Google Scholar is challenging as it lacks basic functionality of traditional bibliographic databases, such as truncation (word stemming), proximity operators, the use of parentheses, and a search history. Additionally, search strategies are limited to a maximum of 256 characters, which means that creating a thorough search strategy can be laborious.

Whether Embase and Web of Science can be replaced by Scopus remains uncertain. We have not yet gathered enough data to make a full comparison between Embase and Scopus. Scopus was searched in 23 reviews included in this research. In 12 of these (52%), Scopus retrieved 100% of all included references retrieved by Embase or Web of Science. In the other 48%, the recall of Scopus was suboptimal, on one occasion as low as 38%.

Of the reviews in which we searched CINAHL and PsycINFO, unique references were found in 6% and 9%, respectively; in one case each, those unique references were relevant. In both of those reviews, the topic was closely related to the special content of the database. Although we did not use these special topic databases in all of our reviews, given the low number of reviews in which they added relevant references, and given the special topics of those reviews, we suggest that subject databases add value only when the topic of the review is related to the focus of the database.

Many articles written on this topic have calculated the overall recall of several reviews together, instead of the effects on individual reviews. Researchers planning a systematic review generally perform one review, and they need to estimate the probability that they may miss relevant articles in their search. When looking at overall recall, the combination of Embase and MEDLINE with either Google Scholar or Web of Science could be regarded as sufficient, at 96% recall. This number, however, does not answer the question facing a researcher performing a systematic review: which databases should be searched, and what is the chance that the current project will miss a relevant reference? When looking at individual reviews, the probability of missing more than 5% of included references found through database searching is 33% when Google Scholar is used together with Embase and MEDLINE, and 30% for the Web of Science, Embase, and MEDLINE combination. What counts as acceptable recall for systematic review searches is open to debate and can differ between individuals and groups. Some reviewers might accept a potential loss of 5% of relevant references; others would pursue 100% recall, no matter the cost. Using the results of this research, review teams can decide which databases to include in their searches, based on their idea of acceptable recall and the desired probability of reaching it.

Strengths and limitations

We did not investigate whether the loss of certain references had resulted in changes to the conclusion of the reviews. Of course, the loss of a minor non-randomized included study that follows the systematic review’s conclusions would not be as problematic as losing a major included randomized controlled trial with contradictory results. However, the wide range of scope, topic, and criteria between systematic reviews and their related review types make it very hard to answer this question.

We found that two databases previously not recommended as essential for systematic review searching, Web of Science and Google Scholar, were key to improving recall in the reviews we investigated. Because this is a novel finding, we cannot conclude whether it is due to our dataset or to a generalizable principle. It is likely that topical differences in systematic reviews may impact whether databases such as Web of Science and Google Scholar add value to the review. One explanation for our finding may be that if the research question is very specific, the topic of research might not always be mentioned in the title and/or abstract. In that case, Google Scholar might add value by searching the full text of articles. If the research question is more interdisciplinary, a broader science database such as Web of Science is likely to add value. The topics of the reviews studied here may simply have fallen into those categories, though the diversity of the included reviews may point to a more universal applicability.

Although we searched PubMed as supplied by publisher separately from MEDLINE in Ovid, we combined the included references of these databases into one measurement in our analysis. Until 2016, the most complete MEDLINE selection in Ovid still lacked the electronic publications that were already available in PubMed. These could be retrieved by searching PubMed with the subset as supplied by publisher. Since the introduction of the more complete Ovid MEDLINE collection (Epub Ahead of Print, In-Process & Other Non-Indexed Citations, and Ovid MEDLINE®), the need to separately search PubMed as supplied by publisher has disappeared. According to our data, PubMed’s “as supplied by publisher” subset retrieved 12 unique included references, and it was the most important addition in terms of relevant references to the four major databases. It is therefore important to search MEDLINE including the “Epub Ahead of Print, In-Process, and Other Non-Indexed Citations” references.

These results may not be generalizable to other studies for other reasons. The skills and experience of the searcher are among the most important aspects of the effectiveness of systematic review search strategies [23, 24, 25]. The searcher for all 58 systematic reviews was an experienced biomedical information specialist. Though we suspect that searchers who are not information specialists or librarians would be more likely to construct searches with lower recall, even highly trained searchers differ in their approaches to searching. For this study, we searched to achieve as high a recall as possible, though our search strategies, like any other search strategy, still missed some relevant references because relevant terms had not been used in the search. We are not implying that a combined search of the four recommended databases will never miss relevant references, rather that failure to search any one of these four databases will likely lead to relevant references being missed. Our experience in this study shows that additional efforts, such as hand searching, reference checking, and contacting key players, should be made to retrieve additional potentially eligible references.

Based on our calculations made by looking at random systematic reviews in PubMed, we estimate that 60% of these reviews are likely to have missed more than 5% of relevant references only because of the combinations of databases that were used. That is with the generous assumption that the searches in those databases had been designed sensitively enough. Even when taking into account that many searchers consider the use of Scopus as a replacement of Embase, plus taking into account the large overlap of Scopus and Web of Science, this estimate remains similar. Also, while the Scopus and Web of Science assumptions we made might be true for coverage, they are likely very different when looking at recall, as Scopus does not allow the use of the full features of a thesaurus. We see that reviewers rarely use Web of Science and especially Google Scholar in their searches, though they retrieve a great deal of unique references in our reviews. Systematic review searchers should consider using these databases if they are available to them, and if their institution lacks availability, they should ask other institutes to cooperate on their systematic review searches.

The major strength of our paper is that it is the first large-scale study we know of to assess database performance for systematic reviews using prospectively collected data. Prior research on database importance for systematic reviews has looked primarily at whether included references could have theoretically been found in a certain database, but most have been unable to ascertain whether the researchers actually found the articles in those databases [ 10 , 12 , 16 , 17 , 26 ]. Whether a reference is available in a database is important, but whether the article can be found in a precise search with reasonable recall is not only impacted by the database’s coverage. Our experience has shown us that it is also impacted by the ability of the searcher, the accuracy of indexing of the database, and the complexity of terminology in a particular field. Because these studies based on retrospective analysis of database coverage do not account for the searchers’ abilities, the actual findings from the searches performed, and the indexing for particular articles, their conclusions lack immediate translatability into practice. This research goes beyond retrospectively assessed coverage to investigate real search performance in databases. Many of the articles reporting on previous research concluded that one database was able to retrieve most included references. Halladay et al. [ 10 ] and van Enst et al. [ 16 ] concluded that databases other than MEDLINE/PubMed did not change the outcomes of the review, while Rice et al. [ 17 ] found the added value of other databases only for newer, non-indexed references. In addition, Michaleff et al. [ 26 ] found that Cochrane CENTRAL included 95% of all RCTs included in the reviews investigated. Our conclusion that Web of Science and Google Scholar are needed for completeness has not been shared by previous research. Most of the previous studies did not include these two databases in their research.

We recommend that, regardless of their topic, searches for biomedical systematic reviews should at minimum combine Embase, MEDLINE (including electronic publications ahead of print), Web of Science (Core Collection), and Google Scholar (the first 200 relevant references). Special topic databases such as CINAHL and PsycINFO should be added if the topic of the review directly touches the primary focus of a specialized subject database, such as CINAHL for nursing and allied health or PsycINFO for behavioral sciences and mental health. For reviews where RCTs are the desired study design, Cochrane CENTRAL may be similarly useful. Ignoring one or more of the four key databases we identified will result in more precise searches with a lower number of results, but researchers should decide whether that is worth the increased probability of losing relevant references. This study also highlights once more that searching databases alone is, nevertheless, not enough to retrieve all relevant references.

Future research should continue to investigate recall of actual searches beyond coverage of databases and should consider focusing on the most optimal database combinations, not on single databases.

References

1. Levay P, Raynor M, Tuvey D. The contributions of MEDLINE, other bibliographic databases and various search techniques to NICE public health guidance. Evid Based Libr Inf Pract. 2015;10:50–68.

2. Stevinson C, Lawlor DA. Searching multiple databases for systematic reviews: added value or diminishing returns? Complement Ther Med. 2004;12:228–32.

3. Lawrence DW. What is lost when searching only one literature database for articles relevant to injury prevention and safety promotion? Inj Prev. 2008;14:401–4.

4. Lemeshow AR, Blum RE, Berlin JA, Stoto MA, Colditz GA. Searching one or two databases was insufficient for meta-analysis of observational studies. J Clin Epidemiol. 2005;58:867–73.

5. Zheng MH, Zhang X, Ye Q, Chen YP. Searching additional databases except PubMed are necessary for a systematic review. Stroke. 2008;39:e139; author reply e140.

6. Beyer FR, Wright K. Can we prioritise which databases to search? A case study using a systematic review of frozen shoulder management. Health Inf Libr J. 2013;30:49–58.

7. Higgins JPT, Green S. Cochrane handbook for systematic reviews of interventions. London: The Cochrane Collaboration; 2011.

8. Wright K, Golder S, Lewis-Light K. What value is the CINAHL database when searching for systematic reviews of qualitative studies? Syst Rev. 2015;4:104.

9. Wilkins T, Gillies RA, Davies K. EMBASE versus MEDLINE for family medicine searches: can MEDLINE searches find the forest or a tree? Can Fam Physician. 2005;51:848–9.

10. Halladay CW, Trikalinos TA, Schmid IT, Schmid CH, Dahabreh IJ. Using data sources beyond PubMed has a modest impact on the results of systematic reviews of therapeutic interventions. J Clin Epidemiol. 2015;68:1076–84.

11. Ahmadi M, Ershad-Sarabi R, Jamshidiorak R, Bahaodini K. Comparison of bibliographic databases in retrieving information on telemedicine. J Kerman Univ Med Sci. 2014;21:343–54.

12. Lorenzetti DL, Topfer L-A, Dennett L, Clement F. Value of databases other than MEDLINE for rapid health technology assessments. Int J Technol Assess Health Care. 2014;30:173–8.

13. Beckles Z, Glover S, Ashe J, Stockton S, Boynton J, Lai R, Alderson P. Searching CINAHL did not add value to clinical questions posed in NICE guidelines. J Clin Epidemiol. 2013;66:1051–7.

14. Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. The contribution of databases to the results of systematic reviews: a cross-sectional study. BMC Med Res Methodol. 2016;16:1–13.

15. Aagaard T, Lund H, Juhl C. Optimizing literature search in systematic reviews—are MEDLINE, EMBASE and CENTRAL enough for identifying effect studies within the area of musculoskeletal disorders? BMC Med Res Methodol. 2016;16:161.

16. van Enst WA, Scholten RJ, Whiting P, Zwinderman AH, Hooft L. Meta-epidemiologic analysis indicates that MEDLINE searches are sufficient for diagnostic test accuracy systematic reviews. J Clin Epidemiol. 2014;67:1192–9.

17. Rice DB, Kloda LA, Levis B, Qi B, Kingsland E, Thombs BD. Are MEDLINE searches sufficient for systematic reviews and meta-analyses of the diagnostic accuracy of depression screening tools? A review of meta-analyses. J Psychosom Res. 2016;87:7–13.

18. Bramer WM, Giustini D, Kramer BM, Anderson PF. The comparative recall of Google Scholar versus PubMed in identical searches for biomedical systematic reviews: a review of searches used in systematic reviews. Syst Rev. 2013;2:115.

19. Bramer WM, Giustini D, Kramer BMR. Comparing the coverage, recall, and precision of searches for 120 systematic reviews in Embase, MEDLINE, and Google Scholar: a prospective study. Syst Rev. 2016;5:39.

20. Ross-White A, Godfrey C. Is there an optimum number needed to retrieve to justify inclusion of a database in a systematic review search? Health Inf Libr J. 2017;33:217–24.

21. Bramer WM, Rethlefsen ML, Mast F, Kleijnen J. A pragmatic evaluation of a new method for librarian-mediated literature searches for systematic reviews. Res Synth Methods. 2017. doi:10.1002/jrsm.1279.

22. Bramer WM, de Jonge GB, Rethlefsen ML, Mast F, Kleijnen J. A systematic approach to searching: how to perform high-quality literature searches more efficiently. J Med Libr Assoc. 2018.

23. Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol. 2015;68:617–26.

24. McGowan J, Sampson M. Systematic reviews need systematic searchers. J Med Libr Assoc. 2005;93:74–80.

25. McKibbon KA, Haynes RB, Dilks CJW, Ramsden MF, Ryan NC, Baker L, Flemming T, Fitzgerald D. How good are clinical MEDLINE searches? A comparative study of clinical end-user and librarian searches. Comput Biomed Res. 1990;23:583–93.

26. Michaleff ZA, Costa LO, Moseley AM, Maher CG, Elkins MR, Herbert RD, Sherrington C. CENTRAL, PEDro, PubMed, and EMBASE are the most comprehensive databases indexing randomized controlled trials of physical therapy interventions. Phys Ther. 2011;91:190–7.


Acknowledgements

Not applicable

Funding

Melissa Rethlefsen receives funding in part from the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR001067. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Availability of data and materials

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Author information

Authors and affiliations

Medical Library, Erasmus MC, Erasmus University Medical Centre Rotterdam, 3000 CS, Rotterdam, the Netherlands

Wichor M. Bramer

Spencer S. Eccles Health Sciences Library, University of Utah, Salt Lake City, Utah, USA

Melissa L. Rethlefsen

Kleijnen Systematic Reviews Ltd., York, UK

Jos Kleijnen

School for Public Health and Primary Care (CAPHRI), Maastricht University, Maastricht, the Netherlands

Department of Epidemiology, Erasmus MC, Erasmus University Medical Centre Rotterdam, Rotterdam, the Netherlands

Oscar H. Franco


Contributions

WB, JK, and OF designed the study. WB designed the searches used in this study and gathered the data. WB and ML analyzed the data. WB drafted the first manuscript, which was revised critically by the other authors. All authors have approved the final manuscript.

Corresponding author

Correspondence to Wichor M. Bramer .

Ethics declarations

Competing interests

WB has received travel allowance from Embase for giving a presentation at a conference. The other authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1: Reviews included in the research. References to the systematic reviews published by Erasmus MC authors that were included in the research. (DOCX 19 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Cite this article

Bramer, W.M., Rethlefsen, M.L., Kleijnen, J. et al. Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study. Syst Rev 6, 245 (2017). https://doi.org/10.1186/s13643-017-0644-y


Received: 21 August 2017

Accepted: 24 November 2017

Published: 06 December 2017

DOI: https://doi.org/10.1186/s13643-017-0644-y

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Keywords

  • Databases, bibliographic
  • Review literature as topic
  • Sensitivity and specificity
  • Information storage and retrieval



Streamline your research using academic search engines

Specialist search engines can put the most relevant literature at your fingertips, but which is the best one for you, and how can you optimise your searches?


Knowing what has already been established within the field is the first step in any research project. So all researchers need to combine an in-depth understanding of their topic with a broad awareness of the discipline at large to push the boundaries of existing knowledge.

But reconceptualising volumes of peer-reviewed literature over extended periods of time is not a straightforward process. How can academic search engines (ASEs) help streamline a literature search and allow researchers to better formulate insightful research questions?


What is an academic search engine?

Academic search engines aim to combine the convenience and power of web-based search engines with the rigour of peer-reviewed scholarly sources. In contrast to traditional academic databases, which often sit behind a paywall, most ASEs are freely accessible and often link to full-text research articles. ASE searches return publications that are sorted by topic and significance in the field, with the most frequently cited publications appearing higher in the list by default. Researchers can strategically use ASEs to compile an expansive bibliography and streamline the literature review process.

How do academic search engines work?

The underlying algorithms used by search engines are often referred to as “web crawlers”; these continuously crawl and index online content. The metadata generated through this pre-filtering process are what allow search engines to return immediate results in response to keyword queries. These metadata, in some cases enriched by artificial intelligence tools, can be used to find networks of related articles, all of which can be saved into customisable reading lists or batch exported into reference management software.

What is the best academic search engine for your needs?

ASEs with a broad multidisciplinary focus will naturally have the biggest database of sources, and Google Scholar has traditionally been the leader on this front. Other ASEs are all playing catch-up, but Bielefeld Academic Search Engine (BASE), Semantic Scholar and RefSeek have all expanded the number of documents hosted within their databases. To generate metadata for millions of sources, Google Scholar harnesses the ubiquity of Google’s web-crawling algorithm, while Semantic Scholar uses AI-driven techniques. The proprietary nature of these tools can limit transparency and user control, and the iterative nature of these tools can compromise search reproducibility. In fact, even consecutive queries using identical search terms in Google Scholar may yield inconsistent results. In contrast, BASE uses an internationally standardised protocol for harvesting metadata while disclosing its list of content providers, and may be better suited for meta-analyses or systematic literature reviews.

While ASEs are typically free for end users, the availability of full-text research articles can be quite limited. CORE mitigates this by hosting only articles published in open access journals, but that may not be a viable option for your topic.

Access to ASEs may also vary depending on your location – for instance, Google is blocked or censored in some parts of the world – so it can be risky to rely on a single ASE as your only literature search tool. The ASE landscape can also be quite volatile, with Microsoft Academic – the previous main competitor to Google Scholar – shutting down its operations in 2021. The best approach may still be to pair an ASE with a more traditional academic database (such as Web of Science or Scopus) along with databases specifically tailored to your discipline (ERIC, SSRN, PubMed, CiteSeerX).

Top search tips

Regardless of which ASE you choose, as a researcher, you need to use a consistent approach when planning a search.

  • Summarise your topic or research questions into one or two sentences.
  • Underline keywords in your topic and list their synonyms as alternate search terms.
  • Search using different combinations of keywords, and assess if there are too many or too few relevant results.
  • Sort the results by publication time frame and citation counts, and save any relevant articles to a personalised reading list.
  • Use the “cited by” or “related articles” functionality of ASEs to expand the scope of your key references.

A common search mistake is not incorporating Boolean operators into your search strategy. Google Scholar, for example, uses the following Boolean operators (a short sketch of how to combine them follows this list):

  • AND limits results by only returning articles that are relevant to all the search terms (for example, learning AND teachers)
  • OR expands your results by returning articles relevant to either of the search terms (for example, learning OR teachers)
  • The minus sign (-) limits results by excluding keywords (so, learning -teachers)
  • -site excludes results from a website (teachers -site:wikipedia.org)
  • ~ expands your results by including synonyms for the key term in the search (~teachers)
  • “” limits your results by only showing articles with the exact phrasing (“professional learning for teachers”).
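To see how these operators can be combined in practice, here is a small sketch that assembles a query string from concept groups: synonyms are ORed together, groups are ANDed, and exclusions get a minus prefix. This only illustrates the syntax listed above; Google Scholar's actual handling of parentheses and the ~ operator has varied over time, so test any query interactively before relying on it.

    # Build a Google Scholar-style Boolean query from concept groups.
    concepts = [
        ['"professional learning"', '"professional development"'],  # synonyms: OR
        ["teachers"],
    ]
    exclude = ["site:wikipedia.org"]

    query = " AND ".join("(" + " OR ".join(group) + ")" for group in concepts)
    query += "".join(f" -{term}" for term in exclude)
    print(query)
    # ("professional learning" OR "professional development") AND (teachers) -site:wikipedia.org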

Making it work

ASEs are just another tool in a researcher’s toolkit, and you can be creative in how you choose to use them. You can make a separate reading list for every new paper you are writing, and quickly share these reference lists with your co-authors to speed up the final copy-editing process. You can create email alerts every time a prominent author in the field (yourself included!) publishes a new paper, or when a new study cites your work. ASEs can be used strategically to improve the public accessibility of academic literature and to help you form new collaborations.

Jack Wang is an associate professor in the School of Chemistry and Molecular Biosciences at the University of Queensland. He was awarded 2020 Australian University Teacher of the Year.



Literature Review

How to search effectively


The Literature searching interactive tutorial includes self-paced, guided activities to assist you in developing effective search skills.

1. Identify search words

Analyse your research topic or question.

  • What are the main ideas?
  • What concepts or theories have you already covered?
  • Write down your main ideas, synonyms, related words and phrases.
  • If you're looking for specific types of research, use these suggested terms: qualitative, quantitative, methodology, review, survey, test, trend (and more).
  • Be aware of UK and US spelling variations. E.g. organisation OR organization, ageing OR aging.
  • Interactive Keyword Builder
  • Identifying effective keywords

2. Connect your search words

Find results with one or more search words.

Use OR between words that mean the same thing.

E.g.  adolescent  OR  teenager

This search will find results with either (or both) of the search words.

Find results with two search words

Use AND between words which represent the main ideas in the question.

E.g. adolescent AND “physical activity”

This will find results with both of the search words.

Exclude search words

Use NOT to exclude words that you don’t want in your search results.

E.g. (adolescent OR teenager) NOT “young adult”
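Putting the three operators together (using the terms from the examples above), a combined search might read:

(adolescent OR teenager) AND “physical activity” NOT “young adult”

This finds results containing either age-related term together with the exact phrase, while excluding results about young adults.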

3. Use search tricks

Search for different word endings.

Truncation *

The asterisk symbol * will help you search for different word endings.

E.g. teen* will find results with the words: teen, teens, teenager, teenagers

Specific truncation symbols will vary. Check the 'Help' section of the database you are searching.

Search for common phrases

Phrase searching “...........”

Double quotation marks help you search for common phrases and make your results more relevant.

E.g. “physical activity” will find results with the words physical activity together as a phrase.

Search for spelling variations within related terms

Wildcards ?

Wildcard symbols allow you to search for spelling variations within the same or related terms.

E.g. wom?n will find results with women OR woman

Specific wild card symbols will vary. Check the 'Help' section of the database you are searching.

Search terms within specific ranges of each other

Proximity  w/#

Proximity searching allows you to specify where your search terms will appear in relation to each other.

E.g.  pain w/10 morphine will search for pain within ten words of morphine

Specific proximity symbols will vary. Check the 'Help' section of the database you are searching.
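To make the behaviour of these symbols concrete, the sketch below translates each into a rough regular-expression equivalent. This is illustrative only: database search operators are not regexes, and the exact symbols and semantics vary between databases.

    import re

    # Rough regex equivalents of common database search symbols (illustrative).
    patterns = {
        "teen*":               r"\bteen\w*\b",                           # truncation
        "wom?n":               r"\bwom.n\b",                             # wildcard
        '"physical activity"': r"\bphysical activity\b",                 # phrase search
        "pain w/10 morphine":  r"\bpain\W+(?:\w+\W+){0,10}?morphine\b",  # proximity (one direction only)
    }

    text = "Teenagers and women report pain controlled with morphine after physical activity."
    for op, rx in patterns.items():
        print(op, "->", re.findall(rx, text, flags=re.IGNORECASE))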

4. Improve your search results

All library databases are different, and you can't always search and refine in the same way. Try to be consistent when transferring your search between the library databases you have chosen.

Narrow and refine your search results by:

  • year of publication or date range (for recent or historical research)
  • document or source type (e.g. article, review or book)
  • subject or keyword (for relevance). Try repeating your search using the 'subject' headings or 'keywords' field to focus your search
  • searching in particular fields, i.e. citation and abstract. Explore the available dropdown menus to change the fields to be searched.

When searching, remember to:

Adapt your search and keep trying.

Searching for information is a process and you won't always get it right the first time. Improve your results by changing your search and trying again until you're happy with what you have found.

Keep track of your searches

Keeping track of searches saves time as you can rerun them, store references, and set up regular alerts for new research relevant to your topic.

Most library databases allow you to register with a personal account. Look for a 'log in', 'sign in' or 'register' button to get started.

  • Literature review search tracker (Excel spreadsheet)
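If you prefer a programmatic log to the spreadsheet, a minimal sketch of the same idea is below. The field names are illustrative, not a standard; they simply mirror what a search tracker typically records.

    import csv
    from datetime import date

    # One hypothetical search-log entry (illustrative field names).
    log_entry = {
        "date": date.today().isoformat(),
        "database": "Scopus",
        "search_string": '(adolescent OR teenager) AND "physical activity"',
        "filters": "2015-2024; English; peer reviewed",
        "results": 412,
        "notes": "rerun monthly; alert created",
    }

    with open("search_log.csv", "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=log_entry.keys())
        if f.tell() == 0:          # new file: write the header row first
            writer.writeheader()
        writer.writerow(log_entry)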

Manage your references

There are free and subscription reference management programs available on the web or to download on your computer.

  • EndNote - The University has a license for EndNote. It is available to all students and staff, although it is recommended for postgraduates and academic staff.
  • Zotero - Free software recommended for undergraduate students.



How to undertake a literature search: a step-by-step guide

Affiliation

  • 1 Literature Search Specialist, Library and Archive Service, Royal College of Nursing, London.
  • PMID: 32279549
  • DOI: 10.12968/bjon.2020.29.7.431

Undertaking a literature search can be a daunting prospect. Breaking the exercise down into smaller steps will make the process more manageable. This article suggests 10 steps that will help readers complete this task, from identifying key concepts to choosing databases for the search and saving the results and search strategy. It discusses each of the steps in a little more detail, with examples and suggestions on where to get help. This structured approach will help readers obtain a more focused set of results and, ultimately, save time and effort.

Keywords: Databases; Literature review; Literature search; Reference management software; Research questions; Search strategy.

MeSH terms:

  • Databases, Bibliographic*
  • Information Storage and Retrieval / methods*
  • Nursing Research
  • Review Literature as Topic*

Library Philosophy and Practice (e-journal), Libraries at University of Nebraska-Lincoln

Harnessing artificial intelligence-powered search engines for the literature review process.

Omobolanle Seri Fasola, Ajayi Crowther University, Oyo, Nigeria

Date of this Version

Winter 11-13-2023

Abstract

Literature reviews are an essential part of research, and they require a significant amount of time and effort to search through the millions of academic papers available online and in print. With the advent of AI-powered search engines, however, the process of conducting literature reviews has become more efficient and effective. AI-powered search engines use machine learning algorithms to analyze large volumes of data and provide researchers with relevant and accurate information in a matter of seconds. The importance of AI-powered search engines in literature reviews cannot be overstated: they help researchers save time, improve the accuracy and relevance of their results, and provide comprehensive results that traditional literature review methods may miss. The purpose of this article is to provide researchers with a comprehensive guide on how to leverage AI-powered search engines for the literature review process. It discusses the advantages of using AI-powered search engines, how to choose the right search engine, tips for using them effectively, and how to avoid pitfalls in the use of these tools.



AI Tools for Research


Searching with AI tools



The following tools aim to help you find research on a topic. They all have AI- or machine learning-based features, such as semantic search, text summarization, conversational interfaces, and more.

Important notes:

  • Most tools search one or more open scholarly metadata sources such as Semantic Scholar, CrossRef, OpenAlex, and others. Some of these sources have >200 million records and others are more restricted. None of the sources are as comprehensive as Google Scholar , which is estimated to have almost 400 million records. Check the tool's help documentation for information about its sources. See this Wikipedia page for a sortable list of the largest metadata aggregators.
  • Access to citation/abstract metadata and the full-text of open access articles still omits a vast amount of scholarly research contained in full-text paywalled articles. When crafting answers and summaries, tools without access to full-text will base their answers on abstracts.
  • Tools with a connection to scholarly literature do not tend to make up fake citations, but they may cite a real reference in a way that misrepresents its contents.
  • None of these tools are appropriate for systematic review searching, which requires explicit search strategies that are documented and reproducible. For recommendations and help with systematic review searching, get in touch with our service team through Temple Libraries' Evidence Synthesis & Systematic Reviews page.
  • Off-campus access bookmarklet: Get the full text of paywalled articles using Temple Libraries' off-campus access bookmarklet. Add it to your web browser's bookmark bar. Click it to quickly reload any webpage through the Libraries' remote access proxy server.
  • Elicit: A pay-only multifunctional research assistant with a substantial free trial. Searches ~126 million items from Semantic Scholar, which only includes open access full text. Extracts information from PDFs and has a Zotero integration. Works on a paid "credits" system, with 5000 free starter credits.
  • SciSpace: Multi-functional research assistant tool with a free tier. Includes AI summarizing, information extraction, personalization features, a Chrome browser extension, Zotero integration, and other utilities. Literature search draws on a corpus of >150 million items. The writing assistant portion has a freemium model. Use the SciSpace Chrome browser extension to enhance Google Scholar search results: the extension puts icons on your results page that let you chat with a search result, find similar papers, or run the same search in SciSpace.
  • scite_: A pay-only research assistant built on a database of citation statements drawn from a body of >187 million items from open repositories such as PubMed and Unpaywall, and some indexed from commercial publishers. Can be used anonymously for free for literature discovery.
  • Consensus: Searches the Semantic Scholar database using a proprietary combination of semantic and keyword search. Uses AI to extract, aggregate, and summarize findings based on the top few search results. Free accounts can make unlimited searches, bookmark articles, and save searches.
  • Dimensions: The largest collection of linked data in a single database, including grants, publications, datasets, clinical trials, patents, and policy documents. Has numerous AI-powered features, such as TLDR summaries and citation network maps.
  • jenni: Searches for relevant research as you write a document. Substantial free tier; allows uploads of your own sources, including PDFs.
  • Keenious: Searches for relevant research as you write in a Word or Google Doc. Can also search for research based on a seed article you supply. Covers 240 million scholarly papers.
  • R Discovery: Literature search tool with other features such as recommended feeds based on your preferred topics, a journal recommender, and an AI writing assistant.
  • Semantic Scholar: A free, non-profit AI-powered academic search engine from the Allen Institute for AI, with >200 million items covering all disciplines (see the API sketch after this list). Has many personalization features, AI-generated TLDR summaries, a Semantic Reader with the hypothes.is annotation tool, Research Feed search alerts, easy citation manager exporting, and an Ask This Paper AI chatbot. Extensive help documentation is in the FAQ. Note: Temple users, choose "Sign in with your Institution" to enable full-text article access, save papers to your library, and create custom alerts.
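As a glimpse of what sits behind several of these tools, Semantic Scholar exposes a free public Graph API. The sketch below queries its paper-search endpoint; the endpoint and field names follow the public documentation at the time of writing, so check the current API docs (and rate limits) before relying on it.

    import requests

    # Query Semantic Scholar's public paper-search endpoint.
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={
            "query": "ai tools for literature review",
            "fields": "title,year",
            "limit": 5,
        },
        timeout=30,
    )
    resp.raise_for_status()
    for paper in resp.json().get("data", []):
        print(paper.get("year"), "-", paper.get("title"))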



Indian J Anaesth. 2016 Sep; 60(9).

Literature search for research planning and identification of research problem

Anju Grewal

Department of Anaesthesiology, Dayanand Medical College and Hospital, Ludhiana, Punjab, India

Hanish Kataria

1 Department of Surgery, Government Medical College and Hospital, Chandigarh, India

2 Department of Cardiac Anaesthesia, All India Institute of Medical Sciences, New Delhi, India

Literature search is a key step in performing good authentic research. It helps in formulating a research question and planning the study. The available published data are enormous; therefore, choosing the appropriate articles relevant to your study is an art. A literature search can be time-consuming and tiring, and can lead to disinterest or even abandonment of the search midway if not carried out in a step-wise manner. Various databases are available for performing a literature search. This article primarily stresses how to formulate a research question and describes the various types and sources for literature search, which will help make your search specific and time-saving.

INTRODUCTION

Literature search is a systematic and well-organised search of the already published data to identify a breadth of good quality references on a specific topic.[1] The reasons for conducting a literature search are numerous: they include drawing information for making evidence-based guidelines, as a step in the research method, and as part of academic assessment.[2] However, the main purpose of a thorough literature search is to formulate a research question by evaluating the available literature with an eye on gaps still amenable to further research.

A research problem[3] is typically a topic of interest and of some familiarity to the researcher. It needs to be channelised by focussing on information yet to be explored. Once we have narrowed down the problem, seeking and analysing the existing literature may further refine the research approach.

A research hypothesis[4] is a carefully created statement of how you expect the research to proceed. It is one of the most important tools for answering the research question. It should be apt, contain the necessary components, and raise a question that can be tested and investigated.

The literature search can be exhaustive and time-consuming, but there are some simple steps which can help you plan and manage the process. The most important are formulating the research questions and planning your search.

FORMULATING THE RESEARCH QUESTION

A literature search is done to identify the appropriate methodology and design of the study; the population sampled and sampling methods; methods of measuring concepts; and techniques of analysis. It also helps in determining extraneous variables affecting the outcome and in identifying faults or lacunae that could be avoided.

Formulating a well-focused question is a critical step for facilitating good clinical research.[ 5 ] There can be general questions or patient-oriented questions that arise from clinical issues. Patient-oriented questions can involve the effect of therapy or disease or examine advantage versus disadvantage for a group of patients.[ 6 ]

For example, suppose we want to evaluate the effect of a particular drug (e.g., dexmedetomidine) for procedural sedation in day care surgery patients. While formulating a research question, one should consider certain criteria, referred to as the ‘FINER’ criteria (F-Feasible, I-Interesting, N-Novel, E-Ethical, R-Relevant).[ 5 ] The idea should be interesting and relevant to clinical research. It should confirm, refute or add information to research work already done. One should also keep in mind the patient population under study and the resources available in a given setup. The entire research process should also conform to the ethical principles of research.

The patient or study population, intervention, comparison or control arm, primary outcome and timing of measurement of the outcome (PICOT) make up a well-known approach for framing a focused research question.[ 7 , 8 ] Dividing the question into these key components makes it easier to search; a minimal sketch of turning the components into a search string follows the list below. In this case scenario:

  • Patients (P) – What is the important group of patients? For example, day care surgery patients
  • Intervention (I) – What is the important intervention? For example, intravenous dexmedetomidine
  • Comparison (C) – What is the important intervention of comparison? For example, intravenous ketamine
  • Outcome (O) – What is the effect of the intervention? For example, analgesic efficacy, procedural awareness, drug side effects
  • Time (T) – What is the time interval for measuring the outcome? For example, hourly for the first 4 h, then 4-hourly until 24 h post-procedure.
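
As an illustration (a minimal sketch, not part of the original article; the terms are taken from the example scenario above, and any real search would refine them), the P, I, C and O components can be combined mechanically into a Boolean search string:

    # Minimal sketch: assemble a Boolean search string from PICOT components.
    # All terms are illustrative, taken from the day care surgery scenario.
    picot = {
        "P": '"day care surgery"',
        "I": "dexmedetomidine",
        "C": "ketamine",
        "O": '("analgesic efficacy" OR "procedural awareness" OR "side effects")',
    }
    # T (timing of outcome measurement) usually belongs to the study design
    # rather than the search string, so it is omitted here.
    query = " AND ".join(picot.values())  # dicts preserve insertion order (Python 3.7+)
    print(query)

Running this prints: "day care surgery" AND dexmedetomidine AND ketamine AND ("analgesic efficacy" OR "procedural awareness" OR "side effects").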

Multiple questions can be formulated from a patient's problems and concerns. A well-focused question should be chosen for research according to its significance for patient interest and its relevance to our knowledge. Good research questions address the lacunae in the available literature with an aim to impact clinical practice in a constructive manner. In India, outcome research and the relevant resources, such as electronic database systems and hospital information systems, are limited. Even where these resources exist, data about them are not widely accessible.[ 9 ]

TYPES OF MEDICAL LITERATURE

(Further details are given in the chapter ‘Types of studies and research design’ in this issue.)

Primary literature

Primary sources are the authentic publication of an expert's new evidence, conclusions and proposals (case reports, clinical trials, etc.) and are usually published in a peer-reviewed journal. Preliminary reports, congress papers and preprints also constitute primary literature.[ 2 ]

Secondary literature

Secondary sources are systematic review articles or meta-analyses in which material derived from the primary literature is inferred and evaluated.[ 2 ]

Tertiary literature

Tertiary literature consists of collections that compile information from primary or secondary literature (e.g., reference books).[ 2 ]

METHODS OF LITERATURE SEARCH

There are various methods of literature search that are used alone or in combination [ Table 1 ]. For the past few decades, searching local and national libraries for books, journals, etc., was the usual practice, and physical literature exploration is still an important component of any systematic review search process.[ 10 , 11 ] With the advancement of technology, the Internet is now the gateway to the maze of vast medical literature.[ 12 ] Conducting a literature review involves web-based search engines, e.g., Google and Google Scholar [ Table 2 ], or various electronic research databases to identify materials that describe the research topic or those homologous to it.[ 13 , 14 ]

[Table 1: Methods of literature search (image not reproduced)]

[Table 2: Web-based methods of literature search (image not reproduced)]

The various databases available for literature search include databases of original articles published in journals [ Table 2 ] and evidence-based databases of integrated information available as systematic reviews and abstracts [ Table 3 ].[ 12 , 14 ] Most of these are not freely available to the individual user. PubMed ( http://www.ncbi.nlm.nih.gov/pubmed/ ), available since 1996, is the largest free resource, and a number of other sources now also provide free access to literature in the biomedical field.[ 15 ] PubMed includes more than 26 million citations from MEDLINE, life science journals and online books; citations carry links to full-text material in PubMed Central and on publisher web sites.[ 16 ] The choice of databases depends on the subject of interest and its potential coverage by the different databases. The Education Resources Information Center (ERIC), available at http://eric.ed.gov/ , is a free online digital library of education research and information sponsored by the Institute of Education Sciences of the U.S. Department of Education. No single database covers all the medical literature, so several different databases need to be searched. At a minimum, PubMed or MEDLINE, Embase and the Cochrane Central Register of Controlled Trials should be searched. When searching these databases, emphasis should be given to meta-analyses, systematic reviews, randomised controlled trials and landmark studies.

[Table 3: Electronic sources of evidence-based databases (image not reproduced)]

The time allocated to the search needs attention, as exploring and selecting data are early steps in the research method, and research conducted as part of an academic assessment has narrow timeframes.[ 17 ] In the Indian scenario, limited outcome research and limited accessibility to data lead to a less thorough knowledge of the nature of the research problem. This results in the formulation of an inappropriate research question and increases the time needed for the literature search.

TYPES OF SEARCH

A search can take different forms according to the subject of interest; choosing the right one increases the chances of retrieving relevant information.

Translating the research question into keywords

Keyword searches return results based on any of the words specified; hence, keywords are the cornerstone of an effective search. Synonyms and alternate terms should be considered to elicit further information, e.g., barbiturate in place of thiopentone. Spelling variants should also be taken into account, e.g., anesthesia versus anaesthesia (American and British). Most databases use a controlled vocabulary to establish common search terms (keywords), and some of these alternative keywords can be looked up in the database thesaurus.[ 4 ] Another strategy is combining keywords with Boolean operators, as illustrated below. It is important to keep a note of the keywords and methods used in exploring the literature, as these will need to be described later when documenting the search process.
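
For instance (an illustrative query composed for this section, not taken from the original article), synonyms and spelling variants can be grouped with OR, with a trailing asterisk as the common truncation wildcard:

    (thiopentone OR thiopental OR barbiturate*) AND (anaesthesia OR anesthesia)

Here barbiturate* matches both ‘barbiturate’ and ‘barbiturates’.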

‘Medical Subject Headings (MeSH) is the National Library of Medicine's controlled hierarchical vocabulary that is used for indexing articles in PubMed, with more specific terms organised underneath more general terms’.[ 17 ] This provides a reliable way to retrieve citations that use different terminology for identical ideas, as it indexes articles based on content. Two features of PubMed that can increase the yield of specific articles are ‘automatic term mapping’ and ‘automatic term explosion’.[ 4 ]

For example, if the search keyword is heart attack, this term will be matched against the MeSH translation table headings and then exploded into its various subheadings. This helps construct the search by selecting and adding MeSH subheadings and families of MeSH terms via hyperlinks.[ 4 ]

We can set limits, for example to clinical trials, to retrieve a higher level of evidence (i.e., randomised controlled clinical trials). Furthermore, one can browse through the link entitled ‘Related Articles’. This PubMed feature searches for similar citations using an intricate algorithm that scans titles, abstracts and MeSH terms.[ 4 ]
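
The automatic term mapping described above can also be observed programmatically. Below is a minimal sketch (in Python) using NCBI's public E-utilities ESearch endpoint; the search term and retmax value are arbitrary examples, and heavy or scripted use should follow NCBI's usage guidelines (e.g., registering an API key):

    import json
    import urllib.request

    # Query PubMed through the NCBI E-utilities ESearch endpoint.
    url = (
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
        "?db=pubmed&term=heart+attack&retmode=json&retmax=5"
    )
    with urllib.request.urlopen(url) as response:
        result = json.loads(response.read())["esearchresult"]

    print(result["count"])             # number of matching citations
    print(result["idlist"])            # first five PubMed IDs (PMIDs)
    print(result["querytranslation"])  # how PubMed expanded the query, e.g.
                                       # mapping 'heart attack' to the MeSH
                                       # term 'myocardial infarction'

The querytranslation field makes the effect of automatic term mapping visible outside the web interface.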

Phrase search

A phrase search, usually written in quotation marks (e.g., "myocardial infarction"), returns only pages containing the words typed, in that exact order, with no words in between them.

Boolean operators

AND, OR and NOT are the three Boolean operators, named after the mathematician George Boole.[ 18 ] Combining two words with ‘AND’ fetches articles that mention both. Using ‘OR’ widens the search and fetches articles that mention either. Using ‘NOT’ fetches articles containing the first word but not the second, thus narrowing the search.

Filters can also be used to refine the search, for example, by article type, text availability, language, age, sex and journal category.
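
For illustration (example queries composed for the dexmedetomidine scenario above, not from the original article; the PubMed field tags [tiab], [mh], [pt] and [la] restrict a term to the title/abstract, MeSH headings, publication type and language, respectively):

    dexmedetomidine AND "day care surgery"
    (sedation OR analgesia) AND dexmedetomidine
    dexmedetomidine NOT ketamine
    dexmedetomidine[tiab] AND randomized controlled trial[pt] AND english[la]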

Overall, the recommendations for the methodology of a literature search can be summarised as below (Creswell)[ 19 ]:

  • Identify keywords and use them to search for articles in library and Internet resources as described above
  • Search several databases to find articles related to your topic
  • Use a thesaurus to identify terms for locating your articles
  • Find an article similar to your topic; look at the terms used to describe it, and use them in your search
  • Use databases that provide full-text articles (free through academic libraries, the Internet or for a fee) as much as possible, to save time searching for your articles
  • If you are examining a topic for the first time and are unaware of the research on it, start with broad syntheses of the literature, such as overviews, summaries of the literature on your topic or review articles
  • Start with the most recent issues of the journals, look for studies about your topic, and then work backward in time; follow up on the references at the end of the articles for more sources to examine
  • Refer to books on a single topic by a single author or group of authors, or books containing chapters written by different authors
  • Next, look for recent conference papers, which often report the latest research developments; contact the authors of pertinent studies, writing or phoning to ask if they know of studies related to your area of interest
  • The easy access to and ability to capture entire articles from the web make it attractive; however, check such articles carefully for authenticity and quality, and be cautious about whether they represent systematic research.

The whole process of literature search[ 20 ] is summarised in Figure 1 .

[Figure 1: Process of literature search (image not reproduced)]

A literature search provides not only an opportunity to learn more about a given topic but also insight into how the topic was studied by previous researchers. It helps to interpret ideas, detect shortcomings and recognise opportunities. In short, a systematic and well-organised literature search may help in designing novel research.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.
