Grad Coach

Research Aims, Objectives & Questions

The “Golden Thread” Explained Simply (+ Examples)

By: David Phair (PhD) and Alexandra Shaeffer (PhD) | June 2022

The research aims, objectives and research questions (collectively called the “golden thread”) are arguably the most important thing you need to get right when you’re crafting a research proposal, dissertation or thesis. We receive questions almost every day about this “holy trinity” of research and there’s certainly a lot of confusion out there, so we’ve crafted this post to help you navigate your way through the fog.

Overview: The Golden Thread

  • What is the golden thread?
  • What are research aims (examples)
  • What are research objectives (examples)
  • What are research questions (examples)
  • The importance of alignment in the golden thread

What is the “golden thread”?  

The golden thread simply refers to the collective research aims, research objectives, and research questions for any given project (i.e., a dissertation, thesis, or research paper). These three elements are bundled together because it’s extremely important that they align with each other, and that the entire research project aligns with them.

Importantly, the golden thread needs to weave its way through the entirety of any research project, from start to end. In other words, it needs to be very clearly defined right at the beginning of the project (the topic ideation and proposal stage) and it needs to inform almost every decision throughout the rest of the project. For example, your research design and methodology will be heavily influenced by the golden thread (we’ll explain this in more detail later), as well as your literature review.

The research aims, objectives and research questions (the golden thread) define the focus and scope (the delimitations) of your research project. In other words, they help ringfence your dissertation or thesis to a relatively narrow domain, so that you can “go deep” and really dig into a specific problem or opportunity. They also help keep you on track, as they act as a litmus test for relevance. In other words, if you’re ever unsure whether to include something in your document, simply ask yourself the question, “does this contribute toward my research aims, objectives or questions?”. If it doesn’t, chances are you can drop it.

Alright, enough of the fluffy, conceptual stuff. Let’s get down to business and look at what exactly the research aims, objectives and questions are and outline a few examples to bring these concepts to life.


Research Aims: What are they?

Simply put, the research aim(s) is a statement that reflects the broad overarching goal(s) of the research project. Research aims are fairly high-level (low resolution) as they outline the general direction of the research and what it’s trying to achieve.

Research Aims: Examples  

True to the name, research aims usually start with the wording “this research aims to…”, “this research seeks to…”, and so on. For example:

“This research aims to explore employee experiences of digital transformation in retail HR.”

“This study sets out to assess the interaction between student support and self-care on well-being in engineering graduate students.”

As you can see, these research aims provide a high-level description of what the study is about and what it seeks to achieve. They’re not hyper-specific or action-oriented, but they’re clear about what the study’s focus is and what is being investigated.


Research Objectives: What are they?

The research objectives take the research aims and make them more practical and actionable. In other words, the research objectives showcase the steps that the researcher will take to achieve the research aims.

The research objectives need to be far more specific (higher resolution) and actionable than the research aims. In fact, it’s always a good idea to craft your research objectives using the “SMART” criteria. In other words, they should be specific, measurable, achievable, relevant and time-bound.

Research Objectives: Examples  

Let’s look at two examples of research objectives. We’ll stick with the topic and research aims we mentioned previously.  

For the digital transformation topic:

  • To observe the retail HR employees throughout the digital transformation.
  • To assess employee perceptions of digital transformation in retail HR.
  • To identify the barriers and facilitators of digital transformation in retail HR.

And for the student wellness topic:

  • To determine whether student self-care predicts the well-being score of engineering graduate students.
  • To determine whether student support predicts the well-being score of engineering students.
  • To assess the interaction between student self-care and student support when predicting well-being in engineering graduate students.

  As you can see, these research objectives clearly align with the previously mentioned research aims and effectively translate the low-resolution aims into (comparatively) higher-resolution objectives and action points . They give the research project a clear focus and present something that resembles a research-based “to-do” list.

The research objectives detail the specific steps that you, as the researcher, will take to achieve the research aims you laid out.

Research Questions: What are they?

Finally, we arrive at the all-important research questions. The research questions are, as the name suggests, the key questions that your study will seek to answer. Simply put, they are the core purpose of your dissertation, thesis, or research project. You’ll present them at the beginning of your document (either in the introduction chapter or literature review chapter) and you’ll answer them at the end of your document (typically in the discussion and conclusion chapters).

The research questions will be the driving force throughout the research process. For example, in the literature review chapter, you’ll assess the relevance of any given resource based on whether it helps you move towards answering your research questions. Similarly, your methodology and research design will be heavily influenced by the nature of your research questions. For instance, research questions that are exploratory in nature will usually make use of a qualitative approach, whereas questions that relate to measurement or relationship testing will make use of a quantitative approach.  

Let’s look at some examples of research questions to make this more tangible.

Research Questions: Examples  

Again, we’ll stick with the research aims and research objectives we mentioned previously.  

For the digital transformation topic (which would be qualitative in nature):

  • How do employees perceive digital transformation in retail HR?
  • What are the barriers and facilitators of digital transformation in retail HR?

And for the student wellness topic (which would be quantitative in nature):

  • Does student self-care predict the well-being scores of engineering graduate students?
  • Does student support predict the well-being scores of engineering students?
  • Do student self-care and student support interact when predicting well-being in engineering graduate students?

You’ll probably notice that there’s quite a formulaic approach to this. In other words, the research questions are basically the research objectives “converted” into question format. While that is true most of the time, it’s not always the case. For example, the first research objective for the digital transformation topic was more or less a step on the path toward the other objectives, and as such, it didn’t warrant its own research question.  

So, don’t rush your research questions and sloppily reword your objectives as questions. Carefully think about what exactly you’re trying to achieve (i.e. your research aim) and the objectives you’ve set out, then craft a set of well-aligned research questions. Also, keep in mind that this can be a somewhat iterative process, where you go back and tweak research objectives and aims to ensure tight alignment throughout the golden thread.

The importance of strong alignment 

Alignment is the keyword here and we have to stress its importance. Simply put, you need to make sure that there is a very tight alignment between all three pieces of the golden thread. If your research aims and research questions don’t align, for example, your project will be pulling in different directions and will lack focus. This is a common problem students face and can cause many headaches (and tears), so be warned.

Take the time to carefully craft your research aims, objectives and research questions before you run off down the research path. Ideally, get your research supervisor/advisor to review and comment on your golden thread before you invest significant time into your project, and certainly before you start collecting data .  

Recap: The golden thread

In this post, we unpacked the golden thread of research, consisting of the research aims, research objectives and research questions.

As always, feel free to leave a comment below – we always love to hear from you. Also, if you’re interested in 1-on-1 support, take a look at our private coaching service here.


Psst… there’s more (for free)

This post is part of our dissertation mini-course, which covers everything you need to get started with your dissertation, thesis or research project. 


38 Comments

Isaac Levi

Thank you very much for your great effort put. As an Undergraduate taking Demographic Research & Methodology, I’ve been trying so hard to understand clearly what is a Research Question, Research Aim and the Objectives in a research and the relationship between them etc. But as for now I’m thankful that you’ve solved my problem.

Hatimu Bah

Well appreciated. This has helped me greatly in doing my dissertation.

Dr. Abdallah Kheri

Am so delighted with this wonderful information, thank you a lot.

So impressive. I have benefited a lot and look forward to learning more on research.

Ekwunife, Chukwunonso Onyeka Steve

I am very happy to have carefully gone through this well-researched article.

In fact, I used to have a phobia about anything research, because of my poor understanding of the concepts.

Now, I get to know that my research question is the same as my research objective(s) rephrased in question format.

Please, I would need a follow-up on the subject, as I intend to join the team of researchers. Thanks once again.

Tosin

Thanks so much. This was really helpful.

Ishmael

I know you people have tried to break things into a more understandable and easy format. And God bless you. Keep it up.

sylas

I found this document so useful towards my study in research methods. Thanks so much.

Michael L. Andrion

This is my 2nd read topic in your course and I should commend the simplified explanations of each part. I’m beginning to understand and absorb the use of each part of a dissertation/thesis. I’ll keep on reading your free course and might be able to avail the training course! Kudos!

Scarlett

Thank you! Better put than my lecture, and helped to easily understand the basics, which I feel often get brushed over when beginning dissertation work.

Enoch Tindiwegi

This is quite helpful. I like how the Golden thread has been explained and the needed alignment.

Sora Dido Boru

This is quite helpful. I really appreciate!

Chulyork

The article made it simple for researcher students to differentiate between three concepts.

Afowosire Wasiu Adekunle

Very innovative and educational in approach to conducting research.

Sàlihu Abubakar Dayyabu

I am very impressed with all these terminology, as I am a fresh student for post graduate, I am highly guided and I promised to continue making consultation when the need arise. Thanks a lot.

Mohammed Shamsudeen

A very helpful piece. thanks, I really appreciate it .

Sonam Jyrwa

Very well explained, and it might be helpful to many people like me.

JB

Wish I had found this (and other) resource(s) at the beginning of my PhD journey… not in my writing-up year… 😩 Anyway… just a quick question, as I’m having some issues ordering my “golden thread”… does it matter in what order you mention them? I.e., is it always first aims, then objectives, and finally the questions? Or can you first mention the research questions and then the aims and objectives?

UN

Thank you for a very simple explanation that builds upon the concepts in a very logical manner. Just prior to this, I read the research hypothesis article, which was equally very good. This met my primary objective.

My secondary objective was to understand the difference between research questions and research hypothesis, and in which context to use which one. However, I am still not clear on this. Can you kindly please guide?

Derek Jansen

In research, a research question is a clear and specific inquiry that the researcher wants to answer, while a research hypothesis is a tentative statement or prediction about the relationship between variables or the expected outcome of the study. Research questions are broader and guide the overall study, while hypotheses are specific and testable statements used in quantitative research. Research questions identify the problem, while hypotheses provide a focus for testing in the study.

Saen Fanai

Exactly what I need in this research journey, I look forward to more of your coaching videos.

Abubakar Rofiat Opeyemi

This helped a lot. Thanks so much for the effort put into explaining it.

Lamin Tarawally

What data sources does writing a dissertation/thesis require?

What does ‘data source’ cover when writing a dissertation/thesis?

Latifat Muhammed

This is quite useful thanks

Yetunde

I’m excited and thankful. I got so much value which will help me progress in my thesis.

Amer Al-Rashid

Where are the locations of the research statement, research objective and research question in a research paper? Can you write an outline that defines their places in the research paper?

Webby

Very helpful and important tips on Aims, Objectives and Questions.

Refiloe Raselane

Thank you so much for making research aims, research objectives and research questions so clear. This will be helpful to me as I continue with my thesis.

Annabelle Roda-Dafielmoto

Thanks much for this content. I learned a lot. And I am inspired to learn more. I am still struggling with my preparation for dissertation outline/proposal. But I consistently follow contents and tutorials and the new FB of GRAD Coach. Hope to really become confident in writing my dissertation and successfully defend it.

Joe

As a researcher and lecturer, I find that splitting research goals into research aims, objectives, and questions is unnecessarily bureaucratic and confusing for students. For most biomedical research projects, including ‘real research’, 1–3 research questions will suffice (numbers may differ by discipline).

Abdella

Awesome! Very important resources and presented in an informative way to easily understand the golden thread. Indeed, thank you so much.

Sheikh

Well explained

New Growth Care Group

The blog article on research aims, objectives, and questions by Grad Coach is a clear and insightful guide that aligns with my experiences in academic research. The article effectively breaks down the often complex concepts of research aims and objectives, providing a straightforward and accessible explanation. Drawing from my own research endeavors, I appreciate the practical tips offered, such as the need for specificity and clarity when formulating research questions. The article serves as a valuable resource for students and researchers, offering a concise roadmap for crafting well-defined research goals and objectives. Whether you’re a novice or an experienced researcher, this article provides practical insights that contribute to the foundational aspects of a successful research endeavor.

yaikobe

Great thanks to you. It is a really amazing explanation. I grasped a lot and moved one step up in research knowledge.

UMAR SALEH

I really found these tips helpful. Thank you very much Grad Coach.

Rahma D.

I found this article helpful. Thanks for sharing this.


100 Questions (and Answers) About Research Methods

By Neil J. Salkind

"How do I create a good research hypothesis?"

"How do I know when my literature review is finished?"

"What is the difference between a sample and a population?"

"What is power and why is it important?"

In an increasingly data-driven world, it is more important than ever for students as well as professionals to better understand the process of research. This invaluable guide answers the essential questions that students ask about research methods in a concise and accessible way.


"This is a concise text that has good coverage of the basic concepts and elementary principles of research methods. It picks up where many traditional research methods texts stop and provides additional discussion on some of the hardest to understand concepts."

"I think it’s a great idea for a text (or series), and I have no doubt that the majority of students would find it helpful. The material is presented clearly, and it is easy to read and understand. My favorite example from those provided is on p. 7 where the author provides an actual checklist for evaluating the merit of a study. This is a great tool for students and would provide an excellent “practice” approach to learning this skill. Over time students wouldn’t need a checklist, but I think it would be invaluable for those students with little to no research experience."

I am already using 3 other books. This is a good book though.

Did not meet my needs

I had heard good things about Salkind's statistics book and wanted to review his research book as well. The 100 questions format is cute, and may provide a quick answer to a specific student question. However, it's not really organized in a way that I find particularly useful for a more integrated course that progressively develops and builds upon concepts.

Comes across as a little disorganized, plus a little too focused on psychology and statistics.

This text is a great resource guide for graduate students. But it may not work as well with undergraduates orienting themselves to the research process. However, I will use it as a recommended text for students.

Key Features

  • The entire research process is covered from start to finish: Divided into nine parts, the book guides readers from the initial asking of questions, through the analysis and interpretation of data, to the final report
  • Each question and answer provides a stand-alone explanation: Readers gain enough information on a particular topic to move on to the next question, and topics can be read in any order
  • Most questions and answers supplement others in the book: Important material is reinforced, and connections are made between the topics
  • Each answer ends with referral to three other related questions: Readers are shown where to go for additional information on the most closely related topics

Sample Materials & Chapters

Question #16: How Do I Know When My Literature Review Is Finished?

Question #32: How Can I Create a Good Research Hypothesis?

Question #40: What Is the Difference Between a Sample and a Population, and Why

Question #92: What Is Power, and Why Is It Important?



Writing Strong Research Questions | Criteria & Examples

Published on 30 October 2022 by Shona McCombes. Revised on 12 December 2023.

A research question pinpoints exactly what you want to find out in your work. A good research question is essential to guide your research paper , dissertation , or thesis .

All research questions should be:

  • Focused on a single problem or issue
  • Researchable using primary and/or secondary sources
  • Feasible to answer within the timeframe and practical constraints
  • Specific enough to answer thoroughly
  • Complex enough to develop the answer over the space of a paper or thesis
  • Relevant to your field of study and/or society more broadly

Writing Strong Research Questions

Table of contents

  • How to write a research question
  • What makes a strong research question
  • Research questions quiz
  • Frequently asked questions

You can follow these steps to develop a strong research question:

  • Choose your topic
  • Do some preliminary reading about the current state of the field
  • Narrow your focus to a specific niche
  • Identify the research problem that you will address

The way you frame your question depends on what your research aims to achieve. The table below shows some examples of how you might formulate questions for different purposes.

Using your research problem to develop your research question

Note that while most research questions can be answered with various types of research , the way you frame your question should help determine your choices.


Research questions anchor your whole project, so it’s important to spend some time refining them. The criteria below can help you evaluate the strength of your research question.

  • Focused and researchable
  • Feasible and specific
  • Complex and arguable
  • Relevant and original

The way you present your research problem in your introduction varies depending on the nature of your research paper . A research paper that presents a sustained argument will usually encapsulate this argument in a thesis statement .

A research paper designed to present the results of empirical research tends to present a research question that it seeks to answer. It may also include a hypothesis – a prediction that will be confirmed or disproved by your research.

As you cannot possibly read every source related to your topic, it’s important to evaluate sources to assess their relevance. Use preliminary evaluation to determine whether a source is worth examining in more depth.

This involves:

  • Reading abstracts , prefaces, introductions , and conclusions
  • Looking at the table of contents to determine the scope of the work
  • Consulting the index for key terms or the names of important scholars

An essay isn’t just a loose collection of facts and ideas. Instead, it should be centered on an overarching argument (summarised in your thesis statement ) that every part of the essay relates to.

The way you structure your essay is crucial to presenting your argument coherently. A well-structured essay helps your reader follow the logic of your ideas and understand your overall point.

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘ x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.


McCombes, S. (2023, December 12). Writing Strong Research Questions | Criteria & Examples. Scribbr. Retrieved 15 April 2024, from https://www.scribbr.co.uk/the-research-process/research-question/



Education Scholarship in Healthcare, pp. 41–50

Designing a Research Question

  • Ahmed Ibrahim
  • Camille L. Bryant
  • First Online: 29 November 2023


This chapter discusses (1) the important role of research questions for descriptive, predictive, and causal studies across the three research paradigms (i.e., quantitative, qualitative, and mixed methods); (2) characteristics of quality research questions; and (3) three frameworks to support the development of research questions and their dissemination within scholarly work. For the latter, a description of the Population/Participants, Intervention/Independent variable, Comparison, and Outcomes (PICO) framework for quantitative research, as well as variations depending on the type of research, is provided. Second, we discuss the Participants, central Phenomenon, Time, and Space (PPhTS) framework for qualitative research. The combination of these frameworks is discussed for mixed-methods research. Further, templates and examples are provided to support the novice health scholar in developing research questions for applied and theoretical studies. Finally, we discuss the Create a Research Space (CARS) model for introducing research questions as part of a research study, to demonstrate how scholars can apply their knowledge when disseminating research.



Author information

Authors and Affiliations

Johns Hopkins University School of Education, Baltimore, MD, USA

Ahmed Ibrahim & Camille L. Bryant


Corresponding author

Correspondence to Ahmed Ibrahim .

Editor information

Editors and Affiliations

Johns Hopkins University School of Medicine, Baltimore, MD, USA

April S. Fitzgerald

Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA

Gundula Bosch


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter.

Ibrahim, A., Bryant, C.L. (2023). Designing a Research Question. In: Fitzgerald, A.S., Bosch, G. (eds) Education Scholarship in Healthcare. Springer, Cham. https://doi.org/10.1007/978-3-031-38534-6_4


DOI : https://doi.org/10.1007/978-3-031-38534-6_4

Published : 29 November 2023

Publisher Name : Springer, Cham

Print ISBN : 978-3-031-38533-9

Online ISBN : 978-3-031-38534-6




NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Davidson D, Ellis Paine A, Glasby J, et al. Analysis of the profile, characteristics, patient experience and community value of community hospitals: a multimethod study. Southampton (UK): NIHR Journals Library; 2019 Jan. (Health Services and Delivery Research, No. 7.1.)


Chapter 2: Research objectives, questions and methodology

In the light of the unfolding policy context and gaps within the existing literature outlined in Chapter 1 , and informed by conversations with key stakeholders (see Patient and public involvement ), this study aimed to provide a comprehensive analysis of the profile, characteristics, patient and carer experience and community engagement and value of community hospitals in contrasting local contexts. The specific objectives were to:

  • construct a national database and develop a typology of community hospitals
  • explore and understand the nature and extent of patients’ and carers’ experiences of community hospital care and services
  • investigate the value of the interdependent relationship between community hospitals and their communities through in-depth case studies of community value (qualitative study) and analysis of Charity Commission data (quantitative study).

In meeting these aims and objectives, the study addressed three overarching research questions (each with an associated set of more specific subquestions as summarised in Table 1 ):

TABLE 1

Research questions and objectives

  • What is a community hospital? In addressing this question, we drew on existing definitions and conceptualisations of ‘community hospitals’ as outlined in Chapter 1 , Research on community hospitals . Although our emphasis here was primarily empirical and descriptive, we were nevertheless guided by, and sought to contribute to, theoretical debates on definitions of community hospitals and their place within wider health and care systems, drawing on concepts of rural health care, chronic disease and complex care burden, integrated care and clinical leadership.
  • What are patients’ (and carers’) experiences of community hospitals? This element of the study was designed to contribute to the conceptualisation of the distinctive elements of community hospitals as understood through the ‘lived experiences’ of patients, rather than just satisfaction ratings. Here, we were influenced by prior analysis of the functional, technical and relational components of patient experience (e.g. environment and facilities, delivery of care, staff) alongside a more theoretical interest in the interpersonal, psychological and social dimensions of patient experience. Very early on in our study, through conversations with patient and public involvement (PPI) stakeholders, we recognised the importance of exploring and understanding the experience not only of patients but also of family carers, and hence we extended our initial question to include both patients’ and carers’ experiences.
  • What does the community do for its community hospital, and what does the community hospital do for its community? In addressing this question, we drew on notions of voluntarism and participation and brought together thinking from the separate bodies of literature on volunteering, philanthropy and co-production. This led us to question not just the level of voluntary support for community hospitals but also the different forms it took, how this varies between and within communities, how it is encouraged, organised and managed, and what difference it makes (outcomes). We also drew on notions of social value, including existing typologies, that encouraged us to question different forms of value (e.g. economic, social, human, symbolic) and different stakeholder groups (e.g. staff, patients, communities).

Given the diversity of the questions, we do not set out to provide an overriding hypothesis or unified theoretical framework for the study as a whole. Instead, these concepts, frameworks and debates served as ‘sensitising categories’, shaping our approach to study design as well as data collection and analysis. 71 We return to these in Chapter 8 and augment them with new concepts that emerged from our analysis.

In addressing these diverse questions, we adopted a multimethod approach with a convergent design. Quantitative methods were employed to provide breadth of understanding relating to the questions concerning ‘what’, ‘where’ and ‘how much’, whereas qualitative methods provided depth of understanding, particularly in relation to questions of ‘how’, ‘why’ and ‘to what effect’.

The research was conducted in three distinct (although temporally overlapping) phases, each with a number of different associated elements and research methods: (1) mapping (database construction and analysis through data set reconciliation and verification), (2) qualitative case studies (semistructured interviews, discovery interviews, focus groups) and (3) quantitative analysis of Charity Commission data. Table 1 summarises the study objectives, questions and research methods. Each of the three phases of research is discussed in turn through the following sections of this chapter, before the final sections discuss data integration, PPI and ethics.

  • Phase 1: mapping and profiling community hospitals

Phase 1 of the research involved a national mapping exercise to address the first study question ‘what is a community hospital?’. It aimed to map the number and location of all hospitals in England to then provide a profile and definition of community hospitals. A database of characteristics would enable the profiling of community hospitals, inform a typology and support a sampling strategy for subsequent case studies. Data were collected from all four UK countries but, in accordance with the brief of the study, this report focuses on England. Reference is made to Scotland’s data as they were important in developing the methodology. The structure of the mapping comprised five elements:

  • literature review – constructing a working definition (see Chapter 1 )
  • data set reconciliation – building a new database from multiple data sets
  • database analysis – developing an initial classification of community hospitals with beds
  • rapid telephone enquiry – refining the classification
  • verification – checking and refining the database through internet searches.

The flow of activities is depicted in Figure 1 .

Figure 1: Structure of the national mapping exercise.

Literature review: constructing a working definition

We developed a working definition of a community hospital as drawn from the literature (and as outlined in Chapter 1 ):

  • A hospital with < 100 beds serving a local population of up to 100,000 and providing direct access to GPs and local community staff.
  • Typically GP led, or nurse led with medical support from local GPs.
  • Services provided are likely to include inpatient care for older people, rehabilitation and maternity services, outpatient clinics and day care as well as minor injury and illness units, diagnostics and day surgery. The hospital may also be a base for the provision of outreach services by multidisciplinary teams (MDTs).
  • Will not have a 24-hour A&E nor provide complex surgery. In addition, a specialist hospital (e.g. a children’s hospital, a hospice or a specialist mental health or learning disability hospital) would not be classified as a community hospital.
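The working definition above can be read as a simple rule-based check. The sketch below encodes it as an illustrative Python function; the parameter names and the exclusion logic are assumptions drawn from the wording of the definition, not from the study's actual database schema.

```python
# Illustrative sketch only: encoding the working definition of a community
# hospital as a rule-based check. Field names are hypothetical; thresholds
# (< 100 beds, population up to 100,000) follow the definition in the text.

def is_community_hospital(beds, population_served, has_24h_ae,
                          does_complex_surgery, is_specialist):
    """Return True if a site meets the working definition sketched above."""
    if is_specialist:                      # e.g. children's hospital, hospice
        return False
    if has_24h_ae or does_complex_surgery: # excluded by the definition
        return False
    return beds < 100 and population_served <= 100_000

# A 30-bed GP-led hospital serving 45,000 people, with no 24-hour A&E:
print(is_community_hospital(30, 45_000, False, False, False))  # True
```

In practice the study refined such criteria iteratively (see the telephone enquiry and verification steps below), so a fixed rule like this would only be a starting point.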

The initial enquiry was framed around a ‘classic’ community hospital. The term was drawn directly from the Community Hospital Association 2008 classification, 72 describing classic community hospitals as ‘local community hospitals with inpatient facilities’ (i.e. with beds) and as distinct from community care resource centres (without beds), community care homes (integrated health and social care campus) or rehabilitation units. Although the term ‘classic’ was initially helpful in setting the boundaries of the study, it presented ongoing problems, such as whether it described all community hospitals with beds or a subset within that. Throughout the study, therefore, we have adopted the term ‘community hospital’ and omitted the adjective ‘classic’. Our focus, however, has remained on community hospitals with beds.

Data reconciliation: building a new database from multiple data sets

There was no up-to-date comprehensive database of community hospitals in England. The NHS Benchmarking Network [URL: www.nhsbenchmarking.nhs.uk (accessed 8 October 2018)] membership database was not comprehensive and could not be used to populate our hospital-level database because the data were anonymised. For this reason, one of our first tasks was to compile a new database, by bringing together existing health-care data sets, each of which provided different fields of information needed to test our working definition and to map and profile community hospitals.

Two types of data sets were collected. Centrally available data sets formed the starting point for the mapping study, providing codified data (see Appendix 1 ). As none of these centrally available data sets provided a comprehensive picture, it was necessary to supplement them through extensive internet searching and by talking to people in the field, as well as drawing on the expertise of research team members.

The base year for the major data sets was 2012/13. Data were difficult to access, were not comprehensive and were spread across multiple sources. Four data sets were used:

  • Community Hospital Association databases of community hospitals (one from 2008 and another from 2013)
  • Patient-Led Assessments of the Care Environment (PLACE) 2013 [replacing the former Patient Environment Action Team programme]
  • Estates database – Estates Returns Information Collection (ERIC) 2012
  • NHS Digital activity by site of treatment 2012/13.
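The reconciliation step amounts to merging several partial data sets into a single record per hospital site. The following is a minimal sketch of that idea, assuming a shared site code as the join key; the data set contents and field names are invented placeholders, not the study's actual data.

```python
# Hedged sketch of data set reconciliation: one merged record per site code.
# Site codes, names and fields below are hypothetical examples.

from collections import defaultdict

def reconcile(*datasets):
    """Merge records from multiple data sets, keyed on a shared site code."""
    merged = defaultdict(dict)
    for dataset in datasets:
        for record in dataset:
            site = record["site_code"]
            merged[site].update(record)  # later sources fill in further fields
    return dict(merged)

cha   = [{"site_code": "RXX01", "name": "Anytown Community Hospital"}]
place = [{"site_code": "RXX01", "environment_score": 92}]
eric  = [{"site_code": "RXX01", "beds": 24}]

db = reconcile(cha, place, eric)
print(db["RXX01"]["beds"])  # 24
```

The real exercise was far messier, of course: as the text notes, the ‘location of treatment’ code was inconsistently used, so merged records still required telephone and internet verification.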

Barriers to obtaining site and activity data included (1) specific difficulties in the period 2012/13 when primary care trusts (PCTs) were being disbanded and clinical commissioning groups (CCGs) were being established (with effect from 31 March 2013) and (2) processes and caution in NHS Digital associated with releasing patient-sensitive data (even though we had not requested patient-based data). Quality problems were associated with the ‘location of treatment’ code, which was central to our enquiry identifying community hospitals but did not appear to be well used in England, leading to examples of missing data and inconsistent labels (described under reconciliation and duplication). The code also lacked stability as it changed with each new NHS reconfiguration in England.

The core data set for England, supplied by NHS Digital, was a list of all hospitals in England, based on ‘site of treatment code.’ Figure 2 shows the relationship between national data sets.

Figure 2: The relationship between the four England data sets. CHA, Community Hospitals Association.

The new database, populated through our reconciliation of these various data sets, provided a census of community hospitals at 2012/13, which was updated to August 2015 (e.g. when a hospital closed and then redeveloped, formed a new hospital replacing two old community hospitals, closed beds on a temporary basis and changed its name).

Database analysis: developing an initial classification of community hospitals with beds

Although the focus of this report is on England, it is important to mention our work on mapping community hospitals in Scotland, as this was instrumental in developing our approach to classifying data for England. Data sets on community hospitals in Scotland [Information Services Division (ISD) and government community hospital data sets: community hospital, general hospital, long-stay/psychiatric hospital, small long-stay hospital] were both more accessible and more comprehensive, lending themselves to early analysis (see Appendix 2 ).

An initial classification of hospitals in England was developed, informed by categories set out by Estates (community hospital, general acute hospital, long-stay hospital, multiservice hospital, short-term non-acute hospital, specialist hospital, support facility, treatment centre) and PLACE (acute/specialist, community, mental health only, mixed acute and mental health/mental health, treatment centre). It was combined with specialty classifications based mainly on NHS Digital inpatient activity data and developed further through analysis of Community Hospitals Association (CHA) data and discussions within the study team ( Table 2 ).

TABLE 2

Classification of all hospitals in England

Rapid telephone enquiry: refining the classification

Analysis of the Scotland data suggested that the code ‘GP specialty’ was a defining feature of community hospitals, but early analysis of the England data showed that this was less transferable. If we relied on GP specialty coding alone, many known community hospitals would be excluded from our database: not all community hospital inpatient beds in England were coded to GPs.

A short piece of empirical data collection was undertaken to understand the link between the specialty codes and practice and to test the working definition (based on the literature and on the Scottish data) that community hospitals were predominantly GP led. A telephone questionnaire was designed by the study team (see Appendix 3 ) and piloted through the CHA.

Seven hospitals from five specialty category codes (≥ 80% GP, < 80% GP and mixed specialties, general medicine, geriatric medicine, geriatric mixed specialties) were randomly selected. The test sample of 35 was reduced by four as a result of closure or conversion to nursing homes. The research team called the hospitals to gain contact details of the matron or ward manager ( n  = 20; the small sample size highlighting the difficulty of identifying leadership, especially when the community hospital is represented by a single ward), e-mailed the questionnaire, conducted telephone interviews with staff to complete the questionnaire (taking 10–20 minutes each), transcribed notes and returned the completed questionnaire to respondents ( n  = 12). Analysis of these telephone interviews gave us confidence in the specialty coding, while also confirming the need to be more expansive in our working definitions and categorisations.
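The sampling design here is a stratified random draw: seven hospitals from each of the five specialty category codes, giving the test sample of 35. A minimal sketch of that step, under the assumption that hospitals are grouped by code in a dictionary (the code labels and hospital lists below are placeholders):

```python
# Illustrative sketch of the stratified random draw for the telephone enquiry:
# seven hospitals per specialty category code. A fixed seed is used here only
# to make the example reproducible; the study's actual procedure is not stated.

import random

def stratified_sample(hospitals_by_code, per_stratum=7, seed=42):
    rng = random.Random(seed)
    sample = []
    for code, hospitals in hospitals_by_code.items():
        sample.extend(rng.sample(hospitals, min(per_stratum, len(hospitals))))
    return sample

# Five hypothetical strata of ten hospitals each -> a sample of 35:
hospitals_by_code = {f"code{i}": [f"h{i}_{j}" for j in range(10)]
                     for i in range(5)}
sample = stratified_sample(hospitals_by_code)
print(len(sample))  # 35
```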

Verification: checking and refining the database through internet searches

The mapping enquiry was finalised through five iterations of searching and checking. The CHA consulted its database and membership list (from both 2008 and 2013). A full internet search took place at two points, in February 2015 and August 2015, taking account of hospital closures and changes of function up to 2014/15, with further validation and amendments up to August 2015. By the end of the study, the 2012/13 data set, based on the NHS Digital Spine using ‘site of treatment code’, had been validated through a check of every potential community hospital. A total of 366 sites were examined through web-based and telephone enquiries, including 60 that were not present on the NHS Digital database (see Appendix 4 for the list of community hospitals with beds).

  • Phase 2: case studies

In order to explore patient and carer experience of community hospitals and aspects of community engagement and value, we undertook qualitative case studies. Although the initial aim of the case studies was to address the second and third research questions, the findings also enabled new insights into the first study question of ‘what is a community hospital’.

The decision to adopt a comparative case study design 73 across multiple community hospital sites was influenced by three factors. First, given the gaps in the literature highlighted in Chapter 1 , it would be useful to uncover different aspects of the patient experience, community engagement and value of community hospitals and enable the identification and analysis of common themes (looking for similarities, differences and patterns) both within and across cases. 74 – 76 Second, it provides a suitable way of ‘exemplifying’ sites, 77 given the variety of ownership models and locations. Third, it is useful in enabling an examination of ‘complex social phenomena’, 78 and, in particular, the social, functional, interpersonal and psychological factors that shape patient experiences, as well as those that influence community engagement and value. Below, we summarise the approach to case study selection for work packages 2 and 3, before moving on to discuss the research elements used.

Selection of case study sites

In selecting case study sites, we adopted a ‘realist’ approach to sampling, 79 moving back and forth between categories identified from the literature as being important for patient experience and community value and our learning about the characteristics of community hospitals identified from the mapping exercise. In order to reflect the diversity of community hospitals (highlighted in the literature and mapping), we selected cases in contrasting locations with different numbers of beds, ranges of services, ownership/provision and levels of voluntary income and deprivation.

To allow for a particular focus on variations in voluntary support for community hospitals, hinted at through the national mapping exercise and identified as a particular gap in the existing literature, we selected pairs of hospitals across four Clinical Commissioning Group (CCG) areas with contrasting levels of voluntary income but similar levels of deprivation. This would allow for a good comparison within and between cases (e.g. why two community hospitals within one CCG area, with similar levels of deprivation, have contrasting levels of voluntary support, given that previous research has tended to suggest a strong negative correlation between deprivation and voluntary activity).

Using these criteria, we selected eight case studies of hospitals of different sizes, ages and service profiles located across England (although mostly concentrated in the south, reflecting the national pattern of community hospital development; see Figure 7 ) in areas of contrasting levels of deprivation. Six of the buildings were owned by, and their main inpatient service was provided by, the NHS. Two were owned by the NHS but their main inpatient services were provided by a community interest company (CIC). We added a ninth case study, owned by a charity, to increase diversity in terms of ownership/provision (as there were very few examples of independently owned community hospitals, it was not possible to identify a matched pair). Table 3 provides a summary of the nine case studies selected, according to the data that were available from the mapping exercise. Fuller qualitative descriptions are provided in Chapter 4 and Appendix 5 .
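The paired-case logic described above can be sketched as a small matching routine: within each CCG area, find the pair of hospitals with the greatest contrast in voluntary income whose deprivation scores are nevertheless similar. This is an illustration only; the deprivation-gap threshold, field names and candidate data are assumptions, not the study's actual selection procedure.

```python
# Hedged sketch of paired-case selection within one CCG area: maximise the
# contrast in voluntary income subject to similar deprivation. The threshold
# and field names are hypothetical.

from itertools import combinations

def select_pair(hospitals, max_deprivation_gap=5.0):
    """Return the pair with the largest voluntary-income contrast, or None."""
    best, best_contrast = None, -1.0
    for a, b in combinations(hospitals, 2):
        if abs(a["deprivation"] - b["deprivation"]) > max_deprivation_gap:
            continue  # deprivation levels too dissimilar to pair
        contrast = abs(a["voluntary_income"] - b["voluntary_income"])
        if contrast > best_contrast:
            best, best_contrast = (a, b), contrast
    return best

candidates = [
    {"name": "A", "deprivation": 20.0, "voluntary_income": 500},
    {"name": "B", "deprivation": 21.0, "voluntary_income": 90_000},
    {"name": "C", "deprivation": 50.0, "voluntary_income": 10},
]
pair = select_pair(candidates)  # pairs A and B; C's deprivation is too far off
```

In the study itself, of course, selection also balanced qualitative criteria (size, services, ownership, location), which a purely numerical routine like this cannot capture.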

TABLE 3

Profile of selected case studies

Case study data collection

The case studies involved seven research elements, as summarised in Table 4 . All elements were conducted over five visits to each case study. Across all case study sites and research methods, 241 people participated in the study through interviews and 130 people participated through 22 focus groups; a small number of people who participated in individual interviews also participated in focus groups (see Appendix 6 for full details).

TABLE 4

Research elements and focus

Scoping visits were made to each of the case studies in order to build relationships with key stakeholders (primarily matrons and chairpersons of Leagues of Friends), gather background information on the hospitals and local communities, identify potential study participants and collect key documents and data. Documents selected included hospital histories, annual reports, local service information (when available) and media coverage. Reviewing these helped to provide a basic understanding of the cases prior to the main fieldwork visits and added to our profiling of each of the case study hospitals.

We also aimed to gather hospital-level data from patient-reported experience measures (PREMs) 80 and the revised Friends and Family Test (FFT). 81 However, none of the case study community hospitals collected PREMs data, as this had only recently been required of community providers. Although all sites collected FFT scores, we were able to access data for only seven of the nine case studies because, in the remaining two cases, the trust compiled data at trust rather than hospital level and it was not possible to disaggregate the data. Furthermore, the FFT data were not strictly comparable as some scores covered inpatient care only, whereas others covered both inpatient and outpatient care.

Local reference group

We established a local reference group (LRG) in each of our case studies to bring local people together to steer, support and inform the research at the local level. These LRGs comprised key members of hospital staff, the League of Friends, volunteers and local voluntary and community groups, some of whom had also been patients and/or carers. Their role was to help build a picture of the local context to inform subsequent data collection elements, build support for the study within the local community and reflect on emerging findings and their implications for local practice. There were two LRG meetings per case study during the local fieldwork stage: one at the start of the fieldwork period (which focused on mapping the community hospital services and community links) and one at the end (which focused on discussing the emerging findings and their potential implications). The first LRG meeting for CH3 and CH4 was joint (for convenience) but the second meeting was separate. Following completion of the fieldwork and analysis, each LRG received a report of the findings relating to their specific case study (i.e. alongside this national report, we produced nine local reports).

Semistructured interviews with staff, volunteers and community representatives

We conducted semistructured interviews with staff ( n  = 89 staff across the nine cases), community stakeholders ( n  = 20) and volunteers ( n  = 35). Although most of the interviews were with single respondents, some were with two or, very occasionally, three people (depending on respondent preferences). Respondents were selected through purposive sampling 79 guided by the scoping visits, the initial LRG and snowballing. Each of the interviews explored the profile of the hospital and the local context, perceptions of patient and carer experience, and community engagement and value. The emphasis placed on the different sets of questions, however, varied between the groups of respondents (e.g. more time was spent on community engagement and value within the community stakeholder interviews, although we still asked questions relating to hospital profile and perceptions of patient/carer experience). Interviews were nearly all conducted face to face, although a small number were conducted via telephone, at respondent preference. Interviews with staff, volunteers and stakeholders lasted, on average, 60 minutes. All were digitally recorded and later transcribed verbatim.

Discovery interviews with patients

Rather than focusing on satisfaction levels, or other quantifiable measures of experience, the study was concerned with exploring the lived experience of being a patient using community hospital services. Lessons from previous studies show that gathering experiences in the form of stories enhances their power and richness, 36 so we selected an experience-centred interview method 82 that drew on the principles of narrative approaches 83 and, particularly, discovery interviewing. 84 Narrative approaches invite respondents to tell their stories uninterrupted, rather than respond to predetermined questions, giving control to the ‘storyteller’. This approach can elicit richer and more complete accounts than other methods 85 , 86 because reflection enables respondents to contextualise, and connect to, different aspects of their experiences. Discovery interviewing helps to capture patients’ experiences of health care when there may be pathways or clinical interventions central to patient experience. 87 As such, after a general opening question, our interviews focused around a very open question inviting respondents to tell their story of being a patient at the community hospital. We followed this by asking respondents to consider a visual representation we had developed of factors found in previous research to have shaped patient experience, to prompt people’s memories and thoughts (see Appendix 7 for an example of the discovery interview).

Our aim was to interview six patients from each case study. Our final sample across all sites was 60 patients. The small sample size reflected the in-depth nature of the interviews. We sought, as far as possible, to select patients with a mix of demographics (particularly in terms of gender), care pathways (particularly in terms of step up/step down) and services used (inpatient/outpatient). Potential participants were identified by the hospital matron and/or lead clinician and/or service leads. Each was written to by the hospital with a request to participate in the study and was sent an information sheet and an opt-in consent form. Patients who were willing to participate sent their replies directly to the study team. Written consent was provided prior to the commencement of the interview. In line with the Mental Capacity Act 2005 Code of Practice, 88 we made provision for the appointment of consultees when potential respondents lacked the capacity to consent to participation in the study, although this was not utilised.

Although many of our respondents were current inpatients, we also spoke to some inpatients who had been recently discharged and to outpatients from a range of different clinics. Outpatients who agreed to participate tended to be those using services several times a week (e.g. renal patients) or over a longer period of time (e.g. those with chronic conditions), rather than one-off users. Interviews with patients lasted between 30 and 90 minutes, were digitally recorded (in all cases except for two because of respondent preference/requirements) and transcribed verbatim. At the end of the interviews, we asked respondents to complete a short pro forma to gather basic demographic and service information for analysis purposes.

Semistructured interviews with carers

Semistructured interviews were conducted with carers in order to explore their experience of using the community hospital as a carer of an inpatient. Our aim was to interview three carers per case study; in total we spoke to 28 carers across the nine sites. Carers were either related to, or close friends of, patients (either current or recent) at the hospital. In most cases, we interviewed carers of patients who had also been interviewed, but in some cases carers were not directly linked to patients involved in the study (indeed, some carers were reflecting on the experience of caring for a patient who had recently died).

The main focus of the interviews was on the experience of being a carer of someone at the hospital, with our initial question reflecting the narrative approach adopted for patients by asking respondents to tell us their story of using the hospital. In addition, as the respondents were typically local residents, we also asked questions about their perceptions of patient experience, about local support for, and engagement with, the hospital and of value. Interviews with carers lasted, on average, 60 minutes. All were digitally recorded and later transcribed verbatim.

Focus groups

We conducted focus groups with members of MDTs, volunteers and community stakeholders. Although we had anticipated conducting each of the three focus groups in each of the case study sites, this was not always possible owing to practical reasons; for example, in some of the case study sites there were very few volunteers, making it difficult to organise a focus group. We ran focus groups with MDTs in eight of the nine case studies, involving a total of 43 respondents; with volunteers in six of the case studies, involving a total of 33 respondents; and with community stakeholders in eight of the cases, involving 54 respondents. Individual focus group respondents were selected through purposive sampling. We worked with LRGs and other key contacts to identify potential participants, each of whom was written to and asked to participate.

The focus groups complemented the interviews, enabling the inclusion of a wider range of perspectives in the study and, in particular, allowing us to observe the emergence of discussion, consensus and dissonance among groups of participants. They lasted, on average, 90 minutes and were digitally recorded and transcribed in full.

Telephone interviews with managers and commissioners

We conducted telephone interviews to explore the views of senior managers of provider organisations and commissioners of community hospitals. The nine case studies were based in five CCG areas where the main inpatient services were provided by four NHS trusts and one integrated health and social care CIC. Our aim was to interview one respondent from each of the providers and CCGs. In total, we spoke to five provider and four CCG representatives. The interviews explored the strategic context for the community hospitals involved in the study, alongside the perceptions of these senior stakeholders of patient experience and the value of community hospitals. The interviews lasted, on average, 60 minutes and were digitally recorded and later transcribed in full.

Qualitative case study data analysis

We adopted a thematic approach to qualitative data analysis, aided by the use of NVivo 11 (QSR International, Warrington, UK) for data management and exploration. Our approach was both inductive, with themes emerging from the data, and deductive, framed by our research questions and ongoing reading of the literature. Initial themes and codes were developed after three members of the team (AEP, DD and NLM), who collectively had been responsible for the case study data collection, reviewed the transcripts. The emerging themes, codes and associated findings were discussed at wider study team meetings, at the LRG meetings for individual case studies and at annual learning events that brought together participants from across the case studies. A refined coding frame was then tested by the same three members of the research team each coding a sample of transcripts; this led to a further refinement of the codes, while also helping to ensure that each of the researchers was adopting a similar approach.

In this report, we focus in particular on across-case comparisons, highlighting themes that emerged across the case studies, emphasising key points of similarity and difference between the cases, as relevant. In addition, we have produced individual reports for each of the local case study sites that have shared findings from our within-case analysis, as relevant for each individual hospital. Comparative analysis, including of the paired cases, will be developed further in future research articles, in which a focus on more specific aspects of the study will allow more space for presentation of such work.

Throughout the analysis, unique identifiers were used for the transcripts/respondents to help ensure confidentiality and anonymity. Sites were assigned a number (e.g. CH1) and respondents given a letter: patient (P), family carer (CA), staff (S), volunteer (V), community stakeholder (CY) and senior manager or commissioner (T), with sequential numbering, date of interview and initials of researcher added to provide an audit trail. This basic coding method is used throughout the report (e.g. CH1, S01 represents the first staff member to be interviewed at the first community hospital case study site). It is worth noting, however, that, although respondents were identified by a key characteristic (e.g. patient or staff) and their transcripts labelled as such, the boundaries between these categories were not discrete: many community stakeholders, for example, had also been patients or carers, and many staff were also members of the local community.
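The labelling convention above (site number, respondent-type letter, sequential numbering) can be sketched as a small helper. This is a minimal illustration of the scheme as described; the study's actual audit-trail fields (interview date, researcher initials) are omitted here.

```python
# Sketch of the transcript-labelling convention: e.g. "CH1, S01" for the
# first staff member interviewed at case study site 1. Role letters follow
# the scheme given in the text.

from collections import Counter

ROLE_CODES = {"patient": "P", "carer": "CA", "staff": "S",
              "volunteer": "V", "community": "CY", "manager": "T"}

class TranscriptLabeller:
    def __init__(self):
        self.counters = Counter()  # sequential count per (site, role code)

    def label(self, site, role):
        code = ROLE_CODES[role]
        self.counters[(site, code)] += 1
        return f"CH{site}, {code}{self.counters[(site, code)]:02d}"

lab = TranscriptLabeller()
print(lab.label(1, "staff"))  # CH1, S01
print(lab.label(1, "staff"))  # CH1, S02
```

As the text notes, the categories themselves were not discrete (a community stakeholder might also have been a patient), so a label records the capacity in which someone was interviewed, not everything they were.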

  • Phase 3: quantitative analysis of Charity Commission data

Collating data on charitable finance and volunteering support

The third phase of our research involved the quantitative analysis of data from the Charity Commission on voluntary income and volunteering for community hospitals across England. The aim of this activity was to examine charitable financial and volunteering support for community hospitals by investigating:

  • variations in the likelihood that hospitals receive support through a formal organisational structure such as a League of Friends, and if so, variations in its scale (in financial terms) between communities
  • uses of the funds raised (e.g. capital development, equipment, patient amenities).

We captured financial and volunteering data for registered charities from the Charity Commission (the Commission). The Commission holds details of organisations that have been recognised as charitable in law and that hold most of their assets in England, or have all or the majority of their trustees normally resident in England, or are companies incorporated in England. The data are described more fully in Appendix 9 .

Subject to a small number of exceptions, all charities in England with incomes of > £5000 must register with the Commission and submit financial statements consisting of trustees’ annual reports (returns) and annual accounts. The accounts of those charities whose income or expenditure exceeds a threshold of £25,000 are made available on the Commission’s website. 89 Charities that have income and expenditure of < £5000 a year have (since 2009) been exempted from the need to register. We identified 274 hospitals in England that satisfied the inclusion criteria for this research project ( Figure 3 ). We used the Charity Commission’s data to identify charities that support each of these hospitals, matching by name or through examining lists of charities registered in the locality where the hospital is based.
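As a rough illustration of the name-matching step, a fuzzy string comparison can flag candidate charities for a given hospital. This sketch uses Python's `difflib` with an arbitrary similarity threshold, and the hospital/charity names are invented; the study's actual matching combined name checks with lists of charities registered in each locality:

```python
# Hypothetical sketch of matching charities to hospitals by name.
# difflib's ratio() is used for illustration; threshold is arbitrary.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Case-insensitive similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_charity(hospital: str, charities: list[str], threshold: float = 0.5):
    """Return the best-matching charity name, or None if below threshold."""
    best = max(charities, key=lambda c: name_similarity(hospital, c))
    return best if name_similarity(hospital, best) >= threshold else None
```

For example, `match_charity("Swanage Community Hospital", ["Friends of Swanage Hospital", "Dorset Wildlife Trust"])` selects the Friends charity; candidates scoring below the threshold would still need the locality-list check described above.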

Community hospital and charities sampling frame.

We also directly approached eight non-registered charities (usually those with an income of < £5000 a year) that were known to have been established to support specific community hospitals, but we received no usable data from them. Four hospitals in our data set were registered as charities themselves but were excluded from the analysis because they are exceptional cases of charitable action.

We found that 247 of these charities were registered in their own right (labelled ‘individual associated charities’ in Figure 3 ). The remainder were what is known as ‘linked’ charities, that is, entities associated with larger charitable organisations serving a NHS trust comprising several institutions. These ‘linked’ charities were excluded because it was not possible to disaggregate the support they provide to individual components of the trust. Financial information was available for the period from 1995 to 2014 (only small numbers of observations were available for years prior to that because digitisation of the register began only in the early 1990s).

Measurements

Financial information for at least 1 year between 1995 and 2014 was available for 245 charities in England, and this information formed the final sample for this part of the study. The number of non-zero financial reports to the Commission in each year ranged from 181 (1996) to 226 (2007). The data, covering the period to 2014, were the latest available at the time of analysis (2016). See Appendix 9 for full details of available charity reports by year. All financial information in this paper is presented at constant 2014 prices.

Using the Charity Commission website, we obtained copies of these accounts for those selected charities whose expenditure or income exceeded £25,000 in any one year. This gave data covering 358 separate financial years; the number of accounts available is shown in Table 5 .

TABLE 5

Accounts for larger charities (income of > £25,000)

We focused on the period from 2008 to 2013, when between 41 and 91 charities of interest generated at least one such financial return. Numbers vary because an individual charity may or may not exceed the £25,000 threshold at which its accounts are presented via the Charity Commission’s website, depending on fluctuations in its finances.

Charity accounts aggregate income and expenditure figures into a small number of general categories. These provide relatively little detail on income and expenditure and may even aggregate quite different sources of expenditure within the same funding stream. As such, to probe income sources and the application of expenditure in more detail, data were captured from the notes to the accounts of these charities. The extensive income data that were generated (21,733 items) were categorised to provide useful insights into sources of income. Classifying the expenditure of charities was not undertaken because of the complexity of the data and the limits to the usefulness of such an exercise. Appendix 9 provides further details of the extraction, classification and analysis of income and expenditure data.
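A simple keyword lookup gives the flavour of how free-text income items from the notes to the accounts can be grouped into categories. The categories and keywords below are hypothetical placeholders, not the study's actual coding frame:

```python
# Illustrative keyword-based categorisation of income items. The
# category names and keywords are invented for this sketch.

INCOME_KEYWORDS = {
    "donations": ["donation", "gift", "collection"],
    "legacies": ["legacy", "bequest"],
    "fundraising events": ["fete", "coffee morning", "raffle", "event"],
    "investment income": ["interest", "dividend", "investment"],
}

def categorise(description: str) -> str:
    """Return the first category whose keywords appear in the item text."""
    text = description.lower()
    for category, keywords in INCOME_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return "other"
```

In practice a scheme like this needs iterative refinement (and manual review of the "other" bucket) to cope with the scale and variety of the 21,733 items mentioned above.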

Contribution: number of volunteers and estimates of input

The Charity Commission guidelines 90 require charities to record their best estimates of the number of individual UK volunteers involved in the charity during the financial year, excluding trustees (see Appendix 9 ).

Before 2013, data on volunteer numbers were often sparse, but, since that date, efforts have been made to gather more detailed information. Approximately 73,000 charities had supplied between one and three non-zero returns of their volunteer counts in the three years between 2013 and 2015, including > 90% of our charities. We calculated the average number of volunteers for the period in question. To provide an upper-bound estimate, we also take the maximum value returned for each charity.

Volunteer hours were estimated using regular survey data (Home Office Citizenship Survey, 2001–10; Community Life survey, 2012 onwards). We take the average number of hours per week reported by those who say they have given unpaid help to organisations during the previous year. This is approximately 2.2 hours. This is a minimum estimate and it may be that the actual numbers are larger than these survey data would imply. If we make the assumption that these are probably fairly regular volunteers, a higher figure of 3.05 hours per week is given if we take the average number of hours reported by those who say they volunteer either at least once a week or more frequently, or at least monthly but less frequently than once a week.

There are no studies that would tell us with any certainty whether or not volunteers in these kinds of organisations put in more or fewer hours than the volunteering population generally. We then multiply these two estimates of time inputs by the average and maximum volunteer numbers, respectively, to give the number of hours contributed by volunteers over the course of the year (assuming 46 weeks of volunteering a year). These can be converted to full-time equivalent numbers by dividing by 37.5 (hours per working week) and 46 (weeks per working year).

Opinions differ on the best method for calculating a cash equivalent for the value of volunteer labour. The lowest estimate uses the national minimum wage; alternatives include an estimate of the replacement cost (i.e. what it would cost the organisation to employ people to do the same tasks if they had to pay them), but this assumes knowledge of the tasks being undertaken. The national minimum wage for the period for which we have the most comprehensive volunteering data (2013–15) was £6.50 per hour. 91
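The arithmetic described above (hours per week × weeks per year × volunteer counts, with full-time-equivalent and cash-equivalent conversions) can be collected into a single worked sketch. The parameters are taken from the report; the function and the example charity are illustrative:

```python
# Worked sketch of the volunteer-input estimates. Parameter values
# (2.2 and 3.05 hours/week, 46 weeks/year, 37.5-hour week, £6.50/hour
# minimum wage) come from the report; the function is illustrative.

HOURS_PER_WEEK_LOW = 2.2    # survey average, all volunteers
HOURS_PER_WEEK_HIGH = 3.05  # regular (at least monthly) volunteers
WEEKS_PER_YEAR = 46
FTE_HOURS = 37.5 * 46       # full-time-equivalent hours per year
MIN_WAGE = 6.50             # GBP per hour, 2013-15

def volunteer_input(avg_volunteers: float, max_volunteers: float) -> dict:
    """Lower/upper bounds on annual hours, FTEs and cash value."""
    hours_low = avg_volunteers * HOURS_PER_WEEK_LOW * WEEKS_PER_YEAR
    hours_high = max_volunteers * HOURS_PER_WEEK_HIGH * WEEKS_PER_YEAR
    return {
        "hours": (hours_low, hours_high),
        "fte": (hours_low / FTE_HOURS, hours_high / FTE_HOURS),
        "value_gbp": (hours_low * MIN_WAGE, hours_high * MIN_WAGE),
    }
```

For a hypothetical charity averaging 20 volunteers (maximum 30 in any one return), this gives a lower bound of 2024 hours a year, valued at £13,156 at the minimum wage.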

Data convergence and integration

Although the quantitative (phases 1 and 3) and qualitative (phase 2) data were collected separately, they could nevertheless be considered ‘integrated’ because the different research elements were explicitly related to each other within a single study and in such a way ‘as to be mutually illuminating, thereby producing findings that are greater than the sum of the parts’. 92 Data triangulation, convergence and integration occurred in a number of different ways, at different stages of the research.

In phase 1 of the research, a revised definition and set of characteristics captured within the database was used to support development of a typology and informed the case study sampling for phase 2. For phase 3, the database informed the sample of charities selected for analysing voluntary income and volunteering data and providing additional data fields to be linked to the Charity Commission data.

Although the national quantitative data provided breadth to the study, these were limited and left questions unanswered. The local qualitative data brought depth to the question ‘what is a community hospital’, by helping to build a picture of the history, context and change over time. Qualitative interviews in work packages 2 and 3 were conducted concurrently, and triangulation of data between stakeholder, volunteer, staff, carer and patient interviews helped validate findings and strengthen our understanding of patient and carer experiences and community engagement and value.

In addition, the combination of researchers working on more than one work package, reflexive team meetings and the involvement of different organisations in the team [CHA, University of Birmingham and Crystal Blue Consulting (London, UK)] allowed for healthy dialogue, debate and analysis. Emerging findings from each phase of the research were, for example, shared through internal working papers and discussed regularly at whole project team meetings.

  • Patient and public involvement

Our commitment to PPI ensured that patients, carers and the public were involved in this study before and during its conduct. PPI involvement in the study design was facilitated by one of the researchers (HT), who first consulted with 10 PPI members of the Swanage Health Forum, representing the League of Friends; a GP practice Patient Participation Group; Swanage Carers; Partnership for Older People’s Programme; Wayfinders; the Senior Forum; the Health and Wellbeing Board; Cancare; a public Governor for Dorset Healthcare NHS Trust; and a retired GP. This group provided an endorsement of the study’s proposed focus and methodology.

At the national level, 13 board members of CHA (four GPs, six nurses, two managers and one League of Friends member) co-produced the initial research proposal. Two members then became part of the study steering group, which met regularly throughout the study, supported the development of research materials and supporting documentation, helped facilitate access to potential case studies, contributed to the local and national reports and reviewed several drafts. We also engaged with approximately 100 delegates at three CHA annual conferences (presentations and workshops focused on working with findings) that included not only practitioners but members of community hospital Leagues of Friends.

In addition, a cross-study steering group, chaired by Professor Sir Lewis Ritchie, University of Aberdeen, provided guidance across all three Health Services and Delivery Research community hospital studies, with representation from the CHA, Attend (National League of Friends) and the Patients Association, alongside the three study teams. The steering group met seven times over the period of this study, offering opportunities to share findings and explore experiences between the studies.

As described in Local reference group , at the local level we established LRGs within each of our case study sites to bring local people together (hospital staff, volunteers and community members, a number of whom were patients and/or carers) to steer, support and inform the case study research. To facilitate cross-case learning, we brought together representatives from each of the LRGs three times to share experiences, identify best practice and network. Event themes reflected each of the three research questions, and the days offered time for case study representatives to work together, share across sites, hear from national experts, contribute to the ongoing development of the study and reflect on emerging findings and their implications.

  • Ethics approval

Ethics approval was provided by the University of Birmingham, in line with the Department of Health and Social Care’s Research Governance Framework, for work package 1 (national mapping) and elements of work package 3 (quantitative charitable finance and volunteering support data). The university also provided sponsorship for the whole study. The qualitative case studies required full ethics review through the National Research Ethics Service as they involved interviews with patients and carers and interviews and focus groups with NHS staff, volunteers and community stakeholders. The Wales Research Ethics Committee 6 reviewed this research and provided a favourable ethics opinion (study reference number: 16/WA/0021).

Informed by key stakeholder engagement and a review of the policy context and existing literature, this study explored the profile, characteristics, patient and carer experience, community engagement and value of community hospitals in England through a multimethod approach. The research was conducted in three overlapping phases – mapping, case studies and Charity Commission data analysis – that, together, involved a range of qualitative and quantitative methods. Data for each phase were collected and analysed separately but iteratively, with emerging findings discussed regularly through a range of mechanisms, including whole project team meetings and internal working papers. We involved key national and local stakeholders throughout the study, from design, through to data collection and analysis, and reporting and dissemination.

Having framed the study (see Chapter 1 ) and described our research methodology (see Chapter 2 ), we now move on to share the findings. Chapters 3 – 7 describe the findings emerging from different elements of the study, and Chapter 8 brings those findings together and discusses them in relation to the wider literature and their significance for knowledge and practice.

Cite this page: Davidson D, Ellis Paine A, Glasby J, et al. Analysis of the profile, characteristics, patient experience and community value of community hospitals: a multimethod study. Southampton (UK): NIHR Journals Library; 2019 Jan. (Health Services and Delivery Research, No. 7.1.) Chapter 2, Research objectives, questions and methodology.

Research Methods – Types, Examples and Guide

Research Methods

Definition:

Research Methods refer to the techniques, procedures, and processes used by researchers to collect, analyze, and interpret data in order to answer research questions or test hypotheses. The methods used in research can vary depending on the research questions, the type of data being collected, and the research design.

Types of Research Methods

Types of Research Methods are as follows:

Qualitative Research Method

Qualitative research methods are used to collect and analyze non-numerical data. This type of research is useful when the objective is to explore the meaning of phenomena, understand the experiences of individuals, or gain insights into complex social processes. Qualitative research methods include interviews, focus groups, ethnography, and content analysis.

Quantitative Research Method

Quantitative research methods are used to collect and analyze numerical data. This type of research is useful when the objective is to test a hypothesis, determine cause-and-effect relationships, and measure the prevalence of certain phenomena. Quantitative research methods include surveys, experiments, and secondary data analysis.

Mixed Method Research

Mixed Method Research refers to the combination of both qualitative and quantitative research methods in a single study. This approach aims to overcome the limitations of each individual method and to provide a more comprehensive understanding of the research topic. This approach allows researchers to gather both quantitative data, which is often used to test hypotheses and make generalizations about a population, and qualitative data, which provides a more in-depth understanding of the experiences and perspectives of individuals.

Key Differences Between Research Methods

The following table shows the key differences between quantitative, qualitative and mixed research methods.

Examples of Research Methods

Examples of Research Methods are as follows:

Qualitative Research Example:

A researcher wants to study the experience of cancer patients during their treatment. They conduct in-depth interviews with patients to gather data on their emotional state, coping mechanisms, and support systems.

Quantitative Research Example:

A company wants to determine the effectiveness of a new advertisement campaign. They survey a large group of people, asking them to rate their awareness of the product and their likelihood of purchasing it.

Mixed Research Example:

A university wants to evaluate the effectiveness of a new teaching method in improving student performance. They collect both quantitative data (such as test scores) and qualitative data (such as feedback from students and teachers) to get a complete picture of the impact of the new method.

Applications of Research Methods

Research methods are used in various fields to investigate, analyze, and answer research questions. Here are some examples of how research methods are applied in different fields:

  • Psychology: Research methods are widely used in psychology to study human behavior, emotions, and mental processes. For example, researchers may use experiments, surveys, and observational studies to understand how people behave in different situations, how they respond to different stimuli, and how their brains process information.
  • Sociology: Sociologists use research methods to study social phenomena, such as social inequality, social change, and social relationships. Researchers may use surveys, interviews, and observational studies to collect data on social attitudes, beliefs, and behaviors.
  • Medicine: Research methods are essential in medical research to study diseases, test new treatments, and evaluate their effectiveness. Researchers may use clinical trials, case studies, and laboratory experiments to collect data on the efficacy and safety of different medical treatments.
  • Education: Research methods are used in education to understand how students learn, how teachers teach, and how educational policies affect student outcomes. Researchers may use surveys, experiments, and observational studies to collect data on student performance, teacher effectiveness, and educational programs.
  • Business: Research methods are used in business to understand consumer behavior, market trends, and business strategies. Researchers may use surveys, focus groups, and observational studies to collect data on consumer preferences, market trends, and industry competition.
  • Environmental science: Research methods are used in environmental science to study the natural world and its ecosystems. Researchers may use field studies, laboratory experiments, and observational studies to collect data on environmental factors, such as air and water quality, and the impact of human activities on the environment.
  • Political science: Research methods are used in political science to study political systems, institutions, and behavior. Researchers may use surveys, experiments, and observational studies to collect data on political attitudes, voting behavior, and the impact of policies on society.

Purpose of Research Methods

Research methods serve several purposes, including:

  • Identify research problems: Research methods are used to identify research problems or questions that need to be addressed through empirical investigation.
  • Develop hypotheses: Research methods help researchers develop hypotheses, which are tentative explanations for the observed phenomenon or relationship.
  • Collect data: Research methods enable researchers to collect data in a systematic and objective way, which is necessary to test hypotheses and draw meaningful conclusions.
  • Analyze data: Research methods provide tools and techniques for analyzing data, such as statistical analysis, content analysis, and discourse analysis.
  • Test hypotheses: Research methods allow researchers to test hypotheses by examining the relationships between variables in a systematic and controlled manner.
  • Draw conclusions: Research methods facilitate the drawing of conclusions based on empirical evidence and help researchers make generalizations about a population based on their sample data.
  • Enhance understanding: Research methods contribute to the development of knowledge and enhance our understanding of various phenomena and relationships, which can inform policy, practice, and theory.

When to Use Research Methods

Research methods are used when you need to gather information or data to answer a question or to gain insights into a particular phenomenon.

Here are some situations when research methods may be appropriate:

  • To investigate a problem: Research methods can be used to investigate a problem or a research question in a particular field. This can help in identifying the root cause of the problem and developing solutions.
  • To gather data: Research methods can be used to collect data on a particular subject. This can be done through surveys, interviews, observations, experiments, and more.
  • To evaluate programs: Research methods can be used to evaluate the effectiveness of a program, intervention, or policy. This can help in determining whether the program is meeting its goals and objectives.
  • To explore new areas: Research methods can be used to explore new areas of inquiry or to test new hypotheses. This can help in advancing knowledge in a particular field.
  • To make informed decisions: Research methods can be used to gather information and data to support informed decision-making. This can be useful in various fields such as healthcare, business, and education.

Advantages of Research Methods

Research methods provide several advantages, including:

  • Objectivity: Research methods enable researchers to gather data in a systematic and objective manner, minimizing personal biases and subjectivity. This leads to more reliable and valid results.
  • Replicability: A key advantage of research methods is that they allow for replication of studies by other researchers. This helps to confirm the validity of the findings and ensures that the results are not specific to the particular research team.
  • Generalizability: Research methods enable researchers to gather data from a representative sample of the population, allowing for generalizability of the findings to a larger population. This increases the external validity of the research.
  • Precision: Research methods enable researchers to gather data using standardized procedures, ensuring that the data is accurate and precise. This allows researchers to make accurate predictions and draw meaningful conclusions.
  • Efficiency: Research methods enable researchers to gather data efficiently, saving time and resources. This is especially important when studying large populations or complex phenomena.
  • Innovation: Research methods enable researchers to develop new techniques and tools for data collection and analysis, leading to innovation and advancement in the field.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Published on 16.4.2024 in Vol 26 (2024)

User-Centered Development of a Patient Decision Aid for Choice of Early Abortion Method: Multi-Cycle Mixed Methods Study

Authors of this article:


Original Paper

  • Kate J Wahl 1, MSc
  • Melissa Brooks 2, MD
  • Logan Trenaman 3, PhD
  • Kirsten Desjardins-Lorimer 4, MD
  • Carolyn M Bell 4, MD
  • Nazgul Chokmorova 4, MD
  • Romy Segall 2, BSc, MD
  • Janelle Syring 4, MD
  • Aleyah Williams 1, MPH
  • Linda C Li 5, PhD
  • Wendy V Norman 4,6*, MD, MHSc
  • Sarah Munro 1,3*, PhD

1 Department of Obstetrics and Gynecology, University of British Columbia, Vancouver, BC, Canada

2 Department of Obstetrics and Gynecology, Dalhousie University, Halifax, NS, Canada

3 Department of Health Systems and Population Health, School of Public Health, University of Washington, Seattle, WA, United States

4 Department of Family Practice, University of British Columbia, Vancouver, BC, Canada

5 Department of Physical Therapy, University of British Columbia, Vancouver, BC, Canada

6 Department of Public Health, Environments and Society, Faculty of Public Health and Policy, London School of Hygiene & Tropical Medicine, London, United Kingdom

*these authors contributed equally

Corresponding Author:

Kate J Wahl, MSc

Department of Obstetrics and Gynecology

University of British Columbia

4500 Oak Street

Vancouver, BC, V6H 3N1

Phone: 1 4165231923

Email: [email protected]

Background: People seeking abortion in early pregnancy have the choice between medication and procedural options for care. The choice is preference-sensitive—there is no clinically superior option and the choice depends on what matters most to the individual patient. Patient decision aids (PtDAs) are shared decision-making tools that support people in making informed, values-aligned health care choices.

Objective: We aimed to develop and evaluate the usability of a web-based PtDA for the Canadian context, where abortion care is publicly funded and available without legal restriction.

Methods: We used a systematic, user-centered design approach guided by principles of integrated knowledge translation. We first developed a prototype using available evidence for abortion seekers’ decisional needs and the risks, benefits, and consequences of each option. We then refined the prototype through think-aloud interviews with participants at risk of unintended pregnancy (“patient” participants). Interviews were audio-recorded and documented through field notes. Finally, we conducted a web-based survey of patients and health care professionals involved with abortion care, which included the System Usability Scale. We used content analysis to identify usability issues described in the field notes and open-ended survey questions, and descriptive statistics to summarize participant characteristics and close-ended survey responses.

Results: A total of 61 individuals participated in this study. Further, 11 patients participated in think-aloud interviews. Overall, the response to the PtDA was positive; however, the content analysis identified issues related to the design, language, and information about the process and experience of obtaining abortion care. In response, we adapted the PtDA into an interactive website and revised it to include consistent and plain language, additional information (eg, pain experience narratives), and links to additional resources on how to find an abortion health care professional. In total, 25 patients and 25 health care professionals completed the survey. The mean System Usability Scale score met the threshold for good usability among both patient and health care professional participants. Most participants felt that the PtDA was user-friendly (patients: n=25, 100%; health care professionals: n=22, 88%), was not missing information (patients: n=21, 84%; health care professionals: n=18, 72%), and that it was appropriate for patients to complete the PtDA before a consultation (patients: n=23, 92%; health care professionals: n=23, 92%). Open-ended responses focused on improving usability by reducing the length of the PtDA and making the website more mobile-friendly.

Conclusions: We systematically designed the PtDA to address an unmet need to support informed, values-aligned decision-making about the method of abortion. The design process responded to a need identified by potential users and addressed unique sensitivities related to reproductive health decision-making.

Introduction

In total, 1 in 3 pregnancy-capable people in Canada will have an abortion in their lifetimes, and most will seek care early in pregnancy [ 1 ]. Medication abortion (using the gold-standard mifepristone/misoprostol regimen) and procedural abortion are common, safe, and effective options for abortion care in the first trimester [ 2 , 3 ]. The choice between using medications and presenting to a facility for a procedure is a preference-sensitive decision; there is no clinically superior option and the choice depends on what matters most to the individual patient regarding the respective treatments and the features of those options [ 4 - 6 ].

The choice of method of abortion can involve a process of shared decision-making, in which the patient and health care professional share the best available evidence about options, and the patient is supported to consider those options and clarify an informed preference [ 7 ]. There are many types of interventions available to support shared decision-making, including interventions targeting health care professionals (eg, educational materials, meetings, outreach visits, audit and feedback, and reminders) and patients (eg, patient decision aids [PtDA], appointment preparation packages, empowerment sessions, printed materials, and shared decision-making education) [ 8 ]. Of these interventions, PtDAs are well-suited to address challenges to shared decision-making about the method of abortion, including limited patient knowledge, public misinformation about options, poor access to health care professionals with sufficient expertise, and apprehension about abortion counseling [ 9 ].

PtDAs are widely used interventions that support people in making informed, deliberate health care choices by explicitly describing the health problem and decision, providing information about each option, and clarifying patient values [ 10 ]. The results of the 2023 Cochrane systematic review of 209 randomized controlled trials indicate that, compared to usual care (eg, information pamphlets or webpages), the use of PtDAs results in increases in patient knowledge, expectations of benefits and harms, clarity about what matters most to them, and participation in making a decision [ 11 ]. Of the studies included in the systematic review, 1 tested the effect of a PtDA leaflet for method of abortion and found that patients eligible for both medication and procedural abortion who received the PtDA were more knowledgeable, and had lower risk perceptions and decisional conflict than those who were in the control group [ 12 ]. However, that PtDA was developed 20 years ago in the UK health system and was not publicly available. A recent environmental scan of PtDAs for a method of abortion found that other available options meet few of the criteria set by the International Patient Decision Aid Standards (IPDAS) collaboration and do not include language and content optimized for end users [ 9 , 13 ].

Consequently, no PtDAs for method of abortion were available in Canada at the time of this study. This was a critical gap for both patients and health care professionals as, in 2017, mifepristone/misoprostol medication abortion came to the market, offering a new method of choice for people seeking abortion in the first trimester [ 14 ]. Unlike most jurisdictions, in Canada medication abortion is typically prescribed in primary care and dispensed in community pharmacies. Offering a PtDA in preparation for a brief primary care consultation allows the person seeking abortion more time to digest new information, consider their preferences, be ready to discuss their options, and make a quality decision.

In this context, we identified a need for a high-quality and publicly available PtDA to support people in making an informed choice about the method of abortion that reflects what is most important to them. Concurrently, our team was working in collaboration with knowledge users (health care professionals, patients, and health system decision makers) who were part of a larger project to investigate the implementation of mifepristone in Canada [ 15 , 16 ]. We, therefore, aimed to develop and evaluate the usability of a web-based PtDA for the Canadian context, where abortion care is publicly funded and available without legal restriction.

Study Design

We performed a mixed methods user-centered development and evaluation study informed by principles of integrated knowledge translation. Integrated knowledge translation is an approach to collaborative research in which researchers and knowledge users work together to identify a problem, conduct research as equal partners to address that problem, and coproduce research products that aim to impact health service delivery [ 17 ]. We selected this approach to increase the likelihood that our end PtDA would be relevant to, usable by, and used by patients and health care professionals in Canada [ 17 ]. The need for a PtDA was identified through engagement with health care professionals. In 2017, they highlighted the need for patients to be supported in choosing between procedural care—which historically represented more than 90% of abortions in Canada [ 18 ]—and the newly available medication option [ 19 , 20 ]. This need was reaffirmed in 2022 by the Canadian federal health agency, Health Canada, which circulated a request for proposals to generate “evidence-based, culturally-relevant information aimed at supporting people in their reproductive decision-making and in accessing abortion services as needed” [ 21 ].

We operationalized integrated knowledge translation principles in a user-centered design process. User-centered design “grounds the characteristics of an innovation in information about the individuals who use that innovation, with a goal of maximizing ‘usability in context’” [ 22 ]. In PtDA development, user-centered design involves iteratively understanding users, developing and refining a prototype, and observing user interaction with the prototype [ 23 , 24 ]. Like integrated knowledge translation, this approach is predicated on the assumption that involving users throughout the process increases the relevance of the PtDA and the likelihood of successful implementation [ 24 ].

Our design process included the following steps ( Figure 1 ): identification of evidence about abortion patients’ decisional needs and the attributes of medication and procedural abortion that matter most from a patient perspective; development of a paper-based prototype; usability testing via think-aloud interviews with potential end users; refinement of the PtDA prototype into an interactive website; usability testing via a survey with potential end users and abortion health care professionals; and final revisions before launching the PtDA for real-world testing. Our systematic process was informed by user-centered methods for PtDA development [ 23 , 24 ], guidance from the IPDAS collaboration [ 25 - 27 ], and the Standards for Universal Reporting of Patient Decision Aid Evaluation checklist [ 10 ].


Our multidisciplinary team included experts in shared decision-making (SM and LT), a PhD student in patient-oriented knowledge translation (KJW), experts in integrated knowledge translation with health care professionals and policy makers (WVN and SM), clinical experts in abortion counseling and care (WVN and MB), a medical undergraduate student (RS), a research project coordinator (AW), and family medicine residents (KD-L, CMB, NC, and JS) who had an interest in abortion care. Additionally, a panel of experts external to the development process reviewed the PtDA for clinical accuracy following each revision of the prototype. These experts included coauthors of the national Society of Obstetricians and Gynaecologists of Canada (SOGC) clinical practice guidelines for abortion care in Canada. They were invited to this project because of their knowledge of first-trimester abortion care as well as their ability to support the implementation of the PtDA in guidelines and routine clinical practice.

Ethical Considerations

The research was approved by the University of British Columbia Children’s and Women’s Research Ethics Board (H16-01006) and the Nova Scotia Health Research Ethics Board (1027637). In each round of testing, participants received a CAD $20 (US $14.75) Amazon gift card by email for their participation.

Preliminary Work: Identification of Evidence

We identified the decisional needs of people seeking early abortion care using a 2018 systematic review of reasons for choosing an abortion method [ 28 ], an additional search that identified 1 study conducted in Canada following the 2017 availability of mifepristone/misoprostol medication abortion [ 29 ], and the SOGC clinical practice guidelines [ 2 , 3 ]. The review identified several key factors that matter most for patient choice of early abortion method: perceived simplicity and “naturalness,” fear of complication or bleeding, fear of anesthesia or surgery, timing of the procedure, and chance of sedation. The additional Canadian study found that the time required to complete the abortion and side effects were important factors. According to the SOGC clinical practice guidelines, the key information that should be communicated to the patient comprises gestational age limits and the risk of complications with increasing gestational age [ 2 , 3 ]. The guidelines also indicate that wait times, travel times, and cost considerations may be important in a person’s choice of abortion method and should be addressed [ 2 , 3 ].

We compiled a long list of attributes for our expert panel and then consolidated and refined the attribute list through each stage of the prototype evaluation. For evidence of how these factors differed for medication and procedural abortion, we drew primarily from the SOGC clinical practice guidelines for abortion [ 2 , 3 ]. For cost considerations, we described the range of federal, provincial, and population-specific programs that provide free coverage of abortion care for people in Canada.

Step 1: Developing the Prototype

Our goal was to produce an interactive, web-based PtDA that would be widely accessible to people seeking an abortion in Canada by leveraging the widespread use of digital health information, especially among reproductive-aged people [ 30 ]. Our first prototype was based on a previously identified paper-based question-and-answer comparison grid that presented evidence-based information about the medication and procedural options [ 9 , 31 ]. We calculated readability by inputting the plain text of the paper-based prototype into a Simple Measure of Gobbledygook (SMOG) Index calculator [ 32 ].
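As a rough illustration of the readability calculation described above, the published SMOG formula can be implemented in a few lines. This is a sketch, not the calculator cited in the text: the formula constants are the standard published ones, but the syllable counter is a crude vowel-group heuristic, so results will differ somewhat from dictionary-based tools.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    # Dictionary-based syllabification (as in real calculators) is more accurate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    """Approximate SMOG grade level of a text sample.

    Published formula: 1.0430 * sqrt(polysyllables * (30 / sentences)) + 3.1291,
    where polysyllables are words with 3 or more syllables.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * (30 / len(sentences))) + 3.1291
```

A grade of 8.4, as reported for the prototype, indicates text readable by someone with roughly a grade 8 to 9 education.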

We made 2 intentional deviations from common practices in PtDA development [ 33 ]. First, we did not include an “opt-out” or “do nothing” option, which would describe the natural course of pregnancy. We chose to exclude this option to ensure clarity for users regarding the decision point; specifically, our decision point of interest was the method of abortion, not the choice to terminate or continue a pregnancy. Second, we characterized attributes of the options as key points rather than positive and negative features to avoid imposing value judgments onto subjective features (eg, having the abortion take place at home may be beneficial for some people but may be a deterrent for others).

Step 2: Usability Testing of the Prototype

We first conducted usability testing involving think-aloud interviews with patient participants to assess the paper-based prototype. Eligible participants were people aged 18-49 years who were assigned female at birth, resided in Canada, and could speak and read English. In January 2020, we recruited participants for the first round of think-aloud interviews [ 34 ] via email and poster advertising circulated to (1) a network of parent research advisors who were convened to guide a broader program of research about pregnancy and childbirth in British Columbia, Canada, and (2) a clinic providing surgical abortion care in Nova Scotia, Canada, as well as through snowball sampling with participants. We purposively sought to advertise this study with these populations to ensure variation in age, ethnicity, level of education, parity, and abortion experience. Interested individuals reviewed the study information form and provided consent to participate before scheduling an interview. The interviewer asked participants to think aloud as they navigated the prototype, for example, describing what they liked or disliked, missing information, or a lack of clarity. The interviewer noted the participant’s feedback on a copy of the prototype during the interview. Finally, the participant responded to questions adapted from the System Usability Scale [ 35 ], a measure designed to collect subjective ratings of a product’s usability, and completed a brief demographic questionnaire. The interviews were conducted via videoconferencing and were audio recorded. We deidentified the qualitative data and assigned each participant a unique identifier. Then, the interviewer listened to the recording and revised their field notes with additional information, including relevant quotes.

For the analysis of think-aloud interviews, we used inductive content analysis to describe the usability and acceptability of different elements of the PtDA [ 36 ]. Further, 3 family medicine residents (KD-L, CMB, and NC), under guidance from a senior coauthor (SM), completed open coding to develop a list of initial categories, which we grouped under higher-order headings. We then organized these results in a table to illustrate usability issues (categories), illustrative participant quotes, and modifications to be made. Finally, we used the results of the interviews to adapt the prototype into a web-based format, which we tested via further think-aloud interviews and a survey with people capable of becoming pregnant and health care professionals involved with abortion care.

Step 3: Usability Testing of the Website

For the web-based format, we used DecideApp PtDA open-source software, which provides a sustainable solution to the problems of low quality and high maintenance costs faced by web-based PtDAs by allowing developers to host, maintain, and update their tools at no cost. This software has been user-tested and can be accessed by phone, tablet, or computer [ 37 , 38 ]. It organizes a PtDA into 6 sections: Introduction, About Me, My Values, My Choice, Review, and Next Steps. In the My Values section, an interactive values clarification exercise allows users to rank and make trade-offs between attributes of the options. The final pages provide an opportunity for users to make a choice, complete a knowledge self-assessment, and consider the next steps to access their chosen method.

From July to August 2020, we recruited patient and health care professional participants using Twitter and the email list of the Canadian Abortion Providers Support platform, respectively. Participants received an email with a link to the PtDA and were redirected to the survey once they had navigated through the PtDA. As above, eligible patient participants were people aged 18-49 years who were assigned female at birth and resided in Canada. Among health care professionals, we included eligible prescribers who may not have previously engaged in abortion care (family physicians, residents, nurse practitioners, and midwives), as well as allied health professionals and stakeholders practicing in Canada who provide or support abortion care. All participants had to speak and read English.

The survey included 3 sections: usability, implementation, and participant characteristics. The usability section consisted of the System Usability Scale [ 35 ], and purpose-built questions about what participants liked and disliked about the PtDA. The implementation section included open- and close-ended questions about how the PtDA compares to other resources and when it could be implemented in the care pathway. Patient participants also completed the Control Preference Scale, a validated measure used to determine their preferred role in decision-making (active, collaborative, or passive) [ 39 ]. Data on participant characteristics included gender, abortion experience (patient participants), and abortion practice (health care professional participants). We deidentified the qualitative data and assigned each participant a unique identifier. For the analysis of survey data, we characterized close-ended responses using descriptive statistics, and, following the analysis procedures described in Step 2 in the Methods section, used inductive content analysis of open-ended responses to generate categories associated with usability and implementation [ 36 ]. In 2021, we made minor revisions to the website based on the results of usability testing and published the PtDA for use in routine clinical care.
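The System Usability Scale used in the survey follows a standard published scoring procedure: 10 alternating positively and negatively worded items, each rated 1 to 5, rescaled to a 0-100 score. A minimal sketch of that scoring:

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score (0-100) from 10 Likert
    responses (1 = strongly disagree ... 5 = strongly agree).

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires 10 responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

For example, a respondent who strongly agrees with every positive item and strongly disagrees with every negative item scores 100. Interpretation thresholds, such as the "good usability" cutoff referenced later, come from published SUS benchmarking studies.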

In the following sections, we outline the results of the development process including the results of the think-aloud interviews and survey, as well as the final decision aid prototype.

Our initial prototype, a paper-based question-and-answer comparison grid, presented evidence-based information comparing medication and procedural abortion. The first version of the prototype also included a second medication abortion regimen involving off-label use of methotrexate; however, we removed this option following a review by the clinical expert panel, who advised us that this regimen is used very infrequently in Canada in comparison to the gold standard medication abortion option, mifepristone. Other changes at this stage involved clarifying the scope of practice (health care professionals other than gynecologists can perform a procedural abortion), abortion practice (gestational age limit and how the medication is taken), the abortion experience (what to expect in terms of bleeding), and risk (removing information about second- and third-trimester abortion). The updated prototype was finalized by a scientist (SM) and trainee (KJW) with expertise in PtDA development. The prototype (see Multimedia Appendix 1 ) was ultimately 4 pages long and described 18 attributes of each option framed as Frequently Asked Questions, including abortion eligibility (How far along in pregnancy can I be?), duration (How long does it take?), and side effects (How much will I bleed?). The SMOG grade level was 8.4.

Participant Characteristics

We included 11 participants in think-aloud interviews between January and July 2020, including 7 recruited through a parent research advisory network and 4 individuals who had recently attended an abortion clinic. The mean interview duration was 36 minutes (SD 6 minutes). The participants ranged in age from 31 to 37 years. All had been pregnant, and 8 out of 11 (73%) participants had a personal experience of abortion (4 participants who had recently attended an abortion clinic and 4 participants from the parent research advisory network who disclosed their experience during the interview). The characteristics of the sample are reported in Table 1 .

Overall, participants had a positive view of the paper-based, comparison grid PtDA. In total, 1 participant who had recently sought an abortion said, “I think this is great and super helpful. It would’ve been awesome to have had access to this right away … I don’t think there’s really anything missing from here that I was Googling about” (DA010). The only participant who expressed antichoice views indicated that the PtDA would be helpful to someone seeking to terminate a pregnancy (DA001). Another participant said, “[The PtDA] is not biased, it’s not like you’re going to die. It’s a fact, you know the facts and then you decide whether you want it or not. A lot of people feel it’s so shameful and judgmental, but this is very straightforward. I like it.” (DA002). Several participants stated they felt more informed and knowledgeable about the options.

In response to questions adapted from the System Usability Scale, all 11 participants agreed that the PtDA was easy to use, that most people could learn to use it quickly, and that they felt very confident using the prototype, and disagreed that it was awkward to use. In total, 8 (73%) participants agreed with the statement that the components of the PtDA were well-integrated. A majority of participants disagreed with the statements that the website was unnecessarily complex (n=8, 73%), that they would need the support of an expert to use it (n=8, 73%), that it was too inconsistent (n=9, 82%), and that they would need to learn a lot before using it (n=8, 73%). Further, 2 (18%) participants agreed with the statements that the PtDA was unnecessarily complex and that they would need to learn a lot before using it. Furthermore, 1 (9%) participant agreed with the statement that the PtDA was too inconsistent.

Through inductive analysis of think-aloud interviews, we identified 4 key usability categories: design, language, process, and experience.

Participants liked the side-by-side comparison layout, appreciated the summary of key points to remember, and said that overall, the presented information was clear. For example, 1 participant reflected, “I think it’s very clear ... it’s very simplistic, people will understand the left-hand column is for medical abortion and the right-hand column is for surgical.” (DA005) Some participants raised concerns about the aesthetics of the PtDA, difficulties recalling the headers across multiple pages, and the overall length of the PtDA.

Participants sought to clarify language at several points in the PtDA. Common feedback was that the gestational age limit for the medication and the procedure should be clarified. Participants also pointed out inconsistent use of language (eg, doctor and health care professional) and medical jargon.

Several participants were surprised to learn that family doctors could provide abortion care. Others noted that information about the duration—including travel time—and number of appointments for both medication and procedural abortion could be improved. In addition to clarifying the abortion process, several participants suggested including additional information and resources to help identify an abortion health care professional, understand when to seek help for abortion-related complications, and access emotional support. It was also important to participants that financial impacts (eg, hospital parking and menstrual pads) were included for each option.

Participants provided insight into the description of the physical, psychological, and other consequences associated with the abortion medication and procedure. Participants who had both types of abortion care felt that the description of pain that “may be worse than a period” was inaccurate. Other participants indicated that information about perceived and real risks was distressing or felt out of place, such as correcting myths about future fertility or breast cancer. Some participants indicated that patient stories would be valuable saying, for example, “I think what might be nice to help with the decision-making process is reading stories of people’s experiences” (DA006).

Modifications Made

Changes made based on these findings are described in Table 2 . Key user-centered modifications included transitioning to a web-based format with a consistent color scheme, clarifying who the PtDA is for (for typical pregnancies up to 10 weeks), adding information about telemedicine to reflect guidelines for the provision of abortion during pandemics, and developing brief first-person qualitative descriptions of the pain intensity for each option.

Through analysis of the interviews and consultation with our panel of clinical experts, we also identified that, among the 18 initial attributes in our prototype, 7 had the most relative importance to patients in choosing between medication and procedural abortion. These attributes also represented important differences between the options, forcing participants to consider the trade-offs they were willing to make. Thus, we moved all other potential attributes into an information section (My Options) that supported the user to gain knowledge before clarifying what mattered most to them by considering the differences between options (My Values).

a PtDA: patient decision aid.

b SOGC: Society of Obstetricians and Gynaecologists of Canada.

Description of the PtDA

As shown in Figure 2, the revised version of the PtDA resulting from our systematic process is an interactive website. Initially, the title was My Body, My Choice; however, this was changed to avoid association with antivaccine campaigns that co-opted this reproductive rights slogan. The new title, It’s My Choice or C’est Mon Choix, was selected for its easy use in English and French. The PtDA leads the user through 6 sections:

  • The Introduction section provides the user with information about the decision and the PtDA, as well as grids comparing positive and negative features of the abortion pill and procedure, including their chance of benefits (eg, effectiveness), harms (eg, complications), and other relevant factors (eg, number of appointments and cost).
  • The About Me section asks the user to identify any contraindications to the methods. It then prompts users to consider their privacy needs and gives examples of how this relates to each option (eg, the abortion pill can be explained to others as a miscarriage; procedural care can be completed quickly).
  • The My Values section includes a values clarification exercise, in which the user selects and weights (on a 0-100 scale) the relative importance of at least 3 of 7 decisional attributes: avoiding pain, avoiding bleeding, having the abortion at home, having an experience that feels like a miscarriage, having fewer appointments, less time off for recovery, and having a companion during the abortion.
  • The My Choice section highlights 1 option, based on the attribute weights the user assigned in the My Values section. For instance, if a user strongly preferred to avoid bleeding and have fewer appointments, the software would suggest that a procedural abortion would be a better match. For a user who preferred having the abortion at home and having a companion present, the software would suggest that a medication abortion would be a better match. The user selects the option they prefer.
  • The Review section asks the user to complete the 4-item SURE (Sure of Myself, Understand Information, Risk-Benefit Ratio, Encouragement) screening test [ 41 ], and advises them to talk with an expert if they answer “no” to any of the questions. This section also includes information phone lines to ensure that users can seek confidential, accurate, and nonjudgmental support.
  • Lastly, in the Next Steps section, users see a summary of their choice and the features that matter most to them, instructions for how to save the results, keep the results private, and find an abortion health care professional. Each section of the PtDA includes a “Leave” button in case users need to navigate away from the website quickly.
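The My Choice matching step described above can be sketched as a weighted tally over the user's selected attributes. This is a hypothetical illustration: only the pairings named in the text (avoiding bleeding and fewer appointments favoring the procedure; at-home care and companionship favoring the pill) come from the description, the remaining mappings and the tie handling are assumptions, and the actual DecideApp software may compute matches differently.

```python
# Which option each decisional attribute favors. Pairings marked "assumed"
# are illustrative guesses, not taken from the PtDA description.
FAVORS = {
    "avoiding pain": "procedure",                                  # assumed
    "avoiding bleeding": "procedure",                              # per the text
    "having the abortion at home": "pill",                         # per the text
    "having an experience that feels like a miscarriage": "pill",  # assumed
    "having fewer appointments": "procedure",                      # per the text
    "less time off for recovery": "procedure",                     # assumed
    "having a companion during the abortion": "pill",              # per the text
}

def suggest_option(weights: dict[str, int]) -> str:
    """Given user-assigned weights (0-100) for the selected attributes,
    return the option with the larger total weight."""
    totals = {"pill": 0, "procedure": 0}
    for attribute, weight in weights.items():
        totals[FAVORS[attribute]] += weight
    if totals["pill"] == totals["procedure"]:
        return "no clear match"
    return max(totals, key=totals.get)
```

In the examples from the text, a user weighting "avoiding bleeding" and "having fewer appointments" heavily would be matched to the procedure, while one prioritizing at-home care and a companion would be matched to the pill.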

We calculated readability by inputting the plain text of the web-based PtDA into a SMOG Index calculator [ 32 ], which assessed its reading level as grade 9.2.

To ensure users’ trust in the information as accurate and unbiased, we provided a data declaration on the landing page: “the clinical information presented in this decision aid comes from Society of Obstetricians and Gynaecologists best practice guidelines.” On the landing page, we also specify: “This website was developed by researchers at the University of British Columbia and Dalhousie University. This tool is not supported or connected to any pharmaceutical company.”


A total of 50 participants, including 25 patients and 25 health care professionals, reviewed the PtDA website and completed the survey between January and March 2021. The majority of patient (n=23, 92%) and health care professional (n=23, 92%) participants identified as cisgender women. Among patient participants, 16% (n=4) reported one or more previous abortions in various clinical settings. More than half (n=16, 64%) of health care professionals offered care in private medical offices, with other locations including sexual health clinics, community health centers, and youth clinics. Many health care professionals were family physicians (n=11, 44%), and other common types were nurse practitioners (n=7, 28%) and midwives (n=3, 12%). The mean proportion of the clinical practice of each health care professional devoted to abortion care was 18% (SD 13%). Most health care professional respondents (n=18, 72%) were involved with the provision of medication, but not procedural, abortion care. The characteristics of patient and health care professional participants are reported in Table 3 .

a In total, 4 participants reported a history of abortion care, representing 6 abortion procedures.

b Not available.

The mean System Usability Score met the threshold for good usability among both patient (mean 85.7, SD 8.6) and health care professional (mean 80, SD 12) participants, although some health care professionals agreed with the statement, “I found the website to be unnecessarily complex,” (see Multimedia Appendix 3 for the full distribution of responses from patient and health care professionals). All 25 patients and 22 out of 25 (88%) health care professional respondents indicated that the user-friendliness of the PtDA was good or the best imaginable. When asked what they liked most about the PtDA, both participant groups described the ease of use, comparison of options, and the explicit values clarification exercise. When asked what they liked least about the PtDA, several health care professionals and some patients pointed out that it was difficult to use on a cell phone. A summary of usability results is presented in Table 4 .

In total, 21 (84%) patients and 18 (72%) health care professionals felt that the PtDA was not missing any information needed to decide about the method of abortion in early pregnancy. While acknowledging that it is “hard to balance being easy to read/understand while including enough accurate clinical information,” several health care professionals and some patients indicated that the PtDA was too long and repetitive. Among the 4 (16%) patient participants who felt information was missing, the most common suggestion was a tool for locating an abortion health care professional. The 7 (28%) health care professionals who felt information was missing primarily made suggestions about the medical information included in the PtDA (eg, listing midwives as health care professionals with abortion care in scope of practice and the appropriateness of gender-inclusive terminology) and the accessibility of information for various language and cultural groups.

a Not available.

Implementation

Participants viewed the PtDA as a positive addition to current resources. Patients with a history of abortion care described looking for the information on the internet and speaking with friends, family members, and health care professionals. Compared with these sources of information, many patients liked the credibility and anonymity of the PtDA, whereas some disliked that it was less personal than a conversation. Further, 18 (72%) health care professional participants said that the PtDA would add to or replace the resources they currently use in practice. Compared with these other resources, health care professionals liked that the PtDA could be explored by patients independently and that it would support them in thinking about the option that was best for them. The disadvantages of the PtDA compared with existing resources were the length—which health care professionals felt would make it difficult to use in a clinical interaction—and the lack of localized information. In total, 23 (92%) patient participants and 23 (92%) health care professional participants felt that they would use the PtDA before a consultation.

Principal Results

We designed a web-based, interactive PtDA for the choice of method of abortion in early pregnancy [ 42 ], taking a user-centered approach that involved usability testing with 36 patients and 25 health care professionals. Both patient and health care professional participants indicated that the PtDA had good usability and would be a valuable resource for decision-making. This PtDA fills a critical need to support the autonomy of patients and shared decision-making with their health care professional related to the preference-sensitive choice of method of abortion.

Comparison With Prior Work

A 2017 systematic review and environmental scan found that existing PtDAs for the method of abortion are of suboptimal quality [ 9 ]. Of the 50 PtDAs identified, all but one were created by groups without expertise in decision aid design (eg, abortion services, reproductive health organizations, and consumer health information organizations); however, the development process for this UK-based pamphlet-style PtDA was not reported. The remaining PtDAs were noninteractive websites, smartphone apps, and PDFs that were not tested with users. The authors found that the information about methods of abortion was presented in a disorganized, inconsistent, and unequal way. Subsequent work has found that existing PtDAs emphasize medical (versus social, emotional, and practical) attributes, do not include values clarification, and can be biased to persuade users of a certain method [ 13 ].

To address some of the challenges identified in the literature, we systematically structured and designed elements of the PtDA following newly proposed IPDAS criteria (eg, showing positive and negative features with equal detail) [ 33 ]. We included an explicit values-clarification exercise, which a recent meta-analysis found to decrease decisional conflict and values-incongruent choices [ 43 ].

We based the decision aid on comprehensive and up-to-date scientific evidence related to the effectiveness and safety of medication abortion and procedural abortion; however, less evidence was available for nonmedical attributes. For example, many existing PtDAs incorrectly frame privacy as a “factual advantage” of medication abortion [ 13 ]. To address this, we included privacy in the About Me section as something that means “different things to different people.” Similarly, evidence suggests that patients who do not feel appropriately informed about the pain associated with their method of abortion are less satisfied with their choice [ 44 , 45 ]; and the degree of pain experienced varies across options and among individuals. Following the suggestion of patient participants to include stories and recognizing that evidence for the inclusion of narratives in PtDAs is emerging [ 46 ], we elected to develop brief first-person qualitative descriptions of the pain experience. The inclusion of narratives in PtDAs may be effective in supporting patients to avoid surprise and regret, to minimize affective forecasting errors, and to “visualize” their health condition or treatment experience [ 46 ]. Guided by the narrative immersion model, our goal was to provide a “real-world preview” of the pain experience [ 47 ].

In addition to integrating user perspectives on the optimal tone, content, and format of the PtDA, user testing provided evidence to inform the future implementation of the PtDA. From the health care professional perspective, a clear barrier to completing the PtDA during the clinical encounter was its length, supporting the finding of a recent rapid realist review, which theorized that health care professionals are less likely to use long or otherwise complex PtDAs that are difficult to integrate into routine practice [ 48 ]. However, 46 out of 50 (92%) participants endorsed use of the PtDA by the patient alone before the initial consultation, which aligned with patient participants' preference to take an active role in making the final decision about their method of abortion, as well as the best practice of early, pre-encounter distribution of PtDAs [ 48 ].

A unique feature of this PtDA was that it resulted from a broader program of integrated knowledge translation designed to support access to medication abortion once mifepristone became available in Canada in 2017. Guided by the principle that including knowledge users in research yields results that are more relevant and useful [ 49 ], we developed the PtDA in response to a knowledge user need, involved health care professional users as partners in our research process, including as coauthors, and integrated feedback from the expert panel. This parallels a theory of PtDA implementation that proposes that early involvement of health care professionals in PtDA development “creates a sense of ownership, increases buy-in, helps to legitimize content, and ensures the PtDA (content and delivery) is consistent with current practice” thereby increasing the likelihood of PtDA integration into routine clinical settings [ 48 ].

Viewed through an integrated knowledge translation lens, our findings point toward future areas of work to support access to abortion in Canada. Several patient participants indicated a need for tools to identify health care professionals who offer abortion care. Some shared that their primary health care professionals did not offer medication abortion despite it being within their scope of practice, and instead referred them to an abortion clinic for counseling and care. We addressed this challenge in the PtDA by including links to available resources, such as confidential phone lines that link patients to health care professionals in their region. On the website we also indicated that patient users could ask their primary care providers whether they provide abortion care; however, we acknowledge that this may place the patient in a vulnerable position if their health care professional is uncomfortable providing, or unable to provide, this service for any reason. Future work should investigate opportunities to shorten the pathway to this time-sensitive care, including how to support patients who use the decision aid to act on their informed preference for the method of abortion. This work may involve developing a tool for patients to talk to their primary care provider about prescribing medication abortion.

Strengths and Limitations

Several factors affect the interpretation of our work. Although potential patient users participated in the iterative development process, the patient perspective was not represented in a formal advisory panel in the same way that the health care professional experts were. Participant characteristics collected for the think-aloud interviews demonstrated that our patient sample did not include people with lower education attainment, for whom the grade level and length of the PtDA could present a barrier [ 50 ]. Any transfer of the PtDA to jurisdictions outside Canada must consider how legal, regulatory, and other contextual factors affect the choice of the method of abortion. Since this study was completed, we have explored additional strategies to address these concerns, including additional user testing with people from equity-deserving groups, drop-down menus to adjust the level of detail, further plain language editing, and videos illustrating core content. Since the focus of this study was usability, we did not assess PtDA effectiveness, including impact on knowledge, decisional conflict, choice predisposition and decision, or concordance; however, a randomized controlled trial currently underway will measure the impact of the PtDA on these outcomes in a clinical setting. Finally, our integrated knowledge translation approach added to the robustness of our study by ensuring that health care professionals and patients were equal partners in the research process. One impact of this partnered approach is that our team has received funding support from Health Canada to implement the website on a national scale for people across Canada considering their abortion options [ 51 ].

Conclusions

The PtDA provides people choosing a method of early abortion and their health care professionals with a resource to understand methods of abortion available in the Canadian context and support to make a values-aligned choice. We designed the PtDA using a systematic approach that included both patient and health care professional participants to help ensure its relevance and usability. Our future work will seek to evaluate the implementation of the PtDA in clinical settings, create alternate formats to enhance accessibility, and develop a sustainable update policy. We will also continue to advance access to abortion care in Canada with our broader integrated knowledge translation program of research.

Acknowledgments

The authors thank the participants for contributing their time and expertise to the design of this tool. Family medicine residents CMB, NC, KD-L, and JS were supported by Sue Harris grants, Department of Family Practice, University of British Columbia. KJW was supported by the Vanier Scholar Award (2020-23). SM was supported by a Michael Smith Health Research BC Scholar Award (18270). WVN was supported by a Canadian Institutes of Health Research and Public Health Agency of Canada Chair in Applied Public Health Research (2014-2024, CPP-329455-107837). All grants underwent external peer review for scientific quality. The funders played no role in the design of this study, data collection, analysis, interpretation, or preparation of this paper.

Data Availability

The conditions of our ethics approval specify that the primary data are not available.

Authors' Contributions

KJW, SM, and MB conceived of and designed this study. CMB, NC, and KD-L led interview data collection, analysis, and interpretation with input from SM. RS and JS led survey data collection, analysis, and interpretation with input from SM and MB. AW, LCL, and WVN contributed to the synthesis and interpretation of results. KJW, SM, and LT wrote the first draft of this paper, and all authors contributed to this paper’s revisions and approved the final version.

Conflicts of Interest

None declared.

Patient decision aid prototype.

Raw data for pain narratives.

Full distribution of System Usability Scale scores for patients and providers.

  • Norman WV. Induced abortion in Canada 1974-2005: trends over the first generation with legal access. Contraception. 2012;85(2):185-191. [ CrossRef ] [ Medline ]
  • Costescu D, Guilbert E, Bernardin J, Black A, Dunn S, Fitzsimmons B, et al. Medical abortion. J Obstet Gynaecol Can. 2016;38(4):366-389. [ CrossRef ] [ Medline ]
  • Costescu D, Guilbert É. No. 360-induced abortion: surgical abortion and second trimester medical methods. J Obstet Gynaecol Can. 2018;40(6):750-783. [ CrossRef ] [ Medline ]
  • Wennberg JE. Unwarranted variations in healthcare delivery: implications for academic medical centres. BMJ. 2002;325(7370):961-964. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Elwyn G, Frosch D, Rollnick S. Dual equipoise shared decision making: definitions for decision and behaviour support interventions. Implement Sci. 2009;4:75. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sepucha KR, Mulley AG. A practical approach to measuring the quality of preference-sensitive decisions. In: Edwards A, Elwyn G, editors. Shared Decision-Making in Health Care: Achieving Evidence-based Patient Choice. Oxford. Oxford University Press; 2009;151-156.
  • Elwyn G, Laitner S, Coulter A, Walker E, Watson P, Thomson R. Implementing shared decision making in the NHS. BMJ. 2010;341:c5146. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Légaré F, Adekpedjou R, Stacey D, Turcotte S, Kryworuchko J, Graham ID, et al. Interventions for increasing the use of shared decision making by healthcare professionals. Cochrane Database Syst Rev. 2018;7(7):CD006732. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Donnelly KZ, Elwyn G, Thompson R. Quantity over quality-findings from a systematic review and environmental scan of patient decision aids on early abortion methods. Health Expect. 2018;21(1):316-326. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sepucha KR, Abhyankar P, Hoffman AS, Bekker HL, LeBlanc A, Levin CA, et al. Standards for Universal Reporting of Patient Decision Aid Evaluation studies: the development of SUNDAE checklist. BMJ Qual Saf. 2018;27(5):380-388. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Stacey D, Lewis KB, Smith M, Carley M, Volk R, Douglas EE, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2024;1(1):CD001431. [ CrossRef ] [ Medline ]
  • Wong SSM, Thornton JG, Gbolade B, Bekker HL. A randomised controlled trial of a decision-aid leaflet to facilitate women's choice between pregnancy termination methods. BJOG. 2006;113(6):688-694. [ CrossRef ] [ Medline ]
  • Donnelly KZ, Elwyn G, Theiler R, Thompson R. Promoting or undermining quality decision making? A qualitative content analysis of patient decision aids comparing surgical and medication abortion. Womens Health Issues. 2019;29(5):414-423. [ CrossRef ] [ Medline ]
  • Grant K. Long-awaited abortion pill Mifegymiso makes Canadian debut. The Globe and Mail. 2017. URL: https://www.theglobeandmail.com/news/national/long-awaited-abortion-pill-mifegymiso-rolls-out-in-canada/article33695167/ [accessed 2023-04-03]
  • Norman WV, Munro S, Brooks M, Devane C, Guilbert E, Renner R, et al. Could implementation of mifepristone address Canada's urban-rural abortion access disparity: a mixed-methods implementation study protocol. BMJ Open. 2019;9(4):e028443. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Munro SB, Dunn S, Guilbert ER, Norman WV. Advancing reproductive health through policy-engaged research in abortion care. Semin Reprod Med. 2022;40(5-06):268-276. [ CrossRef ] [ Medline ]
  • Dunn SI, Bhati DK, Reszel J, Kothari A, McCutcheon C, Graham ID. Understanding how and under what circumstances integrated knowledge translation works for people engaged in collaborative research: metasynthesis of IKTRN casebooks. JBI Evid Implement. 2023;21(3):277-293. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Norman WV, Guilbert ER, Okpaleke C, Hayden AS, Lichtenberg ES, Paul M, et al. Abortion health services in Canada: results of a 2012 national survey. Can Fam Physician. 2016;62(4):e209-e217. [ FREE Full text ] [ Medline ]
  • Munro S, Guilbert E, Wagner MS, Wilcox ES, Devane C, Dunn S, et al. Perspectives among canadian physicians on factors influencing implementation of mifepristone medical abortion: a national qualitative study. Ann Fam Med. 2020;18(5):413-421. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Munro S, Wahl K, Soon JA, Guilbert E, Wilcox ES, Leduc-Robert G, et al. Pharmacist dispensing of the abortion pill in Canada: diffusion of innovation meets integrated knowledge translation. Implement Sci. 2021;16(1):76. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Health care policy and strategies program. Call for proposals: funding opportunities for sexual and reproductive health. Health Canada. URL: https://www.canada.ca/en/health-canada/programs/health-care-policy-strategies-program.html [accessed 2024-03-14]
  • Dopp AR, Parisi KE, Munson SA, Lyon AR. A glossary of user-centered design strategies for implementation experts. Transl Behav Med. 2019;9(6):1057-1064. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vaisson G, Provencher T, Dugas M, Trottier ME, Dansokho SC, Colquhoun H, et al. User involvement in the design and development of patient decision aids and other personal health tools: a systematic review. Med Decis Making. 2021;41(3):261-274. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Witteman HO, Maki KG, Vaisson G, Finderup J, Lewis KB, Steffensen KD, et al. Systematic development of patient decision aids: an update from the IPDAS collaboration. Med Decis Making. 2021;41(7):736-754. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Joseph-Williams N, Newcombe R, Politi M, Durand MA, Sivell S, Stacey D, et al. Toward minimum standards for certifying patient decision aids: a modified Delphi consensus process. Med Decis Making. 2014;34(6):699-710. [ CrossRef ] [ Medline ]
  • Stacey D, Volk RJ, IPDAS Evidence Update Leads (Hilary Bekker, Karina Dahl Steffensen, Tammy C. Hoffmann, Kirsten McCaffery, Rachel Thompson, Richard Thomson, Lyndal Trevena, Trudy van der Weijden, and Holly Witteman). The International Patient Decision Aid Standards (IPDAS) collaboration: evidence update 2.0. Med Decis Making. 2021;41(7):729-733. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hoffman AS, Volk RJ, Saarimaki A, Stirling C, Li LC, Härter M, et al. Delivering patient decision aids on the internet: definitions, theories, current evidence, and emerging research areas. BMC Med Inform Decis Mak. 2013;13(Suppl 2):S13. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kanstrup C, Mäkelä M, Graungaard AH. Women's reasons for choosing abortion method: a systematic literature review. Scand J Public Health. 2018;46(8):835-845. [ CrossRef ] [ Medline ]
  • Murray ME, Casson M, Pudwell J, Waddington A. Patients' motivation for surgical versus medical abortion. J Obstet Gynaecol Can. 2019;41(9):1325-1329. [ CrossRef ] [ Medline ]
  • Kummervold PE, Chronaki CE, Lausen B, Prokosch HU, Rasmussen J, Santana S, et al. eHealth trends in Europe 2005-2007: a population-based survey. J Med Internet Res. 2008;10(4):e42. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Early abortion options. Reproductive Health Access Project. 2022. URL: https://www.reproductiveaccess.org/resource/early-abortion-options/ [accessed 2019-03-24]
  • Readability Formulas: free readability assessment tools to help you write for your readers. URL: https://readabilityformulas.com/ [accessed 2022-12-15]
  • Martin RW, Andersen SB, O'Brien MA, Bravo P, Hoffmann T, Olling K, et al. Providing balanced information about options in patient decision aids: an update from the International Patient Decision Aid Standards. Med Decis Making. 2021;41(7):780-800. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lundgrén-Laine H, Salanterä S. Think-aloud technique and protocol analysis in clinical decision-making research. Qual Health Res. 2010;20(4):565-575. [ CrossRef ] [ Medline ]
  • Bangor A, Kortum PT, Miller JT. An empirical evaluation of the system usability scale. Int J Hum Comput Interact. 2008;24(6):574-594. [ CrossRef ]
  • Elo S, Kyngäs H. The qualitative content analysis process. J Adv Nurs. 2008;62(1):107-115. [ CrossRef ] [ Medline ]
  • Bansback N, Li LC, Lynd L, Bryan S. Development and preliminary user testing of the DCIDA (Dynamic Computer Interactive Decision Application) for 'nudging' patients towards high quality decisions. BMC Med Inform Decis Mak. 2014;14:62. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Trenaman L, Munro S, Almeida F, Ayas N, Hicklin J, Bansback N. Development of a patient decision aid prototype for adults with obstructive sleep apnea. Sleep Breath. 2016;20(2):653-661. [ CrossRef ] [ Medline ]
  • Degner LF, Sloan JA, Venkatesh P. The control preferences scale. Can J Nurs Res. 1997;29(3):21-43. [ Medline ]
  • Patev AJ, Hood KB. Towards a better understanding of abortion misinformation in the USA: a review of the literature. Cult Health Sex. 2021;23(3):285-300. [ CrossRef ] [ Medline ]
  • Légaré F, Kearing S, Clay K, Gagnon S, D'Amours D, Rousseau M, et al. Are you SURE?: Assessing patient decisional conflict with a 4-item screening test. Can Fam Physician. 2010;56(8):e308-e314. [ FREE Full text ] [ Medline ]
  • The Society of Obstetricians and Gynaecologists of Canada. URL: https://www.sexandu.ca/its-my-choice/ [accessed 2024-03-30]
  • Witteman HO, Ndjaboue R, Vaisson G, Dansokho SC, Arnold B, Bridges JFP, et al. Clarifying values: an updated and expanded systematic review and meta-analysis. Med Decis Making. 2021;41(7):801-820. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Cavet S, Fiala C, Scemama A, Partouche H. Assessment of pain during medical abortion with home use of misoprostol. Eur J Contracept Reprod Health Care. 2017;22(3):207-211. [ CrossRef ] [ Medline ]
  • Baraitser P, Free C, Norman WV, Lewandowska M, Meiksin R, Palmer MJ, et al. Improving experience of medical abortion at home in a changing therapeutic, technological and regulatory landscape: a realist review. BMJ Open. 2022;12(11):e066650. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Shaffer VA, Brodney S, Gavaruzzi T, Zisman-Ilani Y, Munro S, Smith SK, et al. Do personal stories make patient decision aids more effective? An update from the International Patient Decision Aids Standards. Med Decis Making. 2021;41(7):897-906. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Shaffer VA, Focella ES, Hathaway A, Scherer LD, Zikmund-Fisher BJ. On the usefulness of narratives: an interdisciplinary review and theoretical model. Ann Behav Med. 2018;52(5):429-442. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Joseph-Williams N, Abhyankar P, Boland L, Bravo P, Brenner AT, Brodney S, et al. What works in implementing patient decision aids in routine clinical settings? A rapid realist review and update from the International Patient Decision Aid Standards collaboration. Med Decis Making. 2021;41(7):907-937. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Guide to knowledge translation planning at CIHR: integrated and end-of-grant approaches. Government of Canada, Canadian Institutes of Health Research. 2012. URL: http://www.cihr-irsc.gc.ca/e/45321.html [accessed 2018-10-08]
  • Muscat DM, Smith J, Mac O, Cadet T, Giguere A, Housten AJ, et al. Addressing health literacy in patient decision aids: an update from the International Patient Decision Aid Standards. Med Decis Making. 2021;41(7):848-869. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • The CART access project. Contraception and Abortion Research Team (CART-GRAC). 2024. URL: https://cart-grac.ubc.ca/the-cart-access-project-2/ [accessed 2024-01-28]

Abbreviations

Edited by T Leung; submitted 07.05.23; peer-reviewed by G Sebastian, R French, B Zikmund-Fisher; comments to author 11.01.24; revised version received 23.02.24; accepted 25.02.24; published 16.04.24.

©Kate J Wahl, Melissa Brooks, Logan Trenaman, Kirsten Desjardins-Lorimer, Carolyn M Bell, Nazgul Chokmorova, Romy Segall, Janelle Syring, Aleyah Williams, Linda C Li, Wendy V Norman, Sarah Munro. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 16.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


Questionnaire Design | Methods, Question Types & Examples

Published on July 15, 2021 by Pritha Bhandari. Revised on June 22, 2023.

A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.

Table of contents

  • Questionnaires vs. surveys
  • Questionnaire methods
  • Open-ended vs. closed-ended questions
  • Question wording
  • Question order
  • Step-by-step guide to design
  • Other interesting articles
  • Frequently asked questions about questionnaire design

A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives , placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you’re interested in, choosing an appropriate sampling method , administering questionnaires, data cleansing and analysis, and interpretation.

Sampling is important in survey research because you’ll often aim to generalize your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimizing these will help you avoid several types of research bias , including sampling bias , ascertainment bias , and undercoverage bias .
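As a rough illustration of why representative sampling works, the sketch below (with a hypothetical sampling frame and made-up population share) draws a simple random sample and compares the sample proportion to the known population proportion:

```python
import random

# Hypothetical sampling frame: 10,000 people, 30% of whom hold view "A".
random.seed(7)  # fixed seed so the sketch is reproducible
population = ["A"] * 3000 + ["B"] * 7000

# Simple random sampling: every member of the frame has an equal chance
# of selection, which helps the sample reflect the range of views in the
# population and reduces sampling bias.
sample = random.sample(population, k=500)

share_a = sample.count("A") / len(sample)
print(f"Population share of 'A': 0.30, sample share: {share_a:.2f}")
```

The sample share will differ slightly from 0.30 on any single draw; that gap is sampling error, which shrinks as the sample size grows.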


Questionnaires can be self-administered or researcher-administered . Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • cost-effective
  • easy to administer for small and large groups
  • anonymous and suitable for sensitive topics

But they may also be:

  • unsuitable for people with limited literacy or verbal skills
  • susceptible to nonresponse bias (most people invited may not complete the questionnaire)
  • biased towards people who volunteer, because impersonal survey requests often go ignored

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • help you ensure the respondents are representative of your target audience
  • allow clarifications of ambiguous or unclear questions and answers
  • have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • costly and time-consuming to perform
  • more difficult to analyze if you have qualitative responses
  • likely to contain experimenter bias or demand characteristics
  • likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalizable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.
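As a quick sanity check, overlapping or gappy response items can be caught programmatically. This sketch uses hypothetical age brackets as the response items; the function and bracket values are illustrative, not from the article:

```python
# Hypothetical age-bracket response items as (low, high) inclusive ranges.
brackets = [(18, 24), (25, 34), (35, 44), (45, 54), (55, 64), (65, 120)]

def check_categories(ranges, domain_min, domain_max):
    """Verify response items are mutually exclusive and cover the domain."""
    ranges = sorted(ranges)
    # Exhaustive: the first range starts at the domain minimum and the
    # last range extends at least to the domain maximum.
    assert ranges[0][0] == domain_min and ranges[-1][1] >= domain_max
    for (lo1, hi1), (lo2, hi2) in zip(ranges, ranges[1:]):
        # Mutually exclusive: consecutive ranges must not overlap.
        assert hi1 < lo2, f"overlap between {(lo1, hi1)} and {(lo2, hi2)}"
        # No gaps between consecutive integer-age ranges.
        assert lo2 == hi1 + 1, f"gap between {hi1} and {lo2}"
    return True

print(check_categories(brackets, 18, 100))
```

An item set such as 18–25 followed by 25–34 would fail the overlap check, because a 25-year-old respondent could pick either option.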

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

Example response items for a nominal question about race:

  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Other Pacific Islander

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert scale questions collect ordinal data using rating scales with 5 or 7 points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale . Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.
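As an illustration of composite scoring (with hypothetical item names and responses; item 3 is assumed to be negatively worded and therefore reverse-coded), the average of four 5-point Likert items might be computed as:

```python
# One participant's responses to four 5-point Likert items (1-5).
# Item 3 is assumed to be negatively worded, so it is reverse-coded
# (1 -> 5, 2 -> 4, ...) before averaging.
responses = {"item1": 4, "item2": 5, "item3": 2, "item4": 4}
reverse_coded = {"item3"}

def composite_score(responses, reverse_coded, points=5):
    """Average the items, reverse-coding negatively worded ones."""
    scores = [
        (points + 1 - value) if item in reverse_coded else value
        for item, value in responses.items()
    ]
    # A composite of four or more Likert-type items is commonly treated
    # as quantitative data on an interval scale.
    return sum(scores) / len(scores)

print(composite_score(responses, reverse_coded))  # 4.25
```

Reverse-coding first matters: without it, the negatively worded item would pull the composite in the wrong direction.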

With interval or ratio scales , you can apply strong statistical hypothesis tests to address your research aims.

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer “multiracial” for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle for productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarizing responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorize answers, and you may also need to involve other researchers in data analysis for high reliability .
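Agreement between two coders applying the same coding scheme is often summarized with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch (the codes and responses here are made up for illustration):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' category assignments."""
    n = len(coder_a)
    # Observed agreement: share of items both coders coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to ten open-ended responses by two coders.
a = ["cost", "time", "cost", "other", "time", "cost", "time", "cost", "other", "time"]
b = ["cost", "time", "cost", "time", "time", "cost", "time", "cost", "other", "cost"]
print(round(cohens_kappa(a, b), 2))  # 0.68
```

Here the coders agree on 8 of 10 items (0.80 raw agreement), but kappa is lower (about 0.68) because some of that agreement would be expected by chance alone.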

Question wording can influence your respondents’ answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way ( reliable ) and measure exactly what you’re interested in ( valid ).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

Use a mix of both positive and negative frames to avoid research bias , and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It's best practice to provide a counterargument within the question as well.

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favor flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barreled questions. Double-barreled questions ask about more than one item at a time, which can confuse respondents.

For example, the double-barreled question "Do you agree or disagree that the government should be responsible for providing clean drinking water and high-speed internet to everyone?" could be difficult to answer for respondents who feel strongly about the right to clean drinking water but not high-speed internet. They might answer only about the topic they feel passionate about or provide a neutral answer instead, but neither option captures their true views.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

  • Strongly Agree
  • Agree
  • Undecided
  • Disagree
  • Strongly Disagree


You can organize the questions logically, with a clear progression from simple to complex. Alternatively, you can randomize the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioral or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect responses by priming respondents in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimize order effects because they can be a source of systematic error or bias in your study.

Randomization

Randomization involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomization, order effects will be minimized in your dataset. But a randomized order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
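Per-respondent randomization of question order can be sketched as follows (the question IDs are hypothetical; seeding the generator with the respondent ID keeps each respondent's ordering fixed and reproducible across sessions):

```python
import random

# Hypothetical question identifiers for a short questionnaire.
questions = ["Q1_commute", "Q2_remote_work", "Q3_satisfaction", "Q4_age"]

def randomized_order(respondent_id, questions):
    """Return a shuffled copy of the questions, reproducible per respondent."""
    rng = random.Random(respondent_id)  # seed with the respondent ID
    order = questions[:]                # copy so the master list is untouched
    rng.shuffle(order)
    return order

# Each respondent sees the same items, but in a different, fixed order.
print(randomized_order(101, questions))
print(randomized_order(102, questions))
```

Because the ordering depends only on the respondent ID, a respondent who resumes the questionnaire later sees the items in the same order, while order effects still average out across the whole sample.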

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalize your variables of interest into questionnaire items. Operationalizing concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they may become disengaged or inattentive for the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivized or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomize questions. Randomizing questions helps you avoid bias, but it can take more complex statistical analysis to interpret your data.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest the questionnaire to make sure that what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, and it includes sampling, data collection , and analysis. You can find out whether your procedures are unfeasible or susceptible to bias and make changes in time, but you can’t test a hypothesis with this type of study because it’s usually statistically underpowered .

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
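Combining response scores into a single attitude score can be sketched in a few lines of Python. This is a minimal illustration with made-up item names; the 5-point labels and the 1–5 scoring follow the convention described above, and reverse-coding negatively worded items is a common practice, not something prescribed by this article.

```python
# Map 5-point Likert response labels to numeric scores.
SCALE = {"Strongly disagree": 1, "Disagree": 2, "Undecided": 3,
         "Agree": 4, "Strongly agree": 5}

def likert_score(responses, reverse_items=()):
    """Sum item scores for one respondent.

    responses: dict mapping item id -> response label.
    reverse_items: items worded negatively, scored in reverse.
    """
    total = 0
    for item, label in responses.items():
        score = SCALE[label]
        if item in reverse_items:
            score = 6 - score   # reverse-code on a 1-5 scale
        total += score
    return total

answers = {"q1": "Agree", "q2": "Strongly agree", "q3": "Disagree"}
print(likert_score(answers, reverse_items={"q3"}))  # → 13
```

The combined total (13 here: 4 + 5 + 4 after reverse-coding q3) is what gets analyzed as the measure of the underlying attitude or trait.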

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire easier and quicker, but it may lead to bias. Randomization can minimize the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

Cite this Scribbr article


Bhandari, P. (2023, June 22). Questionnaire Design | Methods, Question Types & Examples. Scribbr. Retrieved April 15, 2024, from https://www.scribbr.com/methodology/questionnaire/
