
The role of teachers in continuous assessment: a model for primary schools in Windhoek

Ismael Uiseb

Related Papers

Paulina Hamukonda

literature review on continuous assessment

Teklebrhan Berhe

The purpose of the study is to assess the prospects of implementing continuous assessment (CA) in higher education. Data were collected through a structured questionnaire from instructors and students of Adigrat University, as well as Mekelle and Aksum Universities for comparison purposes. Both quantitative and qualitative analyses were carried out. The results indicated that instructors were not continuously collecting information about student progress, that only a small number of assessments were used in courses, and that few instructors gave feedback at all. A significant number of instructors and students had poor knowledge of and negative attitudes towards CA. Based on the results, it can be concluded and recommended that instructors need to use the results from CA as a means of identifying students' progress and thereby providing support. Accordingly, departments need to have strong documentation and reporting systems, and the maximum and minimum numbers of students in a class need to be put at a st...

FIGHTWELL MUTAMBO

The purpose of this study was to determine the perceptions that students and teachers have of Continuous Assessment in the two secondary schools of Chiwala and Masala in the Copperbelt Province of the Republic of Zambia. Perception is considered to be of prime importance in an individual's engagement in any activity: if perception is low, then the individual's commitment to the activity will most likely be low. The methods employed to determine perception were both quantitative and qualitative, as scaled questionnaires were used as well as focus group discussions and guided interviews. A total of 99 respondents were involved, consisting of 70 grade 12 pupils out of a population of 1,200 possible students and 29 teachers out of a possible 130. Percentages were used as the main method of analysis, while qualitative data were analysed qualitatively. The results indicated that Continuous Assessment was perceived to be important by all stakeholders, and to a large extent both pupils and teachers indicated that parents were well aware of the existence of Continuous Assessment in the two schools. However, the degree to which parents were interested and participated in its implementation needed evaluation. Practical subjects such as Home Economics were said to be difficult to assess regularly due to the cost of buying materials for assessment. This was also found to be true for science-related subjects and caused the schools to schedule serious assessment of practicals for mock examination time only. This was not enough, as it implies that the practical subjects are taught theoretically. The practice of continuous assessment in the two schools enjoys positive will from the administrators; however, there is room for tremendous improvement to ensure the participation of all. The guidance, careers and placement teachers, who were the main coordinators of continuous assessment, need to be motivated in some way, as the task before them is quite huge. The maintenance of Continuous Assessment records and their processing requires time, and therefore these teachers' teaching loads should be lower. The school Governance Boards and administration should consider mobilising resources directed at continuous assessment so that practical subjects that are meant to impart skills are effectively taught. To reduce the physical space needed to store Continuous Assessment records, the schools should consider investing in electronic data facilities. The words pupils and students are used interchangeably in this study. Keywords: Perception; Challenges; Implementing; Continuous Assessment; Feedback.

Ethiopian Journal of Education and Sciences

Abiy Filate

Irene Vilakazi

temitope opaleye

ABSTRACT Evaluation of student performance in accordance with behavioural objectives is ascertained through continuous assessment. Hence, this study sought to determine the perception of teachers towards the use of continuous assessment in senior secondary schools. It was a descriptive survey, and a stratified random sampling technique was used to select a sample of one hundred and eighty (n=180) participants for the study. The instrument titled Teachers' Perception Towards the Use of Continuous Assessment (TPTUCA) was used for data collection. Frequency counts, percentages and chi-square were used to analyze the data. Teachers have a positive attitude towards the use of continuous assessment, although they find it time-consuming and demanding. There were significant and meaningful associations for some continuous assessment strategies, such as essay, rating scale, practical demonstration and homework, but not for observation and multiple-choice tests. Also, essay tests and socio-metrics in the area of testing previous knowledge were found to be significant in assessing each student's achievement of the behavioural objective(s). The choice of assessment strategies covers the three domains of learning, i.e. cognitive, affective and psychomotor. Recommendations were made on the need for in-service training and refresher courses to update knowledge and to ensure the educational policy on evaluation of aggregate student performance across cognitive, affective and psychomotor skills. Effective and efficient supervision of classroom teachers should be in place for checks and balances.

Imamudin Husen

getinet seifu walde

This paper examines the status of the implementation of continuous assessment (CA) in Mettu University. A stratified random sampling method was used to select 309 students and 29 instructors, and a purposive method was used to select quality assurance and faculty deans. Questionnaires, focus group discussions, interviews and documents were used for data collection. Quantitative data were analyzed using descriptive statistics, whereas qualitative data were analyzed qualitatively. The findings of the study revealed that instructors considered CA to be continuous testing, while students perceived it as a method of assessment used to increase their academic results. The major challenges were: lack of clear manuals and guidelines; lack of continuous and adequate training, awareness and skills on the part of instructors; large class sizes; lack of infrastructure and instructional materials; and poor communication between staff and the concerned bodies. Based on the results, recommendations were forwarded.

Introduction. Assessment is the process of making judgments about a student's performance on a particular task (Harlan, 1994). Arends (1997) also defined assessment as the full range of information gathered and synthesized by teachers for making decisions about their students. According to Gipps (1994), it is a wide range of methods for evaluating pupil performance and attainment, including formal testing and examinations, practical and oral assessment, classroom-based assessment carried out by teachers, and portfolios. These definitions suggest that educational assessment is a broad term that includes many procedures used to obtain information about student achievement and learning progress. As correctly pointed out by Cone and Foster (1991), good measurement resulting in accurate data is the foundation of sound decision making. According to them, there is little doubt among educational practitioners about the special value of assessment as a basic condition for effective learning. This is because traditional ways of testing can only deal with a fraction of what somebody wants to evaluate. Therefore, Alausa (2004) indicated that the major problems in the assessment of learners have been in the approaches or methods of assessment. To solve these problems, experts and educational policy makers came up with the concept of continuous assessment (CA). Many educational systems all over the world have adopted this approach in assessing learners' achievement in many subject areas. This is because CA approaches can help to rectify the problem of mismatches between tests and classroom activities. According to Bolyard (2003), CA is a strategy used by teachers to support the attainment of goals and skills by learners over a period of time. It occurs as part of the daily interaction between teachers and

Dereje Mathewos

Dereje Mathewos , Mesfin Aberra

ABSTRACT The major objective of this research was to examine the practices and challenges in implementing continuous assessment while teaching English. To achieve this objective, the research was designed as a descriptive survey. The participants of the study were 26 grade 11 English teachers from Alamura and Tabor secondary and preparatory schools in Hawassa City Administration. Since these schools were the researcher's workplace, they were selected using a purposive sampling technique. The data were collected through archival document analysis, semi-structured interviews and a questionnaire, and analyzed using frequencies, percentages and narrative analysis. The results indicated that there were differences between teachers in implementing continuous assessment, even within the same record sheet given. They gave more emphasis to grammar skills in assessing students' performance. Tests, quizzes and final examinations were the dominant assessment tools implemented by teachers. With regard to challenges, there were multiple determinants that hindered the effective implementation of continuous assessment, such as teachers being overloaded with teaching duties that consume most of their time, large class sizes, the bulk of the textbook, which consisted of lessons that had to be completed within the prescribed academic year, lack of cooperation from other subject teachers during invigilation, and the like. As a result, continuous assessment was not fully implemented as desired. It was recommended that teachers need to have a clear plan to meaningfully implement continuous assessment and should balance all language skills in assessment. Key words: Practice, Challenge, Assessment, Continuous assessment, Alternative Assessment, document analysis


3.7 Pupils’ role in continuous assessment

In Ghana, the basic school continuous assessment guide, as described earlier in Sections 3.2.1 and 3.2.2, requires the teacher to plan, set learning objectives, design activities, and mark and record pupils’ scores (MoE, 2004). The only role pupils play in the continuous assessment process is performing tasks assigned to them by the teacher.

The situation in Ghana reflects Gersch’s (1992) observation that although the process and purpose of assessment may vary from professional to professional, and indeed there are different emphases on tests, observation and other techniques, pupils themselves are conventionally ascribed a subservient role in the whole assessment process. They are often expected to carry out specified tasks, answer specific questions, undertake written activities or follow a set of procedures. The child is generally seen as a relatively ‘passive object’, and assessment is viewed as something which is ‘done to the child’ rather than something the child is very actively involved in (p. 25). According to Gersch, if a child joins in too actively, or becomes too questioning or challenging, he or she might be regarded as interfering. Perhaps, historically, the idea of ‘children knowing their place’ and ‘being seen and not heard’ has left its mark when it comes to pupil assessment (p. 25).

For their part, Tilstone, Lacey, Porter and Robertson (2000) suggest that pupils themselves have little role to play in the traditional perspective on assessment: it is something done for them. However, in a dynamic view of assessment, pupils have a central part to play. They are involved in setting their own targets and monitoring their own progress, and there are several frameworks that can support pupil involvement, such as records of attainment. Currently, in Ghana, there is no provision for pupil involvement in their assessment. It will be impossible for basic school pupils to play any meaningful role in their assessments, as there are no frameworks to support pupils’ involvement in their assessment.

As discussed earlier (Section 3.2.2), the continuous assessment model in Ghana seems to apply principles from behaviourist learning theory. For example, teachers assess and reinforce pupils’ responses (James, 2006) and make records on the basis of new assessments; pupils’ progress is measured against performance criteria which are teacher-defined (Sebba, Byers and Rose, 1993). The literature (for example, MoE, 2004) shows that in Ghana pupils’ role in continuous assessment is limited to answering questions and working on tasks designed by teachers. If pupils’ involvement is to be fostered, then in addition to principles drawing on behaviourist theory, the continuous assessment programme in Ghana has to adopt some principles from the cognitive, constructivist theories of learning (see Chapter 1). This, however, requires radical changes in teachers’ beliefs, competencies and their conceptualisation of continuous assessment; these shifts may not occur easily.

3.7.1 Self- and peer-assessment

The literature shows that self- and peer-assessment are largely adopted in assessment practice that applies principles from constructivist learning theory (Pollard et al., 2005). Self- and peer-assessment, when applied in classrooms, can foster improvement for all pupils, including those who record lower attainments in class. Black and Wiliam (1998) point out that assessment that involves pupils in their own self-evaluation is a key element in improving learning. This is succinctly expressed by the Assessment Reform Group (2002), cited by Clarke (2005), as follows:

Independent learners have the ability to seek out and gain new skills, new knowledge and new understandings. They are able to engage in self-reflection and to identify the next steps in their learning. Teachers should equip learners with the desire and the capacity to take charge of their learning through developing the skills of self-assessment (p. 109).

Clarke (2005) suggests that one reason peer-assessment is so valuable is that pupils often give and receive criticisms of their work more freely than in the traditional teacher/pupil interchange. Another advantage is that the language pupils use with each other is the language they would naturally use, rather than school language. Further, peer-assessment can involve a few minutes of pupils helping each other to improve their work.

However, Rose, McNamara and O’Neil (1996) point out that in considering approaches to the greater involvement of pupils in self-assessment and the planning process, it is necessary to be clear about the purpose to be served by such an approach, and the practicalities of its implementation. Further, greater involvement of pupils in the management of their assessment and learning is dependent upon the development of teachers’ confidence in their own abilities to maintain effective classroom management.

Self-assessment does not occur automatically. Rose, McNamara and O’Neil (1996), in considering the involvement of pupils in self-assessment, identify the importance of providing pupils with a range of skills before they can take more responsibility for their own learning. They list the ability to recall, to summarise, to organise evidence, to reflect and to evaluate as prerequisites for effective self-evaluation. They also describe the skills of attending, completing tasks, and joint goal setting as essential components of ‘learning to learn’, and provide examples of ways in which pupils with learning difficulties have been encouraged to move towards achieving these requirements.

Since teacher education in Ghana does not emphasise assessment for learning in its programmes, teachers may lack the competence, knowledge, skills and confidence to foster self- and peer-assessment in classrooms (see Chapter 2).

3.8 Summary of the chapter

The review has revealed the nature of continuous assessment in Ghana in relation to international perspectives on teacher assessment. Unlike teacher assessments done elsewhere, continuous assessment in Ghana comprises three distinctive activities: classroom exercises, tests and homework. These activities are designed specifically to measure attainments in order to generate marks to fill pupils’ records. Pupils’ aggregated continuous assessment is added to the external examination (BECE) for grading and certification.

However, literature from the UK and USA reveals that classroom assessments that focus more on informing teaching and learning (formative assessment) support lower-attaining pupils to improve. These countries have relevant policies, support and resources to enhance teachers’ practices. The materials from Ghana, the UK and the USA will facilitate the discussion of the data from the fieldwork in Chapters 5, 6 and 7. This will enable me to draw conclusions as to whether teachers’ continuous assessment practices support and enhance lower-attaining pupils’ learning in classrooms.


Perspectives on Medical Education, 7(1), February 2018
Writing an effective literature review

Lorelei Lingard

Schulich School of Medicine & Dentistry, Health Sciences Addition, Western University, London, Ontario Canada

In the Writer’s Craft section we offer simple tips to improve your writing in one of three areas: Energy, Clarity and Persuasiveness. Each entry focuses on a key writing feature or strategy, illustrates how it commonly goes wrong, teaches the grammatical underpinnings necessary to understand it and offers suggestions to wield it effectively. We encourage readers to share comments on or suggestions for this section on Twitter, using the hashtag: #how’syourwriting?

This Writer’s Craft instalment is the first in a two-part series that offers strategies for effectively presenting the literature review section of a research manuscript. This piece alerts writers to the importance of not only summarizing what is known but also identifying precisely what is not, in order to explicitly signal the relevance of their research. In this instalment, I will introduce readers to the mapping the gap metaphor, the knowledge claims heuristic, and the need to characterize the gap.

Mapping the gap

The purpose of the literature review section of a manuscript is not to report what is known about your topic. The purpose is to identify what remains unknown— what academic writing scholar Janet Giltrow has called the ‘knowledge deficit’ — thus establishing the need for your research study [ 1 ]. In an earlier Writer’s Craft instalment, the Problem-Gap-Hook heuristic was introduced as a way of opening your paper with a clear statement of the problem that your work grapples with, the gap in our current knowledge about that problem, and the reason the gap matters [ 2 ]. This article explains how to use the literature review section of your paper to build and characterize the Gap claim in your Problem-Gap-Hook. The metaphor of ‘mapping the gap’ is a way of thinking about how to select and arrange your review of the existing literature so that readers can recognize why your research needed to be done, and why its results constitute a meaningful advance on what was already known about the topic.

Many writers have learned that the literature review should describe what is known. The trouble with this approach is that it can produce a laundry list of facts-in-the-world that does not persuade the reader that the current study is a necessary next step. Instead, think of your literature review as painting in a map of your research domain: as you review existing knowledge, you are painting in sections of the map, but your goal is not to end with the whole map fully painted. That would mean there is nothing more we need to know about the topic, and that leaves no room for your research. What you want to end up with is a map in which painted sections surround and emphasize a white space, a gap in what is known that matters. Conceptualizing your literature review this way helps to ensure that it achieves its dual goal: of presenting what is known and pointing out what is not—the latter of these goals is necessary for your literature review to establish the necessity and importance of the research you are about to describe in the methods section which will immediately follow the literature review.

To a novice researcher or graduate student, this may seem counterintuitive. Hopefully you have invested significant time in reading the existing literature, and you are understandably keen to demonstrate that you’ve read everything ever published about your topic! Be careful, though, not to use the literature review section to regurgitate all of your reading in manuscript form. For one thing, it creates a laundry list of facts that makes for horrible reading. But there are three other reasons for avoiding this approach. First, you don’t have the space. In published medical education research papers, the literature review is quite short, ranging from a few paragraphs to a few pages, so you can’t summarize everything you’ve read. Second, you’re preaching to the converted. If you approach your paper as a contribution to an ongoing scholarly conversation [ 2 ], then your literature review should summarize just the aspects of that conversation that are required to situate your conversational turn as informed and relevant. Third, the key to relevance is to point to a gap in what is known. To do so, you summarize what is known for the express purpose of identifying what is not known. Seen this way, the literature review should exert a gravitational pull on the reader, leading them inexorably to the white space on the map of knowledge you’ve painted for them. That white space is the space that your research fills.

Knowledge claims

To help writers move beyond the laundry list, the notion of ‘knowledge claims’ can be useful. A knowledge claim is a way of presenting the growing understanding of the community of researchers who have been exploring your topic. These are not disembodied facts, but rather incremental insights that some in the field may agree with and some may not, depending on their different methodological and disciplinary approaches to the topic. Treating the literature review as a story of the knowledge claims being made by researchers in the field can help writers with one of the most sophisticated aspects of a literature review—locating the knowledge being reviewed. Where does it come from? What is debated? How do different methodologies influence the knowledge being accumulated? And so on.

Consider this example of the knowledge claims (KC), Gap and Hook for the literature review section of a research paper on distributed healthcare teamwork:

KC: We know that poor team communication can cause errors.
KC: And we know that team training can be effective in improving team communication.
KC: This knowledge has prompted a push to incorporate teamwork training principles into health professions education curricula.
KC: However, most of what we know about team training has come from research with co-located teams, i.e., teams whose members work together in time and space.
Gap: Little is known about how teamwork training principles would apply in distributed teams, whose members work asynchronously and are spread across different locations.
Hook: Given that much healthcare teamwork is distributed rather than co-located, our curricula will be severely lacking until we create refined teamwork training principles that reflect distributed as well as co-located work contexts.

The ‘We know that …’ structure illustrated in this example is a template for helping you draft and organize. In your final version, your knowledge claims will be expressed with more sophistication. For instance, ‘We know that poor team communication can cause errors’ will become something like ‘Over a decade of patient safety research has demonstrated that poor team communication is the dominant cause of medical errors.’ This simple template of knowledge claims, though, provides an outline for the paragraphs in your literature review, each of which will provide detailed evidence to illustrate a knowledge claim. Using this approach, the order of the paragraphs in the literature review is strategic and persuasive, leading the reader to the gap claim that positions the relevance of the current study. To expand your vocabulary for creating such knowledge claims, linking them logically and positioning yourself amid them, I highly recommend Graff and Birkenstein’s little handbook of ‘templates’ [ 3 ].

As you organize your knowledge claims, you will also want to consider whether you are trying to map the gap in a well-studied field, or a relatively understudied one. The rhetorical challenge is different in each case. In a well-studied field, like professionalism in medical education, you must make a strong, explicit case for the existence of a gap. Readers may come to your paper tired of hearing about this topic and tempted to think we can’t possibly need more knowledge about it. Listing the knowledge claims can help you organize them most effectively and determine which pieces of knowledge may be unnecessary to map the white space your research attempts to fill. This does not mean that you leave out relevant information: your literature review must still be accurate. But, since you will not be able to include everything, selecting carefully among the possible knowledge claims is essential to producing a coherent, well-argued literature review.

Characterizing the gap

Once you’ve identified the gap, your literature review must characterize it. What kind of gap have you found? There are many ways to characterize a gap, but some of the more common include:

  • a pure knowledge deficit—‘no one has looked at the relationship between longitudinal integrated clerkships and medical student abuse’
  • a shortcoming in the scholarship, often due to philosophical or methodological tendencies and oversights—‘scholars have interpreted x from a cognitivist perspective, but ignored the humanist perspective’ or ‘to date, we have surveyed the frequency of medical errors committed by residents, but we have not explored their subjective experience of such errors’
  • a controversy—‘scholars disagree on the definition of professionalism in medicine …’
  • a pervasive and unproven assumption—‘the theme of technological heroism—technology will solve what ails teamwork—is ubiquitous in the literature, but what is that belief based on?’

To characterize the kind of gap, you need to know the literature thoroughly. That means more than understanding each paper individually; you also need to be placing each paper in relation to others. This may require changing your note-taking technique while you’re reading; take notes on what each paper contributes to knowledge, but also on how it relates to other papers you’ve read, and what it suggests about the kind of gap that is emerging.

In summary, think of your literature review as mapping the gap rather than simply summarizing the known. And pay attention to characterizing the kind of gap you’ve mapped. This strategy can help to make your literature review into a compelling argument rather than a list of facts. It can remind you of the danger of describing so fully what is known that the reader is left with the sense that there is no pressing need to know more. And it can help you to establish a coherence between the kind of gap you’ve identified and the study methodology you will use to fill it.

Acknowledgements

Thanks to Mark Goldszmidt for his feedback on an early version of this manuscript.

Lorelei Lingard, PhD, is director of the Centre for Education Research & Innovation at Schulich School of Medicine & Dentistry, and professor in the Department of Medicine at Western University in London, Ontario, Canada.

  • Open access
  • Published: 23 April 2024

Designing feedback processes in the workplace-based learning of undergraduate health professions education: a scoping review

  • Javiera Fuentes-Cimma 1,2,
  • Dominique Sluijsmans 3,
  • Arnoldo Riquelme 4,
  • Ignacio Villagran 1 (ORCID: orcid.org/0000-0003-3130-8326),
  • Lorena Isbej 2,5 (ORCID: orcid.org/0000-0002-4272-8484),
  • María Teresa Olivares-Labbe 6 &
  • Sylvia Heeneman 7

BMC Medical Education volume 24, Article number: 440 (2024)

Feedback processes are crucial for learning, guiding improvement, and enhancing performance. In workplace-based learning settings, it is advocated that diverse teaching and assessment activities be designed and implemented, generating feedback that students use, with proper guidance, to close the gap between current and desired performance levels. Since productive feedback processes rely on observed information regarding a student's performance, it is imperative to establish structured feedback activities within undergraduate workplace-based learning settings. However, these settings are characterized by their unpredictable nature, which can either promote learning or present challenges in offering structured learning opportunities for students. This scoping review maps the literature on how feedback processes are organised in undergraduate clinical workplace-based learning settings, providing insight into the design and use of feedback.

A scoping review was conducted. Studies were identified from seven databases and ten relevant journals in medical education. The screening process was performed independently in duplicate with the support of the StArt program. Data were organized in a data chart and analyzed using thematic analysis. The feedback loop with a sociocultural perspective was used as a theoretical framework.

The search yielded 4,877 papers, and 61 were included in the review. Two themes were identified in the qualitative analysis: (1) The organization of the feedback processes in workplace-based learning settings, and (2) Sociocultural factors influencing the organization of feedback processes. The literature describes multiple teaching and assessment activities that generate feedback information. Most papers described experiences and perceptions of diverse teaching and assessment feedback activities. Few studies described how feedback processes improve performance. Sociocultural factors such as establishing a feedback culture, enabling stable and trustworthy relationships, and enhancing student feedback agency are crucial for productive feedback processes.

Conclusions

This review identified concrete ideas regarding how feedback could be organized within the clinical workplace to promote feedback processes. The feedback encounter should be organized to allow follow-up of the feedback, i.e., working on the required learning and performance goals at the next occasion. Educational programs should design feedback processes by appropriately planning subsequent tasks and activities. More insight is needed into designing a full-loop feedback process, with specific attention to effective feedforward practices.


The design of effective feedback processes in higher education has been important for educators and researchers and has prompted numerous publications discussing potential mechanisms, theoretical frameworks, and best practice examples over the past few decades. Initially, research on feedback focused primarily on teachers and feedback delivery, and students were depicted as passive feedback recipients [ 1 , 2 , 3 ]. The feedback conversation has recently evolved to a more dynamic emphasis on interaction, sense-making, outcomes in actions, and engagement with learners [ 2 ]. This shift aligns with utilizing the feedback process as a form of social interaction or dialogue to enhance performance [ 4 ]. Henderson et al. (2019) defined feedback processes as "where the learner makes sense of performance-relevant information to promote their learning" (p. 17). When a student grasps the information concerning their performance in connection to the desired learning outcome and subsequently takes suitable action, a feedback loop is closed, so the process can be regarded as successful [ 5 , 6 ].

Hattie and Timperley (2007) proposed a comprehensive perspective on feedback, the so-called feedback loop, to answer three key questions: “Where am I going?” “How am I going?” and “Where to next?” [ 7 ]. Each question represents a key dimension of the feedback loop. The first is the feed-up, which consists of setting learning goals and sharing clear objectives of learners' performance expectations. While the concept of the feed-up might not be consistently included in the literature, it is considered to be related to principles of effective feedback and goal setting within educational contexts [ 7 , 8 ]. Goal setting allows students to focus on tasks and learning, and teachers to have clear intended learning outcomes to enable the design of aligned activities and tasks in which feedback processes can be embedded [ 9 ]. Teachers can improve the feed-up dimension by proposing clear, challenging, but achievable goals [ 7 ]. The second dimension of the feedback loop focuses on feedback itself and aims to answer the second question by obtaining information about students' current performance. Different teaching and assessment activities can be used to obtain feedback information, and it can be provided by a teacher or tutor, a peer, oneself, a patient, or another coworker. The last dimension of the feedback loop is the feedforward, which is specifically associated with using feedback to improve performance or change behaviors [ 10 ]. Feedforward is crucial in closing the loop because it refers to the specific actions students must take to reduce the gap between current and desired performance [ 7 ].

From a sociocultural perspective, feedback processes involve a social practice consisting of intricate relationships within a learning context [ 11 ]. The main feature of this approach is that students learn from feedback only when the feedback encounter includes generating, making sense of, and acting upon the information given [ 11 ]. In the context of workplace-based learning (WBL), actionable feedback plays a crucial role in enabling learners to leverage specific feedback to enhance their performance, skills, and conceptual understandings. The WBL environment provides students with a valuable opportunity to gain hands-on experience in authentic clinical settings, in which students work more independently on real-world tasks, allowing them to develop and exhibit their competencies [ 3 ]. However, WBL settings are characterized by their unpredictable nature, which can either promote self-directed learning or present challenges in offering structured learning opportunities for students [ 12 ]. Consequently, designing purposive feedback opportunities within WBL settings is a significant challenge for clinical teachers and faculty.

In undergraduate clinical education, feedback opportunities are often constrained due to the emphasis on clinical work and the absence of dedicated time for teaching [ 13 ]. Students are expected to perform autonomously under supervision, ideally achieved by giving them space to practice progressively and providing continuous instances of constructive feedback [ 14 ]. However, the hierarchy often present in clinical settings places undergraduate students in a dependent position, below residents and specialists [ 15 ]. Undergraduate or junior students may have different approaches to receiving and using feedback. If their priority is meeting the minimum standards given pass-fail consequences and acting merely as feedback recipients, other incentives may be needed to engage with the feedback processes because they will need more learning support [ 16 , 17 ]. Adequate supervision and feedback have been recognized as vital educational support in encouraging students to adopt a constructive learning approach [ 18 ]. Given that productive feedback processes rely on observed information regarding a student's performance, it is imperative to establish structured teaching and learning feedback activities within undergraduate WBL settings.

Despite the extensive research on feedback, a significant proportion of published studies involve residents or postgraduate students [ 19 , 20 ]. Recent reviews focusing on feedback interventions within medical education have clearly distinguished between undergraduate medical students and residents or fellows [ 21 ]. To gain a comprehensive understanding of initiatives related to actionable feedback in the WBL environment for undergraduate health professions, a scoping review of the existing literature could provide insight into how feedback processes are designed in that context. Accordingly, the present scoping review aims to answer the following research question: How are the feedback processes designed in the undergraduate health professions' workplace-based learning environments?

A scoping review was conducted using the five-step methodological framework proposed by Arksey and O'Malley (2005) [ 22 ], intertwined with the PRISMA checklist extension for scoping reviews to provide reporting guidance for this specific type of knowledge synthesis [ 23 ]. Scoping reviews allow us to study the literature without restricting the methodological quality of the studies found, systematically and comprehensively map the literature, and identify gaps [ 24 ]. Furthermore, a scoping review was used because this topic is not suitable for a systematic review due to the varied approaches described and the large difference in the methodologies used [ 21 ].

Search strategy

With the collaboration of a medical librarian, the authors used the research question to guide the search strategy. An initial meeting was held to define keywords and search resources. The proposed search strategy was reviewed by the research team, and then the study selection was conducted in two steps:

  • An online database search included Medline/PubMed, Web of Science, CINAHL, Cochrane Library, Embase, ERIC, and PsycINFO.

  • A directed search of ten relevant journals in the health sciences education field (Academic Medicine, Medical Education, Advances in Health Sciences Education, Medical Teacher, Teaching and Learning in Medicine, Journal of Surgical Education, BMC Medical Education, Medical Education Online, Perspectives on Medical Education and The Clinical Teacher) was performed.

The research team conducted a pilot search before the full search to identify whether the topic was susceptible to a scoping review. The full search was conducted in November 2022. One team member (MO) identified the papers in the databases. JF searched the selected journals. The authors included studies written in English due to feasibility issues, with no time span limitation. After eliminating duplicates, two research team members (JF and IV) independently reviewed all the titles and abstracts using the exclusion and inclusion criteria described in Table 1, with the support of the screening application StArt [ 25 ]. A third team member (AR) reviewed the titles and abstracts when the first two disagreed. The reviewer team met again at a midpoint and a final stage to discuss the challenges related to study selection. Articles included for full-text review were exported to Mendeley. JF independently screened all full-text papers, and AR verified 10% for inclusion. The authors did not analyze study quality or risk of bias during study selection, which is consistent with conducting a scoping review.

The analysis of the results incorporated a descriptive summary and a thematic analysis, which was carried out to clarify and give consistency to the results' reporting [ 22 , 24 , 26 ]. Quantitative data were analyzed to report the characteristics of the studies, populations, settings, methods, and outcomes. Qualitative data were labeled, coded, and categorized into themes by three team members (JF, SH, and DS). The feedback loop framework with a sociocultural perspective was used as the theoretical framework to analyze the results.

The keywords used for the search strategies were as follows:

Clinical clerkship; feedback; formative feedback; health professions; undergraduate medical education; workplace.

Definitions of the keywords used for the present review are available in Appendix 1 .

As an example, we included the search strategy that we used in the Medline/PubMed database when conducting the full search:

("Formative Feedback"[Mesh] OR feedback) AND ("Workplace"[Mesh] OR workplace OR "Clinical Clerkship"[Mesh] OR clerkship) AND (("Education, Medical, Undergraduate"[Mesh] OR undergraduate health profession*) OR (learner* medical education)).

Inclusion and exclusion criteria

The following inclusion and exclusion criteria were used (Table  1 ):

Data extraction

The research group developed a data-charting form to organize the information obtained from the studies. The process was iterative, as the data chart was continuously reviewed and improved as necessary. In addition, following Levac et al.'s (2010) recommendation, the three members involved in the charting process (JF, LI, and IV) independently reviewed the first five selected studies to determine whether the data extraction was consistent with the objectives of this scoping review and to ensure consistency. Then, the team met using web-conferencing software (Zoom; CA, USA) to review the results and adjust any details in the chart. The same three members extracted data independently from all the selected studies, with two members reviewing each paper [ 26 ]. A third team member was consulted if any conflict occurred when extracting data. The data chart identified demographic patterns and facilitated the data synthesis. To organize data, we used a shared Excel spreadsheet, considering the following headings: title, author(s), year of publication, journal/source, country/origin, aim of the study, research question (if any), population/sample size, participants, discipline, setting, methodology, study design, data collection, data analysis, intervention, outcomes, outcomes measure, key findings, and relation of findings to research question.
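As a rough sketch of how such a charting form can be kept consistent across several reviewers, the Python snippet below fixes the headings listed above as a CSV schema and rejects records containing unexpected fields. The column names are taken from the text; the file name and helper function are hypothetical, not the authors' actual tooling.

# Sketch: a fixed schema for the shared data-charting spreadsheet described above.
# Column names come from the text; this helper and the file name are illustrative.
import csv
import os

CHART_COLUMNS = [
    "title", "author(s)", "year of publication", "journal/source",
    "country/origin", "aim of the study", "research question (if any)",
    "population/sample size", "participants", "discipline", "setting",
    "methodology", "study design", "data collection", "data analysis",
    "intervention", "outcomes", "outcomes measure", "key findings",
    "relation of findings to research question",
]

def append_record(path, record):
    """Append one charted study, rejecting fields outside the agreed schema."""
    unknown = set(record) - set(CHART_COLUMNS)
    if unknown:
        raise ValueError("Unexpected fields: %s" % sorted(unknown))
    is_new = not os.path.exists(path)
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=CHART_COLUMNS)
        if is_new:
            writer.writeheader()   # header only once, when the file is created
        writer.writerow(record)    # fields left out are written as empty cells

append_record("data_chart.csv", {
    "title": "Example study",
    "year of publication": "2016",
    "setting": "undergraduate clinical clerkship",
})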

Additionally, all the included papers were uploaded to AtlasTi v19 to facilitate the qualitative analysis. Three team members (JF, SH, and DS) independently coded the first six papers to create a list of codes to ensure consistency and rigor. The group met several times to discuss and refine the list of codes. Then, one member of the team (JF) used the code list to code all the rest of the papers. Once all papers were coded, the team organized codes into descriptive themes aligned with the research question.

Preliminary results were shared with a number of stakeholders (six clinical teachers, ten students, six medical educators) to elicit their opinions as an opportunity to build on the evidence and offer a greater level of meaning, content expertise, and perspective to the preliminary findings [ 26 ]. No quality appraisal of the studies is considered for this scoping review, which aligns with the frameworks for guiding scoping reviews [ 27 ].

The datasets analyzed during the current study are available from the corresponding author upon request.

The database search resulted in 3,597 papers, and the directed search of the most relevant journals in the health sciences education field yielded 2,096 titles, giving 5,693 records in total. An example of the results from one database is available in Appendix 2. Of the titles obtained, 816 duplicates were eliminated, leaving 4,877 papers whose titles and abstracts the team reviewed. Of these, 120 were selected for full-text review. Finally, 61 papers were included in this scoping review (Fig. 1), as listed in Table 2.

Fig. 1. PRISMA flow diagram for included studies, incorporating records identified through the database and direct searching.

The selected studies were published between 1986 and 2022, and seventy-five percent (46) were published during the last decade. Of all the articles included in this review, 13% (8) were literature reviews, including one integrative review [ 28 ] and four scoping reviews [ 29 , 30 , 31 , 32 ]. Finally, fifty-three (87%) original or empirical papers were included (i.e., studies that answered a research question or achieved a research purpose through qualitative or quantitative methodologies) [ 15 , 33 – 85 ].

Table 2 summarizes the papers included in the present scoping review, and Table  3 describes the characteristics of the included studies.

The thematic analysis resulted in two themes: (1) the organization of feedback processes in WBL settings, and (2) sociocultural factors influencing the organization of feedback processes. Table 4 gives a summary of the themes and subthemes.

Organization of feedback processes in WBL settings.

Setting learning goals (i.e., feed-up dimension).

Feedback that focuses on students’ learning needs and is based on known performance standards enhances students’ response and their setting of learning goals [ 30 ]. Discussing goals and agreements before starting clinical practice enhances students’ feedback-seeking behavior [ 39 ] and responsiveness to feedback [ 83 ]. Farrell et al. (2017) found that teacher-learner co-constructed learning goals enhance feedback interactions and help establish educational alliances, improving the learning experience [ 50 ]. However, Kiger (2020) found that sharing individualized learning plans with teachers aligned feedback with learning goals but did not improve students’ perceived use of feedback [ 64 ].

Two papers of this set pointed out the importance of goal-oriented feedback, a dynamic process that depends on discussion of goal setting between teachers and students [ 50 ] and influences how individuals experience, approach, and respond to upcoming learning activities [ 34 ]. Goal-oriented feedback should be embedded in the learning experience of the clinical workplace, as it can enhance students' engagement in safe feedback dialogues [ 50 ]. Ideally, each feedback encounter in the WBL context should conclude, in addition to setting a plan of action to achieve the desired goal, with a reflection on the next goal [ 50 ].

Feedback strategies within the WBL environment (i.e., feedback dimension).

In undergraduate WBL environments, there are several tasks and feedback opportunities organized in the undergraduate clinical workplace that can enable feedback processes:

Questions from clinical teachers to students are a feedback strategy [ 74 ]. There are different types of questions that the teacher can use, either to clarify concepts, to reach the correct answer, or to facilitate self-correction [ 74 ]. Usually, questions can be used in conjunction with other communication strategies, such as pauses, which enable self-correction by the student [ 74 ]. Students can also ask questions to obtain feedback on their performance [ 54 ]. However, question-and-answer as a feedback strategy usually provides information on either correct or incorrect answers and fewer suggestions for improvement, rendering it less constructive as a feedback strategy [ 82 ].

Direct observation of performance is needed by default to provide information to be used as input in the feedback process [ 33 , 46 , 49 , 86 ]. In the process of observation, teachers can include clarification of objectives (i.e., the feed-up dimension) and suggestions for an action plan (i.e., feedforward) [ 50 ]. Accordingly, Schopper et al. (2016) showed that students valued being observed while interviewing patients, as they received feedback that helped them become more efficient and effective as interviewers and communicators [ 33 ]. Moreover, it is widely described that direct observation improves feedback credibility [ 33 , 40 , 84 ]. Ideally, observation should be deliberate [ 33 , 83 ] or informal and spontaneous [ 33 ], conducted by a (clinical) expert [ 46 , 86 ], with feedback provided immediately after the observation; and the clinical teacher, if possible, should schedule or be alert to follow-up observations to promote closing the gap between current and desired performance [ 46 ].

Workplace-based assessments (WBAs), by definition, entail direct observation of performance during authentic task demonstration [ 39 , 46 , 56 , 87 ]. WBAs can significantly impact behavioral change in medical students [ 55 ]. Organizing and designing formative WBAs and embedding these in a feedback dialogue is essential for effective learning [ 31 ].

Summative organization of WBAs is a well-described barrier to feedback uptake in the clinical workplace [ 35 , 46 ]. If feedback is perceived as summative, or organized as a pass-fail decision, students may be less inclined to use the feedback for future learning [ 52 ]. According to Schopper et al. (2016), using a scale within a WBA makes students shift their focus during the clinical interaction and see it as an assessment with consequences [ 33 ]. Harrison et al. (2016) pointed out that an environment that only contains assessments with a summative purpose will not lead to a culture of learning and improving performance [ 56 ]. The recommendation is to separate formative and summative WBAs, as feedback in summative instances is often not recognized as a learning opportunity or an instance in which to seek feedback [ 54 ]. In terms of design, an organizational format is needed to clarify to students how formative assessments can promote learning from feedback [ 56 ]. Harrison et al. (2016) identified that enabling students to have more control over their assessments, designing authentic assessments, and facilitating long-term mentoring could improve receptivity to formative assessment feedback [ 56 ].

Multiple WBA instruments and systems are reported in the literature. Sox et al. (2014) found that feedback on oral case presentations provided by supervisors using a detailed evaluation form improved clerkship students’ oral presentation skills [ 78 ]. Daelmans et al. (2006) suggested that a formal in-training assessment programme composed of 19 assessments that provided structured feedback could promote observation and verbal feedback opportunities through frequent assessments [ 43 ]. However, in this setting, limited student-staff interactions still hindered feedback follow-up [ 43 ]. Designing frequent WBAs improves feedback credibility [ 28 ]. Long et al. (2021) emphasized that students’ responsiveness to assessment feedback hinges on its perceived credibility, underlining the importance of credibility for students to effectively engage and improve their performance [ 31 ].

The mini-CEX is one of the most widely described WBA instruments in the literature. Students perceive that the mini-CEX allows them to be observed and encourages the development of interviewing skills [ 33 ]. The mini-CEX can provide feedback that improves students' clinical skills [ 58 , 60 ], as it incorporates a structure for discussing the student's strengths and weaknesses and the design of a written action plan [ 39 , 80 ]. When mini-CEXs are incorporated as part of a system of WBA, such as programmatic assessment, students feel confident in seeking feedback after observation, and being systematic allows for follow-up [ 39 ]. Students suggested separating grading from observation and using the mini-CEX in more informal situations [ 33 ].

Clinical encounter cards allow students to receive weekly feedback and lead them to request more feedback as the clerkship progresses [ 65 ]. Moreover, encounter cards encourage supervisors to give feedback, and students are more satisfied with the feedback process [ 72 ]. With encounter-card feedback, students are responsible for asking a supervisor for feedback before a clinical encounter, and supervisors give students written and verbal comments about their performance after the encounter [ 42 , 72 ]. Encounter cards enhance the use of feedback and add approximately one minute to the length of the clinical encounter, so they are well accepted by students and supervisors [ 72 ]. Bennett (2006) identified that Instant Feedback Cards (IFC) facilitated mid-rotation feedback [ 38 ]. Encounter-card comments must be discussed between students and supervisors; otherwise, students may perceive the feedback as impersonal, static, formulaic, and incomplete [ 59 ].

Self-assessments can change students’ feedback orientation, transforming them into coproducers of learning, and thereby promote the feedback process [ 68 ]. Some articles emphasize the importance of organizing self-assessments before students receive feedback from supervisors, for example, by discussing their appraisal with the supervisor [ 46 , 52 ]. In designing a feedback encounter, it is recommended to start with a self-assessment as feed-up, discuss it with the supervisor, and identify areas for improvement as part of the feedback dialogue [ 68 ].

Peer feedback as an organized activity allows students to develop strategies to observe and give feedback to other peers [ 61 ]. Students can act as the feedback provider or receiver, fostering understanding of critical comments and promoting evaluative judgment for their clinical practice [ 61 ]. Within clerkships, enabling the sharing of feedback information among peers allows for a better understanding and acceptance of feedback [ 52 ]. However, students can find it challenging to take on the peer assessor/feedback provider role, as they prefer to avoid social conflicts [ 28 , 61 ]. Moreover, it has been described that they do not trust the judgment of their peers because they are not experts, although they know the procedures, tasks, and steps well and empathize with their peer status in the learning process [ 61 ].

Bedside-teaching encounters (BTEs) provide timely feedback and are an opportunity for verbal feedback during performance [ 74 ]. Rizan et al. (2014) explored timely feedback delivered within BTEs and determined that it promotes interaction that constructively enhances learner development through various corrective strategies (e.g., questions and answers, pauses, etc.). However, feedback given during BTEs that was general, unspecific, or open-ended could go unnoticed [ 74 ]. Torre et al. (2005) investigated which integrated feedback activities and clinical tasks occurred on clerkship rotations and assessed students' perceived quality of each teaching encounter [ 81 ]. The feedback activities reported were feedback on the written clinical history, physical examination, differential diagnosis, oral case presentation, daily progress note, and bedside feedback. Students considered all these feedback activities high-quality learning opportunities, but they were more likely to receive feedback when teaching took place at the bedside than at other teaching locations [ 81 ].

Case presentations are an opportunity for feedback within WBL contexts [ 67 , 73 ]. However, both students and supervisors struggled to identify them as feedback moments, and they often dismissed questions and clarifications around case presentations as feedback [ 73 ]. Joshi (2017) identified case presentations as a way for students to ask for informal or spontaneous supervisor feedback [ 63 ].

Organization of follow-up feedback and action plans (i.e., feedforward dimension).

Feedback that generates use and response from students is characterized by two-way communication and embedded in a dialogue [ 30 ]. Feedback must be future-focused [ 29 ], and a feedback encounter should be followed by planning the next observation [ 46 , 87 ]. Follow-up feedback could be organized as a future self-assessment, reflective practice by the student, and/or a discussion with the supervisor or coach [ 68 ]. The literature describes that a lack of student interaction with teachers makes follow-up difficult [ 43 ]. According to Haffling et al. (2011), follow-up feedback sessions improve students' satisfaction with feedback compared to students who do not have follow-up sessions. In addition, these same authors reported that a second follow-up session allows verification of improved performances or confirmation that the skill was acquired [ 55 ].

Although feedback encounter forms are a recognized way of obtaining information about performance (i.e., feedback dimension), the literature does not provide many clear examples of how they may impact the feedforward phase. For example, Joshi et al. (2016) describe a feedback form with four fields (i.e., what the student did well, advice on what could be done to improve performance, the student's level of proficiency, and the tutor's personal details). In this case, the supervisor highlighted what the student could improve but not how, which is the missing phase of the co-constructed action plan [ 63 ]. Whichever WBA instrument is used in clerkships to provide feedback, it should include a "next steps" box [ 44 ], and it is recommended to organize long-term use of the WBA instrument so that those involved get used to it and improve interaction and feedback uptake [ 55 ]. RIME-based feedback (Reporting, Interpreting, Managing, Educating) is considered an interesting example, as it is perceived as helpful to students in knowing what they need to improve in their performance [ 44 ]. Hochberg (2017) implemented formative mid-clerkship assessments to enhance face-to-face feedback conversations and co-create an improvement plan [ 59 ]. Apps for structuring and storing feedback increase the amount of verbal and written feedback. In the study of Joshi et al. (2016), a reasonable proportion of students (64%) perceived that these app tools helped them improve their performance during rotations [ 63 ].
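To make the gap concrete, the sketch below models a feedback encounter form as a simple data structure with an optional "next steps" field. This is a minimal illustration: the class, field names, and example values are hypothetical and do not reproduce the actual instrument used by Joshi et al. or any other cited study.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackEncounterForm:
    """One workplace-based feedback encounter record (hypothetical schema)."""
    student: str
    supervisor: str
    task_observed: str                # e.g., "oral case presentation"
    done_well: str                    # what the student did well
    to_improve: str                   # what could be improved
    proficiency_level: str            # supervisor's judgment of proficiency
    next_steps: Optional[str] = None  # co-constructed action plan ("next steps" box)

    def is_actionable(self) -> bool:
        # A form only supports feedforward if the action plan was filled in.
        return bool(self.next_steps and self.next_steps.strip())

form = FeedbackEncounterForm(
    student="A. Student",
    supervisor="Dr. B",
    task_observed="oral case presentation",
    done_well="Structured history, clear summary statement",
    to_improve="Prioritize the differential diagnosis",
    proficiency_level="Entrustable with indirect supervision",
    next_steps="Re-observe a presentation next week, focusing on the differential",
)
assert form.is_actionable()
```

A check like `is_actionable()` makes visible how often the feedforward field is left empty, which mirrors the low rates of documented action plans reported in the studies discussed below.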

Several studies indicate that an action plan as part of the follow-up feedback is essential for performance improvement and learning [ 46 , 55 , 60 ]. An action plan corresponds to an agreed-upon strategy for improving, confirming, or correcting performance. Bing-You et al. (2017) determined that only 12% of the articles included in their scoping review incorporated an action plan for learners [ 32 ]. Holmboe et al. (2004) reported that only 11% of the feedback sessions following a mini-CEX included an action plan [ 60 ], and Suhoyo et al. (2017) reported that only 55% of mini-CEX encounters contained an action plan [ 80 ]. Other authors reported that action plans are not commonly offered during feedback encounters [ 77 ]. Sokol-Hessner et al. (2010) implemented feedback cards with a space for written feedback and a specific action plan; in their results, 96% of cards contained positive comments and only 5% contained constructive comments [ 77 ]. In summary, although the recommendation is to include a "next steps" box in feedback instruments, evidence shows these items are not often used for constructive comments or action plans.

Sociocultural factors influencing the organization of feedback processes.

Multiple sociocultural factors influence interaction in feedback encounters, promoting or hampering the productivity of the feedback processes.

Clinical learning culture

Context impacts feedback processes [ 30 , 82 ], and there are barriers to incorporating actionable feedback in the clinical learning context. The clinical learning culture is partly determined by the clinical context, which can be unpredictable [ 29 , 46 , 68 ], as the available patients determine the learning opportunities. Supervisors carry a high workload, which leaves limited time or priority for teaching [ 35 , 46 , 48 , 55 , 68 , 83 ], hinders students' feedback-seeking behavior [ 54 ], and creates a challenge in balancing patient care and student mentoring [ 35 ].

Clinical workplace culture does not always purposefully prioritize instances for feedback processes [ 83 , 84 ]. This often leads to limited direct observation [ 55 , 68 ] and the provision of poorly informed feedback. It is also evident that this affects trust between clinical teachers and students [ 52 ]. Supervisors consider feedback a low priority in clinical contexts [ 35 ] due to low compensation and lack of protected time [ 83 ]. In particular, lack of time appears to be the most significant and well-known barrier to frequent observation and workplace feedback [ 35 , 43 , 48 , 62 , 67 , 83 ].

The clinical environment is hierarchical [ 68 , 80 ] and can leave students feeling that they are not part of the team and that they are a burden to their supervisor [ 68 ]. This hierarchical learning environment can lead to unidirectional feedback, limit dialogue during feedback processes, and hinder the seeking, uptake, and use of feedback [ 67 , 68 ]. In a learning culture where feedback is not supported, learners are less likely to seek it and to feel motivated and engaged in their learning [ 83 ]. Furthermore, it has been identified that clinical supervisors may lack the motivation to teach [ 48 ] and the intention to observe or reobserve performance [ 86 ].

In summary, the clinical context and WBL culture do not fully use the potential of a feedback process aimed at closing learning gaps. However, concrete actions shown in the literature can be taken to improve the effectiveness of feedback by organizing the learning context. For example, McGinness et al. (2022) identified that students felt more receptive to feedback when working in a safe, nonjudgmental environment [ 67 ]. Moreover, supervisors and trainees identified the learning culture as key to establishing an open feedback dialogue [ 73 ]. Students who perceive culture as supportive and formative can feel more comfortable performing tasks and more willing to receive feedback [ 73 ].

Relationships

There is a consensus in the literature that trusting, long-term relationships improve the chances of actionable feedback. However, relationships between supervisors and students in the clinical workplace are often brief and not organized longitudinally [ 68 , 83 ], leaving little time to establish a trustful relationship [ 68 ]. Supervisors change continuously, resulting in short interactions that limit the creation of lasting relationships over time [ 50 , 68 , 83 ]. In some contexts, it is common for a student to have several supervisors, each with their own standards for observing performance [ 46 , 56 , 68 , 83 ]. A lack of stable relationships results in students having little engagement in feedback [ 68 ]. Furthermore, in summative assessment programmes, the dual role of supervisors (i.e., assessing and giving feedback) makes feedback interactions feel summative and can complicate the relationship [ 83 ].

Repeatedly, the articles considered in this review describe that long-term, stable relationships enable the development of trust and respect [ 35 , 62 ] and foster feedback-seeking behavior [ 35 , 67 ] and feedback-giving behavior [ 39 ]. Moreover, constructive and positive relationships enhance students' use of and response to feedback [ 30 ]. For example, Longitudinal Integrated Clerkships (LICs) promote stable relationships, thus enhancing the impact of feedback [ 83 ]. In a long-term trusting relationship, feedback can be straightforward and credible [ 87 ], there are more opportunities for student observation, and the likelihood of follow-up and actionable feedback improves [ 83 ]. Johnson et al. (2020) pointed out that within a clinical teacher-student relationship, the focus must be on establishing psychological safety; thus, feedback conversations might be transformed [ 62 ].

Stable relationships enhance feedback dialogues, which offer an opportunity to co-construct learning and propose and negotiate aspects of the design of learning strategies [ 62 ].

Students as active agents in the feedback processes

The feedback response learners generate depends on the type of feedback information they receive, how credible the source of feedback information is, the relationship between the receiver and the giver, and the relevance of the information delivered [ 49 ]. Garino (2020) noted that students who are most successful in using feedback are those who do not take criticism personally, who understand what they need to improve and know they can do so, who value and feel meaning in criticism, are not surprised to receive it, and who are motivated to seek new feedback and use effective learning strategies [ 52 ]. Successful users of feedback ask others for help, are intentional about their learning, know what resources to use and when to use them, listen to and understand a message, value advice, and use effective learning strategies. They regulate their emotions, find meaning in the message, and are willing to change [ 52 ].

Student self-efficacy influences the understanding and use of feedback in the clinical workplace. McGinness et al. (2022) described various positive examples of self-efficacy regarding feedback processes: planning feedback meetings with teachers, fostering good relationships with the clinical team, demonstrating interest in assigned tasks, persisting in seeking feedback despite the patient workload, and taking advantage of opportunities for feedback, e.g., case presentations [ 67 ].

When students are encouraged to seek feedback aligned with their own learning objectives, they elicit feedback information specific to what they want to learn and improve, which enhances the use of feedback [ 53 ]. McGinness et al. (2022) identified that the perceived relevance of feedback information influenced the use of feedback: students were more likely to ask for feedback if they perceived that the information was useful to them. For example, if students feel part of the clinical team and participate in patient care, they are more likely to seek feedback [ 17 ].

Learning-oriented students aim to seek feedback to achieve clinical competence at the expected level [ 75 ]; they focus on improving their knowledge and skills and on professional development [ 17 ]. Performance-oriented students aim not to fail and to avoid negative feedback [ 17 , 75 ].

For effective feedback processes, including feed-up, feedback, and feedforward, the student must be feedback-oriented, i.e., active in seeking, listening to, interpreting, and acting on feedback [ 68 ]. The literature shows that feedback-oriented students are coproducers of learning [ 68 ] and are more involved in the feedback process [ 51 ]. Additionally, students who are metacognitively aware of their learning process are more likely to use feedback to reduce gaps in learning and performance [ 52 ]. For this, students must recognize feedback when it occurs and understand it when they receive it. Thus, it is important to organize training and promote feedback literacy so that students understand what feedback is, act on it, and improve the quality of feedback and their learning plans [ 68 ].

Table 5 summarizes those feedback tasks, activities, and key features of organizational aspects that enable each phase of the feedback loop based on the literature review.

The present scoping review identified 61 papers that mapped the literature on feedback processes in the WBL environments of undergraduate health professions. This review explored how feedback processes are organized in these learning contexts using the feedback loop framework. Given the specific characteristics of feedback processes in undergraduate clinical learning, three main findings emerged on how feedback processes are conducted in the clinical environment and how they could be organized to better support learning.

First, the literature lacks a balance between the three dimensions of the feedback loop. In this regard, most of the articles in this review focused on reporting experiences or strategies for delivering feedback information (i.e., the feedback dimension). Credible and objective feedback information is based on direct observation [ 46 ] and occurs within an interaction or a dialogue [ 62 , 88 ]. However, credible and objective information alone does not ensure that it will be considered, understood, used, and put into practice by the student [ 89 ].

Feedback-supporting actions aligned with goals and priorities facilitate effective feedback processes [ 89 ] because goal-oriented feedback focuses on students' learning needs [ 7 ]. In contrast, this review showed that only a minority of the studies highlighted the importance of aligning learning objectives and feedback (i.e., the feed-up dimension). To overcome this, supervisors and students must establish goals and agreements before starting clinical practice, as this allows students to measure themselves against a defined basis [ 90 , 91 ] and enhances students' feedback-seeking behavior [ 39 , 92 ] and responsiveness to feedback [ 83 ]. In addition, learning goals should be shared and co-constructed through dialogue [ 50 , 88 , 90 , 92 ]. In fact, relationship-based feedback models emphasize setting shared goals and plans as part of the feedback process [ 68 ].

Many of the studies acknowledge the importance of establishing an action plan and promoting the use of feedback (i.e., feedforward). However, there is still limited insight into how best to implement strategies that support the use of action plans, improve performance, and close learning gaps. In this regard, it has been described that delivering feedback without observing subsequent change has no effect or impact on learning [ 88 ]. To determine whether a feedback loop is closed, observing a change in the student's response is necessary. In other words, feedback does not work without repeating the same task [ 68 ], so teachers need to observe subsequent tasks to notice changes [ 88 ]. While feedforward is fundamental to long-term performance, more research is needed to determine effective actions for closing feedback loops in the WBL environment.

Second, there is a need for more knowledge about designing feedback activities in the WBL environment that generate constructive feedback for learning. WBA is the most frequently reported feedback activity in clinical workplace contexts [ 39 , 46 , 56 , 87 ]. Despite the efforts of some authors to use WBAs as a formative assessment and feedback opportunity, in several studies a summative component of the WBA was presented as a barrier to actionable feedback [ 33 , 56 ]. Students suggest separating grading from observation and using, for example, the mini-CEX in informal situations [ 33 ]. Several authors also recommend disconnecting the summative components of WBAs to avoid generating emotions that can limit the uptake and use of feedback [ 28 , 93 ]. Other literature recommends purposefully designing a system of assessment using low-stakes data points for feedback and learning. Accordingly, programmatic assessment is a framework that combines both the learning and the decision-making functions of assessment [ 94 , 95 ]. Programmatic assessment is a practical approach for implementing low-stakes assessments as a continuum, giving opportunities to close the gap between current and desired performance and keeping the student as an active agent [ 96 ]. This approach enables the incorporation of low-stakes data points that target student learning [ 93 ] and provide performance-relevant information (i.e., meaningful feedback) based on direct observations during authentic professional activities [ 46 ]. Using low-stakes data points, learners make sense of information about their performance and use it to enhance the quality of their work [ 96 , 97 , 98 ]. Implementing multiple instances of feedback is more effective than providing it once, because it promotes closing feedback loops by giving the student opportunities to understand the feedback, make changes, and see whether those changes were effective [ 89 ].

Third, the support provided by the teacher is fundamental and should be built on a reliable, long-term relationship in which the teacher takes the role of coach rather than assessor, and students develop feedback agency and are active in seeking and using feedback to improve performance. Although institutional efforts over the past decades have focused on training teachers to deliver feedback, clinical supervisors' lack of teaching skills is still identified as a barrier to workplace feedback [ 99 ]. In particular, research indicates that clinical teachers lack the skills to transform the information obtained from an observation into constructive feedback [ 100 ]. Students are more likely to use feedback if they consider it credible and constructive [ 93 ] and based on stable relationships [ 93 , 99 , 101 ]. In trusting relationships, feedback can be straightforward and credible, and the likelihood of follow-up and actionable feedback improves [ 83 , 88 ]. Coaching strategies can be enhanced by teachers building an educational alliance that allows for trustworthy relationships or by having supervisors with an exclusive coaching role [ 14 , 93 , 102 ].

Last, from a sociocultural perspective, individuals are the main actors in the learning process. Therefore, feedback impacts learning only if students engage and interact with it [ 11 ]. Thus, feedback design and student agency appear to be the main features of effective feedback processes. Accordingly, the present review identified that feedback design is a key feature of effective learning in complex environments such as WBL. Feedback in the workplace must ideally be organized and implemented to align learning outcomes, learning activities, and assessments, allowing learners to learn, practice, and close feedback loops [ 88 ]. To guide students toward performances that reflect long-term learning, an intensive formative learning phase is needed, in which multiple feedback processes shape students' further learning [ 103 ]. This design would promote student uptake of feedback for subsequent performance [ 1 ].

Strengths and limitations

The strengths of this study are (1) the use of an established framework, Arksey and O'Malley's [ 22 ], including the step of socializing the results with stakeholders, which allowed the team to understand the results from another perspective and offer a realistic look; (2) the use of the feedback loop as a theoretical framework, which strengthened the results and gave a more thorough explanation of the literature regarding feedback processes in the WBL context; and (3) a diverse team that included researchers from different disciplines as well as a librarian.

The present scoping review has several limitations. Although we adhered to the recommended protocols and methodologies, some relevant papers may have been omitted. The research team decided to select original studies and reviews of the literature for the present scoping review. This caused some articles, such as guidelines, perspectives, and narrative papers, to be excluded from the current study.

One of the inclusion criteria was a focus on undergraduate students. However, some papers that incorporated both undergraduate and postgraduate participants were included, as these supported the results of this review. Most articles involved medical students. Although the authors did not limit the search to medicine, some articles involving students from other health disciplines may have been missed; searching additional databases or journals might have captured them.

The results give insight into how feedback could be organized within the clinical workplace to promote feedback processes. On a small scale, i.e., in the feedback encounter between a supervisor and a learner, feedback should be organized to allow for follow-up feedback, thus working on the required learning and performance goals. On a larger scale, i.e., in the clerkship programme or a placement rotation, feedback should be organized through appropriate planning of subsequent tasks and activities.

More insight is needed into designing a closed-loop feedback process, with specific attention to effective feedforward practices. Feedback that stimulates further action and learning requires a safe and trustful work and learning environment. Understanding the relationship between an individual and his or her environment is a challenge for determining the impact of feedback and must be further investigated within clinical WBL environments. Aligning the dimensions of feed-up, feedback, and feedforward includes careful attention to teachers' and students' feedback literacy, to ensure that students can act on feedback in a constructive way. In this line, how to develop students' feedback agency within these learning environments needs further research.

Boud D, Molloy E. Rethinking models of feedback for learning: The challenge of design. Assess Eval High Educ. 2013;38:698–712.


Henderson M, Ajjawi R, Boud D, Molloy E. Identifying feedback that has impact. In: The Impact of Feedback in Higher Education. Springer International Publishing: Cham; 2019. p. 15–34.


Winstone N, Carless D. Designing effective feedback processes in higher education: A learning-focused approach. 1st ed. New York: Routledge; 2020.


Ajjawi R, Boud D. Researching feedback dialogue: an interactional analysis approach. Assess Eval High Educ. 2015. https://doi.org/10.1080/02602938.2015.1102863.

Carless D. Feedback loops and the longer-term: towards feedback spirals. Assess Eval High Educ. 2019;44:705–14.

Sadler DR. Formative assessment and the design of instructional systems. Instr Sci. 1989;18:119–44.

Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77:81–112.

Zarrinabadi N, Rezazadeh M. Why only feedback? Including feed up and feed forward improves nonlinguistic aspects of L2 writing. Language Teaching Research. 2023;27(3):575–92.

Fisher D, Frey N. Feed up, back, forward. Educ Leadersh. 2009;67:20–5.

Reimann A, Sadler I, Sambell K. What's in a word? Practices associated with 'feedforward' in higher education. Assess Eval High Educ. 2019;44:1279–90.

Esterhazy R. Re-conceptualizing Feedback Through a Sociocultural Lens. In: Henderson M, Ajjawi R, Boud D, Molloy E, editors. The Impact of Feedback in Higher Education. Cham: Palgrave Macmillan; 2019. https://doi.org/10.1007/978-3-030-25112-3_5 .

Bransen D, Govaerts MJB, Sluijsmans DMA, Driessen EW. Beyond the self: The role of co-regulation in medical students’ self-regulated learning. Med Educ. 2020;54:234–41.

Ramani S, Könings KD, Ginsburg S, Van Der Vleuten CP. Feedback Redefined: Principles and Practice. J Gen Intern Med. 2019;34:744–53.

Atkinson A, Watling CJ, Brand PL. Feedback and coaching. Eur J Pediatr. 2022;181(2):441–6.

Suhoyo Y, Schonrock-Adema J, Emilia O, Kuks JBM, Cohen-Schotanus JA. Clinical workplace learning: perceived learning value of individual and group feedback in a collectivistic culture. BMC Med Educ. 2018;18:79.

Bowen L, Marshall M, Murdoch-Eaton D. Medical Student Perceptions of Feedback and Feedback Behaviors Within the Context of the “Educational Alliance.” Acad Med. 2017;92:1303–12.

Bok HGJ, Teunissen PW, Spruijt A, Fokkema JPI, van Beukelen P, Jaarsma DADC, et al. Clarifying students’ feedback-seeking behaviour in clinical clerkships. Med Educ. 2013;47:282–91.

Al-Kadri HM, Al-Kadi MT, Van Der Vleuten CPM. Workplace-based assessment and students’ approaches to learning: A qualitative inquiry. Med Teach. 2013;35(SUPPL):1.

Dennis AA, Foy MJ, Monrouxe LV, Rees CE. Exploring trainer and trainee emotional talk in narratives about workplace-based feedback processes. Adv Health Sci Educ. 2018;23:75–93.

Watling C, LaDonna KA, Lingard L, Voyer S, Hatala R. ‘Sometimes the work just needs to be done’: socio-cultural influences on direct observation in medical training. Med Educ. 2016;50:1054–64.

Bing-You R, Hayes V, Varaklis K, Trowbridge R, Kemp H, McKelvy D. Feedback for learners in medical education: what is known? A scoping review. Acad Med. 2017;92:1346–54.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8:19–32.

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann Intern Med. 2018;169:467–73.

Colquhoun HL, Levac D, O'Brien KK, Straus S, Tricco AC, Perrier L, et al. Scoping reviews: time for clarity in definition, methods, and reporting. J Clin Epidemiol. 2014;67:1291–4.

StArt - State of Art through Systematic Review. 2013.

Levac D, Colquhoun H, O’Brien KK. Scoping studies: Advancing the methodology. Implementation Science. 2010;5:1–9.

Peters MDJ, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015;13:141–6.

Bing-You R, Varaklis K, Hayes V, Trowbridge R, Kemp H, McKelvy D, et al. The Feedback Tango: An Integrative Review and Analysis of the Content of the Teacher-Learner Feedback Exchange. Acad Med. 2018;93:657–63.

Ossenberg C, Henderson A, Mitchell M. What attributes guide best practice for effective feedback? A scoping review. Adv Health Sci Educ. 2019;24:383–401.

Spooner M, Duane C, Uygur J, Smyth E, Marron B, Murphy PJ, et al. Self-regulatory learning theory as a lens on how undergraduate and postgraduate learners respond to feedback: a BEME scoping review: BEME Guide No. 66. Med Teach. 2022;44:3–18.

Long S, Rodriguez C, St-Onge C, Tellier PP, Torabi N, Young M. Factors affecting perceived credibility of assessment in medical education: A scoping review. Adv Health Sci Educ. 2022;27:229–62.

Bing-You R, Hayes V, Varaklis K, Trowbridge R, Kemp H, McKelvy D. Feedback for learners in medical education: what is known? A scoping review. Acad Med. 2017;92:1346–54.

Schopper H, Rosenbaum M, Axelson R. “I wish someone watched me interview:” medical student insight into observation and feedback as a method for teaching communication skills during the clinical years. BMC Med Educ. 2016;16:286.

Crommelinck M, Anseel F. Understanding and encouraging feedback-seeking behaviour: a literature review. Med Educ. 2013;47:232–41.

Adamson E, Kinga L, Foy L, McLeodb M, Traynor J, Watson W, et al. Feedback in clinical practice: Enhancing the students’ experience through action research. Nurse Educ Pract. 2018;31:48–53.

Al-Mously N, Nabil NM, Al-Babtain SA, et al. Undergraduate medical students’ perceptions on the quality of feedback received during clinical rotations. Med Teach. 2014;36(Supplement 1):S17-23.

Bates J, Konkin J, Suddards C, Dobson S, Pratt D. Student perceptions of assessment and feedback in longitudinal integrated clerkships. Med Educ. 2013;47:362–74.

Bennett AJ, Goldenhar LM, Stanford K. Utilization of a Formative Evaluation Card in a Psychiatry Clerkship. Acad Psychiatry. 2006;30:319–24.

Bok HG, Jaarsma DA, Spruijt A, Van Beukelen P, Van Der Vleuten CP, Teunissen PW, et al. Feedback-giving behaviour in performance evaluations during clinical clerkships. Med Teach. 2016;38:88–95.

Bok HG, Teunissen PW, Spruijt A, Fokkema JP, van Beukelen P, Jaarsma DA, et al. Clarifying students’ feedback-seeking behaviour in clinical clerkships. Med Educ. 2013;47:282–91.

Calleja P, Harvey T, Fox A, Carmichael M, et al. Feedback and clinical practice improvement: A tool to assist workplace supervisors and students. Nurse Educ Pract. 2016;17:167–73.

Carey EG, Wu C, Hur ES, Hasday SJ, Rosculet NP, Kemp MT, et al. Evaluation of Feedback Systems for the Third-Year Surgical Clerkship. J Surg Educ. 2017;74:787–93.

Daelmans HE, Overmeer RM, Van der Hem-Stokroos HH, Scherpbier AJ, Stehouwer CD, van der Vleuten CP. In-training assessment: qualitative study of effects on supervision and feedback in an undergraduate clinical rotation. Med Educ. 2006;40:51–8.

DeWitt D, Carline J, Paauw D, Pangaro L. Pilot study of a ’RIME’-based tool for giving feedback in a multi-specialty longitudinal clerkship. Med Educ. 2008;42:1205–9.

Dolan BM, O’Brien CL, Green MM. Including Entrustment Language in an Assessment Form May Improve Constructive Feedback for Student Clinical Skills. Med Sci Educ. 2017;27:461–4.

Duijn CC, Welink LS, Mandoki M, Ten Cate OT, Kremer WD, Bok HG. Am I ready for it? Students’ perceptions of meaningful feedback on entrustable professional activities. Perspectives on medical education. 2017;6:256–64.

Elnicki DM, Zalenski D. Integrating medical students’ goals, self-assessment and preceptor feedback in an ambulatory clerkship. Teach Learn Med. 2013;25:285–91.

Embo MP, Driessen EW, Valcke M, Van der Vleuten CP. Assessment and feedback to facilitate self-directed learning in clinical practice of Midwifery students. Med Teach. 2010;32:e263–9.

Eva KW, Armson H, Holmboe E, Lockyer J, Loney E, Mann K, et al. Factors influencing responsiveness to feedback: On the interplay between fear, confidence, and reasoning processes. Adv Health Sci Educ. 2012;17:15–26.

Farrell L, Bourgeois-Law G, Ajjawi R, Regehr G. An autoethnographic exploration of the use of goal oriented feedback to enhance brief clinical teaching encounters. Adv Health Sci Educ. 2017;22:91–104.

Fernando N, Cleland J, McKenzie H, Cassar K. Identifying the factors that determine feedback given to undergraduate medical students following formative mini-CEX assessments. Med Educ. 2008;42:89–95.

Garino A. Ready, willing and able: a model to explain successful use of feedback. Adv Health Sci Educ. 2020;25:337–61.

Garner MS, Gusberg RJ, Kim AW. The positive effect of immediate feedback on medical student education during the surgical clerkship. J Surg Educ. 2014;71:391–7.

Bing-You R, Hayes V, Palka T, Ford M, Trowbridge R. The Art (and Artifice) of Seeking Feedback: Clerkship Students’ Approaches to Asking for Feedback. Acad Med. 2018;93:1218–26.

Haffling AC, Beckman A, Edgren G. Structured feedback to undergraduate medical students: 3 years' experience of an assessment tool. Med Teach. 2011;33:e349–57.

Harrison CJ, Könings KD, Dannefer EF, Schuwirth LWTT, Wass V, van der Vleuten CPMM. Factors influencing students’ receptivity to formative feedback emerging from different assessment cultures. Perspect Med Educ. 2016;5:276–84.

Harrison CJ, Könings KD, Schuwirth LW, Wass V, van der Vleuten CP. Changing the culture of assessment: the dominance of the summative assessment paradigm. BMC Med Educ. 2017;17:1–4.

Harvey P, Radomski N, O'Connor D. Written feedback and continuity of learning in a geographically distributed medical education program. Med Teach. 2013;35:1009–13.

Hochberg M, Berman R, Ogilvie J, Yingling S, Lee S, Pusic M, et al. Midclerkship feedback in the surgical clerkship: the “Professionalism, Reporting, Interpreting, Managing, Educating, and Procedural Skills” application utilizing learner self-assessment. Am J Surg. 2017;213:212–6.

Holmboe ES, Yepes M, Williams F, Huot SJ. Feedback and the mini clinical evaluation exercise. J Gen Intern Med. 2004;19:558–61.

Tai JHM, Canny BJ, Haines TP, Molloy EK. The role of peer-assisted learning in building evaluative judgement: opportunities in clinical medical education. Adv Health Sci Educ. 2016;21:659–76.

Johnson CE, Keating JL, Molloy EK. Psychological safety in feedback: What does it look like and how can educators work with learners to foster it? Med Educ. 2020;54:559–70.

Joshi A, Generalla J, Thompson B, Haidet P. Facilitating the Feedback Process on a Clinical Clerkship Using a Smartphone Application. Acad Psychiatry. 2017;41:651–5.

Kiger ME, Riley C, Stolfi A, Morrison S, Burke A, Lockspeiser T. Use of Individualized Learning Plans to Facilitate Feedback Among Medical Students. Teach Learn Med. 2020;32:399–409.

Kogan J, Shea J. Implementing feedback cards in core clerkships. Med Educ. 2008;42:1071–9.

Lefroy J, Walters B, Molyneux A, Smithson S. Can learning from workplace feedback be enhanced by reflective writing? A realist evaluation in UK undergraduate medical education. Educ Prim Care. 2021;32:326–35.

McGinness HT, Caldwell PHY, Gunasekera H, Scott KM. ‘Every Human Interaction Requires a Bit of Give and Take’: Medical Students’ Approaches to Pursuing Feedback in the Clinical Setting. Teach Learn Med. 2022. https://doi.org/10.1080/10401334.2022.2084401 .

Noble C, Billett S, Armit L, Collier L, Hilder J, Sly C, et al. "It's yours to take": generating learner feedback literacy in the workplace. Adv Health Sci Educ. 2020;25:55–74.

Ogburn T, Espey E. The R-I-M-E method for evaluation of medical students on an obstetrics and gynecology clerkship. Am J Obstet Gynecol. 2003;189:666–9.

Po O, Reznik M, Greenberg L. Improving a medical student feedback with a clinical encounter card. Ambul Pediatr. 2007;7:449–52.

Parkes J, Abercrombie S, McCarty T, Parkes J, Abercrombie S, McCarty T. Feedback sandwiches affect perceptions but not performance. Adv Health Sci Educ. 2013;18:397–407.

Paukert JL, Richards ML, Olney C. An encounter card system for increasing feedback to students. Am J Surg. 2002;183:300–4.

Rassos J, Melvin LJ, Panisko D, Kulasegaram K, Kuper A. Unearthing Faculty and Trainee Perspectives of Feedback in Internal Medicine: the Oral Case Presentation as a Model. J Gen Intern Med. 2019;34:2107–13.

Rizan C, Elsey C, Lemon T, Grant A, Monrouxe L. Feedback in action within bedside teaching encounters: a video ethnographic study. Med Educ. 2014;48:902–20.

Robertson AC, Fowler LC. Medical student perceptions of learner-initiated feedback using a mobile web application. J Med Educ Curric Dev. 2017;4:2382120517746384.

Scheidt PC, Lazoritz S, Ebbeling WL, Figelman AR, Moessner HF, Singer JE. Evaluation of system providing feedback to students on videotaped patient encounters. J Med Educ. 1986;61(7):585–90.

Sokol-Hessner L, Shea J, Kogan J. The open-ended comment space for action plans on core clerkship students’ encounter cards: what gets written? Acad Med. 2010;85:S110–4.

Sox CM, Dell M, Phillipi CA, Cabral HJ, Vargas G, Lewin LO. Feedback on oral presentations during pediatric clerkships: a randomized controlled trial. Pediatrics. 2014;134:965–71.

Spickard A, Gigante J, Stein G, Denny JC. Automatic capture of student notes to augment mentor feedback and student performance on patient write-ups. J Gen Intern Med. 2008;23:979–84.

Suhoyo Y, Van Hell EA, Kerdijk W, Emilia O, Schönrock-Adema J, Kuks JB, et al. Influence of feedback characteristics on perceived learning value of feedback in clerkships: does culture matter? BMC Med Educ. 2017;17:1–7.

Torre DM, Simpson D, Sebastian JL, Elnicki DM. Learning/feedback activities and high-quality teaching: perceptions of third-year medical students during an inpatient rotation. Acad Med. 2005;80:950–4.

Urquhart LM, Ker JS, Rees CE. Exploring the influence of context on feedback at medical school: a video-ethnography study. Adv Health Sci Educ. 2018;23:159–86.

Watling C, Driessen E, van der Vleuten C, Lingard L. Learning culture and feedback: an international study of medical athletes and musicians. Med Educ. 2014;48:713–23.

Watling C, Driessen E, van der Vleuten C, Vanstone M, Lingard L. Beyond individualism: Professional culture and its influence on feedback. Med Educ. 2013;47:585–94.

Soemantri D, Dodds A, Mccoll G. Examining the nature of feedback within the Mini Clinical Evaluation Exercise (Mini-CEX): an analysis of 1427 Mini-CEX assessment forms. GMS J Med Educ. 2018;35:Doc47.

van de Ridder JMM, Stokking KM, McGaghie WC, ten Cate OTJ. What is feedback in clinical education? Med Educ. 2008;42:189–97.

van de Ridder JMMM, McGaghie WC, Stokking KM, ten Cate OTJJ. Variables that affect the process and outcome of feedback, relevant for medical training: a meta-review. Med Educ. 2015;49:658–73.

Boud D. Feedback: ensuring that it leads to enhanced learning. Clin Teach. 2015. https://doi.org/10.1111/tct.12345 .

Brehaut J, Colquhoun H, Eva K, Carrol K, Sales A, Michie S, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med. 2016;164:435–41.

Ende J. Feedback in clinical medical education. J Am Med Assoc. 1983;250:777–81.

Cantillon P, Sargeant J. Giving feedback in clinical settings. Br Med J. 2008;337(7681):1292–4.

Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach. 2007;29:855–71.

Watling CJ, Ginsburg S. Assessment, feedback and the alchemy of learning. Med Educ. 2019;53:76–85.

van der Vleuten CPM, Schuwirth LWT, Driessen EW, Dijkstra J, Tigelaar D, Baartman LKJ, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34:205–14.

Schuwirth LWT, van der Vleuten CPM. Programmatic assessment: from assessment of learning to assessment for learning. Med Teach. 2011;33:478–85.

Schut S, Driessen E, van Tartwijk J, van der Vleuten C, Heeneman S. Stakes in the eye of the beholder: an international study of learners’ perceptions within programmatic assessment. Med Educ. 2018;52:654–63.

Henderson M, Boud D, Molloy E, Dawson P, Phillips M, Ryan T, Mahoney MP. Feedback for learning. Closing the assessment loop. Framework for effective learning. Canberra, Australia: Australian Government, Department for Education and Training; 2018.

Heeneman S, Oudkerk Pool A, Schuwirth LWT, van der Vleuten CPM, Driessen EW. The impact of programmatic assessment on student learning: theory versus practice. Med Educ. 2015;49:487–98.

Lefroy J, Watling C, Teunissen P, Brand P, Watling C. Guidelines: the do’s, don’ts and don’t knows of feedback for clinical education. Perspect Med Educ. 2015;4:284–99.

Ramani S, Krackov SK. Twelve tips for giving feedback effectively in the clinical environment. Med Teach. 2012;34:787–91.

Telio S, Ajjawi R, Regehr G. The "educational alliance" as a framework for reconceptualizing feedback in medical education. Acad Med. 2015;90:609–14.

Lockyer J, Armson H, Könings KD, Lee-Krueger RC, des Ordons AR, Ramani S, et al. In-the-Moment Feedback and Coaching: Improving R2C2 for a New Context. J Grad Med Educ. 2020;12:27–35.

Black P, Wiliam D. Developing the theory of formative assessment. Educ Assess Eval Account. 2009;21:5–31.


Author information

Authors and affiliations

Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile

Javiera Fuentes-Cimma & Ignacio Villagran

School of Health Professions Education, Maastricht University, Maastricht, Netherlands

Javiera Fuentes-Cimma & Lorena Isbej

Rotterdam University of Applied Sciences, Rotterdam, Netherlands

Dominique Sluijsmans

Centre for Medical and Health Profession Education, Department of Gastroenterology, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile

Arnoldo Riquelme

School of Dentistry, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile

Lorena Isbej

Sistema de Bibliotecas UC (SIBUC), Pontificia Universidad Católica de Chile, Santiago, Chile

María Teresa Olivares-Labbe

Department of Pathology, Faculty of Health, Medicine and Health Sciences, Maastricht University, Maastricht, Netherlands

Sylvia Heeneman


Contributions

J.F-C, D.S, and S.H. made substantial contributions to the conception and design of the work. M.O-L contributed to the identification of studies. J.F-C, I.V, A.R, and L.I. made substantial contributions to the screening, reliability, and data analysis. J.F-C. wrote the main manuscript text. All authors reviewed the manuscript.

Corresponding author

Correspondence to Javiera Fuentes-Cimma.

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1. Supplementary material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Fuentes-Cimma, J., Sluijsmans, D., Riquelme, A. et al. Designing feedback processes in the workplace-based learning of undergraduate health professions education: a scoping review. BMC Med Educ 24, 440 (2024). https://doi.org/10.1186/s12909-024-05439-6


Received: 25 September 2023

Accepted: 17 April 2024

Published: 23 April 2024

DOI: https://doi.org/10.1186/s12909-024-05439-6


Keywords

  • Clinical clerkship
  • Feedback processes
  • Feedforward
  • Formative feedback
  • Health professions
  • Undergraduate medical education
  • Undergraduate healthcare education
  • Workplace learning



  • Open access
  • Published: 19 April 2024

A scoping review of continuous quality improvement in healthcare system: conceptualization, models and tools, barriers and facilitators, and impact

  • Aklilu Endalamaw 1 , 2 ,
  • Resham B Khatri 1 , 3 ,
  • Tesfaye Setegn Mengistu 1 , 2 ,
  • Daniel Erku 1 , 4 , 5 ,
  • Eskinder Wolka 6 ,
  • Anteneh Zewdie 6 &
  • Yibeltal Assefa 1  

BMC Health Services Research volume 24, Article number: 487 (2024)


The growing adoption of continuous quality improvement (CQI) initiatives in healthcare has generated a surge in research interest to gain a deeper understanding of CQI. However, comprehensive evidence regarding the diverse facets of CQI in healthcare has been limited. Our review sought to comprehensively grasp the conceptualization and principles of CQI, explore existing models and tools, analyze barriers and facilitators, and investigate its overall impacts.

This qualitative scoping review was conducted using Arksey and O'Malley's methodological framework. We searched articles in the PubMed, Web of Science, Scopus, and EMBASE databases. In addition, we accessed articles from Google Scholar. We used mixed-methods analysis, combining qualitative content analysis with descriptive summaries of quantitative findings, and reported the overall work using the PRISMA extension for scoping reviews (PRISMA-ScR) framework.

A total of 87 articles, covering 14 CQI models, were included in the review. While 19 tools were used across CQI models and initiatives, the Plan-Do-Study/Check-Act cycle was the most commonly employed model for understanding the CQI implementation process. The main reported purposes of using CQI, reflecting its positive impact, are to improve the structure of the health system (e.g., leadership, health workforce, health technology use, supplies, and costs), enhance healthcare delivery processes and outputs (e.g., care coordination and linkages, satisfaction, accessibility, continuity of care, safety, and efficiency), and improve treatment outcomes (reduced morbidity and mortality). The implementation of CQI is not without challenges. Commonly reported barriers were cultural (i.e., resistance or reluctance toward a quality-focused culture and fear of blame or punishment), technical, structural (related to organizational structure, processes, and systems), and strategic (inadequate planning and inappropriate goals).

Conclusions

Implementing CQI initiatives requires a thorough understanding of key principles such as teamwork and timelines. To address challenges effectively, it is crucial to identify obstacles proactively and implement optimal interventions. Healthcare professionals and leaders need to be mentally prepared and cognizant of the significant role CQI initiatives play in achieving quality-of-care goals.


Continuous quality improvement (CQI) is a crucial initiative aimed at enhancing quality in the health system that has gradually been adopted by the healthcare industry. In the early 20th century, Shewhart laid the foundation for quality improvement by describing three essential steps for process improvement: specification, production, and inspection [ 1 , 2 ]. Deming then expanded Shewhart's three-step model into the 'plan, do, study/check, act' (PDSA or PDCA) cycle, which was applied to management practices in Japan in the 1950s [ 3 ] and gradually translated into the health system. In 1991, Kuperman applied a CQI approach to healthcare, comprising selecting a process to be improved, assembling a team of expert clinicians that understands the process and the outcomes, determining key steps in the process and expected outcomes, collecting data that measure the key process steps and outcomes, and providing data feedback to the practitioners [ 4 ]. These philosophies have served as the baseline for the foundation of principles for continuous improvement [ 5 ].

Continuous quality improvement fosters a culture of continuous learning, innovation, and improvement. It encourages proactive identification and resolution of problems, promotes employee engagement and empowerment, encourages trust and respect, and aims for better quality of care [ 6 , 7 ]. These characteristics drive the interaction of CQI with other quality improvement projects, such as quality assurance and total quality management [ 8 ]. Quality assurance primarily focuses on identifying deviations or errors through inspections, audits, and formal reviews, often settling for what is considered ‘good enough’, rather than pursuing the highest possible standards [ 9 , 10 ], while total quality management is implemented as the management philosophy and system to improve all aspects of an organization continuously [ 11 ].

Continuous quality improvement has been implemented to provide quality care. However, providing effective healthcare is a complicated and complex task in achieving the desired health outcomes and the overall well-being of individuals and populations. It necessitates tackling long-standing issues, including access, patient safety, medical advances, care coordination, patient-centered care, and quality monitoring [ 12 , 13 ]. The history of quality improvement in healthcare is assumed to have started in 1854, when Florence Nightingale introduced quality improvement documentation [ 14 ]. Over the passing decades, Donabedian introduced structure, processes, and outcomes as quality-of-care components in 1966 [ 15 ]. More comprehensively, the Institute of Medicine in the United States of America (USA) has identified effectiveness, efficiency, equity, patient-centredness, safety, and timeliness as the components of quality of care [ 16 ]. Moreover, quality of care has recently been considered an integral part of universal health coverage (UHC) [ 17 ], which requires initiatives to mobilise essential inputs [ 18 ].

While the overall objective of CQI in the health system is to enhance the quality of care, the purposes and principles of CQI can vary across contexts [ 19 , 20 ]. This variation has sparked growing research interest. For instance, a review of CQI approaches for capacity building addressed its role in health workforce development [ 21 ]. Another systematic review, based on randomized controlled studies, assessed the effectiveness of CQI using training as an intervention and the PDSA model [ 22 ]. As a research gap, the former review was not directly related to the comprehensive elements of quality of care, while the latter focused solely on the impact of training using the PDSA model, among other potential models. Additionally, a review conducted in 2015 aimed to identify barriers and facilitators of CQI in Canadian contexts [ 23 ]. However, all these reviews presented different perspectives and investigated distinct outcomes. This suggests that there is still much to explore in comprehensively understanding the various aspects of CQI initiatives in healthcare.

As a result, we conducted a scoping review to address several aspects of CQI. Scoping reviews serve as a valuable tool for systematically mapping the existing literature on a specific topic. They are instrumental when dealing with heterogeneous or complex bodies of research. Scoping reviews provide a comprehensive overview by summarizing and disseminating findings across multiple studies, even when evidence varies significantly [ 24 ]. In our specific scoping review, we included various types of literature, including systematic reviews, to enhance our understanding of CQI.

This scoping review examined how CQI is conceptualized and measured and investigated models and tools for its application while identifying implementation challenges and facilitators. It also analyzed the purposes and impact of CQI on the health systems, providing valuable insights for enhancing healthcare quality.

Protocol registration and results reporting

The protocol for this scoping review was not registered. Arksey and O'Malley's methodological framework was utilized to conduct this scoping review [ 25 ]. The scoping review procedure starts by defining the research questions, then identifying relevant literature, selecting articles, extracting data, and summarizing the results. The review findings are reported using the PRISMA extension for scoping reviews (PRISMA-ScR) [ 26 ]. McGowan and colleagues also advised researchers to report findings from scoping reviews using PRISMA-ScR [ 27 ].

Defining the research problems

This review aims to comprehensively explore the conceptualization, models, tools, barriers, facilitators, and impacts of CQI within the healthcare system worldwide. Specifically, we address the following research questions: (1) How has CQI been defined across various contexts? (2) What are the diverse approaches to implementing CQI in healthcare settings? (3) Which tools are commonly employed for CQI implementation? (4) What barriers hinder and what facilitators support successful CQI initiatives? and (5) What effects do CQI initiatives have on overall care quality?

Information source and search strategy

We conducted the search in PubMed, Web of Science, Scopus, and EMBASE databases, and the Google Scholar search engine. The search terms were selected based on three main distinct concepts. One group was CQI-related terms. The second group included terms related to the purpose for which CQI has been implemented, and the third group included processes and impact. These terms were selected based on the Donabedian framework of structure, process, and outcome [ 28 ]. Additionally, the detailed keywords were recruited from the primary health framework, which has described lists of dimensions under process, output, outcome, and health system goals of any intervention for health [ 29 ]. The detailed search strategy is presented in the Supplementary file 1 (Search strategy). The search for articles was initiated on August 12, 2023, and the last search was conducted on September 01, 2023.
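To illustrate how such a three-concept strategy is typically assembled, the sketch below joins synonyms within each concept group with OR and intersects the groups with AND. The terms shown are invented examples for illustration only, not the actual strategy reported in Supplementary file 1.

```python
# Hypothetical, simplified illustration of combining three concept groups
# (CQI terms, purpose terms, process/impact terms) into one boolean query.
cqi_terms = ['"continuous quality improvement"', 'CQI', '"quality improvement"']
purpose_terms = ['"quality of care"', '"patient safety"', '"health system"']
process_impact_terms = ['barrier*', 'facilitator*', 'impact', 'outcome*']

def or_block(terms):
    # Join synonyms within one concept group with OR.
    return "(" + " OR ".join(terms) + ")"

# Concept groups are intersected with AND, as in a typical database search.
query = " AND ".join(or_block(g) for g in (cqi_terms, purpose_terms, process_impact_terms))
print(query)
```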

Eligibility criteria and article selection

Based on the scoping review's population, concept, and context frameworks [ 30 ], the population included any patients or clients. The concepts explored in the review encompassed definitions, implementation, models, tools, barriers, facilitators, and impacts of CQI, and the review considered contexts at any level of the health system. We included articles if they reported the results of a qualitative or quantitative empirical study, case study, analytic or descriptive synthesis, any review, or another written document; were published in peer-reviewed journals; and were designed to address at least one of the identified research questions or implementation outcomes, or their synonymous taxonomy as described in the search strategy. Within these contexts, we included articles published in English without geographic and time limitations. We excluded abstract-only articles, conference abstracts, letters to the editor, commentaries, and corrections.

We exported all citations to EndNote x20 to remove duplicates and screen relevant articles. The article selection process included automatic duplicate removal using EndNote x20, removal of records with unmatched titles and abstracts, removal of citation- and abstract-only materials, and full-text assessment. The article selection was mainly conducted by the first author (AE) and reported to the team during the weekly meetings. When the first author encountered papers for which inclusion or exclusion was unclear, he discussed them with the last author (YA) before a decision was made. Disagreements were resolved by discussion and by reconsidering the review questions in relation to the article's written content. Further statistical analysis, such as calculating kappa, was not performed to determine article inclusion or exclusion.
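A minimal sketch of this staged selection workflow, with deduplication followed by title/abstract and full-text screening, is shown below. The record fields and screening rules are hypothetical stand-ins for the EndNote-based process described above.

```python
# Staged selection: deduplicate, then screen titles/abstracts, then full texts.
def deduplicate(records):
    seen, unique = set(), []
    for r in records:
        key = (r["title"].lower().strip(), r.get("year"))
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

def screen(records, keep):
    # Keep only records that pass the given screening rule.
    return [r for r in records if keep(r)]

records = [
    {"title": "CQI in primary care", "year": 2020, "relevant_abstract": True, "full_text_ok": True},
    {"title": "CQI in primary care", "year": 2020, "relevant_abstract": True, "full_text_ok": True},  # duplicate
    {"title": "Unrelated editorial", "year": 2019, "relevant_abstract": False, "full_text_ok": False},
]

after_dedup = deduplicate(records)
after_title_abstract = screen(after_dedup, lambda r: r["relevant_abstract"])
included = screen(after_title_abstract, lambda r: r["full_text_ok"])
print(len(records), len(after_dedup), len(after_title_abstract), len(included))  # 3 2 1 1
```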

Data extraction and data items

We extracted the first author, publication year, country, setting, health problem, purpose of the study, study design, type of intervention (if applicable), CQI approaches/steps (if applicable), CQI tools and procedures (if applicable), and main findings using a customized Microsoft Excel form.
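For illustration, the extraction form can be represented as a flat table whose columns mirror the data items just listed. The sketch below writes one hypothetical record to CSV; it is an assumption about the layout, not the authors' actual Excel form.

```python
import csv

# Hypothetical stand-in for the customized Microsoft Excel extraction form;
# the columns mirror the data items listed above.
FIELDS = [
    "first_author", "publication_year", "country", "setting", "health_problem",
    "purpose", "study_design", "intervention", "cqi_approach_steps",
    "cqi_tools_procedures", "main_findings",
]

row = {
    "first_author": "Kuperman", "publication_year": 1991, "country": "USA",
    "setting": "hospital", "health_problem": "care processes",
    "purpose": "apply CQI to healthcare", "study_design": "case study",
    "intervention": "CQI programme",
    "cqi_approach_steps": "select process; assemble team; collect data; feed back",
    "cqi_tools_procedures": "data feedback", "main_findings": "CQI applicable to healthcare",
}

with open("extraction.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow(row)
```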

Summarizing and reporting the results

The main findings were summarized and described under the main themes: concepts under conceptualization, principles, teams, timelines, models, tools, barriers, facilitators, and impacts of CQI. A results-based convergent synthesis, achieved through mixed-method analysis, involved content analysis to identify the thematic presentation of findings. Additionally, a narrative description was used for quantitative findings, aligning them with the appropriate theme. The authors reviewed the primary findings from each included material and contextualized them in relation to the main themes. This approach provides a comprehensive understanding of complex interventions and health systems, acknowledging both quantitative and qualitative evidence.

Search results

A total of 11,251 documents were identified from various databases: SCOPUS ( n  = 4,339), PubMed ( n  = 2,893), Web of Science ( n  = 225), EMBASE ( n  = 3,651), and Google Scholar ( n  = 143). After removing duplicates ( n  = 5,061), 6,190 articles were evaluated by title and abstract. Subsequently, 208 articles were assessed for full-text eligibility. Following the eligibility criteria, 121 articles were excluded, leaving 87 included in the current review (Fig.  1 ).
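The reported counts can be cross-checked with simple arithmetic; the sketch below uses only the numbers stated in the paragraph above.

```python
# Consistency check of the article-selection counts reported above.
identified = 4_339 + 2_893 + 225 + 3_651 + 143  # SCOPUS, PubMed, WoS, EMBASE, Google Scholar
assert identified == 11_251

screened = identified - 5_061                   # after duplicate removal
assert screened == 6_190                        # evaluated by title and abstract

full_text, excluded = 208, 121                  # full-text assessment
assert full_text - excluded == 87               # articles included in the review
```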

Figure 1. Article selection process

Operationalizing continuous quality improvement

Continuous Quality Improvement (CQI) is operationalized as a cyclic process that requires commitment to implementation, teamwork, time allocation, and celebrating successes and failures.

CQI is an ongoing cyclic process that follows reflexive, analytical, and iterative steps, including identifying gaps, generating data, developing and implementing action plans, evaluating performance, providing feedback to implementers and leaders, and proposing necessary adjustments [ 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 ].

CQI requires commitment to a philosophy of continuous improvement [ 19 , 38 ], establishing a mission statement [ 37 ], and a shared understanding of how quality is defined [ 19 ].

CQI involves a wide range of patient-oriented measures and performance indicators, specifically satisfying internal and external customers, developing quality assurance, adopting common quality measures, and selecting process measures [ 8 , 19 , 35 , 36 , 37 , 39 , 40 ].

CQI requires celebrating success and failure without personalization, which helps each team member develop an error-free attitude [ 19 ]. Success and failure are attributed to underlying organizational processes and systems rather than to individuals [ 8 ], because CQI is process-focused and relies on collaborative, data-driven, responsive, rigorous, problem-solving statistical analysis [ 8 , 19 , 38 ]. Furthermore, a gap or failure opens another opportunity for building a data-driven learning organization [ 41 ].

CQI cannot be implemented without a CQI team [ 8 , 19 , 37 , 39 , 42 , 43 , 44 , 45 , 46 ]. A CQI team brings together individuals from various disciplines, typically including a team leader, a subject matter expert (a physician or other healthcare provider), a data analyst, a facilitator, frontline staff, and stakeholders [ 39 , 43 , 47 , 48 , 49 ]. Inviting stakeholders or partners as part of the CQI support intervention is also crucial [ 19 , 38 , 48 ].

The timeline is another distinct feature of CQI because the results of CQI vary with the implementation duration of each cycle [ 35 ]. There is no specific time limit for CQI implementation, although there is general consensus that a CQI cycle should be relatively short [ 35 ]. For instance, CQI implementations took 2 months [ 42 ], 4 months [ 50 ], 9 months [ 51 , 52 ], 12 months [ 53 , 54 , 55 ], and 17 months [ 49 ] to achieve the desired positive outcome, while bi-weekly [ 47 ] and monthly data reviews and analyses [ 44 , 48 , 56 ], as well as activities over 3 months [ 57 ], have also produced positive outcomes.

Continuous quality improvement models and tools

Several models have been utilized. The Plan-Do-Study/Check-Act (PDSA/PDCA) cycle is a stepwise process involving project initiation, situation analysis, root cause identification, solution generation and selection, implementation, result evaluation, standardization, and future planning [ 7 , 36 , 37 , 45 , 47 , 48 , 49 , 50 , 51 , 53 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 ]. The FOCUS-PDCA cycle extends the PDCA process with steps to find a process to improve (F), organize a knowledgeable team (O), clarify the process (C), understand variations (U), and select improvements (S) [ 55 , 71 , 72 , 73 ]. The FADE cycle involves identifying a problem (Focus), understanding it through data analysis (Analyze), devising solutions (Develop), and implementing the plan (Execute) [ 74 ]. The Logic Framework involves brainstorming to identify improvement areas, conducting root cause analysis to develop a problem tree, reasoning logically to create an objective tree, formulating the framework, and executing improvement projects [ 75 ]. The Breakthrough Series approach requires CQI teams to meet in quarterly collaborative learning sessions, share learning experiences, and continue the discussion by telephone and cross-site visits to strengthen learning and idea exchange [ 47 ].

Another CQI model is the Lean approach, which has been applied with Kaizen principles [ 52 ], 5S principles, and the Six Sigma model. The 5S approach (Sort, Set/Straighten, Shine, Standardize, Sustain) systematically organizes and improves the workplace and sustains the improvement [ 54 , 76 ]. Kaizen principles guide CQI by advocating continuous improvement, valuing all ideas, solving problems, focusing on practical, low-cost improvements, using data to drive change, acknowledging process defects, reducing variability and waste, recognizing every interaction as a customer-supplier relationship, empowering workers, responding to all ideas, and maintaining a disciplined workplace [ 77 ]. Lean Six Sigma applies the DMAIC methodology, which involves defining (D) and measuring the problem (M), analyzing root causes (A), improving by finding solutions (I), and controlling by assessing process stability (C) [ 78 , 79 ].

The 5C cyclic model (consultation, collection, consideration, collaboration, and celebration), the first CQI framework for volunteer dental services in Aboriginal communities, ensures quality care based on community needs [ 80 ]. One study used structured meetings involving activities such as reviewing objectives, assigning roles, discussing the agenda, completing tasks, retaining key outputs, planning future steps, and evaluating the meeting’s effectiveness [ 81 ].
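As a conceptual illustration of the cyclic logic shared by these PDCA-style models, the toy sketch below iterates plan-do-study-act until a target is met or a maximum number of cycles is reached. The assumption that each cycle closes 40% of the remaining gap is purely illustrative, not an empirical claim.

```python
# Toy PDCA loop; the 40%-gap-closure-per-cycle improvement model is an
# illustrative assumption only.
def pdca(baseline: float, target: float, max_cycles: int = 5) -> float:
    performance = baseline
    for cycle in range(1, max_cycles + 1):
        gap = target - performance                  # Plan: quantify the remaining gap
        performance += 0.4 * gap                    # Do: implement the planned change
        print(f"cycle {cycle}: {performance:.1f}")  # Study: evaluate the result
        if performance >= target:                   # Act: standardize if the target is
            break                                   # met; otherwise adjust and re-cycle
    return performance

pdca(baseline=60.0, target=90.0)  # e.g., % adherence to a care standard
```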

Various tools are used in the implementation or evaluation of CQI initiatives: checklists [ 53 , 82 ], flowcharts [ 81 , 82 , 83 ], cause-and-effect diagrams (fishbone or Ishikawa diagrams) [ 60 , 62 , 79 , 81 , 82 ], the fuzzy Pareto diagram [ 82 ], process maps [ 60 ], time series charts [ 48 ], why-why analysis [ 79 ], affinity diagrams and multivoting [ 81 ], run charts [ 47 , 48 , 51 , 60 , 84 ], and others listed in Table 1.
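Of these, the run chart is among the simplest to automate. The sketch below computes the median centerline and flags a "shift" using the commonly cited heuristic that six or more consecutive points on one side of the median signal non-random change; both the rule and the monthly data are assumptions for illustration.

```python
# Minimal run-chart analysis: median centerline plus a simple shift signal.
# The six-in-a-row rule is a widely used run-chart heuristic, assumed here.
from statistics import median

def detect_shift(values, run_length=6):
    center = median(values)
    run, side = 0, 0
    for v in values:
        s = (v > center) - (v < center)  # +1 above, -1 below, 0 on the median
        if s == 0:
            continue                     # points on the median are skipped
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

monthly_waits = [76, 74, 72, 70, 68, 66, 30, 28, 26, 24, 22, 20]  # illustrative
print(detect_shift(monthly_waits))  # True: six consecutive points on one side
```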

Barriers and facilitators of continuous quality improvement implementation

The implementation of CQI initiatives is shaped by various barriers and facilitators, which can be grouped into four dimensions: cultural, technical, structural, and strategic.

Continuous quality improvement initiatives face various cultural, technical, structural, and strategic barriers. Cultural barriers include resistance to change (e.g., not accepting online technology), the lack of a quality-focused culture, staff apprehension about reporting, and fear of blame or punishment [ 36 , 41 , 85 , 86 ]. Technical barriers include various factors that hinder the effective implementation and execution of CQI processes [ 36 , 86 , 87 , 88 , 89 ]. Structural barriers arise from the organizational structures, processes, and systems that can impede the effective implementation and sustainability of CQI [ 36 , 85 , 86 , 87 , 88 ]. Strategic barriers include, for example, the inability to select proper CQI goals and the failure to integrate CQI into organizational planning and goals [ 36 , 85 , 86 , 87 , 88 , 90 ].

Facilitators are likewise grouped into cultural, technical, structural, and strategic dimensions that provide solutions to the corresponding barriers. Cultural challenges were addressed by developing a group culture around CQI and offering rewards [ 39 , 41 , 80 , 85 , 86 , 87 , 90 , 91 , 92 ]. Technical facilitators are pivotal to overcoming technical barriers [ 39 , 42 , 53 , 69 , 86 , 90 , 91 ]. Structural facilitators relate to improving communication, infrastructure, and systems [ 86 , 92 , 93 ]. Strategic facilitators include strengthening leadership and improving decision-making skills [ 43 , 53 , 67 , 86 , 87 , 92 , 94 , 95 ] (Table 2).

Impact of continuous quality improvement

Continuous quality improvement initiatives can significantly impact the quality of healthcare across a wide range of health areas, improving structure and the health service delivery process, enhancing client wellbeing, and reducing mortality.

Structure components

The structure components comprise health leadership, financing, workforce, technology, and equipment and supplies. CQI has improved planning, monitoring and evaluation [ 48 , 53 ], and leadership and planning [ 48 ], indicating improvement from a leadership perspective. Implementing CQI in primary health care (PHC) settings has shown potential for maintaining or reducing operating costs [ 67 ]. Findings from another study indicate that the costs of implementing CQI interventions ranged from approximately $2,000 to $10,500 per facility per year, or approximately $10 to $60 per admitted client [ 57 ]; based on model predictions, however, the average cost savings after implementing CQI were estimated at $5,430 [ 31 ]. CQI can also be applied to health workforce development [ 32 ]. Within institutional systems, CQI improved medical education [ 66 , 96 , 97 ], human resources management [ 53 ], staff motivation [ 76 ], and staff health awareness [ 69 ], although concerns were raised about CQI impartiality, independence, and public accountability [ 96 ]. Regarding health technology, CQI also improved registration and documentation [ 48 , 53 , 98 ]. Furthermore, CQI initiatives increased cleanliness [ 54 ] and improved logistics, supplies, and equipment [ 48 , 53 , 68 ].

Process and output components

The process component focuses on the activities and actions involved in delivering healthcare services.

Service delivery

CQI interventions improved service delivery [ 53 , 56 , 99 ], including a significant 18% increase in overall service performance quality [ 48 ], improved patient counselling, adherence to appropriate procedures, and infection prevention [ 48 , 68 ], and optimised workflow [ 52 ].

Coordination and collaboration

CQI initiatives improved coordination and collaboration through data collection and analysis, onsite technical support, training, supportive supervision [ 53 ], and facilitating linkages between work processes and a quality control group [ 65 ].

Patient satisfaction

The CQI initiatives increased patient satisfaction and improved quality of life by optimizing care quality management, improving the quality of clinical nursing, reducing nursing defects, and enhancing client wellbeing [ 54 , 76 , 100 ], although CQI was not associated with changes in the satisfaction of adolescents and young adults [ 51 ].

Safety

CQI initiatives reduced medication error reports from 16 to 6 [ 101 ], significantly reduced the administration of inappropriate prophylactic antibiotics [ 44 ], decreased errors in inpatient care [ 52 ], decreased the overall episiotomy rate from 44.5% to 33.3% [ 83 ], reduced the overall incidence of unplanned endotracheal extubation [ 102 ], and improved the appropriate use of computed tomography angiography [ 103 ] and appropriate diagnosis and treatment selection [ 47 ].

Continuity of care

CQI initiatives effectively improved continuity of care by improving client-physician interaction. For instance, provider continuity levels showed a 64% increase [ 55 ]. Modifying electronic medical record templates, scheduling, staff and parental education, standardizing work processes, and offering birth-to-1-year age-specific incentives in postnatal follow-up care increased continuity of care from 13% at the 2012 baseline to 74% in 2018 [ 84 ].

Efficiency

The CQI initiative enhanced efficiency in the cardiac catheterization laboratory, as evidenced by improved punctuality of procedure starts and more efficient manual sheath-pulls performed inside the laboratory [ 78 ].

Accessibility

CQI initiatives were effective in improving accessibility in terms of increasing service coverage and utilization rates. For instance, they improved screening for cigarette use, nutrition counselling, folate prescription, maternal care, and immunization coverage [ 53 , 81 , 104 , 105 ]; reduced the percentage of patients not attending surgery from a baseline of 3.9% to 0.9% [ 43 ]; increased Chlamydia screening rates from 29 to 60% [ 45 ]; increased HIV care continuum coverage [ 51 , 59 , 60 ]; increased the uptake of postpartum long-acting reversible contraception from 6.9% at baseline to 25.4% [ 42 ]; increased post-caesarean section prophylaxis from 36 to 89% [ 62 ]; achieved a 31% increase in kangaroo care practice [ 50 ]; and increased follow-up [ 65 ]. Similarly, a QI intervention increased the quality of antenatal care by 29.3%, correct partograph use by 51.7%, and correct active third-stage labour management by 19.6% from baseline, but it was not significantly associated with improvement in contraceptive service uptake [ 61 ].

Timely access

CQI interventions improved the timeliness of care provision [ 52 ] and reduced waiting times [ 62 , 74 , 76 , 106 ]. For instance, the discharge process waiting time in the emergency department decreased from 76 min to 22 min [ 79 ], and the mean postprocedural length of stay fell from 2.8 days to 2.0 days [ 31 ].

Acceptability

Acceptability of CQI among healthcare providers was satisfactory: for instance, 88% of the faculty, 64% of the residents, and 82% of the staff believed CQI to be useful in the healthcare clinic [ 107 ].

Outcome components

Morbidity and mortality

CQI efforts have demonstrated better management outcomes among diabetic patients [ 40 ], patients with oral mucositis [ 71 ], and anaemic patients [ 72 ]. They have also reduced the infection rate after caesarean section [ 62 ], reduced peritonitis after peritoneal dialysis [ 49 , 108 ], and prevented pressure ulcers [ 70 ]. For example, peritonitis incidence fell from once every 40.1 patient-months at baseline to once every 70.8 patient-months after CQI [ 49 ], and pressure ulcer prevalence fell by 63% within the 2 years from 2008 to 2010 [ 70 ]. Furthermore, CQI initiatives significantly reduced in-hospital deaths [ 31 ] and increased patient survival rates [ 108 ]. Figure 2 displays the overall process of CQI implementation.
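The peritonitis figures can be restated as rates, which makes the size of the improvement easier to see; the only assumption in the sketch below is that a rate is the reciprocal of the interval between episodes.

```python
# Restating the reported peritonitis intervals as rates; the reciprocal
# relationship between interval and rate is the only assumption here.
baseline_rate = 1 / 40.1   # episodes per patient-month before CQI
post_cqi_rate = 1 / 70.8   # episodes per patient-month after CQI
relative_reduction = 1 - post_cqi_rate / baseline_rate
print(f"{relative_reduction:.0%} fewer episodes per patient-month")  # ~43%
```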

Figure 2. The overall mechanisms of continuous quality improvement implementation

In this review, we examined the fundamental concepts and principles underlying CQI, the factors that either hinder or assist in its successful application and implementation, and the purpose of CQI in enhancing quality of care across various health issues.

Our findings draw attention to the application and implementation of CQI, emphasizing its underlying concepts and principles, as evident in the existing literature [ 31 , 32 , 33 , 34 , 35 , 36 , 39 , 40 , 43 , 45 , 46 ]. Continuous quality improvement shares the principles of continuous improvement, such as a customer-driven focus, effective leadership, active participation of individuals, a process-oriented approach, systematic implementation, emphasis on design improvement and prevention, evidence-based decision-making, and fostering partnership [ 5 ]. Moreover, Deming’s 14 principles laid the foundation for CQI principles [ 109 ]. These principles have been adapted and put into practice in various ways: ten [ 19 ] and five [ 38 ] principles in hospitals, five principles for capacity building [ 38 ], and two principles for medication error prevention [ 41 ]. As a principle, the application of CQI can be process-focused [ 8 , 19 ] or impact-focused [ 38 ]. Impact-focused CQI concentrates on achieving specific outcomes or impacts, whereas process-focused CQI prioritizes improving the underlying processes and systems. These principles complement each other and can be deployed according to the objectives of a quality improvement initiative in a given healthcare setting. Overall, CQI is an ongoing educational process that requires top management’s involvement, demands coordination across departments, encourages the incorporation of views beyond the clinical area, and provides non-judgemental evidence based on objective data [ 110 ].

The current review recognized that CQI is not easy to implement. It requires judicious use of various models and tools; the application of each tool can vary with the studied health problem and the purpose of the CQI initiative [ 111 ] and varies in context, content, structure, and usability [ 112 ]. It also requires overcoming cultural, technical, structural, and strategic barriers, which have emerged from the perspectives of clinical staff, managers, and health systems. Of the cultural obstacles, staff non-involvement, resistance to change, and reluctance to report errors were staff-related, whereas others, such as the absence of celebration of success and hierarchical and rational cultures, may require both staff and manager involvement. Staff members may be reluctant to report errors owing to various cultural factors, including lack of trust, hierarchical structures, fear of retribution, and a blame-oriented culture. These challenges pose obstacles to implementing standardized CQI practices, as observed, for instance, in community pharmacy settings [ 85 ]. A hierarchical culture, characterized by clearly defined levels of power, authority, and decision-making, posed challenges to implementing CQI initiatives in public health [ 41 , 86 ]. Although a rational culture, a type of organizational culture, emphasizes logical thinking and rational decision-making, it can also create challenges for CQI implementation [ 41 , 86 ]: hierarchical and rational cultures, which emphasize bureaucratic norms and narrow definitions of achievement, were found to act as barriers to the implementation of CQI [ 86 ]. These problems can be addressed by developing a shared mindset and collective commitment, establishing a shared purpose, developing group norms, and cultivating psychological preparedness among staff, managers, and clients to implement and sustain CQI initiatives. Furthermore, reversing cultural barriers necessitates cultural solutions: developing an organizational and group culture for CQI [ 41 , 86 ], a positive comprehensive perception [ 91 ], commitment [ 85 ], involving patients, families, leaders, and staff [ 39 , 92 ], collaborating for a common goal [ 80 , 86 ], effective teamwork [ 86 , 87 ], and rewarding and celebrating successes [ 80 , 90 ].

The technical barriers to CQI include inadequate capitalization of a project and insufficient support for CQI facilitators and data entry managers [ 36 ], immature electronic medical records or poor information systems [ 36 , 86 ], and a lack of training and skills [ 86 , 87 , 88 ]. These challenges may cause the CQI team to rely on outdated information and technologies. Technical barriers may also undermine the foundations of CQI expertise among staff: the ability to recognize opportunities for improvement, a comprehensive understanding of how services are produced and delivered, and the routine use of expertise in daily work. Addressing these barriers requires knowledge creation activities (training, seminars, and education) [ 39 , 42 , 53 , 69 , 86 , 90 , 91 ], availability of quality data [ 86 ], reliable information [ 92 ], and a manual-online hybrid reporting system [ 85 ].

Structural barriers to CQI include inadequate communication channels and a lack of standardized processes, specifically weak physician-to-physician synergies [ 36 ], a lack of mechanisms for disseminating knowledge, and limited use of communication mechanisms [ 86 ]. A lack of communication mechanisms endangers the sharing of ideas and feedback among CQI teams, leading to misunderstandings, limited participation, misinterpretations, and a lack of learning [ 113 ]. Knowledge translation facilitates the co-production of research, the subsequent diffusion of knowledge, and the development of stakeholders’ capacity and skills [ 114 ]. Thus, the absence of a knowledge translation mechanism may cause missed learning opportunities, inefficient problem-solving, and limited creativity. To overcome these challenges, organizations should establish effective communication and information systems [ 86 , 93 ] and learning systems [ 92 ]. Although CQI and knowledge translation interact, it is essential to recognize that they are distinct: CQI focuses on process improvement within healthcare systems, aiming to optimize existing processes, reduce errors, and enhance efficiency.

In contrast, knowledge translation bridges the gap between research evidence and clinical practice, translating research findings into actionable knowledge for practitioners. While both CQI and knowledge translation aim to enhance healthcare quality and patient outcomes, they employ different strategies: CQI uses tools such as Plan-Do-Study-Act cycles and statistical process control, whereas knowledge translation involves knowledge synthesis and dissemination. Knowledge translation can also serve as a strategy to enhance CQI, and the two share the same underlying principle: continuous improvement is essential to both. Therefore, effective strategies on the structural dimension may build efficient and effective steering councils, information systems, and structures to diffuse learning throughout the organization.

Strategic factors, such as goals, planning, funds, and resources, determine the overall purpose of CQI initiatives. Specific barriers were improper goals and poor planning [ 36 , 86 , 88 ], fragmentation of quality assurance policies [ 87 ], inadequate reinforcement of staff [ 36 , 90 ], time constraints [ 85 , 86 ], resource inadequacy [ 86 ], and work overload [ 86 ]. These barriers can be addressed by strengthening leadership [ 86 , 87 ], CQI-based mentoring [ 94 ], periodic monitoring, supportive supervision and coaching [ 43 , 53 , 87 , 92 , 95 ], participation, empowerment, and accountability [ 67 ], involving all stakeholders in decision-making [ 86 , 87 ], provider-payer partnerships [ 64 ], and compensating staff for after-hours meetings on CQI [ 85 ]. The strategic dimension, characterized by a strategic plan and integrated CQI efforts, is devoted to processes that are central to achieving strategic priorities, with roles and responsibilities defined in terms of integrated strategic and quality-related goals [ 115 ].

The ultimate goal of CQI is to improve the quality of care, which is usually revealed through structure, process, and outcome. Once challenges are resolved and tools and models are used effectively, the results of CQI reflect the ultimate reason and purpose of its implementation. First, effectively implemented CQI initiatives can improve leadership, health financing, health workforce development, health information technology, and the availability of supplies as the building blocks of a health system [ 31 , 48 , 53 , 68 , 98 ]. Second, effectively implemented CQI initiatives improved the care delivery process (counselling, adherence to standards, coordination, collaboration, and linkages) [ 48 , 53 , 65 , 68 ]. Third, CQI can improve the outputs of healthcare delivery, such as satisfaction, accessibility (timely access, utilization), continuity of care, safety, efficiency, and acceptability [ 52 , 54 , 55 , 76 , 78 ]. Finally, the effectiveness of CQI initiatives has been tested in key areas of the HIV response, maternal and child health, non-communicable disease control, and other fields (e.g., surgery and peritonitis). However, CQI initiatives have not always been effective. For instance, CQI using a two- to nine-cycle audit model through systems assessment tools did not significantly improve syphilis testing performance [ 116 ]. This study was conducted in Aboriginal and Torres Strait Islander primary health care settings; notably, ‘the clinics may not have consistently prioritized syphilis testing performance in their improvement strategies, as facilitated by the CQI program’ [ 116 ]. Additionally, CQI-based mentoring did not significantly improve the uptake of facility-based interventions, although it was effective in increasing community health worker visits during pregnancy and the postnatal period, knowledge about maternal and child health, exclusive breastfeeding practice, and HIV status disclosure [ 117 ]. That study, conducted in South Africa, revealed no significant association between the coverage of facility-based interventions and CQI implementation; this was attributed to the already high antenatal and postnatal attendance rates in both control and intervention groups at baseline, which left little room for improvement, and to the consistently high coverage of HIV interventions throughout the study period [ 117 ].

Regarding healthcare and policy implications, CQI has played a vital role in advancing PHC and fostering the realization of universal health coverage (UHC) goals worldwide. The indicators in Donabedian’s framework that are positively influenced by CQI efforts are comparable to those in the PHC performance initiative’s conceptual framework [ 29 , 118 , 119 ]. PHC has been clearly described as the roadmap to realizing the vision of UHC [ 120 , 121 ]. Given these circumstances, implementing CQI can contribute to achieving PHC principles and UHC objectives. For instance, by implementing CQI methods, countries have enhanced the accessibility, affordability, and quality of PHC services, leading to better health outcomes for their populations. CQI has facilitated the identification and resolution of healthcare gaps and inefficiencies, enabling countries to optimize resource allocation and deliver more effective, patient-centered care. However, successful implementation of CQI requires optimizing the duration of each cycle and understanding challenges and barriers that extend beyond the health system and its settings; its effectiveness may be compromised if these challenges are not adequately addressed.

Despite abundant literature, there are still gaps regarding the relationship between CQI and other dimensions within the healthcare system. No studies have examined the impact of CQI initiatives on catastrophic health expenditure, effective service coverage, patient-centredness, comprehensiveness, equity, health security, and responsiveness.

Limitations

This review has some limitations. First, only articles published in English were included, which may have excluded relevant non-English articles. Additionally, because this review follows a scoping methodology, the focus is on synthesizing the available evidence rather than critically appraising or scoring the quality of the included articles.

Continuous quality improvement is investigated as a continuous, ongoing intervention whose implementation time can vary across cycles. The CQI team and implementation timeline were critical elements of CQI across the different models. Among the commonly used approaches, PDSA or PDCA is frequently employed. Across CQI models, a wide range of tools, nineteen in this review, is commonly utilized to support the improvement process. Cultural, technical, structural, and strategic barriers and facilitators are significant in implementing CQI initiatives. CQI initiatives aim to improve health system building blocks, enhance the health service delivery process and outputs, and ultimately prevent morbidity and reduce mortality. Because CQI is a context-dependent approach, future researchers would add value by conducting scale-up implementation research on catastrophic health expenditure, effective service coverage, patient-centredness, comprehensiveness, equity, health security, and responsiveness across various settings and health issues.

Availability of data and materials

The data used and/or analyzed during the current study are available in this manuscript and/or the supplementary file.

Shewhart WA, Deming WE. Memoriam: Walter A. Shewhart, 1891–1967. Am Stat. 1967;21(2):39–40.


Shewhart WA. Statistical method from the viewpoint of quality control. New York: Dover; 1986. ISBN 978-0486652320. OCLC 13822053. Reprint. Originally published: Washington, DC: Graduate School of the Department of Agriculture, 1939.

Moen R, editor. Foundation and history of the PDSA cycle. Asian Network for Quality Conference, Tokyo; 2009. Available from: https://www.deming.org/sites/default/files/pdf/2015/PDSA_History_Ron_MoenPdf .

Kuperman G, James B, Jacobsen J, Gardner RM. Continuous quality improvement applied to medical care: experiences at LDS hospital. Med Decis Making. 1991;11(4suppl):S60–65.


Singh J, Singh H. Continuous improvement philosophy–literature review and directions. Benchmarking: An International Journal. 2015;22(1):75–119.

Goldstone J. Presidential address: Sony, Porsche, and vascular surgery in the 21st century. J Vasc Surg. 1997;25(2):201–10.

Radawski D. Continuous quality improvement: origins, concepts, problems, and applications. J Physician Assistant Educ. 1999;10(1):12–6.

Shortell SM, O’Brien JL, Carman JM, Foster RW, Hughes E, Boerstler H, et al. Assessing the impact of continuous quality improvement/total quality management: concept versus implementation. Health Serv Res. 1995;30(2):377.


Lohr K. Quality of health care: an introduction to critical definitions, concepts, principles, and practicalities. Striving for quality in health care. 1991.

Berwick DM. The clinical process and the quality process. Qual Manage Healthc. 1992;1(1):1–8.


Gift B. On the road to TQM. Food Manage. 1992;27(4):88–9.


Greiner A, Knebel E. The core competencies needed for health care professionals. health professions education: A bridge to quality. 2003:45–73.

McCalman J, Bailie R, Bainbridge R, McPhail-Bell K, Percival N, Askew D, et al. Continuous quality improvement and comprehensive primary health care: a systems framework to improve service quality and health outcomes. Front Public Health. 2018;6(76):1–6.

Sheingold BH, Hahn JA. The history of healthcare quality: the first 100 years 1860–1960. Int J Afr Nurs Sci. 2014;1:18–22.


Donabedian A. Evaluating the quality of medical care. Milbank Q. 1966;44(3):166–206.

Institute of Medicine (US) Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington (DC): National Academies Press (US). 2001. 2, Improving the 21st-century Health Care System. Available from: https://www.ncbi.nlm.nih.gov/books/NBK222265/ .

Rubinstein A, Barani M, Lopez AS. Quality first for effective universal health coverage in low-income and middle-income countries. Lancet Global Health. 2018;6(11):e1142–1143.


Agency for Healthcare Reserach and Quality. Quality Improvement and monitoring at your fingertips USA,: Agency for Healthcare Reserach and Quality. 2022. Available from: https://qualityindicators.ahrq.gov/ .

Anderson CA, Cassidy B, Rivenburgh P. Implementing continuous quality improvement (CQI) in hospitals: lessons learned from the International Quality Study. Qual Assur Health Care. 1991;3(3):141–6.

Gardner K, Mazza D. Quality in general practice - definitions and frameworks. Aust Fam Physician. 2012;41(3):151–4.


Loper AC, Jensen TM, Farley AB, Morgan JD, Metz AJ. A systematic review of approaches for continuous quality improvement capacity-building. J Public Health Manage Pract. 2022;28(2):E354.

Hill JE, Stephani A-M, Sapple P, Clegg AJ. The effectiveness of continuous quality improvement for developing professional practice and improving health care outcomes: a systematic review. Implement Sci. 2020;15(1):1–14.

Candas B, Jobin G, Dubé C, Tousignant M, Abdeljelil AB, Grenier S, et al. Barriers and facilitators to implementing continuous quality improvement programs in colonoscopy services: a mixed methods systematic review. Endoscopy Int Open. 2016;4(02):E118–133.

Peters MD, Marnie C, Colquhoun H, Garritty CM, Hempel S, Horsley T, et al. Scoping reviews: reinforcing and advancing the methodology and application. Syst Reviews. 2021;10(1):1–6.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

McGowan J, Straus S, Moher D, Langlois EV, O’Brien KK, Horsley T, et al. Reporting scoping reviews—PRISMA ScR extension. J Clin Epidemiol. 2020;123:177–9.

Donabedian A. Explorations in quality assessment and monitoring: the definition of quality and approaches to its assessment. Health Administration Press, Ann Arbor. 1980;1.

World Health Organization. Operational framework for primary health care: transforming vision into action. Geneva: World Health Organization and the United Nations Children’s Fund (UNICEF); 2020 [updated 14 December 2020; cited 2023 Oct 17]. Available from: https://www.who.int/publications/i/item/9789240017832 .

The Joanna Briggs Institute. The Joanna Briggs Institute Reviewers’ Manual :2014 edition. Australia: The Joanna Briggs Institute. 2014:88–91.

Rihal CS, Kamath CC, Holmes DR Jr, Reller MK, Anderson SS, McMurtry EK, et al. Economic and clinical outcomes of a physician-led continuous quality improvement intervention in the delivery of percutaneous coronary intervention. Am J Manag Care. 2006;12(8):445–52.

Ade-Oshifogun JB, Dufelmeier T. Prevention and Management of Do not return notices: a quality improvement process for Supplemental staffing nursing agencies. Nurs Forum. 2012;47(2):106–12.

Rubenstein L, Khodyakov D, Hempel S, Danz M, Salem-Schatz S, Foy R, et al. How can we recognize continuous quality improvement? Int J Qual Health Care. 2014;26(1):6–15.

O’Neill SM, Hempel S, Lim YW, Danz MS, Foy R, Suttorp MJ, et al. Identifying continuous quality improvement publications: what makes an improvement intervention ‘CQI’? BMJ Qual Saf. 2011;20(12):1011–9.


Sibthorpe B, Gardner K, McAullay D. Furthering the quality agenda in Aboriginal community controlled health services: understanding the relationship between accreditation, continuous quality improvement and national key performance indicator reporting. Aust J Prim Health. 2016;22(4):270–5.

Bennett CL, Crane JM. Quality improvement efforts in oncology: are we ready to begin? Cancer Invest. 2001;19(1):86–95.

VanValkenburgh DA. Implementing continuous quality improvement at the facility level. Adv Ren Replace Ther. 2001;8(2):104–13.

Loper AC, Jensen TM, Farley AB, Morgan JD, Metz AJ. A systematic review of approaches for continuous quality improvement capacity-building. J Public Health Manage Practice. 2022;28(2):E354–361.

Ryan M. Achieving and sustaining quality in healthcare. Front Health Serv Manag. 2004;20(3):3–11.

Nicolucci A, Allotta G, Allegra G, Cordaro G, D’Agati F, Di Benedetto A, et al. Five-year impact of a continuous quality improvement effort implemented by a network of diabetes outpatient clinics. Diabetes Care. 2008;31(1):57–62.

Wakefield BJ, Blegen MA, Uden-Holman T, Vaughn T, Chrischilles E, Wakefield DS. Organizational culture, continuous quality improvement, and medication administration error reporting. Am J Med Qual. 2001;16(4):128–34.

Sori DA, Debelew GT, Degefa LS, Asefa Z. Continuous quality improvement strategy for increasing immediate postpartum long-acting reversible contraceptive use at Jimma University Medical Center, Jimma, Ethiopia. BMJ Open Qual. 2023;12(1):e002051.

Roche B, Robin C, Deleaval PJ, Marti MC. Continuous quality improvement in ambulatory surgery: the non-attending patient. Ambul Surg. 1998;6(2):97–100.

O’Connor JB, Sondhi SS, Mullen KD, McCullough AJ. A continuous quality improvement initiative reduces inappropriate prescribing of prophylactic antibiotics for endoscopic procedures. Am J Gastroenterol. 1999;94(8):2115–21.

Ursu A, Greenberg G, McKee M. Continuous quality improvement methodology: a case study on multidisciplinary collaboration to improve chlamydia screening. Fam Med Community Health. 2019;7(2):e000085.

Quick B, Nordstrom S, Johnson K. Using continuous quality improvement to implement evidence-based medicine. Lippincotts Case Manag. 2006;11(6):305–15 (quiz 16–7).

Oyeledun B, Phillips A, Oronsaye F, Alo OD, Shaffer N, Osibo B, et al. The effect of a continuous quality improvement intervention on retention-in-care at 6 months postpartum in a PMTCT Program in Northern Nigeria: results of a cluster randomized controlled study. J Acquir Immune Defic Syndr. 2017;75(Suppl 2):S156–164.

Nyengerai T, Phohole M, Iqaba N, Kinge CW, Gori E, Moyo K, et al. Quality of service and continuous quality improvement in voluntary medical male circumcision programme across four provinces in South Africa: longitudinal and cross-sectional programme data. PLoS ONE. 2021;16(8):e0254850.


Wang J, Zhang H, Liu J, Zhang K, Yi B, Liu Y, et al. Implementation of a continuous quality improvement program reduces the occurrence of peritonitis in PD. Ren Fail. 2014;36(7):1029–32.

Stikes R, Barbier D. Applying the plan-do-study-act model to increase the use of kangaroo care. J Nurs Manag. 2013;21(1):70–8.

Wagner AD, Mugo C, Bluemer-Miroite S, Mutiti PM, Wamalwa DC, Bukusi D, et al. Continuous quality improvement intervention for adolescent and young adult HIV testing services in Kenya improves HIV knowledge. AIDS. 2017;31(Suppl 3):S243–252.

Le RD, Melanson SE, Santos KS, Paredes JD, Baum JM, Goonan EM, et al. Using lean principles to optimise inpatient phlebotomy services. J Clin Pathol. 2014;67(8):724–30.

Manyazewal T, Mekonnen A, Demelew T, Mengestu S, Abdu Y, Mammo D, et al. Improving immunization capacity in Ethiopia through continuous quality improvement interventions: a prospective quasi-experimental study. Infect Dis Poverty. 2018;7:7.

Kamiya Y, Ishijma H, Hagiwara A, Takahashi S, Ngonyani HAM, Samky E. Evaluating the impact of continuous quality improvement methods at hospitals in Tanzania: a cluster-randomized trial. Int J Qual Health Care. 2017;29(1):32–9.

Kibbe DC, Bentz E, McLaughlin CP. Continuous quality improvement for continuity of care. J Fam Pract. 1993;36(3):304–8.

Adrawa N, Ongiro S, Lotee K, Seret J, Adeke M, Izudi J. Use of a context-specific package to increase sputum smear monitoring among people with pulmonary tuberculosis in Uganda: a quality improvement study. BMJ Open Qual. 2023;12(3):1–6.

Hunt P, Hunter SB, Levan D. Continuous quality improvement in substance abuse treatment facilities: how much does it cost? J Subst Abuse Treat. 2017;77:133–40.

Azadeh A, Ameli M, Alisoltani N, Motevali Haghighi S. A unique fuzzy multi-control approach for continuous quality improvement in a radio therapy department. Qual Quantity. 2016;50(6):2469–93.

Memiah P, Tlale J, Shimabale M, Nzyoka S, Komba P, Sebeza J, et al. Continuous quality improvement (CQI) institutionalization to reach 95:95:95 HIV targets: a multicountry experience from the Global South. BMC Health Serv Res. 2021;21(1):711.

Yapa HM, De Neve JW, Chetty T, Herbst C, Post FA, Jiamsakul A, et al. The impact of continuous quality improvement on coverage of antenatal HIV care tests in rural South Africa: results of a stepped-wedge cluster-randomised controlled implementation trial. PLoS Med. 2020;17(10):e1003150.

Dadi TL, Abebo TA, Yeshitla A, Abera Y, Tadesse D, Tsegaye S, et al. Impact of quality improvement interventions on facility readiness, quality and uptake of maternal and child health services in developing regions of Ethiopia: a secondary analysis of programme data. BMJ Open Qual. 2023;12(4):e002140.

Weinberg M, Fuentes JM, Ruiz AI, Lozano FW, Angel E, Gaitan H, et al. Reducing infections among women undergoing cesarean section in Colombia by means of continuous quality improvement methods. Arch Intern Med. 2001;161(19):2357–65.

Andreoni V, Bilak Y, Bukumira M, Halfer D, Lynch-Stapleton P, Perez C. Project management: putting continuous quality improvement theory into practice. J Nurs Care Qual. 1995;9(3):29–37.

Balfour ME, Zinn TE, Cason K, Fox J, Morales M, Berdeja C, et al. Provider-payer partnerships as an engine for continuous quality improvement. Psychiatric Serv. 2018;69(6):623–5.

Agurto I, Sandoval J, De La Rosa M, Guardado ME. Improving cervical cancer prevention in a developing country. Int J Qual Health Care. 2006;18(2):81–6.

Anderson CI, Basson MD, Ali M, Davis AT, Osmer RL, McLeod MK, et al. Comprehensive multicenter graduate surgical education initiative incorporating entrustable professional activities, continuous quality improvement cycles, and a web-based platform to enhance teaching and learning. J Am Coll Surg. 2018;227(1):64–76.

Benjamin S, Seaman M. Applying continuous quality improvement and human performance technology to primary health care in Bahrain. Health Care Superv. 1998;17(1):62–71.

Byabagambi J, Marks P, Megere H, Karamagi E, Byakika S, Opio A, et al. Improving the quality of voluntary medical male circumcision through use of the continuous quality improvement approach: a pilot in 30 PEPFAR-Supported sites in Uganda. PLoS ONE. 2015;10(7):e0133369.

Hogg S, Roe Y, Mills R. Implementing evidence-based continuous quality improvement strategies in an urban Aboriginal Community Controlled Health Service in South East Queensland: a best practice implementation pilot. JBI Database Syst Rev Implement Rep. 2017;15(1):178–87.

Hopper MB, Morgan S. Continuous quality improvement initiative for pressure ulcer prevention. J Wound Ostomy Cont Nurs. 2014;41(2):178–80.

Ji J, Jiang DD, Xu Z, Yang YQ, Qian KY, Zhang MX. Continuous quality improvement of nutrition management during radiotherapy in patients with nasopharyngeal carcinoma. Nurs Open. 2021;8(6):3261–70.

Chen M, Deng JH, Zhou FD, Wang M, Wang HY. Improving the management of anemia in hemodialysis patients by implementing the continuous quality improvement program. Blood Purif. 2006;24(3):282–6.

Reeves S, Matney K, Crane V. Continuous quality improvement as an ideal in hospital practice. Health Care Superv. 1995;13(4):1–12.

Barton AJ, Danek G, Johns P, Coons M. Improving patient outcomes through CQI: vascular access planning. J Nurs Care Qual. 1998;13(2):77–85.

Buttigieg SC, Gauci D, Dey P. Continuous quality improvement in a Maltese hospital using logical framework analysis. J Health Organ Manag. 2016;30(7):1026–46.

Take N, Byakika S, Tasei H, Yoshikawa T. The effect of 5S-continuous quality improvement-total quality management approach on staff motivation, patients’ waiting time and patient satisfaction with services at hospitals in Uganda. J Public Health Afr. 2015;6(1):486.


Jacobson GH, McCoin NS, Lescallette R, Russ S, Slovis CM. Kaizen: a method of process improvement in the emergency department. Acad Emerg Med. 2009;16(12):1341–9.

Agarwal S, Gallo J, Parashar A, Agarwal K, Ellis S, Khot U, et al. Impact of lean six sigma process improvement methodology on cardiac catheterization laboratory efficiency. Catheter Cardiovasc Interv. 2015;85:S119.

Rahul G, Samanta AK, Varaprasad G. A Lean Six Sigma approach to reduce overcrowding of patients and improving the discharge process in a super-specialty hospital. In: 2020 International Conference on System, Computation, Automation and Networking (ICSCAN); 2020 Jul 3. p. 1–6. IEEE.

Patel J, Nattabi B, Long R, Durey A, Naoum S, Kruger E, et al. The 5 C model: A proposed continuous quality improvement framework for volunteer dental services in remote Australian Aboriginal communities. Community Dent Oral Epidemiol. 2023;51(6):1150–8.

Van Acker B, McIntosh G, Gudes M. Continuous quality improvement techniques enhance HMO members’ immunization rates. J Healthc Qual. 1998;20(2):36–41.

Horine PD, Pohjala ED, Luecke RW. Healthcare financial managers and CQI. Healthc Financ Manage. 1993;47(9):34.

Reynolds JL. Reducing the frequency of episiotomies through a continuous quality improvement program. CMAJ. 1995;153(3):275–82.

Bunik M, Galloway K, Maughlin M, Hyman D. First five quality improvement program increases adherence and continuity with well-child care. Pediatr Qual Saf. 2021;6(6):e484.

Boyle TA, MacKinnon NJ, Mahaffey T, Duggan K, Dow N. Challenges of standardized continuous quality improvement programs in community pharmacies: the case of SafetyNET-Rx. Res Social Adm Pharm. 2012;8(6):499–508.

Price A, Schwartz R, Cohen J, Manson H, Scott F. Assessing continuous quality improvement in public health: adapting lessons from healthcare. Healthc Policy. 2017;12(3):34–49.

Gage AD, Gotsadze T, Seid E, Mutasa R, Friedman J. The influence of continuous quality improvement on healthcare quality: a mixed-methods study from Zimbabwe. Soc Sci Med. 2022;298:114831.

Chan YC, Ho SJ. Continuous quality improvement: a survey of American and Canadian healthcare executives. Hosp Health Serv Adm. 1997;42(4):525–44.

Balas EA, Puryear J, Mitchell JA, Barter B. How to structure clinical practice guidelines for continuous quality improvement? J Med Syst. 1994;18(5):289–97.

ElChamaa R, Seely AJE, Jeong D, Kitto S. Barriers and facilitators to the implementation and adoption of a continuous quality improvement program in surgery: a case study. J Contin Educ Health Prof. 2022;42(4):227–35.

Candas B, Jobin G, Dubé C, Tousignant M, Abdeljelil A, Grenier S, et al. Barriers and facilitators to implementing continuous quality improvement programs in colonoscopy services: a mixed methods systematic review. Endoscopy Int Open. 2016;4(2):E118–133.

Brandrud AS, Schreiner A, Hjortdahl P, Helljesen GS, Nyen B, Nelson EC. Three success factors for continual improvement in healthcare: an analysis of the reports of improvement team members. BMJ Qual Saf. 2011;20(3):251–9.

Lee S, Choi KS, Kang HY, Cho W, Chae YM. Assessing the factors influencing continuous quality improvement implementation: experience in Korean hospitals. Int J Qual Health Care. 2002;14(5):383–91.

Horwood C, Butler L, Barker P, Phakathi S, Haskins L, Grant M, et al. A continuous quality improvement intervention to improve the effectiveness of community health workers providing care to mothers and children: a cluster randomised controlled trial in South Africa. Hum Resour Health. 2017;15(1):39.

Hyrkäs K, Lehti K. Continuous quality improvement through team supervision supported by continuous self-monitoring of work and systematic patient feedback. J Nurs Manag. 2003;11(3):177–88.

Akdemir N, Peterson LN, Campbell CM, Scheele F. Evaluation of continuous quality improvement in accreditation for medical education. BMC Med Educ. 2020;20(Suppl 1):308.

Barzansky B, Hunt D, Moineau G, Ahn D, Lai CW, Humphrey H, et al. Continuous quality improvement in an accreditation system for undergraduate medical education: benefits and challenges. Med Teach. 2015;37(11):1032–8.

Gaylis F, Nasseri R, Salmasi A, Anderson C, Mohedin S, Prime R, et al. Implementing continuous quality improvement in an integrated community urology practice: lessons learned. Urology. 2021;153:139–46.

Gaga S, Mqoqi N, Chimatira R, Moko S, Igumbor JO. Continuous quality improvement in HIV and TB services at selected healthcare facilities in South Africa. South Afr J HIV Med. 2021;22(1):1202.

Wang F, Yao D. Application effect of continuous quality improvement measures on patient satisfaction and quality of life in gynecological nursing. Am J Transl Res. 2021;13(6):6391–8.

Lee SB, Lee LL, Yeung RS, Chan J. A continuous quality improvement project to reduce medication error in the emergency department. World J Emerg Med. 2013;4(3):179–82.

Chiang AA, Lee KC, Lee JC, Wei CH. Effectiveness of a continuous quality improvement program aiming to reduce unplanned extubation: a prospective study. Intensive Care Med. 1996;22(11):1269–71.

Chinnaiyan K, Al-Mallah M, Goraya T, Patel S, Kazerooni E, Poopat C, et al. Impact of a continuous quality improvement initiative on appropriate use of coronary CT angiography: results from a multicenter, statewide registry, the advanced cardiovascular imaging consortium (ACIC). J Cardiovasc Comput Tomogr. 2011;5(4):S29–30.

Gibson-Helm M, Rumbold A, Teede H, Ranasinha S, Bailie R, Boyle J. A continuous quality improvement initiative: improving the provision of pregnancy care for Aboriginal and Torres Strait Islander women. BJOG: Int J Obstet Gynecol. 2015;122:400–1.

Bennett IM, Coco A, Anderson J, Horst M, Gambler AS, Barr WB, et al. Improving maternal care with a continuous quality improvement strategy: a report from the interventions to minimize preterm and low birth weight infants through continuous improvement techniques (IMPLICIT) network. J Am Board Fam Med. 2009;22(4):380–6.

Krall SP, Iv CLR, Donahue L. Effect of continuous quality improvement methods on reducing triage to thrombolytic interval for Acute myocardial infarction. Acad Emerg Med. 1995;2(7):603–9.

Swanson TK, Eilers GM. Physician and staff acceptance of continuous quality improvement. Fam Med. 1994;26(9):583–6.

Yu Y, Zhou Y, Wang H, Zhou T, Li Q, Li T, et al. Impact of continuous quality improvement initiatives on clinical outcomes in peritoneal dialysis. Perit Dial Int. 2014;34(Suppl 2):S43–48.

Schiff GD, Goldfield NI. Deming meets Braverman: toward a progressive analysis of the continuous quality improvement paradigm. Int J Health Serv. 1994;24(4):655–73.

American Hospital Association, Division of Quality Resources (Chicago, IL). The role of hospital leadership in the continuous improvement of patient care quality. J Healthc Qual. 1992;14(5):8–14, 22.

Scriven M. The Logic and Methodology of checklists [dissertation]. Western Michigan University; 2000.

Hales B, Terblanche M, Fowler R, Sibbald W. Development of medical checklists for improved quality of patient care. Int J Qual Health Care. 2008;20(1):22–30.

Vermeir P, Vandijck D, Degroote S, Peleman R, Verhaeghe R, Mortier E, et al. Communication in healthcare: a narrative review of the literature and practical recommendations. Int J Clin Pract. 2015;69(11):1257–67.

Eljiz K, Greenfield D, Hogden A, Taylor R, Siddiqui N, Agaliotis M, et al. Improving knowledge translation for increased engagement and impact in healthcare. BMJ open Qual. 2020;9(3):e000983.

O’Brien JL, Shortell SM, Hughes EF, Foster RW, Carman JM, Boerstler H, et al. An integrative model for organization-wide quality improvement: lessons from the field. Qual Manage Healthc. 1995;3(4):19–30.

Adily A, Girgis S, D’Este C, Matthews V, Ward JE. Syphilis testing performance in Aboriginal primary health care: exploring impact of continuous quality improvement over time. Aust J Prim Health. 2020;26(2):178–83.

Horwood C, Butler L, Barker P, Phakathi S, Haskins L, Grant M, et al. A continuous quality improvement intervention to improve the effectiveness of community health workers providing care to mothers and children: a cluster randomised controlled trial in South Africa. Hum Resour Health. 2017;15:1–11.

Veillard J, Cowling K, Bitton A, Ratcliffe H, Kimball M, Barkley S, et al. Better measurement for performance improvement in low- and middle-income countries: the primary Health Care Performance Initiative (PHCPI) experience of conceptual framework development and indicator selection. Milbank Q. 2017;95(4):836–83.

Barbazza E, Kringos D, Kruse I, Klazinga NS, Tello JE. Creating performance intelligence for primary health care strengthening in Europe. BMC Health Serv Res. 2019;19(1):1006.

Assefa Y, Hill PS, Gilks CF, Admassu M, Tesfaye D, Van Damme W. Primary health care contributions to universal health coverage. Ethiopia Bull World Health Organ. 2020;98(12):894.

Van Weel C, Kidd MR. Why strengthening primary health care is essential to achieving universal health coverage. CMAJ. 2018;190(15):E463–466.


Acknowledgements

Not applicable.

Funding

The authors received no funding.

Author information

Authors and Affiliations

School of Public Health, The University of Queensland, Brisbane, Australia

Aklilu Endalamaw, Resham B Khatri, Tesfaye Setegn Mengistu, Daniel Erku & Yibeltal Assefa

College of Medicine and Health Sciences, Bahir Dar University, Bahir Dar, Ethiopia

Aklilu Endalamaw & Tesfaye Setegn Mengistu

Health Social Science and Development Research Institute, Kathmandu, Nepal

Resham B Khatri

Centre for Applied Health Economics, School of Medicine, Griffith University, Brisbane, Australia

Daniel Erku

Menzies Health Institute Queensland, Griffith University, Brisbane, Australia

International Institute for Primary Health Care in Ethiopia, Addis Ababa, Ethiopia

Eskinder Wolka & Anteneh Zewdie


Contributions

AE conceptualized the study, developed the first draft of the manuscript, and managed feedback from co-authors. YA conceptualized the study, provided feedback, and supervised the whole process. RBK, TSM, DE, EW, and AZ provided feedback throughout. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Aklilu Endalamaw .

Ethics declarations

Ethics approval and consent to participate

Not applicable because this research is based on publicly available articles.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Endalamaw, A., Khatri, R.B., Mengistu, T.S. et al. A scoping review of continuous quality improvement in healthcare system: conceptualization, models and tools, barriers and facilitators, and impact. BMC Health Serv Res 24, 487 (2024). https://doi.org/10.1186/s12913-024-10828-0


Received : 27 December 2023

Accepted : 05 March 2024

Published : 19 April 2024

DOI : https://doi.org/10.1186/s12913-024-10828-0


Keywords

  • Continuous quality improvement
  • Quality of Care



  • Open access
  • Published: 19 April 2024

Person-centered care assessment tool with a focus on quality healthcare: a systematic review of psychometric properties

  • Lluna Maria Bru-Luna 1 ,
  • Manuel Martí-Vilar 2 ,
  • César Merino-Soto 3 ,
  • José Livia-Segovia 4 ,
  • Juan Garduño-Espinosa 5 &
  • Filiberto Toledano-Toledano 5 , 6 , 7  

BMC Psychology volume 12, Article number: 217 (2024)


The person-centered care (PCC) approach plays a fundamental role in ensuring quality healthcare. The Person-Centered Care Assessment Tool (P-CAT) is one of the shortest and simplest tools currently available for measuring PCC. The objective of this study was to conduct a systematic review of the evidence in validation studies of the P-CAT, taking the “Standards” as a frame of reference.

First, a systematic literature review was conducted following the PRISMA method. Second, a systematic descriptive literature review of validity tests was conducted following the “Standards” framework. The search strategy and information sources were obtained from the Cochrane, Web of Science (WoS), Scopus and PubMed databases. With regard to the eligibility criteria and selection process, a protocol was registered in PROSPERO (CRD42022335866), and articles had to meet criteria for inclusion in the systematic review.

A total of seven articles were included. Empirical evidence indicates that these validations offer a high number of sources related to test content, internal structure for dimensionality and internal consistency. A moderate number of sources pertain to internal structure in terms of test-retest reliability and the relationship with other variables. There is little evidence of response processes, internal structure in measurement invariance terms, and test consequences.

The various validations of the P-CAT are not framed within a structured, theory-based procedural framework such as the “Standards”. This can affect clinical practice because people’s health may depend on it. The findings of this study show that validation studies continue to focus on the types of validity traditionally studied and overlook interpretation of the scores in terms of their intended use.


Person-centered care (PCC)

Quality care for people with chronic diseases, functional limitations, or both has become one of the main objectives of medical and care services. The person-centered care (PCC) approach is an essential element not only in achieving this goal but also in providing high-quality health maintenance and medical care [ 1 , 2 , 3 ]. In addition to guaranteeing human rights, PCC provides numerous benefits to both the recipient and the provider [ 4 , 5 ]. Additionally, PCC includes a set of necessary competencies for healthcare professionals to address ongoing challenges in this area [ 6 ]. PCC includes the following elements [ 7 ]: an individualized, goal-oriented care plan based on individuals’ preferences; an ongoing review of the plan and the individual’s goals; support from an interprofessional team; active coordination among all medical and care providers and support services; ongoing information exchange, education and training for providers; and quality improvement through feedback from the individual and caregivers.

There is currently a growing body of literature on the application of PCC. A good example is McCormack’s widely known mid-range theory [ 8 ], an internationally recognized theoretical framework for PCC and its operationalization in practice, which guides care practitioners and researchers in hospital settings. Within this framework, PCC is conceived of as “an approach to practice that is established through the formation and fostering of therapeutic relationships between all care providers, service users, and others significant to them, underpinned by values of respect for persons, [the] individual right to self-determination, mutual respect, and understanding” [ 9 ].

Thus, as established by PCC, it is important to emphasize that reference to the person who is the focus of care refers not only to the recipient but also to everyone involved in a care interaction [ 10 , 11 ]. PCC ensures that professionals are trained in relevant skills and methodology since, as discussed above, carers are among the agents who have the greatest impact on the quality of life of the person in need of care [ 12 , 13 , 14 ]. Furthermore, due to the high burden of caregiving, it is essential to account for caregivers’ well-being. In this regard, studies on professional caregivers are beginning to suggest that the provision of PCC can produce multiple benefits for both the care recipient and the caregiver [ 15 ].

Despite a considerable body of literature and the frequent inclusion of the term in health policy and research [ 16 ], PCC involves several complications. There is no standard consensus on the definition of this concept [ 17 ], which includes problematic areas such as efficacy assessment [ 18 , 19 ]. In addition, the difficulty of measuring the subjectivity involved in identifying the dimensions of PCC and the infrequent use of standardized measures are acute issues [ 20 ]. These limitations and purposes motivated the creation of the Person-Centered Care Assessment Tool (P-CAT; [ 21 ]), which emerged from the need for a brief, economical, easily applied, versatile and comprehensive assessment instrument to provide valid and reliable measures of PCC for research purposes [ 21 ].

Person-centered care assessment tool (P-CAT)

There are several instruments that can measure PCC from different perspectives (i.e., the caregiver or the care recipient) and in different contexts (e.g., hospitals and nursing homes). However, from a practical point of view, the P-CAT is one of the shortest and simplest tools and contains all the essential elements of PCC described in the literature. It was developed in Australia to measure the approach of long-term residential settings to older people with dementia, although it is increasingly used in other healthcare settings, such as oncology units [ 22 ] and psychiatric hospitals [ 23 ].

Due to the brevity and simplicity of its application, the versatility of its use in different medical and care contexts, and its potential emic characteristics (i.e., constructs that can be cross-culturally applicable with reasonable and similar structure and interpretation; [ 24 ]), the P-CAT is one of the most widely used tests by professionals to measure PCC [ 25 , 26 ]. Since its creation, it has been adapted in countries separated by wide cultural and linguistic differences, such as Norway [ 27 ], Sweden [ 28 ], China [ 29 ], South Korea [ 30 ], Spain [ 25 ], and Italy [ 31 ].

The P-CAT comprises 13 items rated on a 5-point ordinal scale (from “strongly disagree” to “strongly agree”), with high scores indicating a high degree of person-centeredness. The scale consists of three dimensions: person-centered care (7 items), organizational support (4 items) and environmental accessibility (2 items). In the original study ( n  = 220; [ 21 ]), the internal consistency of the instrument yielded satisfactory values for the total scale ( α  = 0.84) and good test-retest reliability ( r  = .66) at one-week intervals. A reliability generalization study conducted in 2021 [ 32 ], which estimated the internal consistency of the P-CAT and analyzed possible factors that could affect it, revealed that the mean α value for the 25 meta-analysis samples (some of which were part of the validations included in this study) was 0.81 and that the only variable with a statistically significant relationship to the reliability coefficient was the mean age of the sample. With respect to internal structure validity, three factors (56% of the total variance) were obtained, and content validity was assessed by experts, literature reviews and stakeholders [ 33 ].

Although not explicitly stated, the apparent commonality between validation studies of different versions of the P-CAT may be influenced by an influential decades-old validity framework that differentiates three categories: content validity, construct validity, and criterion validity [ 34 , 35 ]. However, a reformulation of the validity of the P-CAT within a modern framework, which would provide a different definition of validity, has not been performed.

Scale validity

Traditionally, validation is a process focused on the psychometric properties of a measurement instrument [ 36 ]. In the early 20th century, with the frequent use of standardized measurement tests in education and psychology, two definitions emerged: the first defined validity as the degree to which a test measures what it intends to measure, while the second described the validity of an instrument in terms of the correlation it presents with a variable [ 35 ].

However, over the past century, validity theory has evolved, leading to the understanding that validity should be based on specific interpretations for an intended purpose. It should not be limited to empirically obtained psychometric properties but should also be supported by the theory underlying the construct measured. Speaking of classical versus modern validity theory thus reflects an evolution in how the concept of validity is understood, with a classical approach (classical test theory, CTT) specifically differentiated from a modern approach. In general, recent concepts associated with a modern view of validity are based on (a) a unitary conception of validity and (b) validity judgments based on inferences and interpretations of the scores of a measure [ 37 , 38 ]. This conceptual advance led to the creation of a guiding framework for obtaining evidence to support the use and interpretation of the scores obtained by a measure [ 39 ].

This purpose is addressed by the Standards for Educational and Psychological Testing (“Standards”), a guide created by the American Educational Research Association (AERA), the American Psychological Association (APA) and the National Council on Measurement in Education (NCME) in 2014 with the aim of providing guidelines to assess the validity of the interpretations of the scores of an instrument based on their intended use. Two conceptual aspects stand out in this modern view of validity: first, validity is a unitary concept centered on the construct; second, validity is defined as “the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests” [ 37 ]. Thus, the “Standards” propose several sources that serve as a reference for assessing different aspects of validity. The five sources of validity evidence are as follows [ 37 ]: test content, response processes, internal structure, relations to other variables and consequences of testing. According to AERA et al. [ 37 ], test content validity refers to the relationship of the administration process, subject matter, wording and format of test items to the construct they are intended to measure; it is assessed predominantly with qualitative methods, without excluding quantitative approaches. Validity based on response processes rests on analysis of respondents’ cognitive processes and interpretation of the items and is assessed with qualitative methods. Internal structure validity is based on the interrelationship between the items and the construct and is assessed by quantitative methods. Validity in terms of the relationship with other variables is based on comparison between the variable that the instrument intends to measure and other theoretically relevant external variables and is assessed by quantitative methods. Finally, validity based on the consequences of testing analyzes the consequences, both intended and unintended, that may be due to a source of invalidity; it is assessed mainly by qualitative methods.

Thus, although validity plays a fundamental role in providing a strong scientific basis for interpretations of test scores, validation studies in the health field have traditionally focused on content validity, criterion validity and construct validity and have overlooked the interpretation and use of scores [ 34 ].

The “Standards” are considered a suitable validity theory-based procedural framework for reviewing the validity of questionnaires due to their ability to analyze sources of validity from both qualitative and quantitative approaches and their evidence-based method [ 35 ]. Nevertheless, due to a lack of knowledge or the lack of a systematic description protocol, very few instruments to date have been reviewed within the framework of the “Standards” [ 39 ].

Current study

Although the P-CAT is one of the instruments most widely used by professionals and has seven validations [ 25 , 27 , 28 , 29 , 30 , 31 , 40 ], no analysis of its validity has been conducted within the framework of the “Standards”. That is, empirical evidence of the validity of the P-CAT has not been gathered in a way that helps to develop a judgment based on a synthesis of the available information.

A review of this type is critical given that some methodological issues seem to have not been resolved in the P-CAT. For example, although the multidimensionality of the P-CAT was identified in the study that introduced it, Bru-Luna et al. [ 32 ] recently stated that in adaptations of the P-CAT [ 25 , 27 , 28 , 29 , 30 , 40 ], the total score is used for interpretation and multidimensionality is disregarded. Thus, the multidimensionality of the original study was apparently not replicated. Bru-Luna et al. [ 32 ] also indicated that the internal structure validity of the P-CAT is usually underreported due to a lack of sufficiently rigorous approaches to establish with certainty how its scores are calculated.

The validity of the P-CAT, specifically its internal structure, appears to be unresolved. Nevertheless, substantive research and professional practice point to this measure as relevant to assessing PCC. This perception is contestable and judgment-based and may not be sufficient to assess the validity of the P-CAT from a cumulative and synthetic angle based on preceding validation studies. An adequate assessment of validity requires a model to conceptualize validity followed by a review of previous studies of the validity of the P-CAT using this model.

Therefore, the main purpose of this study was to conduct a systematic review of the evidence provided by P-CAT validation studies while taking the “Standards” as a framework.

The present study comprises two distinct but interconnected procedures. First, a systematic literature review was conducted following the PRISMA method ( [ 41 ]; Additional file 1; Additional file 2) with the aim of collecting all validations of the P-CAT that have been developed. Second, a systematic description of the validity evidence for each of the P-CAT validations found in the systematic review was developed following the “Standards” framework [ 37 ]. The work of Hawkins et al. [ 39 ], the first study to review validity sources according to the guidelines proposed by the “Standards”, was also used as a reference. Both provided conceptual and pragmatic guidance for organizing and classifying validity evidence for the P-CAT.

The procedure conducted in the systematic review is described below, followed by the procedure for examining the validity studies.

Systematic review

Search strategy and information sources

Initially, the Cochrane database was searched with the aim of identifying systematic reviews of the P-CAT. When no such reviews were found, subsequent preliminary searches were performed in the Web of Science (WoS), Scopus and PubMed databases. These databases play a fundamental role in recent scientific literature since they are the main sources of published articles that undergo high-quality content and editorial review processes [ 42 ]. The search strategy was as follows: the original P-CAT article [ 21 ] was located, after which all articles that cited it through 2021 were identified and analyzed. This approach ensured the inclusion of all validations. No articles were excluded on the basis of language, to avoid language bias [ 43 ]. Moreover, to reduce the effects of publication bias, a complementary search in Google Scholar was also performed to allow the inclusion of “gray” literature [ 44 ]. Finally, a manual search was performed through a review of the references of the included articles to identify other articles that met the search criteria but were not present in any of the aforementioned databases.
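Such a citation-chasing search can also be scripted for reproducibility. The sketch below is a minimal illustration only: it assumes the public OpenAlex API as a stand-in for the databases actually searched, and the seed work ID is a placeholder rather than the real record of the original P-CAT article.

```python
import requests

# Hypothetical forward-citation search: collect works that cite the
# original P-CAT article through 2021, deduplicating by DOI.
SEED = "W0000000000"  # placeholder OpenAlex ID, not the real record
params = {"filter": f"cites:{SEED},to_publication_date:2021-12-31",
          "per-page": 200, "cursor": "*"}

seen, citing = set(), []
while True:
    page = requests.get("https://api.openalex.org/works",
                        params=params, timeout=30).json()
    for work in page["results"]:
        doi = work.get("doi")
        if doi and doi in seen:      # skip duplicates across pages/sources
            continue
        seen.add(doi)
        citing.append({"doi": doi,
                       "title": work["display_name"],
                       "year": work["publication_year"]})
    if not page["meta"]["next_cursor"]:
        break
    params["cursor"] = page["meta"]["next_cursor"]

print(f"{len(citing)} candidate articles for title/abstract screening")
```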

This process was conducted by one of the authors and corroborated by another using the Covidence tool [ 45 ]. A third author was consulted in case of doubt.

Eligibility criteria and selection process

The protocol was registered in PROSPERO, and the search was conducted according to these criteria. The identification code is CRD42022335866.

The articles had to meet the following criteria for inclusion in the systematic review: (a) a methodological approach to P-CAT validation, (b) an experimental or quasi-experimental design, (c) any type of sample, and (d) any language. We discarded studies that met at least one of the following exclusion criteria: (a) systematic reviews, bibliometric reviews or meta-analyses of the instrument or (b) studies published after 2021.

Data collection process

After the articles were selected, the most relevant information was extracted from each article. Fundamental data were recorded in an Excel spreadsheet for each of the sections: introduction, methodology, results and discussion. Information was also recorded about the limitations mentioned in each article as well as the practical implications and suggestions for future research.

Given the aim of the study, information was collected about the sources of validity of each study, including test content (judges’ evaluation, literature review and translation), response processes, internal structure (factor analysis, design, estimator, factor extraction method, factors and items, interfactor R, internal replication, effect of the method, and factor loadings), relationships with other variables (convergent, divergent, concurrent and predictive validity), and consequences of measurement.

Description of the validity study

To assess the validity of the studies, an Excel table was used. Information was recorded for the seven articles included in the systematic review. The data were extracted directly from the texts of the articles and included information about the authors, the year of publication, the country where each P-CAT validation was produced and each of the five standards proposed in the “Standards” [ 37 ].

The validity source related to internal structure was divided into three sections to record information about dimensionality (e.g., factor analysis, design, estimator, factor extraction method, factors and items, interfactor R, internal replication, effect of the method, and factor loadings), reliability expression (i.e., internal consistency and test-retest) and the study of factorial invariance according to the groups into which it was divided (e.g., sex, age, profession) and the level of study (i.e., metric, intercepts). This approach allowed much more information to be obtained than relying solely on source validity based on internal structure. This division was performed by the same researcher who performed the previous processes.
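As a minimal sketch of this coding scheme (the field names below are ours for illustration, not the authors’ actual spreadsheet columns), each article can be represented as one record holding the five sources of evidence, with internal structure split into the three sections just described:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ValidityRecord:
    """One row of a hypothetical extraction table for a P-CAT validation."""
    study: str                 # e.g., "Rokstad et al. (2012)"
    country: str
    # Five sources of validity evidence per the "Standards":
    test_content: list = field(default_factory=list)   # translation, judges...
    response_processes: Optional[str] = None           # think-aloud, interviews...
    dimensionality: Optional[str] = None               # EFA/CFA, factors, loadings
    reliability: Optional[str] = None                  # internal consistency, retest
    invariance: Optional[str] = None                   # groups and level tested
    other_variables: Optional[str] = None              # convergent, discriminant...
    consequences: Optional[str] = None

# Usage example with values paraphrased from the text above:
record = ValidityRecord(
    study="Zhong & Lou (2013)", country="China",
    test_content=["translation", "expert judges"],
    dimensionality="EFA; 3 factors", reliability="Cronbach's alpha; total scale")
```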

Study selection and study characteristics

The systematic review process was developed according to the PRISMA methodology [ 41 ].

The WoS, Scopus, PubMed and Google Scholar databases were searched on February 12, 2022 and yielded a total of 485 articles. Of these, 111 were found in WoS, 114 in Scopus, 43 in PubMed and 217 in Google Scholar. In the first phase, the titles and abstracts of all the articles were read. In this first screening, 457 articles were eliminated because they did not include studies with a methodological approach to P-CAT validation, and one article was excluded because it was the original P-CAT article. This resulted in a total of 27 articles, 19 of which were duplicated in different databases and, in the case of Google Scholar, within the same database. This process yielded a total of eight articles that were evaluated for eligibility by a complete reading of the text. In this step, one of the articles was excluded due to a lack of access to the full text of the study [ 31 ] (although the original manuscript was found, it was impossible to access the complete content; in addition, the authors of the manuscript were contacted, but no reply was received). Finally, a manual search was performed by reviewing the references of the seven studies, but none were considered suitable for inclusion. Thus, the review was conducted with a total of seven articles.
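The reported flow can be re-derived arithmetically; the snippet below simply checks that the counts stated above are internally consistent.

```python
# Sanity check of the screening flow described above.
identified = {"WoS": 111, "Scopus": 114, "PubMed": 43, "Google Scholar": 217}
total = sum(identified.values())                  # 485 records identified
after_screening = total - 457 - 1                 # non-validations + original article
after_deduplication = after_screening - 19        # duplicates removed
included = after_deduplication - 1                # one full text unavailable [31]
assert (total, after_screening, after_deduplication, included) == (485, 27, 8, 7)
```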

Of the seven studies, six were original validations in other languages. These included Norwegian [ 27 ], Swedish [ 28 ], Chinese (which has two validations [ 29 , 40 ]), Spanish [ 25 ], and Korean [ 30 ]. The study by Selan et al. [ 46 ] included a modification of the Swedish version of the P-CAT and explored the psychometric properties of both versions (i.e., the original Swedish version and the modified version).

The study selection and screening process is illustrated in detail in Fig. 1.

Fig. 1 PRISMA 2020 flow diagram for new systematic reviews including database searches

Validity analysis

To provide a clear overview of the validity analyses, Table  1 descriptively shows the percentages of items that provide information about the five standards proposed by the “Standards” guide [ 37 ].

The table shows a high number of validity sources related to test content and internal structure in relation to dimensionality and internal consistency, followed by a moderate number of sources for test-retest and relationship with other variables. A rate of 0% is observed for validity sources related to response processes, invariance and test consequences. Below, different sections related to each of the standards are shown, and the information is presented in more detail.

Evidence based on test content

The first standard, which focuses on test content, was met by all articles (100%). Translation, which refers to the equivalence of content between the original language and the target language, was addressed in the six articles that conducted validation in another language and/or culture. These studies reported that the validations were translated by bilingual experts and/or experts in the area of care. In addition, three studies [ 25 , 29 , 40 ] reported that the translation process followed International Test Commission guidelines, such as those of Beaton et al. [ 47 ], Guillemin [ 48 ], Hambleton et al. [ 49 ], and Muñiz et al. [ 50 ]. Evaluation by judges, which addressed the relevance, clarity and importance of the content, was divided into two categories: expert evaluation (a panel of expert judges for each of the areas considered in the evaluation instrument) and experiential evaluation (potential participants testing the test). The first type of evaluation occurred in three of the articles [ 28 , 29 , 46 ], while the second occurred in two [ 25 , 40 ]. Only one of the articles [ 29 ] reported that the scale contained items that reflected the dimension described in the literature. The validity evidence related to the test content presented in each article can be found in Table  2 .

Evidence based on response processes

The second standard, related to the validity of the response process, is evaluated according to the “Standards” through the analysis of individual responses: “questioning test takers about their performance strategies or response to particular items (…), maintaining records that monitor the development of a response to a writing task (…), documentation of other aspects of performance, like eye movement or response times…” [ 37 ] (p. 15). None of the articles provided this type of evidence.

Evidence based on internal structure

The third standard, validity related to internal structure, was divided into three sections. First, the dimensionality of each study was examined in terms of factor analysis, design, estimator, factor extraction method, factors and items, interfactor R, internal replication, effect of the method, and factor loadings. Le et al. [ 40 ] conducted an exploratory-confirmatory design, while Sjögren et al. [ 28 ] conducted a confirmatory-exploratory design to assess construct validity using confirmatory factor analysis (CFA) and investigated it further using exploratory factor analysis (EFA). The remaining articles employed only a single form of factor analysis: three employed EFA, and two employed CFA. Regarding factor extraction, only three of the articles reported the method used, including Kaiser’s eigenvalue criterion, the scree plot test, parallel analysis and Velicer’s MAP test. Instrument validations yielded a total of two factors in five of the seven articles, while one yielded a single dimension [ 25 ] and another yielded three dimensions [ 29 ], as in the original instrument. The interfactor R was reported only in the study by Zhong and Lou [ 29 ], whereas in the study by Martínez et al. [ 25 ], it could be easily obtained since the scale consisted of only one dimension. Internal replication was also calculated in the Spanish validation by randomly splitting the sample into two halves to test the correlations between factors. The effect of the method was not reported in any of the articles. This information is presented in Table  3 in addition to a summary of the factor loadings.
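To make this step concrete, the sketch below pairs one of the reported extraction criteria (parallel analysis) with an oblique EFA. It is only an illustration: it assumes the Python factor_analyzer package and runs on simulated 13-item data, not on any study’s responses.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Simulated respondent-by-item matrix (220 raters x 13 P-CAT-like items).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.integers(1, 6, size=(220, 13)),
                 columns=[f"item{i}" for i in range(1, 14)])

# Parallel analysis: keep factors whose observed eigenvalues exceed the
# mean eigenvalues of random data with the same dimensions.
obs_eig, _ = FactorAnalyzer(rotation=None).fit(X).get_eigenvalues()
rand_eig = np.mean(
    [np.linalg.eigvalsh(np.corrcoef(rng.normal(size=X.shape), rowvar=False))[::-1]
     for _ in range(100)], axis=0)
n_factors = max(int(np.sum(obs_eig > rand_eig)), 1)

# Oblique EFA with the retained number of factors; inspect the loadings.
efa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin").fit(X)
print(pd.DataFrame(efa.loadings_, index=X.columns).round(2))
```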

The second section examined reliability. All the studies reported internal consistency, in every case computed with Cronbach’s α coefficient for both the total scale and the subscales; McDonald’s ω coefficient was not used in any case. Four of the seven articles performed a test-retest analysis. Martínez et al. [ 25 ] conducted the retest after a period of seven days, Le et al. [ 40 ] and Rokstad et al. [ 27 ] performed it between one and two weeks later, and Sjögren et al. [ 28 ] allowed approximately two weeks to pass after the initial test.
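For reference, both quantities reported by these studies can be computed in a few lines. The sketch below uses simulated scores, since the raw data of the cited validations are not available:

```python
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for a respondents-by-items score matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
time1 = rng.integers(1, 6, size=(50, 13)).astype(float)         # simulated scores
time2 = np.clip(time1 + rng.normal(0, 0.5, time1.shape), 1, 5)  # retest ~1-2 weeks

alpha = cronbach_alpha(time1)                            # internal consistency
r, _ = pearsonr(time1.sum(axis=1), time2.sum(axis=1))    # test-retest on totals
print(f"alpha = {alpha:.2f}, test-retest r = {r:.2f}")
```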

The third section analyzes the calculation of invariance, which was not reported in any of the studies.

Evidence based on relationships with other variables

In the fourth standard, based on validity according to the relationship with other variables, the articles that reported it used only convergent validity (i.e., it was hypothesized that the variables related to the construct measured by the test—in this case, person-centeredness—were positively or negatively related to another construct). Discriminant validity hypothesizes that the variables related to the PCC construct are not correlated in any way with any other variable studied. No article (0%) measured discriminant evidence, while four (57%) measured convergent evidence [ 25 , 29 , 30 , 46 ]. Convergent validity was obtained through comparisons with instruments such as the Person-Centered Climate Questionnaire–Staff Version (PCQ-S), the Staff-Based Measures of Individualized Care for Institutionalized Persons with Dementia (IC), the Caregiver Psychological Elder Abuse Behavior Scale (CPEAB), the Organizational Climate (CLIOR) and the Maslach Burnout Inventory (MBI). In the case of Selan et al. [ 46 ], convergent validity was assessed on two items considered by the authors as “crude measures of person-centered care (i.e., external constructs) giving an indication of the instruments’ ability to measure PCC” (p. 4). Concurrent validity, which measures the degree to which the results of one test are or are not similar to those of another test conducted at more or less the same time with the same participants, and predictive validity, which allows predictions to be established regarding behavior based on comparison between the values of the instrument and the criterion, were not reported in any of the studies.

Evidence based on the consequences of testing

The fifth and final standard was related to the consequences of the test. It analyzed the consequences, both intended and unintended, of applying the test to a given sample. None of the articles presented explicit or implicit evidence of this.

The last two sources of validity can be seen in Table  4 .

Table  5 shows the results of the set of validity tests for each study according to the described standards.

The main purpose of this article is to analyze the evidence of validity in different validation studies of the P-CAT. To gather all existing validations, a systematic review of all literature citing this instrument was conducted.

The publication of validation studies of the P-CAT has been constant over the years. Since the publication of the original instrument in 2010, seven validations have been published in other languages (taking into account the Italian version by Brugnolli et al. [ 31 ], which could not be included in this study) as well as a modification of one of these versions. The very unequal distribution of validations between languages and countries is striking. A recent systematic review [ 51 ] revealed that in Europe, the countries where the PCC approach is most widely used are the United Kingdom, Sweden, the Netherlands, Northern Ireland, and Norway. It has also been shown that neighboring countries seem to exert an influence on each other due to proximity [ 52 ], such that they tend to organize healthcare in a similar way, as is the case for the Scandinavian countries. This favors the expansion of PCC and explains the numerous validations we found in this geographical area.

Although this approach is conceived as an essential element of healthcare for most governments [ 53 ], PCC varies according to the different definitions and interpretations attributed to it, which can cause confusion in its application (e.g., between Norway and the United Kingdom [ 54 ]). Moreover, facilitators of or barriers to implementation depend on the context and level of development of each country, and financial support remains one of the main factors in this regard [ 53 ]. This fact explains why PCC is not globally widespread among all territories. In countries where access to healthcare for all remains out of reach for economic reasons, the application of this approach takes a back seat, as does the validation of its assessment tools. In contrast, in a large part of Europe or in countries such as China or South Korea that have experienced decades of rapid economic development, patients are willing to be involved in their medical treatment and enjoy more satisfying and efficient medical experiences and environments [ 55 ], which facilitates the expansion of validations of instruments such as the P-CAT.

Regarding validity testing, the guidelines proposed by the “Standards” [ 37 ] were followed. According to the analysis of the different validations of the P-CAT instrument, none of the studies used a structured validity theory-based procedural framework for conducting validation. The most frequently reported validity tests were on the content of the test and two of the sections into which the internal structure was divided (i.e., dimensionality and internal consistency).

In the present article, the most cited source of validity in the studies was the content of the test because most of the articles were validations of the P-CAT in other languages, and the authors reported that the translation procedure was conducted by experts in all cases. In addition, several of the studies employed International Test Commission guidelines, such as those by Beaton et al. [ 47 ], Guillemin [ 48 ], Hambleton et al. [ 49 ], and Muñiz et al. [ 50 ]. Several studies also assessed the relevance, clarity and importance of the content.

The third source of validity, internal structure, was the next most often reported, although it appeared unevenly among the three sections into which this evidence was divided. Dimensionality and internal consistency were reported in all studies, followed by test-retest consistency. In relation to the first section, factor analysis, a total of five EFAs and four CFAs were presented in the validations. Traditionally, EFA has been used in research to assess dimensionality and identify key psychological constructs, although this approach involves a number of inconveniences, such as difficulty testing measurement invariance and incorporating latent factors into subsequent analyses [ 56 ] or the major problem of factor loading matrix rotation [ 57 ]. Studies eventually began to employ CFA, a technique that overcame some of these obstacles [ 56 ] but had other drawbacks; for example, the strict requirement of zero cross-loadings often does not fit the data well, and misspecification of zero loadings tends to produce distorted factors [ 57 ]. Recently, exploratory structural equation modeling (ESEM) has been proposed. This technique is widely recommended both conceptually and empirically to assess the internal structure of psychological tools [ 58 ] since it overcomes the limitations of EFA and CFA in estimating their parameters [ 56 , 57 ].
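The difference between the two specifications can be written compactly in terms of the loading matrix of the common factor model x = Λξ + δ. For six items and two factors, CFA fixes every cross-loading to zero, whereas ESEM estimates all loadings, typically with a target rotation:

```latex
\Lambda_{\mathrm{CFA}} =
\begin{pmatrix}
\lambda_{11} & 0 \\ \lambda_{21} & 0 \\ \lambda_{31} & 0 \\
0 & \lambda_{42} \\ 0 & \lambda_{52} \\ 0 & \lambda_{62}
\end{pmatrix},
\qquad
\Lambda_{\mathrm{ESEM}} =
\begin{pmatrix}
\lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \\ \lambda_{31} & \lambda_{32} \\
\lambda_{41} & \lambda_{42} \\ \lambda_{51} & \lambda_{52} \\ \lambda_{61} & \lambda_{62}
\end{pmatrix}
```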

The next section concerns reliability, which all the studies reported using Cronbach’s α coefficient. Reliability is defined as a combination of systematic and random influences that determine the observed scores on a psychological test. Reporting a reliability measure ensures that item-based scores are consistent, that the tool’s responses are replicable and that they are not modified solely by random noise [ 59 , 60 ]. Currently, the most commonly employed reliability coefficient in studies with a multi-item measurement scale (MIMS) is Cronbach’s α [ 60 , 61 ].

Cronbach’s α [ 62 ] is based on numerous strict assumptions (e.g., the test must be unidimensional, factor loadings must be equal for all items and item errors should not covary) to estimate internal consistency. These assumptions are difficult to meet, and their violation may produce small reliability estimates [ 60 ]. One of the alternative measures to α that is increasingly recommended by the scientific literature is McDonald’s ω [ 63 ], a composite reliability measure. This coefficient is recommended for congeneric scales in which tau equivalence is not assumed. It has several advantages. For example, estimates of ω are usually robust when the estimated model contains more factors than the true model, even with small samples, or when skewness in univariate item distributions produces lower biases than those found when using α [ 59 ].
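For clarity, the two coefficients contrasted here are defined as follows, where k is the number of items, σᵢ² the variance of item i, σ_X² the variance of the total score, and, for ω, λᵢ and θᵢ the loadings and residual variances of a one-factor model:

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right),
\qquad
\omega = \frac{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2}}
              {\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2} + \sum_{i=1}^{k}\theta_i}
```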

The test-retest method was the next most commonly reported internal structure section in these studies. This type of reliability considers the consistency of the scores of a test between two measurements separated by a period [ 64 ]. It is striking that test-retest consistency does not have a prevalence similar to that of internal consistency since, unlike internal consistency, test-retest consistency can be assessed for practically all types of patient-reported outcomes. It is even considered by some measurement experts to report reliability with greater relevance than internal consistency since it plays a fundamental role in the calculation of parameters for health measures [ 64 ]. However, the literature provides little guidance regarding the assessment of this type of reliability.

The internal structure section that was least frequently reported in the studies in this review was invariance. A lack of invariance refers to a difference between scores on a test that is not explained by group differences in the structure it is intended to measure [ 65 ]. The invariance of the measure should be emphasized as a prerequisite in comparisons between groups since “if scale invariance is not examined, item bias may not be fully recognized and this may lead to a distorted interpretation of the bias in a particular psychological measure” [ 65 ].
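In practice, invariance is usually examined with multigroup factor analysis as a sequence of nested models with increasingly strict equality constraints across groups g; a common formulation is sketched below.

```latex
\text{configural: } x^{(g)} = \tau^{(g)} + \Lambda^{(g)} \xi + \delta^{(g)}
\text{metric: } \Lambda^{(g)} = \Lambda \quad \forall g
\text{scalar: } \Lambda^{(g)} = \Lambda, \quad \tau^{(g)} = \tau \quad \forall g
```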

Evidence related to other variables was the next most reported source of validity in the studies included in this review. Specifically, the four studies that reported this evidence did so according to convergent validity and cited several instruments. None of the studies included evidence of discriminant validity, although this may be because there are currently several obstacles related to the measurement of this type of validity [ 66 ]. On the one hand, different definitions are used in the applied literature, which makes its evaluation difficult; on the other hand, the literature on discriminant validity focuses on techniques that require the use of multiple measurement methods, which often seem to have been introduced without sufficient evidence or are applied randomly.

Validity related to response processes was not reported by any of the studies. There are several methods to analyze this validity. These methods can be divided into two groups: “those that directly access the psychological processes or cognitive operations (think aloud, focus group, and interviews), compared to those which provide indirect indicators which in turn require additional inference (eye tracking and response times)” [ 38 ]. However, this validity evidence has traditionally been reported less frequently than others in most studies, perhaps because there are fewer clear and accepted practices on how to design or report these studies [ 67 ].

Finally, the consequences of testing were not reported in any of the studies. There is debate regarding this source of validity, with two main opposing streams of thought. On the one hand, some authors [ 68 , 69 ] suggest that consequences that appear after the application of a test should not derive from any source of test invalidity and that “adverse consequences only undermine the validity of an assessment if they can be attributed to a problem of fit between the test and the construct” (p. 6). In contrast, Cronbach [ 70 ] notes that adverse social consequences that may result from the application of a test may call into question the validity of the test. However, the potential risks that may arise from the application of a test should be minimized in any case, especially in regard to health assessments. To this end, it is essential that this aspect be assessed by instrument developers and that the experiences of respondents be protected through the development of comprehensive and informed practices [ 39 ].

This work is not without limitations. First, not all published validation studies of the P-CAT, such as the Italian version by Brugnolli et al. [ 31 ], were available. These studies could have provided relevant information. Second, many sources of validity could not be analyzed because the studies provided scant or no data, such as response processes [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ], relationships with other variables [ 27 , 28 , 40 ], consequences of testing [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ], or invariance [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ] in the case of internal structure and interfactor R [ 27 , 28 , 30 , 40 , 46 ], internal replication [ 27 , 28 , 29 , 30 , 40 , 46 ] or the effect of the method [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ] in the case of dimensionality. In the future, it is hoped that authors will become aware of the importance of validity, as shown in this article and many others, and provide data on unreported sources so that comprehensive validity studies can be performed.

The present work also has several strengths. The search was extensive, and many studies were obtained using three different databases, including WoS, one of the most widely used and authoritative databases in the world. This database includes a large number and variety of articles and is not fully automated due to its human team [ 71 , 72 , 73 ]. In addition, to prevent publication bias, gray literature search engines such as Google Scholar were used to avoid the exclusion of unpublished research [ 44 ]. Finally, linguistic bias was prevented by not limiting the search to articles published in only one or two languages, thus avoiding the overrepresentation of studies in one language and underrepresentation in others [ 43 ].

Conclusions

Validity is understood as the degree to which tests and theory support the interpretations of instrument scores for their intended use [ 37 ]. From this perspective, the various validations of the P-CAT have not been presented within a structured, theory-based procedural framework such as the “Standards”. After integration and analysis of the results, it was observed that these validation reports offer a high number of sources of validity related to test content and to internal structure in terms of dimensionality and internal consistency; a moderate number of sources for internal structure in terms of test-retest reliability and for the relationship with other variables; and a very low number of sources for response processes, internal structure in terms of invariance, and test consequences.

Validity plays a fundamental role in ensuring a sound scientific basis for test interpretations because it provides evidence of the extent to which the data provided by the test are valid for the intended purpose. This can affect clinical practice as people’s health may depend on it. In this sense, the “Standards” are considered a suitable and valid theory-based procedural framework for studying this modern conception of questionnaire validity, which should be taken into account in future research in this area.

Although the P-CAT is one of the most widely used instruments for assessing PCC, as shown in this study, its validity has rarely been studied. The developers of measurement tests applied to the healthcare setting, on which the health and quality of life of many people may depend, should use this validity framework to reflect the clear purpose of the measurement. This approach is important because the equity of decision making by healthcare professionals in daily clinical practice may depend on the sources of validity. Through a more extensive study of validity that includes the interpretation of scores in terms of their intended use, the applicability of the P-CAT, an instrument that was initially developed for long-term care homes for elderly people, could be expanded to other care settings. However, the findings of this study show that validation studies continue to focus on traditionally studied types of validity and overlook the interpretation of scores in terms of their intended use.

Data availability

All data relevant to the study were included in the article or uploaded as additional files. Additional template data extraction forms are available from the corresponding author upon reasonable request.

Abbreviations

AERA: American Educational Research Association

APA: American Psychological Association

CFA: Confirmatory factor analysis

CLIOR: Organizational Climate

CPEAB: Caregiver Psychological Elder Abuse Behavior Scale

EFA: Exploratory factor analysis

ESEM: Exploratory structural equation modeling

IC: Staff-Based Measures of Individualized Care for Institutionalized Persons with Dementia

MBI: Maslach Burnout Inventory

MIMS: Multi-item measurement scale

ML: Maximum likelihood

NCME: National Council on Measurement in Education

P-CAT: Person-Centered Care Assessment Tool

PCC: Person-centered care

PCQ-S: Person-Centered Climate Questionnaire–Staff Version

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

PROSPERO: International Register of Systematic Review Protocols

“Standards”: Standards for Educational and Psychological Testing

WLSMV: Weighted least square mean and variance adjusted

WoS: Web of Science

Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy; 2001.

International Alliance of Patients’ Organizations. What is patient-centred healthcare? A review of definitions and principles. 2nd ed. London, UK: International Alliance of Patients’ Organizations; 2007.

World Health Organization. WHO global strategy on people-centred and integrated health services: interim report. Geneva, Switzerland: World Health Organization; 2015.

Britten N, Ekman I, Naldemirci Ö, Javinger M, Hedman H, Wolf A. Learning from Gothenburg model of person centred healthcare. BMJ. 2020;370:m2738.

Van Diepen C, Fors A, Ekman I, Hensing G. Association between person-centred care and healthcare providers’ job satisfaction and work-related health: a scoping review. BMJ Open. 2020;10:e042658.

Ekman N, Taft C, Moons P, Mäkitalo Å, Boström E, Fors A. A state-of-the-art review of direct observation tools for assessing competency in person-centred care. Int J Nurs Stud. 2020;109:103634.

American Geriatrics Society Expert Panel on Person-Centered Care. Person-centered care: a definition and essential elements. J Am Geriatr Soc. 2016;64:15–8.

McCormack B, McCance TV. Development of a framework for person-centred nursing. J Adv Nurs. 2006;56:472–9.

McCormack B, McCance T. Person-centred practice in nursing and health care: theory and practice. Chichester, England: Wiley; 2016.

Nolan MR, Davies S, Brown J, Keady J, Nolan J. Beyond person-centred care: a new vision for gerontological nursing. J Clin Nurs. 2004;13:45–53.

McCormack B, McCance T. Person-centred nursing: theory, models and methods. Oxford, UK: Wiley-Blackwell; 2010.

Abraha I, Rimland JM, Trotta FM, Dell’Aquila G, Cruz-Jentoft A, Petrovic M, et al. Systematic review of systematic reviews of non-pharmacological interventions to treat behavioural disturbances in older patients with dementia. The SENATOR-OnTop series. BMJ Open. 2017;7:e012759.

Anderson K, Blair A. Why we need to care about the care: a longitudinal study linking the quality of residential dementia care to residents’ quality of life. Arch Gerontol Geriatr. 2020;91:104226.

Bauer M, Fetherstonhaugh D, Haesler E, Beattie E, Hill KD, Poulos CJ. The impact of nurse and care staff education on the functional ability and quality of life of people living with dementia in aged care: a systematic review. Nurse Educ Today. 2018;67:27–45.

Smythe A, Jenkins C, Galant-Miecznikowska M, Dyer J, Downs M, Bentham P, et al. A qualitative study exploring nursing home nurses’ experiences of training in person centred dementia care on burnout. Nurse Educ Pract. 2020;44:102745.

McCormack B, Borg M, Cardiff S, Dewing J, Jacobs G, Janes N, et al. Person-centredness– the ‘state’ of the art. Int Pract Dev J. 2015;5:1–15.

Wilberforce M, Challis D, Davies L, Kelly MP, Roberts C, Loynes N. Person-centredness in the care of older adults: a systematic review of questionnaire-based scales and their measurement properties. BMC Geriatr. 2016;16:63.

Rathert C, Wyrwich MD, Boren SA. Patient-centered care and outcomes: a systematic review of the literature. Med Care Res Rev. 2013;70:351–79.

Sharma T, Bamford M, Dodman D. Person-centred care: an overview of reviews. Contemp Nurse. 2016;51:107–20.

Ahmed S, Djurkovic A, Manalili K, Sahota B, Santana MJ. A qualitative study on measuring patient-centered care: perspectives from clinician-scientists and quality improvement experts. Health Sci Rep. 2019;2:e140.

Edvardsson D, Fetherstonhaugh D, Nay R, Gibson S. Development and initial testing of the person-centered Care Assessment Tool (P-CAT). Int Psychogeriatr. 2010;22:101–8.

Tamagawa R, Groff S, Anderson J, Champ S, Deiure A, Looyis J, et al. Effects of a provincial-wide implementation of screening for distress on healthcare professionals’ confidence and understanding of person-centered care in oncology. J Natl Compr Canc Netw. 2016;14:1259–66.

Degl’ Innocenti A, Wijk H, Kullgren A, Alexiou E. The influence of evidence-based design on staff perceptions of a supportive environment for person-centered care in forensic psychiatry. J Forensic Nurs. 2020;16:E23–30.

Hulin CL. A psychometric theory of evaluations of item and scale translations: fidelity across languages. J Cross Cult Psychol. 1987;18:115–42.

Martínez T, Suárez-Álvarez J, Yanguas J, Muñiz J. Spanish validation of the person-centered Care Assessment Tool (P-CAT). Aging Ment Health. 2016;20:550–8.

Martínez T, Martínez-Loredo V, Cuesta M, Muñiz J. Assessment of person-centered care in gerontology services: a new tool for healthcare professionals. Int J Clin Health Psychol. 2020;20:62–70.

Rokstad AM, Engedal K, Edvardsson D, Selbaek G. Psychometric evaluation of the Norwegian version of the person-centred Care Assessment Tool. Int J Nurs Pract. 2012;18:99–105.

Sjögren K, Lindkvist M, Sandman PO, Zingmark K, Edvardsson D. Psychometric evaluation of the Swedish version of the person-centered Care Assessment Tool (P-CAT). Int Psychogeriatr. 2012;24:406–15.

Zhong XB, Lou VW. Person-centered care in Chinese residential care facilities: a preliminary measure. Aging Ment Health. 2013;17:952–8.

Tak YR, Woo HY, You SY, Kim JH. Validity and reliability of the person-centered Care Assessment Tool in long-term care facilities in Korea. J Korean Acad Nurs. 2015;45:412–9.

Brugnolli A, Debiasi M, Zenere A, Zanolin ME, Baggia M. The person-centered Care Assessment Tool in nursing homes: psychometric evaluation of the Italian version. J Nurs Meas. 2020;28:555–63.

Bru-Luna LM, Martí-Vilar M, Merino-Soto C, Livia J. Reliability generalization study of the person-centered Care Assessment Tool. Front Psychol. 2021;12:712582.

Edvardsson D, Innes A. Measuring person-centered care: a critical comparative review of published tools. Gerontologist. 2010;50:834–46.

Hawkins M, Elsworth GR, Nolte S, Osborne RH. Validity arguments for patient-reported outcomes: justifying the intended interpretation and use of data. J Patient Rep Outcomes. 2021;5:64.

Sireci SG. On the validity of useless tests. Assess Educ Princ Policy Pract. 2016;23:226–35.

Hawkins M, Elsworth GR, Osborne RH. Questionnaire validation practice: a protocol for a systematic descriptive literature review of health literacy assessments. BMJ Open. 2019;9:e030753.

American Educational Research Association, American Psychological Association. National Council on Measurement in Education. Standards for educational and psychological testing. Washington, DC: American Educational Research Association; 2014.

Padilla JL, Benítez I. Validity evidence based on response processes. Psicothema. 2014;26:136–44.

Hawkins M, Elsworth GR, Hoban E, Osborne RH. Questionnaire validation practice within a theoretical framework: a systematic descriptive literature review of health literacy assessments. BMJ Open. 2020;10:e035974.

Le C, Ma K, Tang P, Edvardsson D, Behm L, Zhang J, et al. Psychometric evaluation of the Chinese version of the person-centred Care Assessment Tool. BMJ Open. 2020;10:e031580.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int J Surg. 2021;88:105906.

Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, web of Science, and Google Scholar: strengths and weaknesses. FASEB J. 2008;22:338–42.

Grégoire G, Derderian F, Le Lorier J. Selecting the language of the publications included in a meta-analysis: is there a tower of Babel bias? J Clin Epidemiol. 1995;48:159–63.

Arias MM. Aspectos metodológicos Del metaanálisis (1). Pediatr Aten Primaria. 2018;20:297–302.

Covidence. Covidence systematic review software. Veritas Health Innovation, Australia. 2014. https://www.covidence.org/ . Accessed 28 Feb 2022.

Selan D, Jakobsson U, Condelius A. The Swedish P-CAT: modification and exploration of psychometric properties of two different versions. Scand J Caring Sci. 2017;31:527–35.

Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976). 2000;25:3186–91.

Guillemin F. Cross-cultural adaptation and validation of health status measures. Scand J Rheumatol. 1995;24:61–3.

Hambleton R, Merenda P, Spielberger C. Adapting educational and psychological tests for cross-cultural assessment. Mahwah, NJ: Lawrence Erlbaum Associates; 2005.

Muñiz J, Elosua P, Hambleton RK. International test commission guidelines for test translation and adaptation: second edition. Psicothema. 2013;25:151–7.


Acknowledgements

The authors thank the casual helpers for their aid in information processing and searching.

This work is one of the results of research project HIM/2015/017/SSA.1207, “Effects of mindfulness training on psychological distress and quality of life of the family caregiver”. Main researcher: Filiberto Toledano-Toledano Ph.D. The present research was funded by federal funds for health research and was approved by the Commissions of Research, Ethics and Biosafety (Comisiones de Investigación, Ética y Bioseguridad), Hospital Infantil de México Federico Gómez, National Institute of Health. The source of federal funds did not control the study design, data collection, analysis, or interpretation, or decisions regarding publication.

Author information

Authors and affiliations

Departamento de Educación, Facultad de Ciencias Sociales, Universidad Europea de Valencia, 46010, Valencia, Spain

Lluna Maria Bru-Luna

Departamento de Psicología Básica, Universitat de València, Blasco Ibáñez Avenue, 21, 46010, Valencia, Spain

Manuel Martí-Vilar

Departamento de Psicología, Instituto de Investigación de Psicología, Universidad de San Martín de Porres, Tomás Marsano Avenue 242, Lima 34, Perú

César Merino-Soto

Instituto Central de Gestión de la Investigación, Universidad Nacional Federico Villarreal, Carlos Gonzalez Avenue 285, 15088, San Miguel, Perú

José Livia-Segovia

Unidad de Investigación en Medicina Basada en Evidencias, Hospital Infantil de México Federico Gómez Instituto Nacional de Salud, Dr. Márquez 162, 06720, Doctores, Cuauhtémoc, Mexico

Juan Garduño-Espinosa & Filiberto Toledano-Toledano

Unidad de Investigación Multidisciplinaria en Salud, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, México-Xochimilco 289, Arenal de Guadalupe, 14389, Tlalpan, Mexico City, Mexico

Filiberto Toledano-Toledano

Dirección de Investigación y Diseminación del Conocimiento, Instituto Nacional de Ciencias e Innovación para la Formación de Comunidad Científica, INDEHUS, Periférico Sur 4860, Arenal de Guadalupe, 14389, Tlalpan, Mexico City, Mexico


Contributions

L.M.B.L. conceptualized the study, collected the data, performed the formal analysis, wrote the original draft, and reviewed and edited the subsequent drafts. M.M.V. conceptualized the study, collected the data, and reviewed and edited the subsequent drafts. C.M.S. collected the data, performed the formal analysis, wrote the original draft, and reviewed and edited the subsequent drafts. J.L.S. collected the data, wrote the original draft, and reviewed and edited the subsequent drafts. J.G.E. collected the data and reviewed and edited the subsequent drafts. F.T.T. conceptualized the study; provided resources, software, and supervision; wrote the original draft; and reviewed and edited the subsequent drafts.

Corresponding author

Correspondence to Filiberto Toledano-Toledano.

Ethics declarations

Ethics approval and consent to participate

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Commissions of Research, Ethics and Biosafety (Comisiones de Investigación, Ética y Bioseguridad) of the Hospital Infantil de México Federico Gómez, National Institute of Health (protocol HIM/2015/017/SSA.1207, “Effects of mindfulness training on psychological distress and quality of life of the family caregiver”).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Bru-Luna, L.M., Martí-Vilar, M., Merino-Soto, C. et al. Person-centered care assessment tool with a focus on quality healthcare: a systematic review of psychometric properties. BMC Psychol 12, 217 (2024). https://doi.org/10.1186/s40359-024-01716-7


Received: 17 May 2023

Accepted: 07 April 2024

Published: 19 April 2024

DOI: https://doi.org/10.1186/s40359-024-01716-7


Keywords: Person-centered care assessment tool


Inappropriate use of proton pump inhibitors in clinical practice globally: a systematic review and meta-analysis

  • Amit K Dutta1 (http://orcid.org/0000-0002-5111-7861)
  • Vishal Sharma2 (http://orcid.org/0000-0003-2472-3409)
  • Abhinav Jain3
  • Anshuman Elhence4
  • Manas K Panigrahi5
  • Srikant Mohta6
  • Richard Kirubakaran7
  • Mathew Philip8
  • Mahesh Goenka9 (http://orcid.org/0000-0003-1700-7543)
  • Shobna Bhatia10
  • Usha Dutta2 (http://orcid.org/0000-0002-9435-3557)
  • D Nageshwar Reddy11
  • Rakesh Kochhar12
  • Govind K Makharia4 (http://orcid.org/0000-0002-1305-189X)
  • 1 Gastroenterology, Christian Medical College and Hospital Vellore, Vellore, India
  • 2 Gastroenterology, Post Graduate Institute of Medical Education and Research, Chandigarh, India
  • 3 Gastroenterology, Gastro 1 Hospital, Ahmedabad, India
  • 4 Gastroenterology and Human Nutrition, All India Institute of Medical Sciences, New Delhi, India
  • 5 Gastroenterology, All India Institute of Medical Sciences - Bhubaneswar, Bhubaneswar, India
  • 6 Department of Gastroenterology, Narayana Superspeciality Hospital, Kolkata, India
  • 7 Center of Biostatistics and Evidence Based Medicine, Vellore, India
  • 8 Lisie Hospital, Cochin, India
  • 9 Apollo Gleneagles Hospital, Kolkata, India
  • 10 Gastroenterology, National Institute of Medical Science, Jaipur, India
  • 11 Asian Institute of Gastroenterology, Hyderabad, India
  • 12 Gastroenterology, Paras Hospitals, Panchkula, Chandigarh, India
  • Correspondence to Dr Amit K Dutta, Gastroenterology, Christian Medical College and Hospital Vellore, Vellore, Tamil Nadu, India; akdutta1995{at}gmail.com

https://doi.org/10.1136/gutjnl-2024-332154


Keywords: Proton pump inhibition; Meta-analysis

We read with interest the population-based cohort studies by Abrahami et al on proton pump inhibitors (PPI) and the risk of gastric and colon cancers. 1 2 PPI are used at all levels of healthcare and across different subspecialties for various indications. 3 4 A recent systematic review on the global trends and practices of PPI identified 28 million PPI users across 23 countries, suggesting that 23.4% of adults were using PPI. 5 Inappropriate use of PPI appears to be frequent, although compiled information on the prevalence of inappropriate PPI overuse is lacking. Hence, we conducted a systematic review and meta-analysis on the inappropriate overuse of PPI globally.


Overall, 79 studies, including 20 050 patients, reported on the inappropriate overuse of PPI and were included in this meta-analysis. The pooled proportion of inappropriate overuse of PPI was 0.60 (95% CI 0.55 to 0.65, I² = 97%, figure 1). The proportion of inappropriate overuse by dose was 0.17 (0.08 to 0.33) and by duration of use was 0.17 (0.07 to 0.35). Subgroup analysis was done to assess for heterogeneity (figure 2A). No significant differences in the pooled proportion of inappropriate overuse were noted based on the study design, setting (inpatient or outpatient), data source, human development index of the country, indication for use, sample size estimation, year of publication and study quality. However, regional differences were noted (p<0.01): Australia 40%, North America 56%, Europe 61%, Asia 62% and Africa 91% (figure 2B). The quality of studies was good in 27.8%, fair in 62.03% and low in 10.12%. 6


Figure 1: Forest plot showing inappropriate overuse of proton pump inhibitors.

Figure 2: (A) Subgroup analysis of inappropriate overuse of proton pump inhibitors (PPI). (B) Prevalence of inappropriate overuse of PPI across different countries of the world. NA, data not available.
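For readers who want to see how a pooled proportion of this kind is typically obtained, the sketch below logit-transforms each study's proportion, pools the estimates with a DerSimonian-Laird random-effects model, and back-transforms the result, also reporting the I² heterogeneity statistic. This is a minimal illustration with hypothetical study counts, not the authors' code or data; the letter does not state which software was used.

```python
# Minimal sketch of random-effects pooling of proportions (DerSimonian-Laird,
# logit scale). Study counts below are hypothetical, for illustration only.
import numpy as np

def pool_proportions(events, totals):
    """Pool study proportions on the logit scale; return estimate, 95% CI, I^2."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    p = events / totals
    yi = np.log(p / (1 - p))                 # logit-transformed proportion per study
    vi = 1 / events + 1 / (totals - events)  # approximate variance of the logit

    # Fixed-effect weights and Cochran's Q for heterogeneity
    w = 1 / vi
    theta_fe = np.sum(w * yi) / np.sum(w)
    Q = np.sum(w * (yi - theta_fe) ** 2)
    df = len(yi) - 1

    # DerSimonian-Laird between-study variance tau^2 and I^2
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

    # Random-effects pooled estimate and 95% CI, back-transformed to a proportion
    w_re = 1 / (vi + tau2)
    theta = np.sum(w_re * yi) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    inv_logit = lambda x: 1 / (1 + np.exp(-x))
    return (inv_logit(theta), inv_logit(theta - 1.96 * se),
            inv_logit(theta + 1.96 * se), I2)

# Hypothetical counts of inappropriate PPI prescriptions per study
est, lo, hi, i2 = pool_proportions([120, 300, 55], [200, 480, 100])
print(f"pooled proportion {est:.2f} (95% CI {lo:.2f} to {hi:.2f}), I^2 = {i2:.0f}%")
```

Applied to 79 study-level proportions, a procedure of this kind yields a pooled estimate with its confidence interval and the I² statistic of the sort reported above.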

This is the first systematic review and meta-analysis on the global inappropriateness of PPI prescribing. The results of this meta-analysis are concerning and suggest that about 60% of PPI prescriptions in clinical practice do not have a valid indication. The overuse of PPI appears to be a global problem occurring across all age groups, including geriatric subjects (63%). Overprescription increases the patient’s cost, pill burden and risk of adverse effects. 7–9 The heterogeneity in the outcome data persisted after subgroup analysis. Hence, this may be inherent to the practice of PPI use rather than related to factors such as study design, setting or study quality.

Several factors (both physician and patient-related) may contribute to the high magnitude of PPI overuse. These include a long list of indications for use, availability of the drug ‘over the counter’, an exaggerated sense of safety, and lack of awareness about the correct indications, dose and duration of therapy. A recently published guideline makes detailed recommendations on the accepted indications for the use of PPI, including the dose and duration, and further such documents may help to promote its rational use. 3 Overall, there is a need for urgent adoption of PPI stewardship practices, as is done for antibiotics. Apart from avoiding prescription when there is no indication, effective deprescription strategies are also required. 10 We hope the result of the present systematic review and meta-analysis will create awareness about the current situation and translate into a change in clinical practice globally.

Ethics statements

Patient consent for publication

Not applicable.

Ethics approval

References

  • Abrahami D, McDonald EG, Schnitzer ME, et al
  • Jearth V, et al
  • Malfertheiner P, Megraud F, Rokkas T, et al
  • Shanika LGT, Reynolds A, Pattison S, et al
  • O’Connell D, et al
  • Choudhury A, Gillis KA, Lees JS, et al
  • Paynter S, et al
  • Targownik LE, Fisher DA

X @drvishal82

Contributors AKD: concept, study design, data acquisition and interpretation, drafting the manuscript and approval of the manuscript. VS: study design, data acquisition, analysis and interpretation, drafting the manuscript and approval of the manuscript. AJ, AE, MKP, SM: data acquisition and interpretation, critical revision of the manuscript, and approval of the manuscript. RK: study design, data analysis and interpretation, critical revision of the manuscript and approval of the manuscript. MP, MG, SB, UD, DNR, RK: data interpretation, critical revision of the manuscript and approval of the manuscript. GKM: concept, study design, data interpretation, drafting the manuscript, critical revision and approval of the manuscript.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Provenance and peer review Not commissioned; internally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.


Continuous Assessment and Improvement of Software Quality with DevOps-Based Hybrid Model of Automation Tools

  • Published: 30 September 2023
  • Volume 62, pages 412–419 (2023)

Poonam Narang & Pooja Mittal

Software development strategies have progressed from the classic waterfall model to the more recent DevOps culture. This journey through various methodologies covers many development models, from waterfall, spiral, prototype and agile to the continuous phases of DevOps, which improve software quality and productivity to a much greater extent. DevOps employs different tools at each phase to automate development and operations tasks. The existence of many tools necessitates a development process that incorporates a DevOps-based hybrid model of an integrated automation tool chain (ITC). The goal of this research is to compare the performance of an already proposed and implemented DevOps-based hybrid model with another, randomly selected DevOps-based hybrid model built from a different tool chain. This research will help software developers and industrialists choose the finest ITCs from a plethora of alternatives that not only speed up the development process but also offer quality. For further study, proposals and implementations of other DevOps-based hybrid models for different ITCs can be designed.
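To make the idea of comparing tool chains concrete, here is a hypothetical sketch of how two candidate ITCs could be scored on common DevOps delivery metrics (build time, deployment frequency, change failure rate). The tool names, metric values, scoring function and weights are illustrative assumptions, not figures or methods from the paper.

```python
# Hypothetical comparison of two integrated automation tool chains (ITCs)
# on simple DevOps delivery metrics. All names and numbers are assumptions.
from dataclasses import dataclass

@dataclass
class ToolChain:
    name: str
    build_minutes: float        # average CI build + test time
    deploys_per_week: float     # deployment frequency
    change_failure_rate: float  # fraction of deployments causing a failure

def score(tc: ToolChain) -> float:
    """Higher is better: fast builds, frequent deploys, few failed changes."""
    return (1 / tc.build_minutes) * 10 + tc.deploys_per_week - tc.change_failure_rate * 20

itc_a = ToolChain("Git+Jenkins+Docker", build_minutes=12, deploys_per_week=9, change_failure_rate=0.10)
itc_b = ToolChain("Git+GitLab CI+Kubernetes", build_minutes=8, deploys_per_week=14, change_failure_rate=0.15)

best = max((itc_a, itc_b), key=score)
print(f"{itc_a.name}: {score(itc_a):.2f}")
print(f"{itc_b.name}: {score(itc_b):.2f}")
print(f"preferred ITC under these weights: {best.name}")
```

Under these illustrative weights, the faster, more frequently deploying pipeline wins despite its higher change failure rate; a real comparison would use measured data from the pipelines under study.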



The authors received no funding.

Author information

Authors and affiliations

Department of Computer Science & Applications, Maharshi Dayanand University, 124001, Rohtak, Haryana, India

Poonam Narang & Pooja Mittal


Corresponding author

Correspondence to Poonam Narang.

Ethics declarations

The authors declare that they have no conflicts of interest.

Additional information

Publisher’s note

Pleiades Publishing remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Narang, P., Mittal, P. Continuous Assessment and Improvement of Software Quality with DevOps-Based Hybrid Model of Automation Tools. J. Comput. Syst. Sci. Int. 62, 412–419 (2023). https://doi.org/10.1134/S1064230723020144


Received: 20 July 2022

Revised: 25 November 2022

Accepted: 06 December 2022

Published: 30 September 2023

Issue Date: April 2023

DOI: https://doi.org/10.1134/S1064230723020144


Keywords: Automation Tool; Hybrid Model; Software Development


COMMENTS

  1. Full article: Continuous assessment fit for purpose? Analysing the

    Regardless of academics' acknowledging that the "most recent version of the assessment policy attempts to unpack continuous assessment ... Coursework versus examinations in end-of-module assessment: A literature review. Assessment & Evaluation in Higher Education, 40(3), 439-455.

  2. Continuous assessment for improved teaching and learning: a ...

    Continuous assessment for improved teaching and learning: a critical review to inform policy and practice. programme and meeting document. Corporate author. UNESCO International Bureau of Education; Person as author. Muskin, Joshua A. Document code. IBE/2017/WP/CD/13; Collation.

  3. A Literature Review of Assessment

Assessment is frequently described as a four-step process.1,7 Learning outcomes or objectives are developed first. Second, assessment tools are created to measure how well the student has learned. The third step is the actual teaching and learning process, during and after which assessment tools can be administered.

  4. PDF Literature Overview on Summative Quizzes and Continuous Assessment

Hernández (2012) presented a literature review that outlines the distinction between assessment ("about grading and reporting student achievements and about supporting students in their learning") and feedback; how continuous assessment generally combines both formative and summative functions; and the confusion in definitions of these two ...

  5. PDF Assessment of the implementation of continuous assessment: the case of

Literature Review What is assessment? Scholars, depending on what they want to emphasize, hold different views of educational assessment. ... continuous assessment system involves class tests, class exercises and homework and no attention is given to project work, which is the most important learning medium that allows pupils to take ...

  6. PDF The Essence 1 of Continuous Assessment

    Continuous assessment is formative by nature. The key here is that the collection of data about students' understanding of concepts, and their practice of the processes and habits of mind of science happens while the students are engaged in learning. When these data are used by teachers to make decisions about next steps for a student or ...

  7. Authentic and Continuous Assessment During the Pandemic ...

    2.3 A Case for Authentic and Continuous Assessment (ACA) The literature on AA and CA shows relevant similarities. First, they are both considered to be beneficial for student involvement, understanding, and inclusion. ... the three teacher respondents referred to the same need for change in assessment as reported in the literature review ...

  8. Does continuous assessment in higher education support ...

    A distinction is often made in the literature about "assessment of learning" and "assessment for learning" attributing a formative function to the latter while the former takes a summative function. While there may be disagreements among researchers and educators about such categorical distinctions there is consensus that both types of assessment are often used concurrently in higher ...

  9. Explaining individual student success using continuous assessment types

    Continuous assessment refers to the use of one or several assessments during the course period, instead of a single final exam in the last weeks of the semester. ... Economics of Education Review, 30, ... van den Bogaard, M. (2012). Explaining student success in engineering education at Delft University of Technology: A literature synthesis ...

  10. Assessment of clinical competence in competency-based education

    Objective. The purpose of this review is to explore the literature on continuous assessment in the evaluation of clinical competence, to examine the variables influencing the assessment of clinical competence, and to consider the impact of high-stakes summative assessment practices on student experiences, learning, and achievement.

  11. PDF An Empirical Analysis of the Impact of Continuous Assessment on the

    Abstract: Since the Bologna Process was adopted, continuous assessment has been a cornerstone in the curriculum of most of the courses in the different degrees offered by the Spanish Universities. Continuous assessment plays an important role in both students' and lecturers' academic lives. In this study, we analyze the effect of the ...

  12. The assessment cycle: Insights from a systematic literature review on

    As part of this larger study, we completed a literature map (J. S. London et al., 2020) that illuminated a subset of literature broadly related to the assessment and evaluation of BPE-related efforts, where Black Americans were the focus or among the participants in the article. This review is scoped to this subset of literature.

  13. An Empirical Analysis of the Impact of Continuous Assessment on the

    Since the Bologna Process was adopted, continuous assessment has been a cornerstone in the curriculum of most of the courses in the different degrees offered by the Spanish Universities. Continuous assessment plays an important role in both students' and lecturers' academic lives. In this study, we analyze the effect of the continuous assessment on the performance of the students in their ...

  14. The power of assessment feedback in teaching and learning: a ...

    From the comprehensive review of the literature, the concept of assessment feedback and how it contributes to school effectiveness is thoroughly discussed. The article presents assessment feedback as a valuable factor for educators and students seeking to ensure continuous school improvement.

  15. Implementing continuous assessment in an academic English writing

    Although continuous assessment is often used in different disciplines, its implementation in L2 writing is underexplored. Adopting the framework of learning-oriented assessment (Carless, 2015) and the concept of affordances (Gibson, 1986), this exploratory study examined a case study of how one teacher and her students perceived and shaped/utilized the motivational and learning affordances of ...

  16. (PDF) CONTINUOUS ASSESSMENT 2019

    Continuous assessment entails careful keeping of records on the pupils, continuously and systematically. It takes into consideration the termly or periodic performances of students in assignments ...

  17. Full article: Rethinking online assessment from university students

    2. Literature review. Chickering and Gamson suggested seven principles of good practice for teaching and learning in undergraduate education (Chickering & Gamson, Citation 1987), including encouraging contacts between students and faculty, developing cooperation among students, encouraging active learning, giving prompt feedback, emphasizing time on task, communicating high expectations, and ...

  18. PDF An Assessment of the Implementation of Continuous Assessment Learning

    Gama, S. (2022). An Assessment of the Implementation of Continuous Assessment Learning Activity in Secondary Schools in Chirumanzu District, Zimbabwe: A Case of Hama High School. Indiana Journal of Arts & Literature, 3(3), 23-26. : The introduction of Continuous Assessment Learning Activity (CALA) was considered by National

  19. (PDF) The role of teachers in continuous assessment: a model for

    Findings In chapter 2, the researcher has done literature review on continuous assessment. Some of the findings concur with cases stated in the literature review process. One such concurrence is the call to update and adapt assessment practices constantly to meet the changing needs of the societies. As stated in chapter 3 the data in raw form ...

  20. Pupils' role in continuous assessment

    Literature shows that self- and peer-assessment are largely adopted in assessment practice that applies principles from the constructivists' learning theory (Pollard et al., 2005). Self- and peer- assessment when applied in classrooms can foster improvement of all pupils, including those who record lower attainments in class.

  21. Writing an effective literature review

Mapping the gap. The purpose of the literature review section of a manuscript is not to report what is known about your topic. The purpose is to identify what remains unknown—what academic writing scholar Janet Giltrow has called the 'knowledge deficit'—thus establishing the need for your research study. In an earlier Writer's Craft instalment, the Problem-Gap-Hook heuristic was ...

  22. Designing feedback processes in the workplace-based learning of

A scoping review was conducted using the five-step methodological framework proposed by Arksey and O'Malley (2005), intertwined with the PRISMA checklist extension for scoping reviews to provide reporting guidance for this specific type of knowledge synthesis. Scoping reviews allow us to study the literature without restricting the methodological quality of the studies found ...

  23. PDF Safety Culture Assessment and Continuous Improvement in Aviation: A

    In support of FAA efforts to promote a positive safety culture, this report provides a review of the literature on safety culture assessment and promotion. Continuing research will focus on developing assessment tools and identifying methods and areas of continuous improvement. Methods. Literature Review Resources.

  24. A scoping review of continuous quality improvement in healthcare system

    The growing adoption of continuous quality improvement (CQI) initiatives in healthcare has generated a surge in research interest to gain a deeper understanding of CQI. However, comprehensive evidence regarding the diverse facets of CQI in healthcare has been limited. Our review sought to comprehensively grasp the conceptualization and principles of CQI, explore existing models and tools ...

  25. Person-centered care assessment tool with a focus on quality healthcare

    The person-centered care (PCC) approach plays a fundamental role in ensuring quality healthcare. The Person-Centered Care Assessment Tool (P-CAT) is one of the shortest and simplest tools currently available for measuring PCC. The objective of this study was to conduct a systematic review of the evidence in validation studies of the P-CAT, taking the "Standards" as a frame of reference.

  26. Inappropriate use of proton pump inhibitors in clinical practice

    We read with interest the population-based cohort studies by Abrahami et al on proton pump inhibitors (PPI) and the risk of gastric and colon cancers.1 2 PPI are used at all levels of healthcare and across different subspecialties for various indications.3 4 A recent systematic review on the global trends and practices of PPI recognised 28 million PPI users from 23 countries, suggesting that ...

  27. Continuous Assessment and Improvement of Software Quality ...

    Abstract Software development strategies have progressed from classic waterfall to more recent DevOps culture. This journey through various methodologies covers many development models from waterfall, spiral, prototype and agile to the continuous phases of DevOps, that improve software quality and productivity to a much greater extent. DevOps employs different tools at each phase to automate ...

  28. Assessment of genetically modified maize MON 94804 (application GMFF

    The GMO Panel assessed the applicant's literature searches on maize MON 94804, which include a scoping review, according to the guidelines given in EFSA (2010; 2019b). A systematic review as referred to in Regulation (EU) No 503/2013 has not been provided in support to the risk assessment of application GMFF-2022-10651.