
Quality improvement into practice

  • Adam Backhouse, quality improvement programme lead 1
  • Fatai Ogunlayi, public health specialty registrar 2
  • 1 North London Partners in Health and Care, Islington CCG, London N1 1TH, UK
  • 2 Institute of Applied Health Research, Public Health, University of Birmingham, B15 2TT, UK
  • Correspondence to: A Backhouse adam.backhouse{at}nhs.net

What you need to know

Thinking of quality improvement (QI) as a principle-based approach to change provides greater clarity about (a) the contribution QI offers to staff and patients, (b) how to differentiate it from other approaches, and (c) the benefits of using QI together with other change approaches

QI is not a silver bullet for all changes required in healthcare: it has great potential to be used together with other change approaches, either concurrently (using audit to inform iterative tests of change) or consecutively (using QI to adapt published research to local context)

As QI becomes established, opportunities for these collaborations will grow, to the benefit of patients.

The benefits to front line clinicians of participating in quality improvement (QI) activity are promoted in many health systems. QI can represent a valuable opportunity for individuals to be involved in leading and delivering change, from improving individual patient care to transforming services across complex health and care systems. 1

However, it is not clear that this promotion of QI has created greater understanding of QI or widespread adoption. QI largely remains an activity undertaken by experts and early adopters, often in isolation from their peers. 2 There is a danger of a widening gap between this group and the majority of healthcare professionals.

This article explains what QI is, where it fits with other approaches to improving care (such as audit or research), and when a QI approach is best used, making it easier for those new to QI to understand its relevance and usefulness in delivering better outcomes for patients.

How this article was made

AB and FO are both specialist quality improvement practitioners and have developed their expertise working in QI roles for a variety of UK healthcare organisations. The analysis presented here arose from AB and FO’s observations of the challenges faced when introducing QI, with healthcare providers often unable to distinguish between QI and other change approaches, making it difficult to understand what QI can do for them.

How is quality improvement defined?

There are many definitions of QI (box 1). The BMJ’s Quality Improvement series uses the Academy of Medical Royal Colleges definition. 6 Rather than viewing QI as a single method or set of tools, it can be more helpful to think of QI as based on a set of principles common to many of these definitions: a systematic continuous approach that aims to solve problems in healthcare, improve service provision, and ultimately provide better outcomes for patients.

Definitions of quality improvement

Improvement in patient outcomes, system performance, and professional development that results from a combined, multidisciplinary approach in how change is delivered. 3

The delivery of healthcare with improved outcomes and lower cost through continuous redesigning of work processes and systems. 4

Using a systematic change method and strategies to improve patient experience and outcome. 5

To make a difference to patients by improving safety, effectiveness, and experience of care by using understanding of our complex healthcare environment, applying a systematic approach, and designing, testing, and implementing changes using real time measurement for improvement. 6

In this article we discuss QI as an approach to improving healthcare that follows the principles outlined in box 2; this may be a useful reference to consider how particular methods or tools could be used as part of a QI approach.

Principles of QI

Primary intent— To bring about measurable improvement to a specific aspect of healthcare delivery, often with evidence or theory of what might work but requiring local iterative testing to find the best solution. 7

Employing an iterative process of testing change ideas— Adopting a theory of change which emphasises a continuous process of planning and testing changes, studying and learning from comparing the results to a predicted outcome, and adapting hypotheses in response to results of previous tests. 8 9

Consistent use of an agreed methodology— Many different QI methodologies are available; commonly cited methodologies include the Model for Improvement, Lean, Six Sigma, and Experience-based Co-design. 4 Systematic review shows that the choice of tools or methodologies has little impact on the success of QI provided that the chosen methodology is followed consistently. 10 Though there is no formal agreement on what constitutes a QI tool, the term is generally understood to include activities such as process mapping that can be used within a range of QI methodological approaches. NHS Scotland’s Quality Improvement Hub has a glossary of commonly used tools in QI. 11

Empowerment of front line staff and service users— QI work should engage staff and patients by providing them with the opportunity and skills to contribute to improvement work. Recognition of this need often manifests in drives from senior leadership or management to build QI capability in healthcare organisations, but it also requires that frontline staff and service users feel able to make use of these skills and take ownership of improvement work. 12

Using data to drive improvement— To drive decision making by measuring the impact of tests of change over time and understanding variation in processes and outcomes. Measurement for improvement typically prioritises this narrative approach over concerns around exactness and completeness of data. 13 14

Scale-up and spread, with adaptation to context— As interventions tested using a QI approach are scaled up and the degree of belief in their efficacy increases, it is desirable that they spread outward and be adopted by others. Key to successful diffusion of improvement is the adaptation of interventions to new environments, patient and staff groups, available resources, and even personal preferences of healthcare providers in surrounding areas, again using an iterative testing approach. 15 16
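The "using data to drive improvement" principle is often put into practice with run charts, which apply simple rules to distinguish genuine change from random variation. As a minimal sketch only (the daily session counts, the function name, and the six-point "shift" threshold are illustrative assumptions, not taken from this article), the commonly used shift rule can be expressed as:

```python
# A run chart "shift" rule: six or more consecutive points all above
# (or all below) the baseline median suggest a non-random change.
# Points falling exactly on the median break the run, per convention.
import statistics

def detect_shift(values, baseline_median, run_length=6):
    """Return True if `values` contains a run of `run_length` or more
    consecutive points strictly above or strictly below the median."""
    run = 0
    last_side = 0  # +1 above the median, -1 below, 0 on it
    for v in values:
        side = 1 if v > baseline_median else (-1 if v < baseline_median else 0)
        if side != 0 and side == last_side:
            run += 1
        else:
            run = 1 if side != 0 else 0
        last_side = side
        if run >= run_length:
            return True
    return False

# Illustrative daily counts of completed therapy sessions.
baseline = [4, 5, 3, 5, 4, 6, 4, 5]   # before the test of change
after = [6, 7, 6, 8, 7, 7, 6, 8]      # after the test of change

median = statistics.median(baseline)   # 4.5
print(detect_shift(after, median))     # all 8 points above the median -> True
```

Note that the rule tolerates rough-and-ready data: it needs only daily counts plotted against a baseline median, consistent with the "good enough" standard of measurement for improvement described above.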

What other approaches to improving healthcare are there?

Taking considered action to change healthcare for the better is not new, but QI as a distinct approach to improving healthcare is a relatively recent development. There are many well established approaches to evaluating and making changes to healthcare services in use, and QI will only be adopted more widely if it offers a new perspective or an advantage over other approaches in certain situations.

A non-systematic literature scan identified the following other approaches for making change in healthcare: research, clinical audit, service evaluation, and clinical transformation. We also identified innovation as an important catalyst for change, but we did not consider it an approach to evaluating and changing healthcare services so much as a catch-all term for describing the development and introduction of new ideas into the system. A summary of the different approaches and their definitions is shown in box 3. Many have elements in common with QI, but there are important differences in both intent and application. To be useful to clinicians and managers, QI must find a role within healthcare that complements research, audit, service evaluation, and clinical transformation while retaining the core principles that differentiate it from these approaches.

Alternatives to QI

Research— The attempt to derive generalisable new knowledge by addressing clearly defined questions with systematic and rigorous methods. 17

Clinical audit— A way to find out if healthcare is being provided in line with standards and to let care providers and patients know where their service is doing well, and where there could be improvements. 18

Service evaluation— A process of investigating the effectiveness or efficiency of a service with the purpose of generating information for local decision making about the service. 19

Clinical transformation— An umbrella term for more radical approaches to change; a deliberate, planned process to make dramatic and irreversible changes to how care is delivered. 20

Innovation— To develop and deliver new or improved health policies, systems, products and technologies, and services and delivery methods that improve people’s health. Health innovation responds to unmet needs by employing new ways of thinking and working. 21

Why do we need to make this distinction for QI to succeed?

Improvement in healthcare is 20% technical and 80% human. 22 Essential to that 80% is clear communication, clarity of approach, and a common language. Without this shared understanding of QI as a distinct approach to change, QI work risks straying from the core principles outlined above, making it less likely to succeed. If practitioners cannot communicate clearly with their colleagues about the key principles and differences of a QI approach, there will be mismatched expectations about what QI is and how it is used, lowering the chance that QI work will be effective in improving outcomes for patients. 23

There is also a risk that the language of QI is adopted to describe change efforts regardless of their fidelity to a QI approach, either due to a lack of understanding of QI or a lack of intention to carry it out consistently. 9 Poor fidelity to the core principles of QI reduces its effectiveness and makes its desired outcome less likely, leading to wasted effort by participants and decreasing its credibility. 2 8 24 This in turn further widens the gap between advocates of QI and those inclined to scepticism, and may lead to missed opportunities to use QI more widely, consequently leading to variation in the quality of patient care.

Without articulating the differences between QI and other approaches, there is a risk of not being able to identify where a QI approach can best add value. Conversely, we might be tempted to see QI as a “silver bullet” for every healthcare challenge when a different approach may be more effective. In reality it is not clear that QI will be fit for purpose in tackling all of the wicked problems of healthcare delivery and we must be able to identify the right tool for the job in each situation. 25 Finally, while different approaches will be better suited to different types of challenge, not having a clear understanding of how approaches differ and complement each other may mean missed opportunities for multi-pronged approaches to improving care.

What is the relationship between QI and other approaches such as audit?

Academic journals, healthcare providers, and “arm’s-length bodies” have made various attempts to distinguish between the different approaches to improving healthcare. 19 26 27 28 However, most comparisons do not include QI or compare QI to only one or two of the other approaches. 7 29 30 31 To make it easier for people to use QI approaches effectively and appropriately, we summarise the similarities, differences, and crossover between QI and other approaches to tackling healthcare challenges (fig 1).

Fig 1

How quality improvement interacts with other approaches to improving healthcare


QI and research

Research aims to generate new generalisable knowledge, while QI typically involves a combination of generating new knowledge or implementing existing knowledge within a specific setting. 32 Unlike research, including pragmatic research designed to test effectiveness of interventions in real life, QI does not aim to provide generalisable knowledge. In common with QI, research requires a consistent methodology. This method is typically used, however, to prove or disprove a fixed hypothesis rather than the adaptive hypotheses developed through the iterative testing of ideas typical of QI. Both research and QI are interested in the environment where work is conducted, though with different intentions: research aims to eliminate or at least reduce the impact of many variables to create generalisable knowledge, whereas QI seeks to understand what works best in a given context. The rigour of data collection and analysis required for research is much higher; in QI a criterion of “good enough” is often applied.

Relationship with QI

Though the goal of clinical research is to develop new knowledge that will lead to changes in practice, much has been written on the lag time between publication of research evidence and system-wide adoption, leading to delays in patients benefitting from new treatments or interventions. 33 QI offers a way to iteratively test the conditions required to adapt published research findings to the local context of individual healthcare providers, generating new knowledge in the process. Areas with little existing knowledge requiring further research may be identified during improvement activities, which in turn can form research questions for further study. QI and research also intersect in the field of improvement science, the academic study of QI methods which seeks to ensure QI is carried out as effectively as possible. 34

Scenario: QI for translational research

Newly published research shows that a particular physiotherapy intervention is more clinically effective when delivered in short, twice-daily bursts rather than longer, less frequent sessions. A team of hospital physiotherapists wish to implement the change but are unclear how they will manage the shift in workload and how they should introduce this potentially disruptive change to staff and to patients.

Before continuing reading think about your own practice— How would you approach this situation, and how would you use the QI principles described in this article?

Adopting a QI approach, the team realise that, although the change they want to make is already determined, the way in which it is introduced and adapted to their wards is for them to decide. They take time to explain the benefits of the change to colleagues and their current patients, and ask patients how they would best like to receive their extra physiotherapy sessions.

The change is planned and tested for two weeks with one physiotherapist working with a small number of patients. Data are collected each day, including reasons why sessions were missed or refused. The team review the data each day and make iterative changes to the physiotherapist’s schedule, and to the times of day the sessions are offered to patients. Once an improvement is seen, this new way of working is scaled up to all of the patients on the ward.

The findings of the work are fed into a service evaluation of physiotherapy provision across the hospital, which uses the findings of the QI work to make recommendations about how physiotherapy provision should be structured in the future. People feel more positive about the change because they know colleagues who have already made it work in practice.

QI and clinical audit

Clinical audit is closely related to QI: it is often used with the intention of iteratively improving the standard of healthcare, albeit in relation to a pre-determined standard of best practice. 35 When used iteratively, interspersed with improvement action, the clinical audit cycle adheres to many of the principles of QI. However, in practice clinical audit is often used by healthcare organisations as an assurance function, making it less likely to be carried out with a focus on empowering staff and service users to make changes to practice. 36 Furthermore, academic reviews of audit programmes have shown audit to be an ineffective approach to improving quality due to a focus on data collection and analysis without a well developed approach to the action section of the audit cycle. 37 Clinical audits, such as those in the UK’s National Clinical Audit and Patient Outcomes Programme (NCAPOP), often focus on the management of specific clinical conditions. QI can focus on any part of service delivery and can take a more cross-cutting view which may identify issues and solutions that benefit multiple patient groups and pathways. 30

Audit is often the first step in a QI process and is used to identify improvement opportunities, particularly where compliance with known standards for high quality patient care needs to be improved. Audit can be used to establish a baseline and to analyse the impact of tests of change against the baseline. Also, once an improvement project is under way, audit may form part of rapid cycle evaluation, during the iterative testing phase, to understand the impact of the idea being tested. Regular clinical audit may be a useful assurance tool to help track whether improvements have been sustained over time.

Scenario: Audit and QI

A foundation year 2 (FY2) doctor is asked to complete an audit of a pre-surgical pathway by looking retrospectively through patient documentation. She concludes that adherence to best practice is mixed and recommends: “Remind the team of the importance of being thorough in this respect and re-audit in 6 months.” The results are presented at an audit meeting, but a re-audit a year later by a new FY2 doctor shows similar results.

Before continuing reading think about your own practice— How would you approach this situation, and how would you use the QI principles described in this paper?

Contrast the above with a team-led, rapid cycle audit in which everyone contributes to collecting and reviewing data from the previous week, discussed at a regular team meeting. Though surgical patients are often transient, their experience of care and ideas for improvement are captured during discharge conversations. The team identify and test several iterative changes to care processes. They document and test these changes between audits, leading to sustainable change. Some of the surgeons involved work across multiple hospitals, and spread some of the improvements, with the audit tool, as they go.

QI and service evaluation

In practice, service evaluation is not subject to the same rigorous definition or governance as research or clinical audit, meaning that there are inconsistencies in the methodology for carrying it out. While the primary intent for QI is to make change that will drive improvement, the primary intent for evaluation is to assess the performance of current patient care. 38 Service evaluation may be carried out proactively to assess a service against its stated aims or to review the quality of patient care, or may be commissioned in response to serious patient harm or red flags about service performance. The purpose of service evaluation is to help local decision makers determine whether a service is fit for purpose and, if necessary, identify areas for improvement.

Service evaluation may be used to initiate QI activity by identifying opportunities for change that would benefit from a QI approach. It may also evaluate the impact of changes made using QI, either during the work or after completion to assess sustainability of improvements made. Though likely planned as separate activities, service evaluation and QI may overlap and inform each other as they both develop. Service evaluation may also make a judgment about a service’s readiness for change and identify any barriers to, or prerequisites for, carrying out QI.

QI and clinical transformation

Clinical transformation involves radical, dramatic, and irreversible change—the sort of change that cannot be achieved through continuous improvement alone. As with service evaluation, there is no consensus on what clinical transformation entails, and it may be best thought of as an umbrella term for the large scale reform or redesign of clinical services and the non-clinical services that support them. 20 39 While it is possible to carry out transformation activity that uses elements of a QI approach, such as effective engagement of the staff and patients involved, QI, which rests on iterative tests of change, cannot itself deliver the one-off, irreversible change that defines transformation.

There is opportunity to use QI to identify and test ideas before full scale clinical transformation is implemented. This has the benefit of engaging staff and patients in the clinical transformation process and increasing the degree of belief that clinical transformation will be effective or beneficial. Transformation activity, once completed, could be followed up with QI activity to drive continuous improvement of the new process or allow adaptation of new ways of working. As interventions made using QI are scaled up and spread, the line between QI and transformation may seem to blur. The shift from QI to transformation occurs when the intention of the work shifts away from continuous testing and adaptation into the wholesale implementation of an agreed solution.

Scenario: QI and clinical transformation

An NHS trust’s human resources (HR) team is struggling to manage its junior doctor placements, rotas, and on-call duties, which is causing tension and has led to concern about medical cover and patient safety out of hours. A neighbouring trust has launched a smartphone app that supports clinicians and HR colleagues to manage these processes with great success.

This problem feels ripe for a transformation approach—to launch the app across the trust, confident that it will solve the trust’s problems.

Before continuing reading think about your own organisation— What do you think will happen, and how would you use the QI principles described in this article for this situation?

Outcome without QI

Unfortunately, the HR team haven’t taken the time to understand the underlying problems with their current system, which revolve around poor communication and a lack of clarity from the HR team: staff do not know who to contact and cannot get their questions answered. HR assume that because the app has been a success elsewhere, it will work here as well.

People get excited about the new app and the benefits it will bring, but no consideration is given to the processes and relationships that need to be in place to make it work. The app is launched with a high profile campaign and adoption is high, but the same issues continue. The HR team are confused as to why things didn’t work.

Outcome with QI

Although the app has worked elsewhere, rolling it out without adapting it to the local context is a risk, one that applying QI principles can mitigate.

HR pilot the app in a volunteer specialty after spending time speaking to clinicians to better understand their needs. They carry out several tests of change, ironing out issues with the process as they go, using issues logged and clinician feedback as a source of data. When they are confident the app works for them, they expand out to a directorate, a division, and finally the transformational step of an organisation-wide rollout can be taken.

Education into practice

Next time you are faced with what looks like a quality improvement (QI) opportunity, consider asking:

How do you know that QI is the best approach to this situation? What else might be appropriate?

Have you considered how to ensure you implement QI according to the principles described above?

Is there opportunity to use other approaches in tandem with QI for a more effective result?

How patients were involved in the creation of this article

This article was conceived and developed in response to conversations with clinicians and patients working together on co-produced quality improvement and research projects in a large UK hospital. The first iteration of the article was reviewed by an expert patient, and, in response to their feedback, we have sought to make clearer the link between understanding the issues raised and better patient care.

Contributors: This work was initially conceived by AB. AB and FO were responsible for the research and drafting of the article. AB is the guarantor of the article.

Competing interests: We have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.

Provenance and peer review: This article is part of a series commissioned by The BMJ based on ideas generated by a joint editorial group with members from the Health Foundation and The BMJ , including a patient/carer. The BMJ retained full editorial control over external peer review, editing, and publication. Open access fees and The BMJ ’s quality improvement editor post are funded by the Health Foundation.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/ .

References

  • Olsson-Brown A
  • Dixon-Woods M
  • Batalden PB
  • Berwick D
  • Øvretveit J
  • Academy of Medical Royal Colleges
  • Nelson WA
  • McNicholas C
  • Woodcock T
  • Alderwick H
  • NHS Scotland Quality Improvement Hub. Quality improvement glossary of terms. http://www.qihub.scot.nhs.uk/qi-basics/quality-improvement-glossary-of-terms.aspx
  • McNicol S
  • Solberg LI
  • Massoud MR
  • Albrecht Y
  • Illingworth J
  • Department of Health
  • NHS England. Clinical audit. https://www.england.nhs.uk/clinaudit/
  • Healthcare Quality Improvement Partnership
  • McKinsey Hospital Institute
  • World Health Organization. WHO Health Innovation Group. 2019. https://www.who.int/life-course/about/who-health-innovation-group/en/
  • Sheffield Microsystem Coaching Academy
  • Davidoff F
  • Leviton L
  • Taylor MJ
  • Nicolay C
  • Tarrant C
  • Twycross A
  • University Hospitals Bristol NHS Foundation Trust. Is your study research, audit or service evaluation. http://www.uhbristol.nhs.uk/research-innovation/for-researchers/is-it-research,-audit-or-service-evaluation/
  • University of Sheffield. Differentiating audit, service evaluation and research. 2006. https://www.sheffield.ac.uk/polopoly_fs/1.158539!/file/AuditorResearch.pdf
  • Royal College of Radiologists. Audit and quality improvement. https://www.rcr.ac.uk/clinical-radiology/audit-and-quality-improvement
  • Gundogan B
  • Finkelstein JA
  • Brickman AL
  • Health Foundation
  • Johnston G
  • Crombie IK
  • Davies HT
  • Hillman T
  • NHS Health Research Authority. Defining research. 2013. https://www.clahrc-eoe.nihr.ac.uk/wp-content/uploads/2014/04/defining-research.pdf


EDITORIAL article

Editorial: Continuous Quality Improvement (CQI)—Advancing Understanding of Design, Application, Impact, and Evaluation of CQI Approaches

Ross Bailie*

  • 1 The University of Sydney, The University Centre for Rural Health, Lismore, NSW, Australia
  • 2 James Cook University, College of Medicine and Dentistry, Townsville, QLD, Australia
  • 3 University Research Co., LLC, Chevy Chase, MD, United States

Editorial on the Research Topic

Continuous Quality Improvement (CQI)—Advancing Understanding of Design, Application, Impact, and Evaluation of CQI Approaches

Continuous quality improvement (CQI) approaches are increasingly used to bridge gaps between the evidence base for best practice, what actually happens in practice, and achievement of better population health outcomes. Among a range of quality improvement strategies, CQI is characterized by iterative use of processes to identify quality problems, develop solutions, and implement and evaluate changes. Application of CQI approaches in health care is evolving and evidence of their success continues to emerge (1–3).

Through the Research Topic, “Continuous Quality Improvement (CQI)—Advancing Understanding of Design, Application, Impact, and Evaluation of CQI approaches,” we aimed to aggregate knowledge of useful approaches to tailoring CQI approaches for different contexts, and for implementation, scale-up and evaluation of CQI interventions/programs. This Research Topic has attracted seven original research reports and three “perspectives” papers. Thirty-six authors have contributed from eighteen research organizations, universities, and policy and service delivery organizations. All original research articles and one perspective paper come from the Australian Audit and Best Practice for Chronic Disease (ABCD) National Research Partnership (“ABCD Partnership”) in Indigenous primary healthcare settings ( 4 – 6 ). To some extent, this reflects the interests and connections of two of the Topic Editors, who were lead investigators on the ABCD Partnership. This Partnership has made a prominent contribution to original research on CQI in primary healthcare internationally, with over 50 papers published in the peer-reviewed literature over the past 10 years.

As most articles in this Research Topic arise from the ABCD Partnership, a brief overview of the program provides a useful backdrop. The program originated in 2002 in the Top End of the Northern Territory in Australia, and built on substantial prior research and evaluation of CQI methods in Indigenous primary healthcare. With substantial growth and enthusiastic support from service providers and researchers around Australia, the ABCD Partnership has focused since 2010 on exploring clinical performance variation, examining strategies for improving primary care, and working with health service staff, management and policy makers to enhance effective implementation of successful strategies ( 4 ). By the end of 2014, the ABCD Partnership had generated the largest and most comprehensive dataset on quality of care in Australian Indigenous primary healthcare settings. The Partnership’s work is being extended through the Centre of Research Excellence in Integrated Quality Improvement ( 6 ).

Several research papers included in this Research Topic illustrate consistent findings of wide variation in adherence to clinical best-practice guidelines between health centers ( Bailie et al. ; Burnett et al. ; Matthews et al. ). The papers also show variation among different aspects of care, with relatively good delivery of some modes of care [ Bailie et al. ; ( 7 )] and poor delivery of others—such as follow-up of abnormal clinical or laboratory findings. These findings are evident in eye care ( Burnett et al. ), general preventive clinical care ( Bailie et al. ), and in absolute cardiovascular risk assessment ( Matthews et al. ; Vasant et al. ). The findings are consistent with other ABCD-related publications on diabetes care ( 8 ), preventive health ( 9 ), maternal care ( 10 ), child health ( 11 ), rheumatic heart disease ( 12 ), and sexual health ( 13 ).

Systems to support good clinical care are explored by Woods et al. in five primary healthcare centers that were identified through ABCD data as achieving substantially greater improvement than others over successive CQI cycles. Attention to understanding and improving systems was shown to be vital to the improvements in clinical care achieved by these health centers. Improved staffing and commitment to working in the community were standout aspects of health center systems that underpinned improvements in clinical care.

On a wider scale, engagement by primary healthcare services in the ABCD Partnership has enabled assessment of system functioning at district, regional, state, and national levels, as reflected in stakeholders’ perceptions of barriers and enablers to addressing gaps in chronic illness care and child health, and identifying drivers for improvement ( Bailie et al. ). Primary drivers included staff capability, availability and use of information systems and decision support tools, embedding of CQI processes, and community engagement. We have also shown how consistent and sustained policy and infrastructure support for CQI enables large-scale and ongoing improvements in quality of care ( 3 ).

Commitment of the ABCD team to promoting effective use of CQI data is reflected in one “perspective” paper, which describes a theory-informed cyclical interactive dissemination strategy ( Laycock et al. ). Concurrent developmental evaluation provides a mechanism for learning and refinement over successive cycles ( 14 ).

The other two perspective articles (not specifically from the ABCD program) highlight the role of facilitation in CQI and the potential for application of CQI in health professional education. The emerging evidence on facilitation as a vital tool for effective CQI should guide resourcing and approaches to CQI ( Harvey and Lynch ). The approach builds on the humanistic principles of modern CQI methods—participation, engagement, shared decision-making, enabling others, and tailoring to context. The framework for CQI approaches to health professional education described by Clithero et al. directly addresses a critical need for innovative approaches to health workforce development that will strengthen community engagement and embed CQI principles into health system functioning. The scale and scope of need in workforce development are strongly evident in findings of the ABCD program.

Importantly, CQI methods are proving useful in assessing and potentially improving delivery of evidence-based health promotion practices ( Percival et al. ). Percival’s experience in this field highlights the health facility and wider system challenges facing effective implementation of CQI methods. In health promotion these barriers include low priority given to health promotion in the face of heavy demands for acute clinical care. This work in health promotion complements other research on applying CQI to social determinants of health more broadly ( 15 ), including community food supply ( 16 ), housing ( 17 ), and education ( 18 ).

The publications in this special issue address many of the “building blocks” of high performing primary care described by Bodenheimer and colleagues in the US; namely, four foundational components (engaged leadership, data-driven improvement, empanelment, and team-based care) that are vital to facilitate the implementation of the other six elements (patient-team partnership, population management, continuity of care, prompt access to care, comprehensiveness, and care coordination) ( 19 ). They are also relevant to Australian based work on clinical microsystems and development of CQI tools for mainstream general practice, such as the Primary Care-Practice Improvement Tool (with similar components to the ABCD systems assessment tool) ( 20 ).

Continuous quality improvement is vital to improving health outcomes through system strengthening. We anticipate substantial future development of CQI methods. By late 2017, there had been over 20,000 views of this Research Topic, and many articles have already been cited in peer-reviewed manuscripts. Further research on CQI in primary healthcare would be well guided by a systematic scoping review of literature summarizing empirical research on current knowledge in the field, and identifying key knowledge gaps.

Author Contributions

RB wrote the first draft. JB has revised content and structure. SL and EB reviewed and edited subsequent drafts. All authors have approved the final version of the manuscript for publication.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. RB was the chief investigator on the ABCD National Research Partnership and is the chief investigator on the Centre of Research Excellence in Integrated Quality Improvement. All papers published in the Research Topic received peer review from members of the Frontiers in Public Health Policy panel of reviewers who were independent of named authors on any given article published in this volume, consistent with the journal policy on conflict-of-interest.

Acknowledgments

We would like to thank all of those who contributed to this Research Topic as authors, review editors, and colleagues.

The National Health and Medical Research Council funded the ABCD National Research Partnership Project (grant number 545267) and the Centre for Research Excellence in Integrated Quality Improvement (grant number 1078927). In-kind and financial support was provided by the Lowitja Institute and a range of Community-Controlled and Government agencies.

Abbreviations

ABCD, audit and best practice for chronic disease; CQI, continuous quality improvement.

1. Tricco AC, Ivers NM, Grimshaw JM, Moher D, Turner L, Galipeau J, et al. Effectiveness of quality improvement strategies on the management of diabetes: a systematic review and meta-analysis. Lancet (2012) 379(9833):2252–61. doi:10.1016/S0140-6736(12)60480-2


2. Lewin S, Lavis JN, Oxman AD, Bastias G, Chopra M, Ciapponi A, et al. Supporting the delivery of cost-effective interventions in primary health-care systems in low-income and middle-income countries: an overview of systematic reviews. Lancet (2008) 372(9642):928–39. doi:10.1016/S0140-6736(08)61403-8

3. Bailie R, Matthews V, Larkins S, Thompson S, Burgess P, Weeramanthri T, et al. Impact of policy support on uptake of evidence-based continuous quality improvement activities and the quality of care for Indigenous Australians: a comparative case study. BMJ Open (2017) 7(10). doi:10.1136/bmjopen-2017-016626


4. Bailie R, Si D, Shannon C, Semmens J, Rowley K, Scrimgeour DJ, et al. Study protocol: national research partnership to improve primary health care performance and outcomes for Indigenous peoples. BMC Health Serv Res (2010) 10:129. doi:10.1186/1472-6963-10-129

5. Bailie R, Matthews V, Brands J, Schierhout G. A systems-based partnership learning model for strengthening primary healthcare. Implement Sci (2013) 8(1):143. doi:10.1186/1748-5908-8-143

6. Bailie J, Schierhout G, Cunningham F, Yule J, Laycock A, Bailie R. Quality of primary health care for Aboriginal and Torres Strait Islander People in Australia: key research findings and messages for action from the ABCD National Research Partnership. Menzies Sch Health Res (2015). doi:10.13140/RG.2.1.3887.2801

7. Schierhout G, Matthews V, Connors C, Thompson S, Kwedza R, Kennedy C, et al. Improvement in delivery of type 2 diabetes services differs by mode of care: a retrospective longitudinal analysis in the Aboriginal and Torres Strait Islander primary health care setting. BMC Health Serv Res (2016) 16(1):560. doi:10.1186/s12913-016-1812-9

8. Matthews V, Schierhout G, McBroom J, Connors C, Kennedy C, Kwedza R, et al. Duration of participation in continuous quality improvement: a key factor explaining improved delivery of type 2 diabetes services. BMC Health Serv Res (2014) 14(1):578. doi:10.1186/s12913-014-0578-1

9. Bailie J, Matthews V, Laycock A, Schultz R, Burgess CP, Peiris D, et al. Improving preventive health care in Aboriginal and Torres Strait Islander primary care settings. Global Health (2017) 13(1):48. doi:10.1186/s12992-017-0267-z

10. Gibson-Helm ME, Teede HJ, Rumbold AR, Ranasinha S, Bailie RS, Boyle JA. Continuous quality improvement and metabolic screening during pregnancy at primary health centres attended by Aboriginal and Torres Strait Islander women. Med J Aust (2015) 203(9):369–70. doi:10.5694/mja14.01660

11. McAullay D, McAuley K, Bailie R, Mathews V, Jacoby P, Gardner K, et al. Sustained participation in annual continuous quality improvement activities improves quality of care for Aboriginal and Torres Strait Islander children. J Paediatr Child Health (2017). doi:10.1111/jpc.13673

12. Ralph AP, Fittock M, Schultz R, Thompson D, Dowden M, Clemens T, et al. Improvement in rheumatic fever and rheumatic heart disease management and prevention using a health centre-based continuous quality improvement approach. BMC Health Serv Res (2013) 13(1):525. doi:10.1186/1472-6963-13-525

13. Nattabi B, Matthews V, Bailie J, Rumbold A, Scrimgeour D, Schierhout G, et al. Wide variation in sexually transmitted infection testing and counselling at aboriginal primary health care centres in Australia: analysis of longitudinal continuous quality improvement data. BMC Infect Dis (2017) 17:148. doi:10.1186/s12879-017-2241-z

14. Laycock A, Bailie J, Matthews V, Cunningham F, Harvey G, Percival N, et al. A developmental evaluation to enhance stakeholder engagement in a wide-scale interactive project disseminating quality improvement data: study protocol for a mixed-methods study. BMJ Open (2017) 7:7. doi:10.1136/bmjopen-2017-016341

15. McDonald EL, Bailie R, Michel T. Development and trialling of a tool to support a systems approach to improve social determinants of health in rural and remote Australian communities: the healthy community assessment tool. Int J Equity Health (2013) 12(1):15. doi:10.1186/1475-9276-12-15

16. Brimblecombe J, van den Boogaard C, Wood B, Liberato SC, Brown J, Barnes A, et al. Development of the good food planning tool: a food system approach to food security in Indigenous Australian remote communities. Health Place (2015) 34:54–62. doi:10.1016/j.healthplace.2015.03.006

17. Bailie RS, Wayte KJ. A continuous quality improvement approach to indigenous housing and health. Environ Health (2006) 6(2):36–41.


18. McCalman J, Bainbridge R, Russo S, Rutherford K, Tsey K, Wenitong M, et al. Psycho-social resilience, vulnerability and suicide prevention: impact evaluation of a mentoring approach to modify suicide risk for remote Indigenous Australian students at boarding school. BMC Public Health (2016) 16(1):98. doi:10.1186/s12889-016-2762-1

19. Bodenheimer T, Ghorob A, Willard-Grace R, Grumbach K. The 10 building blocks of high-performing primary care. Ann Fam Med (2014) 12(2):166–71. doi:10.1370/afm.1616

20. Crossland L, Janamian T, Sheehan M, Siskind V, Hepworth J, Jackson CL. Development and pilot study of the primary care practice improvement tool (PC-PIT): an innovative approach. Med J Aust (2014) 201(3):S52–5. doi:10.5694/mja14.00262

Keywords: primary health care, health systems research, continuous quality improvement, Aboriginal and Torres Strait Islander health, building block

Citation: Bailie R, Bailie J, Larkins S and Broughton E (2017) Editorial: Continuous Quality Improvement (CQI)—Advancing Understanding of Design, Application, Impact, and Evaluation of CQI Approaches. Front. Public Health 5:306. doi: 10.3389/fpubh.2017.00306

Received: 20 October 2017; Accepted: 03 November 2017; Published: 23 November 2017

Edited and Reviewed by: Kai Ruggeri , University of Cambridge, United Kingdom

Copyright: © 2017 Bailie, Bailie, Larkins and Broughton. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ross Bailie, ross.bailie@sydney.edu.au

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.


Differentiating research, evidence-based practice, and quality improvement

Research, evidence-based practice (EBP), and quality improvement support the three main goals of the Magnet Recognition Program ® and the Magnet Model component of new knowledge, innovation, and improvements. The three main goals of the Magnet Recognition Program are to: 1) promote quality in a setting that supports professional practice, 2) identify excellence in the delivery of nursing services to patients or residents, and 3) disseminate best practices in nursing services.

The Magnet Model includes five components:

  • transformational leadership
  • structural empowerment
  • exemplary professional practice
  • new knowledge, innovation, and improvements
  • empirical quality outcomes.

To achieve the goals of the Magnet Recognition Program and the “new knowledge, innovation, and improvements” component of the Magnet Model, nurses at all levels of healthcare organizations must be involved. Many nurses may be unaware of the importance of their contributions to developing new knowledge, innovations, and improvements and may not be able to differentiate among those processes. This article explains the basic differences among research, EBP, and quality improvement (QI). (See Comparing research, evidence-based practice, and quality improvement.)


Understanding research

The purpose of conducting research is to generate new knowledge or to validate existing knowledge based on a theory. Research studies involve systematic, scientific inquiry to answer specific research questions or test hypotheses using disciplined, rigorous methods. While research is about investigation, exploration, and discovery, it also requires an understanding of the philosophy of science. For research results to be considered reliable and valid, researchers must use the scientific method in orderly, sequential steps.

The process begins with burning (compelling) questions about a particular phenomenon, such as: What do we know about the phenomenon? What evidence has been developed and reported? What gaps exist in the knowledge base?

The first part of investigation involves a systematic, comprehensive review of the literature to answer those questions. Identified knowledge gaps typically provide the impetus for developing a specific research question (or questions), a hypothesis or hypotheses, or both. Next, a decision can be made on the underlying theory that will guide the study and on the type of method best suited to exploring the phenomenon.

The two main study methods are quantitative (numeric) and qualitative (verbal), although mixed-methods designs using both are increasingly common. Quantitative studies tend to explore relationships among a set of variables related to the phenomenon, whereas qualitative studies seek to understand the deeper meaning of the phenomenon for those involved.

  • Quantitative studies typically involve scientific methodology to determine appropriate sample size, various designs to control for potential errors during data collection, and rigorous statistical analysis of the data.
  • Qualitative studies tend to explore life experiences to give them meaning.
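The sample-size determination mentioned for quantitative studies can be sketched with the standard normal-approximation formula for comparing two proportions. This is only an illustration; real studies would use validated power-analysis software and account for design effects and attrition.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided test of two proportions,
    using the normal approximation with unpooled variance."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g., to detect an improvement in a compliance rate from 60% to 75%
# (illustrative figures only):
print(n_per_group(0.60, 0.75), "participants per group")
```

Note how quickly the required sample grows as the expected difference shrinks, which is one reason small single-site samples are better suited to QI than to research.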

In all research, discovery occurs as data are collected and analyzed and results and outcomes are interpreted.

A final important step in the research process is publication of study results with a description of how they contribute to the body of knowledge. Examples of potential nursing research include conducting a systematic review of studies on preventing catheter-associated urinary tract infections (CAUTI), a randomized controlled trial exploring new wound care methods, and a qualitative study to investigate the lived experiences of patients with a specific chronic disease.

Understanding EBP

Unlike research, EBP isn’t about developing new knowledge or validating existing knowledge. It’s about translating the evidence and applying it to clinical decision-making. The purpose of EBP is to use the best evidence available to make patient-care decisions. Most of the best evidence stems from research. But EBP goes beyond research use and includes clinical expertise as well as patient preferences and values. The use of EBP takes into consideration that sometimes the best evidence is that of opinion leaders and experts, even though no definitive knowledge from research results exists. Whereas research is about developing new knowledge, EBP involves innovation in terms of finding and translating the best evidence into clinical practice.

Steps in the EBP process

The EBP process has seven critical steps:

1. Cultivate a spirit of inquiry.

2. Ask a burning clinical question.

3. Collect the most relevant and best evidence.

4. Critically appraise the evidence.

5. Integrate evidence with clinical expertise, patient preferences, and values in making a practice decision or change.

6. Evaluate the practice decision or change.

7. Disseminate EBP results.

Cultivating a spirit of inquiry means that individually or collectively, nurses should always be asking questions about how to improve healthcare delivery. The burning clinical question commonly is triggered through either a problem focus or a knowledge focus. Problem-focused triggers may arise from identifying a clinical problem or from such areas as risk management, finance, or quality improvement. Knowledge-focused triggers may come from new research results or other literature findings, new philosophies of care, or new regulations.

Regardless of the origin, the next step in the EBP process is to review and appraise the literature. Whereas a literature review for research involves identifying gaps in knowledge, a literature review in EBP is done to find the best current evidence.

Hierarchy of evidence

In searching for the best available evidence, nurses must understand that a hierarchy exists with regard to the level and strength of evidence. All of the various hierarchies of evidence are similar to some degree.

  • The highest (strongest) level of evidence typically comes from a systematic review, a meta-analysis, or an established evidence-based clinical practice guideline based on a systematic review.
  • Other levels of evidence come from randomized controlled trials (RCTs), other types of quantitative studies, qualitative studies, and expert opinion and analyses.

Critical appraisal

Once the evidence is gathered, each study must be critically appraised to ensure its credibility and clinical significance. Critical appraisal often is thought to be tedious and time-consuming. But it’s crucial to determine not only what was done and how, but how well it was done. An easy method for conducting critical appraisal is to answer these three key questions:

  • What were the results of the study? (In other words, what is the evidence?)
  • How valid are the results? (Can they be trusted?)
  • Will the results be helpful in caring for other patients? (Are they transferable?)

Final steps of EBP

The final steps of the EBP process include integrating the evidence with one’s clinical expertise, taking into account patient preferences, and evaluating the effectiveness of applying the evidence. Disseminating or reporting the results of EBP projects may help others learn about and apply the best evidence. Examples of potential EBP projects include implementing an evidence-based clinical practice guideline to reduce or prevent CAUTIs, evaluating an evidence-based intervention to improve wound healing, and applying an EBP to improve compliance with a specific treatment for a chronic disease.

Understanding QI

The purpose of QI is to use a systematic, data-guided approach to improve processes or outcomes. Principles and strategies involved in QI have evolved from organizational philosophies of total quality management and continuous quality improvement.

While the concept of quality can be subjective, QI in healthcare typically focuses on improving patient outcomes. So the key is to clearly define the outcome that needs to be improved, identify how the outcome will be measured, and develop a plan for implementing an intervention and collecting data before and after the intervention.

Various QI methods are available. A common format uses the acronym FOCUS-PDSA:

Find a process to improve.

Organize an effort to work on improvement.

Clarify current knowledge of the process.

Understand process variation and performance capability.

Select changes aimed at performance improvement.

Plan the change; analyze current data and predict the results.

Do it; execute the plan.

Study (analyze) the new data and check the results.

Act; take action to sustain the gains.
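The Study step is where the before-and-after data are compared. A minimal sketch of that comparison, using a two-proportion z-test on invented compliance counts (not from any cited project):

```python
from statistics import NormalDist

def two_proportion_z(pre_hits, pre_n, post_hits, post_n):
    """Two-sided z-test for a change in a proportion between the
    baseline sample and the post-change audit sample."""
    p1, p2 = pre_hits / pre_n, post_hits / post_n
    pooled = (pre_hits + post_hits) / (pre_n + post_n)
    se = (pooled * (1 - pooled) * (1 / pre_n + 1 / post_n)) ** 0.5
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p2 - p1, p_value

# Illustrative: wound-care documentation complete for 52 of 100 charts
# before the change and 68 of 100 after.
change, p = two_proportion_z(52, 100, 68, 100)
print(f"absolute change {change:+.0%}, p = {p:.3f}")
```

In practice many QI teams instead plot successive audit results on a run chart or control chart, since PDSA cycles repeat over time rather than ending with a single before/after test.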

Unlike research and EBP, QI typically doesn’t require extensive literature reviews and rigorous critical appraisal. Therefore, nurses may be much more involved in QI projects than in EBP or research. Also, QI projects normally are site specific, and results aren’t intended to provide generalizable knowledge or best evidence. Examples of QI projects include implementing a process to remove urinary catheters within a certain time frame, developing a process to improve wound-care documentation, and improving the process for patient education for a specific chronic disease.

Comparing research, evidence-based practice, and quality improvement

Research

  • Applies a methodology, which may be quantitative or qualitative, to generate new knowledge, or validate existing knowledge based on a theory
  • Uses systematic, scientific inquiry and disciplined, rigorous methods to answer a research question or test a hypothesis about an intervention
  • Begins with a burning question and uses a systematic review of literature, including critical appraisal, to identify knowledge gaps
  • Contains variables that can be measured and/or manipulated to describe, explain, predict, and/or control phenomena, or to develop meaning, discovery, or understanding about a particular phenomenon

Evidence-based practice

  • Translates the best clinical evidence, typically from research results, to make patient care decisions
  • Involves more than research use; may include clinical expertise and knowledge gained through experience
  • Process begins with a burning question, which may arise from either problem focused or knowledge focused triggers
  • Involves a systematic review of literature, including critical appraisal, to find the best available evidence
  • Studies whether the evidence warrants a practice change
  • Evaluation includes these questions: If practice change was made, did it produce the expected results? If not, why not? If so, how will the new practice be sustained?

Quality improvement

  • Uses a system to monitor and evaluate the quality and appropriateness of care based on EBP and research
  • Involves a systematic method for improving processes, outcomes, or both
  • Evolved from continuous quality improvement and total quality management organizational philosophies
  • Focuses on systems, processes, or functions or a combination
  • Typically doesn’t require extensive review of literature or critical appraisal
  • QI projects are site-specific; results aren’t intended to provide generalizable knowledge or best evidence

Take-away points

  • Research, EBP, and QI support the three main goals of the Magnet Recognition Program and the Magnet Model components of new knowledge, innovation, and improvements.
  • Research applies a methodology (quantitative or qualitative) to develop new knowledge.
  • EBP seeks and applies the best clinical evidence, often from research, toward making patient-care decisions.
  • QI uses systematic processes to improve patient outcomes.

All nurses should know and understand the differences among these three concepts.

Brian T. Conner is an assistant professor and undergraduate program director in the College of Nursing at the Medical University of South Carolina in Charleston.


The Difference Between Quality Improvement, Evidence-Based Practice, and Research


As healthcare institutions become ever more complex and our focus on patient experience expands, nurses are leading and participating in research studies, evidence-based practice (EBP) projects, and quality improvement (QI) initiatives with the goal of improving patient outcomes. Research, EBP, and QI have subtle differences and frequent overlap, which can make it a challenge for nurses to identify the best option for investigating a clinical problem.

The first step is a comprehensive review of the literature with a medical librarian. This review clarifies the problem, what evidence has been reported, and what gaps exist in our knowledge of it. Then synthesize the relevant literature and decide how best to proceed based on the outcome of the literature review, experience with previous projects, available resources, and staff time and effort.

Quality Improvement

QI projects typically don’t involve extensive literature reviews and are usually specific to one facility. The purpose of QI projects is to correct workflow processes, improve efficiencies, reduce variations in care, and address clinical, administrative, or educational problems. An example is assessing and implementing urinary catheter removal policies with a goal of removing catheters within a defined timeframe.
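As a sketch of how such a project might measure its outcome, compliance can be computed directly from insertion and removal timestamps. The 48-hour target and the data below are invented for illustration, not a clinical recommendation.

```python
from datetime import datetime, timedelta

TARGET = timedelta(hours=48)  # hypothetical removal target

# (inserted, removed) timestamps for audited catheters -- invented data
catheters = [
    (datetime(2024, 1, 1, 8, 0), datetime(2024, 1, 2, 20, 0)),  # 36 h
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 4, 9, 0)),   # 72 h
    (datetime(2024, 1, 2, 7, 0), datetime(2024, 1, 3, 7, 0)),   # 24 h
]

within_target = sum((removed - inserted) <= TARGET
                    for inserted, removed in catheters)
compliance = within_target / len(catheters)
print(f"{within_target}/{len(catheters)} removed within 48 h ({compliance:.0%})")
```

Tracking this percentage before and after the policy change is the kind of site-specific, data-guided measurement that distinguishes QI from research.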

Evidence-Based Practice

EBP integrates the best available research evidence with clinical expertise and patient values to improve outcomes. The process involves asking a relevant clinical question, finding the best evidence to answer it, applying the evidence to practice, and evaluating the evidence based on clinical outcomes. An example is implementing a new evidence-based clinical practice guideline at an institution to reduce or prevent chemotherapy extravasation for patients receiving vesicant therapy.

Research

If a literature review identifies gaps, you may conduct a study to generate new knowledge or to validate existing knowledge to answer a specific research question. Human-subjects approval is necessary before conducting a research study. An example is a randomized controlled trial of two skin care regimens for patients receiving external-beam radiation therapy.

Nurses at all levels of care will be involved in asking and answering focused clinical questions with a goal of improving patient outcomes. It is important to be familiar with the similarities and differences between research, EBP, and QI. Each is an excellent method to improve clinical outcomes.


Research, Evidence-Based Practice, and Quality Improvement Simplified

  • PMID: 36595725
  • DOI: 10.3928/00220124-20221207-09

Is the research process different from evidence-based practice and quality improvement, or is it the same? Scattered evidence and misperceptions regarding research, evidence-based practice, and quality improvement make the answer unclear among nurses. This article clarifies and simplifies the three processes for frontline clinical nurses and nurse leaders. The three processes are described and discussed to give the reader standards for differentiating one from the other. The similarities and differences are highlighted, and examples are provided for contextualization of the methods. [J Contin Educ Nurs. 2023;54(1):40-48.]


ES 9 – Improve and Innovate

Essential Public Health Service 9: Improve and innovate public health functions through ongoing evaluation, research, and continuous quality improvement

Learn how environmental health programs support Essential Public Health Service 9 and public health accreditation.

These activities support Essential Service 9 (evaluation, research, and continuous improvement).

[Image: the public health wheel, with Essential Service 9 highlighted and equity at the center.]

Essential service 9—improve and innovate public health functions through ongoing evaluation, research, and continuous quality improvement—is one of the essential services aligned to the Assurance core function.

Here are some examples of common activities* that help deliver essential service 9.

Building a culture of quality in environmental health programs and using quality improvement to improve environmental health programmatic efforts.

Using research, innovation, and other data to inform program decision-making.

Contributing to the evidence base around environmental health practice.

Evaluating environmental health services, policies, plans, and laws to ensure they contribute to health and do not create undue harm.

These activities connect to PHAB standards.

Environmental health programs also link to and support broader public health initiatives such as public health accreditation. Following are examples of activities that could contribute to accreditation by the Public Health Accreditation Board (PHAB) . Completing these activities does not guarantee conformity to PHAB documentation requirements.

PHAB Standard 9.1: Build and foster a culture of quality.

  • Monitoring environmental public health services against established criteria for performance, including the number of services and inspections provided and the extent to which program goals and objectives are achieved for these services, with a focus on outcome and improvement (for example, decreased rate of illness and injury, decrease in critical inspection violations and factors, decrease in exposure).
  • Engaging in performance management, including participation in the health department-wide system.
  • Engaging in quality improvement efforts in the health department and implementing quality improvement projects to improve environmental health processes, programs, and interventions.

PHAB Standard 9.2: Use and contribute to developing research, evidence, practice-based insights, and other forms of information for decision-making.

  • Collaborating with traditional and nontraditional partners (for example, advocacy groups, academia, other government departments) to identify opportunities to translate research findings to improve environmental public health practice and to analyze and publish findings of environmental health investigations to further general knowledge.

More Information

  • 10 Essential Public Health Services
  • 10 Essential Public Health Services Toolkit (Public Health National Center for Innovation)
  • Environmental Public Health and the 10 Essential Services

*Examples are not exhaustive.



Quality Improvement Process: How to Plan and Measure It

Last Updated on March 21, 2024 by Owen McGab Enaohwo

Start your free 14-day trial of SweetProcess No credit card needed. Cancel anytime. Click Here To Try it for Free.

When a business takes that jump from small to more substantial, it can lose footing. Customers might complain that the product they used to love has lost the high quality they appreciated about it. The question becomes, how do you maintain quality while producing more?

Every business wants to present quality offers, but not everyone understands what it takes and how to have the best quality. And managers often get this wrong.

In this article, we will look at the quality improvement process, uncover the must-haves in a good QIP, and study the methods, tools, and steps to build a sound strategy.

SweetProcess is our software, and it’s built for teams to create and manage quality improvement documents and procedures in one place so they can focus on what drives real business growth. Without adding your credit card info, you can sign up for our 14-day free trial to see how it works.

TABLE OF CONTENTS

  • What Exactly Is a Quality Improvement Process?
  • Importance of Quality Improvement Process in a Business
  • Key Steps for Planning an Effective Quality Improvement Process
  • Enhance Your Company’s Quality Improvement Process Using SweetProcess
  • Top Quality Improvement Methodologies
  • Common Quality Improvement Tools
  • Common Challenges of Quality Improvement Process
  • Key Ways to Measure the Quality Improvement Process
  • Quality Improvement Process in Healthcare
  • Build an Effective Quality Improvement Process With SweetProcess

What Exactly Is a Quality Improvement Process?

A quality improvement process (QIP) is a journey to enhance and refine the overall quality of an organization’s products, services, and processes.

At its core, it means identifying areas that need improvement, implementing changes, and continually monitoring outcomes to ensure constant improvement. It’s driven by a commitment to meet or exceed customer expectations, comply with industry standards, and achieve organizational goals.

Importance of Quality Improvement Process in a Business

Let’s look in more detail at the importance and benefits of the quality improvement process.

Enhanced Brand Reputation

When a business regularly offers great products and services, it builds trust with customers, who become loyal and recommend the business to others. QIP helps to make sure that quality is maintained consistently.

Employee Engagement and Satisfaction

QIP ensures clear communication, setting expectations and reducing frustration. This clarity builds trust, enhances teamwork, and promotes collaboration. Investing in QIP not only improves products or services but also creates a workplace where employees feel valued and satisfied.

A motivated and happy team is key to delivering the quality customers expect. QIP is not just a process tool; it’s also a strategy for engaging employees, leading to overall business success.

Adaptability to Change

Change is constant, especially in a competitive market. The ability to adapt quickly can make or break a brand, and that’s where a strong QIP helps. It encourages you to stay vigilant, anticipating changes like shifts in customer behavior or new competitors, and it promotes a culture of innovation by seeking better ways to operate.

Cost Reduction

Mistakes can be expensive, leading to rework, refunds, lost sales, and damage to your brand. QIP focuses on preventing errors before they become complex issues, saving the cost of fixing mistakes. But it doesn’t stop there. QIP leads to long-term savings by promoting continuous learning and development. As you improve, you better understand processes, team capabilities, and customer needs, making informed decisions about technology, hiring, and market expansion.

Regulatory Compliance

QIP helps your business adhere to industry standards, regulations, and laws. It is a built-in guide, ensuring your business is on the right track. QIP prioritizes documenting every process and change, creating a reliable trail of evidence for compliance.

While it may seem like a lot of paperwork, this documentation serves as proof during audits or inspections, demonstrating that your business follows the rules.

Improved Productivity

Quality improvement process can transform your business into a productive powerhouse where you work smarter, not harder, and achieve more with less.

Key Steps for Planning an Effective Quality Improvement Process

Let’s go over these ten important steps to plan a quality improvement process for your company.

Define Your Goals

Before you plan to improve your work, you must decide what you want to achieve. What are you trying to do with this improvement process? Do you want to spend less money, make customers happier, or get more work done?

Once you know your goals, you can start working toward them. Make sure your goals are specific, achievable, trackable, and significant, and give each one a deadline.

Identify and Analyze the Current Situation

This step involves collecting information and looking at your current ways of doing things to find areas where you can improve. You can use tools like maps, charts, and checklists to write how things are done and find places where things might be slow or wasteful.

Once you know how things are, you can start thinking of ways to improve them. It’s important to include your team and others in this thinking process because they might have helpful ideas that you haven’t thought about.

Engage Stakeholders

Now that you’ve figured out where things can get better and chosen the right methods and tools, it’s time to involve the people that the changes will affect—your stakeholders.

Start by telling them about the goals and the benefits that will come from the improvements. Explain how the changes will affect their work and answer any questions they have. Ask for their ideas on how to make things better. This not only helps the changes succeed but also makes people feel more involved in, and responsible for, the process.

Identify Improvement Opportunities

In order to improve your processes, you need to identify opportunities for improvement. You can do this by analyzing your current processes and looking for areas of waste, inefficiency, or error. You may also want to gather feedback from your stakeholders to identify problems or areas where they see room for improvement.

Keep in mind that quality improvement ideas can come from sources you least expect, so be open to suggestions from your team and stakeholders.

Set Measurable Goals

After figuring out what you want to achieve and where you can make things better, the next step is to make sure your goals are measurable. This means making sure your goals are specific, measurable, achievable, relevant, and time-bound (SMART) .

For instance, if you want to reduce customer complaints, you could set a goal to decrease complaints by 20% in the next six months. This way, you have a specific target to aim at and a deadline to reach it.

Select Improvement Methodology and Tools

Choose from the various methodologies available, such as lean, Six Sigma, or agile. Each method has its own tools and techniques to find and fix problems, reduce mistakes, and boost employee productivity. When deciding on the method and tools, consider the kind of business you run, how big your company is, and the size of the QI project.

Implement Strategy

This is where all your QI efforts in the previous steps come together. You’ll need to create a quality improvement project plan outlining the specific tasks and timelines for each part of the process.

Involve your team and stakeholders in the implementation, and ensure everyone has clear roles and responsibilities. Stay organized and take records during the implementation so you can spot any obstacles early on and make adjustments.

Monitor and Evaluate Progress

After you’ve put your quality improvement plan into action, it’s important to monitor your progress and ensure you’re on the right path to reaching your goals.

Schedule regular checks with your team and stakeholders to review the progress and pinpoint any issues or areas where you can improve. Be open-minded and be ready to change your approach when necessary. 

Collect Feedback

While you’re making improvements to your QI process, it’s essential to hear from your team members. Their feedback can point out areas you can improve or your process might not work for them. You can collect feedback through surveys, interviews, or observations. You can also use data to track progress and spot places for improvement.

Standardize and Build a Culture of Continuous Improvement

You can do this through regular training, continuous iteration, and a Standard Operating Procedure (SOP). However, gathering feedback and monitoring the entire process without a platform that automates everything is stressful and time-consuming. You can use SOP software to automate the process and focus on growing your business.

Enhance Your Company’s Quality Improvement Process Using SweetProcess

Yes, you can use SweetProcess to enhance your quality improvement program. Let’s look at how our software can help you.

Document Your Process to Identify Opportunities for Improvement

SweetProcess can help you prevent scattered documents or misinformation in your quality improvement journey.

To do this, log in to SweetProcess, head to the drop-down “More” button, and click on the “Processes” tab. After this, click on the “Create Process” button, enter a title for your process, then click “Continue.”

Then use the AI-suggested description or manually write a description of the steps to be followed to complete the process. This will help you easily spot bottlenecks later and make adjustments.

When choosing your description, always ensure that it’s simple, concise, and straight to the point. Once you’re okay with the output, click on the “Approve” button to save it.

SweetProcess also shows you who edited or adjusted what, and exactly when. Plus, you can trace each previous process version, so you can easily identify where to make adjustments.


Leverage Reports and Analytics in SweetProcess to Make Data-Driven Decisions

SweetProcess also allows you to keep track of process data: every action, step, and task completion time is tracked, giving you a wealth of insights.

And you can use it to track changes and generate and analyze reports to see how improvements affect the quality of different procedures and processes.

Almost every important piece of data you need is clearly shown: version history, who edited or approved a procedure, the description of a process or procedure, toggles for approving requests, a sign-off log, review dates you can enable, disable, or preset (and how many times this can be done), pending approval requests, and related tasks.

SweetProcess makes quality improvement a straightforward process, as it offers an easy-to-use platform that houses process steps. For instance, at Thimbleberry Financial, Amy Walls, president and financial advisor, explained how documentation on SweetProcess helps them achieve this.

Their old process documentation on Microsoft Word worked until they started having high turnovers. The team couldn’t follow the process steps, and it led to chaos in their daily operations.

SweetProcess helped Amy and her team create documentation that allowed them to easily spot errors, inadequacies, and what was causing operational missteps; plus, it helped enhance their overall service delivery.

“Once we started plugging things into SweetProcess, I found the team was finding, ‘Oh, these other documentations are missing so many steps,’ and we found part of the things that makes our processes and procedures more difficult…because we do the comprehensive plan and wealth management, there’s lots of crossover between things,” Amy explained.

Likewise, Tom, the quality assurance and sensory coordinator at Stone & Wood , was tasked with maintaining quality assurance and smooth operations.

Aside from keeping the taste and quality of its beer consistent, the company has to abide by the regulatory standards of its industry or risk being sanctioned. Stepping up to the plate was a struggle, however, because of the absence of an effective quality control system.

The binders and Microsoft Word documents the organization was working with were insufficient. Employees working with outdated procedures could alter the entire production, leaving the team with no option but to start afresh—a big waste of time and resources. 

Tom took up the responsibility to resolve the problem. After trying several systems, he found the solution in SweetProcess. 

In his words: “It gets a lot of use for onboarding new staff. We often get casual labor for a single day’s work. To do that, they have to go through safety induction, quality induction, and various other little documents, so we’ve actually put them all into SweetProcess. We can assign that process as a task to someone, and they can read through it at their own pace, tick it off, and then the manager can come back and see that they have done all of that. It’s working well for us in that regard.

“It really helps us a lot with standard requirements that are out there like the International Organization for Standardization (ISO) where you have to demonstrate that your employees have been trained and show procedure documents, so it ticks all the boxes there,” he added.


Top Quality Improvement Methodologies

Now let’s look at some of the top quality improvement methodologies.

The PDCA Cycle

The PDCA, or Deming, cycle stands for plan, do, check, and act. First, plan by setting goals. In the do phase, implement changes and gather data. In the check phase, observe the results. Finally, act by adopting or adjusting the change based on your analysis.

The PDCA cycle is a continuous loop for ongoing improvement that provides a structured framework to address challenges and enhance quality.

Six Sigma

Six Sigma is a data-driven way to make things better. It originated at Motorola in the 1980s and was later popularized by companies such as General Electric. The goal is an almost perfect process with very few mistakes. There are two major ways to use Six Sigma: DMAIC and DMADV.

  • DMAIC (define, measure, analyze, improve, and control) is for fixing existing problems. 
  • DMADV (define, measure, analyze, design, and verify) is for making new things.

Kaizen

Kaizen is a Japanese word that means “change for the better.” It’s all about making slight adjustments regularly instead of improving everything all at once. As time passes, many minor changes add up to make a big difference.

Total Quality Management

Total quality management (TQM) started in Japan after World War II and came to the United States in the 1980s. It’s known for being a way of thinking about quality that involves everyone and every process in a company, not just those on the manufacturing floor.

TQM relies on three main ideas: customer focus, continuous improvement, and teamwork. To meet and exceed what the customer wants, keep looking for ways to get better and collaborate so that everyone on your team communicates regularly.

Lean Manufacturing

Lean manufacturing is a way of making things better, and it comes from Japan, particularly from Toyota. The main idea is to do things efficiently and reduce any wasteful steps to make processes smoother and give customers what they truly want. By going this way, companies can work more efficiently, create less waste, and provide products or services that customers like.

Common Quality Improvement Tools

Here are some common quality improvement tools.

Pareto Analysis

Pareto analysis is a helpful QI tool that helps us focus on the most significant things. It follows the 80/20 rule, where a small part (20%) causes most of the results (80%). You can use it by creating a chart to see what matters most.

For instance, in a business, you can use it to find and solve a few major problems causing most of the issues. It’s not only for finding problems; you can also use it to understand what’s going well.
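As a rough Python sketch of that idea (the complaint categories and counts here are invented for illustration, not taken from any real data set), you can sort causes by frequency and watch the cumulative share climb:

```python
from collections import Counter

# Hypothetical complaint counts by cause (illustrative numbers only)
complaints = Counter({"late delivery": 48, "damaged item": 27,
                      "wrong item": 15, "billing error": 6, "other": 4})

total = sum(complaints.values())
cumulative = 0
for cause, count in complaints.most_common():  # sorted, most frequent first
    cumulative += count
    print(f"{cause}: {cumulative / total:.0%} cumulative")
```

Here the top two causes account for 75% of all complaints, so they are the “vital few” to tackle first.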

Flowcharts

Flowcharts are like maps for processes, simplifying complex procedures. They visually represent steps with symbols and arrows, making it easy to understand and improve the process.

Flowcharts can be simple or detailed and are versatile in various scenarios, from troubleshooting tech issues to event planning.

Voice of Customer Analysis

Next on our quality improvement tool list is voice of customer analysis (VOC). Forget lasers and holograms; this tool focuses on something even more powerful—customer feedback.

Think of it like understanding what guests want at a party. VOC involves gathering customer feedback through surveys, interviews, and social media to grasp their expectations and preferences. But it’s not just about collecting data; it’s also about analyzing it to find patterns and insights.

Process Maps

Process maps are visual representations of business processes, like a more specific version of flowcharts. They show steps, responsibilities, resources, and time. Use them for quality improvement—identify inefficiencies, waste, or errors in the existing process—and address them.

Process maps enhance communication, aiding coordination and collaboration. They also serve as effective training tools for new team members, boosting competence and confidence.

Common Challenges of Quality Improvement Process

In the competitive business world, pursuing quality improvement is important for survival. Yet the journey isn’t always easy, and various challenges can arise. Here are some challenges to be ready for:

Lack of Continuous Improvement

Businesses should always try to get better. Doing something once and stopping can make things worse. Don’t settle for just okay — you might go backward if you don’t keep improving.

Improving only happens when a company encourages new ideas and takes risks. Employees should have the freedom to share ideas, which is why having a quality improvement tool is very important, as it helps you gather feedback and collaborate effectively with your team members.

Communication Gap

Operating in silos, where departments don’t share ideas, creates missed opportunities and friction. Listening is equally important, and having a platform for communicating ideas is key. Bridging the communication gap requires a conscious effort from all units to foster open dialogue, define communicating goals, promote collaboration, and encourage honest feedback.

Resistance to Change

Change in businesses, like upgrading to a new technology, often faces resistance because of the natural human inclination toward stability. This resistance can show up in various forms, such as employees’ hesitance about using quality improvement software or departments’ worries about losing power. That’s why open dialogue and a clear explanation of the reasons for and benefits of the transition are important.

Ineffective Documentation

Implementing quality improvement processes without effective documentation is like assembling furniture without simple instructions—frustrating and prone to errors.

Documentation serves as the blueprint, outlining standards, roles, procedures, and expected results. Inadequate or unclear documentation leads to confusion, assumptions, and errors and hinders the aim of the initiative.

Key Ways to Measure the Quality Improvement Process

Improving quality is a constant goal for all organizations. It’s about fixing mistakes and enhancing processes to meet customer expectations. Here are eight ways to measure the quality improvement process.

Defect Rate

In quality improvement, the defect rate shows how many mistakes there are in your system and points out where you can do better. It counts the number of messed-up items in a production process, trying to have as close to zero mistakes as possible.

To find the defect rate, you divide the number of messed-up items by the total produced, then multiply by 100 to get a percentage. What counts as a “defect” can be different depending on the industry and product, but it gives a simple way to keep track.
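As a minimal Python sketch of that arithmetic (the function name and the sample figures are ours, for illustration):

```python
def defect_rate(defective: int, total_produced: int) -> float:
    """Percentage of produced items that are defective."""
    if total_produced <= 0:
        raise ValueError("total_produced must be positive")
    return defective / total_produced * 100

# e.g. 3 defective items out of 150 produced
print(round(defect_rate(3, 150), 2))  # 2.0
```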

First Pass Yield

First pass yield (FPY) is a vital measure in making things better, checking how well a process works by seeing how many products or services are right the first time without fixing mistakes.

To calculate FPY, you divide the products without mistakes by the total made, then multiply by 100. This tells you how healthy the process is. If FPY is high, it means the process works well; if it’s low, there are problems causing mistakes.
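The same calculation, sketched in Python (names and numbers are illustrative):

```python
def first_pass_yield(right_first_time: int, total_made: int) -> float:
    """Percentage of units that came out right with no rework."""
    return right_first_time / total_made * 100

# e.g. 95 of 100 units needed no rework
print(round(first_pass_yield(95, 100), 1))  # 95.0
```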

Scrap and Rework Percent

Scrap and rework percent is an important measure in making things better, showing how much of what you produce ends up as waste or needs fixing. For example, if 15 out of 100 things you make are thrown away or require fixing, the scrap and rework percentage is 15%. To find it, count how many things are thrown away or fixed, then divide by the total made, then multiply by 100. A high percentage means more waste, fixing, costs, and maybe unhappy customers, telling you that you need to do better.
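The 15-of-100 example above can be sketched in Python (the split between scrapped and reworked items is our invention for illustration):

```python
def scrap_rework_percent(scrapped: int, reworked: int, total_made: int) -> float:
    """Percentage of output that was thrown away or needed fixing."""
    return (scrapped + reworked) / total_made * 100

# e.g. 10 items scrapped and 5 reworked out of 100 made
print(round(scrap_rework_percent(10, 5, 100), 1))  # 15.0
```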

Cycle Time

Cycle time simply shows how long it takes for a process or operation to be done from start to finish. It’s like using a stopwatch to see how well tasks are achieved. If the time is short, things are working well; if it’s long, there might be problems slowing things down.

Measuring cycle time is easy. Start the clock when the process begins and stop it when it’s done, then figure out the time in between. Doing this a few times gives an average time, which is a standard to compare against.
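Sketched in Python (the timestamps and sample durations are invented for illustration), timing one run and averaging several gives the baseline the paragraph describes:

```python
from datetime import datetime

def cycle_time_minutes(start: datetime, end: datetime) -> float:
    """Elapsed time of one run of the process, in minutes."""
    return (end - start).total_seconds() / 60

one_run = cycle_time_minutes(datetime(2024, 3, 1, 9, 0),
                             datetime(2024, 3, 1, 9, 32))

# Average several timings to get a baseline to compare against
samples = [32.0, 28.5, 30.1, 29.4]  # minutes, from repeated runs
baseline = sum(samples) / len(samples)
print(one_run, baseline)  # 32.0 30.0
```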

Customer Satisfaction Rate

The customer satisfaction rate is a crucial external measure that shows how happy customers are with products, services, or their overall experience with a company. It’s worked out through surveys, finding the percentage of satisfied customers among those who respond. This gives insights into what customers think.

To calculate it, divide the number of satisfied customers by the total survey responses, then multiply by 100. For example, if 60 out of 80 people say they’re satisfied, the rate is 75%. But it’s not just about the numbers—it’s about using the feedback to improve things.
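The 60-of-80 example above, sketched in Python (the helper name is ours):

```python
def csat_rate(satisfied: int, responses: int) -> float:
    """Percentage of survey respondents who report being satisfied."""
    return satisfied / responses * 100

# e.g. 60 satisfied customers out of 80 survey responses
print(csat_rate(60, 80))  # 75.0
```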

Compliance Rate

The compliance rate is an important measure that checks how well processes follow the rules. It shows the percentage of activities done according to the set standards. Not following these rules can lead to problems like fines, legal troubles, and harm to reputation.

Understanding why the compliance rate matters is key. It directly affects how things work inside a company, affecting product quality, efficiency, and overall performance. Having a high compliance rate is good, showing that things align with standards. But it’s not a one-time thing—you need to keep checking regularly to keep the rate high.
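The same percentage arithmetic applies here; a minimal Python sketch with invented audit figures:

```python
def compliance_rate(compliant_checks: int, total_checks: int) -> float:
    """Percentage of audited activities that met the set standard."""
    return compliant_checks / total_checks * 100

# e.g. 47 of 50 audited activities followed the standard
print(round(compliance_rate(47, 50), 1))  # 94.0
```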

Equipment Downtime Rate

This metric keeps track of how often your equipment or machinery isn’t working correctly. It measures the percentage of time your equipment is out of service, whether because of breakdowns, maintenance, repairs, or any other reason.

Why does this matter? Imagine you’re in the middle of an important production, and suddenly your main machine stops working. This can lead to delays, missed deadlines, and increased costs. That’s why monitoring your equipment downtime rate is essential.

To measure it, think of it like a stopwatch: it starts when your equipment stops working and stops when it’s back in action. Divide the total downtime by the planned production time, then multiply by 100 to get the equipment downtime rate.

For example, if you planned for 100 hours of production, but your machinery was down for 10 hours, your equipment downtime rate would be 10%.
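That 10-hours-of-100 example, sketched in Python (the function name is ours):

```python
def downtime_rate(downtime_hours: float, planned_hours: float) -> float:
    """Percentage of planned production time the equipment was out of service."""
    return downtime_hours / planned_hours * 100

# e.g. 10 hours of downtime out of 100 planned production hours
print(round(downtime_rate(10, 100), 1))  # 10.0
```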

A lower number is better. A high equipment downtime rate signals issues with machinery, maintenance, or spare parts. But don’t worry; a high rate is a call to action, prompting you to find and fix the root cause.

Return on Investment

In simple terms, ROI is like the financial payback you get from an investment. It helps you see the financial impact of your quality improvement efforts.

Why is ROI important? Quality improvement isn’t just about better processes; it’s also about making a profit. ROI measures if your investment in quality improvement is turning into dollars and cents.

Calculating ROI may sound complicated, but it’s quite simple. You subtract the cost of the investment from the gains, then divide by the cost of the investment. Multiply by 100, and you get your ROI percentage.
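A minimal Python sketch of that formula (the dollar figures are invented for illustration):

```python
def roi_percent(gain: float, cost: float) -> float:
    """Return on investment: (gain - cost) / cost, as a percentage."""
    return (gain - cost) / cost * 100

# e.g. a $10,000 QI investment that yields $14,000 in measurable gains
print(round(roi_percent(14_000, 10_000), 1))  # 40.0
```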

Quality Improvement Process in Healthcare

The healthcare quality improvement process is like a plan to make healthcare better. It includes finding what’s not working, setting goals to improve, making changes, and checking if it’s getting results.

For instance, if a hospital wants to reduce patient infections, it may first try a new way of sterilizing equipment in one unit. If that reduces infections, it can roll the QI method out to the entire hospital.

Build an Effective Quality Improvement Process With SweetProcess

Having quality products and services involves sticking to a continuous quality improvement routine. But the journey to quality improvement is hard when you don’t have software to manage, automate, and streamline the whole process.

That’s why you need SweetProcess as a tool to help you every step of the way. You can sign up without a credit card for a 14-day FREE trial period so you can see how it works for yourself.

You also get to enjoy features like an AI process prompter, process documentation, drag-and-drop process maps, process version history, teammate feedback, progress tracker, alerts for team member tracking, assign tasks, process search bar, customizable process flow charts, cloud integration with CRM tools, and process activity reports. These tools allow you to organize the entire process and grow your company.


Quality Assurance/Continuous Quality Improvement Program Administrator (WMS2) - Olympia / Hybrid

Job Posting for Quality Assurance/Continuous Quality Improvement Program Administrator (WMS2) - Olympia / Hybrid at State of Washington

  • Develop and present to executive management strategic plans, new policies, and policy changes for both the short- and long-term organizational goals of the administration.
  • Organize development of targeted and actionable change management plans which may include a coaching plan, sponsor roadmap, resistance management plan and training plan.
  • Coordinate, write, and review federal reports: Child and Family Services Review, the Annual Progress of Services Review.
  • Bachelor’s degree in Social Work, Health Science, Education, Public Administration or a closely related degree field AND seven (7) years of increasing experience in leadership, grant writing, quality assurance, program development and implementation, contracts, budgeting and supervision in public child welfare and/or childcare.
  • Master’s degree in Social Work, Health Science, Education, Public Administration or a closely related degree field AND five (5) years of increasing experience in leadership, grant writing, quality assurance, program development and implementation, contracts, budgeting and supervision in public child welfare and/or childcare.
  • Ability to establish and maintain tracking and monitoring systems.
  • Current resume detailing experience and education.


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Hughes RG, editor. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville (MD): Agency for Healthcare Research and Quality (US); 2008 Apr.

Chapter 44. Tools and Strategies for Quality Improvement and Patient Safety

Ronda G. Hughes

The necessity for quality and safety improvement initiatives permeates health care. 1 , 2 Quality health care is defined as “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” 3 (p. 1161). According to the Institute of Medicine (IOM) report, To Err Is Human , 4 the majority of medical errors result from faulty systems and processes, not individuals. Processes that are inefficient and variable, changing case mix of patients, health insurance, differences in provider education and experience, and numerous other factors contribute to the complexity of health care. With this in mind, the IOM also asserted that today’s health care industry functions at a lower level than it can and should, and it put forth the following six aims of health care: effective, safe, patient-centered, timely, efficient, and equitable. 2 The aims of effectiveness and safety are targeted through process-of-care measures, assessing whether providers of health care perform processes that have been demonstrated to achieve the desired aims and avoid those processes that are predisposed toward harm. The goals of measuring health care quality are to determine the effects of health care on desired outcomes and to assess the degree to which health care adheres to processes based on scientific evidence or agreed to by professional consensus and is consistent with patient preferences.

Because errors are caused by system or process failures, 5 it is important to adopt various process-improvement techniques to identify inefficiencies, ineffective care, and preventable errors to then influence changes associated with systems. Each of these techniques involves assessing performance and using findings to inform change. This chapter will discuss strategies and tools for quality improvement—including failure modes and effects analysis, Plan-Do-Study-Act, Six Sigma, Lean, and root-cause analysis—that have been used to improve the quality and safety of health care.

  • Measures and Benchmarks

Efforts to improve quality need to be measured to demonstrate “whether improvement efforts (1) lead to change in the primary end point in the desired direction, (2) contribute to unintended results in different parts of the system, and (3) require additional efforts to bring a process back into acceptable ranges” 6 (p. 735). The rationale for measuring quality improvement is the belief that good performance reflects good-quality practice, and that comparing performance among providers and organizations will encourage better performance. In the past few years, there has been a surge in measuring and reporting the performance of health care systems and processes. 1 , 7–9 While public reporting of quality performance can be used to identify areas needing improvement and ascribe national, State, or other level of benchmarks, 10 , 11 some providers have been sensitive to comparative performance data being published. 12 Another audience for public reporting, consumers, has had problems interpreting the data in reports and has consequently not used the reports to the extent hoped to make informed decisions for higher-quality care. 13–15

The complexity of health care systems and delivery of services, the unpredictable nature of health care, and the occupational differentiation and interdependence among clinicians and systems 16–19 make measuring quality difficult. One of the challenges in using measures in health care is the attribution variability associated with high-level cognitive reasoning, discretionary decisionmaking, problem-solving, and experiential knowledge. 20–22 Another measurement challenge is whether a near miss could have resulted in harm or whether an adverse event was a rare aberration or likely to recur. 23

The Agency for Healthcare Research and Quality (AHRQ), the National Quality Forum, the Joint Commission, and many other national organizations endorse the use of valid and reliable measures of quality and patient safety to improve health care. Many of these useful measures that can be applied to the different settings of care and care processes can be found at AHRQ’s National Quality Measures Clearinghouse ( http://www.qualitymeasures.ahrq.gov ) and the National Quality Forum’s Web site ( http://www.qualityforum.org ). These measures are generally developed through a process including an assessment of the scientific strength of the evidence found in peer-reviewed literature, evaluating the validity and reliability of the measures and sources of data, determining how best to use the measure (e.g., determine if and how risk adjustment is needed), and actually testing the measure. 24 , 25

Measures of quality and safety can track the progress of quality improvement initiatives using external benchmarks. Benchmarking in health care is defined as the continual and collaborative discipline of measuring and comparing the results of key work processes with those of the best performers 26 in evaluating organizational performance. There are two types of benchmarking that can be used to evaluate patient safety and quality performance. Internal benchmarking is used to identify best practices within an organization, to compare best practices within the organization, and to compare current practice over time. The information and data can be plotted on a control chart with statistically derived upper and lower control limits. However, using only internal benchmarking does not necessarily represent the best practices elsewhere. Competitive or external benchmarking involves using comparative data between organizations to judge performance and identify improvements that have proven to be successful in other organizations. Comparative data are available from national organizations, such as AHRQ’s annual National Health Care Quality Report 1 and National Healthcare Disparities Report, 9 as well as several proprietary benchmarking companies or groups (e.g., the American Nurses Association’s National Database of Nursing Quality Indicators).
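Internal benchmarking with a control chart, as described above, can be sketched in a few lines. The monthly infection-rate figures and the 3-sigma limits below are illustrative assumptions, not data from the handbook:

```python
from statistics import mean, stdev

# Hypothetical monthly infection rates per 1,000 patient-days (illustrative data)
monthly_rates = [2.1, 1.8, 2.4, 2.0, 2.6, 1.9, 2.2, 3.9, 2.1, 2.0]

center = mean(monthly_rates)        # center line of the control chart
sigma = stdev(monthly_rates)        # sample standard deviation
ucl = center + 3 * sigma            # statistically derived upper control limit
lcl = max(center - 3 * sigma, 0.0)  # lower control limit (a rate cannot be negative)

# Points outside the control limits signal special-cause variation to investigate
out_of_control = [(i, r) for i, r in enumerate(monthly_rates) if not lcl <= r <= ucl]
print(f"center={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}, flagged={out_of_control}")
```

Note that production SPC software usually derives sigma from moving ranges (an XmR chart) rather than the overall sample standard deviation; the version above is a deliberate simplification of the plotting idea the paragraph describes.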

  • Quality Improvement Strategies

More than 40 years ago, Donabedian 27 proposed measuring the quality of health care by observing its structure, processes, and outcomes. Structure measures assess the accessibility, availability, and quality of resources, such as health insurance, bed capacity of a hospital, and number of nurses with advanced training. Process measures assess the delivery of health care services by clinicians and providers, such as using guidelines for care of diabetic patients. Outcome measures indicate the final result of health care and can be influenced by environmental and behavioral factors. Examples include mortality, patient satisfaction, and improved health status.

Twenty years later, health care leaders borrowed techniques from the work of Deming 28 in rebuilding the manufacturing businesses of post-World War II Japan. Deming, the father of Total Quality Management (TQM), promoted “constancy of purpose” and systematic analysis and measurement of process steps in relation to capacity or outcomes. The TQM model is an organizational approach involving organizational management, teamwork, defined processes, systems thinking, and change to create an environment for improvement. This approach incorporated the view that the entire organization must be committed to quality and improvement to achieve the best results. 29

In health care, continuous quality improvement (CQI) is used interchangeably with TQM. CQI has been used as a means to develop clinical practice 30 and is based on the principle that there is an opportunity for improvement in every process and on every occasion. 31 Many in-hospital quality assurance (QA) programs generally focus on issues identified by regulatory or accreditation organizations, such as checking documentation, reviewing the work of oversight committees, and studying credentialing processes. 32 There are several other strategies that have been proposed for improving clinical practice. For example, Horn and colleagues discussed clinical practice improvement (CPI) as a “multidimensional outcomes methodology that has direct application to the clinical management of individual patients” 33 (p. 160). CPI, an approach led by clinicians that attempts a comprehensive understanding of the complexity of health care delivery, uses a team, determines a purpose, collects data, assesses findings, and then translates those findings into practice changes. From these models, management and clinician commitment and involvement have been found to be essential for the successful implementation of change. 34–36 From other quality improvement strategies, there has been particular emphasis on the need for management to have faith in the project, communicate the purpose, and empower staff. 37

In the past 20 years, quality improvement methods have “generally emphasize[d] the importance of identifying a process with less-than-ideal outcomes, measuring the key performance attributes, using careful analysis to devise a new approach, integrating the redesigned approach with the process, and reassessing performance to determine if the change in process is successful” 38 (p. 9). Besides TQM, other quality improvement strategies have come forth, including the International Organization for Standardization ISO 9000, Zero Defects, Six Sigma, Baldrige, and Toyota Production System/Lean Production. 6 , 39 , 40

Quality improvement is defined “as systematic, data-guided activities designed to bring about immediate improvement in health care delivery in particular settings” 41 (p. 667). A quality improvement strategy is defined as “any intervention aimed at reducing the quality gap for a group of patients representative of those encountered in routine practice” 38 (p. 13). Shojania and colleagues 38 developed a taxonomy of quality improvement strategies (see Table 1 ), which infers that the choice of the quality improvement strategy and methodology is dependent upon the nature of the quality improvement project. Many other strategies and tools for quality improvement can be accessed at AHRQ’s quality tools Web site ( www.qualitytools.ahrq.gov ) and patient safety Web site ( www.patientsafety.gov ).

Table 1

Taxonomy of Quality Improvement Strategies With Examples of Substrategies

Quality improvement projects and strategies differ from research: while research attempts to assess and address problems that will produce generalizable results, quality improvement projects can include small samples, frequent changes in interventions, and adoption of new strategies that appear to be effective. 6 In a review of the literature on the differences between quality improvement and research, Reinhardt and Ray 42 proposed four criteria that distinguish the two: (1) quality improvement applies research into practice, while research develops new interventions; (2) risk to participants is not present in quality improvement, while research could pose risk to participants; (3) the primary audience for quality improvement is the organization, and the information from analyses may be applicable only to that organization, while research is intended to be generalizable to all similar organizations; and (4) data from quality improvement is organization-specific, while research data are derived from multiple organizations.

The lack of scientific health services literature has inhibited the acceptance of quality improvement methods in health care, 43 , 44 but new rigorous studies are emerging. It has been asserted that a quality improvement project can be considered more like research when it involves a change in practice, affects patients and assesses their outcomes, employs randomization or blinding, and exposes patients to additional risks or burdens—all in an effort towards generalizability. 45–47 Regardless of whether the project is considered research, human subjects need to be protected by ensuring respect for participants, securing informed consent, and ensuring scientific value. 41 , 46 , 48

Plan-Do-Study-Act (PDSA)

Quality improvement projects and studies aimed at making positive changes in health care processes to effect favorable outcomes can use the Plan-Do-Study-Act (PDSA) model. This is a method that has been widely used by the Institute for Healthcare Improvement for rapid cycle improvement. 31 , 49 One of the unique features of this model is the cyclical nature of impacting and assessing change, most effectively accomplished through small and frequent PDSAs rather than big and slow ones, 50 before changes are made systemwide. 31 , 51

The purpose of PDSA quality improvement efforts is to establish a functional or causal relationship between changes in processes (specifically behaviors and capabilities) and outcomes. Langley and colleagues 51 proposed three questions before using the PDSA cycles: (1) What is the goal of the project? (2) How will it be known whether the goal was reached? and (3) What will be done to reach the goal? The PDSA cycle starts with determining the nature and scope of the problem, what changes can and should be made, a plan for a specific change, who should be involved, what should be measured to understand the impact of change, and where the strategy will be targeted. Change is then implemented and data and information are collected. Results from the implementation study are assessed and interpreted by reviewing several key measurements that indicate success or failure. Lastly, action is taken on the results by implementing the change or beginning the process again. 51
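The cycle described above can be sketched as a small loop of four steps. The aim (fewer missed hand-hygiene steps), the predicted effect, and the observed result below are hypothetical examples for illustration, not taken from the chapter:

```python
# A PDSA cycle expressed as four steps; all figures below are hypothetical.

def pdsa_cycle(plan, do, study, act, baseline):
    prediction = plan(baseline)                   # Plan: state the change and predict its effect
    observed = do()                               # Do: run the small-scale test, collect data
    prediction_met = study(observed, prediction)  # Study: compare results with the prediction
    return act(prediction_met)                    # Act: adopt, adapt, or abandon the change

baseline_missed = 12  # hypothetical: missed hand-hygiene steps per 100 observations

decision = pdsa_cycle(
    plan=lambda b: b * 0.5,          # predict that a checklist halves missed steps
    do=lambda: 7,                    # hypothetical observation after the test
    study=lambda obs, pred: obs <= pred + 1,
    act=lambda met: "adopt and scale up" if met else "adapt and run another cycle",
    baseline=baseline_missed,
)
print(decision)
```

In practice the Act step feeds the next Plan, so the function would be called repeatedly with the adapted change, matching the small-and-frequent cycles the chapter recommends.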

Six Sigma

Six Sigma, originally designed as a business strategy, involves improving, designing, and monitoring processes to minimize or eliminate waste while optimizing satisfaction and increasing financial stability. 52 The performance of a process—or the process capability—is used to measure improvement by comparing the baseline process capability (before improvement) with the process capability after piloting potential solutions for quality improvement. 53 There are two primary methods used with Six Sigma. One method inspects process outcome and counts the defects, calculates a defect rate per million, and uses a statistical table to convert defect rate per million to a σ (sigma) metric. This method is applicable to preanalytic and postanalytic processes (a.k.a. pretest and post-test studies). The second method uses estimates of process variation to predict process performance by calculating a σ metric from the defined tolerance limits and the variation observed for the process. This method is suitable for analytic processes in which the precision and accuracy can be determined by experimental procedures.
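The first method, counting defects and converting the rate per million to a σ metric, amounts to a normal-quantile lookup. A minimal sketch, assuming the conventional 1.5σ long-term shift used in standard Six Sigma conversion tables (the chapter itself does not give the formula):

```python
from statistics import NormalDist

def sigma_metric(defects: int, opportunities: int, shift: float = 1.5) -> float:
    """Convert a defect count to a sigma metric via the normal quantile.

    The 1.5-sigma shift is the conventional long-term allowance used in
    standard Six Sigma tables (an assumption, not stated in the chapter).
    """
    dpmo = defects / opportunities * 1_000_000  # defects per million opportunities
    yield_fraction = 1.0 - dpmo / 1_000_000     # fraction of defect-free opportunities
    return NormalDist().inv_cdf(yield_fraction) + shift

# 3.4 defects per million is the classic "six sigma" performance level
level = sigma_metric(defects=34, opportunities=10_000_000)
print(f"{level:.2f} sigma")  # approximately 6.00
```

The same function applied to, say, 6,210 defects per million yields roughly 4σ, which is why published tables pair those figures.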

One component of Six Sigma uses a five-phased process that is structured, disciplined, and rigorous, known as the define, measure, analyze, improve, and control (DMAIC) approach. 53 , 54 To begin, the project is identified, historical data are reviewed, and the scope of expectations is defined. Next, continuous total quality performance standards are selected, performance objectives are defined, and sources of variability are defined. As the new project is implemented, data are collected to assess how well changes improved the process. To support this analysis, validated measures are developed to determine the capability of the new process.

Six Sigma and PDSA are interrelated. The DMAIC methodology builds on Shewhart’s plan, do, check, and act cycle. 55 The key elements of Six Sigma are related to PDSA as follows: the plan phase of PDSA is related to define core processes, key customers, and customer requirements of Six Sigma; the do phase of PDSA is related to measure performance of Six Sigma; the study phase of PDSA is related to analyze of Six Sigma; and the act phase of PDSA is related to improve and integrate of Six Sigma. 56

Toyota Production System/Lean Production System

Application of the Toyota Production System—used in the manufacturing process of Toyota cars 57 —resulted in what has become known as the Lean Production System or Lean methodology. This methodology overlaps with the Six Sigma methodology, but differs in that Lean is driven by the identification of customer needs and aims to improve processes by removing activities that are non-value-added (a.k.a. waste). Steps in the Lean methodology involve maximizing value-added activities in the best possible sequence to enable continuous operations. 58 This methodology depends on root-cause analysis to investigate errors and then to improve quality and prevent similar errors.

Physicians, nurses, technicians, and managers are increasing the effectiveness of patient care and decreasing costs in pathology laboratories, pharmacies, 59–61 and blood banks 61 by applying the same principles used in the Toyota Production System. Two reviews of projects using Toyota Production System methods reported that health care organizations improved patient safety and the quality of health care by systematically defining the problem; using root-cause analysis; then setting goals, removing ambiguity and workarounds, and clarifying responsibilities. When it came to processes, team members in these projects developed action plans that improved, simplified, and redesigned work processes. 59 , 60 According to Spear, the Toyota Production System method was used to make the “following crystal clear: which patient gets which procedure (output); who does which aspect of the job (responsibility); exactly which signals are used to indicate that the work should begin (connection); and precisely how each step is carried out” 60 (p. 84).

Factors involved in the successful application of the Toyota Production System in health care are eliminating unnecessary daily activities associated with “overcomplicated processes, workarounds, and rework” 59 (p. 234), involving front-line staff throughout the process, and rigorously tracking problems as they are experimented with throughout the problem-solving process.

Root Cause Analysis

Root cause analysis (RCA), used extensively in engineering 62 and similar to critical incident technique, 63 is a formalized investigation and problem-solving approach focused on identifying and understanding the underlying causes of an event as well as potential events that were intercepted. The Joint Commission requires RCA to be performed in response to all sentinel events and expects, based on the results of the RCA, the organization to develop and implement an action plan consisting of improvements designed to reduce future risk of events and to monitor the effectiveness of those improvements. 64

RCA is a technique used to identify trends and assess risk that can be used whenever human error is suspected 65 with the understanding that system factors, rather than individual factors, are likely the root cause of most problems. 2 , 4 A similar procedure is critical incident technique, where after an event occurs, information is collected on the causes and actions that led to the event. 63

An RCA is a reactive assessment that begins after an event, retrospectively outlining the sequence of events leading to that identified event, charting causal factors, and identifying root causes to completely examine the event. 66 Because it is a labor-intensive process, ideally a multidisciplinary team trained in RCA triangulates or corroborates major findings and increases the validity of findings. 67 Taken one step further, the notion of aggregate RCA (used by the Veterans Affairs (VA) Health System) is purported to use staff time efficiently and involves several simultaneous RCAs that focus on assessing trends, rather than an in-depth case assessment. 68

Using a qualitative process, the aim of RCA is to uncover the underlying cause(s) of an error by looking at enabling factors (e.g., lack of education), including latent conditions (e.g., not checking the patient’s ID band) and situational factors (e.g., two patients in the hospital with the same last name) that contributed to or enabled the adverse event (e.g., an adverse drug event). Those involved in the investigation ask a series of key questions, including what happened, why it happened, what were the most proximate factors causing it to happen, why those factors occurred, and what systems and processes underlie those proximate factors. Answers to these questions help identify ineffective safety barriers and causes of problems so similar problems can be prevented in the future. Often, it is important to also consider events that occurred immediately prior to the event in question because other remote factors may have contributed. 68
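The series of "why" questions described above can be pictured as a chain from the adverse event back to an underlying cause. The cause chain below is a hypothetical adverse-drug-event example echoing the factors named in the text, not a real investigation:

```python
# Hypothetical cause chain for an adverse drug event; each entry answers
# "why did this happen?" for the item on its left.
cause_chain = {
    "adverse drug event": "patient ID band not checked",            # latent condition
    "patient ID band not checked": "two patients with the same last name",  # situational factor
    "two patients with the same last name": "no forcing function in the ID process",
}

def trace_root_cause(event: str, chain: dict[str, str]) -> list[str]:
    """Follow each successive 'why' until no further cause is recorded."""
    path = [event]
    while path[-1] in chain:
        path.append(chain[path[-1]])
    return path

print(trace_root_cause("adverse drug event", cause_chain))
```

A real RCA branches into multiple contributing causes rather than a single chain, which is why multidisciplinary teams map causal factors on a diagram; the linear version above only illustrates the iterative questioning.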

The final step of a traditional RCA is developing recommendations for system and process improvement(s), based on the findings of the investigation. 68 The importance of this step is supported by a review of the literature on root-cause analysis, where the authors conclude that there is little evidence that RCA can improve patient safety by itself. 69 A nontraditional strategy, used by the VA, is aggregate RCA processes, where several simultaneous RCAs are used to examine multiple cases in a single review for certain categories of events. 68 , 70

Due to the breadth of types of adverse events and the large number of root causes of errors, consideration should be given to how to differentiate system from process factors, without focusing on individual blame. The notion has been put forth that it is a truly rare event for errors to be associated with irresponsibility, personal neglect, or intention, 71 a notion supported by the IOM. 4 , 72 Yet efforts to categorize individual errors—such as the Taxonomy of Error Root Cause Analysis of Practice Responsibility (TERCAP), which focuses on “lack of attentiveness, lack of agency/fiduciary concern, inappropriate judgment, lack of intervention on the patient’s behalf, lack of prevention, missed or mistaken MD/healthcare provider’s orders, and documentation error” 73 (p. 512)—may distract the team from investigating systems and process factors that can be modified through subsequent interventions. Even the majority of individual factors can be addressed through education, training, and installing forcing functions that make errors difficult to commit.

Failure Modes and Effects Analysis

Errors will inevitably occur, and the times when errors occur cannot be predicted. Failure modes and effects analysis (FMEA) is an evaluation technique used to identify and eliminate known and/or potential failures, problems, and errors from a system, design, process, and/or service before they actually occur. 74–76 FMEA was developed for use by the U.S. military and has been used by the National Aeronautics and Space Administration (NASA) to predict and evaluate potential failures and unrecognized hazards (e.g., probabilistic occurrences) and to proactively identify steps in a process that could reduce or eliminate future failures. 77 The goal of FMEA is to prevent errors by attempting to identify all the ways a process could fail, estimate the probability and consequences of each failure, and then take action to prevent the potential failures from occurring. In health care, FMEA focuses on the system of care and uses a multidisciplinary team to evaluate a process from a quality improvement perspective.
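A common way to estimate the probability and consequences of each failure mode is a risk priority number (RPN), the product of severity, occurrence, and detection scores. The 1-10 scales and the failure modes below follow the usual industrial convention and are hypothetical illustrations; the chapter does not prescribe a specific scoring scheme:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str
    severity: int    # 1 (minor) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes in a medication-ordering process
modes = [
    FailureMode("wrong-dose calculation", severity=9, occurrence=3, detection=4),
    FailureMode("illegible handwritten order", severity=6, occurrence=7, detection=5),
    FailureMode("delayed lab result", severity=4, occurrence=5, detection=2),
]

# Prioritize the highest-risk modes for preventive action
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.step}: RPN={m.rpn}")
```

The ranking, not the absolute numbers, is what drives action: the multidisciplinary team works down the list, redesigning the process steps with the highest RPNs first. The VA's HFMEA variant, described below, replaces the RPN with a severity-by-probability hazard matrix and a decision tree.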

This method can be used to evaluate alternative processes or procedures as well as to monitor change over time. To monitor change over time, well-defined measures are needed that can provide objective information of the effectiveness of a process. In 2001, the Joint Commission mandated that accredited health care providers conduct proactive risk management activities that identify and predict system weaknesses and adopt changes to minimize patient harm on one or two high-priority topics a year. 78

Developed by the VA’s National Center for Patient Safety, the health failure modes and effects analysis (HFMEA) tool is used for risk assessment. There are five steps in HFMEA: (1) define the topic; (2) assemble the team; (3) develop a process map for the topic, and consecutively number each step and substep of that process; (4) conduct a hazard analysis (e.g., identify cause of failure modes, score each failure mode using the hazard scoring matrix, and work through the decision tree analysis); 79 and (5) develop actions and desired outcomes. In conducting a hazard analysis, it is important to list all possible and potential failure modes for each of the processes, to determine whether the failure modes warrant further action, and to list all causes for each failure mode when the decision is to proceed further. After the hazard analysis, it is important to consider the actions needed to be taken and outcome measures to assess, including describing what will be eliminated or controlled and who will have responsibility for each new action. 79

  • Research Evidence

Fifty studies and quality improvement projects were included in this analysis. The findings were categorized by type of quality method employed, including FMEA, RCA, Six Sigma, Lean, and PDSA. Several common themes emerged: (1) what was needed to implement quality improvement strategies, (2) what was learned from evaluating the impact of change interventions, and (3) what is known about using quality improvement tools in health care.

What Was Needed To Implement Quality Improvement Strategies?

Substantial and strong leadership support, 80–83 involvement, 81 , 84 consistent commitment to continuous quality improvement, 85 , 86 and visibility, 87 both in writing and physically, 86 were important in making significant changes. Substantial commitment from hospital boards was also found to be necessary. 86 , 88 The inevitability of resource demands associated with changing process required senior leadership to (1) ensure adequate financial resources 87–89 by identifying sources of funds for training and purchasing and testing innovative technologies 90 and equipment; 91 (2) facilitate and enable key players to have the needed time to be actively involved in the change processes, 85 , 88 , 89 providing administrative support; 90 (3) support a time-consuming project by granting enough time for it to work; 86 , 92 and (4) emphasize safety as an organizational priority and reinforce expectations, especially when the process was delayed or results were periodically not realized. 87 It was also asserted that senior leaders needed to understand the impact of high-level decisions on work processes and staff time, 88 especially when efforts were underway to change practice, and that quality improvement needed to be incorporated into systemwide leadership development. 88 Leadership was needed to make patient safety a key aspect of all meetings and strategies, 85 , 86 to create a formal process for identifying annual patient safety goals for the organization, and to hold themselves accountable for patient safety outcomes. 85

Even with strong and committed leadership, some people within the organization may be hesitant to participate in quality improvement efforts because previous attempts to create change were hindered by various system factors, 93 a lack of organization-wide commitment, 94 poor organizational relationships, and ineffective communication. 89 However, the impact of these barriers was found to be lessened if the organization embraced the need for change, 95 changed the culture to enable change, 90 and actively pursued institutionalizing a culture of safety and quality improvement. Yet adopting a nonpunitive culture of change took time, 61 , 90 even to the extent that the legal department in one hospital was engaged in the process to turn the focus to systems, not individual-specific issues. 96 Also, those staff members involved in the process felt more at ease with improving processes, particularly when cost savings were realized and when no layoff policies were put in place to protect job security even when efficiencies were realized. 84

The improvement process needed to engage 97 and involve all stakeholders and gain their understanding that the investment of resources in quality improvement could be recouped with efficiency gains and fewer adverse events. 86 Stakeholders were used to (1) prioritize which safe practices to target by developing a consensus process among stakeholders 86 , 98 around issues that were clinically important, i.e., hazards encountered in everyday practice that would make a substantial impact on patient safety; (2) develop solutions to the problems that required addressing fundamental issues of interdisciplinary communication and teamwork, which were recognized as crucial aspects of a culture of safety; and (3) build upon the success of other hospitals. 86 In an initiative involving a number of rapid-cycle collaboratives, successful collaboratives were found to have used stakeholders to determine the choice of subject, define objectives, define roles and expectations, motivate teams, and use results from data analyses. 86 Additionally, it was important to take into account the different perspectives of stakeholders. 97 Because variation in opinion among stakeholders and team members was expected 99 and achieving buy-in from all stakeholders could have been difficult to achieve, efforts were made to involve stakeholders early in the process, solicit feedback, 100 and gain support for critical changes in the process. 101

Communication and sharing information with stakeholders and staff was critical to specifying the purpose and strategy of the quality initiative; 101 developing open channels of communication across all disciplines and at all levels of leadership/staff, permitting the voicing of concerns and observations throughout the process of creating change; 88 ensuring that patients and families were appropriately included in the dialogue; ensuring that everyone involved felt that he or she was an integral part of the health care team and was responsible for patient safety; sharing lessons learned from root-cause analysis; and capturing attention and soliciting buy-in by sharing patient safety stories with staff and celebrating successes, no matter how small. 85 Yet in trying to keep everyone informed of the process and the data behind decisions, some staff had difficulty accepting system changes made in response to the data. 89

The successful work of these strategies was dependent upon having motivated 80 and empowered teams. There were many advantages to basing the work of the quality improvement strategies on the teamwork of multidisciplinary teams that would review data and lead change. 91 These teams needed to be composed of the right staff, 91 , 92 include peers, 102 engage all of the right stakeholders (ranging from senior managers to staff), and be supported by senior-level management/leadership. 85 , 86 Specific stakeholders (e.g., nurses and physicians) had to be involved 81 and supported to actually make the change, and to be the champions 103 and problem-solvers within departments 59 for the interventions to succeed. Because implementing the quality initiatives required substantial changes in clinicians' daily work, 86 consideration of the attitude and willingness of front-line staff to make the specific improvements 59 , 88 , 104 was needed.

Other key factors in improvement success were implementing protocols that could be adapted to the patient's needs 93 and to each unit, based on experience, training, and culture. 88 It was also important to define and test different approaches; different approaches can converge and arrive at the same point. 81 Mechanisms that facilitated staff buy-in included putting the types and causes of errors at the forefront of providers' minds, making errors visible, 102 involving staff in the process of assessing work and looking for waste, 59 providing insight as to whether the improvement project would be feasible and its impact measurable, 105 and presenting evidence-based changes. 100 Physicians were singled out as the one group of clinicians that needed to lead 106 or be actively involved in changes, 86 especially when physician behaviors could create inefficiencies. 84 In one project, physicians were recruited as champions to help spread the word to other physicians about the critical role of patient safety and to make patient safety a key aspect of all leadership and medical management meetings and strategies. 85

Team leaders and the composition of the team were also important. Team leaders who emphasized offline efforts to build and improve relationships were found to be necessary for team success. 83 , 93 These teams needed a dedicated team leader who would have a significant amount of time to put into the project. 84 While the leader was not identified in the majority of reports reviewed for this paper, the team in one project was co-chaired by a physician and an administrator. 83 Not only did the type and ability of team leaders affect outcomes, but the visibility of the initiative throughout the organization also depended upon having visible champions. 100 Multidisciplinary teams needed to understand the numerous steps involved in quality improvement and the many opportunities for error, which enabled teams to prioritize the critical items to improve within a complex process and took some of the subjectivity out of the analysis. The multidisciplinary structure of teams allowed members to identify each step from their own professional practice perspective, anticipate and overcome potential barriers, generate diverse ideas, and engage in good discussion and deliberation, which together ultimately promoted team building. 100 , 107 In two of the studies, FMEA/HFMEA was found to minimize group biases by benefiting from the diversity within the multidisciplinary composition of the team and enabling the team to focus on a structured outline of the goals that needed to be accomplished. 107 , 108

Teams needed to be prepared and enabled to meet the demands of the quality initiatives with ongoing education, weekly debriefings, review of problems solved and principles applied, 84 and ongoing monitoring and feedback opportunities. 92 , 95 Education and training of staff 80 , 95 , 101 , 104 and leadership 80 about the current problem, quality improvement tools, the planned change in practice intervention, and updates as the project progressed were key strategies. 92 Training was an ongoing process 91 that needed to focus on skill deficits 82 and needed to be revised as lessons were learned and data was analyzed during the implementation of the project. 109 The assumption could not be made that senior staff or leadership would not need training. 105 Furthermore, if the team had no experience with the quality tools or successfully creating change, an additional resource could have been a consultant or someone to facilitate the advanced knowledge involved in quality improvement techniques. 106 Another consideration was using a model that intervened at the hospital-community interface, coupled with an education program. 97

The influence of teamwork processes enabled those within the team to improve relationships across departments. 89 Particular attention needed to be given to effective team building, 110 actively following the impact of using the rapid-cycle (PDSA) model, meeting frequently, and monitoring progress using outcome data analysis at least on a monthly basis. 86 Effective teamwork and communication, information transfer, coordination among multiple hospital departments and caregivers, and changes to hospital organization culture were considered essential elements of team effectiveness. 86 Yet team effectiveness was dampened when members had difficulty fully engaging in teamwork because of competing workloads (e.g., working double shifts). 97 Better understanding of each other's roles was an important project outcome and provided a basis for continuing the development of other practices to improve outcomes. 97 The work of teams was motivated through continual sharing of progress and success and celebration of achievements. 87

Teamwork can have many advantages, but only a few were discussed in the reports reviewed. Teams were seen as being able to increase the scope of knowledge, improve communication across disciplines, and facilitate learning about the problem. 111 Teams were also found to be proactive, 91 integrating tools that improve both the technical processes and organizational relationships, 83 and to work together to understand the current situation, define the problem, pathways, tasks, and connections, as well as to develop a multidisciplinary action plan. 59 But teamwork was not necessarily an easy process. Group work was seen as difficult for some and time consuming, 111 and problems arose when everyone wanted their way, 97 which delayed convergence toward a consensus on actions. Team members needed to learn how to work with a group and deal with group dynamics, confronting peers, conflict resolution, and addressing behaviors that are detrimental. 111

What Was Learned From Evaluating the Impact of Change Interventions?

As suggested by Berwick, 112 the leaders of the quality improvement initiatives in this review found that successful initiatives needed to simplify; 96 , 104 standardize; 104 stratify to determine effects; improve auditory communication patterns; support communication against the authority gradient; 96 use defaults properly; automate cautiously; 96 use affordance and natural mapping (e.g., design processes and equipment so that the easiest thing to do is the right thing to do); respect limits of vigilance and attention; 96 and encourage reporting of near hits, errors, and hazardous conditions. 96 Through the revision and standardization of policies and procedures, many of these initiatives were able to effectively realize the benefit of making the new process easier than the old and decrease the effect of human error associated with limited vigilance and attention. 78 , 80–82 , 90–92 , 94 , 96 , 102 , 103 , 113 , 114

Simplification and standardization were found to be effective as a forcing function by decreasing reliance on individualized decisionmaking. Several initiatives standardized medication ordering and administration protocols, 78 , 87 , 101 , 103 , 106–108 , 109 , 114–116 realizing improvements in patient outcomes, nurse efficiency, and effectiveness. 103 , 106 , 108 , 109 , 114–116 One initiative used a standardized form for blood product ordering. 94 Four initiatives improved pain assessment and management by using standardized metrics and assessment tools. 80 , 93 , 100 , 117 In all of these initiatives, simplification and standardization were effective strategies.

Related to simplification and standardization is the potential benefit of using information technology to implement checks, defaults, and automation to improve quality and reduce errors, in large part by embedding forcing functions to remove the possibility of errors. 96 , 106 The effects of human error could be mitigated by using necessary redundancy, such as double-checking for certain types of errors; this was seen as engaging the knowledge and abilities of two skilled practitioners 61 , 101 and was used successfully to reduce errors associated with dosing. 78 Information technology was successfully used to (1) decrease the opportunity for human error through automation; 61 (2) standardize medication concentrations 78 and dosing using computer-enabled calculations, 115 , 116 standardized protocols, 101 and order clarity; 116 (3) assist caregivers in providing quality care using alerts and reminders; (4) improve medication safety (e.g., implementing bar coding and computerized provider order entry); and (5) track performance through database integration and indicator monitoring. Often workflow and procedures needed to be revised to keep pace with technology. 78 Using technology implied that organizations were committed to investing in technology to enable improvement, 85 but for two initiatives, the lack of adequate resources for data collection hampered analysis and evaluation of the initiative. 93 , 97

Data and information were needed to understand the root causes of errors and near errors, 99 to understand the magnitude of adverse events, 106 to track and monitor performance, 84 , 118 and to assess the impact of the initiatives. 61 Reporting of near misses, errors, and hazardous conditions needed to be encouraged, 96 in part because error reporting is generally low, is associated with organizational culture, 106 and can be biased, which will taint results. 102 Organizations that do not prioritize reporting or strongly emphasize a culture of safety may tend not to report errors that harm patients or near misses (see Chapter 35, "Evidence Reporting and Disclosure"). Using and analyzing data was viewed as critical, yet some team members and staff may have benefited from education on how to effectively analyze and display findings. 106 Giving staff feedback through a transparent process 39 of reporting findings 82 was viewed as a useful trigger that brought patient safety to the forefront of the hospital. 107 It follows that not having data, whether because it was not reported or not collected, made statistical analysis of the initiative's impact 115 or assessment of its cost-benefit ratio 108 impossible. As such, multi-organizational collaborations should have a common database. 98

The meaning of data can be better understood by using measures and benchmarks. Repeated measurements were found to be useful for monitoring progress, 118 but only when there was a clear metric for measuring the degree of success. 83 Measures could also be used as a strategy to involve more clinicians and deepen their interest, especially among physicians. Using objective, broader, and better measures was viewed as important for marking progress, and provided a basis for "a call to action" and celebration. 106 When measures of care processes were used, it was asserted that there was a need to demonstrate the relationship between specific changes to care processes and outcomes. 61
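One common way to read repeated measurements against a baseline is a run chart. The sketch below applies one widely used run-chart rule of thumb: six or more consecutive points on the same side of the baseline median suggest a non-random shift. The function name and the monthly rates are hypothetical and purely illustrative, not drawn from the initiatives reviewed.

```python
from statistics import median

def shift_signal(points, run_length=6):
    """Flag a shift on a run chart: run_length or more consecutive points
    on the same side of the baseline median. Points exactly on the median
    are skipped and do not break the run (a common convention)."""
    base = median(points)
    run, side = 0, 0
    for p in points:
        if p == base:
            continue
        s = 1 if p > base else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

# Hypothetical monthly infection rates per 1,000 device-days
rates = [4.1, 3.8, 4.3, 4.0, 3.9, 4.2, 2.9, 2.7, 3.0, 2.8, 2.6, 2.9]
print(shift_signal(rates))  # True: six consecutive points sit on one side of the median
```

Thresholds for run-chart rules vary slightly by source, so the `run_length` parameter is left adjustable.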

When multiple measures were used, along with better documentation of care, it was easier to assess the impact of the initiative on patient outcomes. 93 Investigators from one initiative put forth the notion that hospital administrators should encourage more evaluations of initiatives and that the evaluations should focus on comprehensive models that assess patient outcomes, patient satisfaction, and cost effectiveness. 114 The assessment of outcomes can be enhanced by setting realistic goals, not unrealistic goals such as 100 percent change, 119 and by comparing organizational results to recognized State, regional, and national benchmarks. 61 , 88

The cost of the initiative was viewed as an important factor in the potential for improvement, even when the adverse effects of current processes were considered as necessitating rapid change. 106 Because of this, it is important to implement changes that are readily feasible 106 and can be implemented with minimal disruption of practice activities. 99 It is also important to consider the potential for replicating the initiative in other units or at other sites. 99 One strategy to improve the chances of replication is to standardize processes, which will most likely incur some cost. 106 In some respects, the faster small problems were resolved, the faster improvements could be replicated throughout the entire system. 84 , 106 Recommendations that did not incur costs or had low costs and could be demonstrated to be effective were implemented expeditiously. 93 , 107 A couple of investigators stated that their interventions decreased costs and patients' length of stay, 103 but did not present any data to verify those statements. It was also purported that the costs associated with change would be recouped either in return on investment or in reduced patient risk (and thus reduced liability costs). 61

Ensuring that those implementing the initiative receive education is critical. There were several examples of this. Two initiatives that targeted pain management found that educating staff on pain management guidelines and protocols for improving chronic pain assessment and management improved staff understanding, assessment and documentation, patient and family satisfaction, and pain management. 80 , 93 Another initiative educated all staff nurses on intravenous (IV) site care and assessment, as well as assessment of central lines, and realized improved patient satisfaction and reduced complications and costs. 109

Despite the benefits afforded by the initiatives, many challenges were identified in implementing them:

  • Lack of time and resources made it difficult to implement the initiative well. 82
  • Some physicians would not accept the new protocol and thwarted implementation until they had confidence in the tool. 103
  • Clear expectations were lacking. 86
  • Hospital leadership was not adequately engaged. 86
  • There was insufficient emphasis on importance and use of measures. 86
  • The number and type of collaborative staffing was insufficient. 86
  • The time required for nurses and other staff to implement the changes was underestimated. 120
  • The extent to which differences in patient severity accounted for results could not be evaluated because severity of illness was not measured. 89
  • Improvements associated with each individual PDSA cycle could not be evaluated. 89
  • The full impact on the costs of care, including fixed costs for overhead, could not be evaluated. 89
  • The influence of factors such as fatigue, distraction, and time pressures was not considered. 82
  • The Hawthorne effect, rather than the initiative itself, may have accounted for the improvements. 118
  • Many factors were interrelated and correlated. 96
  • There was a lack of generalizability because of small sample size. 93 , 119
  • Addressing some of the problems created others (e.g., implementing computerized physician order entry (CPOE)). 110
  • Targets set (e.g., 100 percent of admissions) may have been too ambitious, making the demanded service improvements difficult to achieve. 119

Despite the aforementioned challenges, many investigators found that it was important to persevere and stay focused because introducing new processes can be difficult, 84 , 100 but the reward of quality improvement is worth the effort. 84 Implementing quality improvement initiatives was considered time consuming, tedious, and difficult for people who are very action oriented; it required an extensive investment of resources (i.e., time, money, and energy); 94 and it involved trial and error to improve the process. 91 Given these and other challenges, it was also important to celebrate the victories. 84

Consideration was also given to the desired objective of sustaining the changes after the implementation phase of the initiative ended. 105 Investigators asserted that improving quality through initiatives needed to be considered integral to the larger, organizationwide, ongoing process of improvement. Influential factors attributed to the success of the initiatives were effecting practice changes that could be easily used at the bedside; 82 using simple communication strategies; 88 maximizing project visibility, which could sustain the momentum for change; 100 establishing a culture of safety; and strengthening the organizational and technological infrastructure. 121 However, there were opposing viewpoints about the importance of spreading the steps involved in creating specific changes (possibly by forcing changes into the redesign of processes), rather than relying only on adapting best practices. 106 , 121 Another factor was the importance of generating enthusiasm about embracing change through a combination of collaboration (both internally and externally) 103 and healthy competition. Collaboratives could also be a vehicle for encouraging the use of and learning from evidence-based practice and rapid-cycle improvement, as well as for identifying and gaining consensus on potentially better practices. 86 , 98

What Is Known About Using Quality Improvement Tools in Health Care?

Quality tools used to define and assess problems with health care were seen as being helpful in prioritizing quality and safety problems 99 and focusing on systems, 98 not individuals. The various tools were used to address errors and growing costs 88 and to change provider practices. 117 Several of the initiatives used more than one of the quality improvement tools, such as beginning with root-cause analysis then using either Six Sigma, Toyota Production System/Lean, or Plan-Do-Study-Act to implement change in processes. Almost every initiative included in this analysis performed some type of pretesting/pilot testing. 92 , 99 Investigators and leaders of several initiatives reported advantages of using specific types of quality tools. These are discussed as follows:

Root-cause analysis was reported to be useful to assess reported errors/incidents and differentiate between active and latent errors, to identify need for changes to policies and procedures, and to serve as a basis to suggest system changes, including improving communication of risk. 82 , 96 , 102 , 105

Six Sigma/Toyota Production System was reported to have been successfully used to decrease defects/variations 59 , 61 , 81 and operating costs 81 and to improve outcomes in a variety of health care settings and for a variety of processes. 61 , 88 Six Sigma was found to be a detailed process that clearly differentiated between the causes of variation and outcome measures of process. 61 One of the advantages of using Six Sigma was that it made work-arounds and rework difficult, because the root causes of the preimplementation processes were targeted. 59 , 88 Additionally, investigators reported that the more teams worked with this strategy, the better they became at implementing it and the more effective the results. 84 Yet it was noted that using this strategy effectively required a substantial commitment of leadership time and resources, which was associated with improved patient safety, lowered costs, and increased job satisfaction. 84 Six Sigma was also an important strategy for problem-solving and continuous improvement; communicating clearly about the problem; guiding the implementation process; and producing results in a clear, concise, and objective way. 59
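The defect arithmetic behind Six Sigma can be illustrated with a short sketch. The example below computes defects per million opportunities (DPMO) and the corresponding short-term sigma level using the conventional 1.5-sigma shift; the medication-error counts are hypothetical, not taken from the initiatives reviewed.

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Short-term process sigma, using the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

# Hypothetical example: 12 medication errors observed in 4,000 doses,
# with 5 opportunities for error per dose
d = dpmo(defects=12, units=4000, opportunities_per_unit=5)
print(round(d))                  # 600 DPMO
print(round(sigma_level(d), 2))  # 4.74
```

The 1.5-sigma shift is a Six Sigma convention rather than a statistical necessity, so it is exposed as a parameter here.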

Plan-Do-Study-Act (PDSA) was used by the majority of initiatives included in this analysis to implement initiatives gradually, while improving them as needed. The rapid-cycle aspect of PDSA began with piloting a single new process, followed by examining results and responding to what was learned by problem-solving and making adjustments, after which the next PDSA cycle would be initiated. The majority of quality improvement efforts using PDSA found greater success using a series of small and rapid cycles to achieve the goals for the intervention, because implementing the initiative gradually allowed the team to make changes early in the process 80 and not get distracted or sidetracked by every detail and too many unknowns. 87 , 119 , 122 The ability of the team to successfully use the PDSA process was improved by providing instruction and training on the use of PDSA cycles, using feedback on the results of the baseline measurements, 118 meeting regularly, 120 and increasing the team’s effectiveness by collaborating with others, including patients and families, 80 to achieve a common goal. 87 Conversely, some teams experienced difficulty in using rapid-cycle change, collecting data, and constructing run charts, 86 and one team reported that applying simple rules in PDSA cycles may have been more successful in a complex system. 93
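The rapid-cycle logic described above — pilot a small change, study the result, adopt or abandon it, then begin the next cycle — can be sketched as a loop. Everything in this example (the `run_pdsa` helper, the change names, and their effect sizes) is hypothetical and purely illustrative.

```python
def run_pdsa(baseline, goal, changes, try_change):
    """Run rapid PDSA cycles: pilot each small change (Do), measure the
    result (Study), and keep the change only if it improves on the
    current rate (Act). Stop once the goal is reached; lower is better."""
    current, adopted = baseline, []
    for change in changes:            # Plan: one small change per cycle
        piloted = try_change(current, change)
        if piloted < current:         # Act: adopt what works,
            current = piloted         # abandon or adapt what doesn't
            adopted.append(change)
        if current <= goal:
            break
    return current, adopted

# Hypothetical changes, each cutting an error rate by a fixed fraction
effects = {"standardized order form": 0.6, "dose double-check": 0.8, "bar coding": 0.5}
rate, adopted = run_pdsa(
    baseline=10.0, goal=3.0,
    changes=effects,
    try_change=lambda cur, ch: cur * effects[ch],
)
print(rate, adopted)
```

The loop mirrors the review's observation that small, rapid cycles let a team adjust early instead of committing to one large change.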

Failure modes and effects analysis (FMEA) was used to avoid events and improve or maintain the quality of care. 123 FMEA was used prospectively to identify potential areas of failure 94 where experimental characterization of the process at the desired speed of change could be assessed, 115 and retrospectively to characterize the safety of a process by identifying potential areas of failure, learning about the process from the staff’s point of view. 94 Using a flow chart of the process before beginning the analysis got the team to focus and work from the same document. 94 Information learned from FMEA was used to provide data for prioritizing improvement strategies, serve as a benchmark for improvement efforts, 116 educate and provide a rationale for diffusion of these practice changes to other settings, 115 and increase the ability of the team to facilitate change across all services and departments within the hospital. 124 Using FMEA facilitated systematic error management, which was important to good clinical care in complex processes and complex settings, and was dependent upon a multidisciplinary approach, integrated incident and error reporting, decision support, standardization of terminology, and education of caregivers. 116
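The prioritization step of a conventional FMEA is often expressed as a risk priority number (RPN): each failure mode is rated 1-10 for severity, occurrence, and detectability, and the product ranks which modes to address first. The failure modes and scores below are hypothetical examples, not data from the studies cited.

```python
# Each hypothetical failure mode: (description, severity, occurrence,
# detectability), all rated on a 1-10 scale
failure_modes = [
    ("illegible handwritten order", 7, 6, 4),
    ("wrong-concentration stock vial", 9, 3, 5),
    ("missed allergy check", 10, 2, 6),
]

def rpn(severity, occurrence, detectability):
    """Risk priority number: the product used to rank failure modes."""
    return severity * occurrence * detectability

# Highest RPN first: these are the modes to address before the others
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{rpn(s, o, d):4d}  {name}")
```

HFMEA, discussed next, replaces the three-factor RPN with a two-factor hazard score, but the ranking idea is the same.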

Health failure modes and effects analysis (HFMEA) was used to provide a more detailed analysis of smaller processes, resulting in more specific recommendations, as well as of larger processes. HFMEA was viewed as a valid tool for proactive analysis in hospitals, facilitating a very thorough analysis of vulnerabilities (i.e., failure modes) before adverse events occurred. 108 This tool was considered valuable in identifying the multifactorial nature of most errors 108 and the potential risk for errors, 111 but was seen as being time consuming. 107 Initiatives that used HFMEA could minimize group biases through the multidisciplinary composition of the team 78 , 108 , 115 and facilitate teamwork by providing a step-by-step process, 107 but these initiatives required a paradigm shift for many. 111

Evidence-Based Practice Implications

From the improvement strategies and projects assessed in this review, several themes emerged from successful initiatives that nurses can use to guide quality improvement efforts. The strength of the following practice implications is associated with the methodological rigor and generalizability of these strategies and projects:

  • The importance of having strong leadership commitment and support cannot be overstated. Leadership needs to empower staff, be actively involved, and continuously drive quality improvement. Without the commitment and support of senior-level leadership, even the best intended projects are at great risk of not being successful. Champions of the quality initiative and quality improvement need to be throughout the organization, but especially in leadership positions and on the team.
  • A culture of safety and improvement that rewards improvement and is driven to improve quality is important. The culture is needed to support a quality infrastructure that has the resources and human capital required for successfully improving quality.
  • Quality improvement teams need to have the right stakeholders involved.
  • Due to the complexity of health care, multidisciplinary teams and strategies are essential. Multidisciplinary teams from participating centers/units need to work closely together, taking advantage of communication strategies such as face-to-face meetings, conference calls, and dedicated e-mail listservs, and utilize the guidance of trained facilitators and expert faculty throughout the process of implementing change initiatives when possible.
  • Quality improvement teams and stakeholders need to understand the problem and root causes . There must be a consensus on the definition of the problem. To this end, a clearly defined and universally agreed upon metric is essential. This agreement is as crucial to the success of any improvement effort as the validity of the data itself.
  • Use a proven, methodologically sound approach without being distracted by the jargon used in quality improvement. Using clear models, terms, and processes is critical, especially because many of the quality tools are interrelated; using only one tool will not produce successful results.
  • Standardizing care processes and ensuring that everyone uses those standards should improve processes by making them more efficient and effective—and improve organizational and patient outcomes.
  • Evidence-based practice can facilitate ongoing quality improvement efforts.
  • Implementation plans need to be flexible to adapt to needed changes as they arise.
  • Efforts to change practice and improve the quality of care can have multiple purposes , including redesigning care processes to maximize efficiency and effectiveness, improving customer satisfaction, improving patient outcomes, and improving organizational climate.
  • Appropriate use of technology can improve team functioning, foster collaboration, reduce human error, and improve patient safety.
  • Efforts need to have sufficient resources , including protected staff time.
  • Continually collect and analyze data and communicate results on critical indicators across the organization. The ultimate goal of assessing and monitoring quality is to use findings to assess performance and define other areas needing improvement.
  • Change takes time , so it is important to stay focused and persevere.
Research Implications

Given the complexity of health care, assessing quality improvement is a dynamic and challenging area. The body of knowledge is slowly growing in this area, which could be due to the continued dilemma as to whether a quality improvement initiative is just that or whether it meets the definition of research and employs methodological rigor—even if it meets the requirements for publication. Various quality improvement methods have been used since Donabedian’s seminal publication in 1966, 27 but only recently has health care quality improvement used the Six Sigma methodology and published findings; when it has, it has been used only on a single, somewhat isolated component of a larger system, making organizational learning and generalizability difficult. Because of the long standing importance of quality improvement, particularly driven by external sources (e.g., CMS and the Joint Commission) in the past few years, many quality improvement efforts within organizations have taken place and are currently in process, but may not have been published and therefore not captured in this review, and may not have necessarily warranted publication in the peer-reviewed literature. With this in mind, researchers, leaders and clinicians will need to define what should be considered generalizable and publishable in the peer-reviewed literature to move the knowledge of quality improvement methods and interventions forward.

While the impact of many of the quality improvement projects included in this analysis was described in terms of clinical outcomes, functional outcomes, patient satisfaction, staff satisfaction, and readiness to change, cost and utilization outcomes and their measurement are also important in quality improvement efforts, especially when variation occurs. There are many unanswered questions. Some key areas are offered for consideration:

  • How can quality improvement efforts recognize the needs of patients, insurers, regulators, and staff and still be successful?
  • What is the best method to identify priorities for improvement and meet the competing needs of stakeholders?
  • What is the threshold of variation that needs to be attained to produce regular desired results?
  • How can a bottom-up approach to changing clinical practice be successful if senior leadership is not supportive or the organizational culture does not support change?

In planning quality improvement initiatives or research, researchers should use a conceptual model to guide their work, which the aforementioned quality tools can facilitate. To generalize empirical findings from quality improvement initiatives, more consideration should be given to increasing sample size by collaborating with other organizations and providers. We need to have a better understanding of what tools work the best, either alone or in conjunction with other tools. It is likely that mixed methods, including nonresearch methods, will offer a better understanding of the complexity of quality improvement science. We also know very little about how tailoring implementation interventions contributes to process and patient outcomes, or what the most effective steps are that cross intervention strategies. Lastly, we do not know what strategies or combination of strategies work for whom and in what context, why they work in some settings or cases and not others, and what the mechanism is by which these strategies or combination of strategies work.

Conclusions

Whatever the acronym of the method (e.g., TQM, CQI) or tool used (e.g., FMEA or Six Sigma), the important component of quality improvement is a dynamic process that often employs more than one quality improvement tool. Quality improvement requires five essential elements for success: fostering and sustaining a culture of change and safety, developing and clarifying an understanding of the problem, involving key stakeholders, testing change strategies, and continuous monitoring of performance and reporting of findings to sustain the change.

Search Strategy

To identify quality improvement efforts for potential inclusion in this systematic review, PubMed and CINAHL were searched from 1997 to the present. The following key words and terms were used: "Failure Modes and Effects Analysis/FMEA," "Root Cause Analysis/RCA," "Six Sigma," "Toyota Production System/Lean," and "Plan Do Study Act/PDSA." Using these key words, 438 articles were retrieved. Inclusion criteria included reported processes involving nursing; projects/research involving methods such as FMEA, RCA, Six Sigma, Lean, or PDSA; qualitative and quantitative analyses; and reporting of patient outcomes. Projects and research were excluded if they did not involve nursing on the improvement team, did not provide sufficient information to describe the process used and the outcomes realized, did not directly involve nursing in the patient/study outcomes, or were set in a developing country. Findings from the projects and research included in the final analysis were grouped into common themes related to applied quality improvement.

Evidence Table

Cite this page: Hughes RG. Tools and Strategies for Quality Improvement and Patient Safety. In: Hughes RG, editor. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville (MD): Agency for Healthcare Research and Quality (US); 2008 Apr. Chapter 44.
