
Open Access

Peer-reviewed

Research Article

Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews

Dylan Kneale, James Thomas, Katherine Harris

* E-mail: [email protected]

Affiliation: Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre), UCL Institute of Education, University College London, London, United Kingdom

Affiliation: Centre for Paediatrics, Blizard Institute, Queen Mary University of London, London, United Kingdom

  • Published: November 17, 2015
  • https://doi.org/10.1371/journal.pone.0142187

Background

Logic models are becoming an increasingly common feature of systematic reviews, as is the use of programme theory more generally in systematic reviewing. Logic models offer a framework to help reviewers to ‘think’ conceptually at various points during the review, and can be a useful tool in defining study inclusion and exclusion criteria, guiding the search strategy, identifying relevant outcomes, identifying mediating and moderating factors, and communicating review findings.

Methods and Findings

In this paper we critique the use of logic models in systematic reviews and protocols drawn from two databases, one representing reviews of health interventions and the other reviews of international development interventions. Programme theory featured in only a minority of the reviews and protocols included. Despite drawing from different disciplinary traditions, reviews and protocols from both sources shared several limitations in their use of logic models and theories of change, and in almost all cases these were used solely to depict pictorially the way in which the intervention was expected to work. Logic models and theories of change were consequently rarely used to communicate the findings of the review.

Conclusions

Logic models have the potential to be an integral aid throughout the systematic review process. The absence of good practice around their use and development may be one reason for the apparent limited utility of logic models in many existing systematic reviews. These concerns are addressed in the second half of this paper, where we offer a set of principles for the use of logic models and an example of how we constructed a logic model for a review of school-based asthma interventions.

Citation: Kneale D, Thomas J, Harris K (2015) Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews. PLoS ONE 10(11): e0142187. https://doi.org/10.1371/journal.pone.0142187

Editor: Paula Braitstein, University of Toronto Dalla Lana School of Public Health, CANADA

Received: February 1, 2015; Accepted: October 19, 2015; Published: November 17, 2015

Copyright: © 2015 Kneale et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Data can be found on the Cochrane database of systematic reviews ( http://onlinelibrary.wiley.com/cochranelibrary/search/ ) and the 3ie database of systematic reviews ( http://www.3ieimpact.org/evidence/systematic-reviews/ ).

Funding: This work was supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care North Thames at Barts Health NHS Trust. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health. The funders had no direct role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Researchers in academic institutions have historically measured their success by the impact that their research has within their own research communities, and have paid less attention to measuring its broader social impact. This presents a contradiction between the metrics of success of research and its ultimate extrinsic value [ 1 ], serving to expose a gulf between ‘strictly objective’ and ‘citizen’ scientists and social scientists [ 2 ]: the former believe that research should be objective and independent of external societal influences, while the latter's starting point is that science should benefit society. In recent years the need to link research with broader knowledge utilisation processes has been recognised, or at least accepted, by research councils and increasing numbers of researchers. While some forms of academic enquiry that push disciplinary boundaries or represent ‘blue skies’ thinking remain important, despite being only distally linked to knowledge utilisation, there is little doubt as to the capacity of many other forms of ‘research’ to influence and transform policy and practice (see [ 3 , 4 ]).

In many ways, systematic reviews and logic models are both borne of such a need for greater knowledge transference and influence. Policy and practice relevance is integral to most systematic reviews, with the systematic and transparent synthesis of evidence serving to enhance the accessibility of research findings to other researchers and wider audiences [ 5 , 6 ]. Through an explicit, rigorous and accountable process of discovery, description, quality assessment, and synthesis of the literature according to defined criteria, systematic reviews can help to make research accessible to policy-makers and other stakeholders who may not otherwise engage with voluminous tomes of evidence. Similarly, one of the motivations in evaluation research and programme management for setting out programme theory through a logic model or theory of change was to develop a shared understanding of the processes and underlying mechanisms by which interventions were likely to ‘work’. In the case of logic models this is undertaken by pictorially depicting the chain of components representing processes and conditions between the initial inputs of an intervention and its outcomes; a similar approach also underlies theories of change, albeit with a greater emphasis on articulating the specific hypotheses of how different parts of the chain result in progression to the next stage. This shared understanding was intended to develop across practitioners and programme implementers, who may otherwise have very different roles in an intervention, as well as among a broader set of stakeholders, including funders and policy-makers.

As others before us have speculated, there is room for the tools of programme theory and the approach of systematic reviewing to converge, or more precisely, for logic models to become a tool to be employed as part of undertaking a systematic review [ 7 – 9 ]. This is not in dispute in this paper. However, even among audiences engaged in systematic research methods, we remain far from a shared understanding about the purpose and potential uses of a logic model, and even its definition. This has also left us without any protocol around how a logic model should be constructed to enhance a systematic review. In this paper we offer:

  • an account of the way in which logic models are used in the systematic review literature
  • an example of a logic model we have constructed to guide our own work and the documented steps taken to construct this
  • a set of principles for good practice in preparing a logic model

Here, we begin with an outline of the introduction of logic models into systematic reviews and their utility as part of the process.

The Use of Programme Theory in Review Literature

As understood in the program evaluation literature, logic models are one way of representing the underlying processes by which an intervention effects change in individuals, communities or organisations. Logic models have been an established part of evaluation methodology since the late 1960s [ 10 ], although documentation outlining the underlying assumptions that address the ‘why’ and ‘for whom’ questions defining interventions is found in literature that dates back further, to the late 1950s [ 11 ].

Despite being established in evaluation research, programme theory and the use of logic models remain underutilised by many practitioners who design, run, and evaluate interventions in the UK [ 12 , 13 ]. Furthermore, there is a substantial degree of fragmentation regarding the programme theory approach used. Numerous overlapping approaches have been developed within the evaluation literature, including the ‘logic model’, ‘theory of change’, ‘theory of action’, ‘outcomes chain’, ‘programme theory’ and ‘program logic’ [ 11 , 13 ]. This lack of consistency and agreement as to the appropriate tools for conceptualising programme theory has been identified as one reason why, in a survey of 1,000 UK charities running interventions, four-fifths did not report using any formal programme theory tool to understand the way in which their interventions effected change for their beneficiaries [ 12 ].

Conversely, within systematic reviewing so far, there has been some degree of consensus on the terminology used to represent the processes that link interventions and their outcomes (for example [ 7 , 8 , 9 ]). Many systematic reviews of health interventions tend to settle on a logic model as the instrument of choice for guiding the review. Alternatively, reviews of international development interventions often include a theory of change, perhaps reflecting the added complexity of such interventions, which often operate at a community, policy or systems level. Logic models and theories of change sit on the same continuum, although a somewhat ‘fuzzy’ but important distinction exists between them. While a logic model may set out the chain of activities that are needed, or expected, to lead to a chain of outcomes, a theory of change will provide a fuller account of the causal processes, indicators, and hypothesised mechanisms linking activities to outcomes. However, how reviews utilise programme theory remains relatively unexplored.

Methods and criteria

To examine the use of logic models and theories of change in the systematic review literature, we examined indicative evidence from two sources. The first of these sources, the Cochrane database, publishes reviews in a largely standardised format that follows guidelines set out in the Cochrane Handbook for Systematic Reviews of Interventions [ 14 ]. Currently, the handbook itself does not include a section on the use of programme theory in reviews. Other guidance set out by individual Cochrane review groups (of which there are 53, each focussed on a specific health condition or area) may highlight the utility of using programme theory in the review process. For example the Public Health Review Group, in their guidance on preparing a review protocol, describe the way in which logic models can be used to describe how interventions work and to justify the focus of a review on a particular part of the intervention or outcome [ 15 ]. Meanwhile, in the 2012 annual Cochrane methods-focussed report, the use of logic models was viewed as holding potential to ‘confer benefits for review authors and their readers’ [ 16 ], and logic models have also been included in the Cochrane Colloquium programme [ 17 ]. However, a definitive recommendation for their use is not found, at the time of writing, in the standard guidance provided to review authors. The second source, the 3ie database, includes reviews with a focus on the effectiveness of social and economic interventions in low- and middle-income countries. The database includes reviews that have been funded by 3ie as well as those that are not directly funded but that nevertheless fall within its scope and are deemed to be of sufficient rigour. While the use of programme theory does not form part of the inclusion criteria, its use is encouraged in good practice set out by 3ie [ 18 ] and a high degree of importance is attributed to its use in 3ie’s criteria for awarding funding for reviews [ 19 ].

To obtain a sample of publications, we searched the Cochrane Library for systematic reviews and protocols published between September 2013 and September 2014 that included the phrase ‘logic model’ or ‘theory of change’ anywhere in the text (over this period a total of 1,473 documents were published in the Cochrane Library). We also searched the 3ie (International Initiative for Impact Evaluation) database of systematic reviews published in 2013, and manually searched these publications for the phrases ‘logic model’ or ‘theory of change’. Both searches were intended to provide a snapshot of review activity by capturing systematic review publications over the course of approximately a year. For the 3ie database it was not possible to search by month, so we searched for publications by calendar year; to ensure that we obtained a full sample for a year we selected 2013 as our focus. For the Cochrane database, in order to obtain a more recent snapshot of publications reflecting current trends, we opted to search for publications occurring over a year (13 months in this case). All reviews and protocols of reviews that fell within the search parameters were analysed.
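The screening criterion itself was simply the presence of either phrase anywhere in a publication. As a minimal illustrative sketch (not a reproduction of the authors' workflow, which combined database searches with manual searching), the Python fragment below shows how such a flagging step might be automated; the Record fields and example records are hypothetical.

```python
# Hypothetical sketch: flag publication records that mention either
# programme-theory phrase anywhere in their text.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    source: str      # e.g. "Cochrane" or "3ie"
    full_text: str   # title, abstract and body concatenated

PHRASES = ("logic model", "theory of change")

def mentions_programme_theory(record: Record) -> bool:
    """Return True if the record contains either target phrase (case-insensitive)."""
    text = record.full_text.lower()
    return any(phrase in text for phrase in PHRASES)

# Example usage with two made-up records.
records = [
    Record("Example protocol", "Cochrane", "... the logic model below guides the review ..."),
    Record("Example review", "3ie", "... no programme theory is reported ..."),
]
flagged = [r for r in records if mentions_programme_theory(r)]
print(f"{len(flagged)} of {len(records)} records mention a logic model or theory of change")
```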

In the Cochrane database, over this period, four reviews and ten protocols were published that included the phrase ‘logic model’, while two protocols were published that included the phrase ‘theory of change’. It should be noted, therefore, that neither tool has yet made a substantial impact within reviews of health topics that adhere to Cochrane standards. This is likely to reflect the mainly clinical nature of many Cochrane reviews: among the eight publications that were published through the Public Health Review Group (all of which were protocols), five included mention of programme theory. Within the 3ie database of international development interventions, 53 reviews and protocols were published in 2013 (correct as of December 2014), of which 24 included a mention of either a logic model or a theory of change.
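As a rough quantification of the contrast between the two sources, the proportions implied by these counts are calculated below (a back-of-the-envelope sketch that assumes no overlap between the Cochrane ‘logic model’ and ‘theory of change’ publications).

```python
# Proportions of publications mentioning programme theory, from the counts
# reported above (assuming no overlap between the two Cochrane phrase counts).
cochrane_total = 1473              # Cochrane publications, Sep 2013 - Sep 2014
cochrane_with_theory = 4 + 10 + 2  # reviews + protocols ('logic model') + protocols ('theory of change')

three_ie_total = 53                # 3ie reviews and protocols published in 2013
three_ie_with_theory = 24

print(f"Cochrane: {cochrane_with_theory / cochrane_total:.1%}")  # ~1.1%
print(f"3ie: {three_ie_with_theory / three_ie_total:.1%}")       # ~45.3%
```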

We developed a template for summarising the way in which logic models were used in the included protocols and systematic reviews, based on the different stages of undertaking systematic reviews [ 6 ] and the potential uses of logic models in systematic reviews identified by Anderson and colleagues and Waddington and colleagues [ 7 , 18 ]; the former describe logic models as tools that can help to (i) scope the review; (ii) define and conduct the review; and (iii) make the review relevant to policy and practice, including in communicating results. These template constructs also reflected the way in which logic model usage was described in the publications, which was primarily shaped by reporting conventions for protocols and reports published in Cochrane and 3ie (although the format for the latter source is less structured). Criteria around the constructs included in the template were defined before two reviewers (see S1 Table. Data Coding Template) independently assessed the use of logic models within the published reviews and protocols; the reviewers then met to discuss their findings. What this template approach cannot capture is the extent to which using a logic model shaped the conceptual thinking of the review teams, which, as discussed later in this paper, is one of the major contributions of using a logic model framework.
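To illustrate what such a coding template might look like in structured form, the sketch below represents its top-level constructs as a small data class; the field names follow the three stages above but are illustrative rather than a reproduction of S1 Table.

```python
# Illustrative sketch of a coding template record (not the authors' S1 Table):
# each included publication is coded against the three broad stages of use.
from dataclasses import dataclass

@dataclass
class LogicModelUsage:
    publication_id: str
    scoping_review: bool = False          # used to scope or refine the review question
    defining_conducting: bool = False     # inclusion criteria, search strategy, sub-group analyses
    communicating_findings: bool = False  # revisited or updated to report and interpret results
    notes: str = ""

# Two reviewers would each complete one record per publication and then
# meet to reconcile any disagreements.
example = LogicModelUsage(
    publication_id="example-protocol-01",
    scoping_review=True,
    notes="Logic model presented as a schematic only; no further use described",
)
```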

While the two databases cover different disciplines, allowing us to make comparisons between them, some may argue that, because both sources enforce rigid methodological guidelines in the production of reviews, we are unlikely to encounter innovative approaches to the use of programme theory in the reviews and protocols included here. This is a legitimate concern and a caveat of the results presented here, although even among these sources we observe considerable diversity in the use of programme theory, as we describe in our results.

Results: how are logic models and theories of change used in systematic reviews?

Looking first at publications from the Cochrane database, and the two studies that included some component examining ‘theories of change’, the first of these described ‘theory of change’ in the context of synthesising different underlying theoretical frameworks [ 20 ] while the second used ‘theory of change’ in the context of describing a particular modality of theory [ 21 ]. Meanwhile, logic models were incorporated in a variety of ways; most frequently, they were used as a shorthand way to describe how interventions are expected to work, outlining stages from intervention inputs through to expected outcomes at varying levels of detail ( Table 1 ).

Table 1.

https://doi.org/10.1371/journal.pone.0142187.t001

In around half of the reports and protocols, the authors described in some form how they planned to use, or did in fact use, their logic model in the review. In the remainder of publications (all of which were protocols), the logic model was presented as a schematic describing how the intervention might work and was not explicitly referred to further as a tool to guide the review process. Two Cochrane review protocols explicitly outlined the way in which the logic model would be used as a reference tool when considering the types of intervention that would be within the scope of the review [ 25 , 26 ]; this was also described in a full review [ 24 ]. We identified three publications where it was suggested that the logic model was developed through an iterative process in which consensus was developed across the review team [ 23 , 24 , 30 ].

Two Cochrane reviews described how the logic model was used to determine the subgroup analyses a priori [ 22 , 23 ], helping to avoid some of the statistical pitfalls of running post-hoc sub-group analyses [ 35 ]. For example, in their review of psychosocial smoking cessation interventions for pregnant women, Chamberlain and colleagues [ 22 ] developed their logic model from a synthesis of information collected before the review process began, with the express purpose of guiding analyses and stating sub-group analyses of interest. In Glenton and colleagues’ review, a revision of the originally specified logic model, based on the review findings, was also viewed as a useful tool to guide the sub-group analyses of future reviews. However, none of the protocols (as opposed to reviews) from the Cochrane database explicitly mentioned that the logic model had been used to consider which sub-group analyses should be undertaken. The review by Glenton et al. [ 23 ] and the protocol by Ramke et al. [ 33 ] provided the only examples where the logic model was to be revised iteratively during the course of the review based on review findings. Of the three Cochrane reviews included in Table 1 , Glenton and colleagues’ study [ 23 ] can be considered the one to have used a logic model most comprehensively, as a dynamic tool to be refined and used to actively guide the synthesis of results in the review. The authors describe a novel use of the logic model in their mixed methods review as a tool to describe mechanisms by which the intervention barriers and facilitators identified in the qualitative synthesis could impact on the outcomes assessed quantitatively in their review of programme effectiveness.

Among the studies extracted from the 3ie database, the terminology was weighted towards ‘theories of change’ as opposed to ‘logic models’ (as expected, based on the guidance provided). Of the 24 studies that were included ( Table 2 ), fourteen included a Logic Model and nine included a Theory of Change, while one report used both terms. Despite more studies including a mention or an actual depiction of a theory of change or logic model, this body of literature shared the same limitations around the extent to which programme theory was used as a tool integral to the review process. The majority of studies used a Theory of Change/Logic Model to describe their overall conceptual model or the way in which they considered the intervention or policy change under review would work, although this was reported at different stages of the review. Of the eleven protocols that were included, eight explicitly mentioned that they planned to return to their model at the end of the review, emphasising the role of programme theory in this field both in designing the review and in communicating the findings. For example, in Willey and colleagues’ review of approaches to strengthen health services in developing countries [ 59 ], the Logic Model was updated at the end of the review to reflect the strength of the evidence discovered for each of the hypothesised pathways. Seven of the twenty protocols and studies described how a theory of change/logic model would be used to guide the review in terms of the search strategy or, more generally, as a reference throughout the screening and other stages. Finally, two publications [ 48 , 52 ] described how they would use a theory of change as the basis for synthesising qualitative findings and two described how they would use a logic model/theory of change to structure sub-group meta-analyses in quantitative syntheses [ 48 , 58 ]; both of these latter protocols described how programme theory would be used at a number of key decision points in the review itself.

Table 2.

https://doi.org/10.1371/journal.pone.0142187.t002

Among the Cochrane and 3ie publications, few reviews or protocols described the logic model as being useful in review initiation, in describing study characteristics, or in assessing the quality and relevance of publications. Three Cochrane protocols and one Cochrane review described using existing logic models in full, or examining components of existing logic models or reviews to develop their own, while only one publication in our sample of international development systematic reviews did so. Most authors appear to develop their own logic models afresh, and largely in the absence of guidance on good practice in the use of logic models. As Glenton and colleagues describe, there is “no uniform template for developing logic models, although the most common approach involves identifying a logical flow that starts with specific planned inputs and activities and ends with specific outcomes or impacts, often with short-term or intermediate outcomes along the way” ([ 23 ]; p13).

Developing a Logic Model: A Worked Example from School Based Asthma Interventions

The second aim of this paper is to provide an example of the development of a logic model in practice. The logic model we describe was developed as part of a systematic review examining the impact of school-based interventions focussing on the self-management of asthma among children. This review is being carried out by a multidisciplinary team comprising members with experience of systematic reviewing as well as trialists with direct experience in the field of asthma and asthma management. Of particular interest in this review are the modifiable intervention factors that can be identified as being associated with improvements in asthma management and health outcomes. The evidence will be used directly in the design of an intervention that will be trialled among London school children. Our approach was to view the development of the logic model as an iterative process, and we present three different iterations (Figs 1 – 3 ) that we undertook to arrive at the model included in our review protocol [ 60 ]. Our first model was based on pathways identified by one reviewer through a summary examination of the literature and existing reviews. This was then challenged and refined through the input of a second reviewer and an information scientist, to form a second iteration of the model. Finally, a third iteration was constructed through the input of the wider review team, which included both methodological specialists and clinicians. These steps are summarised in Box 1 and are described in greater detail in the sections below. The example provided here best reflects a process-driven logic model, where the focus is on establishing the key stages of interest and using the identified processes to guide later stages of the review. An alternative approach to developing a logic model may be to focus more on the representation of systems and theory [ 61 ], although that approach may be better suited to reviews of highly complex interventions (such as many of the international development reviews described earlier) or reviews that are more methodological than empirical in nature.

Fig 1.

https://doi.org/10.1371/journal.pone.0142187.g001

Fig 2.

https://doi.org/10.1371/journal.pone.0142187.g002

Fig 3.

https://doi.org/10.1371/journal.pone.0142187.g003

Box 1. Summary of steps taken in developing the logic model for school based asthma interventions.

  • Synthesis of existing logic models in the field
  • Reviewer 1 identified distal outcomes
  • Working backwards, reviewer 1 then identified the necessary preconditions for reaching the distal outcomes; from the distal outcomes, intermediate and proximal level outcomes were then identified
  • Once outcomes had been identified, the outputs were defined (necessary pre-conditions, but not necessarily goals in themselves); at this point the change part of the model was complete in draft form
  • Modifiable processes were then specified; these were components that were not expected to be present in every intervention included in the review
  • Continuing to work backwards, intervention inputs (including core pedagogical inputs) were then specified. These were inputs that were expected to be present in each intervention included in the review, although their characteristics would differ between studies
  • In addition, external factors were identified as were potential moderators
  • Reviewers 1 and 2 then worked together to redevelop the model, paying particular attention to clarity, the conceptual soundness of groupings and the sequencing of aspects
  • The review team and external members were asked to comment on the second iteration, and later agreed a revised third version. This version would provide the structure for some aspects of the quantitative analyses and highlight where qualitative analyses were expected to illuminate areas of ambiguity.
  • The final version was included in the protocol with details on how it would be used in later stages of the review, including the way in which it would be transformed, based on the results uncovered, into a theory of change.
  • Consider undertaking additional/supplementary steps (see ‘Potential additional and supplementary steps that could be taken elsewhere’ below).

Step 1, examination and synthesis of existing logic models

The first step we took in developing our logic model was to familiarise ourselves with the existing literature on the way in which improved self-management of asthma leads to improved outcomes among children and the way in which school-based learning can help to foster these. Previous systematic reviews of such interventions did not include a logic model or develop a theory of change, but did help to identify some of the outcomes of self-management educational interventions. These included improved lung function, self-efficacy, absenteeism from school, days of restricted activity, and number of visits to an emergency department, among others (see [ 62 ]). A logic model framework helped to order these sequentially and separate process outputs from proximal, intermediate and distal outcomes. Other studies also pointed towards the school being a good site for teaching asthma self-management techniques to children for several reasons, including the familiar learning environment that it provides and the potential for identifying large numbers of children with asthma at a single location [ 63 – 65 ]. Some individual studies and government action plans also included logic models showing how increased education aimed at improving self-management skills was expected to lead to improvements in asthma outcomes (for example [ 66 , 67 , 68 ]). This evidence was synthesised and found to be particularly useful in helping to identify some of the intervention processes that could lead to better asthma outcomes, although these were of varying relevance to our specific focus on school-based asthma interventions, and were heavily shaped by local context. We adopted an aggregative approach to the synthesis of the evidence at this point, including all information that was transferable across contexts [ 69 ]. After examining the available literature, the first reviewer was able to proceed with constructing a first draft of the logic model.

Step 2, identification of distal outcomes

Reviewer 1 started by identifying the most distal outcomes that could change as a result of school-based interventions aimed at improving asthma self-management. From these outcomes the reviewer worked backwards and identified the necessary pre-conditions for achieving them, to develop a causal chain. Identifying this set of distal outcomes was analogous to asking why a potential funding, delivery or hosting organisation (such as a school or health authority) might want to fund such an intervention: the underlying goals of running the intervention. In this case, these outcomes could include potential improvements in population-level health, reductions in health budgets and/or potential increases in measures of school performance ( Fig 1 ). After identifying these macro-level outcomes, we identified the distal child-level outcomes, which were restricted to changes in children’s outcomes that would only be perceptible at long-term follow-up. These included changes in quality of life and academic achievement, which we identified as being modifiable only after sustained periods of behaviour change and a reduction in the physical symptoms of asthma.

Step 3, identification of intermediate and proximal outcomes

Next, reviewer 1 outlined some of the intermediate outcomes: those changes necessary to achieve the distal outcomes. Here our intermediate changes in health were based on observations of events, including emergency admissions and limitations in children’s activity over a period of time (which we left unspecified). The only intermediate educational outcome was school attendance, and we identified this as the only (or at least the main) pathway through which we might observe (distal) changes in academic achievement as a result of school-based asthma interventions. Working backwards, our proximal outcomes were defined as those pre-conditions necessary to achieve our intermediate outcomes; these revolved around health symptoms and behaviour around asthma and asthma management. We expect these to be observable shortly after the intervention ends (although they may also be measured at long-term follow-up). The intention is for the systematic review to be published as a Cochrane review, which requires the identification of two to three primary outcomes and approximately seven outcomes in total; this constraint helped to rationalise the number of outcomes we included, which, left unbounded, could have been far greater.

Step 4, identification of outputs

Finally, in the ‘change’ section of the logic model (see Fig 2 ), we specified the outputs of the intervention, which we define here as those aspects of behaviour or knowledge that are the direct focus for modification within the activities of the intervention, but that are unlikely to represent the original motivations underlying the intervention. Our outputs are those elements of the intervention where changes will be detectable during the course of the intervention itself. Here, increased knowledge of asthma may be a pre-condition for improved symptomatology and would have a direct focus within intervention activities (outputs), but increased knowledge in itself was not viewed as a likely underlying motivation for running the intervention. A different review question might prioritise improved knowledge of a health condition and view increased knowledge as an end-point in itself.

Step 5, specification of modifiable intervention processes

To aid later stages of the review we placed the modifiable design characteristics in sequence after the intervention inputs, as we view these as variants that can occur once the inputs of the intervention have been secured. Separating these from standard intervention inputs was a useful stage when it came to considering the types of process data we might extract and designing data extraction forms. The set of modifiable design characteristics of the intervention specified was expanded by examining some of the literature described earlier, as well as through discussions with the members of the review team most involved with designing the intervention that will take place after the review.

Step 6, specification of intervention inputs

Standard intervention inputs were specified, as were the ‘core elements of the intervention’. These core elements represent the pedagogical focus of the intervention and form some of the selection criteria for studies that will be included, although studies will differ in the number of core elements included as well as the way in which these are delivered. Studies that do not include any of these core elements were not considered to be interventions focussed on improving asthma self-management skills.

Step 7, specification of intervention moderators including setting and population group

Finally, child-level moderators (population characteristics) and the characteristics of the schools receiving the intervention (context/setting characteristics) were specified. Specifying these early in the logic model helped to identify the types of subgroup analyses we would conduct to investigate potential sources of heterogeneity.
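To make the sequencing of components described in steps 2–7 concrete, the sketch below represents a logic model as a simple, ordered data structure; the category names follow the steps above, but the specific entries are illustrative examples rather than a reproduction of Fig 3.

```python
# Illustrative sketch (not the authors' Fig 3): a logic model as an ordered set
# of component categories, from inputs through to distal outcomes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    inputs: List[str] = field(default_factory=list)                # core pedagogical elements, resources
    modifiable_processes: List[str] = field(default_factory=list)  # design variants, not present in every study
    outputs: List[str] = field(default_factory=list)               # direct targets of intervention activities
    proximal_outcomes: List[str] = field(default_factory=list)     # observable shortly after the intervention
    intermediate_outcomes: List[str] = field(default_factory=list) # events observed over a follow-up period
    distal_outcomes: List[str] = field(default_factory=list)       # long-term child- and macro-level change
    moderators: List[str] = field(default_factory=list)            # population and setting characteristics

asthma_model = LogicModel(
    inputs=["asthma self-management education delivered in school"],
    modifiable_processes=["delivery by peers vs. health professionals"],
    outputs=["increased knowledge of asthma"],
    proximal_outcomes=["improved self-management behaviour", "reduced symptoms"],
    intermediate_outcomes=["fewer emergency admissions", "improved school attendance"],
    distal_outcomes=["improved quality of life", "improved academic achievement"],
    moderators=["child age", "school intake and local neighbourhood"],
)
```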

Step 8, share initial logic model, review and redraft

Reviewer 1 shared the draft logic model with a second member of the team. Of particular concern in this step was establishing consensus around the clarity and conceptual soundness of the groupings, the sequencing of the change part of the model, and the balance between meeting the design needs of the intervention and the generalisability of the findings to other settings. With respect to the latter, the second reviewer commented that specifying reductions in health budgets reflected our own experience of the UK context and might not be appropriate for all healthcare contexts likely to be included in our review. Therefore, in our second iteration, Fig 2 only acknowledges that macro-level (beneficial) changes can follow from changes in the distal outcomes of children, without specifying what these might be. At this stage it was helpful to have the first reviewer working alongside a second member of the review team who had greater expertise and knowledge of the design and delivery of health-based interventions and who was working directly alongside schools in the preliminary stages of data collection around the intervention itself. Figs 1 – 3 show the development of the logic model across iterations. The second iteration drew a clearer distinction between the action and change aspects of the logic model, and refined the number of outcomes that would be explicitly outlined, which had implications for the search strategy. The action part of the model was also altered to better differentiate the parts of the model that represented implementation processes from those that represented implementation measures or metrics.

Step 9, share revised logic model with wider group, review and redraft

The draft logic model was shared with the wider review team and with an information scientist, and comments were sought, particularly around those aspects of step 8 that had been the source of further discussion. The review team were asked specifically to comment on the content of the different sections of the logic model, the sequencing of different parts, and the balance between meeting the design needs of the intervention and the generalisability of the findings to other settings. Input was sought from an information scientist external to the review to ensure that the model adequately communicated and captured the body of literature that the review team aimed to include, and to help make certain that the model was interpretable to those who were not directly part of the review team. For the third (and final) iteration, views were also sought on whether the main moderating factors across which the team might investigate sources of heterogeneity in meta-analyses were included, and whether those that would be identified through earlier qualitative analyses were adequately represented. Once these views were collated, the third iteration was produced and agreed. The third iteration better represents the uncertainty around processes that may be uncovered during the qualitative analyses and the way in which these will be used to investigate heterogeneity in subgroup analyses in the quantitative (meta-)analysis.

Step 10, present the final logic model in the protocol

The final version was included in the protocol with details on how it would be used in later stages of the review. At the end of the review, we intend to return to the logic model and represent those factors that are associated with successful interventions from the quantitative and qualitative analyses in a theory of change.

Potential additional and supplementary steps that could be taken elsewhere

Greater consultation or active collaboration with additional stakeholders on the logic model may be beneficial, particularly for complex reviews involving system-based interventions where different stakeholders will bring different types of knowledge [ 8 , 70 ]. There may also be merit in this approach at the outset in situations where the review findings are intended to inform an intervention in a known setting, to ensure that the elements that will enhance the applicability or transferability of the intervention are represented. In the example given here, as there were members of the review team taking part both in the review and in the design of the intervention, there was less need to undertake this additional stage of consultation, and elements such as the presence of, or change in, school asthma policies were included in the logic model to reflect the interests of the intervention team.

Produce further iterations of the logic model : When there is less consensus among the review team than was the case here, when greater numbers of stakeholders are being consulted, or when the intervention itself is a more complex systems-based intervention, there may be a need to produce multiple further iterations of the logic model. In programme evaluation, logic models are considered to be iterative tools that reflect the cumulative knowledge accrued during the course of running an intervention [ 11 ]. While exactly the same principle does not apply in the case of systematic reviews, a greater number of iterations may be necessary to produce a logic model to guide the review, for example to reflect the different forms of knowledge that different stakeholders may bring. Where there are parts of the logic model that are unclear at the outset of a review, or where there is an insurmountable lack of consensus and only the review findings can help to clarify the issue, these can be represented in a less concrete way in the logic model, as with the processes to be examined in our own review in Fig 3 .

Multiple logic models : There may also be a need to construct multiple logic models for large interventions to reflect their complexity, although it may also be the case that such a large or complex question is unsuitable for a single review and would instead fall across multiple linked reviews. However, where the same question is being examined using different types of evidence (a mixed methods review), multiple logic models representing the same processes in different ways could be useful; for example, a logic model focussing on theory and mechanistic explanations for processes, in addition to a logic model focussing on empirically expected changes, may be necessary for certain forms of mixed methods reviews (depending on the research question). In other cases, the review may focus on a particular intervention or behaviour change mechanism within a (small) number of defined situations; for example, a review may focus on the impact of mass media in tackling public health issues, using smoking cessation, alcohol consumption and sexual health as examples. The review question may be focussed on the transferability of processes between these public health issues, but in order to guide the review itself it may be necessary to produce a separate logic model for each public health issue, which could be synthesised into a unified theory of change for mass media as an intervention at a later stage.

Using the logic model in the review.

The logic model described in this paper is being used to guide a review that is currently in progress and as such we are not able to give a full outline of its potential use. Others in the literature before us have described logic models as adding value for systematic reviewers when (i) scoping the review (refining the question; deciding on lumping or splitting a review topic; identifying review components); (ii) defining and conducting the review (identifying the review study criteria; guiding the search strategy; providing a rationale for surrogate outcomes; justifying sub-group analyses); and (iii) making the review relevant to policy and practice (structuring the reporting of results; illustrating how harms and feasibility are connected with interventions; interpreting the results based on intervention theory) [ 7 , p35]. Others still have emphasised the utility of a logic model framework in helping reviewers to think conceptually by illustrating the influential relationships and components from inputs to outcomes, suggesting that logic models can help reviewers identify testable hypotheses to focus upon [ 8 , 71 ]; they have also speculated that a logic model could help to identify the parameters of a review as an addition to the well-established PICO framework [ 8 , 9 ].

Our own experience of using the logic model ( Fig 3 ) in a current systematic review is summarised in Table 3 below, which focuses on additions to the uses suggested elsewhere. While this description provides an indication of the potential added value of using a logic model, its use has not been without challenges. Firstly, the use of logic models is relatively novel within the systematic review literature (and even in the programme theory literature, as discussed earlier), and initially there was some apathy towards the logic model, even within the review team. Secondly, while we agree that a logic model could be used to depict the PICO criteria [ 8 , 9 ], our own logic model did not include a representation of ‘C’, the comparator, as this was the usual care provided across different settings, which could vary substantially. Others may also experience difficulties in representing the comparison element in their logic models. Finally, none of the uses of the logic model described here and elsewhere is unique to, or contingent on, having a logic model; rather, using a logic model accelerates these processes and brings about a shared understanding more quickly. For example, the development of exclusion criteria is not contingent on having a logic model, but the logic model facilitates the process of identifying inclusion and exclusion criteria more rationally and helps depict some of the reasoning underlying review decisions. Practically, the logic model's advantages lie in aiding the initial conceptual thinking around the scope of interventions, in aiding decisions about individual parts of the intervention within the context of the intervention as a whole, in its flexibility and use as a reference tool in decision-making, and in communication across the review team.

Table 3.

https://doi.org/10.1371/journal.pone.0142187.t003

Developing Elements of Good Practice for the Use of Logic Models and Theories of Change

The earlier analysis suggests that many systematic review authors tend to use programme theory tools to depict the conceptual framework pictorially, but may not view either a logic model or a theory of change as an integral review tool. To prevent logic models and theories of change being included in reviews and protocols simply as part of a tick-box exercise, there is a need to develop good practice on how to use programme theory in systematic reviews, as well as on how to develop a logic model or theory of change. This is not to over-complicate the process or to introduce rigidity where rigidity is unwelcome, but to maximise the contribution that programme theory can make to a review.

Here we introduce some elements of good practice that can be applied when considering the use of logic models. These are derived from (i) the literature around the use of logic models in systematic reviews [ 7 , 8 , 17 ]; (ii) the broader literature around the use of theory in systematic reviews [ 72 , 73 ]; (iii) our analyses contrasting the suggested uses of logic models in systematic reviews with their actual use (presented earlier); and (iv) the use of logic models in the programme theory literature [ 11 , 13 ]; as well as broader conceptual debates in the systematic review and programme theory literature. These principles draw from the work of Anderson and colleagues and Baxter and colleagues as well as our own experiences, but are unlikely to represent an exhaustive list, as there is a need to maintain a degree of flexibility in the development and use of logic models. Our main concern is that logic models in the review literature appear to be used in such a limited way that a loose set of principles, such as those proposed here, can be applied with little cost in terms of imposing rigidity but with substantial impact in terms of enhanced transparency of use and benefit to the review concept, structure and communication.

A logic model is a tool and as such its use needs to be described

Logic models provide a framework for ‘thinking’ conceptually before, during and at the end of the review. In addition to the uses highlighted earlier by Anderson and Waddington [ 7 , 18 ], our own experience of using the logic model in our review has emphasised its utility in: (i) clarifying the scope of the review and assessing whether a question is too broad to be addressed in a single review; (ii) identifying points of uncertainty that could become focal points of investigation within the review; (iii) clarifying the scope of the study, particularly in distinguishing between different forms of intervention study design (in our own case, between a process evaluation and a qualitative outcomes evaluation); (iv) ensuring theoretical inclusivity at an early stage of the review; (v) clarifying inclusion and exclusion criteria, particularly with regard to core elements of the intervention; (vi) informing the search strategy with regard to the databases and scholarly disciplines from which the review may draw literature; (vii) serving as a communication tool and reference point when making decisions about the review design; and (viii) serving as a project management tool in helping to identify dependencies within the review. Sharing the logic model with an information scientist was also a means of communicating the goals of the review itself, while examination of existing logic models was found to be a way of developing expertise around how an intervention was expected to work. Use of a logic model has also been linked with a reduced risk of type III errors, helping to avoid conflation between errors in implementation and flaws in the intervention [ 17 , 74 ].

Table 4 summarises our own learning around the uses of the logic model, together with the uses identified by others (primarily Anderson and colleagues) for logic models as tools in systematic reviews; it highlights that a logic model may have utility primarily at the beginning and end of the systematic review, and may be a useful reference tool throughout.

Table 4.

https://doi.org/10.1371/journal.pone.0142187.t004

Our analyses suggest that the use of logic models has faltered. Our earlier review of the systematic review literature highlighted that (i) logic models were infrequently used as a review tool, and that the extent of use is likely to reflect the conventions of different disciplines; and (ii) where logic models were used, they were often used in a very limited way, to represent the intervention pictorially. Often they did not appear to be used as tools integral to the review. There remains the possibility that some of the reviews and protocols featured earlier simply did not report the extent to which they used the logic model, although given that this is both a tool for thinking conceptually and a communication tool, it could be expected that the logic model would be referred to and referenced at different points in the review process. Logic models can be useful review tools, although the limited scope of use described in the literature suggests that they are in danger of becoming a box-ticking exercise included in reviews and protocols rather than methodological aids in their own right.

Terminology is important: Logic models and theories of change

We stated earlier that ‘theories of change’ and ‘logic models’ were used somewhat interchangeably by reviewers, largely dependent on the discipline in which the review is conducted. However, outside the systematic review literature, a distinction often exists. Theories of change are often used for complex interventions where there is a need to identify the assumptions of how and why sometimes disparate elements of large interventions may effect change; they are also used for less complex interventions where assumptions of how and why programme components effect change are pre-specified. Theories of change can also be used to depict how entirely different interventions can lead to the same set of outcomes. Logic models, on the other hand, are used to outline programme components and check whether they are plausible in relation to the outcomes; they do not require the underlying assumptions to be stated [ 11 , 75 ]. This distinction fits well with the different stages of a systematic review. A logic model provides a sequential depiction of the components of interventions and their outcomes, but not necessarily the preconditions that are needed to achieve these outcomes, or the relative magnitude of different components. Given that few of the programme theory tools used in current protocols and reviews are derived from, or build upon, existing tools, developing a Logic Model initially may be most appropriate for most systematic reviews that do not constitute whole-system reviews of complex intervention strategies, or that are not testing a pre-existing theory of change. This assertion does not mean that systematic reviews should be atheoretical or ‘theory-lite’, and different possible conceptual frameworks can be included in Logic Models. However, the selection of a single conceptual framework upfront, as is implicitly the case when developing a Theory of Change, may not represent the diversity of disciplines that reviewers are likely to encounter. Except in the cases outlined earlier around highly complex, systems-based interventions (mainly restricted to the development studies literature), theories of change are causal models that are most suitable when developed through the evidence gathered in the review process itself.

Logic models can evolve into theories of change

Once a review has identified the factors that are associated with different outcomes, their magnitude, and the underlying causal pathways linking intervention components with different outcomes, this evidence can in some cases be used to update a logic model and construct a theory of change. Examples can be observed in the literature where review evidence has been synthesised to map out the direction and magnitude of the evidence (see [ 8 ], although in this case the resulting model was described as a ‘Logic Model’ and not a ‘Theory of Change’), and this serves as a good model for all reviews. Programme theory can effectively be used to represent the change in understanding developed as a result of the review, and in some cases even the learning acquired during a review, although this is not the case for all reviews and there may be some where this approach is unsuitable or otherwise not possible. A logic model can be viewed iteratively as a preliminary step towards constructing a theory of change at the end of the review, which in turn forms a useful tool for communicating the findings of the review. However, some reviewers may find little to update in the logic model in terms of the theory of the intervention, or may otherwise find that the evidence around the outcomes and processes of the intervention remains unclear in the literature as it stands. There may also be occasions where the reporting conventions of disciplines or review groups preclude updating the logic model on the basis of the findings of the review.

Programme theory should not be developed in isolation

In our exploration of health-based and international development reviews, we observed just one example where the reviewers described a Logic Model as having been developed through consensus within the review team [ 24 ]. Other examples are found in the literature where logic models or theories of change have been developed with stakeholders; for example, Baxter and colleagues [ 8 ; p3] record that ‘following development of a draft model we sought feedback from stakeholders regarding the clarity of representation of the findings, and potential uses’. These examples are clearly in the minority in the systematic review literature, although most guidance on programme theory in the evaluation literature is clear that models should be developed through a series of iterations taking into account the views of different stakeholders [ 11 ]. While some of this may be due to reporting, as it is likely that at least some of the models included in Tables 1 and 2 were developed having reached a consensus, it is nevertheless important to highlight that a more collaborative approach to developing models could bring benefits to the review itself. Given that systematic review teams are often interdisciplinary in nature, and can be engaging with literature that is also interdisciplinary, programme theory should reflect the expertise and experience of all team members, as well as that of external stakeholders where appropriate. Programme theory is also used as a shorthand communication tool, and the process of developing a working theoretical model within a team can help to simplify the conceptual model into a format that is understandable within review teams, but which can also be used to involve external stakeholders, as is often necessary in the review process [ 70 ].

A logic model should be used as an Early Warning System

Logic models originated as planning and communication tools in the evaluation literature. However, in systematic reviews they can also provide the first test of the underlying conceptual model of the review. If a review question cannot be represented in a Logic Model (or a Theory of Change in the case of highly complex issues), this can signal that the question itself may be too broad or needs further refining. It may be that a series of Logic Models better represents the underlying concepts and the overall research question driving the review, and this may also reflect a need to undertake a series of reviews rather than a single review, particularly where the resources available are more limited [ 73 ]. Alternatively, as is often the case with complex systems-based interventions (as encountered in many reviews of international development initiatives published on the 3ie database), the intervention may be based on a number of components whose mechanisms are relatively well established and understood and which could be represented individually through logic models; in such cases a theory of change may better represent the intervention as a whole. The tool can also help the reviewer to assess the degree to which the review may focus on collecting evidence around particular pathways of outcomes, and the potential contribution the review can make to the field, helping to establish whether the scope of the review should be broad and deep (as might be the ideal given sufficient resources) or narrower and more limited in scope and depth [ 73 ]. This can also help to manage the expectations of stakeholders at the outset. Logic models can be used as the basis for developing a systematic review protocol, and should be considered living documents, subject to several iterations during the process of developing a protocol as the focus of the review is clarified. They can both guide and reflect the review scope and focus during the preparation of a review protocol.

There is no set format for a Logic Model (or Theory of Change), but there are key components that should be considered

Most logic models, at a minimum, depict the elements included in the PICO/T/S framework (patient/problem/population; intervention; comparison/control/comparator; outcomes; and time/setting) [ 76 ]. However, a logic model represents a causal chain of events resulting from an intervention (or from exposure, membership of a group or other ‘treatment’); it is therefore necessary to consider how outcomes may precede or succeed one another sequentially. Dividing outcomes into distal (from the intervention), intermediate and proximal categories is a strategy often used to help identify the sets of pre-existing conditions or events needed to achieve a given outcome. The result is a causal chain of events, including outputs and outcomes, that represents the pre-conditions hypothesised to lead to distal outcomes. Outcomes are only achieved once different sets of outputs are met; these outputs may represent milestones of the intervention but are not necessarily success criteria in themselves (for example in Fig 3 ). In the case of reviews of observational studies, the notion of outputs (and even of interventions and intervention components) may be less relevant; these elements may instead be better represented by ‘causes’ and potential ‘intervention points’ [ 71 ], also structured sequentially to indicate which are necessary pre-conditions for later events in the causal chain.

Many of the elements described above refer to the role of the intervention (or condition) in changing outcomes for an individual (or other study unit), which can also be referred to as the theory of change; the elements of the causal chain that reflect the intervention and its modifiable elements are known as the theory of action [ 11 , 77 ]. The theory of action usually includes the modifiable components of the intervention needed to achieve later outputs and outcomes, such as the study design, resources, and process-level factors such as quality and adherence. Other modifiable elements, including population- or group-level moderators, can also be included, and even the underlying conceptual theories that may support different interventions can be included as potential modifiers. Finally, some of the contextual factors that reflect the environments in which interventions take place can also be represented. Within our example in Fig 3 , these include school-level factors, such as the intake of the school and its local neighbourhood, as well as broader health service factors and local health policies. For some reviews and studies, the influence of these contextual factors may itself be the focus of the review.
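To make the sequential structure described above concrete, the fragment below sketches how a logic model could be held as a small directed graph, with node categories for outputs, proximal, intermediate and distal outcomes, and moderators. This is an illustrative sketch only: the node names are hypothetical and only loosely inspired by a school-based asthma intervention, not taken from Fig 3, and the paper itself does not prescribe any computational representation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str  # "output", "proximal", "intermediate", "distal" or "moderator"

@dataclass
class LogicModel:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (upstream, downstream) pairs in causal order

    def add(self, name, kind):
        self.nodes[name] = Node(name, kind)

    def link(self, upstream, downstream):
        # An edge asserts that 'upstream' is a hypothesised pre-condition of 'downstream'.
        self.edges.append((upstream, downstream))

    def preconditions(self, name):
        # All hypothesised pre-conditions that must be met before 'name' can be achieved.
        return [u for u, d in self.edges if d == name]

# Hypothetical fragment of a school-based asthma intervention model
model = LogicModel()
model.add("staff training delivered", "output")
model.add("improved staff knowledge", "proximal")
model.add("better symptom management at school", "intermediate")
model.add("reduced unscheduled care", "distal")
model.add("school intake and local health services", "moderator")

model.link("staff training delivered", "improved staff knowledge")
model.link("improved staff knowledge", "better symptom management at school")
model.link("better symptom management at school", "reduced unscheduled care")

print(model.preconditions("reduced unscheduled care"))
# ['better symptom management at school']
```

Even a toy representation such as this makes the ordering assumptions explicit, which is the property the logic model is intended to surface.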

Summary and Conclusions

In the past, whether justified or not, a critique often levelled at systematic reviews has been the absence of theory when classifying intervention components [ 78 ]. The absence of theory in reviews has clear negative consequences for the robustness of the findings and for the applicability and validity of recommendations; without mechanistic theories of why interventions work, the generalisability of the findings is also limited. A number of systematic reviewers are beginning to address this critique directly by considering the methodological options available when reviewing theories [ 69 , 78 , 79 ], while others have gone further by exploring the role that differences in taxonomies of theory play in explaining effect sizes [ 78 , 80 , 81 ]. Nevertheless, despite the benefits of, and need for, using theory to guide secondary data analysis, reviewers may be confronted by several situations in which conceptualising the theoretical framework is itself problematic. Such instances include those where there is little available detail on the theories underlying interventions, or where competing theories or disciplinary differences exist in the articulation of theories for the same interventions (requiring synthesis) [ 79 ]; where a review topic necessitates the grouping and synthesis of very different interventions to address a particular research question; or, more fundamentally, where there is a need to consider alternative definitions, determinants and outcomes of interventions that goes beyond representing these within ‘black boxes’. In common with others before us [ 7 , 8 ], in this paper we view a logic model as a tool to help reviewers overcome these challenges and meet these needs by providing a framework for ‘thinking’ conceptually.

Much of this paper examines the application of logic models to ‘interventionist’ systematic reviews, and we have not directly considered their use in systematic reviews of observational phenomena. Certainly, while some of the terminology would need to change to reflect the absence of ‘outputs’ and ‘resources’, the benefits to the review process would remain. For some, this idea may simply be too close to that of a graphical depiction of a conceptual framework. However, the logic model is distinct in that it represents only part of a conceptual framework: it does not definitively represent a single ideological perspective or epistemological stance and its accompanying assumptions. Arguably, a theory of change often does attempt to represent an epistemological framework, and this is why we view the two tools as distinct. As the goal of a systematic review is to uncover the strength of evidence and the likely mechanisms underlying how different parts of a causal pathway relate to one another, the evidence can then be synthesised into a theory of change; and we maintain the emphasis on this being a ‘theory’, to be investigated and tested across different groups and across different geographies of time and space.

In investigating the use of logic models, we found that among the comparatively small number of reviewers who used a theory of change or logic model, many described a limited role for the tool, confined to the beginning of the review rather than extending to the communication of review findings. A worked example may help to expand their use, as may making their use a formal requirement; the development of guidance would also help to ensure that, where logic models are used, they are used to greater effect. A recommendation of this paper is therefore that greater guidance be prepared on how programme theory could and should be used in systematic reviewing, incorporating the elements raised here and others. Much of this paper is concerned with the benefits that logic models can bring to reviewers as a pragmatic tool in carrying out the review, as a tool to help strengthen the quality of reviews, and, perhaps most importantly, as a communication tool to disseminate findings to reviewers and trialists within academic communities and beyond, to policy-makers and funders. With respect to this last purpose in particular, improving the way in which logic models are used in reviews can only serve to increase the impact that systematic reviews can have in shaping policy and influencing practice in healthcare and beyond.

Supporting Information

S1 Checklist. PRISMA Checklist.

https://doi.org/10.1371/journal.pone.0142187.s001

S1 Flowchart. PRISMA Flowchart.

https://doi.org/10.1371/journal.pone.0142187.s002

S1 Table. Data Coding Template.

https://doi.org/10.1371/journal.pone.0142187.s003

Acknowledgments

We would like to acknowledge the contributions of Jonathan Grigg, Toby Lasserson and Vanessa McDonald in helping to shape the development of the logic model.

Author Contributions

Conceived and designed the experiments: DK JT. Analyzed the data: DK KH. Contributed reagents/materials/analysis tools: DK. Wrote the paper: DK JT.

  • 6. Gough D, Oliver S, Thomas J. Introducing systematic reviews. In: Gough D, Oliver S, Thomas J, editors. An Introduction to Systematic Reviews. London: Sage; 2012.
  • 11. Funnell SC, Rogers PJ. Purposeful program theory: effective use of theories of change and logic models. San Francisco, CA: John Wiley & Sons; 2011.
  • 12. Ni Ogain E, Lumley T, Pritchard D. Making an Impact. London: NPC, 2012.
  • 14. Higgins JPT, Green S. Cochrane handbook for systematic reviews of interventions. Chichester: Wiley-Blackwell; 2011.
  • 15. The Cochrane Public Health Group. Guide for developing a Cochrane protocol. Melbourne, Australia: University of Melbourne, 2011.
  • 16. Brennan S. Using logic models to capture complexity in systematic reviews: Commentary. Oxford: The Cochrane Operations Unit, 2012.
  • 17. Francis D, Baker P. Developing and using logic models in reviews of complex interventions. 19th Cochrane Colloquium; October 18th; Madrid: Cochrane; 2011.
  • 19. Bhavsar A, Waddington H. 3ie tips for writing strong systematic review applications. London: 3ie; 2012. Available: http://www.3ieimpact.org/en/funding/systematic-reviews-grants/3ie-tips-for-writing-systematic-review-applications/. Accessed 3 December 2014.
  • 20. Barlow J, MacMillan H, Macdonald G, Bennett C, Larkin SK. Psychological interventions to prevent recurrence of emotional abuse of children by their parents. The Cochrane Library. 2013.
  • 21. McLaughlin AE, Macdonald G, Livingstone N, McCann M. Interventions to build resilience in children of problem drinkers. The Cochrane Library. 2014.
  • 25. Burns J, Boogaard H, Turley R, Pfadenhauer LM, van Erp AM, Rohwer AC, et al. Interventions to reduce ambient particulate matter air pollution and their effect on health. The Cochrane Library. 2014.
  • 26. Costello JT, Baker PRA, Minett GM, Bieuzen F, Stewart IB, Bleakley C. Whole-body cryotherapy (extreme cold air exposure) for preventing and treating muscle soreness after exercise in adults. The Cochrane Library. 2013.
  • 27. Gavine A, MacGillivray S, Williams DJ. Universal community-based social development interventions for preventing community violence by young people 12 to 18 years of age. The Cochrane Library. 2014.
  • 28. Kuehnl A, Rehfuess E, von Elm E, Nowak D, Glaser J. Human resource management training of supervisors for improving health and well-being of employees. The Cochrane Library. 2014.
  • 29. Land M-A, Christoforou A, Downs S, Webster J, Billot L, Li M, et al. Iodine fortification of foods and condiments, other than salt, for preventing iodine deficiency disorders. The Cochrane Library. 2013.
  • 30. Langbecker D, Diaz A, Chan RJ, Marquart L, Hevey D, Hamilton J. Educational programmes for primary prevention of skin cancer. The Cochrane Library. 2014.
  • 31. Michelozzi P, Bargagli AM, Vecchi S, De Sario M, Schifano P, Davoli M. Interventions for reducing adverse health effects of high temperature and heatwaves. The Cochrane Library. 2014.
  • 32. Peña-Rosas JP, Field MS, Burford BJ, De-Regil LM. Wheat flour fortification with iron for reducing anaemia and improving iron status in populations. The Cochrane Library. 2014.
  • 33. Ramke J, Welch V, Blignault I, Gilbert C, Petkovic J, Blanchet K, et al. Interventions to improve access to cataract surgical services and their impact on equity in low- and middle- income countries. The Cochrane Library. 2014.
  • 34. Sreeramareddy CT, Sathyanarayana TN. Decentralised versus centralised governance of health services. The Cochrane Library. 2013.
  • 37. Brody C, Dworkin S, Dunbar M, Murthy P, Pascoe L. The effects of economic self-help group programs on women's empowerment: A systematic review protocol. Oslo, Norway: The Campbell Collaboration, 2013.
  • 38. Cirera X, Lakshman R, Spratt S. The impact of export processing zones on employment, wages and labour conditions in developing countries. London: 3ie, 2013.
  • 40. Giedion U, Andrés Alfonso E, Díaz Y. The impact of universal coverage schemes in the developing world: a review of the existing evidence. Washington DC: World Bank, 2013.
  • 41. Gonzalez L, Piza C, Cravo TA, Abdelnour S, Taylor L. The Impacts of Business Support Services for Small and Medium Enterprises on Firm Performance in Low-and Middle-Income Countries: A Systematic Review. The Campbell Collaboration, 2013.
  • 42. Higginson A, Mazerolle L, Benier KH, Bedford L. Youth gang violence in developing countries: a systematic review of the predictors of participation and the effectiveness of interventions to reduce involvement. London: 3ie, 2013.
  • 43. Higginson A, Mazerolle L, Davis J, Bedford L, Mengersen K. The impact of policing interventions on violent crime in developing countries. London: 3ie, 2013.
  • 44. Kingdon G, Aslam M, Rawal S, Das S. Are contract and para-teachers a cost effective intervention to address teacher shortages and improve learning outcomes? London: Institute of Education, 2012.
  • 45. Kluve J, Puerto S, Robalino D, Rother F, Weidenkaff F, Stoeterau J, et al. Interventions to improve labour market outcomes of youth: a systematic review of training, entrepreneurship promotion, employment services, mentoring, and subsidized employment interventions. The Campbell Collaboration, 2013.
  • 46. Loevinsohn M, Sumberg J, Diagne A, Whitfield S. Under What Circumstances and Conditions Does Adoption of Technology Result in Increased Agricultural Productivity? A Systematic Review. Brighton: Institute for Development Studies, 2013.
  • 47. Lynch U, Macdonald G, Arnsberger P, Godinet M, Li F, Bayarre H, et al. What is the evidence that the establishment or use of community accountability mechanisms and processes improve inclusive service delivery by governments, donors and NGOs to communities. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, 2013.
  • 48. Molina E, Pacheco A, Gasparini L, Cruces G, Rius A. Community Monitoring to Curb Corruption and Increase Efficiency in Service Delivery: Evidence from Low Income Communities. Campbell Collaboration, 2013.
  • 49. Posthumus H, Martin A, Chancellor T. A systematic review on the impacts of capacity strengthening of agricultural research systems for development and the conditions of success. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2013. ISBN 1907345469.
  • 50. Samarajiva R, Stork C, Kapugama N, Zuhyle S, Perera RS. Mobile phone interventions for improving economic and productive outcomes for farm and non-farm rural enterprises and households in low and middle-income countries. London: 3ie, 2013.
  • 51. Samii C, Lisiecki M, Kulkarni P, Paler L, Chavis L. Impact of Payment for Environmental Services and De-Centralized Forest Management on Environmental and Human Welfare: A Systematic Review. The Campbell Collaboration, 2013.
  • 52. Seguin M, Niño-Zarazúa M. What do we know about non-clinical interventions for preventable and treatable childhood diseases in developing countries? United Nations University, 2013.
  • 53. Spangaro J, Zwi A, Adogu C, Ranmuthugala G, Davies GP, Steinacker L. What is the evidence of the impact of initiatives to reduce risk and incidence of sexual violence in conflict and post-conflict zones and other humanitarian crises in lower-and middle-income countries? London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, 2013.
  • 56. Tripney J, Roulstone A, Vigurs C, Moore M, Schmidt E, Stewart R. Protocol for a Systematic Review: Interventions to Improve the Labour Market Situation of Adults with Physical and/or Sensory Disabilities in Low-and Middle-Income Countries. The Campbell Collaboration, 2013.
  • 58. Welch VA, Awasthi S, Cumberbatch C, Fletcher R, McGowan J, Krishnaratne S, et al. Deworming and Adjuvant Interventions for Improving the Developmental Health and Well-being of Children in Low- and Middle- income Countries: A Systematic Review and Network Meta-analysis. Campbell Collaboration, 2013.
  • 59. Willey B, Smith Paintain L, Mangham L, Car J, Armstrong Schellenberg J. Effectiveness of interventions to strengthen national health service delivery on coverage, access, quality and equity in the use of health services in low and lower middle income countries. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London 2013.
  • 61. Rohwer AC, Rehfuess E. Logic model templates for systematic reviews of complex health interventions. Cochrane Colloquium; Quebec, Canada; 2013.
  • 66. Alamgir AH. Texas Asthma Control Program: Strategic Evaluation Plan 2011–2014. Austin, TX: Texas Department of State Health Services 2012.
  • 67. AAP. Schooled in Asthma–Physicians and Schools Managing Asthma Together. Elk Grove Village, IL: American Academy of Pediatrics (AAP) 2001.
  • 70. Rees R, Oliver S. Stakeholder perspectives and participation in reviews. In: Gough D, Oliver S, Thomas J, editors. An Introduction to Systematic Reviews. London: Sage Publications; 2012.
  • 73. Gough D, Thomas J. Commonality and diversity in reviews. In: Gough D, Oliver S, Thomas J, editors. An Introduction to Systematic Reviews. London: Sage; 2012.
  • 74. Waddington H. Response to 'Developing and using logic models in reviews of complex interventions'. 19th Cochrane Colloquium; October 18th; Madrid: Cochrane; 2011.
  • 75. Clark H, Anderson AA. Theories of Change and Logic Models: Telling Them Apart. American Evaluation Association; Atlanta, Georgia; 2004.
  • 76. Brunton G, Stansfield C, Thomas J. Finding relevant studies. In: Gough D, Oliver S, Thomas J, editors. An Introduction to Systematic Reviews. London: Sage; 2012.
  • 77. Chen H-T. Theory-driven evaluations. Newbury Park, CA, USA: Sage Publications; 1990.

Developing logic models to inform public health policy outcome evaluation: an example from tobacco control

Tessa Langley, Duncan Gillespie, Sarah Lewis, Katie Eminson, Alan Brennan, Graeme Docherty, Ben Young, Developing logic models to inform public health policy outcome evaluation: an example from tobacco control, Journal of Public Health, Volume 43, Issue 3, September 2021, Pages 639–646, https://doi.org/10.1093/pubmed/fdaa032

The evaluation of large-scale public health policy interventions often relies on observational designs where attributing causality is challenging. Logic models—visual representations of an intervention’s anticipated causal pathway—facilitate the analysis of the most relevant outcomes. We aimed to develop a set of logic models that could be widely used in tobacco policy evaluation.

We developed an overarching logic model that reflected the broad categories of outcomes expected following the implementation of tobacco control policies. We subsequently reviewed policy documents to identify the outcomes expected to result from the implementation of each policy and conducted a literature review of existing evaluations to identify further outcomes. The models were revised in response to feedback from a range of stakeholders.

The final models represented expected causal pathways for each policy. The models included short-term outcomes (such as policy awareness, compliance and social cognitive outcomes), intermediate outcomes (such as changes in smoking behaviour) and long-term outcomes (such as mortality, morbidity and health service usage).

The use of logic models enables transparent and theory-based planning of evaluation analyses and should be encouraged in the evaluation of tobacco control policy, as well as other areas of public health.

Large-scale public policy interventions, such as tobacco tax increases and regulation of advertising for unhealthy commodities, are regularly implemented with a view to improving public health. The maintenance, improvement and expansion of these policies depend on post-implementation evaluation of their effectiveness. This requires the identification of important outcome measures and appropriately designed analyses to measure policy impact.

A key challenge in this type of policy evaluation is the attribution of causality—does the policy cause a change in health outcomes, or is the change attributable to something else? 1 Population-level policy changes cannot usually be evaluated using randomized controlled trials, because governments, rather than researchers, control their implementation. 2 Furthermore, researchers may not have the opportunity to design studies and collect relevant data prospectively. The evaluation of these ‘natural experiments’, therefore, often relies on observational designs and frequently on routinely collected data such as health service records or population surveys. 3 Some designs, such as cross-sectional studies, have particularly low internal validity. 2 Others, such as longitudinal studies, interrupted time series and difference-in-differences analyses, have greater internal validity but face challenges in disentangling policy effects from secular trends and other factors which contribute to changes in relevant outcomes. 2 This is a particular problem in settings where several policies are implemented within a short period of time. For example, in England, tobacco control policies have often been implemented close together—such as the smoking ban in public places and the increase in the minimum age of purchase for tobacco products in 2007—and standardized tobacco packaging and the European Union Tobacco Products Directive, which also made changes to the appearance of tobacco products, in 2016. Despite these challenges, studies which evaluate natural experiments can make a contribution to the evidence base for public health. 4–6

One way of mitigating the above challenges is to articulate the programme theory, which describes how each policy is likely to work and in whom. The programme theory can be represented visually in a logic model, which shows the anticipated causal pathway of an intervention and the populations expected to be affected. Logic models are most frequently simple, linear models ( Fig. 1 ); others may seek to capture more complex pathways, such as non-linear pathways or multiple causal strands. 7 , 8 Programme theory and logic models can be a valuable tool in intervention planning and implementation as well as in evaluation. 8 , 9

Fig. 1. Simple logic model framework.

Ideally, programme theories should be articulated prior to the implementation of public health interventions, as part of the justification for their introduction, thus providing an a priori judgement, based on well-considered evidence, of how and in whom they will work. In the context of policy evaluation, a pre-specified logic model allows researchers to test existing hypotheses about the effect of policy, in some cases using prospective data that are collected specifically for the purposes of evaluation. In the absence of existing logic models, it falls to researchers to generate hypotheses about the expected effect of the policy and provide a ‘plausible and sensible model of how a programme is supposed to work’. 10 In turn, this allows them to identify justifiable measures of intervention impact, and plan analyses of the most relevant outcomes, albeit often relying on existing data sources.

Many resources are available to public health practitioners and researchers who intend to develop logic models, 8 , 11–15 and logic models have informed the planning and evaluation of a range of large-scale public health policies, including tobacco and alcohol policies. 16–21 However, they are not consistently presented in the peer-reviewed literature, and the development and application of these models are generally not described in detail. 16–20

Some models are highly simplified, 20 while others are extremely complex, 19 which may limit their use. Sometimes existing logic models are adapted for policy evaluation, and it is often unclear how or why outcomes have been selected during adaptation from existing models, creating uncertainty about the logic that underpins the attribution of causality. 17 , 18 , 21 Tobacco policy logic models often draw on the Centers for Disease Control and Prevention models which, although evidence based, are goal focused rather than policy specific and may overlook outcomes related to the implementation of a particular policy. 17 , 18 , 21 , 22 There is, therefore, a need for systematic and transparent approaches to disentangle the effects of large-scale public policies from each other, and from secular trends and other factors.

We aimed to develop a set of policy-specific logic models that could be widely used in tobacco policy evaluation. We aimed to do this using systematic methods that can be applied to other areas of public health where multiple large-scale policy interventions create complexities for evaluation. The purpose of this paper is to describe our novel systematic approach to the development of the models and present the resulting models.

The logic models were developed in the context of a research project which aimed to evaluate a range of tobacco control policies implemented in England between 2007 and 2015 ( Fig. 2 ).

Fig. 2. Tobacco control policies implemented in England, 2007–2015.

The project used a range of publicly available secondary data to assess the short- and medium-term impact of these policies (e.g. on smoking prevalence) using interrupted time series analysis (ITSA) and estimated the long-term effect of the policies on health care costs and population health outcomes by extrapolating the results of the ITSA using economic modelling. The purpose of the logic models was to identify hypothesized causal pathways and outcomes at the beginning of the project to guide the choice of outcomes in the subsequent analysis.
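As an illustration of how a pre-specified outcome from a logic model feeds into this kind of analysis, the sketch below fits a basic single-group interrupted time series (segmented regression) model to simulated monthly prevalence data. This is a minimal, hypothetical example, assuming pandas and statsmodels are available; it is not the project's analysis code, and a real ITSA would also need to address autocorrelation, seasonality and co-occurring policies.

```python
# Minimal ITSA sketch on simulated data (hypothetical, not the project's analysis).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_months, policy_month = 96, 48          # hypothetical 8-year series, policy at month 48

df = pd.DataFrame({"time": np.arange(n_months)})
df["policy"] = (df["time"] >= policy_month).astype(int)                        # level change
df["time_since"] = np.where(df["policy"] == 1, df["time"] - policy_month, 0)   # slope change

# Simulated smoking prevalence: gentle secular decline, plus a step drop and
# an additional downward trend after the (hypothetical) policy
df["prevalence"] = (22 - 0.03 * df["time"]
                    - 1.0 * df["policy"]
                    - 0.02 * df["time_since"]
                    + rng.normal(0, 0.3, n_months))

fit = smf.ols("prevalence ~ time + policy + time_since", data=df).fit()
print(fit.params[["policy", "time_since"]])   # estimated immediate and trend changes
```

The 'policy' and 'time_since' coefficients correspond to the immediate level change and the change in trend that the logic model's intermediate outcomes would lead one to expect.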

Logic model development

We used an iterative process to develop models for each policy on the timeline in Fig. 2 : (1) development of an overarching logic model; (2) review of government documents and existing literature to populate individual policy models and (3) refinement of logic models through multiple rounds of stakeholder feedback.

Stage 1: Because the focus of our project was to assess the health-related effects of policies, we developed an initial overarching logic model drawing on Nutbeam’s three levels of outcomes for health promotion, including health promotion outcomes (such as knowledge), intermediate health outcomes (such as health behaviours) and health and social outcomes (such as mortality). 23 The Nutbeam model captures the overarching causal pathway for public health interventions and was therefore well-suited to our logic models. Our model was further informed by a logic model for tobacco control mass media campaigns that some members of the research team had previously worked on, 24 which drew on the work of Chen. 25 Chen distinguishes between the ‘action model’ and ‘change model’ within a programme theory. The action model describes what will be done and how and is therefore most useful prior to and during implementation. The change model describes the causal process generated by an intervention. Because we were developing our logic models in the context of evaluation as opposed to planning and implementation, we developed change models to show the anticipated causal process. Our overarching model reflected the broad categories of tobacco use-related outcomes that would be expected in the short-, medium- and long-term following the implementation of tobacco control policies.

Stage 2: We subsequently developed policy-specific logic models by identifying the intended outcomes of each policy and outcomes on which an effect had been demonstrated in existing studies, using a combination of government policy documents and literature review. We conducted a search for policy and consultation documents related to the policies of interest on the UK government website, on the basis that they would describe the outcomes that were expected from implementing the policies, either in the document text or in logic models. 26 We reviewed these documents to identify the outcomes expected to result from the implementation of each policy, as well as the population (e.g. youth or adults) in whom the outcomes were expected, i.e. the target population. None of these documents contained logic models. The documents that we reviewed are listed in Supplementary Material , part A.

We conducted a literature review of existing policy evaluations to identify further relevant outcomes. We searched MEDLINE, Web of Science and EMBASE using a search strategy tailored to each policy group of interest to identify reviews and systematic reviews of evaluations of the relevant policies, both within and outside of England. Searches were limited to English language and restricted by date (January 2000–April 2017). Where relevant reviews did not exist for a specific type of policy, we subsequently searched for primary studies and evaluation protocols. Where a relevant review existed but provided limited detail on relevant outcomes, we also accessed primary studies included in the review. From these publications, we extracted outcomes that were reported to have been changed, or hypothesized by authors to have the potential to be changed, by the relevant policies as well as contextual factors which might have influenced the effect of the policy. To avoid researcher bias, we took an inclusive approach and included all outcomes that were reported to have changed in response to a policy, or for which such an effect was hypothesized. An example search strategy for one policy group, and an example of the outcomes extracted for one policy and the relevant reviews and studies are included in Supplementary Material , parts B and C.

We combined the outcomes identified from government documents and the literature search to put together initial logic models for each policy, categorizing outcomes according to the overarching logic model.

Stage 3: The initial models—both the policy-specific models and subsequently the overarching model—were refined through meetings of the research team, and meetings with a range of stakeholders. This included: a face-to-face meeting with a public involvement group of local smokers and ex-smokers, who were shown each model and provided feedback; a telephone conference with a project advisory group comprising national and international tobacco control researchers who provided feedback on the models which were sent to them prior to the meeting; and a face-to-face meeting with national tobacco control policymakers. This part of the process helped to ensure that potentially relevant outcomes and contextual factors which were not identified in stage 2 were included in the models.

Final logic models

Figure 3 shows the overarching logic model that was used to develop individual policy models. Figure 4 shows an example of one of the individual policy models and distinguishes between outcomes identified in stage 2 (literature and policy document review) and stage 3 (research team and stakeholder meetings). All final models are shown in Supplementary Material . The final models represented the expected causal pathways for each policy and identified the target populations in which changes in outcomes were expected to occur. The models included proximal outcomes (such as policy awareness, compliance and social cognitive outcomes), intermediate outcomes (such as changes in smoking behaviour) and distal outcomes (such as mortality, morbidity and health service usage), which broadly overlap with the three levels of health promotion outcomes. 23 Stage 3 in the model development process suggested that the intermediate outcomes were better divided into two categories (labelled i and ii), to provide a more detailed representation of the causal pathway. The distinction between outcomes identified at different stages of the logic model development process highlights the importance of the multi-stage approach, which helps to ensure that no important outcomes are missed. The ‘population and contextual moderators box’ captures factors which may influence the effect of the policy. These include other tobacco control policies, the economic context and social norms. The latter may also be directly influenced by the policy—for example, stigma associated with purchasing cigarettes after a point-of-sale tobacco display ban, as shown in Fig. 4 .

Fig. 3. Overarching tobacco control policy logic model.

Fig. 4. Logic model for point of sale tobacco display bans.

Main finding of this study

We have demonstrated a staged approach to the development of logic models for the evaluation of tobacco control policies and used this approach to identify logic models for several of the major new tobacco control policies in the UK in the past decade. Our logic models identify the short-, medium- and long-term outcomes expected to result from these policies. Our overarching logic model could serve as a starting point for tobacco control policy evaluation in other settings. Our policy-specific models can be used to support evaluation of similar policies in other settings, and could be updated should evidence of effects of policies on other outcomes be identified. Each stage of our proposed process could be adapted for evaluation of policy in other areas. We have described our approach in the context of development of logic models following the implementation of policy; our approach could also be adopted for the prospective development of models, and we recommend that models are developed in advance of policy implementation where possible.

What is already known on this topic

The limited feasibility of randomized controlled trials for the evaluation of population-level public health policy means that natural experiments may often provide the best possible evidence. 5 Maximizing their potential requires careful planning, especially as they often rely on routinely collected data over which researchers have limited control. 6 In particular, due to the challenges associated with demonstrating causal effects when evaluating natural experiments, it can be difficult to justify the selection of outcome measures. The long chains of outcomes between the intervention and the ultimate health outcome mean that thinking critically about the causal pathway is particularly important. 24

The application of logic models to evaluation planning can enhance the transparency and the credibility of findings. However, while logic models are often presented as part of evaluation studies, 16–18 , 20 their development is often not clearly articulated, and it is therefore not always clear whether they are realistic representations of expected causal pathways. There are resources in the grey literature which can be used to guide the development of logic models. 8 , 11–14 However, while these often describe the need for stakeholder engagement, they typically do not suggest a more structured approach incorporating evidence from the peer-reviewed literature and relevant government documentation. National Health Service (NHS) Scotland has developed logic models for tobacco control and explicitly links to the literature that has contributed to their development; however, a detailed description of how the models were developed is not provided. 15

What this study adds

In this paper, we have described a structured approach to the development of the models, which makes use of a wide range of relevant government documents and existing literature, as well as including extensive stakeholder engagement. Using a multi-faceted approach such as this, and documenting the process, provides reassurance that the resulting models reflect existing evidence as well as expert opinion.

The development of logic models prior to the development of evaluation analysis plans enables a theory-based approach to analysis planning and ensures that analysis focuses on the outcomes and populations that are most pertinent to the policy in question. Our logic models provided a conceptual framework to guide hypothesis generation and subsequent analysis to evaluate tobacco control policies using ITSA and economic modelling. The development of logic models is also an important step in the development of population models that extrapolate the long-term health and economic outcomes of policies, helping to ensure that the outcomes and subgroup analyses are aligned with the decision-making process. 27 , 28

After the development of logic models, the next stage is to consider the practicalities of available data and analysis methods. We linked outcomes from our logic models to available secondary data on smoking-related measures at the national level to identify which outcomes we would be able to analyze for each policy and in which population. Relevant data were identified in national surveys. From this we generated a series of hypotheses, which were published as part of an analysis plan placed on the project webpage prior to starting the analysis; the results of this analysis have yet to be published. 29 Where available, a comparable approach could also be applied using regional or local data. The pre-publication of study protocols or analysis plans is increasingly encouraged by funders 6 and journals 30 alike as a way of enhancing transparency. In the case of natural experiments, as well as with other interventions, the use of logic models can support the development of evidence- and theory-driven hypotheses and analyses. Such analyses will, in most cases, rely on observational data, and identifying effects and attributing causality are likely to be challenging. In addition to quantitative analysis, there is a role for qualitative research to help to understand the reasons for apparent effect—or lack of effect—of policies.
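To illustrate the linkage step described above (matching logic-model outcomes to the secondary data that happen to be available), the sketch below checks which outcomes could be analysed and which would need proxies or primary data. It is purely illustrative: the outcome and survey names are hypothetical, not the project's actual data map.

```python
# Hypothetical outcome-to-data mapping (illustrative only).
logic_model_outcomes = {
    "policy awareness": "proximal",
    "quit attempts": "intermediate",
    "smoking prevalence": "intermediate",
    "smoking-attributable admissions": "distal",
}

available_data = {
    "quit attempts": "monthly smoking survey",
    "smoking prevalence": "annual national health survey",
}

analysable = {o: available_data[o] for o in logic_model_outcomes if o in available_data}
unmeasured = [o for o in logic_model_outcomes if o not in available_data]

print("Outcomes with usable secondary data:", analysable)
print("Outcomes needing proxies or primary data:", unmeasured)
```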

To our knowledge, this is the first time that a wide range of logic models has been developed for the purposes of tobacco control policy evaluation using a consistent and replicable approach. Applying a consistent method of logic model development to a broad range of policies is particularly valuable when seeking to separate out the effects of multiple policies implemented at similar times. Specifying the causal pathway prior to developing analysis plans may subsequently help to increase confidence in attributing causal effects and may help to identify policy effects more quickly, allowing faster policy expansion. Our approach to logic model development was systematic and used a wide range of available international evidence. Furthermore, we took a collaborative approach, ensuring that the views of a wide range of stakeholders were considered.

We acknowledge that our approach requires more resources—both in terms of time and skills—than that typically outlined in the grey literature, owing to the need for detailed literature reviews and meetings with multiple stakeholder groups, and may therefore not be feasible in all settings in which logic models are used. However, logic models can be valuable tools in both the planning and evaluation of public health policies. While our approach to logic model development has some limitations (described below), the stages that we have set out could help to increase the rigour and transparency of logic model development. Our staged approach could be combined with guides on logic model development from the grey literature, which provide detail on how to conduct stakeholder engagement and activities that can support the identification of outcomes.

Limitations of this study

Our logic models have some limitations. Although we adopted a rigorous search strategy to identify published evaluations and reviews of evaluations, our literature review methods were not systematic, and we may therefore have missed outcomes in the literature. Our method relied to a large extent on existing published evaluations, which risks placing more emphasis on outcomes on which an effect has previously been identified; however, incorporating information from policy documents and the views of a range of stakeholders helped to mitigate this. Our logic models did not capture potential differential effects by subgroups other than by age. This largely reflects that existing studies tended not to identify subgroup effects (e.g. by socioeconomic status and sex). It also reflects that the policies of interest did not target specific groups other than adults or young people. Our subsequent statistical analyses did, however, assess differential effects by subgroup. Furthermore, our logic models imply that the causal pathway for each policy is linear, which may not fully capture the complexity of the causal pathway. In particular, it seems likely that relationships between shorter term outcomes, such as attitudes and smoking behaviour among individuals, may not be linear. For example, a change in attitudes towards smoking may lead to a reduction in smoking, which in turn may further change attitudes to smoking. Furthermore, we have not explicitly considered interactions between different policies that are in place at the same time. More complex logic models may be better placed to capture these dynamic effects; however, logic models are generally understood as simplifications of reality and are less likely to aid planning and communication if they are too complex. 31 In addition, designing evaluations to capture these dynamic effects is likely to be particularly difficult. Interactions between policies can nevertheless be considered in analyses of policy effects; when planning analyses based on logic models, researchers should consider the timing of different policies and potential confounding and interacting effects.

Our logic models were change models and were designed post-policy implementation, which has some disadvantages. It is preferable for logic models incorporating both action and change models to be developed a priori by intervention stakeholders. This ensures a shared understanding of an intervention between stakeholders, facilitates intervention monitoring and formative evaluation and helps to guide plans for evaluation (such as data collection) prior to implementation. However, it is not uncommon for researchers and evaluators to have to develop logic models retrospectively, and our approach ensured that this was done in a reliable and transparent way.

The logic models guided the development of hypotheses and choice of outcome measures in subsequent evaluations of tobacco control policies. The use of logic models enables prospective and theory-based planning of evaluation analyses, which in turn enhances the transparency of policy evaluation. The use of logic models should be encouraged in the evaluation of tobacco control policy, as well as in other areas of public health. Logic models can be developed quickly by policy planners and evaluators with limited resources; however, such models may not reflect the existing evidence base in relation to the policy, nor the expertise of a broad group of stakeholders. Where possible, such as in the context of evaluation research, the development of logic models should encompass: (i) development of an initial logic model; (ii) revision of the initial model based on (systematic) review of the literature and other relevant documentation; (iii) revision of the model based on feedback from stakeholders such as policymakers, researchers and the general public; and (iv) documentation of each stage of logic model development.

This study is funded by the National Institute of Health Research (NIHR) Policy Research Programme (PR-R14-1215-24001). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, the Department of Health and Social Care, arm’s-length bodies or other government department.

Tessa Langley , Associate Professor in Health Economics

Duncan Gillespie , Research Fellow

Sarah Lewis , Professor of Medical Statistics

Katie Eminson , Research Assistant

Alan Brennan , Professor of Health Economics and Decision Modelling

Graeme Docherty , Research Coordinator

Ben Young , Research Associate

Basu   S , Meghani   A , Siddiqi   A . Evaluating the health impact of large-scale public policy changes: classical and novel approaches . Annu Rev Public Health   2017 ; 38 : 351 – 70 .

Fong   GT , Cummings   KM , Borland   R  et al.    The conceptual framework of the international tobacco control (ITC) policy evaluation project . Tob Control   2006 ; 15 : iii3 – iii11 .

MRC . Using Natural Experiments to Evaluate Population Health Interventions: Guidance for Producers and Users of Evidence . Medical Research Council , 2009 . https://mrc.ukri.org/documents/pdf/natural-experiments-guidance/   (24 February 2020, date last accessed) .

Wanless , D.   Securing Good Health for the Whole Population . 2004 . https://www.southampton.gov.uk/moderngov/documents/s19272/prevention-appx%201%20wanless%20summary.pdf   (2 April 2019, date last accessed) .

Pettigrew   M , Cummins   S , Ferrell   C  et al.    Natural experiments: an underused tool for public health?   Public Health   2005 ; 119 : 751 – 7 .

Medical Research Council . Using Natural Experiments to Evaluate Population Health Interventions: Guidance for Producers and Users of Evidence . MRC , 2012 . https://mrc.ukri.org/documents/pdf/natural-experiments-guidance/ (24 February 2020, date last accessed).

Rogers   P . Using programme theory to evaluate complicated and complex aspects of interventions . Evaluation   2008 ; 14 : 29 – 48 .

W.K.K.F . W.K. Kellogg Foundation Logic Model Development Guide . W.K.Kellogg Foundation . 2006 . https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide   (2 April 2019, date last accessed) .

Hayes   H , Parchman   M , Howard   R . A logic model framework for evaluation and planning in a primary care practice-based research network (PBRN) . J Am Board Fam Med   2011 ; 24 : 576 – 82 .

Bickmann   L . The functions of program theory . New Dir Eval   1987 ; 33 : 5 – 19 .

CDC . CDC Evaluation Documents, Workbooks and Tools: Logic Models . Centers for Disease Control . https://www.cdc.gov/eval/tools/logic_models/index.html   (2 April 2019, date last accessed) .

Public Health England . Introduction to Logic Models . 2018 . https://www.gov.uk/government/publications/evaluation-in-health-and-well-being-overview/introduction-to-logic-models   (16 January 2020, date last accessed) .

University of Wisconsin . Logic Models . https://fyi.extension.wisc.edu/programdevelopment/logic-models/   (16 January 2020, date last accessed) .

Midlands and Lancashire Commissioning Support Unit . Your Guide to Using Logic Models . https://www.midlandsandlancashirecsu.nhs.uk/images/Logic_Model_Guide_AGA_2262_ARTWORK_FINAL_07.09.16_1.pdf   (16 January 2020, date last accessed) .

NHS Scotland . Outcomes Frameworks . http://www.healthscotland.com/OFHI/index.html   (16 January 2020, date last accessed) .

Roeseler   A , Burns   D . The quarter that changed the world . Tob Control   2010 ; 19 : i3 – i15 .

Haw   SJ , Gruer   L , Amos   A  et al.    Legislation on smoking in enclosed public places in Scotland: how will we evaluate the impact?   J Public Health   2006 ; 28 : 24 – 30 .

McNeill   A , Lewis   S , Quinn   C  et al.    Evaluation of the removal of point-of-sale tobacco displays in Ireland . Tob Control   2011 ; 20 : 137 – 43 .

Pettigrew   M , Eastmure   E , Mays   N  et al.    The public health responsibility deal: how should such a complex public health policy be evaluated?   J Public Health   2013 ; 35 : 495 – 501 .

Humphreys   D , Eisner   M . Do flexible alcohol trading hours reduce violence? A theory-based natural experiment in alcohol policy . Soc Sci Med   2014 ; 102 : 1 – 9 .

Edwards   R , Thomson   G , Wilson   N . After the smoke has cleared: evaluation of the impact of a new national smoke-free law in New Zealand . Tob Control   2008 ; 17 : e2 .

CDC . Preventing Initiation of Tobacco Use: Outcome Indicators for Comprehensive Tobacco Control Programs–2014 . Centers for Disease Control and Prevention . 2014 . https://www.cdc.gov/tobacco/stateandcommunity/tobacco_control_programs/surveillance_evaluation/preventing_initiation/pdfs/preventing_initiation.pdf   (06 August 2019, date last accessed) .

Nutbeam   D . Evaluating health promotion—progress, problems and solutions . Health Promot Int   1998 ; 13 : 27 – 44 .

Stead   M , Angus   K , Langley   T  et al.    Mass media to communicate public health messages in six health topic areas: a systematic review and other reviews of the evidence . Public Health Res   2019 ; 7 :

Chen   H . Practical Program Evaluation: Assessing and Improving Planning, Implementation, and Effectiveness . Thousand Oaks, CA : SAGE Publications Inc. , 2005 .

www.gov.uk   (29 April 2019, date last accessed) .

Squires   H , Chilcott   J , Akehurst   R  et al.    A framework for developing the structure of public health economic models . Value Health   2016 ; 19 : 588 – 601 .

Brennan   A , Meier   P , Purshouse   R . Developing policy analytics for public health strategy and decisions—the Sheffield alcohol policy model framework . Ann Oper Res   2016 ; 236 : 149 – 76 .

UKCTAS . Do Tobacco Control Policies Work? A Comprehensive Evaluation of the Impact of Recent English Tobacco Policy Using Secondary Data . https://ukctas.net/featured-projects/NIHR-PRP.html   (29 April 2019, date last accessed) .

SSA . Addiction: Instructions for Authors . https://onlinelibrary.wiley.com/page/journal/13600443/homepage/forauthors.html   (17 May 2019, date last accessed) .

Funnell   S . Developing and using a program theory matrix for program evaluation and performance monitoring. In: Rogers   P , Hacsi   T , Petrosino   A , Huebner   T (eds). Program Theory in Evaluation: Challenges and Opportunities. New Directions for Evaluation, no. 87 . San Francisco : Jossey—Bass , 2000 .

  • Research article
  • Open access
  • Published: 10 May 2014

Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions

  • Susan K Baxter 1 ,
  • Lindsay Blank 1 ,
  • Helen Buckley Woods 1 ,
  • Nick Payne 1 ,
  • Melanie Rimmer 1 &
  • Elizabeth Goyder 1  

BMC Medical Research Methodology, Volume 14, Article number: 62 (2014)


There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods.

This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders.

The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short-term outcomes, moderating and mediating factors, and long-term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes that may result from these interventions included changed physician or patient knowledge, beliefs or attitudes, and changes related to the doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long-term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction.

Conclusions

The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies.

Trial registration number

PROSPERO registration number: CRD42013004037.

Worldwide shifts in demographics and disease patterns, accompanied by changes in societal expectations, are driving up treatment costs. As a result, several strategies have been developed to manage the referral of patients for specialist care. In the United Kingdom (UK), referrals from primary care to secondary services are made by General Practitioners (GPs), who may be termed Family Physicians or Primary Care Providers in other health systems. These physicians act as the gatekeeper for patient access to secondary care in the UK and are responsible for deciding which patients require referral to specialist care. Similar models are found in health care services in Australia, Denmark and the Netherlands; however, this process differs from systems in other countries such as France and the United States of America.

As demand outstrips resources in the UK, the volume and appropriateness of referrals from primary care to specialist services has become a key concern. The term “demand management” is used to describe methods which monitor, direct or regulate patient referrals within the healthcare system. Evaluation of these referral management interventions however presents challenges for systematic review methodologies. Target outcomes are diverse, encompassing for example both the reduction of referrals and enhancing the optimal timing of referrals. Also, the interventions are varied and may target primary care, specialist services, or administration or infrastructure (such as triaging processes and referral management centres) [ 1 ].

In systematic review methodology there is increasing recognition of the need to evaluate not only what works, but the theory of why and how an intervention works [ 2 ]. The evaluation of complex interventions such as referral management therefore requires methods which move beyond reductionist approaches, to those which examine wider factors including mechanisms of change [ 3 – 5 ].

A logic model is a summary diagram which maps out an intervention and the conjectured links between the intervention and anticipated outcomes, in order to develop a summarised theory of how a complex intervention works. Logic models seek to uncover the theories of change or logic underpinning pathways from interventions to outcomes [ 2 ]. The aim is to identify assumptions which underpin links between interventions and the intended short- and long-term outcomes and broader impacts [ 6 ]. While logic models have been used for some time in programme evaluation, their potential to make a contribution to systematic review methodology has been recognised only more recently. Anderson et al. [ 7 ] discuss their use at many points in the systematic review process, including scoping the review, guiding the searching and identification stages, and interpreting the results. Referral management entails moving from a system that reacts in an ad hoc way to increasing needs, to one which is able to plan, direct and optimise services so that demand, capacity and access are balanced across an area. Uncovering the assumptions and processes within a referral management intervention therefore requires an understanding of whole systems and assumptions, which a logic model methodology is well placed to address.

A number of benefits from using logic models have been proposed including: identification of different understandings or theories about how an intervention should work; clarification of which interventions lead to which outcomes; providing a summary of the key elements of an intervention; and the generation of testable hypotheses [ 8 ]. These advantages relate to the power of diagrammatic representation as a communication tool. Logic models have the potential to make systematic reviews “more transparent and more cogent” to decision-makers [ 7 ]. The use of alternative methods of synthesis and presentation of reviews is also worthy of consideration given the poor awareness and use of systematic review results amongst clinicians [ 9 ]. In addition, logic models may move systematic review findings beyond the oft-repeated conclusion that more evidence is needed [ 7 ].

While the potential benefit as a communication tool has been emphasised, there has been limited evaluation of logic models. In this study we aimed to further develop and evaluate the use of logic models as synthesis tools, during a systematic review of interventions to manage referrals from primary care to hospital specialists.

Methods

The method we used built on previous work by members of the team [10, 11]. The approach combines conventional rigorous and transparent review methods (systematic searching, identification, selection and extraction of papers for review, and appraisal of potential bias amongst included studies) with a logic model synthesis of the data. Building models systematically from the evidence contrasts with the approach typically adopted, whereby logic models are built through discussion and consensus at meetings of stakeholders or expert groups. The processes followed are described in further detail below.

Search strategy

A study protocol (PROSPERO registration number: CRD42013004037), outlining the research questions, search strategy, inclusion criteria and methods to be used, was devised to guide the review. The primary research question was “what can be learned from the international evidence on interventions to manage referral from primary to specialist care?” Secondary questions were “what factors affect the applicability of international evidence in the UK?” and “what are the pathways from interventions to improved outcomes?”

Systematic searches of published and unpublished (grey literature) sources from healthcare and other industries were undertaken. Rather than a single search, an iterative approach (a number of different searches) and an emergent approach (in which the understanding of the question develops throughout the process) were taken to identify evidence [12, 13]. As the model was constructed, further searches were required to seek additional evidence where there were gaps in the chain of reasoning, as described below. An audit table of the search process was kept, recording the date of each search, the search terms/strategy, the database searched, the number of hits, keywords and other comments, in order that searches were transparent, systematic and replicable.
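As a minimal illustration only (not the authors’ actual tooling), an audit table of this kind can be kept as a structured log; the field names below mirror the columns described above, and the example entries are hypothetical.

```python
# Hypothetical sketch of a search audit record; field names mirror the audit
# table described in the text, and the example entries are invented.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class SearchRecord:
    search_date: str    # date the search was run (ISO format)
    database: str       # e.g. "MEDLINE", "OpenGrey"
    strategy: str       # full search string used
    hits: int           # number of records returned
    keywords: str       # key terms driving this iteration
    comments: str = ""  # e.g. which gap in the chain of reasoning prompted it

audit_log = [
    SearchRecord("2012-11-05", "MEDLINE",
                 '("referral management" OR "demand management") AND "primary care"',
                 412, "referral management", "initial broad search"),
    SearchRecord("2013-03-14", "MEDLINE",
                 '"general practitioner" AND knowledge AND referral',
                 88, "GP knowledge", "gap: knowledge -> referral practice link"),
]

# Persist the audit table so the search process stays transparent and replicable.
with open("search_audit.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(SearchRecord)])
    writer.writeheader()
    writer.writerows(asdict(record) for record in audit_log)
```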

Searches took place between November 2012 and July 2013. A broad range of electronic databases was searched in order to reflect the diffuse nature of the evidence (see Additional file 1). Citation searches of included articles and other systematic reviews were also undertaken, and relevant review articles were used to identify studies. Grey literature (published or unpublished reports, data published on websites, government policy documents and books) was searched for using the OpenGrey, Greysource and Google Scholar electronic databases. Hand searching of the reference lists of all included articles, including relevant systematic reviews, was also undertaken.

Identification of studies

Inclusion/exclusion criteria were developed using the established PICO framework [14]. Participants included all primary care physicians, hospital specialists, and their patients. Interventions included were those which aimed to influence and/or affect referral from primary care to specialist services by having an impact on the referral practices of the primary physician. Studies using any comparator group were eligible for inclusion, and all outcomes relating to referral were considered. With increasing recognition that a broad range of evidence can inform review findings, no restrictions were placed on study design: controlled studies, non-controlled (before-and-after) studies and qualitative work were all examined. Studies eligible for inclusion were limited by date (January 2000 to July 2013). Articles in non-English languages with English abstracts were considered for translation (none were found to meet the inclusion criteria for the review). The key criterion for inclusion in the review was that a study was able to answer or inform the research questions.
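Purely as an illustrative sketch (the field names and the example study are assumptions, not taken from the review), these eligibility rules could be encoded as data so that a coarse first-pass screen is applied consistently; the decisive test in the review itself remained whether a study could answer or inform the research questions.

```python
# Illustrative only: the eligibility rules expressed as data, with a coarse
# first-pass screen. Field names and the example study are assumptions.
from datetime import date

CRITERIA = {
    "population": "primary care physicians, hospital specialists, and their patients",
    "intervention": "aims to influence referral from primary to specialist care",
    "comparator": "any comparator group (or none)",
    "outcomes": "any referral-related outcome",
    "designs": {"controlled", "before-after", "qualitative"},
    "date_range": (date(2000, 1, 1), date(2013, 7, 31)),
}

def screen(study: dict, criteria: dict = CRITERIA) -> bool:
    """Coarse first-pass screen against the date, design and topic criteria."""
    start, end = criteria["date_range"]
    return (
        start <= study["pub_date"] <= end
        and study["design"] in criteria["designs"]
        and study["addresses_referral"]
    )

print(screen({"pub_date": date(2005, 6, 1), "design": "qualitative",
              "addresses_referral": True}))  # True
```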

Selection of papers

Citations identified using the above search methods were imported into Reference Manager Version 12. The database was screened by two reviewers, with identification and coding of potential papers for inclusion. Full paper copies of potentially relevant articles were retrieved for further examination.

Data extraction

A data extraction form was developed using the previous expertise of the review team, trialled using a small number of papers, and refined for use here. Data extractions were completed by one reviewer and checked by a second. Extracted data included: country of the study, study design, data collection method, aim of the study, detail of participants (number, any reported demographics), study methods/intervention details, comparator details if any, length of follow up, response and/or attrition rate, context (referral from what/who to what/who), outcome measures, main results, and reported associations.
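A minimal sketch of the extraction form as a record template is shown below; the field names follow the list above, and the helper that flags blank fields for the second reviewer’s check is our own illustration rather than the authors’ tooling.

```python
# Sketch of the extraction form as a record template; the helper flags blank
# fields for the second reviewer to query. Field names follow the text above.
EXTRACTION_FIELDS = [
    "country", "study_design", "data_collection_method", "aim",
    "participants", "intervention_details", "comparator", "length_of_follow_up",
    "response_or_attrition_rate", "context", "outcome_measures",
    "main_results", "reported_associations",
]

def missing_fields(record: dict) -> list:
    """Return the extraction fields left blank, for the second reviewer to query."""
    return [field for field in EXTRACTION_FIELDS if not record.get(field)]

example = {"country": "UK", "study_design": "before-after", "aim": "reduce referrals"}
print(missing_fields(example))
```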

Quality appraisal

The potential for bias within each quantitative study was assessed drawing on work by the Cochrane Collaboration [15]. We slightly adapted their tool for assessing risk of bias so that the appraisal would be suitable for our broader range of study designs. For the qualitative papers we adapted the Critical Appraisal Skills Programme (CASP) checklist [16] to provide a similar format to the quantitative tool. In addition to assessing the quality of each individual paper, we also considered the overall strength of evidence for papers grouped by typology, drawing on criteria used by Hoogendoorn et al. [17]. Each group of papers was graded as providing either: stronger evidence (generally consistent findings in multiple higher quality studies); weaker evidence (generally consistent findings in one higher quality study and lower quality studies, or in multiple lower quality studies); inconsistent evidence (less than 75% of findings consistent across multiple studies); or very limited evidence (a single study). The strength of evidence appraisal was undertaken at a meeting of the research team to establish consensus.
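These grading rules can be read as a small decision procedure. The sketch below is our own interpretation of the criteria described above (adapted from Hoogendoorn et al. [17]), not the authors’ code; the per-study flags are assumptions made for illustration.

```python
# Our reading of the grading criteria, not the authors' code. Each study is
# summarised by two flags assumed for illustration.
def grade_evidence(studies, consistency_threshold=0.75):
    """studies: list of dicts with 'higher_quality' and 'supports_link' booleans."""
    if len(studies) == 1:
        return "very limited evidence"
    supporting = sum(s["supports_link"] for s in studies)
    consistency = max(supporting, len(studies) - supporting) / len(studies)
    if consistency < consistency_threshold:
        return "inconsistent evidence"
    higher_quality = sum(s["higher_quality"] for s in studies)
    if higher_quality >= 2:
        return "stronger evidence"
    # one higher quality study plus lower quality studies, or multiple lower quality studies
    return "weaker evidence"

print(grade_evidence([
    {"higher_quality": True, "supports_link": True},
    {"higher_quality": True, "supports_link": True},
    {"higher_quality": False, "supports_link": True},
]))  # -> "stronger evidence"
```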

Logic model synthesis

Logic models typically adopt a left-to-right flow of “if…then” propositions to illustrate the chain of reasoning underpinning how interventions lead to immediate (or short term) outcomes and then to longer term outcomes and impacts. This lays out the logic or assumptions that underpin the pathway (in this case, what needs to happen in order for interventions with General Practitioners to impact on referral demand). In our approach, extracted data from the included papers across study designs are combined and treated as textual (qualitative) data. A process of charting, categorising and thematic synthesis [18] of the extracted quantitative intervention data and qualitative data is used to identify the individual elements of the model. A key part of the model is detailing the mechanism(s) of change within the pathway and the moderating and mediating factors which may be associated with or influence outcomes [19]; this is often referred to as the theory of change [2].
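For illustration, the five-column structure and the graded “if…then” links could be represented with a simple data structure such as the one sketched below; the class names and example labels are our own, loosely drawn from the text, and do not reproduce the published model.

```python
# Illustrative representation of the five-column model and graded "if...then"
# links; class names and example labels are ours, not the published model.
from dataclasses import dataclass, field

COLUMNS = ["interventions", "short_term_outcomes", "moderators_mediators",
           "demand_management_outcomes", "impacts"]

@dataclass
class Element:
    column: str   # one of COLUMNS
    label: str    # e.g. "GP education: peer review/feedback"

@dataclass
class Link:
    source: Element
    target: Element
    evidence: str         # "stronger", "weaker", "inconsistent" or "very limited"
    assumption: str = ""  # the "if...then" proposition the link encodes

@dataclass
class LogicModel:
    elements: list = field(default_factory=list)
    links: list = field(default_factory=list)

peer_feedback = Element("interventions", "GP education: peer review/feedback")
gp_behaviour = Element("short_term_outcomes", "change in physician behaviour")
model = LogicModel(
    elements=[peer_feedback, gp_behaviour],
    links=[Link(peer_feedback, gp_behaviour, "stronger",
                "if GPs receive peer feedback then referral behaviour changes")],
)
print(len(model.elements), len(model.links))  # 2 1
```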

Evaluation of the model

Following development of a draft model we sought feedback from stakeholders regarding the clarity of representation of the findings, and potential uses. We carried out group sessions with patient representatives, individual interviews and seminar presentations with GPs and consultants, and also interviews with commissioners (in the UK, commissioning groups comprise individuals who are responsible for planning, agreeing and monitoring services, and who control the budget to be spent). At these sessions we presented the draft model and asked for verbal comments regarding the clarity of the model as a way of understanding the review findings, any elements which seemed to be missing, elements which did not seem to make sense or fit participants’ knowledge or experience, and how participants envisaged that the model could be used. We also gave out feedback forms for participants to provide written comments on these aspects. In addition to these sessions we circulated the draft model via email to topic experts for their input. The feedback we obtained was examined and discussed by the team in order to inform subsequent drafts of the model.

Ethical approval

The main study was secondary research and therefore exempt from requiring ethical approval. Approval for the final feedback phase of the work was obtained from the University of Sheffield School of Health and Related Research ethics committee (reference 0599). Informed consent was obtained from all participants.

Results

The electronic searches generated a database of 8327 unique papers. Of these, 581 papers were selected for full paper review (see Additional file 1). After considering these and completing our further identification procedures, 295 papers were included in the review. Figure 1 illustrates the process of inclusion and exclusion. The included papers consisted of 141 intervention papers and 154 non-intervention papers. The 154 non-intervention papers included 33 qualitative studies and 121 quantitative studies (see Additional file 1).

Figure 1. The process of inclusion and exclusion. A flow chart illustrating the process of paper identification.

A logic model was systematically developed from reviewing and synthesising these papers (see Figure  2 ). The model illustrates the elements in the pathway from demand management interventions to their intended impact. While this paper will refer to the emerging review findings which are currently undergoing peer review, its primary purpose is to describe and evaluate the methodology.

Figure 2. The completed logic model built from examining the identified published literature. The model illustrates the pathway between demand management interventions and intended impact. It reads from left to right, with a typology of demand management interventions in the first column, the immediate or short term outcomes following the interventions in the second column, then factors which may act as barriers to achievement of longer term outcomes in the mediating and moderating factors column, outcomes in terms of demand management at the level of physicians and their practice in the fourth column, and longer term system-wide impacts in the final column. The model indicates (via differing text types) where there was stronger or weaker evidence of links in the pathway.

Logic model development

Following data extraction and quality appraisal, the process of systematically constructing the logic model began. We developed the model column by column, underpinned by the evidence. The model contains five columns detailing the pathway from interventions to short-term outcomes; via moderating and mediating factors; to demand management outcomes; and finally demand management impact.

The first stage in building the model was to develop intervention typology tables from the extracted data, in order to begin the process of grouping and organising the intervention content and processes which would form the first column. This starting point in the pathway details the wide range of interventions which are reported in the literature. It groups these interventions into typologies of: practitioner education; process change; system change; and patient intervention. Within each of these boxes the specific types of interventions in each category have been listed; for example, the practitioner education typology contains interventions such as training sessions, peer feedback and the provision of guidelines. Process change interventions include electronic referral, direct access to screening and consultation with specialists prior to referral. System change interventions include additional staff in the community, gate-keeping and payment systems. We found few examples of patient interventions. The model provides an indication of where the evidence is stronger or weaker. For example, in regard to physician education, it can be seen that peer review/feedback has stronger evidence underpinning its effectiveness, while the use of guidelines is underpinned by conflicting evidence of effectiveness. For all but two of the interventions, the evidence was for either no effect or some level of positive outcome on referral management. For interventions adding staff in primary care, and for the addition or removal of gatekeeping, however, there was strong evidence that these could worsen referral management outcomes.

The interventions thus formed the starting point, and first column of the logic model. By developing a typology we were able to group and categorise the data, and begin to explore questions regarding which types of intervention may work, and what characteristics of interventions may be successful in managing patient referral.

The intervention studies used a wide range of outcomes to judge efficacy. A key aim of logic models is to uncover assumptions in the chain of reasoning between interventions and their expected impacts, and to develop a theory of change which sets out these implicit “if…then” pathways. The next stage in development of the model was therefore to begin to unpack these outcomes and assumptions regarding links between interventions and demand management impacts.

The outcomes were divided into short-term outcomes, long-term outcomes, and broader impacts on demand management systems. To do this we used “if…then” reasoning to deduce the order in which outcomes needed to occur in order to lead to the intended impact. Short-term outcomes were classified as those that impacted immediately or specifically on individual referrers, patients or referrals. Long-term outcomes were categorised as those which had an effect more widely, beyond the level of the individual GP, service or patient, and impact factors were those that would determine the effectiveness of referral management across whole health systems.
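As an illustrative sketch of this ordering (the level tags and example outcomes below are our own shorthand, not the published model), each reported outcome can be placed at a stage in the pathway according to the level at which it acts.

```python
# Shorthand mapping of the ordering described above; the level tags and
# example outcomes are our own illustration.
STAGE_BY_LEVEL = {
    "individual": "short-term outcome",  # acts directly on a referrer, patient or referral
    "service": "long-term outcome",      # effect beyond the individual GP, patient or service
    "system": "impact",                  # determines effectiveness across whole health systems
}

for outcome, level in [("physician knowledge", "individual"),
                       ("patient waiting time", "service"),
                       ("referral rate", "system")]:
    print(f"{outcome}: {STAGE_BY_LEVEL[level]}")
```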

Outcomes and impacts reported in the intervention studies were identified and grouped by typology, by the stage in the pathway, and by the level of evidence. The outcomes column includes all those outcomes which were reported in the included papers. They encompass: whether or not the adequacy of information provided by the referrer to the specialist was improved; whether there was an improvement to patient waiting time; whether there was an increase in the level of GP or patient satisfaction with services; and whether referrals were actioned more appropriately. These outcomes form an important element of the pathway to the final impacts column and demonstrate the importance of identifying all the links in the chain of reasoning. For example, the model outlines that referral information needs to be accurate in order that referrals may be directed to the most appropriate place or person. Interventions need to include evaluation of this interim outcome, and not only consider impact measures such as rate of referral, if they are to explore how and whether an intervention is effective. Also, GP satisfaction with a service will determine where referrals are sent, and patient satisfaction may determine whether a costly appointment with a specialist is attended. Here again, many studies we evaluated used only broad impact measures (such as referral rate) to evaluate outcomes rather than explore where the links in the pathway may be breaking down.

The impacts column contains all those impacts that were reported in the included literature. These were: the impact on referral rate/level; whether attendance rate increased; any impact on referrals being considered appropriate; any impact on the appropriateness of the timing of the referral; and the effect on healthcare cost. As can be seen from the model, the relationship between interventions and a wider impact on systems was challenging to demonstrate from the evidence.

Having developed the first and final two columns of the model, attention then turned to the key middle section. This phase of the work required detailed exploration of the change pathway to explore exactly how the interventions would act on participants in order to produce the demand management outcomes and impacts. The second and third columns of the model are core elements of the theory of change within the model.

While a small amount of data for these elements came from the intervention studies, the majority came from analysis and synthesis of the qualitative papers and non-intervention studies. Much of the intervention literature seemed to have a “black box” between the intervention and the long term impacts. This was a key area of the work where we employed iterative additional searching in order to seek evidence for associations, to ensure that the chain of reasoning was complete. For example, the first additional search aimed to explore evidence underpinning the assumption that increasing GP knowledge would lead to improved referral practice. The second additional search aimed to identify evidence underpinning the link between changes in referral systems and changed physician attitudes or behaviour. The search also sought evidence regarding specific outcomes following interventions to change patient knowledge, attitudes or behaviour.
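A sketch of how such gaps might be flagged is shown below (the link labels are invented for illustration): any link in the pathway without supporting evidence becomes the topic of a further, targeted search, mirroring the two additional searches described above.

```python
# Illustrative gap check: any link in the pathway without supporting evidence
# becomes the topic of an additional, targeted search. Link labels are invented.
links = [
    {"source": "GP education", "target": "physician knowledge", "evidence": "weaker"},
    {"source": "physician knowledge", "target": "referral practice", "evidence": None},
    {"source": "system change", "target": "physician attitudes/behaviour", "evidence": None},
]

def follow_up_searches(links):
    """Return a search topic for every link not yet supported by identified evidence."""
    return [f'evidence linking "{link["source"]}" to "{link["target"]}"'
            for link in links if link["evidence"] is None]

for topic in follow_up_searches(links):
    print("additional search needed:", topic)
```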

The second column of the model details the short-term outcomes for individual GPs, patients, and GP services that may result from interventions. These are the factors which need to be changed within the referrer or referral in order that the longer term outcomes and impacts will happen. The short-term outcomes we identified were: physician knowledge; physician beliefs/attitudes; physician behaviour; doctor-patient interaction; and patient knowledge, attitudes/beliefs or behaviour. Of note is the weaker evidence that changes in physician knowledge impact on referrals, and the greater evidence that changes to physician attitudes and beliefs, and to the doctor-patient interaction, have an impact.

The third column (and final element to be completed) is another key part of the theory of change. This section identifies a range of factors which may be associated with or influence whether the short-term outcomes will lead to the intended longer term outcomes and impacts. This column examines the moderating and mediating variables which may act as predictors of whether an intervention will be successful. They can be considered as similar to the barriers and facilitators often described in qualitative studies. The model details a wide range of these moderating and mediating factors relating to: the physician; the patient; and the organisation. Of particular interest here is the conflicting evidence relating to physician and patient demographic factors (the subject of a large number of studies) influencing referral patterns, and the clearer picture regarding the influence of patient clinical and social factors in the referral process.

Having outlined the content of each column, the following provides an example of the flow of reasoning for one particular type of intervention, underpinned by elements of the model. Much work in the UK has been directed towards issuing guidelines for GPs regarding who and when to refer, with the assumption that changed knowledge will lead to changed referral practice. However, the model questions these assumptions by indicating that there is conflicting evidence regarding the efficacy of this type of intervention, and also suggesting that there is weak evidence of interventions such as these leading to enhanced knowledge outcomes. Perhaps if guidelines focused more on elements of the model where evidence is stronger, such as addressing GP attitudes and beliefs (for example tolerance of risk) or behaviour (such as the optimal content of referral information), this may lead to more successful immediate outcomes. The model highlights, however, that the effect of any guideline intervention will also be modified by GP, patient and service factors, for example the complexity of the case, the GP’s emotional response to the patient and GP time pressure. These potential barriers need to be considered in the implementation of guideline interventions. If these elements can be addressed, however, use of guidelines by GPs may enhance referrer or patient satisfaction, improve waiting times, or change the content of a referral, and thus have a resulting impact at a service-wide level.

Following development of the model we sought feedback from stakeholders regarding the clarity of representation of the findings, and potential uses. This consultation was carried out via individual and group discussions with practitioners, patient and public representatives, and commissioners (individuals who have responsibility for purchasing services), and by circulating the model to experts in the field. In total we received input from 44 individuals (15 GPs, five commissioners, seven patient and public representatives, and 17 hospital specialists from a range of clinical areas). Thirty-eight of the respondents reported that they clearly understood the model; however, four specialists described it as overly complex and two patient representatives reported some confusion in understanding it.

GPs in particular gave positive feedback, highlighting that it was a good fit with their experience of the way referrals are managed, and that it successfully conveyed the complexity of general practice. The model was also described positively as identifying the role of both the GPs’ and the patients’ attitudes and beliefs, and the doctor-patient interaction. Also, GPs noted with satisfaction that the model included the physicians’ emotional response to the patient, which resonated with their experiences. Most specialists also reported that the model was a good fit with their experience of factors influencing referral management. Potential uses of the model described were: as a tool for GP trainees and educators; as a teaching aid for undergraduate medical students; for analysing the demand management pathway when commissioning; for comparing what was being commissioned with what was evidence based; and to direct research into poorly evidenced areas.

Some of the feedback from participants concerned factors that had not been identified in the literature. For example, the potential role of carers, as well as the patient, in doctor-patient interactions was highlighted, as was the potential influence of a GP temporarily covering a colleague’s work. Some amendments were made to the model following this feedback, principally clarifying where there was no evidence versus inconclusive evidence, and editing terminology.

Discussion

While referral management is often considered to have only a capacity-limiting function, the model was able to identify the true complexity of what it aims to achieve. Our model has added to the existing literature by setting out the chain of reasoning that underpins how and whether interventions lead to their intended impacts, and by making explicit the assumptions that underpin the process. The logic model is able to summarise a wealth of information regarding the findings of a systematic review on a single page. The visual presentation of this information was clearly understood by almost all professionals and all commissioners in our sample. This study therefore supports the value of logic models as communication tools. The effectiveness of the model for communicating findings to patients and the public, however, warrants further exploration. While four of the seven patient representatives in our group found value in the model, three found it lacked relevance to patients. Although this was a very small sample, it would be worth exploring in the future whether the topic of the model contributed to this perception. Perhaps a model relating to a specific clinical condition, rather than service delivery, would be perceived as of greater relevance to patients and the public.

The use of stakeholders in developing a theory of change has been recommended by other authors [20]. Participants in our sample were able to provide valuable input by suggesting areas where there were seeming gaps in evidence. Our method of building the model from the literature sought to be systematic and evidence-based, rather than influenced by expert or stakeholder opinion (as is more typical of logic model development). However, while it is important to be alert to potential sources of bias in the review process, the involvement of stakeholders in identifying potential gaps in evidence, alongside systematic identification processes, should not be ignored. The logic model we have produced outlines only where we identified literature, and does not include the two areas suggested by the stakeholders. We debated whether these suggested areas should be added to the model, and concluded that it would be counter-intuitive, in a model presenting the evidence, to show areas of no evidence. Future development of the methodology could consider including “ghost boxes” or similar to indicate where experts or practitioners believe that there are links for which there is, as yet, no research to substantiate them.

We endeavoured to enhance the communicative power of the model by adopting a system of evaluating the strength of evidence underpinning its elements. The determination of strength of evidence is a challenging area, and our adopted system is likely to be the subject of debate. Other widely used methods of appraising the quality of evidence (such as that used by the Cochrane Collaboration [15]) typically use different checklists for different study designs. The method that we adopted was able to be inclusive of the diversity of types of evidence in our review. In selecting an approach we also aimed to move beyond a simple count of papers. Such a “more equals stronger” approach may be misleading, as a greater number of studies may only indicate where work has been carried out, or perhaps where a topic is more amenable to investigation. The evaluation we utilised included elements of both quantity and quality, together with consideration of consistency. However, the volume of studies was still influential in the rating. While we believe that the strength of evidence indicator adds considerably to the model and review findings, we recognise that there is still work to be done in refining this aspect of the method.

Our process of synthesising the data to develop the logic model draws on methodological developments in the area of qualitative evidence synthesis [18, 21]. Our use of categorising and charting to build elements also draws on techniques of Framework Analysis [22], which is commonly used as a method of qualitative data analysis in policy research. The Framework Method may be particularly useful to underpin this process, as it is a highly systematic method of categorising and organising data [23]. By including a diverse range of evidence, our method also resonates with the growing use of mixed-methods research, which appreciates the contribution of both qualitative and quantitative evidence to answering a research question. While our method utilises the model for synthesis at the latter end of a systematic review, logic models have been suggested as being of value at various stages of the process [7]. Recently it has been proposed that logic models should be added to the established PICO framework for establishing review parameters in the initial stages [24].

In order to be of value, a visual representation should stand up to scrutiny so that concepts and meaning can be grasped by others and stimulate discussion [21]. We believe that the evaluation of our model indicates that it met these requirements. The use of diagrams to explain complex interventions has been criticised in the past on the grounds that it can fail to identify mechanisms of change [19]. We argue that, by using a wide range of literature and employing iterative searching to examine potential associations, this potential limitation can be overcome. Vogel [25] emphasised that diagrams should combine simplicity with validity: an acknowledgement of complexity, together with recognition that things are always more complex than can be depicted. The vast majority of feedback on the model reported that this complexity represented the area as participants knew it, and that this was a key asset of the model.

Conclusions

This work has demonstrated the potential value of a logic model synthesis approach within systematic review methodologies. In this piece of work in particular, the method proved valuable in unpicking the complexity of the area, illuminating multiple outcomes and potential impacts, and highlighting a range of factors that need to be considered if interventions are to lead to their intended impacts.

References

Faulkner A, Mills N, Bainton D, Baxter K, Kinnersley P, Peters TJ, Sharp D: A systematic review of the effect of primary care-based service innovations on quality and patterns of referral to specialist secondary care. Brit J Gen Pract. 2003, 53: 878-884.


Weiss CH: Nothing as practical as a good theory: exploring theory-based evaluation for comprehensive community initiatives for children and families. New Approaches to Evaluating Community Initiatives. Edited by: Connell JP, Kubisch AC, Schorr LB, Weiss CH. 1995, Washington DC: Aspen Institute, 65-69.

Plsek PE, Greenhalgh T: The challenge of complexity in healthcare. BMJ. 2001, 323: 625-628. 10.1136/bmj.323.7313.625.


Miles A: Complexity in medicine and healthcare: people and systems, theory and practice. J Eval Clin Pract. 2009, 15: 409-410. 10.1111/j.1365-2753.2009.01204.x.


Pawson R: Evidence-based policy: the promise of realist synthesis. Evaluation. 2002, 8: 340-358. 10.1177/135638902401462448.


Rogers PJ: Theory-based evaluation: reflections ten years on. N Dir Eval. 2007, 114: 63-81.

Anderson LM, Petticrew M, Rehfuess E, Armstrong R, Ueffing E, Baker P, Francis D, Tugwell P: Using logic models to capture complexity in systematic reviews. Res Synth Meth. 2011, 2: 33-42. 10.1002/jrsm.32.

Rogers PJ: Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation. 2008, 14: 29-48. 10.1177/1356389007084674.

Wallace J, Nwosu B, Clarke M: Barriers to the uptake of evidence from systematic reviews and meta-analyses: a systematic review of decision makers’ perceptions. BMJ Open. 2012, 2: e001220. doi:10.1136/bmjopen-2012-001220


Baxter SK, Killoran A, Kelly M, Goyder E: Synthesizing diverse evidence: the use of primary qualitative data analysis methods and logic models in public health reviews. Public Health. 2010, 124: 99-106. 10.1016/j.puhe.2010.01.002.


Allmark P, Baxter S, Goyder E, Guillaume L, Crofton-Martin G: Assessing the health benefits of advice services: using research evidence and logic model methods to explore complex pathways. Health Soc Care Comm. 2013, 21: 59-68. 10.1111/j.1365-2524.2012.01087.x.

EPPI-Centre: Methods for Conducting Systematic Reviews. 2010, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London

Grant MJ, Brettle A, Long AF: Poster Presentation. Beyond the Basics of Systematic Reviews. Developing a Review Question: A Spiral Approach to Literature Searching. 2000, Oxford

Schardt C, Adams MB, Owens T, Keitz S, Fontelo P: Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC Med Inform Decis Mak. 2007, 7: 16. doi:10.1186/1472-6947-7-16

The Cochrane Collaboration: Cochrane handbook for systematic reviews of interventions, version 5.1.0, 2011. [Handbook.cochrane.org]

Critical Appraisal Skills Programme (CASP) qualitative research checklist. [ http://www.casp-uk.net/wpcontent/uploads/2011/11/CASP_Qualitative_Appraisal_Checklist_14oct10.pdf ]

Hoogendoorn WE, van Poppel MN, Bongers PM, Koes BW, Bouter LM: Physical load during work and leisure time as risk factors for back pain. Scand J Work Environ Health. 1999, 25: 387-403. 10.5271/sjweh.451.

Thomas J, Harden A: Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008, 8: 45. doi:10.1186/1471-2288-8-45

Weiss CH: Theory-based evaluation: past present and future. N Dir Eval. 1997, 76: 68-81.

Blamey A, Mackenzie M: Theories of change and realistic evaluation: peas in a pod or apples and oranges?. Evaluation. 2007, 13: 439-455. 10.1177/1356389007082129.

Dixon-Woods M, Fitzpatrick R, Roberts K: Including qualitative research in systematic reviews: opportunities and problems. J Eval Clin Pract. 2001, 7: 125-133.

Gale NK, Heath G, Cameron E, Rashid S, Redwood S: Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013, 13: 117. doi:10.1186/1471-2288-13-117

Ritchie J, Lewis J: Qualitative Research Practice: A Guide for Social Science Students and Researchers. 2003, London: Sage

McDonald KM, Schultz EM, Chang C: Evaluating the state of quality-improvement science through evidence synthesis: insights from the Closing the Quality Gap Series. Perm J. 2013, 17: 52-61. 10.7812/TPP/13-010.

Vogel I: Review of the Use of Theory of Change in International Development: Review Report. 2012, London: Department for International Development

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/14/62/prepub


Acknowledgements

We would like to thank the following members of the project steering group for their valuable input: Professor Danuta Kasprzyk; Professor Helena Britt; Ellen Nolte; Jon Karnon; Nigel Edwards; Christine Allmar; Brian Hodges; and Martin McShane.

This project was funded by the National Institute for Health Research (Health Services and Delivery Research Programme, project number 11/1022/01). The views and opinions expressed therein are those of the authors and do not necessarily reflect those of the Health Services and Delivery Research Programme, NIHR, NHS or the Department of Health.

Author information

Authors and affiliations

School of Health and Related Research, University of Sheffield, Regent Court, 30 Regent Street, Sheffield, S14DA, UK

Susan K Baxter, Lindsay Blank, Helen Buckley Woods, Nick Payne, Melanie Rimmer & Elizabeth Goyder


Corresponding author

Correspondence to Susan K Baxter .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contribution

SB was a reviewer and led development of the logic model. LB was principal investigator and lead reviewer. HB carried out the searches. NP and EG provided methodological and topic expertise throughout the work. MR carried out the evaluation phase. All members of the team read and commented on drafts of this paper.

Electronic supplementary material


Additional file 1: Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.(DOCX 42 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Baxter, S.K., Blank, L., Woods, H.B. et al. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions. BMC Med Res Methodol 14 , 62 (2014). https://doi.org/10.1186/1471-2288-14-62


Received : 16 January 2014

Accepted : 30 April 2014

Published : 10 May 2014

DOI : https://doi.org/10.1186/1471-2288-14-62


Keywords

  • Systematic review
  • Methodology
  • Evidence synthesis
  • Logic model
  • Demand management
  • Referral systems
  • Referral management

BMC Medical Research Methodology

ISSN: 1471-2288


Program Evaluation Through the Use of Logic Models

Author statement

o Conceptualization and formulation of overarching research goals

o Methodology and design

• Jovan D. Miles, Pharm.D., AAHIVP, CPh

o Conceptualization of research goals

o Writing of original draft

• Michael D. Thompson, Pharm.D., AAHIVP

o Supervision of research activity

• Briana Journee, Pharm.D., MBA, AAHIVP, CPh

• Eboni Nelson

o Writing and editing of revised draft

Introduction:

In order to design and evaluate the effectiveness of a new clinical program or intervention, pharmacists must be equipped with the skills and knowledge that come from familiarity with and use of a logic model. Currently, most pharmacy school curricula do not include logic model exercises to instill these necessary skills in doctor of pharmacy students. This report describes how a logic model exercise can be permanently implemented into pharmacy curricula in order to develop critical thinking skills that will allow students to become more well-rounded in their future practice of pharmacy.

Methods:

A 23-point questionnaire was developed by the principal investigator, based primarily on feedback from student reflection papers and on areas of interest from the author’s prior experience and use of logic models. It was distributed to assess the knowledge, attitudes, and perspectives of students enrolled or previously enrolled in the course.

Results:

Questionnaires were received from 128 students, representing approximately 32% of those provided the opportunity to participate. The majority of students (72.98%) viewed the potential benefits of learning about logic models favorably. Overall, 64.86% of students agreed that the experience gained through constructing their own logic model was an intellectually stimulating activity.

Conclusions:

The logic model is an effective tool that can be used to teach pharmacy students how planned program development would contribute to combating various public health issues.

Introduction

When designing a new intervention or clinical program, pharmacists must be familiar with how to design, implement, and evaluate the new venture in order to justify its effectiveness and impact. A logic model is one tool capable of doing this. More specifically, a logic model is a systematic and visual way to represent the relationship between the resources (e.g., human, financial, community) available to operate the program, the actions and activities required to implement the program, and the changes or results to be achieved (outcomes). 1
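As a toy illustration of this resources-to-outcomes structure (the program and all of its entries below are hypothetical, not drawn from the course or the cited framework), a logic model’s stages can be laid out as simple lists:

```python
# Toy example of the resources -> activities -> outcomes structure, using a
# hypothetical pharmacist-led blood pressure screening program.
program_logic_model = {
    "resources_inputs": ["community pharmacist time", "blood pressure monitors",
                         "space in the pharmacy", "partnership with a local clinic"],
    "activities": ["monthly screening events", "brief counselling sessions",
                   "referral of elevated readings to primary care"],
    "outputs": ["number of patients screened", "number of referrals made"],
    "short_term_outcomes": ["increased patient awareness of blood pressure status"],
    "long_term_outcomes": ["improved hypertension control in the community"],
}

for stage, items in program_logic_model.items():
    print(f"{stage}: {'; '.join(items)}")
```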

As of July 2020, there are 141 United States-based colleges and schools of pharmacy with accredited professional degree programs, with only six programs located at a Historically Black College/University (HBCU). 2 Florida A&M University (FAMU) College of Pharmacy & Pharmaceutical Sciences is a mid-size HBCU with two campuses located in the southern United States, and is a fully accredited program through the Accreditation Council for Pharmacy Education (ACPE). 2 During the time of this study, the Introduction to Public Health course was a required course for students to take in the fall semester of their second professional year (P2) of pharmacy school (since the completion of the study, the course has been moved to the fall of the third professional year [P3]). Pharmacy curricula focus heavily on medication therapy management but not as much on how to design, implement, and evaluate the effectiveness of the programs and services that pharmacists provide or may provide in the future. In order to familiarize students with the use of logic models, a didactic module was developed to provide students an opportunity to construct a logic model and so maximize their understanding of its use. The learning experience was implemented as follows: (1) lectures explaining the history and use of logic models, with a focus on community program development, were given by the professor; (2) students were then assigned to groups in order to identify a community need and then to construct a logic model to describe its implementation and evaluation; and (3) students then reflected on their views to ascertain whether they felt the logic model would be a useful tool in practice.

Introduction of logic models as a theoretical framework can be used to rationalize how a desired intervention and developed program can yield intended results and patient outcomes. Incorporating the use of logic models in the pharmacy curriculum will enable colleges of pharmacy to meet ACPE Standard 1 (Foundational Knowledge), Standard 2 (Essentials for Practice and Care), Standard 3 (Approach to Practice and Care), as well as Domain 2.3 (Health and Wellness Promoter), Domain 2.4 (Population-based Care), and Domain 3.4 (Interprofessional Collaboration) of the Center for Advancement of Pharmacy Education Outcomes. Using logic models allows students to gain a higher level of understanding, critical thinking, and program development skills to address public health concerns. 3 The goal of this study was to assess the attitudes and feelings of pharmacy students towards the use of logic models as a tool for program development and utilization in pharmacy practice.

Methods

A new learning experience concerning logic models was included in a required two-credit hour public health course for P2 students. A lecture was provided that included the history and use of logic models in public health and their applicability to evaluating pharmacy programs, services, and interventions. A sample logic model (Fig. 1) was used for instructional purposes to provide students with an opportunity to see a community-based model recommended for use in actual practice. 4 Since the course is also concerned with health disparities and population health issues, students were assigned to work in groups and tasked with designing a community-based program to address a potential health disparity within a community.

Fig. 1. Sample logic model. 4

In prior years, student feedback was collected regarding logic models through a reflection paper on their experience. A 23-point questionnaire was developed by the principal investigator, based primarily on feedback from the student reflection papers and on areas of interest from the author’s prior experience and use of logic models. It was distributed to assess the knowledge, attitudes, and perspectives of students enrolled or previously enrolled in the course. The purpose of the questionnaire was to validate key topics in the use of logic models in program development by pharmacy students within various communities that face health disparities. The questionnaire collected demographic information and student responses through the use of a Likert scale (Table 1). The questionnaire was administered using an online survey platform, over a 30-day open period for students to respond to an email request for participation. Students who agreed to participate signed an informed consent agreement, and the overall project was approved by the FAMU Institutional Review Board. The inclusion criteria included students who were enrolled in their second, third, or fourth professional year for the 2016–2017 academic year, had completed the Introduction to Public Health in Pharmacy course, and had participated in the logic model program development group project. The 2016–2017 academic year was selected because there were over 350 eligible student participants for the study, with each participant responding anonymously to the questionnaire. The exclusion criteria included students who were enrolled in their first professional year for the 2016–2017 academic year; these students were excluded because they had not yet taken the Introduction to Public Health course. Additional exclusions included any student who had not taken the Introduction to Public Health in Pharmacy course, and any individuals who were not currently enrolled in the FAMU College of Pharmacy & Pharmaceutical Sciences during the 2016–2017 academic year.

Table 1. Logic model questionnaire.

The questionnaire distributed included student demographics (age, sex, race/ethnicity) and students’ professional year of study. The assessment questions reflected the students’ personal knowledge as it pertained to the logic model, their perception of logic models, and their experience with using the logic model to develop a public health program. A pilot test of the questionnaire in the previous year led to the development and execution of this research study.

Descriptive analyses were used to summarize demographic information as well as the knowledge and attitudes of the students regarding logic models.
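As a minimal sketch of such a descriptive summary (not the study’s analysis code; the example responses are invented), a five-point Likert item can be collapsed into a percentage-agreement figure as follows:

```python
# Sketch of a descriptive summary: collapsing a 5-point Likert item into
# percentage agreement. The example responses are invented.
from collections import Counter

def percent_agree(responses):
    counts = Counter(responses)
    agree = counts["agree"] + counts["strongly agree"]
    return round(100 * agree / len(responses), 2)

item_responses = ["strongly agree", "agree", "neutral", "agree", "disagree"]
print(percent_agree(item_responses))  # 60.0
```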

Results

Demographic information

Questionnaires were received from 128 students, representing approximately 32% of those provided the opportunity to participate (questionnaire participation was not a mandatory component of the course). There were 66 P2, 60 P3, and two fourth-year student participants who completed the disseminated questionnaire. The majority of student respondents were female (75.78%), African American (53.91%), aged 18–24 years (53.91%), and located in Tallahassee, Florida (71.88%). The second largest groups of respondents were Caucasian (22 participants), aged 25–34 years (49 participants), and located in Crestview, Florida (the distance-learning campus) (Table 2).

Table 2. Basic demographics.

P2 = second professional year; P3 = third professional year; P4 = fourth professional year.

Knowledge and attitudes about logic models

The majority of students (72.98%) viewed the potential benefits of learning about logic models favorably. These students believed that they learned new ideas, approaches, and skills that will assist them in implementing a community program once they become pharmacists. Moreover, 66.1% and 68% of students agreed, respectively, that use of this tool is a simple way to monitor program performance and to communicate progress (or lack of progress) in a program (Fig. 2). About 71% of students believed that the logic model provides accountability for a program and ensures that program outcomes are measured appropriately. When it came to utilization of logic models in the development of a future public health program, 79.5% of students agreed that it would provide an opportunity for pharmacists to help the community address a public health problem (Fig. 3). Furthermore, 75.2% of respondents agreed that developing a public health program would increase collaboration with other healthcare professionals for more patient-centered care.

Fig. 2. Pros of the logic model.

Fig. 3. The utilization of logic models in developing a public health program will be an opportunity to help the community address a public health problem.

The overwhelming majority of students agreed that it is the pharmacist’s responsibility to bring awareness to and address public health issues in the community through program development and implementation via logic models.

In contrast, 57.69% of students believed that logic models do not capture all qualities of a program. Likewise, 52.88% believed the model does not explain what supported or hindered the process of program development. Interestingly, 56% of students felt that utilizing logic models in implementing public health programs would be an overwhelming responsibility for pharmacists. Overall, 64.86% of students agreed that the experience gained through constructing their own logic model was an intellectually stimulating activity. More than half of the participants agreed that, if they were to create a program in the pharmacy setting, they would use the logic model to plan, develop, implement, and evaluate it (Fig. 4).

Fig. 4. How likely are you to use the logic model as a tool to plan, develop, implement, and evaluate your program?

Discussion

Pharmacy learners must be trained to become outcome oriented in order to successfully practice and administer programs effectively in today’s healthcare environment. Public health professionals are able to use logic models to evaluate program activities and ensure those activities are contributing to the success of accomplishing the overarching goals. 5 Although the use of logic models in program development and evaluation is a common learning experience in public health curricula and practice, it is not an evaluative tool routinely taught in pharmacy programs. From a pharmacy student perspective, this is an important tool as it provides an opportunity to think critically through the design and implementation of a project and to identify the relationship between program objectives and eventual measurable outcomes. Logic models increase critical thinking skills by allowing the participant to prioritize focus and determine the appropriate evaluation tools needed for assessment, which will in turn allow for evidence-based recommendations to implement improvements to programs. 5 Additional detailed descriptions of logic models, definitions, uses, and more explanative information can be found in the literature. 1 , 6 – 8 The incorporation of this tool into pharmacy curricula is important and helps to develop practitioners who are outcome driven in their approach to care and program implementation. This tool can be incorporated across pharmacy curricula in courses that involve disease state management, pharmacy management, and professional development. Although students evaluated the use of this model favorably, it was clear that it represented a different way of problem solving for them. The goal is to permanently include this tool as an important component of this course and to expand its use in other courses in order to supplement critical-thinking and decision-making experiences for students.

Conclusions

As pharmacy students graduate and pursue interprofessional roles, the importance of becoming outcome oriented must be fully realized. The decision-making process in health systems requires that practitioners be outcome driven in their approach to care in order to justify their substantive roles to administrators and their place as major contributors to improving patient care. The logic model is an effective tool that can be used to teach pharmacy students how planned program development would contribute to combating various public health issues. Furthermore, the application of logic models can guide pharmacists on how to implement effective interventions in patient care and achieve optimal health outcomes.



Computer Science > Human-Computer Interaction

Title: Apprentices to Research Assistants: Advancing Research with Large Language Models

Abstract: Large Language Models (LLMs) have emerged as powerful tools in various research domains. This article examines their potential through a literature review and firsthand experimentation. While LLMs offer benefits like cost-effectiveness and efficiency, challenges such as prompt tuning, biases, and subjectivity must be addressed. The study presents insights from experiments utilizing LLMs for qualitative analysis, highlighting successes and limitations. Additionally, it discusses strategies for mitigating challenges, such as prompt optimization techniques and leveraging human expertise. This study aligns with the 'LLMs as Research Tools' workshop's focus on integrating LLMs into HCI data work critically and ethically. By addressing both opportunities and challenges, our work contributes to the ongoing dialogue on their responsible application in research.


Pharmacist intervention for pediatric asthma: A systematic literature review and logic model

Affiliations

  • 1 State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences, University of Macau, Macao.
  • 2 State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences, University of Macau, Macao; Department of Public Health and Medicinal Administration, Faculty of Health Sciences, University of Macau, Macao.
  • 3 State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences, University of Macau, Macao; Department of Public Health and Medicinal Administration, Faculty of Health Sciences, University of Macau, Macao. Electronic address: [email protected].
  • PMID: 37679253
  • DOI: 10.1016/j.sapharm.2023.08.008

Background: Asthma is highly prevalent in children. Evidence about pharmacist-led interventions in the management of pediatric asthma is emerging.

Objective: To summarize empirical evidence of pharmacist-led interventions for pediatric asthma patients, and to identify the components of a logic model, which can inform evidence-based pharmacy practice.

Methods: PubMed, Web of Science, Embase, Scopus, ScienceDirect, Medline and CNKI were searched. Studies concerning pharmacist-led interventions for pediatric asthma patients with an interventional design published between January 2013 and February 2023 were selected for analysis. Literature was searched and retrieved according to PRISMA guidelines. Components of pharmacist-led interventions were compiled into a logic model comprising input, activity, output, outcome and contextual factors.

Results: The initial search retrieved 2291 records, of which 35 were included in the analysis. The main interventional activities were optimising medicines use and asthma prevention and control. Commonly reported outputs were medication adherence, knowledge and inhaler technique. The main economic outcomes included the costs of medication and hospitalization; clinical outcomes included Childhood Asthma Control Test/Asthma Control Test scores and lung function (FEV1% and PEF%); humanistic outcomes included patients' quality of life and satisfaction. Social, economic, political, and technological factors were identified as contextual factors.

Conclusion: The logic model summarized components of interventions evaluated in literature. It provides a blueprint for pharmacist-led management of pediatric asthma. Further research can focus on the pharmacists' role in a multidisciplinary healthcare professional team and transition of care in patient-centered management of pediatric asthma.

Copyright © 2023 Elsevier Inc. All rights reserved.
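To make the component structure described in the Methods concrete, the sketch below shows one possible way to represent a logic model's inputs, activities, outputs, outcomes and contextual factors as a simple data structure. The class name, field layout and example values are illustrative assumptions, not anything specified in the review itself.

# A minimal sketch of the logic model structure described in the review above
# (inputs, activities, outputs, outcomes, contextual factors). Field names and
# example values are hypothetical, for illustration only.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list[str] = field(default_factory=list)      # resources, e.g. pharmacist time
    activities: list[str] = field(default_factory=list)  # what the intervention does
    outputs: list[str] = field(default_factory=list)     # immediate, measurable products
    outcomes: list[str] = field(default_factory=list)    # clinical, economic, humanistic results
    context: list[str] = field(default_factory=list)     # social, economic, political, technological factors

# Example populated with the kinds of components reported in the abstract above.
model = LogicModel(
    inputs=["pharmacist-led service"],
    activities=["optimise medicines use", "asthma prevention and control"],
    outputs=["medication adherence", "knowledge", "inhaler technique"],
    outcomes=["asthma control test scores", "lung function (FEV1%, PEF%)", "quality of life"],
    context=["social", "economic", "political", "technological"],
)

print(model.outcomes)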

RELATED SOURCES

  1. Developing and Optimising the Use of Logic Models in Systematic ...

    Background Logic models are becoming an increasingly common feature of systematic reviews, as is the use of programme theory more generally in systematic reviewing. Logic models offer a framework to help reviewers to 'think' conceptually at various points during the review, and can be a useful tool in defining study inclusion and exclusion criteria, guiding the search strategy, identifying ...

  2. A logic model framework for evaluation and planning in a primary care

    What is a logic model? The logic model has proven to be a successful tool for program planning as well as implementation and performance management in numerous fields, including primary care (2-14). A logic model (see Figure One) is defined as a graphical/textual representation of how a program is intended to work and links outcomes with processes and the theoretical assumptions of the ...

  3. Enhancing the Effectiveness of Logic Models

    One of the most widely used communication tools in evaluation is the logic model. Despite its extensive use, there has been little research into the visualization aspect of the logic model. ... revisions decreased mental effort and reduced the amount of time taken to review the model. Together, the findings from the study support the claim that ...

  4. Advancing complexity science in healthcare research: the logic of logic

    A typology of logic model types is proposed based on a scoping review of the literature, along with a formal methodology for developing dynamic models, referred to as "type 4" logic models. We hope this will help researchers to a) know which logic model type to use when evaluating interventions and b) overcome the challenges of modelling ...

  5. PDF Introducing Logic Models

    Use of theory of change and program logic models began in the 1970s. Carol Weiss (1995), Michael Fullan (2001) and Huey Chen (1994, 2005) are among the pioneers and champions of the use of program theory in program design and evaluation. U.S. Agency for International Development's (1971) logical framework approach and Claude Bennett's ...

  6. Using logic models to capture complexity in systematic reviews

    Second, logic models can be used to direct the review process more specifically. They can help justify narrowing the scope of a review, identify the most relevant inclusion criteria, guide the literature search, and clarify interpretation of results when drawing policy-relevant conclusions about review findings.

  7. PDF Developing Logic Models

    An introduction to using Logic Models in CIDG reviews. What are Logic Models? Logic models are diagrams which map out important concepts within a systematic review. They typically appear in the background of a review, but may provide an underlying framework for both the results section and the discussion.

  8. Developing logic models to inform public health policy outcome

    We used an iterative process to develop models for each policy on the timeline in Fig. 2: (1) development of an overarching logic model; (2) review of government documents and existing literature to populate individual policy models and (3) refinement of logic models through multiple rounds of stakeholder feedback.

  9. PDF Introducing Logic Models

    1996. This publication promoted the structures and vocabulary of logic models. The W. K. Kellogg Foundation also was instrumental in spreading the use of logic models with its Logic Model Development Guide (2001). For those readers interested in more detail on the historical evolution of logic models, see the references ...

  10. Developing and Optimising the Use of Logic Models in Systematic Reviews

    As understood in the program evaluation literature, logic models are one way of representing the underlying processes by which an intervention effects a change on individuals, communities or organisations. ... This distinction fits in well with the different stages of a systematic review. A logic model provides a sequential depiction of the ...

  11. Using logic model methods in systematic review synthesis: describing

    The completed logic model built from examining the identified published literature. The model illustrates the pathway between demand management interventions and intended impact. ... While our method utilises the model for synthesis at the latter end of a systematic review, logic models have been suggested as being of value at various stages of ...

  12. Using logic model methods in systematic review synthesis: describing

    Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. Methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature.

  13. Towards a taxonomy of logic models in systematic reviews and ...

    The taxonomy distinguishes 3 approaches (a priori, staged, and iterative) and 2 types (systems-based and process-orientated) of logic models. An a priori logic model is specified at the start of the systematic review/HTA and remains unchanged. With a staged logic model, the reviewer prespecifies several points, at which major data inputs ...

  14. PDF Logic models for program design, implementation, and evaluation

    This Logic Model Workshop Toolkit is designed to help practitioners learn the overall purpose of a logic model, the different elements of a logic model, and the appropriate steps for developing and using a logic model for program evaluation. This toolkit includes a facilitator workbook, a participant workbook, and a slide deck.

  15. On the same page: Co-designing the logic model

    Table 1. Steps taken in developing the programme logic model. Step 1: 10 September 2016. In an inductive approach to identify the potential components of the logic model, a review of the literature (both peer-reviewed and the grey literature, including existing RFW internal documents) was conducted to ...

  16. Program Evaluation Through the Use of Logic Models

    Interestingly, 56% of students felt that utilizing logic models in implementing public health programs will be an overwhelming responsibility for pharmacists. Overall, 64.86% of students agreed that the experience gained through constructing their own logic model was an intellectually stimulating activity. More than half of the participants ...

  17. Effectiveness of Hospital Pharmacist Interventions for COPD ...

    Purpose: This review aimed to summarize empirical evidence about pharmacist-led interventions for chronic obstructive pulmonary disease (COPD) patients in hospital settings and to identify the components of a logic model (including input, interventions, output, outcome and contextual factors) to inform the development of hospital pharmacist's role in COPD management.
