A qualitative risk assessment methodology for scientific expert panels.

Authors (ORCID iDs)

  • Thébault A | 0000-0001-5886-8222
  • Saegerman C | 0000-0001-9087-7436
  • Lancelot R | 0000-0002-5826-5242

Revue Scientifique et Technique (International Office of Epizootics), 01 Dec 2011, 30(3): 673-681. https://doi.org/10.20506/rst.30.3.2063 | PMID: 22435181


Citations & impact 


Article citations

Assessment and strategy development for SARS-CoV-2 screening in wildlife: a review.

Italiya J, Bhavsar T, Černý J

Vet World, 16(6):1193-1200, 04 Jun 2023

Cited by: 1 article | PMID: 37577208 | PMCID: PMC10421538

A review of qualitative risk assessment in animal health: suggestions for best practice.

Horigan V, Simons R, Kavanagh K, Kelly L

Front Vet Sci, 10:1102131, 07 Feb 2023

Cited by: 1 article | PMID: 36825234 | PMCID: PMC9941190

Estimating risk of introduction of Ebola virus disease from the Democratic Republic of Congo to Tanzania: a qualitative assessment.

Rugarabamu S, George J, Mbanzulu KM, Mwanyika GO, Misinzo G, Mboera LEG

Epidemiologia (Basel), 3(1):68-80, 11 Feb 2022

Cited by: 1 article | PMID: 36417268 | PMCID: PMC9620938

Chronic wasting disease transmission risk assessment for farmed cervids in Minnesota and Wisconsin.

Kincheloe JM, Horn-Delzer AR, Makau DN, Wells SJ

Viruses, 13(8):1586, 11 Aug 2021

Cited by: 2 articles | PMID: 34452450 | PMCID: PMC8402894

Risk assessment of SARS-CoV-2 infection in free-ranging wild animals in Belgium.

Logeot M, Mauroy A, Thiry E, De Regge N, Vervaeke M, Beck O, De Waele V, Van den Berg T

Transbound Emerg Dis, 69(3):986-996, 26 May 2021

Cited by: 6 articles | PMID: 33909351 | PMCID: PMC8242903

Similar Articles 


Scientific basis of the OCRA method for risk assessment of biomechanical overload of upper limb, as preferred method in ISO standards on biomechanical risk factors.

Colombini D, Occhipinti E

Scand J Work Environ Health, 44(4):436-438, 01 Jul 2018

Cited by: 6 articles | PMID: 29961081

Cumulative incidence rate of medical consultation for fecundity problems--analysis of a prevalent cohort using competing risks.

Duron S, Slama R, Ducot B, Bohet A, Sørensen DN, Keiding N, Moreau C, Bouyer J

Hum Reprod, 28(10):2872-2879, 09 Jul 2013

Cited by: 11 articles | PMID: 23838160

EFSA statement on the review of the risks related to the exposure to the food additive titanium dioxide (E 171) performed by the French Agency for Food, Environmental and Occupational Health and Safety (ANSES).

EFSA (European Food Safety Authority)

EFSA J, 17(6):e05714, 12 Jun 2019

Cited by: 18 articles | PMID: 32626336 | PMCID: PMC7009203

Integrating surveillance of animal health, food pathogens and foodborne disease in the European Union.

Berthe F, Hugas M, Makela P

Rev Sci Tech, 32(2):521-528, 01 Aug 2013

Cited by: 3 articles | PMID: 24547655

Safety and nutritional assessment of GM plants and derived food and feed: the role of animal feeding trials.

EFSA GMO Panel Working Group on Animal Feeding Trials

Food Chem Toxicol, 46 Suppl 1:S2-70, 13 Feb 2008

Cited by: 43 articles | PMID: 18328408


Affiliation

  • 1 Unité sous contrat (USC) Ecole nationale vétérinaire d'Alfort (ENVA)/Agence nationale de sécurité sanitaire (Anses) - Épidémiologie des maladies animales infectieuses (EPIMAI), Ecole nationale vétérinaire d'Alfort, 7 avenue du Général de Gaulle, 94700 Maisons-Alfort Cedex, France.

Abstract

Risk assessment can be either quantitative, i.e. providing a numeric estimate of the probability of risk and the magnitude of the consequences, or qualitative, using a descriptive approach. The French Agency for Food, Environmental and Occupational Health and Safety (ANSES), formerly the French Food Safety Agency (AFSSA), bases its assessments on the opinions of scientific panels, such as the ANSES Animal Health Scientific Panel (AH-SP). Owing to the lack of relevant data and the very short period of time usually allowed to assess animal health risks on particular topics, this panel has been using a qualitative risk method for evaluating animal health risks or crises for the past few years. Some experts have drawn attention to the limitations of this method, such as the need to extend the range of adjectives used for the lower probabilities and to develop a way to assess consequences. The aim of this paper is to describe the improved method now established by the AH-SP, taking into account the limitations of the first version. The authors describe a new set of levels for probabilities, as well as the items considered when addressing either animal or human health consequences.
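To make the idea of an ordinal probability scale concrete, here is a minimal Python sketch of how qualitative levels can be combined along a risk pathway (e.g. probability of release combined with probability of exposure). The level names and the conservative "weakest link, optionally one step lower" rule are illustrative assumptions, not the AH-SP's published scale or combination matrix.

```python
# Illustrative ordinal scale of qualitative probabilities, ordered from
# lowest to highest. The adjectives are assumptions for this sketch.
LEVELS = ["null", "negligible", "very low", "low", "moderate", "high"]

def combine(p1: str, p2: str, penalise: bool = False) -> str:
    """Combine two qualitative probabilities along a risk pathway.

    Conservative convention: the combined probability cannot exceed the
    weaker link; optionally drop one further step to reflect that two
    uncertain events must both occur.
    """
    i = min(LEVELS.index(p1), LEVELS.index(p2))
    if penalise and i > 0:
        i -= 1
    return LEVELS[i]

# e.g. probability of release "low", probability of exposure "moderate"
print(combine("low", "moderate"))         # -> "low"
print(combine("low", "moderate", True))   # -> "very low"
```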

  • Animal Diseases / epidemiology*
  • Animal Diseases / prevention & control
  • Global Health
  • Probability
  • Risk Assessment / methods*
  • Risk Assessment / standards


Qualitative Research: Consensus methods for medical and health services research

  • Jeremy Jones, lecturer in health economics a
  • Duncan Hunter, research fellow b
  • a Nuffield Community Care Studies Unit, Department of Epidemiology and Public Health, University of Leicester, Leicester LE1 7RH
  • b Health Services Research Unit, Department of Public Health and Policy, London School of Hygiene and Tropical Medicine, London WC1E 7HT
  • Correspondence to: Dr Jones.

Health providers face the problem of trying to make decisions both where there is insufficient information and where there is an overload of (often contradictory) information. Statistical methods such as meta-analysis have been developed to summarise and resolve inconsistencies in study findings, where information is available in an appropriate form. Consensus methods provide another means of synthesising information: they can draw on a wider range of information than is common in statistical methods, and where published information is inadequate or non-existent they provide a means of harnessing the insights of appropriate experts to enable decisions to be made. Two consensus methods commonly adopted in medical, nursing, and health services research, the Delphi process and the nominal group technique (also known as the expert panel), are described here, together with the situations in which each is most appropriate; an outline of the process involved in undertaking a study with each method is supplemented by illustrations from the authors' work. Key methodological issues are discussed, along with the distinct contribution of consensus methods as aids to decision making, both in clinical practice and in health service development.

This is the sixth in a series of seven articles describing non-quantitative techniques and showing their value in health research


Defining consensus and consensus methods

Quantitative methods such as meta-analysis have been developed to provide statistical overviews of the results of clinical trials and to resolve inconsistencies in the results of published studies. Consensus methods are another means of dealing with conflicting scientific evidence. They allow a wider range of study types to be considered than is usual in statistical reviews. In addition they allow a greater role for the qualitative assessment of evidence ( box 1 ). These methods, unlike those described in the other papers in this series, are primarily concerned with deriving quantitative estimates through qualitative approaches.

Features of consensus methods


The aim of consensus methods is to determine the extent to which experts or lay people agree about a given issue. They seek to overcome some of the disadvantages normally found with decision making in groups or committees, which are commonly dominated by one individual or by coalitions representing vested interests. In open committees individuals are often not ready to retract long held and publicly stated opinions, even when these have been proved to be false.

The term “agreement” takes two forms, which need to be distinguished: firstly, the extent to which each respondent agrees with the issue under consideration (typically rated on a numerical or categorical scale) and, secondly, the extent to which respondents agree with each other, the consensus element of these studies (typically assessed by statistical measures of average and dispersion).

Application

The focus of consensus methods lies where unanimity of opinion does not exist owing to a lack of scientific evidence or where there is contradictory evidence on an issue. The methods attempt to assess the extent of agreement (consensus measurement) and to resolve disagreement (consensus development).


The three best known consensus methods are the Delphi process, the nominal group technique (also known as the expert panel), and the consensus development conference. Each of these methods involves measuring consensus, and the last two methods are also concerned with developing consensus. The consensus development conference will not be covered in this paper because it requires resources beyond those at the disposal of most researchers (unlike the other two methods), is commonly organised within defined programmes (for example, by the King's Fund in Britain and the National Institutes of Health in the United States), and has been discussed at length elsewhere. 3 4 5 6

The methods described

THE DELPHI PROCESS

The Delphi process takes its name from the Delphic oracle's skills of interpretation and foresight and proceeds in a series of rounds as follows:

Round 1: Either the relevant individuals are invited to provide opinions on a specific matter, based on their knowledge and experience, or the team undertaking the Delphi expresses opinions on a specific matter and selects suitable experts to participate in subsequent questionnaire rounds;

These opinions are grouped together under a limited number of headings and statements drafted for circulation to all participants on a questionnaire;

Round 2: Participants rank their agreement with each statement in the questionnaire;

The rankings are summarised and included in a repeat version of the questionnaire;

Round 3: Participants rerank their agreement with each statement in the questionnaire, with the opportunity to change their score in view of the group's response;

The rerankings are summarised and assessed for degree of consensus: if an acceptable degree of consensus is obtained the process may cease, with final results fed back to participants; if not, the third round is repeated.
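As a minimal sketch of the bookkeeping behind these rounds, the following Python fragment summarises each statement's ratings with the median and interquartile range and applies an illustrative stopping criterion. The 0-9 scale matches the example in box 2 below; the consensus threshold is an invented assumption, since each study must define its target level of consensus in advance.

```python
import statistics

def summarise(scores):
    """Median and interquartile range for one statement's 0-9 ratings."""
    q1, median, q3 = statistics.quantiles(scores, n=4)
    return {"median": median, "iqr": q3 - q1}

def consensus_reached(scores, max_iqr=1.0):
    # Illustrative criterion: consensus if the middle 50% of ratings span
    # no more than `max_iqr` points; the threshold is an assumption.
    return summarise(scores)["iqr"] <= max_iqr

round3 = {"Mortality rates in hospital will rise": [5, 6, 6, 7, 7, 7, 8, 8]}
for statement, scores in round3.items():
    s = summarise(scores)
    verdict = "consensus" if consensus_reached(scores) else "repeat round"
    print(f"{statement}: median={s['median']}, IQR={s['iqr']} -> {verdict}")
```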

Figure 1 shows an example of this process for a Delphi study undertaken by one of the authors (JJ). In addition to scoring agreement with statements, respondents are commonly asked to rate the confidence or certainty with which they express their opinions.

Fig 1: Example of Delphi process used in study by JJ

The Delphi technique has been used widely in health research within the fields of technology assessment, 7 8 9 10 education and training 11 12 13 14 and priorities and information, 15 16 17 and in developing nursing and clinical practice. 19 20 21 It enables a large group of experts to be contacted cheaply, usually by mail with a self administered questionnaire (though computer communications have also been used), with few geographical limitations on the sample. Some situations have included a round in which the participants meet to discuss the process and resolve uncertainty or any ambiguities in the wording of the questionnaire.

THE NOMINAL GROUP TECHNIQUE

The nominal group technique uses a highly structured meeting to gather information from relevant experts (usually 9-12 in number) about a given issue. It consists of two rounds in which panellists rate, discuss, and then rerate a series of items or questions. The method was developed in the United States in the 1960s and has been applied to problems in social services, education, government, and industry. 22 In the context of health care the method has most commonly been used to examine the appropriateness of clinical interventions 23 24 25 26 27 but has also been applied in education and training, 28 29 30 in practice development, 31 32 33 and for identifying measures for clinical trials. 34 35 36

A nominal group meeting is facilitated either by an expert on the topic 37 or a credible non-expert 38 and is structured as follows:

Participants spend several minutes writing down their views about the topic in question;

Each participant, in turn, contributes one idea to the facilitator, who records it on a flip chart;

Similar suggestions are grouped together, where appropriate. There is a group discussion to clarify and evaluate each idea;

Each participant privately ranks each idea (round 1);

The ranking is tabulated and presented;

The overall ranking is discussed and reranked (round 2);

The final rankings are tabulated and the results fed back to participants.

Figure 2 shows an example of a modified nominal group undertaken by one of the authors (DH).

Fig 2: Example of modified nominal group undertaken by DH

The method can be adapted and has been conducted as a single meeting or with the first stage conducted by post followed by a discussion and rerating at a face to face meeting. Some nominal group meetings have incorporated a detailed review of literature as background material for the topic under discussion.

Alongside the consensus process there may be a non-participant observer collecting qualitative data on the nominal group. This approach has some features in common with focus groups (see article by Kitzinger 39 ). However, the nominal group technique focuses on a single goal (for example, the definition of criteria to assess the appropriateness of a surgical intervention) and is less concerned with eliciting a range of ideas or with the qualitative analysis of the group process per se than is the case in focus groups.

Methodological issues

WHO TO INCLUDE AS PARTICIPANTS

There can be few hard and fast rules about who to include as participants, except that each must be justifiable as in some way “expert” on the matter under discussion. Clearly, for studies concerned with defining criteria for clinical intervention, the most appropriate experts will be clinicians practising in the field under consideration. However, the inclusion of other clinicians such as general practitioners may be appropriate to provide an alternative clinical view, particularly when the study is expected to have an impact beyond a particular specialist field. When the discussion concerns matters of general interest, such as health service priorities, participants should include non-clinical health professionals and the expression of lay opinions should also be allowed for.

There is clearly a potential for bias in the selection of participants. Although it has been shown that doctors who are willing to participate in expert panels are representative of their colleagues, 40 the exact composition of the panel can affect the results obtained. 24 The results will also be affected by any “random” variation in panel behaviour. These problems can be overcome by using a different mixture of participants in further panels.

HOW TO MEASURE THE ACCURACY OF THE ANSWER OBTAINED

The existence of a consensus does not mean that the “correct” answer has been found—there is the danger of deriving collective ignorance rather than wisdom. The nominal group is not a replacement for rigorous scientific reviews of published reports or for original research, but rather a means of identifying current medical opinion and areas of disagreement. For Delphi surveys, Pill recommends that the results should, when possible, be matched to observable events. 1 Observers of the accuracy of opinion polls before the 1992 general election in Britain might well agree with this conclusion.

HOW TO FEED BACK THE RESULTS OF EACH ROUND

Agreement with statements is usually summarised by using the median, and consensus assessed by using interquartile ranges for continuous numerical scales. These summary statistics may be fed back to participants at each round, along with fuller indications of the distribution of responses to each statement in the form of tables of the proportions ranking at each point on the scale (see box 2), histograms, or other graphical representations of the range (see box 3). Feeding back the group's response enables participants to consider their initial ranking in relation to their colleagues' assessments. It should be made clear to each participant that they need not conform to the group view, though in the nominal group technique those with atypical opinions (compared with the rest of the group) may face critical questioning of their view from other panel members. In a Delphi exercise, the researcher undertaking the study may ask participants whom they have defined as outliers (for example, those in the lower and upper quartiles) to provide written justification for their responses.
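The kind of feedback shown in box 2 below can be generated mechanically. A small sketch, assuming a 0-9 scale and percentage summaries (the layout is modelled loosely on box 2, not on the authors' actual instrument):

```python
from collections import Counter

def feedback_line(scores, scale=range(10)):
    """Percentage of respondents choosing each point on a 0-9 scale."""
    counts = Counter(scores)
    n = len(scores)
    pcts = [round(100 * counts[s] / n) for s in scale]
    header = "disagree " + " ".join(str(s) for s in scale) + " agree"
    return header + "\n" + "-".join(str(p) for p in pcts)

# Ten hypothetical responses to one statement
print(feedback_line([0, 2, 5, 5, 6, 6, 7, 8, 8, 9]))
```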

Box 2—Example of feedback of second round results in a Delphi 40

The following are possible adverse effects of lowering the number of junior medical staff in general medicine and its associated specialties. The star indicates the number you selected to indicate the extent to which you agreed or disagreed with each statement in response to the previous questionnaire. Each of the numbers below the scale represents the percentage of those responding to the questionnaire who selected that particular value. We would be grateful if you would read through the questionnaire and consider whether, in the light of your colleagues' assessments, you would like to alter your response. Please indicate the extent to which you agree or disagree with each statement by circling the appropriate number (0 indicates total disagreement and 9 total agreement): if your choice remains unchanged please circle the same number you selected on the previous questionnaire.

(i) Mortality rates in hospital will rise

disagree 0 1 2 3 4 5 6 7 8 9 agree

(percentage of respondents selecting each score: 5, 3, 9, 4, 3, 18, 18, 12, 16, 12)

Box 3—Example of feedback of first round results in a nominal group 23

For nominal groups, rules have been developed to assess agreement when statements have been ranked on a 9-point scale (see box 3). In this example, the scale can be broken down so that scores 1-3 represent a region where participants feel intervention is not indicated; 4-6, a region where participants are equivocal; and 7-9, a region where participants feel intervention is indicated. The first rule is based on where the scores fall on the ranking scale (box 4): if all ratings fall within one of these predefined regions there is said to be strict agreement (in the example, all participants agreed that transurethral resection of the prostate was not indicated). An alternative, relaxed definition for agreement is that all ratings fall within any 3-point region. This may be treated as agreement, in that all ratings are within an acceptable range, but the group opinion is ambiguous as to whether intervention is indicated or not.

Box 4—Examples of strict and relaxed rules for agreement in a nominal group

The second rule tests whether extreme rankings are having an undue influence on the final results and consists of assessing the strict and relaxed definitions by including all ratings for each statement and then by excluding one extreme high and one extreme low rating for each statement. The ranges indicated in box 3 include all ratings, and it is noticeable that several of these ranges are from 1 to 9. It may be that these ranges exaggerate the dispersion of the group's response.
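These two rules translate directly into code. A sketch, assuming the 9-point appropriateness scale described above (an illustration of the stated rules, not any published nominal-group software):

```python
def region(score):
    # The three predefined regions of the 9-point scale described above.
    if 1 <= score <= 3:
        return "not indicated"
    if 4 <= score <= 6:
        return "equivocal"
    return "indicated"

def strict_agreement(ratings):
    """Strict rule: all ratings fall within one predefined 3-point region."""
    return len({region(r) for r in ratings}) == 1

def relaxed_agreement(ratings):
    """Relaxed rule: all ratings fall within any 3-point span."""
    return max(ratings) - min(ratings) <= 2

def trimmed(ratings):
    # Second rule: re-test after excluding one extreme high and one
    # extreme low rating, to check the influence of outliers.
    return sorted(ratings)[1:-1]

panel = [1, 1, 2, 2, 2, 3, 3, 3, 9]      # one extreme outlier at 9
print(strict_agreement(panel))           # False: the 9 breaks agreement
print(strict_agreement(trimmed(panel)))  # True: agreement once trimmed
```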

Validity and applicability

There has been an active debate on the validity of the Delphi method. For example, Harold Sackman argued that the Delphi method fails to meet the standards normally set for scientific methods. 41 Many of his criticisms were aimed at past studies of poor quality rather than fundamental critiques of the method itself; he particularly criticised poor questionnaire design, inadequate testing of reliability and validity of methods, and the methods of defining and selecting experts. He also argued that the method forces consensus and is weakened by not allowing participants to discuss issues.

Reviews by Pill 1 and by Gerth and Smith (personal communication) showed no clear evidence in favour of meeting-based methods over Delphi. Rowe et al, though, concluded that the Delphi technique is generally inferior to the nominal group technique, but stated that the degree of inferiority is small, arising more from practical than from theoretical difficulties; they argued for further research aiming to improve the practice of Delphi studies, particularly a careful consideration of what constitutes expertise. 2

Consensus methods, in particular Delphi, have been described as methods of “last resort,” 42 with defenders warning against “overselling” the methods 43 and suggesting that they should be regarded more as methods for structuring group communication than as a means for providing answers. There is clearly a danger that since these approaches have a prescribed method and are often used to generate quantitative estimates, they may lead the casual observer to place greater reliance on their results than might be warranted. As we stated earlier, unless the findings can be tested against observed data, we can never be sure that the methods have produced the “correct” answer. This should be made clear in reporting study results.

The structures of Delphi and nominal groups (shown in box 1) aim to maximise the benefits of having informed panels consider a problem (often termed "process gain") while minimising the disadvantages associated with collective decision making ("process loss"), particularly domination by individuals or professional interests. The extent to which these are realised depends on the ability of those running the studies to use the advantages of the methods. An important role of the facilitator in the nominal group is to ensure that all participants are able to express their views and to keep particular personal or professional views from dominating the discussion; participants in both Delphi and nominal group panels should be selected so as to ensure that no particular interest or preconceived opinion is likely to dominate.

Consensus methods provide a useful way of identifying and measuring uncertainty in medical and health services research. Delphi and nominal group techniques have been used to clarify particular issues in health service organisation: to define professional roles, to aid design of educational programmes, to enable long term projections of need for care for particular client groups where there has been considerable uncertainty (for example, for cases of HIV and AIDS 9 ), and to develop criteria for appropriateness of interventions as part of technology assessment. In addition to forming studies in their own right, these techniques have been widely used as component parts of larger projects. 8 31 The two pieces of research from which materials have been presented in this paper each formed part of larger projects: the Delphi exercise 44 was concerned with defining possible adverse effects of reducing junior doctor staffing levels as part of a study of the adequacy of hospital medical staffing levels; the nominal group 23 was concerned with defining appropriate indications for surgical intervention as part of a population based assessment of need for prostate surgery within an NHS region.

Conclusions

The emphasis, when the findings of Delphi and nominal group studies are presented, should be on the justification in using such methods, the use of sound methodology (including selection of experts and the clear definition of target “acceptable” levels of consensus), appropriate presentation of findings (where proposed standards for presentation—as for clinical practice guidelines 45 —should be considered), and on the relevance and systematic use of the results. The output from consensus approaches (including consensus development conferences) is rarely an end in itself. Dissemination and implementation of such findings is the ultimate aim of consensus activities—for example, the publication of consensus statements intended to guide health policy, clinical practice, and research, such as the consensus statement on cancer of the colon and rectum. 46





Volume 10 Supplement 1

7th Annual Conference on the Science of Dissemination and Implementation in Health

  • Meeting abstract
  • Open access
  • Published: 14 August 2015

Innovative methods for using expert panels in identifying implementation strategies and obtaining recommendations for their use

  • Thomas J Waltz 1,2,
  • Byron J Powell 3,
  • Monica M Matthieu 4,5,
  • Matthew J Chinman 6,
  • Jeffrey L Smith 4,
  • Enola K Proctor 7,
  • Laura J Damschroder 2 &
  • JoAnn E Kirchner 4,8

Implementation Science volume 10, Article number: A44 (2015)


Introduction

A variety of research questions can be addressed using expert panels to synthesize existing knowledge and issue recommendations. This panel's presentations describe the use of innovative methods for engaging expert panels comprised of implementation scientists and clinical managers in complex recommendation processes to match implementation strategies with evidence based practices in real world service settings as part of the Veterans Health Administration (VA) funded 'Expert Recommendations for Implementing Change' (ERIC) project (QLP 55-025).

Powell describes the use of a web-based modified-Delphi process to obtain expert consensus on a compilation of discrete implementation strategies. Waltz describes the use of a concept mapping method to characterize the interrelationships among the strategies in the compilation. Finally, Matthieu describes the methods used to engage multiple stakeholders to develop a structured recommendation process that applies the implementation strategies to specific practice changes in the VA.

The innovative sequence of methods used highlights the value of structured tasks that support transparent, quantitative characterizations of expert panel recommendations. The majority of this expert panel's activities involved asynchronous use of a variety of software platforms, reducing the logistical barriers often involved in engaging a large panel of experts. Activities that required synchronous consensus meetings also used technology to host structured discussions and post-discussion voting that gave participants real-time feedback on the recommendation outcomes.

The sequence of methods employed in the ERIC project can serve as a model for developing context-sensitive expert recommendations for other dissemination and implementation initiatives.

Building expert consensus for characterizing discrete implementation strategies

Efforts to identify, develop, and test implementation strategies have been complicated by the use of inconsistent language and inadequate descriptions of strategies in the scholarly literature. A literature-based compilation of strategies was developed to address this problem (Powell et al., 2012); however, its development was not informed by the participation of a wide range of implementation and clinical experts. This presentation describes our effort to further refine that compilation for use in the VA by establishing expert consensus on strategy terms, definitions, and categories that can be used to guide implementation research. Purposive sampling was used to recruit an expert panel comprised of implementation science experts and VA clinical managers. Specifically, a reputation-based snowball sampling approach was used in which an initial list of experts was developed by members of the study team. This list included the editorial board of the journal Implementation Science, Implementation Research Coordinators from the VA QUERI program, and faculty from the NIH-funded Implementation Research Institute. The expert panel was engaged in a three-round modified Delphi process to generate consensus on strategies and definitions. The first and second rounds involved web-based surveys that prompted comments on implementation strategy terms and definitions. The initial survey was seeded with strategy terms and definitions from the Powell et al. (2012) compilation. After each round, iterative refinements were made to the compilation based upon participants' feedback. The third round involved a live, web-based polling and consensus process that yielded a final compilation of 73 strategies and definitions. This presentation highlights the advantages and challenges associated with using asynchronous and live web-based methods for obtaining wide participation of experts.

Concept mapping: harnessing the power of an expert panel to conceptualize relationships among 73 implementation strategies

After obtaining the compilation of discrete implementation strategies in the earlier phase of the Expert Recommendations for Implementing Change (ERIC) project, we faced a practical challenge: how to realistically ask experts to consider 73 different implementation strategies when making recommendations. One way to reduce the cognitive burden of a complex multicomponent recommendation process is to organize strategies by similarity. Concept mapping is a method that allows an expert panel to be engaged in a structured task that can be completed asynchronously and online. For this study, expert panel members were given a deck of virtual "cards", each bearing one of ERIC's 73 implementation strategies. Participants sorted these cards into piles on the basis of similarity, then rated each strategy for its relative importance and relative feasibility considering all 73 implementation strategies. The benefit of concept mapping is the ability to quantitatively characterize how your target audience conceptualizes a wide range of topics. For the ERIC project, concept mapping provided a structured, participant-driven approach to organizing our data into 9 expert-derived categories. This organization scheme was then used to structure additional expert panel tasks. This presentation will focus on concept mapping as a tool for characterizing an expert panel's shared understanding of key concepts to be used in a subsequent recommendation process. While data from the ERIC project will be used to illustrate this method, the discussion will include how it can be used to support active and structured stakeholder engagement in a variety of dissemination and implementation activities.
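A minimal sketch of the data step behind such a concept-mapping exercise: each panellist's card sort is converted to strategy-by-strategy co-occurrence counts, and summing across panellists yields the similarity matrix that is then typically scaled and clustered (concept mapping conventionally uses multidimensional scaling followed by hierarchical cluster analysis). The strategy names below are invented for illustration, not ERIC's actual compilation.

```python
from collections import defaultdict
from itertools import combinations

# Each panellist's sort: a list of piles, each pile a set of strategies.
sorts = [
    [{"audit and feedback", "reminders"}, {"train leaders", "coaching"}],
    [{"audit and feedback", "reminders", "coaching"}, {"train leaders"}],
]

similarity = defaultdict(int)
for piles in sorts:
    for pile in piles:
        for a, b in combinations(sorted(pile), 2):
            similarity[(a, b)] += 1   # sorted together by one more panellist

for pair, count in sorted(similarity.items(), key=lambda kv: -kv[1]):
    print(pair, count)   # ('audit and feedback', 'reminders') -> 2
```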

Development and application of a menu-based choice framework to structure expert recommendations for implementing complex practice changes in the VA

The Expert Recommendations for Implementing Change (ERIC) project sought to use methods that support a highly structured and transparent recommendation process that actively engaged key stakeholders throughout the project's execution. The ERIC project's penultimate activity involves a menu-based choice (MBC) task. MBC methods are used in consumer marketing research for product development, and these tasks are useful for providing a context-rich structure for making decisions that involve multiple elements. In ERIC's MBC tasks, panelists were presented with 73 implementation strategies structured into nine categories. They were tasked with building multi-strategy implementation approaches for particular clinical practice changes being implemented across three settings, each with specific relative strengths and weaknesses (i.e., varying contextual characteristics).

The clinical practice changes were identified by national VA leadership as high priority areas for clinical quality improvement efforts (e.g., improving safety for patients taking antipsychotic medications, depression outcome monitoring in primary care mental health, prolonged exposure therapy for treating post-traumatic stress disorder). Scenarios describing these practice changes were developed using key informant interviews with front line providers, clinical managers, health service researchers, and implementation scientists. These experts all practice in the respective area and were able to provide common and realistic challenges they face in routine service delivery in VA settings. ERIC project staff then expanded the scenarios to address varying organizational contexts (e.g., organizational culture, leadership, evaluation infrastructure) and across levels of evidence (e.g., strength and quality, relative advantage, compatibility, adaptability). Stakeholders were repeatedly engaged in an iterative process of evaluating the scenarios for reliability, credibility, and transferability. This presentation will highlight the critical role partnering with key stakeholders plays in executing this structured recommendation method.
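A hypothetical sketch of how an MBC response might be represented: strategies grouped into menu categories, with each panelist assembling a multi-strategy approach per scenario. The category names, strategy names, and validation helper are invented for illustration; ERIC's real menu organized 73 strategies into nine categories.

```python
# Invented menu: category -> strategies (ERIC's real menu had nine
# categories spanning 73 strategies).
menu = {
    "plan": ["conduct local needs assessment", "develop implementation blueprint"],
    "educate": ["conduct ongoing training", "distribute educational materials"],
    "restructure": ["revise professional roles", "change record systems"],
}

def record_choice(panelist: str, scenario: str, picks: dict) -> dict:
    """Validate a panelist's picks against the menu and return a tally row."""
    for category, strategies in picks.items():
        unknown = set(strategies) - set(menu.get(category, []))
        if unknown:
            raise ValueError(f"not on the menu: {unknown}")
    return {"panelist": panelist, "scenario": scenario, "picks": picks}

row = record_choice(
    "P01", "depression outcome monitoring",
    {"plan": ["develop implementation blueprint"],
     "educate": ["conduct ongoing training"]},
)
print(row)
```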

Primary funding for this research was provided by the U.S. Department of Veterans Affairs Veterans Health Administration's Mental Health Quality Enhancement Research Initiative (QLP 55-025).

Authors and affiliations

Department of Psychology, Eastern Michigan University, Ypsilanti, MI, 48197, USA

Thomas J Waltz

Center for Clinical Management Research and Diabetes QUERI, VA Ann Arbor Healthcare System, Ann Arbor, MI, 48105, USA

Thomas J Waltz & Laura J Damschroder

Center for Mental Health Policy and Services Research Department of Psychiatry, Perelman School of Medicine University of Pennsylvania, Philadelphia, PA, 19104, USA

Byron J Powell

Central Arkansas Veterans Healthcare System, HSR&D and Mental Health Quality Enhancement Research Initiative (QUERI), Little Rock, AR, 72114, USA

Monica M Matthieu, Jeffrey L Smith & JoAnn E Kirchner

School of Social Work, College for Public Health & Social Justice, Saint Louis University, St. Louis, MO, 63103, USA

Monica M Matthieu

VISN 4 MIRECC, VA Pittsburgh Healthcare System, and RAND Corporation, Pittsburgh, PA, 15213, USA

Matthew J Chinman

Brown School, Washington University in St. Louis, St. Louis, MO, 63105, USA

Enola K Proctor

Department of Psychiatry, College of Medicine, University of Arkansas for Medical Sciences, Little Rock, AR, 72205, USA

JoAnn E Kirchner


Corresponding author

Correspondence to Thomas J Waltz.


About this article

Cite this article

Waltz, T.J., Powell, B.J., Matthieu, M.M. et al. Innovative methods for using expert panels in identifying implementation strategies and obtaining recommendations for their use. Implementation Sci 10 (Suppl 1), A44 (2015). https://doi.org/10.1186/1748-5908-10-S1-A44


  • Expert Panel
  • Implementation Strategy
  • Concept Mapping
  • Veteran Health Administration
  • Recommendation Process


  • Research article
  • Open access
  • Published: 07 January 2016

SEaRCH™ expert panel process: streamlining the link between evidence and practice

  • Ian Coulter 1,
  • Pamela Elfenbaum 2,
  • Shamini Jain 2 &
  • Wayne Jonas 2

BMC Research Notes volume 9, Article number: 16 (2016)


With rising health care costs and the diversity of scientific and clinical information available to health care providers, it is essential to have methodologies that synthesize and distill information, gauge its quality, and make it practical to clinicians, patients and policy makers. Too often research synthesis results in the statement that "more and better research is needed", or the conclusions are slanted toward the biases of one type of stakeholder. Such conclusions are discouraging to clinicians and patients who need better guidance on the decisions they make every day.

Expert panels are one method for offering valuable insight into the scientific evidence and what experts believe about its application to a given clinical situation. However, with improper management their conclusions can end up being biased or even wrong. There are several types of expert panels, but two that have been extensively involved in bringing evidence to bear on clinical practice are consensus panels and appropriateness panels. These types of panels are utilized by organizations such as the National Institutes of Health, the Institute of Medicine, RAND, and others to provide clinical guidance. However, there is a need for a more cost-effective and efficient approach to conducting these panels. In this paper we describe both types of expert panels and ways to adapt those models to form part of Samueli Institute's Scientific Evaluation and Research of Claims in Health Care (SEaRCH™) program.

Expert panels provide evidence-based information to guide research, practice and health care decision making. The panel process used in SEaRCH seeks to customize, synthesize and streamline these methods. By making the process transparent, it informs decisions about clinical appropriateness and research agendas.

In attempting to practice evidence-based medicine, the health professions face several challenges in basing their practices on evidence. One source of information is scientific evidence derived from research studies. However, since a single study is seldom definitive, a systematic review of the numerous studies published in the literature is preferable. Systematic reviews of the literature provide a method of assessing the quality of each individual study [1, 2], as well as assessing the overall level of evidence based on the entire published literature for a particular intervention in a given population [2-5]. In a separate article in this series we describe a rapid and reliable way of getting sound evidence through systematic reviews, called the Rapid Evidence Assessment of the Literature (REAL©) [6]. However, while clinical research, systematic reviews and their publications are necessary, they are not sufficient to ensure that the relevant evidence has been properly assembled or that it can influence practice and clinical decision making. This may be because of the paucity of available studies, the poor quality of the studies, the restricted nature of the populations in clinical studies, or the impossibility of doing the studies for ethical, methodological or resource reasons [6, 7]. The result is that much of health care occurs in a clinical space in which the evidence is indeterminate or uncertain. When good evidence does not exist to guide practice, clinicians must base their approach on their own clinical intuition, on what they were taught in their training, or on what the experts recommend. However, such approaches lack transparency and rigor and are often fraught with error. Coulter has termed this "therapeutic anarchy", where in effect each clinician does his or her own thing, or the market drives what is done rather than patient needs and preferences [7, 8]. The situation is even worse for patients, who may not have the basic knowledge or experience to interpret evidence but are also asked to make decisions about treatment choices and preferences.

Expert panels

One approach that has emerged to overcome this problem has been the use of expert panels (EP). While in systematic reviews the opinion of experts is considered at the bottom of the evidence hierarchy, in expert panels, which combine evidence and clinical acumen, the opinions are made transparent and subjected to critical appraisal. This overcomes the major objection to the opinion of experts vs. systematic reviews and has the additional value of allowing the introduction of information closer to clinical practice and a better interface with research evidence.

Samueli Institute uses the following expert panels: a Clinical Expert Panel (CEP) and a Research Expert Panel (REP), with others, designed for policy (PoEP) and patient (PaEP) decisions, in development. These types of panels form a part of the Scientific Evaluation and Review of Claims in Health Care (SEaRCH™) process [6]. They draw on the best in existing models of expert panels, but differ in that they are designed to complement the evidence process in the SEaRCH program to better clarify research agenda needs and facilitate practice decisions. They also differ from other EPs in that the process for conducting them is structured and streamlined to ensure they can be run rapidly and in an objective and cost-effective manner. Furthermore, they can be customized in response to a client's needs.

Samueli Institute’s EP processes have drawn from three methods developed over time to deal with the application of evidence to practice. These are the National Institutes of Health Consensus Development Conference (NIH CDC), the Institute of Medicine (IOM) report process and the RAND Expert Panel process. Because we draw on these three methods they will be described briefly in what follows.

NIH consensus panels

The National Institutes of Health (NIH) consensus development conference method for resolving evidence judgment issues was begun in 1977 as a method by which the scientific community could bring relevant research to bear on the quality of health care [8-10]. The purpose of the NIH Consensus Development Conference (CDC) is to bring clinical practice more in line with research and so improve the quality of care. "To achieve this, the focus is on the scientific community bringing to the attention of the health professions the results of research and the impact on the quality of care." [8, 9] Given its purpose, the membership of the NIH panels favors research experts in both the clinical and related aspects of the topic.

NIH panels focus largely on questions of efficacy and safety [10] and are intended to resolve scientific issues (controversies) [11]. However, the issues chosen have to be resolvable by evidence. The panel hears testimony from experts, there is audience participation, and the end product is a set of consensual statements (i.e. 100% agreement). A panel of experts is chosen and reviews the evidence for or against a procedure. More recently, panels have been provided with a systematic literature review and the original literature. Over two days panelists also hear testimony from experts. These hearings are open to the audience, who may also comment. At the end of this process the panel is cloistered until the final consensual recommendations are completed. These are then issued publicly. The topics chosen can be either disease based (e.g. breast cancer) or procedure based (e.g. mammography). The focus is on the state of the science rather than the state of current practice. Commentators have described the NIH process as a combination of three models: the judicial process, using testimony and a jury; a scientific meeting, where research evidence is presented; and the town meeting, where interested citizens can make comments [9, 10]. Ultimately, however, the recommendations are made by a select few with scientific expertise, behind closed doors. Consensus panels may also comment on the state of the science and make recommendations about both present and future research needs.

There are, however, several challenges which limit the usefulness of the NIH Consensus Development Panels. If the focus is only on those issues that can be resolved by scientific evidence, it is necessarily confined to those issues with scientific support. Unfortunately, within the field of health care, many of the most problematic issues cannot be resolved by evidence alone. In such a situation the NIH CDC is forced to report simply on the state of the science. In addition, the judgment processes used for the final conclusions are also done behind closed doors and so are not completely transparent.

The NIH panel places a premium on research findings and gives less weight to clinical experience or acumen (although it may hear this in the testimony phase). This undermines the credibility of the NIH panels with the very people it seeks to influence: practitioners and patients. Additionally, the panels favor largely scientific (not clinical) experts in both the clinical and related aspects of the topic [8, 9]. Relatedly, they focus on efficacy rather than effectiveness, and on state-of-the-art practices rather than usual practice. This makes their guidelines less useful to practitioners. The panel is an expensive process, which means it cannot be repeated very often. Where technology is transforming practice rapidly, the findings can become obsolete very quickly. Consensus statements are considered by NIH to be historical documents after 5 years, and NIH does not recommend that decisions be based on these statements after that time. Unfortunately, updates to these statements are rarely conducted, owing to the costs of the comprehensive systematic reviews and the rarity of gathering an NIH panel. This is limiting in areas of health care that are changing rapidly, where answers about particular clinical procedures are needed expediently, and where it is recognized that evidence-based clinical decision making needs to be informed by timely updates and re-assessment of the literature [13]. Research by RAND has shown that the impact of the NIH panels in changing physician behavior is not strong [10-12].

Another limiting factor of NIH panels is that they are consensus-based, and while minority reports are possible, participants are strongly encouraged to agree on assessments and recommendations. Such consensus panels may be more prone to bias from dominant panel members [ 11 ]. The recommendations include only those for which there is consensus, and the panels are cloistered until such consensus is met [ 12 ]. This often means that the result is the “lowest common denominator”. The evidence for the clinical impact of the NIH panels on changing behavior of health care providers has not been strong. The emphasis on efficacy rather than effectiveness and not on the state of the current practice has limited the relevance of the panel findings for a broader health care audience.

In situations where the evidence is insufficient to derive consensus statements, a special state-of-the-science panel can be created, either on its own or as part of the consensus panel for clinical practice. This panel is further limited by a relative lack of transparency and specificity in gap assessment and research recommendations. Little systematic information is given on how group decisions are made regarding literature gap assessment, including the relevance of a particular gap for clinical practice or policy, or the types of research designs best suited to address specific gaps. Finally, as noted by the Institute of Medicine (IOM), many expert panels do not make a clear distinction between the quality of evidence and the strength of recommendations. While the NIH State-of-the-Science Panels are ostensibly run when the evidence base is perceived to be “weaker”, the methods used to assess the quality of evidence vary, as does the strength of the resulting recommendations. Recommendations from these “State-of-the-Science” panels are not presented according to the relevance of a particular gap in terms of research, practice, or policy.

Institute of Medicine reports

The Institute of Medicine (IOM) also produces reports that often summarize evidence and make recommendations about practice and research. While their methods can vary, the basic approach is similar to that of the NIH CDC: scientific experts are chosen, outside input is received, and the participants are sequestered and seek consensus whenever possible. Unlike the NIH CDC, the panels usually meet several times and have more time for discussion and analysis. In addition, the reports are longer and more thoroughly developed. However, despite a recent IOM report on criteria for systematic reviews and guidelines, not every panel yet conducts or relies on systematic reviews, and their processes are sometimes even less transparent than those of NIH [ 12 , 13 ]. Their impact on clinical practice also varies. While the IOM works hard to reduce clear conflicts of interest among panel members, it does not structure the conduct of any systematic reviews or the expert deliberations in a way that prevents bias during the assessment and consensus process. Thus, the bias of panel members with more dominant voices can influence the outcome.

RAND expert panels for appropriateness of care

The RAND method has been extensively described elsewhere in the literature. One of its essential features vis-à-vis the NIH or IOM approaches is that the RAND approach is oriented less toward scientific experts and more toward clinical experts [ 8 , 9 ]. In addition, the RAND process has carefully evaluated the optimum process for creating diverse, multi-stakeholder input, allowing for bias reduction and wider audience relevance.

Coulter [ 13 , 14 ] has described the RAND panel in previous publications, and that description is drawn on here [ 14 , 15 ]. The process manages the panel in a specific way to address some of the limitations of the approaches described above. In a RAND panel, nine experts are chosen. The members reflect a spectrum of clinicians and academics (five academically based and four practicing clinicians), so that no single specialty dominates the panel. In addition, the RAND expert panel's evaluation of the evidence departs from that of the NIH CDC and IOM. The main differences between those processes and the RAND panel process are described below.

First, in a RAND panel an extensive review of the literature is conducted and a systematic review written (a meta-analysis if this is possible, but a synthesis if it is not) [ 23 ]. In the RAND clinical appropriateness panel process, research staff (with input from clinical expertise) then creates a set of possible clinical indications to which the evidence might be applied. These indications categorize patients in ways that they usually present to the clinic. This includes such things as their symptoms, past medical history, the results of previous diagnostic tests and patient preferences. “The objective is to create lists that are detailed, comprehensive, and manageable. They must be detailed enough so that the patients covered by the category are relatively homogeneous so that the procedure would be equally appropriate for all of them. To be comprehensive they must include all the indications for doing the procedure that occur in practice. However, they must be short enough that the panelists can complete them within a reasonable space of time.” [ 8 , 9 ].

After these lists are compiled, the process uses a modified Delphi method where the indications created are sent to the nine panelists along with the literature synthesis. The panelists then independently rate the appropriateness of the procedure based on the evidence from the literature review and their clinical experience. The ratings for appropriateness are from 1 to 9, with 1 representing extremely inappropriate and 9 extremely appropriate. Appropriate care is defined as when “the expected health benefit to the patient (relief of symptoms, improved functional capacity, reduction of anxiety, etc.) exceed expected health risks (pain, discomfort, time, cost, etc.) by a sufficiently wide margin that the procedure is worth doing” [ 14 , 15 ]. The panelists are instructed to evaluate the risks and benefits based on commonly accepted best clinical practice for the year in which the panel is conducted. Considering an average group of patients with each listed indication, presenting to an average practitioner in the US, the ratings should reflect the panelist’s own best clinical or expert judgment. In this way the judgments are a combination of evidence from the literature and clinical acumen or experience with realistic variations as they often happen in the clinic. This forms a bridge from evidence to practice in a more realistic way than simple evidence summaries do, but with a more systematic and balanced process than individual clinical opinion.

These ratings are quantitatively summarized by the research staff. Then, in a second round of ratings, the panelists are brought together in a face-to-face meeting. Each panelist is shown his/her rating for each indication, along with the distribution of the ratings of the panel as a whole. Each panelist must then reconsider his/her judgment; although panelists are not forced to defend a divergent rating, in most cases where an individual differs from the group, he/she will do so, or at least explain the logic or evidence for that position. Following the discussion, the panelists re-rate the appropriateness of the procedure. From the second rating, it is possible to determine the degree of agreement in the panel and to calculate the median ratings and dispersion measures for the procedures. Consensus is not required. However, in most instances the dispersion decreases during the second rating as the panelists come closer to consensus. Once the work of the expert panel is completed, the team compiles a set of indications for performing a procedure based on the evidence and the panel's clinical experience, which can then be compared to actual practice. This allows researchers to calculate the rate of appropriate/inappropriate care present in practice.

In the RAND panels, consensus is not required, but its degree is measured. RAND has utilized two approaches to measure consensus. In the first, there is consensus if all raters' responses fall within one of the three fixed three-point ranges of the scale (i.e., 1–3, 4–6, 7–9). This would mean all the raters agreed that the procedure should not be performed (1–3), agreed that it was questionable or uncertain (4–6), or agreed that it should be performed (7–9). The second method is to define agreement as all ratings falling within any three-point range. Furthermore, agreement can be determined using either method while rejecting one extreme high or low rating. Similarly, disagreement can be calculated using two methods: if at least one rater chose a 1 and one chose a 9, or if at least one rating fell in the lowest three-point region and at least one in the highest. As with agreement, the extreme ratings can be discarded. A procedure can be judged inappropriate if its median rating is in the range 1–3 without disagreement; uncertain if the median rating is in the range 4–6; and appropriate if it is 7–9 without disagreement. Finally, the outcome can also be that the panelists disagreed on the proper rating (the result was indeterminate) [ 13 – 16 ].
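The agreement and disagreement rules above are mechanical enough to express in a few lines of code. The following Python sketch implements the second (any three-point range) agreement rule and the median-based classification; it is an illustration of the rules as described here, not RAND's own software, and the function names are ours. The variant that discards one extreme rating is omitted for brevity.

```python
from statistics import median

def agree(ratings, band=3):
    """Second RAND method: agreement if all ratings fall within any 3-point range."""
    return max(ratings) - min(ratings) <= band - 1

def disagree(ratings):
    """Disagreement: at least one rating in the lowest region (1-3)
    and at least one in the highest region (7-9)."""
    return min(ratings) <= 3 and max(ratings) >= 7

def classify(ratings):
    """Classify an indication from the nine 1-9 appropriateness ratings."""
    if disagree(ratings):
        return "indeterminate"  # the panel disagreed
    m = median(ratings)
    if m <= 3:
        return "inappropriate"
    if m <= 6:
        return "uncertain"
    return "appropriate"

# One indication rated by nine panelists:
print(classify([7, 8, 7, 9, 8, 7, 8, 7, 9]))  # -> appropriate
```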

A unique feature of the RAND appropriateness panel is the amount of research that has been done on its reliability and validity, making the RAND panel process, unlike other expert panel processes, truly evidence-based. For validity of the RAND appropriateness panel, studies have examined the relationship between ratings and the literature [ 6 ], face and content validity [ 16 , 17 ], and construct validity [ 18 , 19 ]. Studies have looked at test–retest reliability [ 20 ], compared panels occurring in different countries on the same procedures [ 21 ], compared panels occurring at different times [ 20 ], and investigated the impact of panel membership on judgments of appropriateness [ 17 ]. These studies show that, when these steps are applied, extreme variation across panels does not occur. The first formal test of reproducibility of the RAND panels [ 20 ] tested the reliability of three parallel panels for two procedures, hysterectomy and coronary revascularization, conducted within the same time frame. Comparing the reproducibility of the panels with what physicians do daily, the study concluded that the RAND method is much less variable than physicians making independent decisions. Coulter et al. [ 17 ] compared the ratings of a multidisciplinary panel with those of an all-specialty panel for spinal manipulation for low back pain and showed that those who use the procedure are more likely to rate it as appropriate for more conditions than those who do not [ 7 , 17 – 22 ].

SEaRCH™ Expert Panels

The SEaRCH Expert Panel Process draws heavily on the work done at NIH, IOM and RAND. The process integrates the most reliable and useful aspects of these panels. The desire to create a streamlined approach was driven by the need to reduce cost and create more efficient and transparent expert panels.

One example of streamlining is to have expert panelists utilize the online technology database to enter data related to appropriateness ratings. Another is to hold meetings via teleconference, which allows for reduced costs as well as the inclusion of key expert panelists that otherwise may not be able to attend an in-person meeting. This also allows for the overall panel process to happen more expediently and to be more inclusive of the right experts for balance. In addition, it allows for a greater variety of panel types to be developed to serve different purposes, such as panels focused on policy and patient preferences.

Currently, the SEaRCH Expert Panel Process (SEPP) uses expert panels for two primary purposes: clinical appropriateness and research agendas. A Clinical Expert Panel (CEP) is used to determine when care is appropriate or inappropriate, combining both evidence and clinical acumen, similar to the RAND process. A Research Expert Panel (REP) is designed to examine the state of the science and to identify and prioritize gaps in the scientific evidence vis-à-vis practice. Both the CEP and REP are used particularly in areas where the evidence is either lacking in quality or insufficient to determine appropriate care using the usual consensus or guideline processes. While the two expert panels differ, the processes they use are similar and are described jointly below.

SEaRCH Expert Panel Process (SEPP)

A request for a SEaRCH Expert Panel Process (SEPP) typically comes from an outside person or group who needs a balanced and objective way to make research or clinical recommendations on a particular topic. The requestor often seeks to obtain an evidence-based, expert judgment to help determine the appropriateness for clinical use of, or develop the research needs for, a product, practice, program or policy. In the case of a clinical expert panel (CEP), the requestor seeks recommendations for an intervention within a given setting or recommendations around a particular treatment approach in various settings. In the case of a research expert panel (REP), the requestor seeks specific research recommendations based on current gaps in the evidence and may include an assessment of available resources and readiness to conduct such research. The following describes the steps in the SEaRCH Expert Panel Process (SEPP). Figure  1 provides a flow chart of the CEP process.

Basic steps in a clinical expert panel (CEP)

The first step of the SEPP is for an Expert Panel Manager to meet with the client to create and refine the question to be answered. This meeting also examines the focus of the other integrated SEaRCH components, such as the REAL (Rapid Evidence Assessment of the Literature), a streamlined, reliable systematic review process [ 22 , 23 ], and the Claim Assessment Profile (CAP) [ 23 , 24 ], which provides a detailed descriptive evaluation of the intervention (product, practice, program or policy) and claim (efficacy, effectiveness, efficiency, relevance, cost and outcome). The CAP and REAL processes involved in SEaRCH are described in the companion pieces in this issue. Next, a balanced team is created for the successful conduct of the panel. The selection of panel members is key to obtaining valid judgments from a panel. Panelists are always selected to ensure no conflict of interest in the area. They are also selected to provide a diversity of experience and knowledge within the panel. The SEPP uses specific criteria to select the most qualified panel members.

As stated earlier, one of the benefits of the SEaRCH model is the communication that occurs between the description, evidence, and judgment components (the CAP, REAL, and SEPP, respectively) needed to answer important health information questions. For example, using the REAL process to inform the expert panels provides a systematized assessment of research quantity and quality. For the clinical expert panel, specific subject matter experts (SMEs) develop the clinical scenarios that will be rated by the panelists. Once the EP team and panel members have been established and the clinical scenarios created, the actual panel process begins.

Each panelist integrates information from the REAL and/or CAP evidence with their own clinical judgment for each scenario. They rate the appropriateness of use of the intervention for each scenario and enter their ratings into an online database. The use of the database allows for less error, more opportunity for statistical review and a much faster turnaround of results for phase II.

The second phase of the CEP consists of all nine panelists meeting together, either in person or virtually, to review their clinical appropriateness rating scores and discuss them as a group. After the panel discussion, panelists are asked to re-rate the scenarios using the online database, and the computerized program tabulates the re-rated scenarios.

Conducting research expert panels

As the flow chart in Fig.  2 demonstrates, the initial steps are similar for the research and clinical expert panels. The following methodological description focuses on the specifics of conducting the Research Expert Panel.

Basic steps in a research expert panel (REP)

Similar to the Clinical EP, the Research EP is a two-phased process. However, in this case, initial ratings entered in the online technology database are not based on appropriateness of clinical care, but rather on elucidating and clarifying research gaps, as well as identifying research designs to address the most relevant gaps.

Similar to the Clinical EP, the Research EP member integrates his/her knowledge and expertise with the available evidence from the REAL and/or CAP, and then enters this information into a computerized database. The initial ratings provided by the research expert panelists help to systematize and make transparent specific recommendations on research directions in particular gap areas. In the case where research recommendations are being made specifically for an institution that wishes to carry out a research project, tailored recommendations also are based on the available resources for that institution (when explicated through the CAP). The process for convening the Research Expert Panel in Phase II is similar to the Clinical Expert Panel.

The goal of the research expert panel meeting, however, is for the panelists to discuss areas where they may disagree on gaps and next research priorities. As with the clinical expert panel, after the panel discussion, panelists are then asked to re-rate the research form. Panelists input their ratings in the database which tabulates the scores.

The last phase of the process is to summarize and deliver the panel recommendations in a report with quantified ratings. The recommendations and specific content of each summary report will vary depending on the type of panel (research vs. clinical appropriateness) and the question(s) that the client is seeking to answer. For these panels, the summary includes an in-depth description of the EP methodology and recommendations based upon the findings.

Panel variations: making panels more patient centered

It is important to note that this panel model can be modified to address questions on multiple issues related to health care, such as policy implementation and patient-centeredness. For example, a Policy Expert Panel (PoEP) is a derivative of the Research Expert Panel and focuses on making evidence-based policy judgments (payment, coverage, scope of practice) to direct implementation of a practice claim. In fact, this panel is often used to explore direct implementation issues even when a policy issue is not a factor.

One of the biggest challenges in all panel processes is obtaining patient input early and continuously in the decision making. Usually, patient representatives are placed on advisory boards, where input comes late in the decision-making process. In addition, patients may feel intimidated or lost on panels with scientists and clinical experts, especially if they are not trained or comfortable in executing their role and using the panel methodology. The Samueli SEPP has patients and patient interest groups involved in every phase of the process. It may even be deemed appropriate to convene a patient-only expert panel (PEP) to achieve a complete patient perspective. Patients can be incorporated into panels in various ratios, such as equal (1:1:1 with scientists and clinicians) or weighted toward patients (2:1:1). In addition, use of the anonymous Delphi and virtual meeting processes can further empower patient input in judging either clinical or research relevance, making both methods more patient-centered. The REAL review process also trains all potential panel members (no matter their expertise) in how to use the results of a systematic review. Special training of the patient panel members and of panel moderators further enhances communication and input from the patient's perspective [ 23 , 25 ].

Conclusions and discussion

There is an essential need for evidence-based information to guide research, practice, and health care decision making. Expert panels contribute to evidence-based decision making in both research and practice by closing the gap between the usual evidence summaries and the needs of clinicians, policy makers, patients, and researchers who must use that evidence in daily decisions. There is a clear need for more transparent, systematized, and efficient processes for expert panels, especially in terms of providing recommendations, and this process seeks to fill those gaps. As part of the SEaRCH program, Samueli Institute streamlined an expert panel process driven by the need for cost-effective, efficient, and transparent approaches to addressing health care needs. It is designed to expand diverse stakeholder input into the research judgment process, allowing for easier delivery of expert opinion and patient input when making decisions about research priorities and clinical appropriateness. The methodology is integrated with other SEaRCH components, such as the Rapid Evidence Assessment of the Literature (REAL) and the Claims Assessment Profile (CAP). The methodology is still new and will need to be validated in the future. The use of information from these three evidence-based components will allow for more customized, expedient, and evidence-based recommendations on therapeutic claims of interest.

The expert panel process is the final step in the integrated SEaRCH process described in this series of articles. It improves the link between research evidence, research agendas, real world practice decisions and patient needs and preferences. Together these integrated strategies allow development of true evidence-based health care.

Abbreviations

CAP: claims assessment profile

CEP: clinical expert panel

EP: expert panel

IOM: Institute of Medicine

NIH CDC: National Institutes of Health Consensus Development Conference

PEP: patient expert panel

PoEP: policy expert panel

REAL: Rapid Evidence Assessment of the Literature

REP: research expert panel

SEaRCH: scientific evaluation and review of claims in healing

SEPP: SEaRCH™ expert panel process

SME: subject matter expert

SR: systematic review

Graham R, Mancher M, Wolman D, Greenfield S, Steinberg E. Institute of Medicine. Clinical Practice Guidelines We Can Trust. Washington, DC: The National Academies Press; 2011.

Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technol Assess. 2003;7:1–76.


Linde K. Systematic reviews and meta-analyses. In: Lewith G, Jonas W, Walach H, editors. Clinical research in complementary therapies: principles, problems and solutions. London: Churchill Livingstone; 2002. p. 187–97.


Higgins J, Green S, editors. Cochrane handbook for systematic reviews of interventions, version 5.1.0 [updated March 2011]. West Sussex, England: The Cochrane Collaboration; 2011.

Scottish Intercollegiate Guidelines Network (SIGN): a Guideline Developer’s Handbook ( http://www.sign.ac.uk/ ). Accessed 15 Jan 2015.

Jonas W, Crawford C, Hilton L, Elfenbaum P. Scientific Evaluation and Review of Claims in Health Care (SEaRCH™): a streamlined, systematic, phased approach for determining “what works” in health care. To be published in BMC Res Notes. 2015.

Jonas W, Lewith G, Walach H. Balanced research strategies for complementary and alternative medicine. In: Lewith G, Jonas W, Walach H, editors. Clinical research in complementary therapies: principles, problems and solutions. London: Churchill Livingstone; 2002. p. 1–29.


Coulter I. Evidence summaries and synthesis: necessary but insufficient approach for determining clinical practice of integrated medicine? Integr Cancer Ther. 2006;5:282–6.


Perry S. The NIH consensus development program: a decade later. N Engl J Med. 1987;317:485–8.


Andreasen P. Consensus conferences in different countries. Int J Technol Assess Health Care. 1988;4:305–8.

Coulter I. The NIH consensus conference on diagnosis, treatment and management of dental caries throughout life: process and outcome. J Evid Based Dent Pract. 2001;1:58–63.


Kanouse D, Brook R, Winkler J, Kosecoff J, Berry S, Carter G, Kahan J, et al. Changing medical practice through technology assessment: an evaluation of the NIH consensus development program. Santa Monica: RAND; 1989.

Kosecoff J, Kanouse D, Rogers W, McCloskey L, Winslow C, Brook R. Effects of the National Institutes of Health Consensus Development Program on Physician Practice. JAMA. 1987;258:2708–13.

Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E, editors; Committee on Standards for Developing Trustworthy Clinical Practice Guidelines; Board on Health Care Services; Institute of Medicine. Clinical Practice Guidelines We Can Trust. Washington, DC: The National Academies Press; 2011.

Coulter I, Adams A. Consensus Methods, Clinical Guidelines, and the RAND Study of Chiropractic. ACA J Chiropr. 1992;50–61.

Coulter I, Shekelle P, Mootz R, Hansen D. The use of expert panel results: the RAND panel for appropriateness of manipulation and mobilization of the cervical spine. Santa Monica: RAND; 1997.

Fink A, Brook R, Kosecoff J, Chassin M, Solomon D. Sufficiency of clinical literature on the appropriate uses of six medical and surgical procedures. West J Med. 1987;147:609–14.


Chassin M. How do we decide whether an investigation or procedure is appropriate? In: Hopkins A, editor. Appropriate investigation and treatment in clinical practice. London: Royal College of Physicians; 1989.

Park R, Fink A, Brook R, Chassin M, Kahn R, Merrick N, Kosecoff J, Solomon D. Physician ratings of appropriate indications for six medical and surgical procedures. Am J Public Health. 1986;76:766–72.

Merrick N, Fink A, Park R, Brook R, Kosecoff J, Chassin M, Solomon D. Derivation of clinical indications for carotid endarterectomy by an expert panel. Am J Public Health. 1987;77:187–90.


Brook R, Kosecoff J, Chassin M, Park R, Winslow C, Hampton J. Diagnosis and treatment of coronary disease: comparison of doctors’ attitudes in the USA and the UK. Lancet. 1988;1:750–3.

Shekelle P, Kahan J, Bernstein S, Leape L, Kamberg C, Park R. The reproducibility of a method to identify the overuse and underuse of medical procedures. N Engl J Med. 1998;338:1888–95.

Crawford C, Boyd C, Jain S, Khorsan R, Jonas W. Rapid Evidence Assessment of the Literature (REAL©): streamlining the systematic review process and creating utility for evidence-based health care. To be published in BMC Res Notes. 2015.

Hilton L, Jonas W. Claim Assessment Profile: a method for capturing health care evidence in the Scientific Evaluation and Review of Claims in Health Care (SEaRCH™). To be published in BMC Res Notes. 2015.

Jonas W, Rakel D. Developing optimal healing environments in family medicine. In: Rakel R, editor. Textbook of family medicine. 7th ed. Philadelphia, PA: Saunders; 2007. p. 15–24.


Authors’ contributions

IC, PE, SJ and WJ developed and designed the methodology of the Clinical Expert Panel and the Research Expert Panel and made substantial contributions to the conception and design of the Expert Panel Process. All authors were involved in drafting the manuscript and revising it critically for important intellectual content, have given final approval of the version to be published, and take public responsibility for the methodology shared in this manuscript. All authors read and approved the final manuscript.

Authors’ information

This project was partially supported by award number W81XWH-08-1-0615-P00001 (United States Army Medical Research Acquisition Activity). The views expressed in this article are those of the authors and do not necessarily represent the official policy or position of the US Army Medical Command or the Department of Defense, nor those of the National Institutes of Health, Public Health Service, or the Department of Health and Human Services.

Acknowledgements

The authors would like to acknowledge Mr. Avi Walter for his assistance with the overall SEaRCH process developed at Samueli Institute, and Ms. Viviane Enslein for her assistance with manuscript preparation.

Competing interests

The authors declare that they have no competing interests.

Author information

Authors and affiliations.

RAND Corporation, 1776 Main Street, Santa Monica, CA, 90401, USA

Ian Coulter

Samueli Institute, 1737 King Street, Suite 600, Alexandria, VA, 22314, USA

Pamela Elfenbaum, Shamini Jain & Wayne Jonas


Corresponding author

Correspondence to Wayne Jonas .

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article.

Coulter, I., Elfenbaum, P., Jain, S. et al. SEaRCH™ expert panel process: streamlining the link between evidence and practice. BMC Res Notes 9 , 16 (2016). https://doi.org/10.1186/s13104-015-1802-8

Download citation

Received : 01 May 2015

Accepted : 14 December 2015

Published : 07 January 2016

DOI : https://doi.org/10.1186/s13104-015-1802-8


  • Clinical expert panel
  • Research expert panel
  • Policy expert panel
  • Patient expert panel
  • Subject matter experts
  • Methodology
  • Appropriateness
  • Scientific evaluation and review of claims in health care




Qualitative and quantitative project risk assessment using a hybrid PMBOK model developed under uncertainty conditions

This study presents a qualitative and quantitative project risk assessment using a hybrid PMBOK model developed under uncertainty conditions. An exploratory and applied research design was employed. The research sample included 15 experienced staff working in main and related positions in Neyr Perse Company. After reviewing the literature and the Project Management Body of Knowledge (PMBOK), 32 risk factors were identified, and their number was reduced to 17 risks using expert opinions via the fuzzy Delphi technique run through three stages. The results of the confirmatory factor analysis showed that all risks were confirmed by the members of the research sample. The identified risks were then structured and ranked using fuzzy DEMATEL and fuzzy ANP techniques. The final results showed that political and economic sanctions had the highest weight, followed by the attraction of foreign investors and the lack of regional infrastructure.

Project risks; Project management body of knowledge (PMBOK); Uncertainty; Mixed qualitative and quantitative risk assessment approach; Mathematics; Probability theory; Engineering; Industrial engineering; Business

1. Introduction

It can be stated with certainty that uncertainty exists in all projects, and appropriate methods should be employed to deal with this uncertainty and reduce its impact on managers' decision making [1] . One way to reduce and counteract uncertainty is to use fuzzy set theory, which can to some extent reflect the ambiguity inherent in the problem under analysis and present results that are closer to reality [2] . There are many risks in oil projects, and they can cause many problems if adequate control and planning are lacking [3] . Considering the great importance of such projects and the vital impact of oil on various aspects of the life of Iranian people, it is necessary to conduct extensive studies to increase the reliability of planning. Risk management, as one of the most important branches of management science, especially project management, aims to increase reliability, and several methods have been devised and proposed for it. Fuzzy set theory and fuzzy logic, as modern concepts, will be able to play a major role in risk management if they are combined with management science. Construction projects constitute the greatest and most important projects in the oil industry, and they naturally are replete with small and large risks that can be dealt with through accurate planning [4] .

The oil and gas industry is the most important industry in terms of financial turnover and employment. Given the degree of development in the industries that depend on oil and its derivatives, new projects are initiated every day, so the number of projects in the oil and gas sector is very high. Considering the financial turnover of oil projects, the management of these projects is very important [4] . On the other hand, these projects are also at high risk, which can be attributed to the hazardous nature of gas and oil and the flammability of their derivatives, which are often the cause of accidents in exploration and exploitation projects. For this reason, reducing the risks associated with oil projects, especially exploration and exploitation projects, is very important [5] . Gas and oil projects are associated with a variety of risks in the present era; therefore, the management of project risks is critical to the survival of these projects. Risk management is one of the phases of project management, and project risk ranking is a key part of the risk assessment phase in the process of project risk management. Also, according to experts and practitioners of oil industry projects, the probable impact of risks affects project objectives such as the cost, time, scope and quality of the project [3] . The PMBOK standard identifies risk management at various steps and provides control programs to reduce the severity of risks. These steps are as follows:

  • 1. Risk management planning
  • 2. Risk identification
  • 3. Qualitative risk analysis
  • 4. Quantitative risk analysis
  • 5. Risk response planning
  • 6. Risk monitoring and control [2]

An important point in risk assessment and analysis is uncertainty. Uncertainty in estimating the time and cost of industrial projects is considered a major challenge in project management science, and one of the most effective solutions to this problem is risk analysis. In fact, risk management is the systematic use of management policies, procedures, and processes related to risk analysis, assessment, and control activities. Therefore, prior to initiating the project, the project risks must be identified and quantified, and ultimately an appropriate strategy adopted to prevent their occurrence or mitigate their effects [6] . Two issues are critical in implementing the risk management process. First, the critical risks that have a great impact on the time and cost of the project must be identified, because analyzing all risks in a project is time-consuming and not effective. Second, after identifying and analyzing critical risks, responding to them is essential, because risk management is effective only when the effects of a risk are eliminated or mitigated through precise, predetermined planning as soon as the risk occurs. To this end, a method that can perform quantitative analysis at higher speed and reduce uncertainty in the decision-making context can be effective. Therefore, the present study focuses on statistical and multi-criteria decision-making methods and fuzzy techniques to structure and prioritize the risks of oil projects in the exploration and exploitation phases. The risks inherent in oil projects, given the great number of these projects, can have a negative impact on project quality, time, and cost, and managing them can greatly hinder the occurrence of risk-associated accidents. Thus, given the presence of European countries such as France collaborating on oil projects after the Joint Comprehensive Plan of Action (JCPOA), focusing on risk management and mitigating risk effects is one of the requirements that contracting companies need to meet to pursue engineering, procurement, and construction (EPC) projects. This kind of risk mitigation will also increase the trust of foreign companies and lower their costs. A review of databases showed that, despite the importance of risk assessment and analysis in oil exploration projects, mixed methods have not been employed for risk analysis and evaluation. Therefore, the present study uses mixed methods, including fuzzy Delphi, factor analysis, fuzzy DEMATEL, and fuzzy ANP techniques, to propose an executive and operational framework for risk analysis that minimizes the risks of exploration projects. Thus, the main questions addressed in this study are as follows:

  • 1. What risks exist in oil exploration and exploitation phase projects based on the PMBOK classification?
  • 2. How do these risks affect one another, and how are they affected?
  • 3. What is the significance of each project risk?

2. Literature review

2.1. Concepts and theories

2.1.1. Definition of the project and the importance of its management

Considering the rapid development of industries in the country and the gradual increase of new industrial, construction, and development projects, correct project planning and management is essential in these industrial sectors. Overall, a project can be defined as a series of complex, non-repetitive, and interrelated operations that are implemented by the management or an administrative organization to meet certain goals within a predetermined schedule and budget framework. Project management is a process through which the project achieves the desired outcome during its lifetime in the easiest and most cost-effective way. The project management process consists of three main components: planning, implementation, and supervision [4] .

2.1.2. Project planning and control system

The success of major industrial and construction projects is dependent on a systematic approach to planning and controlling the way activities are carried out in terms of the execution time and cost. The main function of the project planning and control system is preparing, compiling, recording, and keeping the information related to different stages of the project lifecycle and also processing, classifying, and analyzing the information, and preparing the necessary reports for the project manager. The purpose of this system is to direct the project according to the determined schedule and budget, and to provide the final objectives and products of the project and to store the resulting information for use in future projects. This system should assist the project manager in optimizing the three factors of time, cost, and quality in project implementation. A good project planning and control system should have the following capabilities and features [7] :

  • 1. Determining the completion date of the project at the planning and initial scheduling stage
  • 2. Determining the work breakdown structure (WBS) for proper implementation and non-interference of activities and their resources
  • 3. Providing cost-effective solutions to compensate for delays in executing some project activities at the execution time
  • 4. Delivering cost-effective solutions to expedite project implementation in case of changes in the economic and social conditions of the country or the project-generating organization and changes in the project priorities and the need for its faster implementation
  • 5. Scheduling and planning the use of human resources, machinery and equipment, and, in general, reusable resources, for their optimum use and to avoid possible bottlenecks and limitations
  • 6. Determining the distribution of materials and, in general, non-reusable resources between projects and their various activities
  • 7. Scheduling purchase orders for materials, machines, and equipment to reduce storage and waste costs as well as losses caused by stagnant project finance
  • 8. Determining the amount of the project's liquidity per time unit for timely payment of bills and prepayments
  • 9. Recording and analyzing the results when necessary to change the project planning, and maintaining them for use in future projects to prevent similar problems [7] .

2.1.3. Project planning and control stages

1. Planning stage

Project planning includes tasks that are done to identify project activities and their interrelationships and to estimate the time, resources, and cost of implementing them based on criteria in the project-generating organization. The various project planning stages can be divided into the following categories:

Step 1: Project analysis, understanding activities and their interrelationships, and preparing the work breakdown structure (WBS)

  • 1. Determining the project implementation phases based on the implementing organization and its activities, and determining the major activities of each project phase, i.e., dividing the project into its sub-projects
  • 2. Breaking down each sub-project into its components and determining all project activities based on how they are implemented
  • 3. Designing the work breakdown structure (WBS) using a systematic, top-down approach, which, according to the type, organization, and scope of the project, can be based on the project implementation phases, major project activities, the final product and its components, the units contributing to the implementation of the project, or a combination of them
  • 4. Determining all project milestones to facilitate subsequent controls and emphasize the completion of some vital activities at a given time
  • 5. Identifying and defining the order of activities in an accurate and realistic way [8] .

Step 2: Estimating the time, resources, and cost of implementing each project activity

  • 1. Estimating the duration of implementation of any of the activities identified in the first step according to the opinions of executive experts and prior experience in the implementation of similar projects
  • 2. Plotting the project network using the critical path method (CPM) and utilizing professional software programs for project planning and control
  • 3. Estimating human resources, equipment, and machinery required for implementing each project activity
  • 4. Estimating the materials needed to implement the project
  • 5. Identifying existing and available resources and their applicability
  • 6. Estimating the cost of each activity with respect to its fixed and variable costs
  • 7. Analyzing the project costs and comparison of the results with the budget determined for project implementation by the project-generating organization [8] .

Step 3: Project scheduling, resource planning, cost-time trade-off analysis, and reviewing possible problems

  • 1. Analyzing the network time, determining the critical path, and identifying activities with little float (critical activities)
  • 2. Allocating available resources to project activities based on the existing resource constraints
  • 3. Analyzing the project resources and changing the initial scheduling due to existing resource constraints
  • 4. Leveling resources if necessary and changing the initial scheduling according to the leveled resources
  • 5. Analyzing cost-time trade-off and project scheduling with minimal cost using the existing and new methods presented in this field
  • 6. Reviewing adverse weather conditions and other predictable problems affecting the implementation and timing of project activities [8] .

2.1.4. Risk management

Chapman and Ward have proposed a general project risk management process consisting of nine phases: 1) identifying key aspects of the project; 2) focusing on a strategic approach to risk management; 3) identifying the time of occurrence of risks; 4) estimating risks and their interrelationships; 5) allocating ownership of risks and providing appropriate responses; 6) estimating uncertainty; 7) estimating the importance of the relationships between different risks; 8) designing responses and monitoring the risk situation; and 9) controlling the implementation stages [4] .

In order to achieve tangible development, developing countries are forced to increase investment in infrastructure, which, apart from meeting basic needs, has a positive impact on accelerating economic development [9, 10] . Although developing countries such as Iran face some limitations and uncertainties when moving toward this goal, they have to engage domestic and foreign private sectors in projects or infrastructural services in order to overcome or reduce such uncertainties. Growing development in a country like Iran requires a large amount of investment in the infrastructure sector [11] . Therefore, due to the uncertain nature of projects and the need for the optimal utilization of resources, each project faces uncertainties. The recognition that projects are fraught with uncertainties, in areas such as technical skills or management quality, reinforces the fact that many projects fail in terms of their goals, benefits, costs, and expected time. The existence of risk and uncertainty in a project reduces the accuracy of estimates of its goals and reduces the efficiency of the project. Therefore, project risk identification and management is essential [12] . Considering the importance of the science of project management in recent years, various standards have been proposed in this regard. These standards include the basic principles and requirements considered necessary for the successful management of a project or the implementation of a project management system. Some well-known project management standards are presented in Table 1 .

Table 1. Famous project management standards [4] .

The most famous and extensive of the above standards is the Project Management Body of Knowledge (PMBOK). This standard covers nine areas of knowledge for successful project management. Of these areas, project scope management, project time management, project cost management, and project quality management are considered the main areas; one of the most important supporting areas is risk management [8] . Risk management is the process of identifying, analyzing, evaluating, and responding to the risks in a project [13] . The Project Management Body of Knowledge is a set of terms, guidelines, and instructions for project management developed and proposed by the Project Management Institute. This body of knowledge has evolved over time in the form of a book entitled “A Guide to the Project Management Body of Knowledge”, whose fifth edition was released in 2013. The PMBOK also overlaps with the concept of management in its overall sense, because both involve concepts such as planning, organizing, human resources, implementing, and controlling organizational operations. The PMBOK likewise overlaps with other management disciplines, such as financial forecasting, organizational behavior, management science, budgeting, and other planning approaches [2] .

The purpose of project risk management is to identify and analyze risks in a manner that the risks are understood easier and managed more effectively [14] . A systematic risk management process is usually divided into three categories:

  • 1. Risk identification and classification
  • 2. Risk analysis
  • 3. Risk mitigation [14] .

2.1.5. Project risk management

Project risk management is one of the main project issues [20] and is considered a key factor in most organizations involved in projects [21] . Risk management is the systematic process of identifying, analyzing, and responding to project risk, which involves maximizing the probability of positive events and their outcomes and minimizing the probability of adverse events and their outcomes [22] . A two-stage process has been proposed for project risk management, as follows:

  • 1. Risk assessment including risk identification, analysis, and prioritization
  • 2. Risk management including risk management planning, risk precautions, follow-up, and corrective actions

Risk assessment is the process of estimating the likelihood of the occurrence of an event (desirable or undesirable) and its impact [23] . This step can help in selecting less risky projects and eliminating residual risk [22] . In the first step, using one of the risk identification tools, the major threats and opportunities that can affect the project processes and outcomes are identified. After the main risks have been identified, the second step involves accurately assessing the frequency of occurrence and the consequences of each risk and then ranking the various risks based on the assessment results. In this way, the identified risks can be compared with each other, and in the next phases of the risk management process an appropriate risk response method can be decided.
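As a schematic illustration of this assess-then-rank step (not the fuzzy procedure developed later in this paper), a crisp probability-times-impact scoring can be sketched as follows; the risk names and numeric values below are hypothetical, not taken from the study.

```python
# Hypothetical risks: (probability of occurrence, impact on a 1-10 scale).
risks = {
    "equipment failure during drilling": (0.4, 8),
    "oil price drop": (0.6, 6),
    "permit delay": (0.3, 4),
}

# Score each risk as probability x impact and rank from highest to lowest.
scores = {name: p * impact for name, (p, impact) in risks.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.1f}")
```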

2.2. Previous studies

2.2.1. Domestic studies

See Table 2 .

Table 2. Summary of the studies conducted in Iran.

2.2.2. Foreign studies

See Table 3 .

Table 3. Summary of the studies conducted abroad.

2.3. Identified factors

The risks of exploration and exploitation projects are considered as variables and units of analysis and are initially classified using the PMBOK standard based on the following model (see Fig. 1 ).

Figure 1

Risks classified based on the PMBOK standard [4] .

Also, based on studies in the literature, the following risks were identified for oil projects in the exploration and exploitation phase (see Table 4 ). Thus, various risks were identified based on the studies in the literature.

Table 4. Risks identified in oil projects in the exploration and exploitation phase.

As mentioned above, there are notable studies addressing risk assessment. However, to the best of our knowledge, none of them has applied fuzzy DEMATEL and fuzzy ANP techniques to risk assessment problems in oil and gas companies. This has been a motivation for the current work. More specifically, the main contributions of this paper can be described as follows:

  • 1. A qualitative and quantitative project risk assessment using a hybrid PMBOK model is developed under uncertainty for an oil and gas company.
  • 2. Thirty-two risk factors were identified from the literature, and their number was reduced to 17 risks using expert opinions via the fuzzy Delphi technique run through three stages.
  • 3. The identified risks were structured and ranked using fuzzy DEMATEL and fuzzy ANP techniques.
  • 4. The performance of the developed solution approaches is evaluated by running the mentioned techniques.

The rest of this paper is structured as follows. Section 3 is devoted to the methodology. Section 4 presents the results and a comprehensive experimental analysis. Finally, the conclusion of this paper is provided in Section 5 .

3. Methodology

The present study is exploratory research in terms of its objectives, as it seeks to identify and evaluate the risks of exploration and exploitation projects. In addition, this study employs a descriptive and analytical design, as the researcher does not manipulate the variables and only describes the variables in their normal states and analyzes the collected data. This study is also a survey, because it collected expert data and opinions using various questionnaires. First, this study presented a review of the literature and addressed the risks under analysis. Then, using the PMBOK standard, other risks were identified. The fuzzy Delphi method was used to confirm the related risks based on expert opinions; the results of this stage determined which risks were relevant. Afterward, the fuzzy DEMATEL technique was employed to structure and investigate the network relationships among the risks. Finally, based on the fuzzy DEMATEL results, the interrelationships between risks were identified using the fuzzy ANP questionnaire, and the fuzzy rank and weight of each risk were estimated using the fuzzy ANP technique (see Fig. 2 ).

Figure 2

The research procedure.

3.1. Expert panel

In order to identify and evaluate the project risks, the opinions of experts and specialists in managing oil exploration and exploitation projects in Neyr Perse Company were used. Based on the Cochran sampling method, 60 experts were asked to fill out the questionnaire. To complete the Delphi, DEMATEL, and ANP questionnaires, 15 experienced experts who held the main and related positions in the company were surveyed. The experts were selected based on their expertise and availability.

3.2. Instruments and data collection procedure

The data were collected through library and field techniques. The secondary data were collected via the library technique, and the primary data were collected using the field technique, i.e., by distributing questionnaires among the respondents in the research sample. In this study, three questionnaires were used: fuzzy Delphi, fuzzy DEMATEL, and fuzzy ANP. To determine the validity of the questionnaires, expert opinion was used. All three questionnaires have a standardized structure: the indices are first extracted from the research literature and entered into the Delphi questionnaire, and the confirmed risks are then entered into the DEMATEL and paired-comparison ANP questionnaires. The professors, as well as the experts in the first stage of the research, therefore confirm the risks.

3.3. Data analysis

The data collected in this study were analyzed in three stages: first, confirmation of the identified risks using fuzzy Delphi analysis; second, structuring of the validated factors using fuzzy DEMATEL; and third, prioritization of the final indicators using fuzzy ANP. Each method is described below.

3.3.1. Fuzzy set theory

Decision-making in the area of risk analysis cannot be carried out in a purely deterministic space. In classical multi-criteria decision making, the weights of the criteria are assumed to be well known, but due to the ambiguity and uncertainty in decision-makers' statements, expressing the data deterministically is inappropriate [24] . In this study, verbal expressions were therefore used instead of crisp numbers to determine the weights of the indices and to rank the options. The scale in Table 5 , proposed by [25] , was used to determine the influence of risks and their weights, and the scale in Table 6 , presented by [26] , was used to form the decision matrix.

Table 5. Correspondence of verbal expressions with triangular fuzzy numbers.

Table 6. Verbal variables associated with indicators.

In this study, triangular fuzzy numbers are used to prevent ambiguity in decision making at all stages. A triangular fuzzy number is denoted by à = (l, m, u), where the parameters l, m, and u respectively represent the lowest possible value, the most probable value, and the highest possible value of a fuzzy event [24] . To assess the experts' views on the severity of the impact of the risks in pairwise comparisons, a five-level scale of linguistic variables ranging from “equal” to “very high” was used.
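A triangular fuzzy number and a verbal scale of the kind shown in Table 5 can be represented compactly in code. The sketch below is a minimal Python illustration; the numeric values of the scale are placeholders, since the actual values of Table 5 are not reproduced in this text, and only four of the five linguistic levels are shown because the fifth label is not recoverable here.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number A = (l, m, u)."""
    l: float  # lowest possible value
    m: float  # most probable value
    u: float  # highest possible value

    def __add__(self, other):
        # Component-wise fuzzy addition.
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

    def scaled(self, k):
        # Multiplication by a crisp scalar k.
        return TFN(k * self.l, k * self.m, k * self.u)

# Hypothetical influence scale (the values in the paper's Table 5 may differ):
SCALE = {
    "equal":     TFN(0.00, 0.00, 0.25),
    "low":       TFN(0.00, 0.25, 0.50),
    "high":      TFN(0.25, 0.50, 0.75),
    "very high": TFN(0.50, 0.75, 1.00),
}

print(SCALE["low"] + SCALE["high"])  # component-wise addition
```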

It should be noted that triangular fuzzy numbers are used in the fuzzy DEMATEL and fuzzy ANP methods, while trapezoidal fuzzy numbers are used in the fuzzy Delphi method. Both methods are described below.

3.3.2. Fuzzy Delphi technique

The fuzzy Delphi technique is, in fact, a combination of the Delphi method and the analysis of the collected data using the definitions of fuzzy set theory, as follows:

  • 1. Selecting experts and explaining the research problem to them.
  • 2. Preparing the questionnaire and sending it to the experts.
  • 3. Aggregating the collected opinions. In Eq. (1), $A_i$ denotes the opinion of expert $i$, and in Eq. (2), $A_{ave}$ denotes the average of the expert opinions; $a_1$, $a_2$, $a_3$, and $a_4$ are the components of the trapezoidal fuzzy numbers.
  • 4. Sending each expert's previous opinion, together with its difference from the average of the others' opinions, back to the experts along with the next round of the questionnaire.
  • 5. If the difference between rounds exceeds the threshold, step 4 is repeated.
  • 6. If the difference between the two rounds is smaller than the threshold, the fuzzy Delphi process is stopped (a small illustrative sketch follows this list).
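A minimal sketch of one aggregation round is given below, assuming that trapezoidal opinions are averaged componentwise and that an expert's disagreement is the mean absolute distance between his or her opinion and the group average. The paper's Eqs. (1)–(3) are not reproduced in this extract, so the distance measure and the 0.2 threshold are illustrative assumptions; the opinion values follow the linguistic scale defined in Section 4.2.1:

    import numpy as np

    # Each row is one expert's trapezoidal opinion (a1, a2, a3, a4) on a risk.
    opinions = np.array([
        [3.0, 4.0, 6.0, 7.0],    # "medium"
        [6.0, 8.0, 10.0, 10.0],  # "high"
        [3.0, 4.0, 6.0, 7.0],    # "medium"
    ])

    # Componentwise average of the expert opinions (analogue of Eq. (2)).
    average = opinions.mean(axis=0)

    # Per-expert disagreement with the group average (analogue of Eq. (3);
    # the exact distance used in the paper is not reproduced here).
    disagreement = np.abs(opinions - average).mean(axis=1)

    THRESHOLD = 0.2  # illustrative consensus threshold
    if disagreement.max() > THRESHOLD:
        print("No consensus: return the averages to the experts for another round.")
    else:
        print("Consensus reached: stop the fuzzy Delphi process.")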

3.3.3. Fuzzy DEMATEL technique

Given that the DEMATEL method relies on expert opinions, which are expressed verbally and are therefore ambiguous, it is advisable to convert them to fuzzy numbers before integrating them. To address this, Lin and Wu extended the DEMATEL method to the fuzzy environment [28]. The procedure is described below [25].

Step 1: Obtaining the expert opinions and averaging them

Suppose $p$ experts have expressed their opinions about the relationships among the risks using the verbal expressions in Table 5. There are then $p$ matrices $\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_p$, each representing the opinions of one expert, whose components are the corresponding fuzzy numbers. Eq. (4) is used to estimate the average matrix of opinions.

Matrix Z is called the initial fuzzy direct relation matrix.
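Eq. (4) itself is not reproduced in this extract; in the standard fuzzy DEMATEL formulation, the averaging it describes takes the following form, reconstructed here to be consistent with the surrounding text rather than quoted from the paper:

$$\tilde{Z} = \frac{\tilde{x}_1 \oplus \tilde{x}_2 \oplus \cdots \oplus \tilde{x}_p}{p}$$

where $\oplus$ denotes componentwise addition of the triangular fuzzy numbers.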

Step 2: Calculation of the normalized direct relation matrix

Equations (5) and (6) are used to normalize the matrix obtained in step 1.
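Eqs. (5) and (6) are likewise not reproduced here; in the usual formulation, the normalization divides the initial matrix by its largest fuzzy row sum (a reconstruction under that assumption):

$$r = \max_{1 \le i \le n} \sum_{j=1}^{n} u_{ij}, \qquad \tilde{X} = \frac{\tilde{Z}}{r}$$

where $u_{ij}$ is the upper value of the triangular fuzzy entry $\tilde{z}_{ij}$.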

Step 3: Calculating the total relation fuzzy matrix T

The total relation fuzzy matrix is calculated via equations (7) through (9) :

where each component is expressed as $\tilde{t}_{ij} = (l_{ij}^{t}, m_{ij}^{t}, u_{ij}^{t})$ and is calculated as follows:

where $I$ is the identity matrix, and $H_l$, $H_m$, and $H_u$ are each $n \times n$ matrices whose components are, respectively, the lower, middle, and upper values of the triangular fuzzy numbers of matrix $H$ [29].
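The computations behind Eqs. (7)–(9) are not shown in this extract; in the common fuzzy DEMATEL formulation, each component matrix of the total relation matrix is obtained from the corresponding component of the normalized matrix as follows (a reconstruction under that assumption, with $X_l$, $X_m$, and $X_u$ the lower, middle, and upper component matrices of $\tilde{X}$):

$$T_l = X_l (I - X_l)^{-1}, \qquad T_m = X_m (I - X_m)^{-1}, \qquad T_u = X_u (I - X_u)^{-1}$$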

Step 4: Calculating the sums of the rows and columns of the matrix T

The sum of rows and columns is obtained according to equations (11) and (12) :

where $\tilde{D}$ and $\tilde{R}$ are both $n \times 1$ matrices.
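Eqs. (11) and (12) are not reproduced in this extract; read together with the definitions above, they are the fuzzy row and column sums of $\tilde{T}$ (a reconstruction under that assumption, with $\tilde{R}$ transposed into a column):

$$\tilde{D}_i = \sum_{j=1}^{n} \tilde{t}_{ij}, \qquad \tilde{R}_j = \sum_{i=1}^{n} \tilde{t}_{ij}$$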

Step 5: Determining the weights of the indices, $\tilde{D} + \tilde{R}$, and the relationships between the criteria, $\tilde{D} - \tilde{R}$

If $\tilde{D} - \tilde{R} > 0$, the corresponding criterion is a net cause (it influences the others); if $\tilde{D} - \tilde{R} < 0$, it is a net effect (it is influenced by the others).

Step 6: Defuzzification of the fuzzy numbers $\tilde{D} + \tilde{R}$ and $\tilde{D} - \tilde{R}$ calculated in the previous step

The fuzzy numbers $\tilde{D} + \tilde{R}$ and $\tilde{D} - \tilde{R}$ calculated in the previous step are defuzzified using the center-of-gravity method, Eqs. (13)–(16):

where $Z^{*}$ is the defuzzified value of $\tilde{A} = (a_1, a_2, a_3)$.
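Eqs. (13)–(16) are not reproduced in this extract. For a triangular fuzzy number, the center-of-gravity method reduces to the centroid, so one consistent reading (an assumption rather than a quotation of the paper) is:

$$Z^{*} = \frac{a_1 + a_2 + a_3}{3}$$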

Step 7: Calculating weight and impact factors:

The relative importance of the criteria is estimated through the following equation [30], [31]:
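The equation is not reproduced in this extract; a formulation widely used in DEMATEL-based weighting, offered here as an assumption about the intended computation rather than a quotation of the paper, combines prominence and relation as:

$$w_j = \sqrt{\left(D_j + R_j\right)^2 + \left(D_j - R_j\right)^2}$$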

Step 8: Normalization of the weights of the criteria

where $\tilde{W}_j$ is the final weight of decision-making criterion $j$.
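Putting steps 1 through 8 together, the sketch below runs the crisp core of the procedure on a small illustrative matrix. The study applies these steps three times, once per triangular component, and its influence values come from the expert questionnaires, so the numbers here are invented for the example:

    import numpy as np

    # Illustrative 4x4 direct-influence matrix (step 1 output); not from the paper.
    Z = np.array([
        [0.0, 3.0, 2.0, 1.0],
        [1.0, 0.0, 3.0, 2.0],
        [2.0, 1.0, 0.0, 3.0],
        [1.0, 2.0, 1.0, 0.0],
    ])
    n = Z.shape[0]

    # Step 2: normalize by the largest row sum.
    X = Z / Z.sum(axis=1).max()

    # Step 3: total relation matrix T = X (I - X)^-1.
    T = X @ np.linalg.inv(np.eye(n) - X)

    # Step 4: row sums D and column sums R.
    D = T.sum(axis=1)
    R = T.sum(axis=0)

    prominence = D + R  # step 5: overall importance of each risk
    relation = D - R    # step 5: > 0 net cause, < 0 net effect

    # Steps 7-8: weights from prominence, normalized to sum to 1.
    w = prominence / prominence.sum()

    for i in range(n):
        role = "cause" if relation[i] > 0 else "effect"
        print(f"risk {i + 1}: D+R = {prominence[i]:.3f}, "
              f"D-R = {relation[i]:+.3f} ({role}), weight = {w[i]:.3f}")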

3.3.4. The fuzzy analysis network process (FANP)

The analytic network process (ANP) is a generalization of the analytic hierarchy process (AHP): a method supporting multi-criteria decision-making that breaks complex problems down without requiring strictly hierarchical relations among their components. Like the AHP, the ANP uses pairwise comparisons, and a compatibility indicator is used to check the convergence of the expert opinions. Each network component is denoted $C_h$, $h = 1, \ldots, m$, and contains $n_h$ elements, denoted $e_{h1}, e_{h2}, \ldots, e_{hn_h}$. The effect of a set of elements of one component on an element of the system is represented by a priority vector derived from the pairwise comparisons. Grouping and sorting these vectors transforms the structure into a matrix, which represents the effect of an element of a component on itself, or on another component linked to it by an arrow. Sometimes, as in the hierarchical case, effects run only from the tail of an arrow to its head. The effects of elements on the other elements of the network can be shown via the supermatrix displayed in Fig. 3(a).

Figure 3

Supermatrix.

Each $W_{ij}$ in the supermatrix is called a block, as shown in Fig. 3(b). Each column of $W_{ij}$ is an eigenvector of the effect (significance) of the elements of network component $i$ on an element of network component $j$. Some entries may be zero, reflecting the absence of impact; it is therefore unnecessary to use all the elements of a component in the pairwise comparisons to obtain an eigenvector, and the non-zero effects suffice. In the last step, to obtain the ultimate priorities, the limit of the supermatrix $W$ is taken using the Markov process, as follows [32]:
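The limit expression itself is not shown in this extract; in the standard ANP treatment it is the limit of the powers of the weighted, column-stochastic supermatrix:

$$W^{\infty} = \lim_{k \to \infty} W^{k}$$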

After completing the comparison matrix, the priority or weight of each criterion and alternative are calculated. In the analysis process, two types of weight should be calculated: relative weight and final weight.

The relative weights are obtained from the pairwise comparison matrices: the elements of each level are compared in pairs with respect to their parent element at the higher level, and their weights are computed. The final weight, by contrast, is the final rank of each option, calculated by combining the relative weights. Any pairwise comparison matrix may be consistent or inconsistent; if the consistency ratio is less than 0.1, the matrix is accepted, otherwise the pairwise comparisons must be repeated until a consistent matrix is obtained. Because a sound decision model must accommodate ambiguity, fuzzy set theory is applied to the usual ANP, yielding what is commonly known as fuzzy ANP, or FANP. The following steps are taken to do so:

  • 1. Breaking the project risk analysis down into a network. The overall goal is to select the risks with the highest importance.
  • 2. A questionnaire is prepared on the basis of this network and the experts are asked to complete it. The questionnaire is built on pairwise comparisons using a nine-point scale. The compatibility index and compatibility ratio are calculated for each matrix to test the consistency of each expert's opinions; if the compatibility test fails, the corresponding values in the pairwise comparison matrix must be reviewed by the expert.

Conversion of linguistic variables to fuzzy numbers.

  • 3. The linguistic variables are converted to the corresponding fuzzy numbers, and a fuzzy positive reciprocal comparison matrix is defined as follows:

where $\tilde{A}_k$ is the positive reciprocal matrix of decision-maker $k$, and $\tilde{a}_{ij}$ is the relative importance of decision element $i$ with respect to element $j$:

If the experts are indexed $p_1$ to $p_k$, each pairwise comparison between two criteria yields $k$ triangular fuzzy reciprocal values. The geometric mean method is used to integrate the experts' multiple answers, and the integrated fuzzy positive reciprocal matrix is accordingly as follows:
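The aggregation equation is not reproduced in this extract; the geometric mean of the experts' fuzzy judgments is conventionally written as (a reconstruction under that assumption):

$$\tilde{a}_{ij} = \left( \tilde{a}_{ij}^{1} \otimes \tilde{a}_{ij}^{2} \otimes \cdots \otimes \tilde{a}_{ij}^{k} \right)^{1/k}$$

where $\otimes$ denotes componentwise multiplication of the triangular fuzzy numbers.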

  • 4. Using the center-of-gravity method (explained in the fuzzy DEMATEL section above), the resulting triangular fuzzy numbers are converted to crisp numbers.
  • 5. The pairwise comparison matrix is computed from the crisp values, and the priority vector of each pairwise comparison matrix is calculated.
  • 6. The priority vectors are assembled into the weighted supermatrix used in this study (see Fig. 4).

Figure 4

The supermatrix used in this study.

  • 7. The limit supermatrix is calculated by raising the weighted supermatrix to successive powers until it converges to a stable supermatrix. The risk priority weights are read from the limit supermatrix using the Fuzzy ANP Solver software (an illustrative sketch of this step follows).
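As a minimal illustration of step 7, the sketch below repeatedly squares a small column-stochastic matrix until its entries stabilize. The matrix is invented for the example; the study's actual supermatrix is built from Fig. 4 inside the Fuzzy ANP Solver software:

    import numpy as np

    # Illustrative weighted (column-stochastic) supermatrix; not from the paper.
    W = np.array([
        [0.2, 0.5, 0.3],
        [0.5, 0.2, 0.4],
        [0.3, 0.3, 0.3],
    ])

    def limit_supermatrix(W, tol=1e-9, max_iter=1000):
        """Raise W to successive powers until its entries stabilize."""
        prev = W
        for _ in range(max_iter):
            nxt = prev @ prev  # squaring reaches high powers quickly
            if np.max(np.abs(nxt - prev)) < tol:
                return nxt
            prev = nxt
        return prev

    L = limit_supermatrix(W)
    print(L[:, 0])  # each column of the limit matrix is the priority vector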

After the necessary information and data have been collected, extracted, and categorized, the model is solved and the information analyzed. This section uses the fuzzy Delphi method to confirm the risks identified in the oil exploration and exploitation phases, a novel fuzzy DEMATEL structuring method, and a fuzzy ANP ranking method to analyze the collected data and to structure and rate these factors. The following subsections present the risks identified in the research sources, the results of the fuzzy Delphi method, the data analysis using the fuzzy DEMATEL method, and finally the results of the fuzzy ANP technique.

4.1. Identified risks

This section presents the 32 risks identified in the previous literature and studies, categorized according to the PMBOK classification (see Table 8).

Identified risks.

Because these risks are taken from standard sources, some of them may not apply to the Iranian field of operation, and there may be further risks in the Iranian exploration and exploitation process that should be addressed; only experts can judge this. The fuzzy Delphi technique was therefore used to gather expert opinion and reach consensus on the identified risks. Fuzzy Delphi was chosen because it accommodates the uncertainty and ambiguity in the expert opinions, as described below.

4.2. Fuzzy Delphi results

After the questionnaire had been distributed in two rounds and the opinions averaged, the differences between the averages and the final consensus of the experts on the risks were obtained; they are presented in the following table.

4.2.1. Definition of linguistic variables

Qualitative variables are defined as trapezoidal fuzzy numbers: low (0, 0, 2, 4), medium (3, 4, 6, 7), and high (6, 8, 10, 10). Although trapezoidal fuzzy numbers are computationally more demanding than triangular ones, the flat interval from b to c in a trapezoidal number lets it carry more of the ambiguity of verbal and qualitative variables; using trapezoidal numbers in the Delphi stage therefore captures more of the ambiguity in the expert opinions [33].

4.2.2. Risk analysis

Based on the suggested options and definition of linguistic variables, the questionnaire was designed. The results of the survey responses to the questionnaire are presented in Table 9 .

First questionnaire results.

The fuzzy numbers were then converted to crisp numbers using the Minkowski formula, which was chosen because, for the data in this study, it gave better results than the other defuzzification methods considered and was easier to apply.
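The formula itself is not reproduced in this extract; the form usually cited as the Minkowski formula in the fuzzy Delphi literature, stated here as an assumption rather than a quotation of the paper, is

$$x = m + \frac{\beta - \alpha}{4}$$

where $m$ is the modal value of the fuzzy number and $\alpha$ and $\beta$ are its left and right spreads.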

According to Tables 9 and 10, each expert's disagreement can be calculated using Eq. (3) [27]. Based on this relation, each expert can compare his or her opinion with the average of the others' comments and, if desired, adjust the earlier opinion. The results of this step are given in Tables 11 and 12.

Average opinions of experts from the first questionnaire.

Second questionnaire results.

Average opinions of experts from the second questionnaire.

The results of the mean differences and the experts' final conclusions on the risks are reviewed below.

As can be seen, the experts failed to reach agreement on 6 cases; they agreed to omit 10 risks and confirmed 16 risks. To settle the remaining six indices, a third Delphi questionnaire was distributed and the experts were asked to re-evaluate their opinions (see Table 13).

Differences in the experts' opinions in the first and second questionnaires.

As shown in Table 14, the experts reached agreement on the remaining 6 items in the third stage: they omitted 5 risks and confirmed only 1 (No. 23). In total, therefore, 17 risks were confirmed by the experts, and 15 were not confirmed given the climatic conditions of exploration and exploitation activities (see Table 15). Table 16 presents the list of the confirmed risks.

Differences in the experts' opinions in the second and third questionnaires.

The identified risks.

One expert's opinion on the pairwise comparison of indicators in terms of effectiveness.

4.3. Fuzzy DEMATEL results

First, the DEMATEL questionnaire was distributed among the experts, who were asked to compare, using verbal descriptions, the extent to which the indices under analysis affect or are affected by each other. The questionnaires were then collected and the verbal descriptions converted to the corresponding fuzzy numbers (see Table 17).

Corresponding fuzzy numbers for pairwise comparisons.

In the next step, a matrix of fuzzy numbers was formed from each expert's opinions, and the opinions were aggregated using the arithmetic mean method. The matrix of aggregated expert opinions is obtained as a fuzzy set [34].

This matrix is called the initial direct-relation fuzzy matrix, in which each $\tilde{z}_{ij} = (l_{ij}, m_{ij}, u_{ij})$ is a triangular fuzzy number and each diagonal entry $\tilde{z}_{ii}$ ($i = 1, 2, \ldots, n$) is taken to be the triangular fuzzy number $(0, 0, 0)$.

Then, by normalizing the initial direct-relation fuzzy matrix, the normalized direct-relation fuzzy matrix $\tilde{X}$ is obtained, with the normalization factor $r$ defined as in Eqs. (5) and (6) of Section 3.3.3.

Table 18 shows the normalized accumulated expert opinion matrix.

The normalized accumulated expert opinion matrix.

In the next stage, the lower, middle, and upper components of the triangular fuzzy numbers were separated and entered into the DEMATEL Solver software as three separate matrices, and the results were then combined: the R and J values of the three parts were merged so that the three matrices formed a single fuzzy matrix. R+J and R-J were then calculated using the fuzzy equations (see Table 19).

The result of the DEMATEL technique for the lower components of the triangular fuzzy numbers.

The analysis of the lower components of the triangular fuzzy numbers showed that the risks of political and economic sanctions, failure to attract foreign investors to project implementation, the ban on specialized consulting by foreign companies, and the lack of the regional infrastructure needed to implement industrial projects ranked first to fourth (see Table 20).

The result of the DEMATEL technique for the middle components of the triangular fuzzy numbers.

The analysis of the middle components of the triangular fuzzy numbers produced the same first-to-fourth ranking: political and economic sanctions, failure to attract foreign investors to project implementation, the ban on specialized consulting by foreign companies, and the lack of the regional infrastructure needed to implement industrial projects (see Table 21).

The result of the DEMATEL technique for the upper components of the triangular fuzzy numbers.

The analysis of the upper components of the triangular fuzzy numbers again ranked political and economic sanctions, failure to attract foreign investors to project implementation, the ban on specialized consulting by foreign companies, and the lack of the regional infrastructure needed to implement industrial projects first to fourth.

In order to determine the final ranks and design the impact model, Table 22 is defuzzified as follows [35], [36]:

Fuzzy R+J and R-J relations.

As is clear from the calculations, R17 has the greatest influence: it affects the largest number of risks. Fig. 5 shows the influence of the final risks in the exploration and exploitation phase (see Table 23).

Figure 5

The impact of final risks in the exploration and exploitation phase.

Final defuzzificated results.

As can be seen in the figure above, a factor near the top of the model (high R-J) affects the largest number of factors, and a factor on the right side of the model (high R+J) has the greatest overall interaction with the other factors. The results indicate that political and economic sanctions sit at the top of the model and therefore affect the greatest number of factors; the failure to attract foreign investors to project implementation and the ban on professional consulting by foreign companies occupy the next positions in terms of their effects on other factors. Political and economic sanctions are also located at the rightmost point of the model, taking first place in terms of intensity, followed by the failure to attract foreign investors and the failure of contractors and consultants to account for the minimum requirements in tenders, the project's final cost, and the estimated profit and loss.

4.4. Fuzzy ANP results

To better understand the effects of the indices, a threshold value must be specified so that weak relationships are filtered out of the model; in other words, only the relations whose values in the total relation matrix T exceed the threshold are displayed, and relations below it are discarded. To determine the threshold, the fuzzy matrix was defuzzified and a DEMATEL analysis was performed on the result; the defuzzified threshold was estimated to be 0.05. The relations whose impact exceeds 0.05 in the total impact matrix are shown in Table 24 [35], [36].

The effects higher than the threshold in the total impact matrix.

As can be seen, only the factors whose interrelationships exceed 0.05 enter the ANP questionnaire; the other relationships are set to zero because of their low importance. The initial relation matrix based on these results is presented in Table 25, and a small sketch of this filtering step follows.
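As a minimal illustration of the filtering, the sketch below keeps only the entries of a defuzzified total relation matrix that exceed the 0.05 threshold and encodes the survivors as a 0/1 relation matrix. The matrix values are invented for the example, since the study's T comes from the DEMATEL stage:

    import numpy as np

    # Illustrative defuzzified total relation matrix; not the study's data.
    T = np.array([
        [0.00, 0.12, 0.03],
        [0.07, 0.00, 0.09],
        [0.02, 0.06, 0.00],
    ])

    THRESHOLD = 0.05
    impact = (T > THRESHOLD).astype(int)  # 1 = relation enters the ANP network
    print(impact)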

The impact matrix (0, 1).

Table 26 displays the normalized matrix.

The normalized matrix.

Accordingly, the normalized weights of the risk impact matrix were determined. Then, in order to weight each risk, the risks were first classified according to the PMBOK standard into seven categories: time and cost, human resources, quality, contract, scope, communication, and others. The final risks of each category are shown in Fig. 6.

Figure 6

Risk-relation matrix based on DEMATEL results in fuzzy ANP software.

Once the model has been identified, the main categories are compared and weighted first; each group of risks is then compared and weighted. Categories containing a single risk receive a weight of 1: the three categories of quality, scope, and other risks have only one risk each. The two categories of human resources and contract each contain two risks, whose weights were determined by the experts through the questionnaire. To this end, paired-comparison questionnaires were designed and distributed among the experts. In line with the fuzzy approach of this study, the verbal expressions and fuzzy numbers in Table 27 were used.

Fuzzy spectrum and corresponding verbal expressions.

In this section, pairwise comparisons are made according to Fig. 7, and the component weights were obtained and prioritized using the modified method of [37], [38], [39], [35]. The Gogus and Boucher method was used in the software to calculate compatibility. The following tables show the geometric means of the expert opinions, with the eigenvector shown in the last column; the example tables illustrate how the eigenvector and the geometric mean are calculated.

Figure 7

The final weight matrix for criteria in terms of oil exploration and exploitation risks.

The following figure and tables (see Fig. 8 and Tables 28, 29, 30, 31, 32, 33, 34 and 35) show the final weights for each risk category:

Figure 8

The final weight matrix for sub-criteria in terms of oil exploration and exploitation risks.

Mean paired comparisons for the risks of oil exploration and exploitation.

Mean paired comparisons for Time and Cost.

Mean paired comparisons for $R_l$.

The results of the mean paired comparisons for each risk and the consistency/inconsistency of the experts' opinions.

As can be seen, time and cost risks have the highest weight, followed by quality risks and the other main risk categories.

As shown, economic and political sanctions have the highest weight, followed by the failure to attract foreign investors and the lack of regional infrastructure in the second and third positions.

5. Conclusions

Neyr Perse Company is one of the most important companies in the field of oil exploration and exploitation projects, and its operations are continually exposed to risks. Given the importance and necessity of risk management in the company's projects, this study proposed a hybrid model for structuring and ranking the risks presented in the Project Management Body of Knowledge (PMBOK) using expert opinions. The results showed the weight (importance) of each risk under analysis: economic and political sanctions had the highest weight, followed by the failure to attract foreign investors and the lack of regional infrastructure in the second and third positions. Based on the results and the qualitative and quantitative approach taken in this study, several suggestions are offered to the officials of Neyr Perse Company:

  • 1. The company's managers are advised to plan for and counteract risks by continuously identifying and assessing them. Without scientific methods, managerial decisions may deviate considerably from reality, and compensating for the resulting mistakes may be costly.
  • 2. Managers of the company can take decisions based on a combination of approaches derived from theories and previous studies, documentation, and global and national standards, risk management instructions such as PMBOK, as well as the opinions of the experts and managers of the company that are the result of their expertise and experience, and thus contribute to promoting the position of the company and the achievement of its goals.
  • 3. Structuring the identified risks helps managers analyze the extent to which the risks affect, and are affected by, one another, and recognize the degree to which improving each risk can improve the others. In this way, managers can identify the domino effect of risks and focus their attention on those risks whose improvement can change the entire model.
  • 4. Managers cannot change some of the risks, while other risks have features that should be taken into account in decision-making; multi-criteria decision-making techniques help managers prioritize these risks.
  • 5. Given the uncertainty in the risk management environment and the importance of using fuzzy logic to control ambiguity and complexity in this environment, a combination of techniques used with the fuzzy approach can help the company's manager to reduce the ambiguity and complexity inherent in decision making and get better and more realistic results by using verbal descriptions.
  • 6. Mixed approaches give managers and decision-makers a set of tools that both incorporates the collective opinions of experts and builds a structuring and ranking model through structuring and multi-criteria decision-making techniques, thereby improving their decisions.

Declarations

Author contribution statement

B. Barghi: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.

S. Shadrokh: Conceived and designed the experiments; Performed the experiments; Contributed reagents, materials, analysis tools or data.

Funding statement

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Competing interest statement

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.
