J Korean Med Sci. v.35(45); 2020 Nov 23

Reporting Survey Based Studies – a Primer for Authors

Prithvi Sanjeevkumar Gaur

1 Smt. Kashibai Navale Medical College and General Hospital, Pune, India.

Olena Zimba

2 Department of Internal Medicine No. 2, Danylo Halytsky Lviv National Medical University, Lviv, Ukraine.

Vikas Agarwal

3 Department of Clinical Immunology and Rheumatology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India.

Latika Gupta


The coronavirus disease 2019 (COVID-19) pandemic has led to a massive rise in survey-based research. The paucity of perspicuous guidelines for conducting surveys may pose a challenge to the conduct of ethical, valid and meticulous research. The aim of this paper is to guide authors aiming to publish in scholarly journals on the methods and means of carrying out surveys with valid outcomes. The paper outlines the various aspects of surveys, from planning, execution and dissemination to data analysis and the choice of target journals. While providing a comprehensive understanding of the scenarios most conducive to carrying out a survey, and of the role of ethical approval, survey validation and pilot testing, this brief delves deeper into survey designs, methods of dissemination, ways to secure and maintain data anonymity, analytical approaches, reporting techniques and the process of choosing an appropriate journal. Further, the authors analyze retracted survey-based studies and the reasons for their retraction. This review article intends to guide authors to improve the quality of survey-based research by describing the essential tools and means to do so, with the hope of improving the utility of such studies.

Graphical Abstract

[Graphical abstract: jkms-35-e398-abf001.jpg]


Surveys are the principal method used to address topics that require individual self-report of beliefs, knowledge, attitudes, opinions or satisfaction, which cannot be assessed using other approaches. 1 This research method allows information to be collected by asking a set of questions on a specific topic of a subset of people and generalizing the results to a larger population. Assessment of opinions in a valid and reliable way requires clear, structured and precise reporting of results. This is possible with a survey built on a meticulous design, followed by validation and pilot testing. 2 The aim of this opinion piece is to provide practical advice for conducting survey-based research. It details the ethical and methodological aspects to be considered while performing a survey, the online platforms available for distributing surveys, and the implications of survey-based research.

Survey-based research is a means to obtain quick data; such studies are relatively easy to conduct and analyse, and are cost-effective under most circumstances. 3 Surveys are also one of the most convenient methods of obtaining data about rare diseases. 4 With major technological advancements and improved global interconnectivity, especially during the coronavirus disease 2019 (COVID-19) pandemic, surveys have surpassed other means of research due to their distinctive advantage of a wider reach, including respondents from various parts of the world with diverse cultures and geographically disparate locations. Moreover, survey-based research allows flexibility to the investigator and respondent alike. 5 While the investigator(s) may tailor the survey dates and duration to their availability, the respondents are allowed the convenience of responding to the survey at ease, in the comfort of their homes, and at a time when they can answer the questions with greater focus and to the best of their abilities. 6 Respondent biases inherent to environmental stressors can be significantly reduced by this approach. 5 Surveys also allow responses across time zones, which may be a major impediment to other forms of research or data collection, and permit distant placement of the investigator from the respondents.

Various digital tools are now available for designing surveys ( Table 1 ). 7 Most of these are free, with separate premium paid options. The analysis of data can be made simpler, and the cleaning process almost obsolete, by minimising open-ended answer choices. 8 Close-ended answers make data collection and analysis efficient by generating a spreadsheet that can be directly accessed and analysed. 9 Minimizing the number of questions and making all questions mandatory can further aid this process by bringing uniformity to the responses and making analysis simpler. Surveys are arguably also the most engaging form of research, conditional on the skill of the investigator.

Q/t = questions per typeform, A/m = answers per month, Q/s = questions per survey, A/s = answers per survey, NA = not applicable, NPS = net promoter score.
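As an illustration of how close-ended answers lend themselves to direct tabulation, the sketch below tallies one hypothetical close-ended question from a CSV export; the column name and sample rows are invented for the example.

```python
# Tally a close-ended question from a survey export.
# The CSV content and the "specialty" column are hypothetical.
import csv
import io
from collections import Counter

raw = """respondent,specialty
1,Rheumatology
2,Internal Medicine
3,Rheumatology
4,Pediatrics
"""

rows = list(csv.DictReader(io.StringIO(raw)))
total = len(rows)
counts = Counter(row["specialty"] for row in rows)
for answer, n in counts.most_common():
    print(f"{answer}: {n} ({100 * n / total:.1f}%)")
```

In a real study, the inline string would be replaced by the file downloaded from the survey tool.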

Data protection laws now mandate anonymity while collecting data for most surveys, particularly when they are exempt from ethical review. 10 , 11 Anonymization has the potential to reduce (or at times even eliminate) social desirability bias which gains particular relevance when targeting responses from socially isolated or vulnerable communities (e.g. LGBTQ and low socio-economic strata communities) or minority groups (religious, ethnic and medical) or controversial topics (drug abuse, using language editing software).

Moreover, surveys could be the primary methodology to explore a hypothesis until it evolves into a more sophisticated and partly validated idea after which it can be probed further in a systematic and structured manner using other research methods.

The aim of this paper is to reduce the incorrect reporting of surveys. The paper also intends to inform researchers of the various aspects of survey-based studies and the multiple points that need to be taken under consideration while conducting survey-based research.


The COVID-19 pandemic has led to a distinctive rise in survey-based research. 12 The need to socially distance amid widespread lockdowns reduced patient visits to the hospital and brought most other forms of research to a standstill in the early pandemic period. A large number of level-3 bio-safety laboratories have been engaged for research pertaining to COVID-19, thereby limiting the options for conducting laboratory-based research. 13 , 14 Therefore, surveys appear to be the most viable option for researchers to explore hypotheses related to the situation and its impact in such times. 15


Designing a good survey is an arduous task and requires skill, even though clear guidelines are available. Survey design requires extensive thoughtfulness about the core questions (based on the hypothesis or the primary research question), with consideration of all possible answers and the inclusion of open-ended options to allow other possibilities to be recorded. A survey should be robust in regard to the questions asked and the answer choices available, and it must be validated and pilot tested. 16 The survey design may be supplemented with answer choices tailored for the convenience of the responder, to reduce effort while making it more engaging. Survey dissemination and engagement of respondents also require experience and skill. 17

Furthermore, the absence of an interviewer prevents clarification of responses to open-ended questions, if any. Internet surveys are also prone to survey fraud through erroneous reporting; hence, the anonymity of surveys is both a boon and a bane. Sample sizes may be skewed, as populations absent from the Internet, such as the elderly or the underprivileged, lack representation. The illiterate population also lacks representation in survey-based research.

The “Enhancing the QUAlity and Transparency Of health Research” network (EQUATOR) provides two separate guidelines replete with checklists to ensure valid reporting of e-survey methodology. These include “The Checklist for Reporting Results of Internet E-Surveys” (CHERRIES) statement and “ The Journal of Medical Internet Research ” (JMIR) checklist.


From a clinician's standpoint, the common survey types include those centered around problems faced by the patients or physicians. 18 Surveys collecting the opinions of various clinicians on a debated clinical topic or feedback forms typically served after attending medical conferences or prescribing a new drug or trying a new method for a given procedure are also surveys. The formulation of clinical practice guidelines entails Delphi exercises using paper surveys, which are yet another form of survey-mediated research.

The size of a survey depends on its intent; surveys may be large or small. Therefore, identification of the intent behind the survey is essential to allow the investigator to form a hypothesis and then explore it further. Large population-based or provider-based surveys are often conducted and generate mammoth data over the years, e.g. the National Health and Nutrition Examination Survey, the National Health Interview Survey and the National Ambulatory Medical Care Survey.


Despite all that has been said about the convenience of conducting survey-based research, it is prudent to conduct a feasibility check before embarking on one. Certain scenarios may be key in determining the fate of survey-based research ( Table 2 ).


Approval from the Institutional Review Board should be taken as required according to the CHERRIES checklist. However, rules for approval differ by country, and local rules must therefore be checked and followed. For instance, in India, the Indian Council of Medical Research released an article in 2017 stating that the concept of broad consent has been updated, defined as “consent for an unspecified range of future research subject to a few contents and/or process restrictions.” It also describes “the flexibility of Indian ethics committees to review a multicentric study proposal for research involving low or minimal risk, survey or studies using anonymized samples or data or low or minimal risk public health research.” The reporting of approvals received and applied for, and the procedure of written, informed consent followed, must be clear and transparent. 10 , 19

The use of incentives in surveys is also an ethical concern. 20 Incentives may be monetary or non-monetary. Monetary incentives are usually discouraged, as they may attract the wrong population due to the temptation of the monetary benefit. However, monetary incentives have been seen to give surveys greater traction, even though this is yet to be proven. Monetary incentives are provided not only as cash or cheque but also in the form of free articles, discount coupons, phone cards, e-money or cashback value. 21 These methods, though tempting, must be used seldom; if used, their use must be disclosed and justified in the report. Non-monetary incentives, such as a meeting with a famous personality or access to restricted and authorized areas, can also help pique the interest of the respondents.


As mentioned earlier, the design of a survey is reflective of the skill of the investigator curating it. 22 Survey builders can be used to design an efficient survey; these offer the majority of the basic features needed to construct a survey free of charge. Surveys can therefore be designed from scratch, using pre-designed templates, or by using previous survey designs as inspiration. Taking surveys can be made convenient by using the various aids available ( Table 1 ). Moreover, the investigator should be mindful of the unintended response effects of the ordering and context of survey questions. 23

Surveys using clear, unambiguous, simple and well-articulated language record precise answers. 24 A well-designed survey accounts for the culture, language and convenience of the target demographic. The age, region, country and occupation of the target population are also considered before constructing a survey. Consistency is maintained in the terms used in the survey, and abbreviations are avoided, to allow the respondents a clear understanding of the questions being answered. Universal or previously indexed abbreviations maintain the unambiguity of the survey.

Surveys beginning with broad, easy and non-specific questions, as opposed to sensitive and tedious ones, receive more accurate and complete answers. 25 Questionnaires designed so that relatively tedious and long questions requiring some nit-picking by the respondent are placed at the end improve the response rate. This prevents respondents from being discouraged at the outset and motivates them to finish the survey. All questions should be made mandatory, with a non-response option provided, to increase the completeness of the survey. Questions can be framed in close-ended or open-ended fashion. Close-ended questions are easier to analyze and less tedious for the respondent to answer, and should therefore be the main component of a survey. Open-ended questions have minimal use, as they are tedious, take time to answer and require fine articulation of one's thoughts; their interpretation also demands dedicated time and energy, owing to the diverse nature of the responses, which is difficult to promise with large sample sizes. 26 However, whenever the closed choices do not cover all probabilities, an open answer choice must be added. 27 , 28

Screening questions, which respondents must answer to gain access to the survey, can be used where inclusion criteria need to be established to maintain the authenticity of the target demographic. Similarly, a logic function can be used to apply exclusions. This allows a clean and clear record of responses and makes the job of the investigator easier. Respondents may or may not be given the option to return to a previous page or question to alter their answers, as per the investigator's preference.

The range of responses received can be reduced, in the case of questions directed at feelings or opinions, by using slider scales or a Likert scale. 29 , 30 For questions having multiple answers, check boxes are efficient. When a large number of answers is possible, dropdown menus reduce the arduousness. 31 Matrix scales can be used for questions requiring grading or having a similar range of answers for multiple conditions. Maximum respondent participation and complete survey responses can be ensured by reducing the survey time. Quiz or weighted modes allow the respondent to shuffle between questions, allow scoring of quizzes and can be used to complement other weighted scoring systems. 32 A flowchart depicting a survey construct is presented as Fig. 1 .

[Fig. 1: jkms-35-e398-g001.jpg]
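To illustrate how Likert-type items translate into analyzable data, the sketch below maps a 5-point agreement scale to numeric scores and summarizes them; the item responses are invented for the example.

```python
# Summarize a 5-point Likert item: map labels to scores, then report the
# median and the full response distribution. Responses are illustrative.
from collections import Counter
from statistics import median

SCALE = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly agree": 5}

responses = ["Agree", "Agree", "Neutral", "Strongly agree", "Disagree"]
scores = [SCALE[r] for r in responses]

print("median score:", median(scores))
print("distribution:", dict(Counter(responses)))
```

The median is reported rather than the mean because Likert scores are ordinal rather than interval data.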

Survey validation

Validation testing, though tedious and meticulous, is a worthy effort, as the accuracy of a survey is determined by its validity. Validity is indicative of the appropriateness of the survey sample and the specificity of the questions, such that the data acquired are streamlined to answer the questions being posed or to test a hypothesis. 33 , 34 Face validation examines the manner in which questions are constructed, such that the necessary data are collected. Content validation examines the relation of the topic being addressed, and its related areas, to the questions being asked. Internal validation makes sure that the questions posed are directed towards the outcome of the survey. Finally, test–retest validation determines the stability of questions over time by administering the questionnaire twice, with a time interval between the two tests. For surveys assessing respondents' knowledge of a certain subject, it is advisable to have a panel of experts undertake the validation process. 2 , 35

Reliability testing

If the questions in a survey are posed in a manner that elicits the same or a similar response from the respondents irrespective of the language or construction of the question, the survey is said to be reliable. Reliability is thereby a marker of the consistency of the survey. This is of considerable importance in knowledge-based research, where recall ability is tested by making the survey available to the same participants at regular intervals. Varying the construction of the questions can also be used to maintain the authenticity of the survey.
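Internal consistency of this kind is commonly quantified with Cronbach's alpha. The sketch below computes it from a small illustrative score matrix (rows are respondents, columns are items on the same scale); the data are invented, and a real study would typically use a statistics package.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Values above ~0.7 are conventionally taken as acceptable internal
# consistency. The score matrix is illustrative.
from statistics import pvariance

scores = [
    [4, 5, 4],   # respondent 1's answers to the three items
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
]

k = len(scores[0])                                  # number of items
item_vars = [pvariance(col) for col in zip(*scores)]
total_var = pvariance([sum(row) for row in scores])
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```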

Designing a cover letter

A cover letter is the primary means of communication with the respondent, with the intent of introducing the respondent to the survey. A cover letter should include the purpose of the survey and the details of those conducting it, including contact details in case clarifications are desired. It should also clearly state the action required of the respondent. Data anonymization may be crucial to many respondents and is their right; this should be respected with a clear description of the data handling process when disseminating the survey. A good cover letter is the key to building trust with the respondent population and can be the forerunner of better response rates. Imparting a sense of purpose is vital to ideationally incentivize the respondent population. 36 , 37 Adding the credentials of the team conducting the survey may further aid the process. Advance intimation of the survey prepares the respondents and improves their compliance.

The design of a cover letter needs much attention. It should be captivating, clear and precise, and should use vocabulary and language specific to the target population of the survey. The active voice should be used for greater impact. Crowding of details must be avoided; italics, bold fonts or underlining may be used to highlight critical information. The tone ought to be polite, respectful, and grateful in advance. The use of capital letters is best avoided, as it is a surrogate for shouting in verbal speech and may impart a bad taste.

The dates of the survey may be intimated, so the respondents can prepare to take it at a time conducive to them. When emailing a closed group in a convenience-sampled survey, using the name of the addressee may impart a customized experience, enhance trust building and possibly improve compliance. Appropriate use of salutations such as Mr./Ms./Mrs. may be considered. Various portals such as SurveyMonkey allow researchers to save an address list on the website; these addresses may then be reached using an embedded survey link from a verified email address, to minimize the bouncing back of emails.

The body of the cover letter must be short and crisp, and should not exceed 2–3 paragraphs under ideal circumstances. Earnest efforts to protect confidentiality may go a long way in enhancing response rates. 38 While it is enticing to provide incentives to enhance response, these are best avoided. 38 , 39 When indirect incentives are offered, such as provision of the results of the survey, these should be clearly stated in the cover letter. Lastly, a formal closing note with the signature of the lead investigator is welcome. 38 , 40

Designing questions

Well-constructed questionnaires are essentially the backbone of successful survey-based studies. With this type of research, the primary concern is the adequate promotion and dissemination of the questionnaire to the target population. The selection of the sample population therefore needs to be carried out with minimal flaws. The method of conducting the survey is an essential determinant of the response rate observed. 41 Broadly, surveys are of two types: closed and open. The method of conducting the survey must be determined by the sample population.

Many doctors use their own patients as the target demographic, as this improves compliance. However, this is effective only for surveys aimed at a geographically specific, fairly common disease, as the sample size needs to be adequate. Response bias can be identified from data collected on respondent and non-respondent groups. 42 , 43 It is therefore more efficacious to choose a target population whose baseline characteristics are already known from a database. For surveys focused on patients with a rare group of diseases, online surveys or e-surveys can be conducted; data can also be gathered from multiple national organizations and societies all over the world. 44 , 45 Computer-generated random selection can be performed on these data to choose participants, who can then be reached by email or on social media platforms such as WhatsApp and LinkedIn. In both these scenarios, closed questionnaires can be conducted; these have restricted access, either through a URL link or through e-mail.
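Computer-generated random selection of this kind can be as simple as drawing a sample from a membership roster. In the sketch below, the roster addresses and the sample size are hypothetical.

```python
# Draw a simple random sample of invitees from a society roster.
# The roster addresses and the sample size of 60 are hypothetical.
import random

roster = [f"member{i:03d}@example.org" for i in range(1, 501)]

rng = random.Random(42)   # a fixed seed makes the draw reproducible/auditable
invitees = rng.sample(roster, k=60)

print(f"{len(invitees)} of {len(roster)} members invited")
```

Using a seeded generator lets the sampling step be documented and repeated exactly in the methods section.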

For surveys targeting an issue faced by a larger demographic (e.g. pandemics such as COVID-19, flu vaccines, or socio-political scenarios), open surveys are the more viable option, as they can be easily accessed by the majority of the public and ensure a large number of responses, thereby increasing the accuracy of the study. Survey length should be optimal to avoid poor response rates. 25 , 46


Uniform distribution of the survey ensures an equitable opportunity for the entire target population to access the questionnaire and participate in it. While deciding on the target demographic, communities should be studied, and the process of “lurking” is sometimes practiced. Multiple sampling methods are available ( Fig. 1 ). 47

Distribution of the survey to the target demographic can be done by email. Even though e-mails reach a large proportion of the target population, an unknown sender may be blocked, making the use of a personal or previously used email address preferable for correspondence. Adding a cover letter along with the invite lends a personal touch and is hence advisable. Some platforms allow the sender to link the survey portal to the sender's email after verifying it. Notably, despite repeated email reminders, personal communication over the phone or instant messaging improved responses in the authors' experience. 48 , 49

Distribution of the survey over other social media platforms (SMPs, namely WhatsApp, Facebook, Instagram, Twitter, LinkedIn, etc.) is also practiced. 50 , 51 , 52 Distributing surveys on every available platform ensures maximal outreach. 53 Other smartphone apps can also be used for wider survey dissemination. 50 , 54 It is important to be mindful of the target population while choosing the platform for dissemination, as some SMPs such as WhatsApp are more popular in India, while others such as WeChat are used more widely in China, and Facebook among the European population. Professional accounts or popular social accounts can be used to promote a survey and increase its outreach. 55 Incentives such as internet giveaways or meet-and-greets with a favorite social media influencer have been used to motivate people to participate.

However, social media platforms do not allow calculation of the denominator of the target population, making it impossible to compute an accurate response rate. Moreover, this method of collecting data may result in a respondent bias inherent to a community with a greater online presence. 43 The inability to gather the demographics of the non-respondents (in a bid to identify and show that they were no different from respondents) can be another challenge in convenience sampling, unlike in cohort-based studies.

Lastly, manual filling of surveys over the telephone, by narrating the questions and answer choices to the respondents, is used as a last resort to achieve a high desired response rate. 56 Studies reveal that surveys released on Mondays, Fridays, and Sundays receive more traction. Reminders set at regular intervals also help receive more responses. Data collection can be improved in collaborative research by syncing surveys to fill out electronic case record forms. 57 , 58 , 59

Data anonymity refers to the protection of data received as part of the survey. These data must be stored and handled in accordance with patient privacy rights and the privacy protection laws applicable to surveys. Ethically, the data should be received in a single source file handled by one individual. Sharing or publishing these data on any public platform is considered a breach of the patient's privacy. 11 In convenience-sampled surveys conducted by e-mailing a predesignated group, the email addresses must remain confidential, as inadvertent sharing of these as supplementary data in the manuscript may amount to a violation of ethical standards. 60 A completely anonymized e-survey forgoes the collection of Internet protocol addresses in addition to other respondent details such as names and emails.

Data anonymity gives the respondents the confidence to be candid and answer the survey without inhibitions. This is especially apparent in minority groups or communities facing societal bias (sex workers, transgender people, lower-caste communities, women). Data anonymity gives the respondents/participants respite regarding their privacy. As the respondents play a primary role in data collection, data anonymity plays a vital role in survey-based research.
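One way to honour this in practice is to drop direct identifiers and keep only a salted, one-way hash of the email so duplicate submissions can still be detected. The sketch below uses hypothetical field names and is only an illustration; real studies must follow the applicable data-protection law.

```python
# Anonymize a response record before storage: drop the email and IP address,
# keep a salted SHA-256 digest as a pseudonymous respondent ID.
# Field names are hypothetical.
import hashlib
import secrets

SALT = secrets.token_hex(16)   # keep secret and never publish with the data

def anonymize(record: dict) -> dict:
    digest = hashlib.sha256((SALT + record["email"]).encode()).hexdigest()
    return {
        "respondent_id": digest[:12],  # non-reversible pseudonym
        "answers": record["answers"],  # survey content is retained unchanged
    }

raw = {"email": "jane@example.org", "ip": "203.0.113.7",
       "answers": {"q1": "Yes", "q2": "Agree"}}
clean = anonymize(raw)
print(sorted(clean))   # the stored record carries no direct identifiers
```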


The data collected from the survey responses are compiled in .xls, .csv or .xlsx format by the survey tool itself. The data can be viewed during the survey duration or after its completion. To ensure data anonymity, a minimal number of people should have access to these results. The data should then be sifted through to invalidate false, incorrect or incomplete entries. The relevant and complete data should then be analyzed qualitatively and quantitatively, as per the aim of the study. Statistical aids such as pie charts, graphs and data tables can be used to report the relevant data.


Analysis of the recorded responses is done after the time made available to answer the survey has elapsed. This ensures that statistical conclusions and hypotheses are established after careful study of the entire database. Complete and incomplete answers can be analyzed, conditional on the study. Survey-based studies require careful consideration of various aspects of the survey, such as the time required to complete it. 61 Cut-off points in the time frame allow authentic answers to be recorded and analyzed, as opposed to disingenuously completed questionnaires. Methods of handling incomplete questionnaires and atypical timestamps must be pre-decided to maintain consistency. Since surveys are among the only ways to reach people, especially during the COVID-19 pandemic, disingenuous survey practices must not be followed, as the results will later be used to form preliminary hypotheses.
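As one way of pre-deciding such rules, the sketch below excludes responses completed faster than a minimum plausible time, or with missing answers, before analysis; the cut-off value and the records are invented for the example.

```python
# Filter out implausibly fast or incomplete responses before analysis.
# The 120-second cut-off and the records below are illustrative.
MIN_SECONDS = 120

responses = [
    {"id": 1, "seconds": 340, "answers": {"q1": "Yes", "q2": "No"}},
    {"id": 2, "seconds": 25,  "answers": {"q1": "Yes", "q2": "Yes"}},  # too fast
    {"id": 3, "seconds": 410, "answers": {"q1": "No",  "q2": None}},   # incomplete
]

def is_analyzable(r: dict) -> bool:
    complete = all(v is not None for v in r["answers"].values())
    return r["seconds"] >= MIN_SECONDS and complete

kept = [r["id"] for r in responses if is_analyzable(r)]
print("responses retained for analysis:", kept)
```

Writing the rule down as code before the survey closes makes the exclusion criteria auditable and consistently applied.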


Reporting survey-based research is by far the most challenging part of this method. A well-reported survey-based study is a comprehensive report covering all aspects of conducting the research.

A descriptive report of a survey-based study comprises the design of the survey, mentioning the target demographic, sample size, language, type and methodology of the survey, and the inclusion-exclusion criteria followed. Details regarding the conduct of pilot testing, validation testing, reliability testing and user-interface testing add value to the report and support the data and analysis. Measures taken to prevent bias and to ensure consistency and precision are key inclusions in a report. The report usually mentions approvals received, if any, along with the written, informed consent taken from the participants to use the data received for research purposes. It also gives a detailed account of the different distribution and promotional methods followed.

A detailed account of the data input and collection methods, the tools used to maintain the anonymity of the participants, and the steps taken to ensure singular participation by each respondent indicates a well-structured report. Descriptive information on the website used, the visitors received and the factors externally influencing the survey is included. Detailed reporting of the post-survey analysis, including the number of analysts involved, any data cleaning required, the statistical analysis done and the probable hypothesis concluded, is a key feature of well-reported survey-based research. Methods used for statistical corrections, if any, should be included in the report. The EQUATOR network provides two checklists, the “Checklist for Reporting Results of Internet E-Surveys” (CHERRIES) statement and “ The Journal of Medical Internet Research ” (JMIR) checklist, that can be utilized to construct a well-framed report. 62 , 63 Importantly, self-reporting of biases and errors avoids carrying forward false hypotheses as the basis of more advanced research. References should be cited using standard recommendations and guided by the journal specifications. 64


Surveys can be published as original articles, brief reports or letters to the editor. Interestingly, most modern journals do not actively mention surveys in their instructions to authors. Thus, depending on the study design, the authors may choose an appropriate article category: cohort, case-control, interview or survey-based study. It is prudent to mention the type of study in the title. Titles, albeit not too long, should not exceed 10–12 words, and may feature the type of study design after a semicolon, for clarity and greater citation potential.

While the choice of journal is largely based on the study subject and left to the authors' discretion, it may be worthwhile exploring trends in a journal's archive before proceeding with submission. 65 Although the article format is similar across most journals, the specific rules of the target journal should be followed when drafting the article structure before submission.


Retracted articles are those removed from publication after release. Articles are usually retracted when discrepancies come to light regarding the methodology followed, plagiarism, incorrect statistical analysis, inappropriate authorship, fake peer review, fake reporting and the like. 66 A significant increase in such papers has been noticed. 67

We carried out a search for “surveys” on Retraction Watch on 31st August 2020 and received 81 search results published between November 2006 and June 2020, of which 3 were duplicates. Of the remaining 78 results, 37 (47.4%) articles were surveys, 23 (29.5%) were of unknown type and 18 (23.1%) reported other types of research ( Supplementary Table 1 ). Fig. 2 gives a detailed description of the causes of retraction of the surveys we found and their geographic distribution.

[Fig. 2: jkms-35-e398-g002.jpg]

A good survey ought to be designed with a clear objective, the design being precise and focused, with close-ended questions and all probabilities included. The use of rating scales, multiple-choice questions and checkboxes, and the maintenance of a logical question sequence, engage the respondent while simplifying data entry and analysis for the investigator. Pilot testing is vital to identify and rectify deficiencies in the survey design and answer choices. The target demographic should be well defined, and invitations sent accordingly, with periodic reminders as appropriate. While reporting the survey, transparency should be maintained in the methods employed, and shortcomings and biases clearly stated, to prevent the advocacy of an invalid hypothesis.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Gaur PS, Zimba O, Agarwal V, Gupta L.
  • Visualization: Gaur PS, Zimba O, Agarwal V, Gupta L.
  • Writing - original draft: Gaur PS, Gupta L.



Good practice in the conduct and reporting of survey research


KATE KELLEY, BELINDA CLARK, VIVIENNE BROWN, JOHN SITZIA, Good practice in the conduct and reporting of survey research, International Journal for Quality in Health Care , Volume 15, Issue 3, May 2003, Pages 261–266, https://doi.org/10.1093/intqhc/mzg031


Survey research is sometimes regarded as an easy research approach. However, as with any other research approach and method, it is easy to conduct a survey of poor quality rather than one of high quality and real value. This paper provides a checklist of good practice in the conduct and reporting of survey research. Its purpose is to assist the novice researcher to produce survey work to a high standard, meaning a standard at which the results will be regarded as credible. The paper first provides an overview of the approach and then guides the reader step-by-step through the processes of data collection, data analysis, and reporting. It is not intended to provide a manual of how to conduct a survey, but rather to identify common pitfalls and oversights to be avoided by researchers if their work is to be valid and credible.

Survey research is common in studies of health and health services, although its roots lie in the social surveys conducted in Victorian Britain by social reformers to collect information on poverty and working class life (e.g. Charles Booth [ 1 ] and Joseph Rowntree [ 2 ]), and indeed survey research remains most used in applied social research. The term ‘survey’ is used in a variety of ways, but generally refers to the selection of a relatively large sample of people from a pre-determined population (the ‘population of interest’; this is the wider group of people in whom the researcher is interested in a particular study), followed by the collection of a relatively small amount of data from those individuals. The researcher therefore uses information from a sample of individuals to make some inference about the wider population.

Data are collected in a standardized form. This is usually, but not necessarily, done by means of a questionnaire or interview. Surveys are designed to provide a ‘snapshot of how things are at a specific time’ [ 3 ]. There is no attempt to control conditions or manipulate variables; surveys do not allocate participants into groups or vary the treatment they receive. Surveys are well suited to descriptive studies, but can also be used to explore aspects of a situation, or to seek explanation and provide data for testing hypotheses. It is important to recognize that ‘the survey approach is a research strategy, not a research method’ [ 3 ]. As with any research approach, a choice of methods is available and the one most appropriate to the individual project should be used. This paper will discuss the most popular methods employed in survey research, with an emphasis upon difficulties commonly encountered when using these methods.

Descriptive research

Descriptive research is a most basic type of enquiry that aims to observe (gather information on) certain phenomena, typically at a single point in time: the ‘cross-sectional’ survey. The aim is to examine a situation by describing important factors associated with that situation, such as demographic, socio-economic, and health characteristics, events, behaviours, attitudes, experiences, and knowledge. Descriptive studies are used to estimate specific parameters in a population (e.g. the prevalence of infant breast feeding) and to describe associations (e.g. the association between infant breast feeding and maternal age).

Analytical studies

Analytical studies go beyond simple description; their intention is to illuminate a specific problem through focused data analysis, typically by looking at the effect of one set of variables upon another set. These are longitudinal studies, in which data are collected at more than one point in time with the aim of illuminating the direction of observed associations. Data may be collected from the same sample on each occasion (cohort or panel studies) or from a different sample at each point in time (trend studies).

Evaluation research

This form of research collects data to ascertain the effects of a planned change.


Advantages

The research produces data based on real-world observations (empirical data).

The breadth of coverage of many people or events means that it is more likely than some other approaches to obtain data based on a representative sample; the results can therefore be generalized to a population.

Surveys can produce a large amount of data in a short time for a fairly low cost. Researchers can therefore set a finite time-span for a project, which can assist in planning and delivering end results.


Disadvantages

The significance of the data can become neglected if the researcher focuses too much on the range of coverage to the exclusion of an adequate account of the implications of those data for relevant issues, problems, or theories.

The data that are produced are likely to lack details or depth on the topic being investigated.

Securing a high response rate to a survey can be hard to control, particularly when it is carried out by post, but is also difficult when the survey is carried out face-to-face or over the telephone.

Research question

Good research has the characteristic that its purpose is to address a single clear and explicit research question; conversely, the end product of a study that aims to answer a number of diverse questions is often weak. Weakest of all, however, are those studies that have no research question at all and whose design simply is to collect a wide range of data and then to ‘trawl’ the data looking for ‘interesting’ or ‘significant’ associations. This is a trap novice researchers in particular fall into. Therefore, in developing a research question, the following aspects should be considered [ 4 ]:

Be knowledgeable about the area you wish to research.

Widen the base of your experience, explore related areas, and talk to other researchers and practitioners in the field you are surveying.

Consider using techniques for enhancing creativity, for example brainstorming ideas.

Avoid the pitfalls of: allowing a decision regarding methods to decide the questions to be asked; posing research questions that cannot be answered; asking questions that have already been answered satisfactorily.

Research methods

The survey approach can employ a range of methods to answer the research question. Common survey methods include postal questionnaires, face-to-face interviews, and telephone interviews.

Postal questionnaires

This method involves sending questionnaires to a large sample of people covering a wide geographical area. Postal questionnaires are usually received ‘cold’, without any previous contact between researcher and respondent. The response rate for this type of method is usually low, ∼20%, depending on the content and length of the questionnaire. As response rates are low, a large sample is required when using postal questionnaires, for two main reasons: first, to ensure that the demographic profile of survey respondents reflects that of the survey population; and secondly, to provide a sufficiently large data set for analysis.

Face-to-face interviews

Face-to-face interviews involve the researcher approaching respondents personally, either in the street or by calling at people’s homes. The researcher then asks the respondent a series of questions and notes their responses. The response rate is often higher than that of postal questionnaires as the researcher has the opportunity to sell the research to a potential respondent. Face-to-face interviewing is a more costly and time-consuming method than the postal survey, however the researcher can select the sample of respondents in order to balance the demographic profile of the sample.

Telephone interviews

Telephone surveys, like face-to-face interviews, allow a two-way interaction between researcher and respondent. Telephone surveys are quicker and cheaper than face-to-face interviewing. Whilst resulting in a higher response rate than postal surveys, telephone surveys often attract a higher level of refusals than face-to-face interviews as people feel less inhibited about refusing to take part when approached over the telephone.

Designing the research tool

Whether using a postal questionnaire or interview method, the questions asked have to be carefully planned and piloted. The design, wording, form, and order of questions can affect the type of responses obtained, and careful design is needed to minimize bias in results. When designing a questionnaire or question route for interviewing, the following issues should be considered: (1) planning the content of a research tool; (2) questionnaire layout; (3) interview questions; (4) piloting; and (5) covering letter.

Planning the content of a research tool

The topics of interest should be carefully planned and relate clearly to the research question. It is often useful to involve experts in the field, colleagues, and members of the target population in question design in order to ensure the validity of the coverage of questions included in the tool (content validity).

Researchers should conduct a literature search to identify existing, psychometrically tested questionnaires. A well designed research tool is simple, appropriate for the intended use, acceptable to respondents, and should include a clear and interpretable scoring system. A research tool must also demonstrate the psychometric properties of reliability (consistency from one measurement to the next), validity (accurate measurement of the concept), and, if a longitudinal study, responsiveness to change [ 5 ]. The development of research tools, such as attitude scales, is a lengthy and costly process. It is important that researchers recognize that the development of the research tool is equal in importance—and deserves equal attention—to data collection. If a research instrument has not undergone a robust process of development and testing, the credibility of the research findings themselves may legitimately be called into question and may even be completely disregarded. Surveys of patient satisfaction and similar are commonly weak in this respect; one review found that only 6% of patient satisfaction studies used an instrument that had undergone even rudimentary testing [ 6 ]. Researchers who are unable or unwilling to undertake this process are strongly advised to consider adopting an existing, robust research tool.

Questionnaire layout

Questionnaires used in survey research should be clear and well presented. The use of capital (upper case) letters only should be avoided, as this format is hard to read. Questions should be numbered and clearly grouped by subject. Clear instructions should be given and headings included to make the questionnaire easier to follow.

The researcher must think about the form of the questions, avoiding ‘double-barrelled’ questions (two or more questions in one, e.g. ‘How satisfied were you with your personal nurse and the nurses in general?’), questions containing double negatives, and leading or ambiguous questions. Questions may be open (where the respondent composes the reply) or closed (where pre-coded response options are available, e.g. multiple-choice questions). Closed questions with pre-coded response options are most suitable for topics where the possible responses are known. Closed questions are quick to administer and can be easily coded and analysed. Open questions should be used where possible replies are unknown or too numerous to pre-code. Open questions are more demanding for respondents but if well answered can provide useful insight into a topic. Open questions, however, can be time consuming to administer and difficult to analyse. Whether using open or closed questions, researchers should plan clearly how answers will be analysed.

Interview questions

Open questions are used more frequently in unstructured interviews, whereas closed questions typically appear in structured interview schedules. A structured interview is like a questionnaire that is administered face to face with the respondent. When designing the questions for a structured interview, the researcher should consider the points highlighted above regarding questionnaires. The interviewer should have a standardized list of questions, each respondent being asked the same questions in the same order. If closed questions are used the interviewer should also have a range of pre-coded responses available.

If carrying out a semi-structured interview, the researcher should have a clear, well thought out set of questions; however, the questions may take an open form and the researcher may vary the order in which topics are considered.

Piloting

A research tool should be tested on a pilot sample of members of the target population. This process will allow the researcher to identify whether respondents understand the questions and instructions, and whether the meaning of questions is the same for all respondents. Where closed questions are used, piloting will highlight whether sufficient response categories are available, and whether any questions are systematically missed by respondents.

When conducting a pilot, the same procedure as that to be used in the main survey should be followed; this will highlight potential problems such as poor response.

Covering letter

All participants should be given a covering letter including information such as the organization behind the study, including the contact name and address of the researcher, details of how and why the respondent was selected, the aims of the study, any potential benefits or harm resulting from the study, and what will happen to the information provided. The covering letter should both encourage the respondent to participate in the study and also meet the requirements of informed consent (see below).

Sample and sampling

The concept of sample is intrinsic to survey research. Usually, it is impractical and uneconomical to collect data from every single person in a given population; a sample of the population has to be selected [ 7 ]. This is illustrated in the following hypothetical example. A hospital wants to conduct a satisfaction survey of the 1000 patients discharged in the previous month; however, as it is too costly to survey each patient, a sample has to be selected. In this example, the researcher will have a list of the population members to be surveyed (sampling frame). It is important to ensure that this list is both up-to-date and has been obtained from a reliable source.

The method by which the sample is selected from a sampling frame is integral to the external validity of a survey: the sample has to be representative of the larger population to obtain a composite profile of that population [ 8 ].

There are methodological factors to consider when deciding who will be in a sample: How will the sample be selected? What is the optimal sample size to minimize sampling error? How can response rates be maximized?

The survey methods discussed below influence how a sample is selected and the size of the sample. There are two categories of sampling: random and non-random sampling, with a number of sampling selection techniques contained within the two categories. The principal techniques are described here [ 9 ].

Random sampling

Generally, random sampling is employed when quantitative methods are used to collect data (e.g. questionnaires). Random sampling allows the results to be generalized to the larger population and statistical analysis performed if appropriate. The most stringent technique is simple random sampling. Using this technique, each individual within the chosen population is selected by chance and is equally as likely to be picked as anyone else. Referring back to the hypothetical example, each patient is given a serial identifier and then an appropriate number of the 1000 population members are randomly selected. This is best done using a random number table, which can be generated using computer software (a free on-line randomizer can be found at http://www.randomizer.org/index.htm ).
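The hypothetical example above can be sketched in a few lines of Python; the sample size of 100 is an illustrative assumption, not a figure from the paper:

```python
import random

# 1000 discharged patients, each given a serial identifier (1-1000)
population = list(range(1, 1001))

random.seed(42)  # fixed seed so the draw is reproducible
# Simple random sample: each patient is equally likely to be picked,
# and no patient can be selected twice (sampling without replacement)
sample = random.sample(population, k=100)

print(len(sample))  # 100
```

Here `random.sample` plays the role of the random number table described above: it draws without replacement, so every identifier in the sample is unique.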

Alternative random sampling techniques are briefly described here. In systematic sampling, individuals to be included in the sample are chosen at equal intervals from the population; using the earlier example, every fifth patient discharged from hospital would be included in the survey. In stratified sampling, a specific group is first selected and a random sample is then drawn from it; using our example, the hospital may decide only to survey older surgical patients. Bigger surveys may employ cluster sampling, which randomly selects groups from a large population and then surveys everyone within those groups, a technique often used in national-scale studies.
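The alternative techniques can likewise be sketched in Python; the strata labels and the 10% sampling fraction below are hypothetical choices for illustration, not taken from the text:

```python
import random

population = list(range(1, 1001))  # serial identifiers of 1000 patients

# Systematic sampling: every fifth patient, starting from a random offset
random.seed(1)
start = random.randrange(5)
systematic = population[start::5]  # yields 200 of the 1000 patients

# Stratified sampling: divide the population into (hypothetical) strata,
# then draw a random sample from each in proportion to its size
strata = {"surgical": population[:400], "medical": population[400:]}
stratified = []
for members in strata.values():
    k = round(len(members) * 0.1)  # 10% sampling fraction per stratum
    stratified.extend(random.sample(members, k))

print(len(systematic), len(stratified))  # 200 100
```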

Non-random sampling

Non-random sampling is commonly applied when qualitative methods (e.g. focus groups and interviews) are used to collect data, and is typically used for exploratory work. Non-random sampling deliberately targets individuals within a population. There are three main techniques. (1) Purposive sampling: a specific population is identified and only its members are included in the survey; using our example above, the hospital may decide to survey only patients who had an appendectomy. (2) Convenience sampling: the sample is made up of the individuals who are the easiest to recruit. (3) Snowball sampling: the sample is identified as the survey progresses; as each individual is surveyed, he or she is invited to recommend others to be surveyed.

It is important to use the right method of sampling and to be aware of the limitations and statistical implications of each. The need to ensure that the sample is representative of the larger population was highlighted earlier and, alongside the sampling method, the degree of sampling error should be considered. Sampling error is the probability that any one sample is not completely representative of the population from which it has been drawn [ 9 ]. Although sampling error cannot be eliminated entirely, the sampling technique chosen will influence the extent of the error. Simple random sampling will give a closer estimate of the population than a convenience sample of individuals who just happened to be in the right place at the right time.

Sample size

What sample size is required for a survey? There is no definitive answer to this question: large samples with rigorous selection are more powerful as they will yield more accurate results, but data collection and analysis will be proportionately more time consuming and expensive. Essentially, the target sample size for a survey depends on three main factors: the resources available, the aim of the study, and the statistical quality needed for the survey. For ‘qualitative’ surveys using focus groups or interviews, the sample size needed will be smaller than if quantitative data are collected by questionnaire. If statistical analysis is to be performed on the data then sample size calculations should be conducted. This can be done using computer packages such as G*Power [ 10 ]; however, those with little statistical knowledge should consult a statistician. For practical recommendations on sample size, the set of survey guidelines developed by the UK Department of Health [ 11 ] should be consulted.
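As a purely illustrative calculation (this formula is standard statistical practice for estimating a proportion, not one given in the paper), the sample size needed to estimate a population proportion p within a margin of error e at roughly 95% confidence is n = z²p(1−p)/e²:

```python
import math

def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96):
    """Sample size to estimate a population proportion p within a given
    margin of error at ~95% confidence (z = 1.96). p = 0.5 gives the
    most conservative (largest) required sample size."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size_for_proportion())             # 385
print(sample_size_for_proportion(margin=0.03))  # 1068
```

Note how halving the margin of error roughly quadruples the required sample, which is why precision targets should be set before data collection begins.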

Larger samples give a better estimate of the population but it can be difficult to obtain an adequate number of responses. It is rare that everyone asked to participate in the survey will reply. To ensure a sufficient number of responses, include an estimated non-response rate in the sample size calculations.
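The adjustment for non-response described above is a simple division; the figures used here are illustrative assumptions:

```python
import math

def adjusted_sample_size(required_n, expected_response_rate):
    """Number of invitations needed so that, at the expected response
    rate, roughly required_n usable replies are returned."""
    return math.ceil(required_n / expected_response_rate)

# e.g. if 385 usable responses are needed and ~65% of postal
# questionnaires are expected to be returned:
print(adjusted_sample_size(385, 0.65))  # 593
```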

Response rates are a potential source of bias. The results from a survey with a large non-response rate could be misleading and only representative of those who replied. French [ 12 ] reported that non-responders to patient satisfaction surveys are less likely to be satisfied than people who reply. It is unwise to define a level above which a response rate is acceptable, as this depends on many local factors; however, an achievable and acceptable rate is ∼75% for interviews and 65% for self-completion postal questionnaires [ 9 , 13 ]. In any study, the final response rate should be reported with the results; potential differences between the respondents and non-respondents should be explicitly explored and their implications discussed.

There are techniques to increase response rates. A questionnaire must be concise and easy to understand, reminders should be sent out, and method of recruitment should be carefully considered. Sitzia and Wood [ 13 ] found that participants recruited by mail or who had to respond by mail had a lower mean response rate (67%) than participants who were recruited personally (mean response 76.7%). A most useful review of methods to maximize response rates in postal surveys has recently been published [ 14 ].

Data collection

Researchers should approach data collection in a rigorous and ethical manner. The following information must be clearly recorded:

How, where, how many times, and by whom potential respondents were contacted.

How many people were approached and how many of those agreed to participate.

How did those who agreed to participate differ from those who refused with regard to characteristics of interest in the study, for example how were they identified, where were they approached, and what was their gender, age, and features of their illness or health care.

How was the survey administered (e.g. telephone interview).

What was the response rate (i.e. the number of usable data sets as a proportion of the number of people approached).
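The response rate defined in the last point is a simple proportion; as a small sketch with illustrative figures:

```python
def response_rate(usable_datasets, people_approached):
    """Response rate as defined in the text: the number of usable data
    sets as a proportion of the number of people approached."""
    return usable_datasets / people_approached

print(f"{response_rate(130, 200):.0%}")  # 65%
```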

Data analysis

The purpose of all analyses is to summarize data so that it is easily understood and provides the answers to our original questions: ‘In order to do this researchers must carefully examine their data; they should become friends with their data’ [ 15 ]. Researchers must prepare to spend substantial time on the data analysis phase of a survey (and this should be built into the project plan). When analysis is rushed, often important aspects of the data are missed and sometimes the wrong analyses are conducted, leading to both inaccurate results and misleading conclusions [ 16 ]. However, and this point cannot be stressed strongly enough, researchers must not engage in data dredging, a practice that can arise especially in studies in which large numbers of dependent variables can be related to large numbers of independent variables (outcomes). When large numbers of possible associations in a dataset are reviewed at P < 0.05, one in 20 of the associations by chance will appear ‘statistically significant’; in datasets where only a few real associations exist, testing at this significance level will result in the large majority of findings still being false positives [ 17 ].
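The ‘one in 20’ arithmetic behind the warning against data dredging can be made concrete; this is a standard probability calculation, not code from the paper:

```python
# Probability of at least one spurious 'significant' association when m
# independent true-null associations are each tested at alpha = 0.05
alpha = 0.05
for m in (1, 20, 100):
    p_any_false_positive = 1 - (1 - alpha) ** m
    print(m, round(p_any_false_positive, 2))  # rises from 0.05 to 0.64 to 0.99
```

With 20 unplanned tests there is already a near two-in-three chance of at least one chance ‘finding’, which is why analyses should be specified before the data are examined.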

The method of data analysis will depend on the design of the survey and should have been carefully considered in the planning stages of the survey. Data collected by qualitative methods should be analysed using established methods such as content analysis [ 18 ], and where quantitative methods have been used appropriate statistical tests can be applied. Describing methods of analysis here would be unproductive as a multitude of introductory textbooks and on-line resources are available to help with simple analyses of data (e.g. [ 19 , 20 ]). For advanced analysis a statistician should be consulted.

When reporting survey research, it is essential that a number of key points are covered (though the length and depth of reporting will be dependent upon journal style). These key points are presented as a ‘checklist’ below:

Explain the purpose or aim of the research, with the explicit identification of the research question.

Explain why the research was necessary and place the study in context, drawing upon previous work in relevant fields (the literature review).

State the chosen research method or methods, and justify why this method was chosen.

Describe the research tool. If an existing tool is used, briefly state its psychometric properties and provide references to the original development work. If a new tool is used, you should include an entire section describing the steps undertaken to develop and test the tool, including results of psychometric testing.

Describe how the sample was selected and how data were collected, including:

How were potential subjects identified?

How many and what type of attempts were made to contact subjects?

Who approached potential subjects?

Where were potential subjects approached?

How was informed consent obtained?

How many agreed to participate?

How did those who agreed differ from those who did not agree?

What was the response rate?

Describe and justify the methods and tests used for data analysis.

Present the results of the research. The results section should be clear, factual, and concise.

Interpret and discuss the findings. This ‘discussion’ section should not simply reiterate results; it should provide the author’s critical reflection upon both the results and the processes of data collection. The discussion should assess how well the study met the research question, should describe the problems encountered in the research, and should honestly judge the limitations of the work.

Present conclusions and recommendations.

The researcher needs to tailor the research report to meet:

The expectations of the specific audience for whom the work is being written.

The conventions that operate at a general level with respect to the production of reports on research in the social sciences.

Anyone involved in collecting data from patients has an ethical duty to respect each individual participant’s autonomy. Any survey should be conducted in an ethical manner and one that accords with best research practice. Two important ethical issues to adhere to when conducting a survey are confidentiality and informed consent.

The respondent’s right to confidentiality should always be respected and any legal requirements on data protection adhered to. In the majority of surveys, the patient should be fully informed about the aims of the survey, and the patient’s consent to participate in the survey must be obtained and recorded.

The professional bodies listed below, among many others, provide guidance on the ethical conduct of research and surveys.

American Psychological Association: http://www.apa.org

British Psychological Society: http://www.bps.org.uk

British Medical Association: http://www.bma.org.uk .

UK General Medical Council: http://www.gmc-uk.org

American Medical Association: http://www.ama-assn.org

UK Royal College of Nursing: http://www.rcn.org.uk

UK Department of Health: http://www.doh.gov.uk

Survey research demands the same standards in research practice as any other research approach, and journal editors and the broader research community will judge a report of survey research with the same level of rigour as any other research report. This is not to say that survey research need be particularly difficult or complex; the point to emphasize is that researchers should be aware of the steps required in survey research, and should be systematic and thoughtful in the planning, execution, and reporting of the project. Above all, survey research should not be seen as an easy, ‘quick and dirty’ option; such work may adequately fulfil local needs (e.g. a quick survey of hospital staff satisfaction), but will not stand up to academic scrutiny and will not be regarded as having much value as a contribution to knowledge.

Address reprint requests to John Sitzia, Research Department, Worthing Hospital, Lyndhurst Road, Worthing BN11 2DH, West Sussex, UK. E-mail: [email protected]

1. London School of Economics, UK. http://booth.lse.ac.uk/ (accessed 15 January 2003).

2. Vernon A. A Quaker Businessman: Biography of Joseph Rowntree (1836–1925). London: Allen & Unwin, 1958.

3. Denscombe M. The Good Research Guide: For Small-scale Social Research Projects. Buckingham: Open University Press, 1998.

4. Robson C. Real World Research: A Resource for Social Scientists and Practitioner-researchers. Oxford: Blackwell Publishers, 1993.

5. Streiner DL, Norman GR. Health Measurement Scales: A Practical Guide to their Development and Use. Oxford: Oxford University Press, 1995.

6. Sitzia J. How valid and reliable are patient satisfaction data? An analysis of 195 studies. Int J Qual Health Care 1999; 11: 319–328.

7. Bowling A. Research Methods in Health. Investigating Health and Health Services. Buckingham: Open University Press, 2002.

8. American Statistical Association, USA. http://www.amstat.org (accessed 9 December 2002).

9. Arber S. Designing samples. In: Gilbert N, ed. Researching Social Life. London: SAGE Publications, 2001.

10. Heinrich Heine University, Dusseldorf, Germany. http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/index.html (accessed 12 December 2002).

11. Department of Health, England. http://www.doh.gov.uk/acutesurvey/index.htm (accessed 12 December 2002).

12. French K. Methodological considerations in hospital patient opinion surveys. Int J Nurs Stud 1981; 18: 7–32.

13. Sitzia J, Wood N. Response rate in patient satisfaction research: an analysis of 210 published studies. Int J Qual Health Care 1998; 10: 311–317.

14. Edwards P, Roberts I, Clarke M et al. Increasing response rates to postal questionnaires: systematic review. Br Med J 2002; 324: 1183.

15. Wright DB. Making friends with our data: improving how statistical results are reported. Br J Educ Psychol 2003; in press.

16. Wright DB, Kelley K. Analysing and reporting data. In: Michie S, Abraham C, eds. Health Psychology in Practice. London: SAGE Publications, 2003; in press.

17. Davey Smith G, Ebrahim S. Data dredging, bias, or confounding. Br Med J 2002; 325: 1437–1438.

18. Morse JM, Field PA. Nursing Research: The Application of Qualitative Approaches. London: Chapman and Hall, 1996.

19. Wright DB. Understanding Statistics: An Introduction for the Social Sciences. London: SAGE Publications, 1997.

20. Sportscience, New Zealand. http://www.sportsci.org/resource/stats/index.html (accessed 12 December 2002).


Equator network

Enhancing the QUAlity and Transparency Of health Research

  • Courses & events
  • Librarian Network
  • Search for reporting guidelines

reporting guidelines for survey research

Browse for reporting guidelines by selecting one or more of these drop-downs:

Displaying 622 reporting guidelines found.

Most recently added records are displayed first.

  • Consensus reporting guidelines to address gaps in descriptions of ultra-rare genetic conditions
  • Biofield therapies: Guidelines for reporting clinical trials
  • The PICOTS-ComTeC Framework for Defining Digital Health Interventions: An ISPOR Special Interest Group Report
  • REPORT-SCS : minimum reporting standards for spinal cord stimulation studies in spinal cord injury
  • CARE-radiology statement explanation and elaboration: reporting guideline for radiological case reports
  • Trial Forge Guidance 4: a guideline for reporting the results of randomised Studies Within A Trial (SWATs)
  • The RETRIEVE Checklist for Studies Reporting the Elicitation of Stated Preferences for Child Health -Related Quality of Life
  • We don’t know what you did last summer. On the importance of transparent reporting of reaction time data pre-processing
  • Introducing So NHR-Reporting guidelines for Social Networks In Health Research
  • Reporting standard for describing first responder systems, smartphone alerting systems, and AED networks
  • The reporting checklist for Chinese patent medicine guidelines: RIGHT for CPM
  • The SHARE : SHam Acupuncture REporting guidelines and a checklist in clinical trials
  • REPCAN : Guideline for REporting Population-based CANcer Registry Data
  • The Test Adaptation Reporting Standards (TARES) : reporting test adaptations
  • ACCORD (ACcurate COnsensus Reporting Document): A reporting guideline for consensus methods in biomedicine developed via a modified Delphi
  • Consolidated Reporting Guidelines for Prognostic and Diagnostic Machine Learning Modeling Studies: Development and Validation
  • Development of the Reporting Infographics and Visual Abstracts of Comparative studies (RIVA-C) checklist and guide
  • Preliminary guideline for reporting bibliometric reviews of the biomedical literature (BIBLIO) : a minimum requirements
  • Appropriate design and reporting of superiority, equivalence and non-inferiority clinical trials incorporating a benefit-risk assessment: the BRAINS study including expert workshop
  • Preferred Reporting Items for Resistance Exercise Studies (PRIRES) : A Checklist Developed Using an Umbrella Review of Systematic Reviews
  • ENLIGHT : A consensus checklist for reporting laboratory-based studies on the non-visual effects of light in humans
  • Consensus Statement for Protocols of Factorial Randomized Trials: Extension of the SPIRIT 2013 Statement
  • Reporting of Factorial Randomized Trials: Extension of the CONSORT 2010 Statement
  • Adjusting for Treatment Switching in Oncology Trials: A Systematic Review and Recommendations for Reporting
  • Modeling Infectious Diseases in Healthcare Network (MInD-Healthcare) Framework for Describing and Reporting Multidrug-resistant Organism and Healthcare -Associated Infections Agent-based Modeling Methods
  • LEVEL (Logical Explanations & Visualizations of Estimates in Linear mixed models): recommendations for reporting multilevel data and analyses
  • Data linkage in pharmacoepidemiology: A call for rigorous evaluation and reporting
  • Expert consensus document: Reporting checklist for quantification of pulmonary congestion by lung ultrasound in heart failure
  • Commentary: minimum reporting standards should be expected for preclinical radiobiology irradiators and dosimetry in the published literature
  • Best practice guidelines for citizen science in mental health research: systematic review and evidence synthesis
  • Evaluating the quality of studies reporting on clinical applications of stromal vascular fraction: A systematic review and proposed reporting guidelines (CLINIC-STRA-SVF)
  • Enhancing reporting quality and impact of early phase dose-finding clinical trials: CONSORT Dose-finding Extension (CONSORT-DEFINE) guidance
  • Enhancing quality and impact of early phase dose-finding clinical trial protocols: SPIRIT Dose-finding Extension (SPIRIT-DEFINE) guidance
  • ESMO Guidance for Reporting Oncology real -World evidence (GROW)
  • A systematic review and cluster analysis approach of 103 studies of high-intensity interval training on cardiorespiratory fitness
  • Generate Analysis -Ready Data for Real-world Evidence: Tutorial for Harnessing Electronic Health Records With Advanced Informatic Technologies
  • Developing Consensus -Based Guidelines for Case Reporting in Aesthetic Medicine: Enhancing Transparency and Standardization
  • MINIMAR (MINimum Information for Medical AI Reporting): Developing reporting standards for artificial intelligence in health care
  • Presenting artificial intelligence, deep learning, and machine learning studies to clinicians and healthcare stakeholders: an introductory reference with a guideline and a Clinical AI Research (CAIR) checklist proposal
  • Data Processing Strategies to Determine Maximum Oxygen Uptake: A Systematic Scoping Review and Experimental Comparison with Guidelines for Reporting
  • Improving the Rigor of Mechanistic Behavioral Science: The Introduction of the Checklist for Investigating Mechanisms in Behavior -Change Research (CLIMBR)
  • An analysis of reporting practices in the top 100 cited health and medicine-related bibliometric studies from 2019 to 2021 based on a proposed guidelines
  • Improving the Reporting of Primary Care Research: Consensus Reporting Items for Studies in Primary Care-the CRISP Statement
  • Checklist for studies of HIV drug resistance prevalence or incidence: rationale and recommended use
  • Community-developed checklists for publishing images and image analyses
  • Systematic Development of Standards for Mixed Methods Reporting in Rehabilitation Health Sciences Research
  • Minimal reporting guideline for research involving eye tracking (2023 edition)
  • CHEERS Value of Information (CHEERS-VOI) Reporting Standards – Explanation and Elaboration
  • Initial Standardized Framework for Reporting Social Media Analytics in Emergency Care Research
  • The adapted Autobiographical interview: A systematic review and proposal for conduct and reporting
  • Paediatric Ureteroscopy (P-URS) reporting checklist: a new tool to aid studies report the essential items on paediatric ureteroscopy for stone disease
  • Adult Ureteroscopy (A-URS) Checklist: A New Tool To Standardise Reporting in Endourology
  • Recommendations for the development, implementation, and reporting of control interventions in efficacy and mechanistic trials of physical, psychological, and self-management therapies: the Co PPS Statement
  • Reporting Eye-tracking Studies In DEntistry (RESIDE) checklist
  • AdVi SHE : A Validation -Assessment Tool of Health -Economic Models for Decision Makers and Model Users
  • i CHECK-DH : Guidelines and Checklist for the Reporting on Digital Health Implementations
  • New reporting items and recommendations for randomized trials impacted by COVID- 19 and force majeure events: a targeted approach
  • Development, explanation, and presentation of the Physical Literacy Interventions Reporting Template (PLIRT)
  • Reporting guidelines for allergy and immunology survey research
  • CheckList for EvaluAtion of Radiomics research (CLEAR) : a step-by-step reporting guideline for authors and reviewers endorsed by ESR and EuSo MII
  • Transparent reporting of multivariable prediction models for individual prognosis or diagnosis: checklist for systematic reviews and meta-analyses (TRIPOD-SRMA)
  • Preferred Reporting Items for Complex Sample Survey Analysis (PRICSSA)
  • Defining measures of kidney function in observational studies using routine health care data: methodological and reporting considerations
  • CORE-CERT Items as a Minimal Requirement for Replicability of Exercise Interventions: Results From Application to Exercise Studies for Breast Cancer Patients
  • Recommendations for Reporting Machine Learning Analyses in Clinical Research
  • ACURATE : A guide for reporting sham controls in trials using acupuncture
  • Checklist for Artificial Intelligence in Medical Imaging (CLAIM) : A Guide for Authors and Reviewers
  • STandards for Reporting Interventions in Clinical Trials Of Tuina/Massage (STRICTOTM) : Extending the CONSORT statement
  • Transparent reporting of multivariable prediction models developed or validated using clustered data: TRIPOD-Cluster checklist
  • The SUPER reporting guideline suggested for reporting of surgical technique
  • Guidelines for Reporting Outcomes in Trial Reports: The CONSORT-Outcomes 2022 Extension
  • Guidelines for Reporting Outcomes in Trial Protocols: The SPIRIT-Outcomes 2022 Extension
  • PROBE 2023 guidelines for reporting observational studies in Endodontics: A consensus-based development study
  • CONFERD-HP : recommendations for reporting COmpeteNcy FramEwoRk Development in health professions
  • Development of the ASSESS tool: a comprehenSive tool to Support rEporting and critical appraiSal of qualitative, quantitative, and mixed methods implementation reSearch outcomes
  • Guiding document analyses in health professions education research
  • Evidence-based statistical analysis and methods in biomedical research (SAMBR) checklists according to design features
  • Social Accountability Reporting for Research (SAR 4Research): checklist to strengthen reporting on studies on social accountability in the literature
  • How to Report Data on Bilateral Procedures and Other Issues with Clustered Data: The CLUDA Reporting Guidelines
  • Best practice guidance and reporting items for the development of scoping review protocols
  • Establishing reporting standards for participant characteristics in post-stroke aphasia research: An international e -Delphi exercise and consensus meeting
  • STARTER Checklist for Antimalarial Therapeutic Efficacy Reporting
  • Best Practice in the chemical characterisation of extracts used in pharmacological and toxicological research -The ConPhy MP-Guidelines
  • Advising on Preferred Reporting Items for patient-reported outcome instrument development: the PRIPROID
  • Recommendations for reporting the results of studies of instrument and scale development and testing
  • Methodical approaches to determine the rate of radial muscle displacement using tensiomyography: A scoping review and new reporting guideline
  • Methods for developing and reporting living evidence synthesis
  • Bayesian Analysis Reporting Guidelines
  • Reporting guideline for overviews of reviews of healthcare interventions: development of the PRIOR statement
  • The Do CTRINE Guidelines: Defined Criteria To Report INnovations in Education
  • Development of guidelines to reduce, handle and report missing data in palliative care trials: A multi-stakeholder modified nominal group technique
  • The RIPI-f (Reporting Integrity of Psychological Interventions delivered face-to-face) checklist was developed to guide reporting of treatment integrity in face-to-face psychological interventions
  • The Intraoperative Complications Assessment and Reporting with Universal Standards (ICARUS) Global Surgical Collaboration Project: Development of Criteria for Reporting Adverse Events During Surgical Procedures and Evaluating Their Impact on the Postoperative Course
  • CODE-EHR best-practice framework for the use of structured electronic health-care records in clinical research
  • A Reporting Tool for Adapted Guidelines in Health Care: The RIGHT-Ad@pt Checklist
  • Development of a reporting guideline for systematic reviews of animal experiments in the field of traditional Chinese medicine
  • TIDieR-telehealth : precision in reporting of telehealth interventions used in clinical trials – unique considerations for the Template for the Intervention Description and Replication (TIDieR) checklist
  • Towards better reporting of the proportion of days covered method in cardiovascular medication adherence: A scoping review and new tool TEN-SPIDERS
  • Murine models of radiation cardiotoxicity: a systematic review and recommendations for future studies
  • Systematic Review and Meta -Analysis of Outcomes After Operative Treatment of Aberrant Subclavian Artery Pathologies and Suggested Reporting Items
  • Reporting standards for psychological network analyses in cross-sectional data
  • Reporting ChAracteristics of cadaver training and sUrgical studies: The CACTUS guidelines
  • EULAR points to consider for minimal reporting requirements in synovial tissue research in rheumatology
  • Methods and Applications of Social Media Monitoring of Mental Health During Disasters: Scoping Review
  • Position Statement on Exercise Dosage in Rheumatic and Musculoskeletal Diseases: The Role of the IMPACT-RMD Toolkit
  • A checklist for assessing the methodological quality of concurrent t ES-fMRI studies (ContES checklist): a consensus study and statement
  • Application of Mixed Methods in Health Services Management Research: A Systematic Review
  • Methodological standards for conducting and reporting meta-analyses: Ensuring the replicability of meta-analyses of pharmacist-led medication review
  • A scoping review of the use of ethnographic approaches in implementation research and recommendations for reporting
  • The Chest Wall Injury Society Recommendations for Reporting Studies of Surgical Stabilization of Rib Fractures
  • How to Report Light Exposure in Human Chronobiology and Sleep Research Experiments
  • Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI
  • Use of actigraphy for assessment in pediatric sleep research
  • EULAR points to consider when analysing and reporting comparative effectiveness research using observational data in rheumatology
  • International Consensus Based Review and Recommendations for Minimum Reporting Standards in Research on Transcutaneous Vagus Nerve Stimulation (Version 2020)
  • Developing a checklist for reporting research using simulated patient methodology (CRiSP) : a consensus study
  • Conceptual Ambiguity Surrounding Gamification and Serious Games in Health Care: Literature Review and Development of Game -Based Intervention Reporting Guidelines (GAMING)
  • Improving Reporting of Clinical Studies Using the POSEIDON Criteria: POSORT Guidelines
  • Six practical recommendations for improved implementation outcomes reporting
  • Recommendations and publication guidelines for studies using frequency domain and time-frequency domain analyses of neural time series
  • EVIDENCE Publication Checklist for Studies Evaluating Connected Sensor Technologies: Explanation and Elaboration
  • An extension of the RIGHT statement for introductions and interpretations of clinical practice guidelines: RIGHT for INT
  • Rasch Reporting Guideline for Rehabilitation Research (RULER) : The RULER Statement
  • Using qualitative research to develop an elaboration of the TIDieR checklist for interventions to enhance vaccination communication: short report
  • Preliminary Minimum Reporting Requirements for In -Vivo Neural Interface Research: I. Implantable Neural Interfaces
  • Standardized Reporting of Machine Learning Applications in Urology: The STREAM-URO Framework
  • Guidance for publishing qualitative research in informatics
  • Guidelines for reporting on animal fecal transplantation (GRAFT) studies: recommendations from a systematic review of murine transplantation protocols
  • A Scoping Review of Four Decades of Outcomes in Nonsurgical Root Canal Treatment, Nonsurgical Retreatment, and Apexification Studies: Part 3 -A Proposed Framework for Standardized Data Collection and Reporting of Endodontic Outcome Studies
  • Health -Economic Analyses of Diagnostics: Guidance on Design and Reporting
  • Reporting guidelines for human microbiome research: the STORMS checklist
  • Smartphone -Delivered Ecological Momentary Interventions Based on Ecological Momentary Assessments to Promote Health Behaviors: Systematic Review and Adapted Checklist for Reporting Ecological Momentary Assessment and Intervention Studies
  • A Systematic Review of Methods and Procedures Used in Ecological Momentary Assessments of Diet and Physical Activity Research in Youth: An Adapted STROBE Checklist for Reporting EMA Studies (CREMAS)
  • Reporting Data on Auditory Brainstem Responses (ABR) in Rats: Recommendations Based on Review of Experimental Protocols and Literature
  • Heterogeneity in the Identification of Potential Drug -Drug Interactions in the Intensive Care Unit: A Systematic Review, Critical Appraisal, and Reporting Recommendations
  • Development of the CLARIFY (CheckList stAndardising the Reporting of Interventions For Yoga) guidelines: a Delphi study
  • Early phase clinical trials extension to guidelines for the content of statistical analysis plans
  • PRESENT 2020: Text Expanding on the Checklist for Proper Reporting of Evidence in Sport and Exercise Nutrition Trials
  • Intraoperative fluorescence diagnosis in the brain: a systematic review and suggestions for future standards on reporting diagnostic accuracy and clinical utility
  • Checklist for Theoretical Report in Epidemiological Studies (CRT-EE) : explanation and elaboration
  • STAndard Reporting of CAries Detection and Diagnostic Studies (STARCARDDS)
  • Implementing the 27 PRISMA 2020 Statement items for systematic reviews in the sport and exercise medicine, musculoskeletal rehabilitation and sports science fields: the PERSiST (implementing Prisma in Exercise, Rehabilitation, Sport medicine and SporTs science) guidance
  • Strengthening the Reporting of Observational Studies in Epidemiology Using Mendelian Randomization: The STROBE-MR Statement
  • Recommended reporting items for epidemic forecasting and prediction research: The EPIFORGE 2020 guidelines
  • Extending the CONSORT Statement to moxibustion
  • Consensus-based recommendations for case report in Chinese medicine (CARC)
  • Reporting Guidelines for Whole -Body Vibration Studies in Humans, Animals and Cell Cultures: A Consensus Statement from an International Group of Experts
  • Guidelines for cellular and molecular pathology content in clinical trial protocols: the SPIRIT-Path extension
  • A Guideline for Reporting Mediation Analyses of Randomized Trials and Observational Studies: The AGReMA Statement
  • Social Innovation For Health Research (SIFHR) : Development of the SIFHR Checklist
  • How to write a guideline: a proposal for a manuscript template that supports the creation of trustworthy guidelines
  • REPORT-PFP : a consensus from the International Patellofemoral Research Network to improve REPORTing of quantitative PatelloFemoral Pain studies
  • Reporting stAndards for research in PedIatric Dentistry (RAPID) : an expert consensus-based statement
  • Improving the reporting quality of reliability generalization meta-analyses: The REGEMA checklist
  • Evaluation of post-introduction COVID- 19 vaccine effectiveness: Summary of interim guidance of the World Health Organization
  • Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report
  • Guidelines for Reporting Trial Protocols and Completed Trials Modified Due to the COVID- 19 Pandemic and Other Extenuating Circumstances: The CONSERVE 2021 Statement
  • Describing deprescribing trials better: an elaboration of the CONSORT statement
  • Comprehensive reporting of pelvic floor muscle training for urinary incontinence: CERT-PFMT
  • International Olympic Committee Consensus Statement: Methods for Recording and Reporting of Epidemiological Data on Injury and Illness in Sports 2020 (Including the STROBE Extension for Sports Injury and Illness Surveillance (STROBE-SIIS))
  • RIGHT for Acupuncture: An Extension of the RIGHT Statement for Clinical Practice Guidelines on Acupuncture
  • The APOSTEL 2.0 Recommendations for Reporting Quantitative Optical Coherence Tomography Studies
  • Room Indirect Calorimetry Operating and Reporting Standards (RICORS 1.0): A Guide to Conducting and Reporting Human Whole -Room Calorimeter Studies
  • COSMIN reporting guideline for studies on measurement properties of patient-reported outcome measures
  • Ensuring best practice in genomics education and evaluation: reporting item standards for education and its evaluation in genomics (RISE 2 Genomics)
  • A Consensus -Based Checklist for Reporting of Survey Studies (CROSS)
  • CONSORT extension for the reporting of randomised controlled trials conducted using cohorts and routinely collected data (CONSORT-ROUTINE) : checklist with explanation and elaboration
  • Preferred reporting items for journal and conference abstracts of systematic reviews and meta-analyses of diagnostic test accuracy studies (PRISMA-DTA for Abstracts): checklist, explanation, and elaboration
  • An analysis of preclinical efficacy testing of antivenoms for sub -Saharan Africa: Inadequate independent scrutiny and poor-quality reporting are barriers to improving snakebite treatment and management
  • Artificial intelligence in dental research: Checklist for authors, reviewers, readers
  • EULAR recommendations for the reporting of ultrasound studies in rheumatic and musculoskeletal diseases (RMDs)
  • PRIASE 2021 guidelines for reporting animal studies in Endodontology: a consensus-based development
  • The reporting checklist for public versions of guidelines: RIGHT-PVG
  • SQUIRE-EDU (Standards for QUality Improvement Reporting Excellence in Education): Publication Guidelines for Educational Improvement
  • PRISMA-S : an extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews
  • Anaesthesia Case Report (ACRE) checklist: a tool to promote high-quality reporting of cases in peri-operative practice
  • Strengthening tRansparent reporting of reseArch on uNfinished nursing CARE : The RANCARE guideline
  • Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report
  • Defining Group Care Programs: An Index of Reporting Standards
  • Stakeholder analysis in health innovation planning processes: A systematic scoping review
  • PRISMA extension for moxibustion 2020: recommendations, explanation, and elaboration
  • PRISMA (Preferred Reporting Items for Systematic Reviews and Meta -Analyses) Extension for Chinese Herbal Medicines 2020 (PRISMA-CHM 2020)
  • Reporting gaps in immunization costing studies: Recommendations for improving the practice
  • The RIGHT Extension Statement for Traditional Chinese Medicine: Development, Recommendations, and Explanation
  • Proposed Requirements for Cardiovascular Imaging -Related Machine Learning Evaluation (PRIME) : A Checklist: Reviewed by the American College of Cardiology Healthcare Innovation Council
  • Benefit -Risk Assessment of Vaccines. Part II : Proposal Towards Consolidated Standards of Reporting Quantitative Benefit -Risk Models Applied to Vaccines (BRIVAC)
  • BIAS : Transparent reporting of biomedical image analysis challenges
  • Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist
  • STrengthening the Reporting Of Pharmacogenetic Studies: Development of the STROPS guideline
  • TIDieR-Placebo : a guide and checklist for reporting placebo and sham controls
  • Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI Extension
  • Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI Extension
  • The IDEAL Reporting Guidelines: A Delphi Consensus Statement Stage specific recommendations for reporting the evaluation of surgical innovation
  • The adaptive designs CONSORT extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design
  • Reporting Guideline for Priority Setting of Health Research (REPRISE)
  • Consensus on the reporting and experimental design of clinical and cognitive-behavioural neurofeedback studies (CRED-nf checklist)
  • PRICE 2020 guidelines for reporting case reports in Endodontics: a consensus-based development
  • PRIRATE 2020 guidelines for reporting randomized trials in Endodontics: a consensus-based development
  • Guidance for reporting intervention development studies in health research (GUIDED) : an evidence-based consensus study
  • Standard Protocol Items for Clinical Trials with Traditional Chinese Medicine 2018: Recommendations, Explanation and Elaboration (SPIRIT-TCM Extension 2018)
  • SPIRIT extension and elaboration for n-of-1 trials: SPENT 2019 checklist
  • Standards for reporting interventions in clinical trials of cupping (STRICTOC) : extending the CONSORT statement
  • Pa CIR : A tool to enhance pharmacist patient care intervention reporting
  • Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline
  • Guidelines for reporting case studies and series on drug-induced QT interval prolongation and its complications following acute overdose
  • Criteria for describing and evaluating training interventions in healthcare professions – CRe-DEPTH
  • CONSORT extension for reporting N-of- 1 trials for traditional Chinese medicine (CENT for TCM) : Recommendations, explanation and elaboration
  • Reporting items for systematic reviews and meta-analyses of acupuncture: the PRISMA for acupuncture checklist
  • Consolidated criteria for strengthening reporting of health research involving indigenous peoples: the CONSIDER statement
  • Checklist for the preparation and review of pain clinical trial publications: a pain-specific supplement to CONSORT
  • Reporting guidelines on remotely collected electronic mood data in mood disorder (e MOOD)-recommendations
  • CONSORT 2010 statement: extension to randomised crossover trials
  • Reporting of Multi -Arm Parallel -Group Randomized Trials: Extension of the CONSORT 2010 Statement
  • Microbiology Investigation Criteria for Reporting Objectively (MICRO) : a framework for the reporting and interpretation of clinical microbiology data
  • Core Outcome Set -STAndardised Protocol Items: the COS-STAP Statement
  • The Reporting on ERAS Compliance, Outcomes, and Elements Research (RECOvER) Checklist: A Joint Statement by the ERAS ® and ERAS ® USA Societies
  • Reporting of stepped wedge cluster randomised trials: extension of the CONSORT 2010 statement with explanation and elaboration
  • Improving reporting of Meta -Ethnography : The e MERGe Reporting Guidance
  • The Reporting Items for Patent Landscapes statement
  • Reporting guidelines on how to write a complete and transparent abstract for overviews of systematic reviews of health care interventions
  • The reporting of studies conducted using observational routinely collected health data statement for pharmacoepidemiology (RECORD-PE)
  • PRISMA Extension for Scoping Reviews (PRISMA-ScR) : Checklist and Explanation
  • Systems Perspective of Amazon Mechanical Turk for Organizational Research: Review and Recommendations
  • ESPACOMP Medication Adherence Reporting Guideline (EMERGE)
  • Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 Extension
  • Reporting guidelines for implementation research on nurturing care interventions designed to promote early childhood development
  • Strengthening the reporting of empirical simulation studies: Introducing the STRESS guidelines
  • TIDieR-PHP : a reporting guideline for population health and policy interventions
  • Guidelines for Inclusion of Patient -Reported Outcomes in Clinical Trial Protocols: The SPIRIT-PRO Extension
  • Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement
  • Structural brain development: A review of methodological approaches and best practices
  • Improving the Development, Monitoring and Reporting of Stroke Rehabilitation Research: Consensus -Based Core Recommendations from the Stroke Recovery and Rehabilitation Roundtable
  • Variability in the Reporting of Serum Urate and Flares in Gout Clinical Trials: Need for Minimum Reporting Requirements
  • Methodology of assessment and reporting of safety in anti-malarial treatment efficacy studies of uncomplicated falciparum malaria in pregnancy: a systematic literature review
  • Standards for UNiversal reporting of patient Decision Aid Evaluation studies: the development of SUNDAE Checklist
  • Consideration of Sex Differences in Design and Reporting of Experimental Arterial Pathology Studies -Statement From ATVB Council
  • Guidelines for the Content of Statistical Analysis Plans in Clinical Trials
  • RECORDS : Improved reporting of Monte Carlo Radiation transport studies: Report of the AAPM Research Committee Task Group 268
  • Ten simple rules for neuroimaging meta-analysis
  • CONSORT-Equity 2017 extension and elaboration for better reporting of health equity in randomised trials
  • Preferred Reporting Items for Overviews of systematic reviews including harms checklist: A pilot tool to be used for balanced reporting of benefits and harms
  • Reporting Guidelines for the Use of Expert Judgement in Model -Based Economic Evaluations
  • Standards for reporting chronic periodontitis prevalence and severity in epidemiologic studies: Proposed standards from the Joint EU / USA Periodontal Epidemiology Working Group
  • Graphics and statistics for cardiology: designing effective tables for presentation and publication
  • Guidelines for reporting meta-epidemiological methodology research
  • Reporting to Improve Reproducibility and Facilitate Validity Assessment for Healthcare Database Studies V1.0
  • Characteristics of funding of clinical trials: cross-sectional survey and proposed guidance
  • Guidelines for Reporting on Latent Trajectory Studies (GRoLTS)
  • AMWA ‒ EMWA ‒ ISMPP Joint Position Statement on the Role of Professional Medical Writers
  • STROCSS 2021: Strengthening the reporting of cohort, cross-sectional and case-control studies in surgery
  • Unique identification of research resources in the biomedical literature: the Resource Identification Initiative (RRID)
  • Checklist for One Health Epidemiological Reporting of Evidence (COHERE)
  • STARD for Abstracts: essential items for reporting diagnostic accuracy studies in journal or conference abstracts
  • GRIPP 2 reporting checklists: tools to improve reporting of patient and public involvement in research
  • Single organ cutaneous vasculitis: Case definition & guidelines for data collection, analysis, and presentation of immunization safety data
  • Improving the reporting of clinical trials of infertility treatments (IMPRINT) : modifying the CONSORT statement
  • Improving the reporting of therapeutic exercise interventions in rehabilitation research
  • Guidance to develop individual dose recommendations for patients on chronic hemodialysis
  • A literature review of applied adaptive design methodology within the field of oncology in randomised controlled trials and a proposed extension to the CONSORT guidelines
  • AHRQ Series on Complex Intervention Systematic Reviews – Paper 6: PRISMA-CI Extension Statement & Checklist
  • Adaptation of the CARE Guidelines for Therapeutic Massage and Bodywork Publications: Efforts To Improve the Impact of Case Reports
  • A review of published analyses of case-cohort studies and recommendations for future reporting
  • Guidelines for reporting evaluations based on observational methodology
  • Reporting and Guidelines in Propensity Score Analysis: A Systematic Review of Cancer and Cancer Surgical Studies
  • Minimum Information for Studies Evaluating Biologics in Orthopaedics (MIBO): Platelet-Rich Plasma and Mesenchymal Stem Cells
  • CONSORT 2010 statement: extension checklist for reporting within person randomised trials
  • CONSORT Extension for Chinese Herbal Medicine Formulas 2017: Recommendations, Explanation, and Elaboration
  • Methods and processes of developing the strengthening the reporting of observational studies in epidemiology – veterinary (STROBE-Vet) statement
  • CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials
  • STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies
  • SPIRIT 2013 Statement: Defining standard protocol items for clinical trials
  • Best Practices in Data Analysis and Sharing in Neuroimaging using MRI
  • Latent Class Analysis: An example for reporting results
  • Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View
  • Clarity in Reporting Terminology and Definitions of Set End Points in Resistance Training
  • An introduction to using Bayesian linear regression with clinical data
  • Making economic evaluations more helpful for treatment choices in haemophilia
  • Guideline for Reporting Interventions on Spinal Manipulative Therapy: Consensus on Interventions Reporting Criteria List for Spinal Manipulative Therapy (CIRCLe SMT)
  • Guidance on Conducting and REporting DElphi Studies (CREDES) in palliative care: Recommendations based on a methodological systematic review
  • STARD-BLCM: Standards for the Reporting of Diagnostic accuracy studies that use Bayesian Latent Class Models
  • A Reporting Tool for Practice Guidelines in Health Care: The RIGHT Statement
  • The AGREE Reporting Checklist: a tool to improve reporting of clinical practice guidelines
  • The REFLECT statement: methods and processes of creating reporting guidelines for randomized controlled trials for livestock and food safety by modifying the CONSORT statement
  • Reporting Items for Updated Clinical Guidelines: Checklist for the Reporting of Updated Guidelines (CheckUp)
  • Preferred Reporting Items for the Development of Evidence-based Clinical Practice Guidelines in Traditional Medicine (PRIDE-CPG-TM): Explanation and elaboration
  • Standards for Reporting Implementation Studies (StaRI) Statement
  • Preferred Reporting Of Case Series in Surgery (PROCESS) 2023 guidelines
  • Consensus on Exercise Reporting Template (CERT): Modified Delphi Study
  • Development of the Anatomical Quality Assurance (AQUA) Checklist: guidelines for reporting original anatomical studies
  • CONSORT 2010 statement: extension to randomised pilot and feasibility trials
  • Core Outcome Set-STAndards for Reporting: The COS-STAR Statement
  • Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement
  • Evaluation of response after pre-operative radiotherapy in soft tissue sarcomas; the European Organisation for Research and Treatment of Cancer-Soft Tissue and Bone Sarcoma Group (EORTC-STBSG) and Imaging Group recommendations for radiological examination and reporting with an emphasis on magnetic resonance imaging
  • Standardization of pathologic evaluation and reporting of postneoadjuvant specimens in clinical trials of breast cancer: recommendations from an international working group
  • Image-guided Tumor Ablation: Standardization of Terminology and Reporting Criteria—A 10-Year Update
  • Irreversible Electroporation (IRE): Standardization of Terminology and Reporting Criteria for Analysis and Comparison
  • Recommendations for improving the quality of reporting clinical electrochemotherapy studies based on qualitative systematic review
  • METastasis Reporting and Data System for Prostate Cancer: Practical Guidelines for Acquisition, Interpretation, and Reporting of Whole-body Magnetic Resonance Imaging-based Evaluations of Multiorgan Involvement in Advanced Prostate Cancer
  • Reporting Magnetic Resonance Imaging in Men on Active Surveillance for Prostate Cancer: The PRECISE Recommendations-A Report of a European School of Oncology Task Force
  • Transcription factor HIF1A: downstream targets, associated pathways, polymorphic hypoxia response element (HRE) sites, and initiative for standardization of reporting in scientific literature
  • Eliciting the child’s voice in adverse event reporting in oncology trials: Cognitive interview findings from the Pediatric Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events initiative
  • CONSISE statement on the reporting of Seroepidemiologic Studies for influenza (ROSES-I statement): an extension of the STROBE statement
  • REporting recommendations for tumour MARKer prognostic studies (REMARK)
  • Recommendations to improve adverse event reporting in clinical trial publications: a joint pharmaceutical industry/journal editor perspective
  • Reporting studies on time to diagnosis: proposal of a guideline by an international panel (REST)
  • A Checklist for Reporting Valuation Studies of Multi-Attribute Utility-Based Instruments (CREATE)
  • Strengthening the Reporting of Observational Studies in Epidemiology for Newborn Infection (STROBE-NI): an extension of the STROBE statement for neonatal infection research
  • The SCARE 2020 Guideline: Updating Consensus Surgical CAse REport (SCARE) Guidelines
  • Development and validation of the guideline for reporting evidence-based practice educational interventions and teaching (GREET)
  • Using theory of change to design and evaluate public health interventions: a systematic review
  • Homeopathic clinical case reports: Development of a supplement (HOM-CASE) to the CARE clinical case reporting guideline
  • Reporting Guidelines for Health Care Simulation Research: Extensions to the CONSORT and STROBE Statements
  • Guidelines for Reporting Articles on Psychiatry and Heart rate variability (GRAPH): recommendations to advance research communication
  • RAMESES II reporting standards for realist evaluations
  • Guidelines for Accurate and Transparent Health Estimates Reporting: the GATHER statement
  • Medical abortion reporting of efficacy: the MARE guidelines.
  • Strengthening the Reporting of Observational Studies in Epidemiology—Nutritional Epidemiology (STROBE-nut): An Extension of the STROBE Statement
  • Sex and Gender Equity in Research: rationale for the SAGER guidelines and recommended use
  • The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016 Statement
  • Consensus on Recording Deep Endometriosis Surgery: the CORDES statement
  • Developing the Clarity and Openness in Reporting: E3-based (CORE) reference user manual for creation of clinical study reports in the era of clinical trial transparency
  • SCCT guidelines for the interpretation and reporting of coronary CT angiography: a report of the Society of Cardiovascular Computed Tomography Guidelines Committee
  • Definition and classification of intraoperative complications (CLASSIC): Delphi study and pilot evaluation
  • Improving research practice in rat orthotopic and partial orthotopic liver transplantation: a review, recommendation, and publication guide
  • Methodology used in studies reporting chronic kidney disease prevalence: a systematic literature review
  • Standardized outcomes reporting in metabolic and bariatric surgery
  • Consensus guidelines on plasma cell myeloma minimal residual disease analysis and reporting
  • DELTA2 guidance on choosing the target difference and undertaking and reporting the sample size calculation for a randomised controlled trial
  • Transparent reporting of data quality in distributed data networks
  • Quality of methods reporting in animal models of colitis
  • Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist
  • Quality of pain intensity assessment reporting: ACTTION systematic review and recommendations
  • An extension of STARD statements for reporting diagnostic accuracy studies on liver fibrosis tests: the Liver-FibroSTARD standards
  • STROBE-AMS: recommendations to optimise reporting of epidemiological studies on antimicrobial resistance and informing improvement in antimicrobial stewardship
  • Reporting guidelines for population pharmacokinetic analyses
  • Recommendations for the improved effectiveness and reporting of telemedicine programs in developing countries: results of a systematic literature review
  • Guidelines for the reporting of treatment trials for alcohol use disorders
  • Development of the Standards of Reporting of Neurological Disorders (STROND) checklist: A guideline for the reporting of incidence and prevalence studies in neuroepidemiology
  • A review of 40 years of enteric antimicrobial resistance research in Eastern Africa: what can be done better?
  • Ensuring consistent reporting of clinical pharmacy services to enhance reproducibility in practice: an improved version of DEPICT
  • RiGoR: reporting guidelines to address common sources of bias in risk model development
  • Reporting standards for guideline-based performance measures
  • Utstein-style guidelines on uniform reporting of in-hospital cardiopulmonary resuscitation in dogs and cats
  • Guidelines for reporting embedded recruitment trials
  • PRISMA harms checklist: improving harms reporting in systematic reviews
  • Developing a methodological framework for organisational case studies: a rapid review and consensus development process
  • The PRISMA 2020 statement: An updated guideline for reporting systematic reviews
  • The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies
  • A checklist to improve reporting of group-based behaviour-change interventions
  • The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) Statement
  • Preferred reporting items for studies mapping onto preference-based outcome measures: The MAPS statement
  • A call for transparent reporting to optimize the predictive value of preclinical research
  • Guidelines for uniform reporting of body fluid biomarker studies in neurologic disorders
  • The PRISMA Extension Statement for Reporting of Systematic Reviews Incorporating Network Meta-analyses of Health Care Interventions: Checklist and Explanations
  • Setting number of decimal places for reporting risk ratios: rule of four
  • Too many digits: the presentation of numerical data
  • The CONSORT Statement: Application within and adaptations for orthodontic trials
  • A structured approach to documenting a search strategy for publication: a 12 step guideline for authors
  • Standards of reporting for MRI-targeted biopsy studies (START) of the prostate: recommendations from an International Working Group
  • Standardized reporting guidelines for emergency department syncope risk-stratification research
  • Designing and reporting case series in plastic surgery
  • Disaster medicine reporting: the need for new guidelines and the CONFIDE statement
  • Development and validation of reporting guidelines for studies involving data linkage
  • A reporting guide for studies on individual differences in traffic safety
  • Translating trial-based molecular monitoring into clinical practice: importance of international standards and practical considerations for community practitioners
  • Canadian Association of Gastroenterology consensus guidelines on safety and quality indicators in endoscopy
  • A tool to analyze the transferability of health promotion interventions
  • A new manner of reporting pressure results after glaucoma surgery
  • Do the media provide transparent health information? A cross-cultural comparison of public information about the HPV vaccine
  • Recommendations for the reporting of foot and ankle models
  • An introduction to standardized clinical nomenclature for dysmorphic features: the Elements of Morphology project
  • Strengthening the Reporting of Observational Studies in Epidemiology for Respondent-Driven Sampling Studies: ‘STROBE-RDS’ Statement
  • CONSORT extension for reporting N-of-1 trials (CENT) 2015 Statement
  • A protocol format for the preparation, registration and publication of systematic reviews of animal intervention studies
  • Reporting standards for literature searches and report inclusion criteria: making research syntheses more transparent and easy to replicate
  • Preferred Reporting Items for Systematic Review and Meta-Analyses of individual participant data: the PRISMA-IPD Statement
  • Evaluating complex interventions in end of life care: the MORECare statement on good practice generated by a synthesis of transparent expert consultations and systematic reviews
  • Biospecimen reporting for improved study quality (BRISQ)
  • Developing a guideline to standardize the citation of bioresources in journal articles (CoBRA)
  • Reporting Guidelines for Clinical Pharmacokinetic Studies: The ClinPK Statement
  • Using the spinal cord injury common data elements
  • Guidelines for assessment of bone microstructure in rodents using micro-computed tomography
  • Instrumental variable methods in comparative safety and effectiveness research
  • Protecting the power of interventions through proper reporting
  • Reporting of data from out-of-hospital cardiac arrest has to involve emergency medical dispatching – taking the recommendations on reporting OHCA the Utstein style a step further
  • A position paper on standardizing the nonneoplastic kidney biopsy report
  • Using qualitative methods for attribute development for discrete choice experiments: issues and recommendations
  • Reporting of interaction
  • Head, neck, and brain tumor embolization guidelines
  • Reporting outcomes of back pain trials: a modified Delphi study
  • A common language in neoadjuvant breast cancer clinical trials: proposals for standard definitions and endpoints
  • Viscerotropic disease: case definition and guidelines for collection, analysis, and presentation of immunization safety data
  • Diarrhea: case definition and guidelines for collection, analysis, and presentation of immunization safety data
  • Immunization site pain: case definition and guidelines for collection, analysis, and presentation of immunization safety data
  • Can the Brighton Collaboration case definitions be used to improve the quality of Adverse Event Following Immunization (AEFI) reporting? Anaphylaxis as a case study
  • Definitions, methodological and statistical issues for phase 3 clinical trials in chronic myeloid leukemia: a proposal by the European LeukemiaNet
  • A new standardized format for reporting hearing outcome in clinical trials
  • A consensus approach toward the standardization of back pain definitions for use in prevalence studies
  • A proposed taxonomy of terms to guide the clinical trial recruitment process
  • Minimum data elements for research reports on CFS
  • Reporting standards for angiographic evaluation and endovascular treatment of cerebral arteriovenous malformations
  • How to report low-level laser therapy (LLLT)/photomedicine dose and beam parameters in clinical and laboratory studies
  • Common data elements for posttraumatic stress disorder research
  • American College of Medical Genetics standards and guidelines for interpretation and reporting of postnatal constitutional copy number variants
  • Completeness of reporting of radiation therapy planning, dose, and delivery in veterinary radiation oncology manuscripts from 2005 to 2010
  • Criteria for Reporting the Development and Evaluation of Complex Interventions in healthcare: revised guideline (CReDECI 2)
  • TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods
  • Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement
  • Reporting guidance for violence risk assessment predictive validity studies: the RAGEE Statement
  • Standards for reporting qualitative research: a synthesis of recommendations
  • A systematic review of systematic reviews and meta-analyses of animal experiments with guidelines for reporting
  • Systematic reviews and meta-analysis of preclinical studies: why perform them and how to appraise them critically
  • Guidelines for reporting case studies on extracorporeal treatments in poisonings: methodology
  • Reporting standards for studies of diagnostic test accuracy in dementia: The STARDdem Initiative.
  • Strengthening the reporting of molecular epidemiology for infectious diseases (STROME-ID): an extension of the STROBE statement
  • Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide
  • Launch of a checklist for reporting longitudinal observational drug studies in rheumatology: a EULAR extension of STROBE guidelines based on experience from biologics registries
  • CONSORT Harms 2022 statement, explanation, and elaboration: updated guideline for the reporting of harms in randomized trials
  • The CARE Guidelines: Consensus-based Clinical Case Reporting Guideline Development
  • Documenting Clinical and Laboratory Images in Publications: The CLIP Principles
  • ICMJE: Uniform Format for Disclosure of Competing Interests in ICMJE Journals
  • Selection and presentation of imaging figures in the medical literature
  • Neuroimaging standards for research into small vessel disease and its contribution to ageing and neurodegeneration
  • Finding What Works in Health Care: Standards for Systematic Reviews. Chapter 5 – Standards for Reporting Systematic Reviews
  • Systematic Reviews. CRD’s guidance for undertaking reviews in health care
  • The HuGENet™ HuGE Review Handbook, version 1.0. Guidelines for systematic review and meta-analysis of gene disease association studies
  • Cochrane Handbook for Systematic Reviews of Interventions Version 6.1
  • Writing for Publication in Veterinary Medicine. A Practical Guide for Researchers and Clinicians
  • Publication of population data for forensic purposes
  • Publication of population data of linearly inherited DNA markers in the International Journal of Legal Medicine
  • Proposed definitions and criteria for reporting time frame, outcome, and complications for clinical orthopedic studies in veterinary medicine
  • Recommended guidelines for the conduct and evaluation of prognostic studies in veterinary oncology
  • Update of the stroke therapy academic industry roundtable preclinical recommendations
  • Good laboratory practice: preventing introduction of bias at the bench
  • Consensus-based reporting standards for diagnostic test accuracy studies for paratuberculosis in ruminants
  • A gold standard publication checklist to improve the quality of animal studies, to fully integrate the Three Rs, and to make systematic reviews more feasible
  • The ARRIVE Guidelines 2.0: updated guidelines for reporting animal research
  • International Society for Medical Publication Professionals Code of Ethics
  • Proposed best practice for statisticians in the reporting and publication of pharmaceutical industry-sponsored clinical trials
  • What should be done to tackle ghostwriting in the medical literature?
  • Electrodermal activity at acupoints: literature review and recommendations for reporting clinical trials
  • Systematic review to determine best practice reporting guidelines for AFO interventions in studies involving children with cerebral palsy
  • EANM Dosimetry Committee guidance document: good practice of clinical dosimetry reporting
  • ASAS recommendations for collecting, analysing and reporting NSAID intake in clinical trials/epidemiological studies in axial spondyloarthritis
  • International Spinal Cord Injury Core Data Set (version 3.0) – including standardization of reporting
  • Risk of recurrent venous thromboembolism after stopping treatment in cohort studies: recommendation for acceptable rates and standardized reporting
  • The importance of uniform venous terminology in reports on varicose veins
  • Recommendations for reporting perioperative transoesophageal echo studies
  • Guidelines for reporting an fMRI study
  • Society for Cardiovascular Magnetic Resonance guidelines for reporting cardiovascular magnetic resonance examinations
  • Criteria for evaluation of novel markers of cardiovascular risk
  • American Society of Transplantation recommendations for screening, monitoring and reporting of infectious complications in immunosuppression trials in recipients of organ transplantation
  • Research reporting standards for radioembolization of hepatic malignancies
  • Research reporting standards for image-guided ablation of bone and soft tissue tumors
  • EANM procedure guidelines for brain neurotransmission SPECT using ¹²³I-labelled dopamine transporter ligands, version 2
  • Transcatheter Therapy for Hepatic Malignancy: Standardization of Terminology and Reporting Criteria
  • Reporting standards for percutaneous thermal ablation of renal cell carcinoma
  • Research reporting standards for percutaneous vertebral augmentation
  • Reporting standards for percutaneous interventions in dialysis access
  • Reporting standards for clinical evaluation of new peripheral arterial revascularization devices
  • Guidelines for the reporting of renal artery revascularization in clinical trials
  • Reporting standards for carotid interventions from the Society for Vascular Surgery
  • Reporting standards for carotid artery angioplasty and stent placement
  • Standardized definitions and clinical endpoints in carotid artery and supra-aortic trunk revascularization trials
  • Setting the standards for reporting ruptured abdominal aortic aneurysm
  • Endovascular repair compared with operative repair of traumatic rupture of the thoracic aorta: a nonsystematic review and a plea for trauma-specific reporting guidelines
  • Reporting standards for thoracic endovascular aortic repair (TEVAR)
  • Research reporting standards for endovascular treatment of pelvic venous insufficiency
  • Reporting standards for endovascular treatment of pulmonary embolism
  • Reporting Standards for Endovascular Repair of Saccular Intracranial Cerebral Aneurysms
  • Trial design and reporting standards for intra-arterial cerebral thrombolysis for acute ischemic stroke
  • Target registration and target positioning errors in computer-assisted neurosurgery: proposal for a standardized reporting of error assessment
  • Recommended guidelines for reporting on emergency medical dispatch when conducting research in emergency medicine: the Utstein style
  • Recommended guidelines for reviewing, reporting, and conducting research on post-resuscitation care: the Utstein style
  • Utstein-style guidelines for uniform reporting of laboratory CPR research
  • Recommendations for uniform reporting of data following major trauma–the Utstein style
  • Recommended guidelines for reviewing, reporting, and conducting research on in-hospital resuscitation: the in-hospital ‘Utstein style’
  • Standardization of uveitis nomenclature for reporting clinical data. Results of the First International Workshop
  • EACTS/ESCVS best practice guidelines for reporting treatment results in the thoracic aorta
  • Consensus statement: Defining minimal criteria for reporting the systemic inflammatory response to cardiopulmonary bypass
  • Guidelines for reporting data and outcomes for the surgical treatment of atrial fibrillation
  • Recommendations for reporting morbid events after heart valve surgery
  • Standards for reporting results of refractive surgery
  • Quality of study methods in individual- and group-level HIV intervention research: critical reporting elements
  • Quality of reporting in evaluations of surgical treatment of trigeminal neuralgia: recommendations for future reports
  • Guidelines for reporting case series of tumours of the colon and rectum
  • Guidance on reporting ultrasound exposure conditions for bio-effects studies
  • American College of Cardiology Clinical Expert Consensus Document on Standards for Acquisition, Measurement and Reporting of Intravascular Ultrasound Studies (IVUS)
  • Standardized reporting of bleeding complications for clinical investigations in acute coronary syndromes: a proposal from the academic bleeding consensus (ABC) multidisciplinary working group
  • Standardized reporting guidelines for studies evaluating risk stratification of ED patients with potential acute coronary syndromes
  • Calibration methods used in cancer simulation models and suggested reporting guidelines
  • Preschool vision screening: what should we be detecting and how should we report it? Uniform guidelines for reporting results of preschool vision screening studies
  • The lessons of QUANTEC : recommendations for reporting and gathering data on dose-volume dependencies of treatment outcome
  • A reporting guideline for clinical platelet transfusion studies from the BEST Collaborative
  • Clinical trials focusing on cancer pain educational interventions: core components to include during planning and reporting
  • Reporting disease activity in clinical trials of patients with rheumatoid arthritis: EULAR/ACR collaborative recommendations
  • Exercise therapy and low back pain: insights and proposals to improve the design, conduct, and reporting of clinical trials
  • Eligibility and outcomes reporting guidelines for clinical trials for patients in the state of a rising prostate-specific antigen: recommendations from the Prostate-Specific Antigen Working Group
  • Consensus guidelines for the conduct and reporting of clinical trials in systemic light-chain amyloidosis
  • Diagnosis and management of acute myeloid leukemia in adults: recommendations from an international expert panel, on behalf of the European LeukemiaNet
  • Revised recommendations of the International Working Group for Diagnosis, Standardization of Response Criteria, Treatment Outcomes, and Reporting Standards for Therapeutic Trials in Acute Myeloid Leukemia
  • Methodological challenges when using actigraphy in research
  • Draft STROBE checklist for conference abstracts
  • Conflict of Interest in Peer-Reviewed Medical Journals
  • Financial Conflicts of Interest Checklist 2010 for clinical research studies
  • Professional medical associations and their relationships with industry: a proposal for controlling conflict of interest
  • How to formulate research recommendations
  • Suggestions for improving the reporting of clinical research: the role of narrative
  • The case for structuring the discussion of scientific papers
  • More medical journals should inform their contributors about three key principles of graph construction
  • Figures in clinical trial reports: current practice & scope for improvement
  • Recommendations for the assessment and reporting of multivariable logistic regression in transplantation literature
  • Reporting results of latent growth modeling and multilevel modeling analyses: some recommendations for rehabilitation psychology
  • Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls
  • Assessing and reporting heterogeneity in treatment effects in clinical trials: a proposal
  • Statistics in medicine–reporting of subgroup analyses in clinical trials
  • Seven items were identified for inclusion when reporting a Bayesian analysis of a clinical study
  • Bayesian methods in health technology assessment: a review
  • Basic Statistical Reporting for Articles Published in Biomedical Journals: The “Statistical Analyses and Methods in the Published Literature” or The SAMPL Guidelines
  • Establishing a knowledge trail from molecular experiments to clinical trials
  • Preparing raw clinical data for publication: guidance for journal editors, authors, and peer reviewers
  • Best practices in the reporting of participatory action research: Embracing both the forest and the trees
  • A comprehensive checklist for reporting the use of OSCEs
  • Quality of standardised patient research reports in the medical education literature: review and recommendations
  • Development and use of reporting guidelines for assessing the quality of validation studies of health administrative data
  • Perspective: Guidelines for reporting team-based learning activities in the medical and health sciences education literature
  • Authors’ Submission Toolkit: a practical guide to getting your research published
  • Good publication practice for communicating company sponsored medical research: GPP3
  • Standardized reporting of clinical practice guidelines: a proposal from the Conference on Guideline Standardization
  • A new structure for quality improvement reports
  • Guidelines for conducting and reporting economic evaluation of fall prevention strategies
  • Design, execution, interpretation, and reporting of economic evaluation studies in obstetrics
  • Economic evaluation using decision analytical modelling: design, conduct, analysis, and reporting
  • Reporting format for economic evaluation. Part II : Focus on modelling studies
  • Increasing the generalizability of economic evaluations: recommendations for the design, analysis, and reporting of studies
  • Good research practices for cost-effectiveness analysis alongside clinical trials: the ISPOR RCT-CEA Task Force report
  • Recommendations for Conduct, Methodological Practices, and Reporting of Cost-effectiveness Analyses: Second Panel on Cost-Effectiveness in Health and Medicine
  • The quality of mixed methods studies in health services research
  • Qualitative research review guidelines – RATS
  • Evolving guidelines for publication of qualitative research studies in psychology and related fields
  • Revealing the wood and the trees: reporting qualitative research
  • Qualitative research: standards, challenges, and guidelines
  • RAMESES publication standards: meta-narrative reviews
  • RAMESES publication standards: realist syntheses
  • Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group
  • Meta-analysis of individual participant data: rationale, conduct, and reporting
  • PRISMA-Equity 2012 Extension: Reporting Guidelines for Systematic Reviews with a Focus on Health Equity
  • PRISMA 2020 for Abstracts: Reporting Systematic Reviews in Journal and Conference Abstracts
  • The STARD statement for reporting diagnostic accuracy studies: application to the history and physical examination
  • Capturing momentary, self-report data: a proposal for reporting guidelines
  • Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
  • Guidelines for field surveys of the quality of medicines: a proposal
  • A guide for the design and conduct of self-administered surveys of clinicians
  • Good practice in the conduct and reporting of survey research
  • Reporting genetic results in research studies: summary and recommendations of an NHLBI working group
  • Recommendations for biomarker identification and qualification in clinical proteomics
  • Missing covariate data within cancer prognostic studies: a review of current reporting and proposed guidelines
  • Gene expression-based prognostic signatures in lung cancer: ready for clinical use?
  • Anecdotes as evidence
  • Recommendations for reporting adverse drug reactions and adverse events of traditional Chinese medicine
  • Guidelines for submitting adverse event reports for publication
  • Guidelines for clinical case reports in behavioral clinical psychology
  • Instructions to authors for case reporting are limited: a review of a core journal list
  • Reporting participation in case-control studies
  • Conducting and reporting case series and audits–author guidelines for acupuncture in medicine
  • Appropriate use and reporting of uncontrolled case series in the medical literature
  • Improving the reporting of clinical case series
  • EULAR points to consider when establishing, analysing and reporting safety data of biologics registers in rheumatology
  • Preliminary core set of domains and reporting requirements for longitudinal observational studies in rheumatology
  • A community standard for immunogenomic data reporting and analysis: proposal for a STrengthening the REporting of Immunogenomic Studies statement
  • STrengthening the Reporting of OBservational studies in Epidemiology – Molecular Epidemiology (STROBE-ME): An extension of the STROBE statement
  • STrengthening the REporting of Genetic Association Studies (STREGA): An Extension of the STROBE Statement.
  • Guidelines for conducting and reporting mixed research in the field of counseling and beyond
  • Reporting experiments in homeopathic basic research (REHBaR) – a detailed guideline for authors
  • Guidelines for the design, conduct and reporting of human intervention studies to evaluate the health benefits of foods
  • Good research practices for comparative effectiveness research: Defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources
  • Guidelines for reporting non-randomised studies
  • Setting the bar in phase II trials: the use of historical data for determining “go/no go” decision for definitive phase III testing
  • GNOSIS: Guidelines for Neuro-Oncology: Standards for Investigational Studies – reporting of surgically based therapeutic clinical trials
  • The standard of reporting of health-related quality of life in clinical cancer trials
  • A systematic review of the reporting of Data Monitoring Committees’ roles, interim analysis and early termination in pediatric clinical trials
  • “Brimful of STARLITE”: toward standards for reporting literature searches
  • Systematic prioritization of the STARE-HI reporting items. An application to short conference papers on health informatics evaluation
  • CONSORT-EHEALTH: improving and standardizing evaluation reports of Web-based and mobile health interventions
  • Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension
  • Consolidated Health Economic Evaluation Reporting Standards 2022 (CHEERS 2022) Statement: Updated Reporting Guidance for Health Economic Evaluations
  • Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ
  • Inadequate planning and reporting of adjudication committees in clinical trials: recommendation proposal
  • Relevance of CONSORT reporting criteria for research on eHealth interventions
  • Reporting guidelines for music-based interventions
  • Reporting whole-body vibration intervention studies: recommendations of the International Society of Musculoskeletal and Neuronal Interactions
  • Reporting standards for studies of tailored interventions
  • Reporting data on homeopathic treatments (RedHot): A supplement to CONSORT
  • Evaluating the quality of reporting occupational therapy randomized controlled trials by expanding the CONSORT criteria
  • WIDER recommendations for reporting of behaviour change interventions
  • Evidence-based behavioral medicine: what is it and how do we achieve it?
  • CONSORT 2010: CONSORT-C (children)
  • The CONSORT statement checklist in allergen-specific immunotherapy: a GA²LEN paper
  • Improving the reporting of pragmatic trials: an extension of the CONSORT statement
  • CONSORT for reporting randomised trials in journal and conference abstracts
  • CONSORT Statement for Randomized Trials of Nonpharmacologic Treatments: A 2017 Update and a CONSORT Extension for Nonpharmacologic Trial Abstracts
  • Reporting randomized, controlled trials of herbal interventions: an elaborated CONSORT Statement
  • Consort 2010 statement: extension to cluster randomised trials
  • Reporting of noninferiority and equivalence randomized trials: extension of the CONSORT 2010 statement
  • Revised STandards for Reporting Interventions in Clinical Trials of Acupuncture (STRICTA): extending the CONSORT statement
  • STARE-HI – Statement on reporting of evaluation studies in Health Informatics
  • The ORION statement: guidelines for transparent reporting of Outbreak Reports and Intervention studies Of Nosocomial infection
  • Consensus recommendations for the uniform reporting of clinical trials: report of the International Myeloma Workshop Consensus Panel 1
  • Economic evaluation alongside randomised controlled trials: design, conduct, analysis, and reporting
  • Reporting and presenting information retrieval processes: the need for optimizing common practice in health technology assessment
  • Guidelines for reporting reliability and agreement studies (GRRAS) were proposed
  • SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process
  • Refining a checklist for reporting patient populations and service characteristics in hospice and palliative care research
  • Point and interval estimates of effect sizes for the case-controls design in neuropsychology: rationale, methods, implementations, and proposed reporting standards
  • Recommended guidelines for uniform reporting of pediatric advanced life support: the pediatric Utstein style
  • Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups
  • Guidelines for reporting results of quality of life assessments in clinical trials
  • Recommendations for reporting economic evaluations of haemophilia prophylaxis
  • Overview of methods used in cross-cultural comparisons of menopausal symptoms and their determinants: Guidelines for Strengthening the Reporting of Menopause and Aging (STROMA) studies
  • Strengthening the reporting of Genetic RIsk Prediction Studies: the GRIPS Statement
  • GNOSIS: guidelines for neuro-oncology: standards for investigational studies-reporting of phase 1 and phase 2 clinical trials
  • Standard guidelines for publication of deep brain stimulation studies in Parkinson’s disease (Guide4DBS-PD)


Reporting guidelines for survey research: an analysis of published guidance and reporting practices


  • 1 Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada. [email protected]
  • PMID: 21829330
  • PMCID: PMC3149080
  • DOI: 10.1371/journal.pmed.1001069

Background: Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.

Methods and findings: We conducted a three-part project: (1) a systematic review of the literature (including "Instructions to Authors" from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%).

Conclusions: There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Data Collection
  • Guidelines as Topic*
  • Health Surveys
  • Publishing / standards*
  • Research Design*
  • Surveys and Questionnaires

Grants and funding

  • MGC-42668/Canadian Institutes of Health Research/Canada

Best Practices for Survey Research

Below you will find recommendations on how to produce the best survey possible.

Included are suggestions on the design, data collection, and analysis of a quality survey. For more detailed information on how to assess the rigor of survey methodology, see the AAPOR Transparency Initiative .

To download a pdf of these best practices,  please click here

"The quality of a survey is best judged not by its size, scope, or prominence, but by how much attention is given to [preventing, measuring and] dealing with the many important problems that can arise."

“What is a Survey?”, American Statistical Association

1. Planning Your Survey

Is a survey the best method for answering your research question?

Surveys are an important research tool for learning about the feelings, thoughts, and behaviors of groups of individuals. However, surveys may not always be the best tool for answering your research questions. They may be appropriate when there is not already sufficiently timely or relevant existing data on the topic of study. Researchers should consider the following questions when deciding whether to conduct a survey:

  • What are the objectives of the research? Are they unambiguous and specific?
  • Have other surveys already collected the necessary data?
  • Are other research methods such as focus groups or content analyses more appropriate?
  • Is a survey alone enough to answer the research questions, or will you also need to use other types of data (e.g., administrative records)?

Surveys should not be used to produce predetermined results, or for campaigning, fundraising, or selling. Doing so is a violation of the AAPOR Code of Professional Ethics .

Should the survey be offered online, by mail, in person, on the phone, or in some combination of these modes?

Once you have decided to conduct a survey, you will need to decide in what mode(s) to offer it. The most common modes are online, on the phone, in person, or by mail.  The choice of mode will depend at least in part on the type of information in your survey frame and the quality of the contact information. Each mode has unique advantages and disadvantages, and the decision should balance the data quality needs of the research alongside practical considerations such as the budget and time requirements.

  • Compared with other modes, online surveys can be administered quickly and at lower cost. However, older respondents, those with lower incomes, and respondents living in rural areas are less likely to have reliable internet access or to be comfortable using computers. Online surveys may work well when the primary way you contact respondents is via email. They may also elicit more honest answers on sensitive topics because respondents do not have to disclose sensitive information directly to another person (an interviewer).
  • Telephone surveys are often more costly than online surveys because they require the use of interviewers. Well trained interviewers can help guide the respondent through questions that might be hard to understand and encourage them to keep going if they start to lose interest, reducing the number of people who do not complete the survey. Telephone surveys are often used when the sampling frame consists of telephone numbers. Quality standards can be easier to maintain in telephone surveys if interviewers are in one centralized location.
  • In-person, or face-to-face, surveys tend to cost the most and generally take more time than either online or telephone surveys.  With an in-person survey, the interviewer can build a rapport with the respondent and help with questions that might be hard to understand. This is particularly relevant for long or complex surveys. In-person surveys are often used when the sampling frame consists of addresses.
  • Mailed paper surveys can work well when the mailing addresses of the survey respondents are known. Respondents can complete the survey at their own convenience and do not need to have computer or internet access. Like online surveys, they can work well for surveys on sensitive topics. However, since mail surveys cannot be automated, they work best when the flow of the questionnaire is relatively straightforward. Surveys with complex skip patterns based on prior responses may be confusing to respondents and therefore better suited for other modes.

Some surveys use multiple modes, particularly if a subset of the people in the sample are more reachable via a different mode. Often, a less costly method is employed first or used concurrently with another method, for example offering a choice between online and telephone response, or mailing a paper survey with a telephone follow-up with those who have not yet responded.

2. Designing Your Sample

How to design your sample.

When you run a survey, the people who respond to your survey are called your sample because they are a sample of people from the larger population you are studying, such as adults who live in the U.S. A sampling frame is a list of information that will allow you to contact potential respondents – your sample – from a population. Ultimately, it’s the sampling frame that allows you to draw a sample from the larger population. For a mail-based survey, it’s a list of addresses in the geographic area in which your population is located; for an online panel survey, it’s the people in the panel; for a telephone survey, it’s a list of phone numbers. Thinking through how to design your sample to best match the population of study can help you run a more accurate survey that will require fewer adjustments afterwards to match the population.

One approach is to use multiple sampling frames; for example, in a phone survey, you can combine a sampling frame of people with cell phones and a sampling frame of people with landlines (or both), which is now considered a best practice for phone surveys.

Surveys can be either probability-based or nonprobability-based. For decades, probability samples, often used for telephone surveys, were the gold standard for public opinion polling. In these samples, a frame covers all or almost all of the population of interest, such as a list of all the phone numbers or all the residential addresses in the U.S., and individuals are selected using random methods to complete the survey. More recently, nonprobability samples and online surveys have gained popularity due to the rising cost of conducting probability-based surveys. A survey conducted online can use probability samples, such as those recruited using residential addresses, or nonprobability samples, such as “opt-in” online panels or participants recruited through social media or personal networks. Analyzing and reporting nonprobability-based survey results often requires special statistical techniques and great care to ensure transparency about the methodology.

3. Designing Your Questionnaire

What are some best practices for writing survey questions?

  • Questions should be specific and ask only about one concept at a time. For example, respondents may interpret a question about the role of “government” differently – some may think of the federal government, while others may think of state governments.
  • Write questions that are short and simple and use words and concepts that the target audience will understand. Keep in mind that knowledge,  literacy skills , and  English proficiency  vary widely among respondents.
  • Keep questions free of bias by avoiding language that pushes respondents to respond in a certain way or that presents only one side of an issue. Also be aware that respondents may tend toward a socially desirable answer or toward saying “yes” or “agree” in an effort to please the interviewer, even if unconsciously.
  • Arrange questions in an order that will be logical to respondents but not influence how they answer. Often, it’s better for general questions to come earlier than specific questions about the same concept in the survey. For example, asking respondents whether they favor or oppose certain policy positions of a political leader prior to asking a general question about the favorability of that leader may prime them to weigh those certain policy positions more heavily than they otherwise would in determining how to answer about favorability.
  • Choose whether a question should be closed-ended or open-ended. Closed-ended questions, which provide a list of response options to choose from, place less of a burden on respondents to come up with an answer and are easier to interpret, but they are more likely to influence how a respondent answers. Open-ended questions allow respondents to respond in their own words but require coding in order to be interpreted quantitatively.
  • Response options for closed-ended questions should be chosen with care. They should be mutually exclusive, include all reasonable options (including, in some cases, options such as “don’t know” or “does not apply” or neutral choices such as “neither agree nor disagree”), and be in a logical order. In some circumstances, response options should be rotated (for example, half the respondents see response options in one order while the other half see it in reverse order) due to an  observed tendency  of respondents to pick the first answer in self-administered surveys and the last answer in interviewer-administered surveys. Randomization allows researchers to check on whether there are order effects.
  • Consider what languages you will offer the survey in. Many U.S. residents speak limited or no English. Most nationally representative surveys in the U.S. offer questionnaires in both English and Spanish, with bilingual interviewers available in interviewer-administered modes.
  • See AAPOR’s  resources on question wording for more details
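The response-option rotation described in the bullets above is easiest to audit when assignment is deterministic per respondent. Below is a minimal sketch, assuming hypothetical respondent IDs and a made-up answer scale (not an AAPOR-prescribed procedure):

```python
import random

OPTIONS = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]

def order_for(respondent_id: int) -> list:
    """Show a random half of respondents the reversed option order.

    Seeding on the respondent ID makes the assignment reproducible,
    so the same respondent always sees the same order.
    """
    rng = random.Random(respondent_id)
    if rng.random() < 0.5:
        return list(OPTIONS)
    return list(reversed(OPTIONS))

# The assignment is stable across calls for the same respondent.
print(order_for(7) == order_for(7))  # True
```

Comparing answer distributions between the two halves then reveals whether an order effect is present.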

How can I measure change over time?

If you want to measure change, don’t change the measure.

To accurately determine whether an observed change between surveys taken at two points in time reflects a true shift in public attitudes or behaviors, it is critical to keep the question wording, framing, and methodology as similar as possible across the two surveys. Changes in question wording, and even in the context of the questions that precede it, can influence how respondents answer, making it appear that public opinion has changed when only respondents’ interpretation of the question has (or potentially masking an actual shift in opinion).

Changes in mode, such as comparing a survey conducted over the telephone with one conducted online, can also mimic a real change, because many people respond to certain questions differently when speaking to an interviewer on the phone than when responding in private to a web survey. Questions that are very personal, or that have a response option respondents see as socially undesirable or embarrassing, are particularly sensitive to this mode effect.

If changing the measure is necessary — perhaps due to flawed question wording or a desire to switch modes for logistical reasons — the researcher can employ a split-ballot experiment to test whether respondents will be sensitive to the change. This would involve fielding two versions of a survey — one with the previous mode or question wording and one with the new mode or question wording — with all other factors kept as similar as possible across the two versions. If respondents answer both versions similarly, there is evidence that any change over time is likely due to a real shift in attitudes or behaviors rather than an artifact of the change in measurement. If response patterns differ according to which version respondents see, then change over time should be interpreted cautiously if the researcher moves ahead with the change in measurement.
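Whether the two ballot versions produce different response patterns can be checked with a chi-square test of independence on the response counts. The sketch below uses illustrative counts and compares the statistic against the 5% critical value for 2 degrees of freedom rather than computing an exact p-value; it is not a substitute for a full analysis:

```python
# Rows: ballot versions A and B; columns: response options (hypothetical counts).
observed = [
    [120, 80, 50],  # version A (old wording)
    [135, 70, 45],  # version B (new wording)
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(len(observed))
    for j in range(len(col_totals))
)

CRITICAL_5PCT_DOF2 = 5.991  # (rows - 1) * (cols - 1) = 2 degrees of freedom
print(f"chi2 = {chi2:.2f}")  # chi2 = 1.81
print("versions differ" if chi2 > CRITICAL_5PCT_DOF2 else "no detectable difference")
```

Here the statistic falls below the critical value, so any change over time measured with the new version would not obviously be an artifact of the wording change.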

How can I ensure the safety, confidentiality, and comfort of respondents?

  • Follow your institution’s guidance and policies on the protection of personal identifiable information and determine whether any data privacy laws apply to the study. If releasing individual responses in a public dataset, keep in mind that demographic information and survey responses may make it possible to identify respondents even if personal identifiable information like names and addresses are removed.
  • Consult an  Institutional Review Board  for recommendations on how to mitigate the risk, even if not required by your institution.
  • Disclose the sensitive topic at the beginning of the survey, or just before the questions appear in the survey, and  inform respondents  that they can skip the questions if they are not comfortable answering them (and be sure to program an online survey to allow skipping, or instruct interviewers to allow refusals without probing).
  • Provide links or hotlines to resources that can help respondents who were affected by the sensitive questions (for example, a hotline that provides help for those suffering from eating disorders if the survey asks about disordered eating behaviors).
  • Build rapport with a respondent by beginning with easy and not-too-personal questions and keeping sensitive topics for later in the survey.
  • Keep respondent burden low by keeping questionnaires and individual questions short and limiting the number of difficult, sensitive, or open-ended questions.
  • Allow respondents to skip a question or provide an explicit “don’t know” or “don’t want to answer” response, especially for difficult or sensitive questions. Requiring an answer increases the risk of respondents choosing to leave the survey early.

4. Fielding Your Survey

If I am using interviewers, how should they be trained?

Interviewers need to undergo training that covers both recruiting respondents into the survey and administering the survey. Recruitment training should cover topics such as contacting sampled respondents and convincing reluctant respondents to participate. Interviewers should be comfortable navigating the hardware and software used to conduct the survey and pronouncing difficult names or terms. They should have familiarity with the concepts the survey questions are asking about and know how to help respondents without influencing their answers. Training should also involve practice interviews to familiarize the interviewers with the variety of situations they are likely to encounter. If the survey is being administered in languages other than English, interviewers should demonstrate language proficiency and cultural awareness. Training should address how to conduct non-English interviews appropriately.

Interviewers should be trained in protocols on how best to protect the health and well-being of themselves and respondents, as needed. As an example, during the COVID-19 pandemic, training in the proper use of personal protective equipment and social distancing would be appropriate for field staff.

What kinds of testing should I do before fielding a survey?

Before fielding a survey, it is important to pretest the questionnaire. This typically consists of conducting cognitive interviews or using another qualitative research method to understand respondents’ thought processes, including their interpretation of the questions and how they came up with their answers. Pretesting should be conducted with respondents who are similar to those who will be in the survey (e.g., students if the survey sample is college students).

Conducting a pilot test to ensure that all survey procedures (e.g., recruiting respondents, administering the survey, cleaning data) work as intended is recommended. If it is unclear what question-wording or survey design choice is best, implementing an experiment during data collection can help systematically compare the effects of two or more alternatives.

What kinds of monitoring or quality checks should I do on my survey?

Checks must be made at every step of the survey life cycle to ensure that the sample is selected properly, the questionnaire is programmed accurately, interviewers do their work properly, information from questionnaires is edited and coded accurately, and proper analyses are used. The data should be monitored while it is being collected by using techniques such as observation of interviewers, replication of some interviews (re-interviews), and monitoring of response and paradata distributions. Odd patterns of responses may reflect a programming error or interviewer training issue that needs to be addressed immediately.

How do I get as many people to respond to the survey as possible?

It is important to monitor responses and attempt to maximize the number of people who respond to your survey. If very few people respond to your survey, there is a risk that you may be missing some types of respondents entirely, and your survey estimates may be biased. There are a variety of ways to incentivize respondents to participate in your survey, including offering monetary or non-monetary incentives, contacting them multiple times in different ways and at different times of the day, and/or using different persuasive messages. Interviewers can also help convince reluctant respondents to participate. Ideally,  reasonable efforts  should be made to convince both respondents who have not acknowledged the survey requests as well as those who refused to participate.

5. Analyzing and Reporting the Survey Results

What are the common methods of analyzing survey data?

Analyzing survey data is, in many ways, similar to data analysis in other fields. However, there are a few details unique to survey data analysis to take note of. It is important to be as transparent as possible, including about any statistical techniques used to adjust the data.

Depending on your survey mode, you may have respondents who answer only part of your survey and then leave before finishing it. These are called partial responses, drop-offs, or break-offs. Flag these in your data with a distinct value indicating that there was no response. Questions with no response should carry a different value than answer options such as “none of the above,” “I don’t know,” or “I prefer not to answer.” The same applies if your survey allows respondents to skip questions but continue in the survey.
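One way to keep non-response distinct from substantive answers is to assign separate numeric codes. The codebook below is purely illustrative (the specific code values are an assumption, not a standard):

```python
# Hypothetical codebook: substantive answers, explicit non-answers,
# and true non-response (skip or break-off) all get distinct codes.
CODES = {
    "Yes": 1,
    "No": 2,
    "Don't know": 8,
    "Prefer not to answer": 9,
    None: -99,  # question never reached or left blank
}

raw = ["Yes", None, "Don't know", "No", None]
coded = [CODES[r] for r in raw]
print(coded)  # [1, -99, 8, 2, -99]
```

Keeping -99 separate from 8 and 9 means later analyses can distinguish a respondent who chose “don’t know” from one who never saw or skipped the question.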

A common way of reporting survey data is to show cross-tabulated results, or crosstabs for short. A crosstab is a table with one question’s answers as the column headers and another question’s answers as the row labels. The values in the crosstab can be either counts (the number of respondents who chose those specific answers to the two questions) or percentages. Typically, when showing percentages, the columns total to 100%.
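A crosstab with column percentages can be built from paired responses with the standard library alone; the age groups and answers here are made up for illustration:

```python
from collections import Counter

# Hypothetical paired answers: (age group, position on a policy question).
responses = [
    ("18-34", "Favor"), ("18-34", "Oppose"), ("18-34", "Favor"),
    ("35-64", "Favor"), ("35-64", "Oppose"), ("35-64", "Oppose"),
    ("65+", "Oppose"), ("65+", "Favor"), ("65+", "Oppose"),
]

counts = Counter(responses)
rows = sorted({r for r, _ in responses})
cols = sorted({c for _, c in responses})

# Column percentages: each column sums to 100%.
col_totals = {c: sum(counts[(r, c)] for r in rows) for c in cols}
print("      ", *(c.rjust(7) for c in cols))
for r in rows:
    print(r.ljust(6), *(f"{100 * counts[(r, c)] / col_totals[c]:6.1f}%" for c in cols))
```

Each cell shows what share of respondents giving that column's answer fall into that row's group.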

Analyzing survey data allows us to estimate findings about the population under study by using a sample of people from that population. An industry standard is to calculate and report on the margin of sampling error, often shortened to the margin of error. The margin of error is a measurement of confidence in how close the survey results are to the true value in the population. To learn more about the margin of error and the credibility interval, a similar measurement used for nonprobability surveys, please see AAPOR’s  Margin of Error resources.
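For a simple random probability sample, the commonly reported 95% margin of error can be approximated with the normal approximation for a proportion. A minimal sketch, assuming the most conservative proportion of 0.5:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion under
    simple random sampling: z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 gives roughly +/- 3.1 percentage points.
print(f"{margin_of_error(1000):.1%}")  # 3.1%
```

Note that design effects from weighting or clustering typically widen this figure, and nonprobability samples require a credibility interval instead, as described above.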

What is weighting and why is it important?

Ideally, the composition of your sample would match the population under study on all the characteristics relevant to the topic of your survey: age, sex, race/ethnicity, location, educational attainment, political party identification, and so on. However, this is rarely the case in practice, which can skew your results. Weighting is a statistical technique that adjusts the relative contributions of your respondents so the sample matches the population characteristics more closely. Learn more about weighting .
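A minimal post-stratification sketch of that adjustment, using made-up age-group shares: each respondent's weight is the population share of their group divided by its sample share.

```python
# Hypothetical target (population) and achieved (sample) compositions.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample = ["18-34"] * 20 + ["35-64"] * 60 + ["65+"] * 20  # 100 respondents

n = len(sample)
sample_share = {g: sample.count(g) / n for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

for g in population_share:
    print(f"{g}: sample {sample_share[g]:.0%} -> weight {weights[g]:.2f}")

# After weighting, the sample's group totals match the population shares.
weighted_total = sum(weights[g] for g in sample)
print(f"{weighted_total:.1f}")  # 100.0
```

Under-represented groups (here, 18-34) get weights above 1 and over-represented groups get weights below 1; production surveys typically use more elaborate methods such as raking across several variables at once.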

What are the common industry standards for transparency in reporting data?

Because there are so many different ways to run surveys, it’s important to be transparent about how a survey was run and analyzed so that people know how to interpret and draw conclusions from it. AAPOR’s Transparency Initiative has established a list of items to report with your survey results that uphold the industry transparency standards. These items include sample size, margin of sampling error, weighting attributes, the full text of the questions and answer options, the survey mode, the population under study, the way the sample was constructed, recruitment, and several other details of how the survey was run. The list of items to report can vary based on the mode of your survey — online, phone, face-to-face, etc. Organizations that want to commit to upholding these standards can also become members of the Transparency Initiative .


2019 Presidential Address from the 74th Annual Conference

David Dutwin, May 2019

“Many of you know me primarily as a methodologist.  But in fact, my path to AAPOR had nothing to do with methodology.  My early papers, in fact, either provided criticism of, or underscored the critical value of, public opinion and public opinion polls.

And so in some respects, this Presidential Address, for me, completes a full circle of thought and passion I have for AAPOR, for today I would like to discuss matters pertaining to the need to reconsider, strengthen, and advance the mission of survey research in democracy.

Historically, there has been much to say on the role of public opinion in democracy.   George Gallup summarized the role of polls quite succinctly when he said,  “Without polls, [elites] would be guided only by letters to congressmen, the lobbying of pressure groups, and the reports of political henchmen.”

Further, democratic theory notes the critical, if not pivotal, role of public opinion in democratic practice.  Storied political scientist V.O. Key said: “The poll furnishes a means for the deflation of the extreme claims of pressure groups and for the testing of their extravagant claims of public sentiment in support of their demands.”

Furthermore, surveys provide a critical check and balance to other claims of what the American public demands in terms of policies and their government.   Without polls, it would be all that much harder to verify and combat claims of public sentiment made by politicians, elites, lobbyists, and interest groups.  [“No policy that does not rest upon some public opinion can be permanently maintained.”- Abe Lincoln; “Public opinion is a thermometer a monarch should constantly consult” – Napoleon]

It is sometimes asked whether leaders do consult polls and whether polls have any impact on policy.  The relationship here is complex, but time and again researchers have found a meaningful and significant effect of public opinion, typically as measured by polling, on public policy.  As one example, Page and Shapiro explored trends in American public opinion from the 1930s to the 1980s and found no fewer than 231 different changes in public policy following shifts in public opinion.

And certainly, in modern times around the world, there is recognition that the loss of public opinion would be, indeed, the loss of democracy itself. [“Where there is no public opinion, there is likely to be bad government, which sooner or later, becomes autocratic government.” – William Lyon Mackenzie King]

And yet, not all agree.  Some twist polling to be a tool that works against democratic principles.  [“The polls are just being used as another tool for voter suppression.” – Rush Limbaugh]

And certainly, public opinion itself is imperfect, filled with non-attitudes, the will of the crowd, and can often lead to tyranny of the majority, as Jon Stewart nicely pointed out. [“You have to remember one thing about the will of the people: It wasn’t that long ago that we were swept away by the Macarena.” – Jon Stewart]

If these latter quotes were the extent of criticism on the role of public opinion and survey research in liberal democracy, I would not be up here today discussing what soon follows in this address.  Unfortunately, however, we live in a world in which many of the institutions of democracy and society are under attack.

It is important to start by recognizing that AAPOR is a scientific organization.  Whether you are a quantitative or qualitative researcher, a political pollster or developer of official statistics, a sociologist or a political scientist, someone who works for a commercial entity or nonprofit, we are all survey scientists, and we come together as a great community of scientists within AAPOR, no matter our differences.

And so we, AAPOR, should be as concerned as any other scientific community regarding the current environment where science is under attack, devalued, and delegitimized.  It is estimated that since the 2016 election, federal policy has moved to censor, misrepresent, or curtail and suppress scientific data and discoveries over 200 times, according to the Sabin Center at Columbia University.  Not only is this a concern to AAPOR as a community of scientists, but we should be concerned as well about the impact of these attacks on public opinion itself.

Just as concerning is the attack on democratic information in general.  Farrell and Schneier argue that there are two key types of knowledge in democracy, common and contested.  And while we should be free to argue and disagree with policy choices, our pick of democratic leaders, and even many of the rules and mores that guide us as a society (what is called contested knowledge), what cannot be up for debate is the common knowledge of democracy: for example, the legitimacy of the electoral process itself, the validity of data attained by the Census, or even more so, I would argue, whether public opinion tells us what the public thinks.

As the many quotes I provided earlier attest, democracy is dependent upon a reliable and nonideological measure of the will of the people.  For more than half a century, survey research has been the principal and predominant vehicle by which such knowledge is generated.

And yet, we are on that doorstep where common knowledge is becoming contested.  We are entering, I fear, a new phase of poll delegitimization.  I am not here to advocate any political ideology and it is critical for pollsters to remain within the confines of science.  Yet there has been a sea change in how polls are discussed by the current administration.  To constantly call out polls for being fake is to delegitimize public opinion itself and is a threat to our profession.

Worse still, many call out polls as mere propaganda (see Joondeph, 2018).  Such statements are even more directly an attack on our science, our field, and frankly, the entire AAPOR community.  And yet even worse is for anyone to actually rig poll results.  Perhaps nothing may undermine the science and legitimacy of polling more.

More pernicious still, we are on the precipice of an age where faking anything is possible.  The technology now exists to fake actual videos of politicians, or anyone for that matter, and to create realistic false statements.  The faking of poll results is merely in lockstep with these developments.

There are, perhaps, many of you in this room who don’t directly connect with this.  You do not do political polling.  You do government statistics.  Sociology.  Research on health, on education, or consumer research.  But we must all realize that polling is the tip of the spear.  It is what the ordinary citizen sees of our trade and our science.  As Andy Kohut once noted, it represents all of survey research. [Political polling is the “most visible expression of the validity of the survey research method.“ – Andrew Kohut]

With attacks on science at an all-time high in the modern age, including attacks on the science of surveys; with denigration of common knowledge, the glue that holds democracy together, including denunciation on the reliability of official statistics; with slander on polling that goes beyond deliberation on the validity of good methods but rather attacks good methods as junk, as propaganda, and as fake news; and worse of all, a future that, by all indications, will if anything include the increased frequency of fake polls, and fake data, well, what are we, AAPOR, to do?

We must respond.  We must react.  And, we must speak out.  What does this mean, exactly?  First, AAPOR must be able to respond.  Specifically, AAPOR must have vehicles and avenues of communication and the tools by which it can communicate.  Second, AAPOR must know how to respond.  That is to say, AAPOR must have effective and timely means of responding.  We are in an every minute of the day news cycle.  AAPOR must adapt to this environment and maximize its impact by speaking effectively within this communication environment.  And third, AAPOR must, quite simply, have the willpower to respond.  AAPOR is a fabulous member organization, providing great service to its members in terms of education, a code of ethics, guidelines for best practices and promotions of transparency and diversity in the field of survey research.  But we have to do more.  We have to learn to professionalize our communication and advocate for our members and our field.  There is no such thing as sidelines anymore.  We must do our part to defend survey science, polling, and the very role of public opinion in a functioning democracy.

This might seem to many of you like a fresh idea, and bold new step for AAPOR.  But in fact, there has been a common and consistent call for improved communication abilities, communicative outreach, and advocacy by many past Presidents, from Diane Colasanto to Nancy Belden to Andy Kohut.

Past President Frank Newport for example was and is a strong supporter of the role of public opinion in democracy, underscoring in his Presidential address that quote, “the collective views of the people…are absolutely vital to the decision-making that ultimately affects them.” He argued in his Presidential address that AAPOR must protect the role of public opinion in society.

A number of Past Presidents have rightly noted that AAPOR must recognize the central role of journalists in this regard, who have the power to frame polling as a positive or negative influence on society.  President Nancy Mathiowetz rightly pointed out that AAPOR must play a role in, and even financially support, endeavors to win journalists’ support for AAPOR’s position on the role of polling in society and to improve journalists’ treatment of polls.  And Nancy’s vision, in fact, launched a relationship with Poynter in building a number of resources for journalist education on polling.

Past President Scott Keeter also noted the need for AAPOR to do everything it can to promote public opinion research.  He said that “we all do everything we can to defend high-quality survey research, its producers, and those who distribute it.”  But at the same time, Scott noted clearly that, unfortunately, “At AAPOR we are fighting a mostly defensive war.”

And finally, Past President Cliff Zukin got straight to the point in his Presidential address, noting that, quote “AAPOR needs to increase its organizational capacity to respond and communicate, both internally and externally. We need to communicate our positions and values to the outside world, and we need to diffuse ideas more quickly within our profession.”

AAPOR is a wonderful organization, and in my biased opinion, the best professional organization I know.  How have we responded to the call of past Presidents?  I would say, we responded with vigor, with energy, and with passion.  But we are but a volunteer organization of social scientists.  And so, we make task forces.  We write reports.  These reports are well researched, well written, and at the same time, I would argue, do not work effectively to create impact in the modern communication environment.

We have taken one small step to ameliorate this, with the report on polling in the 2016 election, which was publicly released via a quite successful live Facebook video event.  But we can still do better.  We need to be more timely for one, as that event occurred 177 days after the election, when far fewer people were listening, and the narrative was largely already written.  And we need to find ways to make such events have greater reach and impact.  And of course, we need more than just one event every four years.

I have been proud to have been a part of, and even be the chair of, a number of excellent task force reports.  But we cannot, I submit, continue to respond only with task force reports.  AAPOR is comprised of the greatest survey researchers in the world.  But it is not comprised of professional communication strategists, plain and simple.  We need help, and we need professional help.

In the growth of many organizations, there comes a time when the next step must be taken.  The ASA many years ago, for example, hired a full-time strategic communications firm.  Other organizations, including the NCA, APSA, and others, chose instead to hire their own full-time professional communication strategist.

AAPOR has desired to better advocate for itself for decades.  We recognize that we have to get into the fight, that there is again no such thing as sidelines.  And we have put forward a commendable effort in this regard, building educational resources for journalists, and writing excellent reports on elections, best practices, sugging and frugging, data falsification, and other issues.  But we need to do more, and in the context of the world outside of us, we need to speak a language that resonates with journalists, political elites, and perhaps most importantly the public.

I want to stop right here and make it clear, that the return on investment on such efforts is not going to be quick.  And the goal here is not to improve response rates, though I would like that very much!  No, it is not likely that any efforts in any near term reverses trends in nonresponse.

It may very well be that our efforts only slow or at best stop the decline. But that would be an important development.  The Washington Post says that democracy dies in darkness.  If I may, I would argue that AAPOR must say, democracy dies in silence, when the vehicle for public opinion, surveys, has been twisted to be distrusted by the very people who need it most, ordinary citizens.  For the most part, AAPOR has been silent.  We can be silent no more.

This year, Executive Council has deliberated the issues outlined in this address, and we have chosen to act.  The road will be long, and at this time, I cannot tell you where it will lead.  But I can tell you our intentions and aspirations.  We have begun to execute a five-point plan that I present here to you.

First, AAPOR Executive Council developed and released a request for proposals for professional strategic communication services.  Five highly regarded firms responded.  After careful deliberation and in person meetings with the best of these firms, we have chosen Stanton Communications to help AAPOR become a more professionalized association.  Our goals in the short term are as follows.

We desire to become more nimble and effective at responding to attacks on polls, with key AAPOR members serving as spokespersons when needed, but only after professional development of the messages they will promulgate, approved by Council, and professionalized by the firm.  Stanton brings with it a considerable distribution network of journalists and media outlets.  AAPOR, through its professional management firm Kellen, has access to audio and video services of the National Press Club, and will utilize these services when needed to respond to attacks on polls, and for other communications deemed important by AAPOR Executive Council.

Our plan is to begin small.  We are cognizant of the cost that professional communication can entail, and for now, we have set very modest goals. The first step is to be prepared, and have a plan for, the 2020 election, with fast response options of communication during the campaign, and perhaps most importantly, directly thereafter.

The second element of our plan is to re-envision AAPOR’s role in journalism education.  In short, we believe we need to own this space, not farm it out to any other entity.  We need refreshed educational videos, and many more of them, from explaining response rates to the common criticisms made on the use of horserace polling in the media.

We need to travel.  Willing AAPOR members should be funded to travel and present at journalism conferences, to news rooms, and to journalism schools on an annual basis. AAPOR could as well have other live events, for example a forum on the use of polls in journalism.  There should be a consistent applied effort over time.  The media and journalists are AAPOR’s greatest spokespeople.  By and large, much of our image is shaped through them.

The third element looks at the long game.  And that is, for AAPOR to help in developing civics education on public opinion and the role of public opinion in democracy.  With the help of educational experts, and importantly, tipping our hats to our AAPOR’s Got Talent winner last year, Allyson Holbrook, who proposed exactly this kind of strategy, we believe AAPOR can help develop a curriculum and educational materials and engage with educators to push for the inclusion of this curriculum in primary education.  AAPOR can and should develop specific instructional objectives of civics education by grade and develop a communications plan to lobby for the inclusion of this civics curriculum by educators.

The fourth element is for AAPOR to direct the Transparency Initiative to develop a strategic plan for the next ten years.  We recognize that it is not always the case that polls are executed with best practices.  How does AAPOR respond in these instances?  With a plethora of new sampling approaches and modalities, we believe the TI needs to have a full-throated conversation about these challenges and how AAPOR should handle them.  After all, this too is part of the conversation of AAPOR communication.

Finally, AAPOR should, as past President Tim Johnson called for last year, learn as much as it can about the perceptions of polls in society.  We cannot make effective strategic communication plans without first knowing how they will resonate and knowing to some degree their expected effectiveness.  Such an effort should continue over time, building both a breadth and depth of understanding.

If this sounds a bit like a wish list, well, you would be right.  For now, the immediate goal for AAPOR and its communication firm is to prepare for 2020 and to take some modest steps toward professionalizing AAPOR’s ability to effectively and quickly communicate and advocate.    Looking toward the future, AAPOR Council has authorized the development of the Ad Hoc Committee on Public Opinion.  This committee will be comprised of AAPOR members dedicated to pushing forward this agenda.

We recognize the potential cost of these endeavors in terms of money and labor, and so in each area, there will be mission leaders on the committee whose charge is to push forward two goals.  The first is funding.  We cannot and should not fund these endeavors alone.  We will be seeking foundational funding for each of these areas, and are developing a proposal for each specifically.  Perhaps only one area attains funding, perhaps all of them.  No matter, the committee will adjust its goals contingent on the means it has available.

A number of members have already asked to be part of these efforts.  But I call on all of you, the AAPOR membership, to reach out and join the effort as well.  We need people experienced in seeking funding, and people passionate about moving the needle with regard to polling journalism, civics education, and the role of public opinion in democracy.  AAPOR’s secret sauce has always been the passion of its members, and we call on you to help.  Please go to the link below to tell us you want to join the effort.

Friends and colleagues, one of the many excellent AAPOR task forces, the task force on polling and democracy and leadership, has in fact already explored this issue.  They argued that “AAPOR should adopt an increased public presence arguing for the importance of public opinion in a democracy and the importance of rigorous, unbiased, scientific research assessing public opinion.”

It is time we strive to realize these aspirations.  For the good of our association, our field, and our very democracy.  If past efforts by AAPOR volunteers are any indication, we anticipate great success and health in the future of our field and our endeavors.

It has been an honor and a privilege serving as your President. Thank you.”

Reporting Guidelines

Reporting guidelines are statements intended to advise authors reporting research methods and findings. They can be presented as a checklist, flow diagram or text, and describe what is required to give a clear and transparent account of a study's research and results. These guidelines are prepared through consideration of specific issues that may introduce bias, and are supported by the latest evidence in the field. The Reporting Guidelines Collection highlights articles published across PLOS journals and includes guidelines and guidance, commentary, and related research on guidelines. This collection features some of the many resources available to facilitate the rigorous reporting of scientific studies, and to improve the presentation and evaluation of published studies.

  • Maximizing the Impact of Research: New Reporting Guidelines Collection from PLOS. Speaking of Medicine, September 3, 2013. Amy Ross, Laureen Connell.
  • The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) Statement. PLOS Medicine, October 6, 2015. Eric I. Benchimol et al., RECORD Working Committee.
  • Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): Explanation and Elaboration. PLOS Medicine, October 16, 2007. Jan P. Vandenbroucke et al., for the STROBE Initiative.
  • The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: Guidelines for Reporting Observational Studies. PLOS Medicine, October 16, 2007. Erik von Elm et al., for the STROBE Initiative.
  • Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices. PLOS Medicine, August 2, 2011. Carol Bennett et al.
  • Impact of STROBE Statement Publication on Quality of Observational Study Reporting: Interrupted Time Series versus Before-After Analysis. PLOS ONE, August 26, 2013. Sylvie Bastuji-Garin et al.
  • The Reporting of Observational Clinical Functional Magnetic Resonance Imaging Studies: A Systematic Review. PLOS ONE, April 22, 2014. Qing Guo et al.
  • A Review of Published Analyses of Case-Cohort Studies and Recommendations for Future Reporting. PLOS ONE, June 27, 2014. Stephen Sharp et al.
  • Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research. PLOS ONE, July 29, 2014. Marjan Bakker, Jelte Wicherts.
  • STrengthening the REporting of Genetic Association Studies (STREGA): An Extension of the STROBE Statement. PLOS Medicine, February 3, 2009. Julian Little et al.
  • STrengthening the Reporting of OBservational studies in Epidemiology – Molecular Epidemiology (STROBE-ME): An Extension of the STROBE Statement. PLOS Medicine, October 25, 2011. Valentina Gallo et al.
  • Observational Studies: Getting Clear about Transparency. PLOS Medicine, August 26, 2014. The PLOS Medicine Editors.
  • CONSORT for Reporting Randomized Controlled Trials in Journal and Conference Abstracts: Explanation and Elaboration. PLOS Medicine, January 22, 2008. Sally Hopewell et al., and the CONSORT Group.
  • Endorsement of the CONSORT Statement by High-Impact Medical Journals in China: A Survey of Instructions for Authors and Published Papers. PLOS ONE, February 13, 2012. Xiao-qian Li et al.
  • Assessing the Quality of Reports about Randomized Controlled Trials of Acupuncture Treatment on Diabetic Peripheral Neuropathy. PLOS ONE, July 2, 2012. Chen Bo et al.
  • Reporting Quality of Social and Psychological Intervention Trials: A Systematic Review of Reporting Guidelines and Trial Publications. PLOS ONE, May 29, 2013. Sean Grant et al.
  • Are Reports of Randomized Controlled Trials Improving over Time? A Systematic Review of 284 Articles Published in High-Impact General and Specialized Medical Journals. PLOS ONE, December 31, 2013. Matthew To et al.
  • Assessment of the Reporting Quality of Randomized Controlled Trials on Treatment of Coronary Heart Disease with Traditional Chinese Medicine from the Chinese Journal of Integrated Traditional and Western Medicine: A Systematic Review. PLOS ONE, January 28, 2014. Fan Fang et al.
  • Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials. PLOS Medicine, June 24, 2014. Kerry Dwan et al.
  • Systematic Evaluation of the Patient-Reported Outcome (PRO) Content of Clinical Trial Protocols. PLOS ONE, October 15, 2014. Derek Kyte et al.
  • Patient-Reported Outcome (PRO) Assessment in Clinical Trials: A Systematic Review of Guidance for Trial Protocol Writers. PLOS ONE, October 15, 2014. Melanie Calvert et al.
  • Revised STandards for Reporting Interventions in Clinical Trials of Acupuncture (STRICTA): Extending the CONSORT Statement. PLOS Medicine, June 8, 2010. Hugh MacPherson et al., on behalf of the STRICTA Revision Group.
  • Comparative Effectiveness Research: Challenges for Medical Journals. PLOS Medicine, April 27, 2010. Harold C. Sox et al.
  • Reporting of Systematic Reviews: The Challenge of Genetic Association Studies. PLOS Medicine, June 26, 2007. Muin J. Khoury et al.
  • Epidemiology and Reporting Characteristics of Systematic Reviews. PLOS Medicine, March 27, 2007. David Moher et al.
  • From QUOROM to PRISMA: A Survey of High-Impact Medical Journals’ Instructions to Authors and a Review of Systematic Reviews in Anesthesia Literature. PLOS ONE, November 16, 2011. Kun-ming Tao et al.
  • Testing the PRISMA-Equity 2012 Reporting Guideline: The Perspectives of Systematic Review Authors. PLOS ONE, October 10, 2013. Belinda Burford et al.
  • The Quality of Reporting Methods and Results in Network Meta-Analyses: An Overview of Reviews and Suggestions for Improvement. PLOS ONE, March 26, 2014. Brian Hutton et al.
  • Blinded by PRISMA: Are Systematic Reviewers Focusing on PRISMA and Ignoring Other Guidelines? PLOS ONE, May 1, 2014. Padhraig Fleming, Despina Koletsi, Nikolaos Pandis.
  • Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLOS Medicine, July 21, 2009. David Moher et al., The PRISMA Group.
  • PRISMA-Equity 2012 Extension: Reporting Guidelines for Systematic Reviews with a Focus on Health Equity. PLOS Medicine, October 30, 2012. Vivian Welch et al.
  • PRISMA for Abstracts: Reporting Systematic Reviews in Journal and Conference Abstracts. PLOS Medicine, April 9, 2013. Elaine Beller et al.
  • Many Reviews Are Systematic but Some Are More Transparent and Completely Reported than Others. PLOS Medicine, March 27, 2007. The PLOS Medicine Editors.
  • The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration. PLOS Medicine, July 21, 2009. Alessandro Liberati et al.
  • Quality and Reporting of Diagnostic Accuracy Studies in TB, HIV and Malaria: Evaluation Using QUADAS and STARD Standards. PLOS ONE, November 13, 2009. Patricia Scolari Fontela et al.
  • Use of Expert Panels to Define the Reference Standard in Diagnostic Research: A Systematic Review of Published Methods and Reporting. PLOS Medicine, October 15, 2013. Loes Bertens et al.
  • The Assessment of the Quality of Reporting of Systematic Reviews/Meta-Analyses in Diagnostic Tests Published by Authors in China. PLOS ONE, January 21, 2014. Long Ge et al.
  • Strengthening the Reporting of Genetic Risk Prediction Studies: The GRIPS Statement. PLOS Medicine, March 15, 2011. A. Cecile J. W. Janssens et al., for the GRIPS Group.
  • Reporting Recommendations for Tumor Marker Prognostic Studies (REMARK): Explanation and Elaboration. PLOS Medicine, May 29, 2012. Doug Altman, Lisa McShane, Willi Sauerbrei, Sheila Taube.
  • Prognosis Research Strategy (PROGRESS) 3: Prognostic Model Research. PLOS Medicine, February 5, 2013. Ewout Steyerberg et al.
  • Prognosis Research Strategy (PROGRESS) 2: Prognostic Factor Research. PLOS Medicine, February 5, 2013. Richard Riley et al.
  • Image credit 10.1371/journal.pmed.1001671 PLOS Medicine Improving the Transparency of Prognosis Research: The Role of Reporting, Data Sharing, Registration, and Protocols July 8, 2014 George Peat, Richard Riley, Peter Croft, Katherine Morley, Panayiotis Kyzas, Karel Moons, Pablo Perel, Ewout Steyerberg, Sara Schroter, Douglas Altman, Harry Hemingway
  • Image credit 10.1371/journal.pone.0007824 PLOS ONE Survey of the Quality of Experimental Design, Statistical Analysis and Reporting of Research Using Animals November 30, 2009 Carol Kilkenny, Nick Parsons, Ed Kadyszewski, Michael F. W. Festing, Innes C. Cuthill, Derek Fry, Jane Hutton, Douglas G. Altman
  • Image credit 10.1371/journal.pmed.1001489 PLOS Medicine Threats to Validity in the Design and Conduct of Preclinical Efficacy Studies: A Systematic Review of Guidelines for In Vivo Animal Experiments July 23, 2013 Valerie Henderson, Jonathan Kimmelman, Dean Fergusson, Jeremy Grimshaw, Dan Hackam
  • Image credit 10.1371/journal.pone.0088266 PLOS ONE Five Years MIQE Guidelines: The Case of the Arabian Countries February 4, 2014 Afif Abdel Nour, Esam Azhar, Ghazi Damanhouri, Stephen Bustin
  • Image credit 10.1371/journal.pone.0101131 PLOS ONE The Quality of Methods Reporting in Parasitology Experiments July 30, 2014 Oscar Flórez-Vargas, Michael Bramhall, Harry Noyes, Sheena Cruickshank, Robert Stevens, Andy Brass
  • Image credit 10.1371/journal.pbio.1000412 PLOS Biology Improving Bioscience Research Reporting: The ARRIVE Guidelines for Reporting Animal Research June 29, 2010 Carol Kilkenny, William J. Browne, Innes C. Cuthill, Michael Emerson, Douglas G. Altman
  • Image credit 10.1371/journal.pbio.1001481 PLOS Biology Whole Animal Experiments Should Be More Like Human Randomized Controlled Trials February 12, 2013 Beverly Muhlhausler, Frank Bloomfield, Matthew Gillman
  • Image credit 10.1371/journal.pbio.1001756 PLOS Biology Two Years Later: Journals Are Not Yet Enforcing the ARRIVE Guidelines on Reporting Standards for Pre-Clinical Animal Studies January 7, 2014 David Baker, Katie Lidster, Ana Sottomayor, Sandra Amor
  • Image credit PLOS PLOS Biology Reporting Animal Studies: Good Science and a Duty of Care June 29, 2010 Catriona J. MacCallum
  • Image credit PLOS PLOS Biology Open Science and Reporting Animal Studies: Who’s Accountable? January 7, 2014 Catriona MacCallum, Jonathan Eisen, Emma Ganley
  • Image credit PLOS PLOS Computational Biology Minimum Information About a Simulation Experiment (MIASE) April 28, 2011 Dagmar Waltemath, Richard Adams, Daniel A. Beard, Frank T. Bergmann, Upinder S. Bhalla, Randall Britten, Vijayalakshmi Chelliah, Michael T. Cooling, Jonathan Cooper, Edmund J. Crampin, Alan Garny, Stefan Hoops, Michael Hucka, Peter Hunter, Edda Klipp, Camille Laibe, Andrew K. Miller, Ion Moraru, David Nickerson, Poul Nielsen, Macha Nikolski, Sven Sahle, Herbert M. Sauro, Henning Schmidt, Jacky L. Snoep, Dominic Tolle, Olaf Wolkenhauer, Nicolas Le Novère
  • Image credit 10.1371/journal.pmed.0050139 PLOS Medicine Guidelines for Reporting Health Research: The EQUATOR Network’s Survey of Guideline Authors June 24, 2008 Iveta Simera, Douglas G Altman, David Moher, Kenneth F Schulz, John Hoey
  • Image credit 10.1371/journal.pmed.1000217 PLOS Medicine Guidance for Developers of Health Research Reporting Guidelines February 16, 2010 David Moher, Kenneth F. Schulz, Iveta Simera, Douglas G. Altman
  • Image credit 10.1371/journal.pone.0035621 PLOS ONE Are Peer Reviewers Encouraged to Use Reporting Guidelines? A Survey of 116 Health Research Journals April 27, 2012 Allison Hirst, Douglas Altman
  • Image credit PLOS PLOS Neglected Tropical Diseases Research Ethics and Reporting Standards at PLoS Neglected Tropical Diseases October 31, 2007 Gavin Yamey
  • Image credit PLOS PLOS Medicine Better Reporting, Better Research: Guidelines and Guidance in PLoS Medicine April 29, 2008 The PLoS Medicine Editors
  • Image credit CCAC North Library, Flickr.com PLOS Medicine Better Reporting of Scientific Studies: Why It Matters August 27, 2013 The PLoS Medicine Editors
  • Image credit 10.1371/journal.pone.0097492 PLOS ONE Do Editorial Policies Support Ethical Research? A Thematic Text Analysis of Author Instructions in Psychiatry Journals June 5, 2014 Daniel Strech, Courtney Metz, Hannes Knüppel


April 16, 2024


New guidelines reflect growing use of AI in health care research

by NDORMS, University of Oxford


The widespread use of artificial intelligence (AI) in medical decision-making tools has led to an update of the TRIPOD guidelines for reporting clinical prediction models. The new TRIPOD+AI guidelines are launched in the BMJ today.

The TRIPOD guidelines (TRIPOD stands for Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis Or Diagnosis) were developed in 2015 to improve the reporting of tools that doctors use to aid diagnosis and prognosis. Their wide uptake by medical practitioners, who use such models to estimate the probability that a specific condition is present or may occur in the future, has helped improve the transparency and accuracy of decision-making and significantly improved patient care.

But research methods have moved on since 2015, and we are witnessing an acceleration of studies that develop prediction models using AI, specifically machine learning methods. Transparency is one of the six core principles underpinning the WHO guidance on ethics and governance of artificial intelligence for health. TRIPOD+AI has therefore been developed to provide a framework and set of reporting standards to improve the reporting of studies that develop and evaluate AI prediction models, regardless of the modeling approach.

The TRIPOD+AI guidelines were developed by a consortium of international investigators, led by researchers from the University of Oxford alongside researchers from other leading institutions across the world, health care professionals, industry, regulators, and journal editors. The development of the new guidance was informed by research highlighting poor and incomplete reporting of AI studies, a Delphi survey, and an online consensus meeting.

Gary Collins, Professor of Medical Statistics at the Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS), University of Oxford, and lead researcher in TRIPOD, says, "There is enormous potential for artificial intelligence to improve health care from earlier diagnosis of patients with lung cancer to identifying people at increased risk of heart attacks. We're only just starting to see how this technology can be used to improve patient outcomes.

"Deciding whether to adopt these tools is predicated on transparent reporting. Transparency enables errors to be identified, facilitates appraisal of methods and ensures effective oversight and regulation. Transparency can also create more trust and influence patient and public acceptability of the use of prediction models in health care."

The TRIPOD+AI statement consists of a 27-item checklist that supersedes TRIPOD 2015. The checklist details reporting recommendations for each item and is designed to help researchers, peer reviewers, editors, policymakers and patients understand and evaluate the quality of the study methods and findings of AI-driven research.
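A checklist-based standard lends itself to a simple completeness tally. The sketch below is purely hypothetical: the item names are invented for illustration (the real TRIPOD+AI checklist defines 27 specific items in the statement itself), but it shows how a reviewer or author might score how completely a manuscript reports against such a list.

```python
# Hypothetical sketch of scoring a manuscript against a reporting
# checklist. Item names are invented; see the TRIPOD+AI statement
# for the actual 27 items.

def checklist_completeness(reported: dict[str, bool]) -> float:
    """Return the fraction of checklist items marked as reported."""
    if not reported:
        return 0.0
    return sum(reported.values()) / len(reported)

manuscript = {
    "title_identifies_prediction_model": True,
    "data_sources_described": True,
    "participant_eligibility_described": False,
    "model_evaluation_reported": True,
}

print(f"{checklist_completeness(manuscript):.0%} of sampled items reported")
```

A tally like this only measures whether items are addressed, not how well; the checklist's per-item recommendations still require qualitative appraisal.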

A key change in TRIPOD+AI has been an increased emphasis on trustworthiness and fairness. Prof. Carl Moons, UMC Utrecht, said, "While these are not new concepts in prediction modeling, AI has drawn more attention to these as reporting issues. A reason for this is that many AI algorithms are developed on very specific data sets that are sometimes not even from studies or could simply be drawn from the internet.

"We also don't know which groups or subgroups were included. So to ensure that studies do not discriminate against any particular group or create inequalities in health care provision, and to ensure decision-makers can trust the source of the data, these factors become more important."

Dr. Xiaoxuan Liu and Prof. Alastair Denniston, Directors of the NIHR Incubator for Regulatory Science in AI & Digital Health care and co-authors of TRIPOD+AI, explained, "Many of the most important applications of AI in medicine are based on prediction models. We were delighted to support the development of TRIPOD+AI, which is designed to improve the quality of evidence in this important area of AI research."

TRIPOD 2015 helped change the landscape of clinical research reporting, bringing minimum reporting standards to prediction models. The original guidelines have been cited over 7,500 times, featured in multiple journals' instructions to authors, and been included in WHO and NICE briefing documents.

"I hope the TRIPOD+AI will lead to a marked improvement in reporting, reduce waste from incompletely reported research and enable stakeholders to arrive at an informed judgment based on full details on the potential of the AI technology to improve patient care and outcomes that cut through the hype in AI-driven health care innovations," concluded Gary.


AI Index Report

Welcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI’s influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI’s impact on science and medicine.

Read the 2024 AI Index Report

The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI.

The AI Index is recognized globally as one of the most credible and authoritative sources for data and insights on artificial intelligence. Previous editions have been cited in major newspapers, including The New York Times, Bloomberg, and The Guardian, have amassed hundreds of academic citations, and have been referenced by high-level policymakers in the United States, the United Kingdom, and the European Union, among other places. This year's edition surpasses all previous ones in size, scale, and scope, reflecting the growing significance that AI is coming to hold in all of our lives.

Steering Committee Co-Directors

Jack Clark

Ray Perrault

Steering Committee Members

Erik Brynjolfsson

John Etchemendy

Katrina Ligett

Terah Lyons

James Manyika

Juan Carlos Niebles

Vanessa Parli

Yoav Shoham

Russell Wald

Staff Members

Loredana Fattorini

Nestor Maslej

Letter from the Co-Directors

A decade ago, the best AI systems in the world were unable to classify objects in images at a human level. AI struggled with language comprehension and could not solve math problems. Today, AI systems routinely exceed human performance on standard benchmarks.

Progress accelerated in 2023. New state-of-the-art systems like GPT-4, Gemini, and Claude 3 are impressively multimodal: They can generate fluent text in dozens of languages, process audio, and even explain memes. As AI has improved, it has increasingly forced its way into our lives. Companies are racing to build AI-based products, and AI is increasingly being used by the general public. But current AI technology still has significant problems. It cannot reliably deal with facts, perform complex reasoning, or explain its conclusions.

AI faces two interrelated futures. First, technology continues to improve and is increasingly used, having major consequences for productivity and employment. It can be put to both good and bad uses. In the second future, the adoption of AI is constrained by the limitations of the technology. Regardless of which future unfolds, governments are increasingly concerned. They are stepping in to encourage the upside, such as funding university R&D and incentivizing private investment. Governments are also aiming to manage the potential downsides, such as impacts on employment, privacy concerns, misinformation, and intellectual property rights.

As AI rapidly evolves, the AI Index aims to help the AI community, policymakers, business leaders, journalists, and the general public navigate this complex landscape. It provides ongoing, objective snapshots tracking several key areas: technical progress in AI capabilities, the community and investments driving AI development and deployment, public opinion on current and potential future impacts, and policy measures taken to stimulate AI innovation while managing its risks and challenges. By comprehensively monitoring the AI ecosystem, the Index serves as an important resource for understanding this transformative technological force.

On the technical front, this year’s AI Index reports that the number of new large language models released worldwide in 2023 doubled over the previous year. Two-thirds were open-source, but the highest-performing models came from industry players with closed systems. Gemini Ultra became the first LLM to reach human-level performance on the Massive Multitask Language Understanding (MMLU) benchmark; performance on the benchmark has improved by 15 percentage points since last year. Additionally, GPT-4 achieved an impressive 0.97 mean win rate score on the comprehensive Holistic Evaluation of Language Models (HELM) benchmark, which includes MMLU among other evaluations.
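A "mean win rate" of the kind HELM reports can be read as the average fraction of head-to-head comparisons a model wins across scenarios. The sketch below is illustrative only, not HELM's actual implementation, and the model names and scores are made up.

```python
# Illustrative mean-win-rate calculation (not HELM's code).
# For each scenario, compare the model's score against every rival;
# the mean win rate is wins / total comparisons, with ties as half-wins.

def mean_win_rate(scores: dict[str, list[float]], model: str) -> float:
    """Fraction of (scenario, rival) comparisons won by `model`."""
    rivals = [m for m in scores if m != model]
    n_scenarios = len(scores[model])
    wins = 0.0
    total = 0
    for i in range(n_scenarios):
        for rival in rivals:
            total += 1
            if scores[model][i] > scores[rival][i]:
                wins += 1
            elif scores[model][i] == scores[rival][i]:
                wins += 0.5  # ties counted as half a win
    return wins / total

scores = {  # rows: models; columns: per-scenario accuracy (invented)
    "model_a": [0.90, 0.80, 0.95],
    "model_b": [0.70, 0.85, 0.90],
    "model_c": [0.60, 0.70, 0.80],
}
print(round(mean_win_rate(scores, "model_a"), 2))
```

Under this reading, a 0.97 mean win rate means the model beats nearly every other model on nearly every scenario in the comparison set.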

Although global private investment in AI decreased for the second consecutive year, investment in generative AI skyrocketed. More Fortune 500 earnings calls mentioned AI than ever before, and new studies show that AI tangibly boosts worker productivity. On the policymaking front, global mentions of AI in legislative proceedings have never been higher. U.S. regulators passed more AI-related regulations in 2023 than ever before. Still, many expressed concerns about AI’s ability to generate deepfakes and impact elections. The public became more aware of AI, and studies suggest that they responded with nervousness.

Ray Perrault, Co-Director, AI Index



Key facts about Americans and guns

A customer shops for a handgun at a gun store in Florida.

Guns are deeply ingrained in American society and the nation’s political debates.

The Second Amendment to the United States Constitution guarantees the right to bear arms, and about a third of U.S. adults say they personally own a gun. At the same time, in response to concerns such as rising gun death rates and mass shootings, President Joe Biden has proposed gun policy legislation that would expand on the bipartisan gun safety bill Congress passed last year.

Here are some key findings about Americans’ views of gun ownership, gun policy and other subjects, drawn primarily from a Pew Research Center survey conducted in June 2023.

Pew Research Center conducted this analysis to summarize key facts about Americans and guns. We used data from recent Center surveys to provide insights into Americans’ views on gun policy and how those views have changed over time, as well as to examine the proportion of adults who own guns and their reasons for doing so.

The analysis draws primarily from a survey of 5,115 U.S. adults conducted from June 5 to June 11, 2023. Everyone who took part in the surveys cited is a member of the Center’s American Trends Panel (ATP), an online survey panel recruited through national, random sampling of residential addresses, so nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology.
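Weighting a panel to population benchmarks is, at its simplest, post-stratification: each respondent in a group receives a weight equal to the group's population share divided by its sample share. The single-variable sketch below is a minimal illustration, not Pew's actual procedure, and the counts and shares are invented.

```python
# Minimal post-stratification sketch (not the ATP's real weighting,
# which adjusts on many variables jointly, e.g. via raking).

def poststratify(sample_counts: dict[str, int],
                 population_shares: dict[str, float]) -> dict[str, float]:
    """Weight per category = population share / sample share."""
    n = sum(sample_counts.values())
    return {g: population_shares[g] / (sample_counts[g] / n)
            for g in sample_counts}

# Toy sample where men are under-represented (40% vs. 49% in population):
weights = poststratify({"men": 400, "women": 600},
                       {"men": 0.49, "women": 0.51})
print({g: round(w, 3) for g, w in weights.items()})
```

Under-represented groups get weights above 1 and over-represented groups below 1, so weighted tabulations match the population mix.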

Here are the questions used for the analysis on gun ownership, the questions used for the analysis on gun policy, and the survey’s methodology.

Additional information about the fall 2022 survey of parents and its methodology can be found at the link in the text of this post.

Measuring gun ownership in the United States comes with unique challenges. Unlike many demographic measures, there is not a definitive data source from the government or elsewhere on how many American adults own guns.

The Pew Research Center survey conducted June 5-11, 2023, on the Center’s American Trends Panel, asks about gun ownership using two separate questions to measure personal and household ownership. About a third of adults (32%) say they own a gun, while another 10% say they do not personally own a gun but someone else in their household does. These shares have changed little from surveys conducted in 2021 and 2017. In each of those surveys, 30% reported they owned a gun.
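For a sample of this size, the sampling uncertainty on a single proportion is small. As a rough sketch, the normal-approximation 95% margin of error for the 32% figure among roughly 5,115 respondents can be computed as below; note that real ATP estimates use design-adjusted errors that account for weighting, so a simple-random-sample formula understates the true margin.

```python
# Normal-approximation 95% margin of error for a survey proportion.
# Illustrative only: assumes simple random sampling, which understates
# the design-adjusted margin of a weighted panel survey.
import math

def moe_95(p: float, n: int) -> float:
    """95% margin of error for proportion p with sample size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

p, n = 0.32, 5115
print(f"32% ± {moe_95(p, n) * 100:.1f} points")
```

This is why year-to-year movements of a point or two (30% vs. 32%) are described in the text as "changed little."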

These numbers are largely consistent with rates of gun ownership reported by Gallup, but somewhat higher than those reported by NORC’s General Social Survey. Those surveys also find only modest changes in recent years.

The FBI maintains data on background checks on individuals attempting to purchase firearms in the United States. The FBI reported a surge in background checks in 2020 and 2021, during the coronavirus pandemic. The number of federal background checks declined in 2022 and through the first half of this year, according to FBI statistics.

About four-in-ten U.S. adults say they live in a household with a gun, including 32% who say they personally own one, according to an August report based on our June survey. These numbers are virtually unchanged since the last time we asked this question in 2021.

There are differences in gun ownership rates by political affiliation, gender, community type and other factors.

  • Republicans and Republican-leaning independents are more than twice as likely as Democrats and Democratic leaners to say they personally own a gun (45% vs. 20%).
  • 40% of men say they own a gun, compared with 25% of women.
  • 47% of adults living in rural areas report personally owning a firearm, as do smaller shares of those who live in suburbs (30%) or urban areas (20%).
  • 38% of White Americans own a gun, compared with smaller shares of Black (24%), Hispanic (20%) and Asian (10%) Americans.

A bar chart showing that nearly a third of U.S. adults say they personally own a gun.

Personal protection tops the list of reasons gun owners give for owning a firearm.  About three-quarters (72%) of gun owners say that protection is a major reason they own a gun. Considerably smaller shares say that a major reason they own a gun is for hunting (32%), for sport shooting (30%), as part of a gun collection (15%) or for their job (7%). 

The reasons behind gun ownership have changed only modestly since our 2017 survey of attitudes toward gun ownership and gun policies. At that time, 67% of gun owners cited protection as a major reason they owned a firearm.

A bar chart showing that nearly three-quarters of U.S. gun owners cite protection as a major reason they own a gun.

Gun owners tend to have much more positive feelings about having a gun in the house than non-owners who live with them. For instance, 71% of gun owners say they enjoy owning a gun – but far fewer non-gun owners in gun-owning households (31%) say they enjoy having one in the home. And while 81% of gun owners say owning a gun makes them feel safer, a narrower majority (57%) of non-owners in gun households say the same about having a firearm at home. Non-owners are also more likely than owners to worry about having a gun in the home (27% vs. 12%, respectively).

Feelings about gun ownership also differ by political affiliation, even among those who personally own firearms. Republican gun owners are more likely than Democratic owners to say owning a gun gives them feelings of safety and enjoyment, while Democratic owners are more likely to say they worry about having a gun in the home.

A chart showing the differences in feelings about guns between gun owners and non-owners in gun households.

Non-gun owners are split on whether they see themselves owning a firearm in the future. About half (52%) of Americans who don’t own a gun say they could never see themselves owning one, while nearly as many (47%) could imagine themselves as gun owners in the future.

Among those who currently do not own a gun:

A bar chart that shows non-gun owners are divided on whether they could see themselves owning a gun in the future.

  • 61% of Republicans and 40% of Democrats who don’t own a gun say they would consider owning one in the future.
  • 56% of Black non-owners say they could see themselves owning a gun one day, compared with smaller shares of White (48%), Hispanic (40%) and Asian (38%) non-owners.

Americans are evenly split over whether gun ownership does more to increase or decrease safety. About half (49%) say it does more to increase safety by allowing law-abiding citizens to protect themselves, but an equal share say gun ownership does more to reduce safety by giving too many people access to firearms and increasing misuse.

A bar chart that shows stark differences in views on whether gun ownership does more to increase or decrease safety in the U.S.

Republicans and Democrats differ on this question: 79% of Republicans say that gun ownership does more to increase safety, while a nearly identical share of Democrats (78%) say that it does more to reduce safety.

Urban and rural Americans also have starkly different views. Among adults who live in urban areas, 64% say gun ownership reduces safety, while 34% say it does more to increase safety. Among those who live in rural areas, 65% say gun ownership increases safety, compared with 33% who say it does more to reduce safety. Those living in the suburbs are about evenly split.

Americans increasingly say that gun violence is a major problem. Six-in-ten U.S. adults say gun violence is a very big problem in the country today, up 9 percentage points from spring 2022. In the survey conducted this June, 23% say gun violence is a moderately big problem, and about two-in-ten say it is either a small problem (13%) or not a problem at all (4%).

Looking ahead, 62% of Americans say they expect the level of gun violence to increase over the next five years. This is double the share who expect it to stay the same (31%). Just 7% expect the level of gun violence to decrease.

A line chart that shows a growing share of Americans say gun violence is a “very big national problem.”

A majority of Americans (61%) say it is too easy to legally obtain a gun in this country. Another 30% say the ease of legally obtaining a gun is about right, and 9% say it is too hard to get a gun. Non-gun owners are nearly twice as likely as gun owners to say it is too easy to legally obtain a gun (73% vs. 38%). Meanwhile, gun owners are more than twice as likely as non-owners to say the ease of obtaining a gun is about right (48% vs. 20%).

Partisan and demographic differences also exist on this question. While 86% of Democrats say it is too easy to obtain a gun legally, 34% of Republicans say the same. Most urban (72%) and suburban (63%) dwellers say it’s too easy to legally obtain a gun. Rural residents are more divided: 47% say it is too easy, 41% say it is about right and 11% say it is too hard.

A bar chart showing that about 6 in 10 Americans say it is too easy to legally obtain a gun in this country.

About six-in-ten U.S. adults (58%) favor stricter gun laws. Another 26% say that U.S. gun laws are about right, and 15% favor less strict gun laws. The percentage who say these laws should be stricter has fluctuated a bit in recent years. In 2021, 53% favored stricter gun laws, and in 2019, 60% said laws should be stricter.

A bar chart that shows women are more likely than men to favor stricter gun laws in the U.S.

About a third (32%) of parents with K-12 students say they are very or extremely worried about a shooting ever happening at their children’s school, according to a fall 2022 Center survey of parents with at least one child younger than 18. A similar share of K-12 parents (31%) say they are not too or not at all worried about a shooting ever happening at their children’s school, while 37% of parents say they are somewhat worried.

Among all parents with children under 18, including those who are not in school, 63% see improving mental health screening and treatment as a very or extremely effective way to prevent school shootings. This is larger than the shares who say the same about having police officers or armed security in schools (49%), banning assault-style weapons (45%), or having metal detectors in schools (41%). Just 24% of parents say allowing teachers and school administrators to carry guns in school would be a very or extremely effective approach, while half say this would be not too or not at all effective.

A pie chart showing that 19% of K-12 parents are extremely worried about a shooting happening at their children's school.

There is broad partisan agreement on some gun policy proposals, but most are politically divisive, the June 2023 survey found. Majorities of U.S. adults in both partisan coalitions somewhat or strongly favor two policies that would restrict gun access: preventing those with mental illnesses from purchasing guns (88% of Republicans and 89% of Democrats support this) and increasing the minimum age for buying guns to 21 years old (69% of Republicans, 90% of Democrats). Majorities in both parties also oppose allowing people to carry concealed firearms without a permit (60% of Republicans and 91% of Democrats oppose this).

A dot plot showing bipartisan support for preventing people with mental illnesses from purchasing guns, but wide differences on other policies.

Republicans and Democrats differ on several other proposals. While 85% of Democrats favor banning both assault-style weapons and high-capacity ammunition magazines that hold more than 10 rounds, majorities of Republicans oppose these proposals (57% and 54%, respectively).

Most Republicans, on the other hand, support allowing teachers and school officials to carry guns in K-12 schools (74%) and allowing people to carry concealed guns in more places (71%). These proposals are supported by just 27% and 19% of Democrats, respectively.

Gun ownership is linked with views on gun policies. Americans who own guns are less likely than non-owners to favor restrictions on gun ownership, with a notable exception. Nearly identical majorities of gun owners (87%) and non-owners (89%) favor preventing mentally ill people from buying guns.

A dot plot that shows, within each party, gun owners are more likely than non-owners to favor expanded access to guns.

Within both parties, differences between gun owners and non-owners are evident – but they are especially stark among Republicans. For example, majorities of Republicans who do not own guns support banning high-capacity ammunition magazines and assault-style weapons, compared with about three-in-ten Republican gun owners.

Among Democrats, majorities of both gun owners and non-owners favor these two proposals, though support is greater among non-owners. 

Note: This is an update of a post originally published on Jan. 5, 2016.


What the trans care recommendations from the NHS England report mean

The report calls for more research on puberty blockers and hormone therapies.

A new report commissioned by the National Health Service England advocates for further research on gender-affirming care for transgender youth and young adults.

Dr. Hillary Cass, a former president of the Royal College of Paediatrics and Child Health, was appointed by NHS England and NHS Improvement to chair the Independent Review of Gender Identity Services in 2020 amid a rise in referrals to NHS' gender services. Upon review, she advises "extreme caution" for the use of hormone therapies.

"It is absolutely right that children and young people, who may be dealing with a complex range of issues around their gender identity, get the best possible support and expertise throughout their care," Cass states in the report.

In 2022, about 5,000 adolescents and children were referred to the NHS' gender services. The report estimated that roughly 20% of children and young people seen by the Gender Identity Development Service (GIDS) enter a hormone pathway -- roughly 1,000 people under 18 in England.

Following four years of data analysis, Cass concluded that "while a considerable amount of research has been published in this field, systematic evidence reviews demonstrated the poor quality of the published studies, meaning there is not a reliable evidence base upon which to make clinical decisions, or for children and their families to make informed choices."

Cass continued: "The strengths and weaknesses of the evidence base on the care of children and young people are often misrepresented and overstated, both in scientific publications and social debate."

Among her recommendations, she urged the NHS to increase the available workforce in this field, set up more regional outlets for care, increase investment in research on this care, and improve the quality of care to meet international guidelines.

Cass' review comes as the NHS continues to expand its children and young people's gender identity services across the country. The NHS has recently opened new children and young people's gender services based in London and the Northwest.

NHS England, the country's universal healthcare system, said the report is expected to guide and shape its approach to gender-affirming care for children and could affect how young patients in England access such care.

PHOTO: Trans activists and protesters hold a banner and placards while marching towards Hyde Park Corner, July 8, 2023.


The debate over transgender youth care

In an interview with The Guardian, Cass stated that her findings are not intended to undermine the validity of trans identities or challenge young people's right to transition but to improve the care they are receiving.

"We've let them down because the research isn't good enough and we haven't got good data," Cass told the news outlet. "The toxicity of the debate is perpetuated by adults, and that itself is unfair to the children who are caught in the middle of it. The children are being used as a football and this is a group that we should be showing more compassion to."

In the report, Cass argued that the knowledge and expertise of "experienced clinicians who have reached different conclusions about the best approach to care" has been "dismissed and invalidated" amid arguments concerning transgender care in youth.

Cass did not immediately respond to ABC News' request for comment.

Recommendations for trans youth care

Cass is calling for more thorough research that looks at the "characteristics, interventions and outcomes" of NHS gender service patients concerning puberty blockers and hormone therapy, particularly among children and adolescents.

The report's recommendations also urge caregivers to take an approach to care that considers young patients "holistically and not solely in terms of their gender-related distress."

The report notes that identity exploration is "a completely natural process during childhood and adolescence."

Cass recommends that pre-pubertal children and their families have early discussions about how parents can best support their child "in a balanced and non-judgemental way," which may include "psychological and psychopharmacological treatments" to manage distress associated with gender incongruence and co-occurring conditions.

In past interviews, U.S. physicians told ABC News that patients, their physicians and their families often engage in a lengthy process of building a customized and individualized approach to care, meaning not every patient will receive any or every type of gender-affirming medical care option.

Cass' report states that evidence particularly for puberty blockers in children and adolescents is "weak" regarding the impact on "gender dysphoria, mental or psychosocial health. The effect on cognitive and psychosexual development remains unknown."

PHOTO: A photograph taken on April 10, 2024, in London, shows the entrance of the NHS Tavistock center, where the Tavistock Clinic hosted the Gender Identity Development Service (GIDS) for children until March 28, 2024.

The NHS has said it will halt routine use of puberty blockers as it prepares for a study into the practice later this year.


According to the Endocrine Society, puberty blockers, as opposed to hormone therapy, temporarily pause puberty so patients have more time to explore their gender identity.

The report also recommends "extreme caution" for transgender youth from age 16 who take more permanent hormone therapies.

"There should be a clear clinical rationale for providing hormones at this stage rather than waiting until an individual reaches 18," the report's recommendations state.

Hormone therapy, according to the Endocrine Society, triggers physical changes like hair growth, muscle development, changes in body fat and more that can help better align the body with a person's gender identity. It's not unusual for patients to stop hormone therapy and decide that they have transitioned as far as they wish, physicians have told ABC News.

Cass' report asserts that there are many unknowns about the use of both puberty blockers and hormones for minors, "despite their longstanding use in the adult transgender population."

"The lack of long-term follow-up data on those commencing treatment at an earlier age means we have inadequate information about the range of outcomes for this group," the report states.

Cass recommends that NHS England facilities have procedures in place to follow up with 17 to 25-year-old patients "to ensure continuity of care and support at a potentially vulnerable stage in their journey," as well as allow for further data and research on transgender minors through the years.

Several British medical organizations, including the British Psychological Society and the Royal College of Paediatrics and Child Health, commended the report's recommendations to expand the workforce and invest in further research to allow young people to make better informed decisions.

“Dr Cass and her team have produced a thought-provoking, detailed and wide-ranging list of recommendations, which will have implications for all professionals working with gender-questioning children and young people," said Dr Roman Raczka, of the British Psychological Society. "It will take time to carefully review and respond to the whole report, but I am sure that psychology, as a profession, will reflect and learn lessons from the review, its findings and recommendations."

Some groups expressed fears that the report will be misused by anti-transgender groups.

"All children have the right to access specialist effective care on time and must be afforded the privacy to make decisions that are appropriate for them in consultation with a specialist," said human rights group Amnesty International. "This review is being weaponised by people who revel in spreading disinformation and myths about healthcare for trans young people."

Transgender care for people under 18 has been a source of contention in both the United States and the United Kingdom. Legislation is being pushed across the U.S. by many Republican legislators focused on banning all medical care options like puberty blockers and hormone therapies for minors. Some argue that gender-affirming care is unsafe for youth, or that they should wait until they're older.

Gender-affirming medical care does come with risks, according to the Endocrine Society, including impacts to bone mineral density, cholesterol levels and blood clot risks. However, physicians have told ABC News that all medications, surgeries or vaccines come with some kind of risk.

Major national medical associations in the U.S., including the American Academy of Pediatrics, the American Medical Association, the American Academy of Child and Adolescent Psychiatry, and more than 20 others have argued that gender-affirming care is safe, effective, beneficial, and medically necessary.

The first-of-its-kind gender care clinic at Johns Hopkins Hospital in Maryland opened in the 1960s, using similar procedures still used today.

Some studies have shown that some gender-affirming options can have positive impacts on the mental health of transgender patients, who may experience gender-related stress.


6 in 10 U.S. Catholics are in favor of abortion rights, Pew Research report finds


Jason DeRose


Pope Francis remains popular among U.S. Catholics, with 75% having favorable views of him, according to a Pew Research report. But many self-identified Catholics disagree with various teachings of their church. (Andrew Medichini/AP)


Catholics in the U.S., one of the country's largest single Christian groups, hold far more diverse views on abortion rights than the official teaching of their church.

While the Catholic Church itself holds that abortion is wrong and should not be legal, 6 in 10 U.S. adult Catholics say abortion should be legal in all or most cases, according to a newly released profile of Catholicism by Pew Research.

Catholic opinion about abortion rights, according to the report, tends to align with political leanings: Fewer Catholic Republicans favor legal abortion than Catholic Democrats. And Pew says Hispanic Catholics, who make up one-third of the U.S. church, are slightly more in favor of legal abortion than white Catholics.


Pew found that 20% of the U.S. population identifies as Catholic, but only about 3 in 10 say they attend mass regularly. Opinions about abortion rights appear to be related to how often someone worships — just 34% of Catholics who attend mass weekly say abortion should be legal in all or most cases, whereas that number jumps to 68% among those who attend mass monthly or less.

Most U.S. Catholics are white (57%), but that number has dropped by 8 percentage points since 2007, according to the new report. About 33% identify as Hispanic, 4% Asian, 2% Black, and 3% describe themselves as another race.

Pew Research also found that as of February, Pope Francis remains highly popular, with 75% of U.S. Catholics rating him favorably. However, there is a partisan divide, with Catholic Democrats more strongly supporting him.

About 4 in 10 U.S. Catholics view Francis as a major agent of change, with 3 in 10 saying he is a minor agent of change.


Pew reports that many U.S. Catholics would welcome more change. Some 83% say they want the church to allow the use of contraception, 69% say priests should be allowed to get married, 64% say women should be allowed to become priests, and 54% say the Catholic Church should recognize same-sex marriage.

In December 2023, the Vatican issued guidance to priests that they may bless people in same-sex relationships. But the church insists those blessings not be construed in any way to be a form of marriage or even take place as part of a worship service.
