Survey Research | Definition, Examples & Methods

Published on August 20, 2019 by Shona McCombes. Revised on June 22, 2023.

Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyze the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyze the survey results
  • Step 6: Write up the survey results
  • Other interesting articles
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: investigating the experiences and characteristics of different social groups
  • Market research: finding out what customers think about products, services, and companies
  • Health research: collecting data from patients about symptoms and treatments
  • Politics: measuring public opinion about parties and policies
  • Psychology: researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and in longitudinal studies, where you survey the same sample several times over an extended period.


Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • US college students
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18-24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalized to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

Several common research biases can arise if your survey is not generalizable, particularly sampling bias and selection bias. The presence of these biases has serious repercussions for the validity of your results.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every college student in the US. Instead, you will usually survey a sample from the population.

The required sample size depends on the size of the population and on how precise you need your results to be. You can use an online sample size calculator to work out how many responses you need.

There are many sampling methods that allow you to generalize to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions. Again, beware of various types of sampling bias as you design your sample, particularly self-selection bias, nonresponse bias, undercoverage bias, and survivorship bias.
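As a rough guide to what such calculators do, the widely used Cochran formula (with a finite population correction) sketches how population size, confidence level, and margin of error interact. The numbers below are illustrative, not from this article:

```python
import math

def sample_size(population, z=1.96, margin=0.05, proportion=0.5):
    """Cochran's formula with a finite population correction.

    z=1.96 corresponds to 95% confidence; proportion=0.5 is the
    conservative (maximum-variance) assumption.
    """
    n0 = (z ** 2) * proportion * (1 - proportion) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# For a population of 10,000 at 95% confidence and a 5% margin of error:
print(sample_size(10_000))  # → 370
```

Note how weakly the answer depends on population size: a population of one million needs only a few more responses than one of ten thousand.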

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by mail, online, or in person, and respondents fill it out themselves.
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses.

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by mail is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g. residents of a specific region).
  • The response rate is often low, and at risk for biases like self-selection bias.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyze.
  • The anonymity and accessibility of online surveys mean you have less control over who responds, which can lead to biases like self-selection bias.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping mall or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g. the opinions of a store’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations and is at risk for sampling bias.

Interviews

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyzes the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analyzed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g. yes/no or agree/disagree)
  • A scale (e.g. a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g. age categories)
  • A list of options with multiple answers possible (e.g. leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analyzed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an “other” field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic. Avoid jargon or industry-specific terminology.

Survey questions are at risk for biases like social desirability bias, the Hawthorne effect, or demand characteristics. It’s critical to use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no indication that you’d prefer a particular answer or emotion.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by mail, online, or in person.

Step 5: Analyze the survey results

There are many methods of analyzing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also clean the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organizing them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analyzing interviews.
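A minimal sketch of both steps (cleaning, then coding) in plain Python; the field names and theme keywords are hypothetical, and real projects typically use a statistics package or spreadsheet:

```python
# Invented raw responses; respondent 2 is incomplete and will be removed
raw_responses = [
    {"id": 1, "rating": "4", "comment": "Delivery was fast"},
    {"id": 2, "rating": "",  "comment": "Too expensive"},
    {"id": 3, "rating": "2", "comment": "Support never replied"},
]

# Cleaning: drop responses with a missing rating
clean = [r for r in raw_responses if r["rating"]]

# Coding: assign each open-ended comment a theme label by keyword
themes = {
    "price":    ["expensive", "cost"],
    "service":  ["support", "replied"],
    "delivery": ["delivery", "shipping"],
}

def code_comment(text):
    text = text.lower()
    for theme, keywords in themes.items():
        if any(word in text for word in keywords):
            return theme
    return "other"

for r in clean:
    r["theme"] = code_comment(r["comment"])

print([(r["id"], r["theme"]) for r in clean])  # → [(1, 'delivery'), (3, 'service')]
```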

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Step 6: Write up the survey results

Finally, when you have collected and analyzed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyze it. In the results section, you summarize the key results from your analysis.

In the discussion and conclusion, you give your explanations and interpretations of these results, answer your research question, and reflect on the implications and limitations of the research.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

Frequently asked questions about surveys

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
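The distinction matters in practice: individual items are summed, after reverse-coding any negatively worded ones, to produce the overall scale score. A small sketch, with invented items and responses:

```python
# Four hypothetical 5-point items; "q3" is negatively worded, so its
# score must be reversed (1 -> 5, 2 -> 4, ...) before summing.
respondents = [
    {"q1": 5, "q2": 4, "q3": 2, "q4": 5},
    {"q1": 2, "q2": 1, "q3": 5, "q4": 2},
]
REVERSE_CODED = {"q3"}

def scale_score(answers, points=5):
    return sum((points + 1 - v) if item in REVERSE_CODED else v
               for item, v in answers.items())

scores = [scale_score(r) for r in respondents]
print(scores)  # → [18, 6]  (possible range: 4 to 20)
```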

The type of data determines what statistical tests you should use to analyze your data.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative)
  • The type of design you’re using (e.g., a survey, experiment, or case study)
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires, observations)
  • Your data collection procedures (e.g., operationalization, timing and data management)
  • Your data analysis methods (e.g., statistical tests or thematic analysis)

Cite this Scribbr article

McCombes, S. (2023, June 22). Survey Research | Definition, Examples & Methods. Scribbr. Retrieved March 14, 2024, from https://www.scribbr.com/methodology/survey-research/


What is survey research?

Find out everything you need to know about survey research, from what it is and how it works to the different methods and tools you can use to ensure you’re successful.

Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall .

As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions. But survey research needs careful planning and execution to get the results you want.

So if you’re thinking about using surveys to carry out research, read on.


Types of survey research

Calling these methods ‘survey research’ slightly underplays the complexity of this type of information gathering. From the expertise required to carry out each activity to the analysis of the data and its eventual application, a considerable amount of effort is required.

As for how you can carry out your research, there are several options to choose from — face-to-face interviews, telephone surveys, focus groups (though these are closer to interviews than surveys), online surveys, and panel surveys.

Typically, the survey method you choose will largely be guided by who you want to survey, the size of your sample, your budget, and the type of information you’re hoping to gather.

Here are a few of the most-used survey types:

Face-to-face interviews

Before technology made it possible to conduct research using online surveys, telephone and mail were the most popular methods for survey research. However, face-to-face interviews were considered the gold standard — the only reason they weren’t as popular was their highly prohibitive cost.

When it came to face-to-face interviews, organizations would use highly trained researchers who knew when to probe or follow up on vague or problematic answers. They also knew when to offer assistance to respondents when they seemed to be struggling. The result was that these interviewers could get sample members to participate and engage in surveys in the most effective way possible, leading to higher response rates and better quality data.

Telephone surveys

While phone surveys have been popular in the past, particularly for measuring general consumer behavior or beliefs, response rates have been declining since the 1990s.

Phone surveys are usually conducted using a random dialing system and software that a researcher can use to record responses.

This method is beneficial when you want to survey a large population but don’t have the resources to conduct face-to-face research surveys or run focus groups, or when you want to ask multiple-choice and open-ended questions.

The downsides are that phone surveys can take a long time to complete, depending on the response rate, and you may have to do a lot of cold-calling to get the information you need.

You also run the risk of respondents not being completely honest. Instead, they’ll answer your survey questions quickly just to get off the phone.

Focus groups (interviews — not surveys)

Focus groups are a separate qualitative methodology rather than surveys — even though they’re often bunched together. They’re normally used for survey pretesting and design, but they’re also a great way to generate opinions and data from a diverse range of people.

Focus groups involve putting a cohort of demographically or socially diverse people in a room with a moderator and engaging them in a discussion on a particular topic, such as your product, brand, or service.

They remain a highly popular method for market research, but they’re expensive and require a lot of administration to conduct and analyze the data properly.

You also run the risk of more dominant members of the group taking over the discussion and swaying the opinions of other people — potentially providing you with unreliable data.

Online surveys

Online surveys have become one of the most popular survey methods because they are cost-effective and enable researchers to survey a large population quickly and accurately.

Online surveys can essentially be used by anyone for any research purpose – we’ve all seen the increasing popularity of polls on social media (although these are not scientific).

Using an online survey allows you to ask a series of different question types and collect data instantly that’s easy to analyze with the right software.

There are also several methods for running and distributing online surveys that allow you to get your questionnaire in front of a large population at a fraction of the cost of face-to-face interviews or focus groups.

This is particularly true when it comes to mobile surveys as most people with a smartphone can access them online.

However, you have to be aware of the potential dangers of using online surveys, particularly when it comes to the survey respondents. The biggest risk is that, because online surveys require access to a computer or mobile device to complete, they could exclude elderly members of the population who don’t have access to the technology — or don’t know how to use it.

It could also exclude those from poorer socio-economic backgrounds who can’t afford a computer or consistent internet access. This could mean the data collected is more biased towards a certain group and can lead to less accurate data when you’re looking for a representative population sample.


Panel surveys

A panel survey involves recruiting respondents who have specifically signed up to answer questionnaires and who are put on a list by a research company. This could be a workforce of a small company or a major subset of a national population. Usually, these groups are carefully selected so that they represent a sample of your target population — giving you balance across criteria such as age, gender, background, and so on.

Panel surveys give you access to the respondents you need and are usually provided by the research company in question. As a result, it’s much easier to get access to the right audiences as you just need to tell the research company your criteria. They’ll then determine the right panels to use to answer your questionnaire.

However, there are downsides. The main one is that if the research company offers its panels incentives, e.g. discounts, coupons, or money, respondents may answer a lot of questionnaires just for the benefits.

This might mean they rush through your survey without providing considered and truthful answers. As a consequence, this can damage the credibility of your data and potentially ruin your analyses.

What are the benefits of using survey research?

Depending on the research method you use, there are lots of benefits to conducting survey research for data collection. Here, we cover a few:

1.   They’re relatively easy to do

Most research surveys are easy to set up, administer, and analyze. As long as the planning and survey design are thorough and you target the right audience, the data collection is usually straightforward regardless of which survey type you use.

2.   They can be cost effective

Survey research can be relatively cheap depending on the type of survey you use.

Generally, qualitative research methods that require access to people in person or over the phone are more expensive and require more administration.

Online surveys or mobile surveys are often more cost-effective for market research and can give you access to the global population for a fraction of the cost.

3.   You can collect data from a large sample

Again, depending on the type of survey, you can obtain survey results from an entire population at a relatively low price. You can also administer a large variety of survey types to fit the project you’re running.

4.   You can use survey software to analyze results immediately

Using survey software, you can use advanced statistical analysis techniques to gain insights into your responses immediately.

Analysis can be conducted using a variety of parameters to determine the validity and reliability of your survey data at scale.

5.   Surveys can collect any type of data

While most people view surveys as a quantitative research method, they can just as easily be adapted to gain qualitative information by simply including open-ended questions or conducting interviews face to face.

How to measure concepts with survey questions

While surveys are a great way to obtain data, that data on its own is useless unless it can be analyzed and developed into actionable insights.

The easiest and most effective way to measure survey results is to use a dedicated research tool that puts all of your survey results in one place.

When it comes to survey measurement, there are four measurement types to be aware of that will determine how you treat your different survey results:

Nominal scale

With a nominal scale, you can only keep track of how many respondents chose each option from a question, and which response generated the most selections.

An example of this would be simply asking a respondent to choose a product or brand from a list.

You could find out which brand was chosen the most but have no insight as to why.

Ordinal scale

Ordinal scales are used to judge an order of preference. They do provide some level of quantitative value because you’re asking respondents to choose a preference of one option over another.

Ratio scale

Ratio scales can be used to judge both the order and the difference between responses, and they have a true zero point, so ratios between values are also meaningful. For example, asking respondents how much they spend on their weekly shopping on average.

Interval scale

In an interval scale, values are lined up in order with a meaningful difference between values — for example, temperature in degrees Celsius or a credit score. Unlike a ratio scale, an interval scale has no true zero point.
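The four levels determine which summary statistics are meaningful. A quick illustration with invented data, using only the standard library:

```python
from collections import Counter
from statistics import mean, median

nominal  = ["Brand A", "Brand B", "Brand A", "Brand C"]  # categories: count only
ordinal  = [1, 3, 2, 1, 2]                               # preference ranks: order matters
interval = [710, 650, 690]                               # credit scores: differences are meaningful
ratio    = [42.50, 61.10, 55.00]                         # weekly spend: true zero, so ratios work too

print(Counter(nominal).most_common(1))  # most-chosen brand
print(median(ordinal))                  # middle preference rank
print(max(interval) - min(interval))    # a meaningful difference
print(round(mean(ratio), 2))            # a meaningful average
```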

Step by step: How to conduct surveys and collect data

Conducting a survey and collecting data is relatively straightforward, but it does require some careful planning and design to ensure it results in reliable data.

Step 1 – Define your objectives

What do you want to learn from the survey? How is the data going to help you? Having a hypothesis or series of assumptions about survey responses will allow you to create the right questions to test them.

Step 2 – Create your survey questions

Once you’ve got your hypotheses or assumptions, write out the questions you need answered to test your theories or beliefs. Be wary of framing questions that could lead respondents or inadvertently create biased responses.

Step 3 – Choose your question types

Your survey should include a variety of question types and should aim to obtain quantitative data with some qualitative responses from open-ended questions. Using a mix of questions (simple yes/no, multiple-choice, rank-in-order, etc.) not only increases the reliability of your data but also reduces survey fatigue and the tendency for respondents to answer quickly without thinking.


Step 4 – Test your questions

Before sending your questionnaire out, you should test it (e.g. have a random internal group do the survey) and carry out A/B tests to ensure you’ll gain accurate responses.
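One way to compare two question wordings in an A/B test is a two-proportion z-test on their completion rates. This is a hedged sketch with invented counts, not a prescribed workflow:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Version A: 180 of 200 completed; version B: 150 of 200 completed
z = two_proportion_z(180, 200, 150, 200)
print(round(z, 2))  # → 3.95, well beyond 1.96, so the difference is
                    # significant at the 5% level
```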

Step 5 – Choose your target and send out the survey

Depending on your objectives, you might want to target the general population with your survey or a specific segment of the population. Once you’ve narrowed down who you want to target, it’s time to send out the survey.

After you’ve deployed the survey, keep an eye on the response rate to ensure you’re getting the number you expected. If your response rate is low, you might need to send the survey out to a second group to obtain a large enough sample — or do some troubleshooting to work out why your response rates are so low. This could be down to your questions, delivery method, selected sample, or otherwise.
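Tracking the response rate itself is simple arithmetic; the target threshold below is illustrative, since acceptable rates vary by method and field:

```python
def response_rate(completed, invited):
    return completed / invited

invited, completed = 2_000, 260
rate = response_rate(completed, invited)
print(f"Response rate: {rate:.1%}")  # → Response rate: 13.0%

TARGET = 0.20  # hypothetical target set at the planning stage
if rate < TARGET:
    print("Below target: send a reminder wave or recruit a second group")
```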

Step 6 – Analyze results and draw conclusions

Once you’ve got your results back, it’s time for the fun part.

Break down your survey responses using the parameters you’ve set in your objectives and analyze the data to compare to your original assumptions. At this stage, a research tool or software can make the analysis a lot easier — and that’s somewhere Qualtrics can help.

Get reliable insights with survey software from Qualtrics

Gaining feedback from customers and leads is critical for any business. Data gathered from surveys can prove invaluable for understanding your products and your market position, and with survey software from Qualtrics, it couldn’t be easier.

Used by more than 13,000 brands and supporting more than 1 billion surveys a year, Qualtrics empowers everyone in your organization to gather insights and take action. No coding required — and your data is housed in one system.

Get feedback from more than 125 sources on a single platform and view and measure your data in one place to create actionable insights and gain a deeper understanding of your target customers.

Automatically run complex text and statistical analysis to uncover exactly what your survey data is telling you, so you can react in real-time and make smarter decisions.

We can help you with survey management, too. From designing your survey and finding your target respondents to getting your survey in the field and reporting back on the results, we can help you every step of the way.

And for expert market researchers and survey designers, Qualtrics features custom programming to give you total flexibility over question types, survey design, embedded data, and other variables.

No matter what type of survey you want to run, what target audience you want to reach, or what assumptions you want to test or answers you want to uncover, we’ll help you design, deploy and analyze your survey with our team of experts.


12.1 What is survey research, and when should you use it?

Learning Objectives

Learners will be able to…

  • Distinguish between survey as a research design and questionnaires used to measure concepts
  • Identify the strengths and weaknesses of surveys
  • Evaluate whether survey design fits with their research question

Pre-awareness check (Knowledge)

Have you ever been selected as a participant to complete a survey? How were you contacted? Would you incorporate the researchers’ methods into your research design?

Researchers quickly learn that there is more to constructing a good survey than meets the eye. Survey design takes a great deal of thoughtful planning and often many rounds of revision, but it is worth the effort. As we’ll learn in this section, there are many benefits to choosing survey research as your data collection method. We’ll discuss what a survey is, its potential benefits and drawbacks, and what research projects are the best fit for survey design.

Is survey research right for your project?


Questionnaires are completed by individual people, so the unit of observation is almost always individuals, rather than groups or organizations. Generally speaking, individuals provide the most informed data about their own lives and experiences, so surveys often also use individuals as the unit of analysis . Surveys are also helpful in analyzing dyads, families, groups, organizations, and communities, but regardless of the unit of analysis, the unit of observation for surveys is usually individuals.

In some cases, getting the most-informed person to complete the questionnaire may not be feasible. As we discussed in Chapter 2 and Chapter 6, ethical duties to protect clients and vulnerable community members are important. The ethical supervision needed via the IRB to complete projects that pose significant risks to participants takes time and effort. Sometimes researchers rely on key informants and gatekeepers like clinicians, teachers, and administrators who are less likely to be harmed by the survey. Key informants are people who are especially knowledgeable about the topic. If your study is about nursing, you would probably consider nurses as your key informants. These considerations are more thoroughly addressed in Chapter 10. Sometimes, participants complete surveys on behalf of people in your target population who are infeasible to survey for some reason. Examples include a head of household completing a survey about family finances or an administrator completing a survey about staff morale on behalf of their employees. In this case, the survey respondent is a proxy, providing their best informed guess about the responses other people might have chosen if they were able to complete the survey independently. Here you are relying on an individual unit of observation (one person filling out a self-report questionnaire) and a group or organization unit of analysis (the family or organization the researcher wants to make conclusions about). Proxies are commonly used when the target population is not capable of providing consent or appropriate answers, as with young children.

Proxies rely on their best judgment of another person's experiences, and while that is valuable information, it may introduce bias and error into the research process. If you are planning to conduct a survey of people with second-hand knowledge of your topic, consider reworking your research question to be about something they have more direct knowledge of and can answer easily.

Remember, every project has limitations. Social work researchers look for the most favorable choices in design and methodology, as there are no perfect projects. A common misstep is when researchers want to understand client outcomes (unit of analysis) by surveying practitioners (unit of observation). If a practitioner has a caseload of 30 clients, it's not really possible to answer a question like "how much progress have your clients made?" on a survey. Would they just average all 30 clients together? Instead, design a survey that asks practitioners about their education, professional experience, and other things they know about first-hand. By making your unit of analysis and unit of observation the same, you can ensure the people completing your survey are able to provide informed answers.

Researchers may introduce measurement error if the person completing the questionnaire does not have adequate knowledge or has a biased opinion about the phenomenon of interest. Election polls illustrate the risk: when respondents misreport their intentions, or when those who answer differ systematically from those who refuse, even a very large poll can misestimate public opinion.

In summary, survey design tends to be used in quantitative research and best fits with research projects that have the following attributes:

  • Researchers plan to collect their own raw data, rather than conduct secondary analysis of existing data.
  • Researchers have access to the most knowledgeable people (whom they can feasibly and ethically sample) to complete the questionnaire.
  • Individuals are the unit of observation and, in many cases, the unit of analysis.
  • Researchers will try to observe things objectively and avoid influencing participants to respond differently.
  • The research question asks about indirect observables, meaning things participants can self-report on a questionnaire.
  • There are valid, reliable, and commonly used scales (or other self-report measures) for the variables in the research question.


Strengths of survey methods

Researchers employing survey research as a research design enjoy a number of benefits. First, surveys are an excellent way to gather lots of information from many people at relatively low cost. Related to this cost-effectiveness is a survey's potential for generalizability. Because surveys allow researchers to collect data from very large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in Chapter 10. When used with probability sampling approaches, survey research is the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group.

Survey research is particularly adept at investigating indirect observables or constructs . Indirect observables (e.g., income, place of birth, or smoking behavior) are things we have to ask someone to self-report because we cannot observe them directly.  Constructs such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), or beliefs (e.g., about a new law) are also often best collected through multi-item instruments such as scales. Unlike qualitative studies in which these beliefs and attitudes would be detailed in unstructured conversations, survey design seeks to systematize answers so researchers can make apples-to-apples comparisons across participants. Questionnaires used in survey design are flexible because you can ask about anything, and the variety of questions allows you to expand social science knowledge beyond what is naturally observable.

Survey research also tends to use reliable instruments; many scales in survey questionnaires are standardized instruments. Other methods, such as qualitative interviewing, which we'll learn about in Chapter 18, do not offer the same consistency that a quantitative survey offers. This is not to say that all surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, which can reduce that question's reliability. Assuming well-constructed questions and survey design, one strength of this methodology is its potential to produce reliable results.
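The internal consistency described above is commonly quantified with Cronbach's alpha, which compares the variance of individual scale items to the variance of the summed scale. The sketch below is illustrative only and not part of the original chapter; the respondent scores are invented.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical five-point responses from six respondents to a three-item scale
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 4],
    [1, 2, 1],
]
print(round(cronbach_alpha(scores), 2))  # high alpha: the items move together
```

By convention, alpha above roughly .70 is treated as acceptable internal consistency, though the appropriate threshold depends on how the scale will be used.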

The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions. They can measure anything that people can self-report. Surveys are also appropriate for exploratory, descriptive, and explanatory research questions (though exploratory projects may benefit more from qualitative methods). Moreover, they can be delivered in a number of flexible ways, including via email, mail, text, and phone. We will describe the many ways to implement a survey later on in this chapter.

In sum, the following are benefits of survey research:

  • Cost-effectiveness
  • Generalizability
  • Reliability
  • Versatility


Weaknesses of survey methods

As with all methods of data collection, survey research also comes with a few drawbacks. First, while one might argue that surveys are flexible in the sense that we can ask any kind of question about any topic we want, once the survey is given to the first participant, there is nothing you can do to change it without biasing your results. Because survey researchers want to minimize the amount of influence they have on participants, everyone gets the same questionnaire. Let's say you mail a questionnaire out to 1,000 people and then discover, as responses start coming in, that your phrasing on a particular question seems to be confusing a number of respondents. At this stage, it's too late for a do-over or to change the question for the respondents who haven't yet returned their questionnaires. When conducting qualitative interviews or focus groups, on the other hand, a researcher can provide respondents further explanation if they're confused by a question and can tweak questions as they learn more about how respondents seem to understand them. Survey researchers often ask colleagues, students, and others to pilot test their questionnaire and catch any errors prior to sending it to participants; however, once researchers distribute the survey to participants, there is little they can do to change anything.

Depth can also be a problem with surveys. Survey questions are standardized; thus, it can be difficult to ask anything other than very general questions that a broad range of people will understand. Because of this, survey results may not provide as detailed an understanding as results obtained using methods of data collection that allow a researcher to examine the topic more comprehensively. Let's say, for example, that you want to learn something about voters' willingness to elect an African American president. General Social Survey respondents were asked, "If your party nominated an African American for president, would you vote for him if he were qualified for the job?" (Smith, 2009). [2] Respondents were then asked to respond either yes or no to the question. But what if someone's opinion was more complex than could be captured with a simple yes or no? What if, for example, a person was willing to vote for an African American man, but only if that candidate was a conservative, moderate, anti-abortion, antiwar, and so on? Then we would miss that additional detail when the participant responded "yes" to our question. Of course, you could add a question to your survey about moderate vs. radical candidates, but could you do that for all of the relevant attributes of candidates for all people? Moreover, how do you know that moderate or antiwar means the same thing to everyone who participates in your survey? Without having a conversation with someone and asking follow-up questions, survey research can lack enough detail to understand how people truly think.

In sum, potential drawbacks to survey research include the following:

  • Inflexibility
  • Lack of depth
  • Problems specific to cross-sectional surveys, which we will address in the next section.

Secondary analysis of survey data

This chapter is designed to help you conduct your own survey, but that is not the only option for social work researchers. Look back to Chapter 2 and recall our discussion of secondary data analysis. As we talked about previously, using data collected by another researcher can have a number of benefits. Well-funded researchers have the resources to recruit a large representative sample and ensure their measures are valid and reliable prior to sending them to participants. Before you get too far into designing your own data collection, make sure there are no existing data sets out there that you can use to answer your question. We refer you to Chapter 2 for a full discussion of the strengths and challenges of secondary analysis of survey data.

Key Takeaways

  • Strengths of survey research include its cost-effectiveness, generalizability, reliability, and versatility.
  • Weaknesses of survey research include inflexibility and lack of potential depth. There are also weaknesses specific to cross-sectional surveys, the most common type of survey.

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

If you are using quantitative methods in a student project, it is very likely that you are going to use survey design to collect your data.

  • Check to make sure that your research question and study fit best with survey design using the criteria in this section
  • Remind yourself of any limitations to generalizability based on your sampling frame.
  • Refresh your memory on the operational definitions you will use for your dependent and independent variables.

TRACK 2 (IF YOU AREN'T CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

You are interested in understanding more about the needs of unhoused individuals in rural communities, including how these needs vary based on demographic characteristics and personal identities.

  • Develop a working research question for this topic.
  • Using the criteria for survey design described in this section, do you think a survey would be appropriate to answer your research question? Why or why not?
  • What are the potential limitations to generalizability if you select survey design to answer this research question?
  • Unless researchers change the order of questions as part of their methodology to ensure accurate responses to each item
  • Smith, T. W. (2009). Trends in willingness to vote for a Black and woman for president, 1972–2008. GSS Social Change Report No. 55. Chicago, IL: National Opinion Research Center.

Survey research: the use of questionnaires to gather data from multiple participants.

Sample: the group of people you successfully recruit from your sampling frame to participate in your study.

Questionnaire: a research instrument consisting of a set of questions (items) intended to capture responses from participants in a standardized manner.

Self-report: a format in which a participant answers questions about themselves.

Unit of observation: the entities that a researcher actually observes, measures, or collects in the course of trying to learn something about her unit of analysis (individuals, groups, or organizations).

Unit of analysis: the entity that a researcher wants to say something about at the end of her study (individual, group, or organization).

Feasibility: whether you can practically and ethically complete the research project you propose.

Key informant: someone who is especially knowledgeable about a topic being studied.

Proxy: a person who completes a survey on behalf of another person.

Indirect observables: things that require subtle and complex observations to measure; we may have to use existing knowledge and intuition to define them.

Constructs: conditions that are not directly observable and that represent states of being, experiences, and ideas.

Reliability: the degree to which an instrument reflects the true score rather than error. In statistical terms, reliability is the portion of observed variability in the sample that is accounted for by true variability, not by error. Note: reliability is necessary, but not sufficient, for measurement validity.

Secondary data analysis: analyzing data that has been collected by another person or research group.

Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.

5 Approaching Survey Research

What is survey research?

Survey research is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports (using questionnaires or interviews). In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers to. Although survey data are often analyzed using statistics, many questions lend themselves to more qualitative analysis.

Most survey research is non-experimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population, etc.) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be used within experimental research, as long as there is manipulation of an independent variable (e.g., anger vs. fear) to assess an effect on a dependent variable (e.g., risk judgments).

Chapter 5: Learning Objectives

If your research question(s) center on the experience or perception of a particular phenomenon, process, or practice, utilizing a survey method may help glean useful data. After reading this chapter, you will

  • Identify the purpose of survey research
  • Describe the cognitive processes involved in responding to questions
  • Discuss the importance of context in drafting survey items
  • Contrast the utility of open and closed ended questions
  • Describe the BRUSO method of drafting survey questions
  • Describe the format for survey questionnaires

The heart of any survey research project is the survey itself. Although it is easy to think of interesting questions to ask people, constructing a good survey is not easy at all. The problem is that the answers people give can be influenced in unintended ways by the wording of the items, the order of the items, the response options provided, and many other factors. At best, these influences add noise to the data. At worst, they result in systematic biases and misleading results. In this section, therefore, we consider some principles for constructing surveys to minimize these unintended effects and thereby maximize the reliability and validity of respondents’ answers.

Cognitive Processes of Responses

To best understand how to write a "good" survey question, it is important to frame the act of responding to a survey question as a cognitive process. That is, there are involuntary mechanisms that take place when someone is asked a question. Sudman, Bradburn, and Schwarz (1996, as cited in Jhangiani et al., 2012) illustrate this cognitive process as follows.

Progression of a cognitive response: first, the respondent must understand the question; next, retrieve relevant information from memory; then formulate a tentative judgment based on that information; and finally edit the response to fit the response options provided by the survey.

Framing the formulation of survey questions in this way is extremely helpful to ensure that the questions posed on your survey glean accurate information.

Example of a Poorly Worded Survey Question

How many alcoholic drinks do you consume in a typical day?

  • A lot more than average
  • Somewhat more than average
  • Average number
  • Somewhat fewer than average
  • A lot fewer than average

Although this item at first seems straightforward, it poses several difficulties for respondents. First, they must interpret the question. For example, they must decide whether "alcoholic drinks" include beer and wine (as opposed to just hard liquor) and whether a "typical day" is a typical weekday, typical weekend day, or both. Chang and Krosnick (2003, as cited in Jhangiani et al., 2012) found that asking about "typical" behavior is more valid than asking about "past" behavior; note, though, that their study compared a "typical week" to the "past week," and results may differ when respondents must weigh typical weekdays against weekend days. Once respondents have interpreted the question, they must retrieve relevant information from memory to answer it. But what information should they retrieve, and how should they go about retrieving it? They might think vaguely about some recent occasions on which they drank alcohol, they might carefully try to recall and count the number of alcoholic drinks they consumed last week, or they might retrieve some existing beliefs that they have about themselves (e.g., "I am not much of a drinker"). Then they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day. For example, this mental calculation might mean dividing the number of alcoholic drinks they consumed last week by seven to come up with an average number per day. Then they must format this tentative answer in terms of the response options actually provided. In this case, the options pose additional problems of interpretation. For example, what does "average" mean, and what would count as "somewhat more" than average? Finally, they must decide whether they want to report the response they have come up with or whether they want to edit it in some way. For example, if they believe that they drink a lot more than average, they might not want to report that for fear of looking bad in the eyes of the researcher, so instead they may opt to select the "somewhat more than average" option.

From this perspective, what at first appears to be a simple matter of asking people how much they drink (and receiving a straightforward answer from them) turns out to be much more complex.

Context Effects on Survey Responses

Again, this complexity can lead to unintended influences on respondents’ answers. These are often referred to as context effects because they are not related to the content of the item but to the context in which the item appears (Schwarz & Strack, 1990, as cited in Jhangiani et al. 2012). For example, there is an item-order effect when the order in which the items are presented affects people’s responses. One item can change how participants interpret a later item or change the information that they retrieve to respond to later items. For example, researcher Fritz Strack and his colleagues asked college students about both their general life satisfaction and their dating frequency (Strack, Martin, & Schwarz, 1988, as cited in Jhangiani et al. 2012) . When the life satisfaction item came first, the correlation between the two was only −.12, suggesting that the two variables are only weakly related. But when the dating frequency item came first, the correlation between the two was +.66, suggesting that those who date more have a strong tendency to be more satisfied with their lives. Reporting the dating frequency first made that information more accessible in memory so that they were more likely to base their life satisfaction rating on it.
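The figures Strack and colleagues report are ordinary Pearson correlation coefficients between the two item scores. As a minimal sketch with invented ratings (not Strack's data), the same statistic can be computed directly:

```python
import numpy as np

# Hypothetical 1-10 ratings from eight respondents to the two items
life_satisfaction = np.array([7, 4, 8, 5, 6, 3, 9, 5])
dating_frequency = np.array([6, 3, 7, 4, 6, 2, 9, 4])

# Pearson r: the covariance of the two items scaled by their standard deviations
r = np.corrcoef(life_satisfaction, dating_frequency)[0, 1]
print(round(r, 2))
```

In an actual order-effect study, the researcher would compute r separately for the two questionnaire versions (satisfaction item first vs. dating item first) and compare the results.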

The response options provided can also have unintended effects on people's responses (Schwarz, 1999, as cited in Jhangiani et al. 2012). For example, when people are asked how often they are "really irritated" and given response options ranging from "less than once a year" to "more than once a month," they tend to think of major irritations and report being irritated infrequently. But when they are given response options ranging from "less than once a day" to "several times a month," they tend to think of minor irritations and report being irritated frequently. People also tend to assume that middle response options represent what is normal or typical. So if they think of themselves as normal or typical, they tend to choose middle response options. For example, people are likely to report watching more television when the response options are centered on a middle option of 4 hours than when centered on a middle option of 2 hours. To mitigate order effects, rotate questions and response items when there is no natural order. Counterbalancing or randomizing the order in which questions are presented in online surveys is good practice and can reduce response-order effects, such as the finding that among undecided voters, the first candidate listed on a ballot receives a 2.5% boost simply by virtue of being listed first.
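The counterbalancing and randomization recommended above can be implemented by shuffling item order independently for each respondent. A sketch under the assumption that questions are stored as a simple list (the question texts and respondent IDs are invented):

```python
import random

questions = [
    "How satisfied are you with your life as a whole?",
    "How often did you go on a date in the past month?",
    "How satisfied are you with your current housing?",
]

def randomized_order(items, respondent_id):
    """Return a per-respondent shuffle, seeded so each respondent's order can be replayed."""
    rng = random.Random(respondent_id)  # seed from the respondent ID for reproducibility
    shuffled = list(items)              # copy so the master list is untouched
    rng.shuffle(shuffled)
    return shuffled

print(randomized_order(questions, respondent_id=42))
```

Seeding from the respondent ID keeps the presentation order reproducible, which matters when you later want to test whether item order itself affected responses.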

Writing Survey Items

Types of Items

Questionnaire items can be either open-ended or closed-ended. Open-ended  items simply ask a question and allow participants to answer in whatever way they choose. The following are examples of open-ended questionnaire items.

  • “What is the most important thing to teach children to prepare them for life?”
  • “Please describe a time when you were discriminated against because of your age.”
  • “Is there anything else you would like to tell us about?”

Open-ended items are useful when researchers do not know how participants might respond or when they want to avoid influencing their responses. Open-ended items are more qualitative in nature, so they tend to be used when researchers have more vaguely defined research questions—often in the early stages of a research project. Open-ended items are relatively easy to write because there are no response options to worry about. However, they take more time and effort on the part of participants, and they are more difficult for the researcher to analyze because the answers must be transcribed, coded, and submitted to some form of qualitative analysis, such as content analysis. Another disadvantage is that respondents are more likely to skip open-ended items because they take longer to answer. It is best to use open-ended questions when the range of answers is not known in advance, or for quantities that can easily be converted to categories later in the analysis.
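The coding step mentioned above can be partially mechanized once a coding scheme exists. A toy sketch of the counting stage of a content analysis (the answers and keyword scheme are invented; a real content analysis relies on trained human coders):

```python
from collections import Counter

# Hypothetical open-ended answers to "What is the most important thing to teach children?"
answers = [
    "Honesty above everything else",
    "To work hard and be honest",
    "Kindness toward others",
    "Hard work pays off",
]

# Simple keyword-based coding scheme mapping each code to trigger phrases
codes = {
    "honesty": ["honest"],
    "work ethic": ["work hard", "hard work"],
    "kindness": ["kind"],
}

counts = Counter()
for answer in answers:
    text = answer.lower()
    for code, keywords in codes.items():
        if any(k in text for k in keywords):
            counts[code] += 1  # tally each code at most once per answer

print(dict(counts))
```

Once answers are coded this way, the frequencies can be analyzed like any closed-ended variable.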

Closed-ended items ask a question and provide a set of response options for participants to choose from.

Examples of  Closed-Ended Questions

How old are you?

On a scale of 0 (no pain at all) to 10 (the worst pain ever experienced), how much pain are you in right now?

Closed-ended items are used when researchers have a good idea of the different responses that participants might make. They are more quantitative in nature, so they are also used when researchers are interested in a well-defined variable or construct such as participants’ level of agreement with some statement, perceptions of risk, or frequency of a particular behavior. Closed-ended items are more difficult to write because they must include an appropriate set of response options. However, they are relatively quick and easy for participants to complete. They are also much easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet. For these reasons, closed-ended items are much more common.

All closed-ended items include a set of response options from which a participant must choose. For categorical variables like sex, race, or political party preference, the categories are usually listed and participants choose the one (or ones) to which they belong. For quantitative variables, a rating scale is typically provided. A rating scale is an ordered set of responses that participants must choose from.

Example Likert scale: responses range from 1 to 5, where 1 indicates "strongly disagree" and 5 indicates "strongly agree."

The number of response options on a typical rating scale ranges from three to 11, although five and seven are probably most common. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into an area of the scale; if asking about liking ice cream, first ask "Do you generally like or dislike ice cream?" Once the respondent chooses like or dislike, refine it by offering them the relevant choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993, as cited in Jhangiani et al. 2012). Although you often see scales with numerical labels, it is best to present only verbal labels to the respondents and convert them to numerical values in the analyses. Avoid partial labels and overly long or specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics.
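The advice to show respondents only verbal labels and convert them to numbers at analysis time amounts to applying a fixed mapping. A sketch using the seven-point liking labels above; the -3 to +3 coding is one common convention, assumed here for illustration:

```python
# Verbal labels shown to respondents; numeric codes applied only during analysis
LIKING_CODES = {
    "Dislike very much": -3,
    "Dislike somewhat": -2,
    "Dislike slightly": -1,
    "Neither like nor dislike": 0,
    "Like slightly": 1,
    "Like somewhat": 2,
    "Like very much": 3,
}

# Hypothetical responses from four participants
responses = ["Like somewhat", "Dislike slightly", "Like very much", "Neither like nor dislike"]
coded = [LIKING_CODES[r] for r in responses]
mean_liking = sum(coded) / len(coded)
print(coded, mean_liking)
```

Keeping the mapping in one place also guarantees that every analyst codes the labels identically.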

Writing Effective Items

We can now consider some principles of writing questionnaire items that minimize unintended context effects and maximize the reliability and validity of participants’ responses. A rough guideline for writing questionnaire items is provided by the BRUSO model (Peterson, 2000, as cited in Jhangiani et al. 2012). An acronym, BRUSO stands for “brief,” “relevant,” “unambiguous,” “specific,” and “objective.” Effective questionnaire items are brief and to the point. They avoid long, overly technical, or unnecessary words. This brevity makes them easier for respondents to understand and faster for them to complete. Effective questionnaire items are also relevant to the research question. If a respondent’s sexual orientation, marital status, or income is not relevant, then items on them should probably not be included. Again, this makes the questionnaire faster to complete, but it also avoids annoying respondents with what they will rightly perceive as irrelevant or even “nosy” questions. Effective questionnaire items are also unambiguous; they can be interpreted in only one way. Part of the problem with the alcohol item presented earlier in this section is that different respondents might have different ideas about what constitutes “an alcoholic drink” or “a typical day.” Effective questionnaire items are also specific so that it is clear to respondents what their response should be about and clear to researchers what it is about. A common problem here is closed-ended items that are “double-barreled.” They ask about two conceptually separate issues but allow only one response.

Example of a “Double Barreled” question

Please rate the extent to which you have been feeling anxious and depressed.

Note: The problem is that anxiety and depression are two separate constructs, so they should be asked about in two separate items.

Finally, effective questionnaire items are objective in the sense that they do not reveal the researcher’s own opinions or lead participants to answer in a particular way. The best way to know how people interpret the wording of the question is to conduct a pilot test and ask a few people to explain how they interpreted the question. 


For closed-ended items, it is also important to create an appropriate response scale. For categorical variables, the categories presented should generally be mutually exclusive and exhaustive. Mutually exclusive categories do not overlap. For a religion item, for example, the categories of Christian and Catholic are not mutually exclusive but Protestant and Catholic are mutually exclusive. Exhaustive categories cover all possible responses. Although Protestant and Catholic are mutually exclusive, they are not exhaustive because there are many other religious categories that a respondent might select: Jewish, Hindu, Buddhist, and so on. In many cases, it is not feasible to include every possible category, in which case an ‘Other’ category, with a space for the respondent to fill in a more specific response, is a good solution. If respondents could belong to more than one category (e.g., race), they should be instructed to choose all categories that apply.
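The "Other (please specify)" solution can be enforced when responses are tabulated: anything outside the listed categories is routed to the catch-all, so the category set stays exhaustive. A sketch with an invented religion item:

```python
# Listed categories for a closed-ended religion item; "Other" keeps the set exhaustive
CATEGORIES = {"Protestant", "Catholic", "Jewish", "Muslim", "Hindu", "Buddhist", "None"}

def code_response(raw):
    """Map a raw answer onto a listed category; anything unlisted becomes 'Other'."""
    answer = raw.strip().title()  # normalize whitespace and capitalization
    return answer if answer in CATEGORIES else "Other"

print([code_response(r) for r in ["catholic", "Sikh", "Protestant "]])
```

The free-text specifications collected alongside "Other" can later be reviewed to decide whether a new category deserves its own response option.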

For rating scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint.

Example of an unbalanced versus balanced rating scale

Unbalanced rating scale measuring perceived likelihood

Unlikely | Somewhat Likely | Likely | Very Likely | Extremely Likely

Balanced rating scale measuring perceived likelihood

Extremely Unlikely | Somewhat Unlikely | As Likely as Not | Somewhat Likely | Extremely Likely

Note, however, that a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. On the other hand, including a middle alternative on bipolar dimensions allows respondents to choose an option that is genuinely neither one extreme nor the other.

Formatting the Survey

Writing effective items is only one part of constructing a survey. For one thing, every survey should have a written or spoken introduction that serves two basic functions (Peterson, 2000, as cited in Jhangiani et al., 2012). One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. This means that the researcher has only a moment to capture the attention of the respondent and must make it as easy as possible for the respondent to participate. Thus the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.

The second function of the introduction is to establish informed consent. Remember that this involves describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and so on. Written consent forms are not always used in survey research (when the research poses minimal risk, an IRB will often accept completion of the survey instrument as evidence of consent to participate), so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.

The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. Of course, any survey should end with an expression of appreciation to the respondent.

Coding your survey responses

Once you’ve closed your survey, you’ll need to decide how to quantify the data you’ve collected. Much of this can be done in ways similar to the methods described in the previous two chapters. Although there are several ways to do this, here are some general tips:

  • Transfer data: Transfer your data to a program which will allow you to organize and ‘clean’ the data. If you’ve used an online tool to gather data, you should be able to download the survey results in a format appropriate for working with the data. If you’ve collected responses by hand, you’ll need to input the data manually.
  • Save: ALWAYS save a copy of your original data. Save changes you make to the data under a different name or version in case you need to refer back to the original data.
  • De-identify: This step will depend on the overall approach that you’ve taken to answer your research question and may not be appropriate for your project.
  • Name the variables: Again, there is no ‘right’ way to do this; however, as you move forward, you will want to be sure you can easily identify what data you are extracting. Many times, when you transfer your data, the program will automatically associate the data collected with the question asked. It is a good idea to name each variable something associated with the data it contains, rather than after the question itself.
  • Code the attributes: Each variable will likely have several different attributes, or layers. You’ll need to come up with a coding method to distinguish the different responses. As discussed in previous chapters, each attribute should have a numeric code associated with it so that you can quantify the data and use descriptive and/or inferential statistical methods to either describe or explore relationships within the dataset.

Most online survey tools will download data into a spreadsheet-type program and organize that data in association with the question asked. Naming the variables so that you can easily identify the information will be helpful as you proceed to analysis.

This is relatively simple to accomplish with closed-ended questions. Because you’ve ‘forced’ the respondent to pick a concrete answer, you can create a code that is associated with each answer. For example, suppose respondents were asked to identify their region, given a list of geographical regions, and instructed to pick one. The researcher could then create a code for the regions: in this case, 1 = West; 2 = Midwest; 3 = Northeast; 4 = Southeast; and 5 = Southwest. If you’re working to quantify data that is somewhat qualitative in nature (i.e., open-ended questions), the process is a little more complicated. You’ll need to create themes or categories, classify similar types of responses, and then assign codes to those themes or categories.
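As a brief sketch of this kind of coding in practice (the numeric codes follow the region example above; the response values and variable names are hypothetical):

```python
# Numeric codes for a closed-ended "region" item, as in the example above.
region_codes = {
    "West": 1,
    "Midwest": 2,
    "Northeast": 3,
    "Southeast": 4,
    "Southwest": 5,
}

# Hypothetical raw responses, e.g. downloaded from an online survey tool.
responses = ["Midwest", "West", "Southwest", "Midwest"]

# Translate each text response into its numeric code.
coded = [region_codes[r] for r in responses]
print(coded)  # [2, 1, 5, 2]
```

Open-ended responses need the extra step described above: define themes or categories first, and then assign each response a theme code in the same way.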

  • Create a codebook: This is essential. Once you begin to code the data, you will have somewhat disconnected yourself from the data by translating it from a language that we understand into a language that a computer understands. After you run your statistical methods, you’ll translate it back to the native language and share findings. To stay organized and accurate, it is important that you keep a record of how the data has been translated.

  • Analyze: Once you have the data inputted, cleaned, and coded, you should be ready to analyze your data using either descriptive or inferential methods, depending on your approach and overarching goal.
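Putting the coding and analysis steps together, the descriptive side can be as simple as tabulating frequencies and percentages for each coded value. A minimal sketch using Python's standard library (the coded values below are hypothetical):

```python
from collections import Counter

# Hypothetical coded responses for one variable (1 = West ... 5 = Southwest).
coded = [2, 1, 5, 2, 3, 2, 1]
n = len(coded)

# Count how often each code appears.
counts = Counter(coded)

# Report the frequency and percentage for each response code.
for code, count in sorted(counts.items()):
    print(f"code {code}: {count} ({100 * count / n:.1f}%)")
```

A codebook entry for this variable would record which region each numeric code stands for, so the percentages can be translated back into plain language when you share findings.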

Key Takeaways

  • Surveys are a great method to identify information about perceptions and experiences
  • Question items must be carefully crafted to elicit an appropriate response
  • Surveys are often a mixed-methods approach to research
  • Both descriptive and inferential statistical approaches can be applied to the data gleaned through survey responses
  • Surveys utilize both open and closed ended questions; identifying which types of questions will yield specific data will be helpful as you plan your approach to analysis
  • Most surveys will need to include a method of informed consent and an introduction. The introduction should clearly delineate the purpose of the survey and how the results will be utilized
  • Pilot tests of your survey can save you a lot of time and heartache. Pilot testing helps to catch issues with item development, accessibility, and the type of information derived, before initiating the survey on a larger scale
  • Survey data can be analyzed much like other types of data; following a systematic approach to coding will help ensure you get the answers you’re looking for
  • This section is adapted from Research Methods in Psychology by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton, which is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
  • The majority of content in these sections can likewise be attributed to Research Methods in Psychology by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton, under the same license.

Glossary

Survey: A mixed-methods approach using self-reports of respondents who are sampled using stringent methods.

Open-ended question: A type of survey question that allows the respondent to insert their own response; typically qualitative in nature.

Closed-ended question: A type of survey question that requires the respondent to select from a predetermined set of responses, leaving little room for subjectivity.

Practical Research: A Basic Guide to Planning, Doing, and Writing Copyright © by megankoster. All Rights Reserved.



Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes . Revised on 10 October 2022.


Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is and how precise you need your results to be. You can use an online sample size calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
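Online sample size calculators typically implement a standard formula for estimating a proportion (Cochran's formula) with a finite population correction. The sketch below assumes a 95% confidence level (z = 1.96), a 5% margin of error, and maximum variability (p = 0.5); the function name is illustrative:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Required sample size for estimating a proportion,
    with a finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)       # adjust for the finite population
    return math.ceil(n)

print(sample_size(10_000))  # 370 responses needed from a population of 10,000
```

Note how weakly the result depends on population size: a population of 10,000 needs about 370 responses, while a population of one million needs only around 385.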

There are two main types of survey:

  • A questionnaire , where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research , due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms .

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data : the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data : the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree )
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree )
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research . They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations .

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis , which is especially suitable for analysing interviews.
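For illustration only, here is a toy sketch of that coding step: real thematic coding relies on human judgement, and the themes and keywords below are entirely hypothetical.

```python
# Hypothetical themes and matching keywords for open-ended customer feedback.
themes = {
    "price": ["expensive", "cheap", "cost"],
    "service": ["staff", "friendly", "rude"],
}

def code_response(text):
    """Assign every matching theme label to a response."""
    text = text.lower()
    matched = [theme for theme, keywords in themes.items()
               if any(word in text for word in keywords)]
    return matched or ["other"]

print(code_response("The staff were friendly but it was expensive"))
# ['price', 'service']
```

In practice you would read a sample of responses first, derive the themes from the data, and have a person (ideally two independent coders) assign the labels; automated matching like this only helps as a first pass.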

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation , or research paper .

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
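As a small illustration of how individual Likert-type responses are combined into an overall scale score (the item responses are hypothetical, and the reverse-scored item shows a common wrinkle):

```python
# One respondent's answers to four 5-point Likert items (1 = strongly disagree).
answers = [4, 5, 2, 4]

# Suppose the third item is negatively worded, so it must be reverse-scored:
# on a 5-point scale, a response a becomes 6 - a.
reverse_items = {2}  # zero-based positions of reverse-scored items
scored = [6 - a if i in reverse_items else a for i, a in enumerate(answers)]

total = sum(scored)   # overall scale score, possible range 4-20
print(scored, total)  # [4, 5, 4, 4] 17
```

It is this summed (or averaged) score, not the individual ordinal items, that researchers sometimes treat as interval data.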

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.


McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 12 March 2024, from https://www.scribbr.co.uk/research-methods/surveys/



9.1 Overview of Survey Research

Learning Objectives

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research is a quantitative approach that has two important characteristics. First, the variables of interest are measured using self-reports. In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers.

Most survey research is nonexperimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be experimental. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987). By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course it was. (We will consider the reasons that Gallup was right later in this chapter.)

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See “What Is a Likert Scale?” in Section 9.2 “Constructing Survey Questionnaires”.) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of college students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States (see http://www.hcp.med.harvard.edu/ncs). In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. Table 9.1 “Some Lifetime Prevalence Results From the National Comorbidity Survey” presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders and also to clinicians and policymakers who need to understand exactly how common these disorders are.

Table 9.1 Some Lifetime Prevalence Results From the National Comorbidity Survey

And as the opening example makes clear, survey research can even be used to conduct experiments to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on college students. Although this is not a typical use of survey research, it certainly illustrates the flexibility of this approach.

Key Takeaways

  • Survey research is a quantitative approach that features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.
  • Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.

Discussion: Think of a question that each of the following professionals might try to answer using survey research.

  • a social psychologist
  • an educational researcher
  • a market researcher who works for a supermarket chain
  • the mayor of a large city
  • the head of a university police force

Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960 . Berkeley, CA: University of California Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


7.1: Overview of Survey Research

  • Last updated
  • Save as PDF
  • Page ID 16123

Learning Objectives

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports (using questionnaires or interviews). In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers. Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.

Most survey research is non-experimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be experimental. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987) [1] . By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course, it was. (We will consider the reasons that Gallup was right later in this chapter.) Interest in surveying around election times has led to several long-term projects, notably the Canadian Election Studies which has measured opinions of Canadian voters around federal elections since 1965. Anyone can access the data and read about the results of the experiments in these studies (see http://ces-eec.arts.ubc.ca/ )

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale . (See “What Is a Likert Scale?” in Section 7.2 ). Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of university students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States (see http://www.hcp.med.harvard.edu/ncs ). In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. Table \(\PageIndex{1}\) presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders and to clinicians and policymakers who need to understand exactly how common these disorders are.
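
Lifetime prevalence itself is just a proportion of the sample, so the calculation is easy to sketch. The figures below are invented for illustration and are not the actual National Comorbidity Survey estimates:

```python
def lifetime_prevalence(ever_had, sample_size):
    """Percentage of the sample that has ever experienced the disorder."""
    return 100.0 * ever_had / sample_size

# Invented counts for a hypothetical structured-interview sample.
sample_size = 9282
cases = {"major depressive disorder": 1547, "panic disorder": 437}

for disorder, ever_had in cases.items():
    print(f"{disorder}: {lifetime_prevalence(ever_had, sample_size):.1f}%")
```

The same arithmetic, applied to each disorder measured in the interview, yields a prevalence table like the one referenced above.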

And as the opening example makes clear, survey research can even be used to conduct experiments to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on university students. Although this approach is not a typical use of survey research, it certainly illustrates the flexibility of this method.

Key Takeaways

  • Survey research features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.
  • Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.
Exercise: Think of a research question that each of the following might answer using survey research:

  • a social psychologist
  • an educational researcher
  • a market researcher who works for a supermarket chain
  • the mayor of a large city
  • the head of a university police force
  • Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960 . Berkeley, CA: University of California Press.

Research Reports: Definition and How to Write Them

Research reports span a vast range of topics, but each is focused on communicating information about a particular topic to a niche target market. The primary purpose of a research report is to convey the essential details of a study for marketers to consider while designing new strategies.

Certain events, facts, and other incident-based information need to be relayed to the people in charge, and research reports are the most effective communication tool for doing so. An ideal research report is highly accurate, has a clear objective and conclusion, and follows a clean, structured format so that it relays information effectively.

What are Research Reports?

Research reports are documents prepared by researchers or statisticians after analyzing information gathered through organized research, typically surveys or qualitative methods.

A research report is a reliable source for recounting the details of a study and is often treated as the definitive record of all the work done to capture its specifics.

The various sections of a research report are:

  • Background/Introduction
  • Methods
  • Results
  • Discussion
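
As a rough illustration, these sections can serve as the skeleton of a report template. The helper below is a sketch; the function name, section labels, and placeholders are ours, not a standard:

```python
# Generate a markdown skeleton for a research report from a section list.
# Section labels and the "TODO" placeholders are illustrative assumptions.
SECTIONS = ["Background/Introduction", "Methods", "Results", "Discussion"]

def report_skeleton(title, sections=SECTIONS):
    """Return a markdown outline with one heading per report section."""
    lines = [f"# {title}", ""]
    for section in sections:
        lines += [f"## {section}", "", "TODO", ""]
    return "\n".join(lines)

print(report_skeleton("Customer Satisfaction Survey, Q3"))
```

Starting from a fixed skeleton like this helps keep reports across studies in a consistent format.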

Components of Research Reports

Research is imperative when launching a new product, service, or feature. Today’s markets are extremely volatile and competitive, with new entrants appearing every day whose products may or may not be effective. To stay relevant in such a market, an organization must make the right decisions at the right time and keep its products updated to meet customer demands.

The details of a research report may change with the purpose of the research, but its main components remain constant. The market researcher’s approach also influences the style of the report. The main components of an effective research report are:

  • Research Report Summary: The overall objective and an overview of the research are condensed into a summary a couple of paragraphs long. Each component of the research is briefly explained here, and the summary should be interesting enough to capture all the key elements of the report.
  • Research Introduction: There is always a primary goal the researcher is trying to achieve through the report. In the introduction, the researcher frames this goal and establishes the thesis the report will strive to answer in detail. This section should answer one integral question: what is the current status of the goal? State whether, after the research was conducted, the organization achieved the goal or whether it remains a work in progress.
  • Research Methodology: This is the most important section of the report, where all the essential information lies. Readers can draw on it for data about the topic and judge the quality of the content, and other market researchers can use it to validate the work. It therefore needs to be highly informative, with each aspect of the research discussed in detail. Present information in order of priority and importance, and include references wherever the research draws on existing techniques.
  • Research Results: A short description of the results, along with the calculations carried out to achieve the goal, forms this section. The fuller interpretation of the data analysis usually belongs in the discussion section.

  • Research Discussion: The results are discussed in full detail in this section, along with a comparative analysis against any reports that exist in the same domain. Any anomaly uncovered during the research should be deliberated here. While writing this section, the researcher must connect the dots on how the results apply in the real world.
  • Research References and Conclusion: Summarize all the research findings, and cite every author, article, or other content piece from which references were taken.

15 Tips for Writing Research Reports

Writing research reports the wrong way can send all that effort down the drain. Here are 15 tips for writing impactful research reports:

  • Prepare the context before starting to write, and start from the basics: As we were taught in school, be well prepared before plunging into a new topic. The order of the survey questions is not necessarily the ideal or most effective order for the report; the idea is to start with a broader topic, work toward a more specific one, and build to a conclusion that the research supports with facts. The hardest part of report writing, without a doubt, is starting. Begin with the title and the introduction, then document the first findings and continue from there. Once the information is well documented, a general conclusion can be written.
  • Keep the target audience in mind while selecting a format that is clear, logical, and obvious to them: Will the report be presented to decision makers or to other researchers? What are the general perceptions around the topic? This requires care and diligence, and a researcher will need a significant amount of information before starting to write. Be consistent with wording, the numbering of annexes, and so on. Follow the company’s approved format for delivering research reports, and align the project with the company’s objectives.
  • Have a clear research objective: Reread the entire proposal and make sure the data provided contributes to the objectives raised at the beginning. Remember that speculation belongs in conversations, not in research reports; a researcher who speculates directly undermines their own research.
  • Establish a working model: Each study must have an internal logic, which has to be established in both the report and the evidence. A researcher’s worst nightmare is being required to write the report and realizing that key questions were never asked.

  • Gather all the information about the research topic: Who are our customers’ competitors? Talk to other researchers who have studied the subject, and learn the language of the industry. Misusing its terms can discourage readers of the report from reading further.
  • Read aloud while writing: If the researcher stumbles over the words when reading them, the reader surely will too. If an idea does not fit in a single sentence, the sentence is too long and must be reworked until the idea is clear to everyone.
  • Check grammar and spelling: Good practices undoubtedly make the report easier to understand. Consider using the present tense, which makes the results sound more immediate. Find new words and other ways of saying things, and have fun with the language whenever possible.
  • Discuss only the findings that are significant: If some data are not truly significant, do not mention them. Not everything is important or essential within a research report.

  • Stick to the survey questions: For example, do not say that the people surveyed “were worried” about a research issue when there were different degrees of concern.
  • Make graphs self-explanatory: Do not let graphs lead the reader astray. Give each graph a title, and include the legend, the sample size, and the exact wording of the question.
  • Be clear with messages: Write every section of the report with precise details and language.
  • Be creative with titles: Particularly in segmentation studies, choose names “that give life to the research.” Such names can survive long after the initial investigation.
  • Create an effective conclusion: The conclusion is the hardest part of a research report to write, but it is an incredible opportunity to excel. Make a precise summary. It sometimes helps to start the conclusion with something specific, then describe the most important part of the study, and finally spell out the implications of the findings.
  • Get another pair of eyes on the report: Writers have trouble detecting their own mistakes, yet they are responsible for what is presented. Have colleagues or friends review the report before sending out the final draft.
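
The charting advice above (a title, the exact question wording, the sample size) can be illustrated with a small text-based chart. The function and response counts below are invented for the example:

```python
# Render survey response counts as a labeled text bar chart that carries
# a title, the question wording, and the sample size. Invented data.
def text_bar_chart(title, question, counts, width=30):
    """Return a text bar chart with title, question wording, and n."""
    n = sum(counts.values())
    top = max(counts.values())
    lines = [title, f'Question: "{question}" (n = {n})']
    for label, count in counts.items():
        bar = "#" * round(width * count / top)
        lines.append(f"{label:<12} {bar} {100 * count / n:.0f}%")
    return "\n".join(lines)

responses = {"Satisfied": 180, "Neutral": 90, "Dissatisfied": 30}
print(text_bar_chart("Overall satisfaction",
                     "How satisfied are you with our service?", responses))
```

Whatever charting tool is actually used, the same three labels belong on every graph so it can be read without the surrounding text.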


Handbook of Market Research, pp. 1–53

Crafting Survey Research: A Systematic Process for Conducting Survey Research

  • Arnd Vomberg
  • Martin Klarmann
  • Living reference work entry
  • First Online: 17 April 2021

Surveys represent flexible and powerful ways for practitioners to gain insights into customers and markets and for researchers to develop, test, and generalize theories. However, conducting effective survey research is challenging. Survey researchers must induce participation by “over-surveyed” respondents, choose appropriately from among numerous design alternatives, and account for the respondents’ complex psychological processes when answering the survey. The aim of this chapter is to guide investigators in the effective design of their surveys. We discuss state-of-the-art research findings on measurement biases (i.e., common method bias, key informant bias, social desirability bias, and response patterns) and representation biases (i.e., non-sampling bias and non-response bias) and outline when those biases are likely to occur and how investigators can best avoid them. In addition, we offer a systematic approach for crafting surveys. We discuss key steps and decisions in the survey design process, with a particular focus on standardized questionnaires, and we emphasize how those choices can help alleviate potential biases. Finally, we discuss how investigators can address potential endogeneity concerns in surveys.
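
One of the response patterns the abstract alludes to is straight-lining, where a respondent gives the same rating to every item. A minimal screening sketch, with invented data and a deliberately crude all-identical rule:

```python
# Flag respondents who gave the identical rating to every survey item.
# The data and the all-identical threshold are illustrative assumptions.
def flags_straight_lining(answers):
    """True if every rating in the list is identical."""
    return len(set(answers)) == 1

respondents = {
    "r1": [4, 4, 4, 4, 4, 4],  # identical on all items -> flagged
    "r2": [5, 3, 4, 2, 4, 3],
    "r3": [1, 1, 1, 1, 1, 1],  # flagged
}

flagged = [rid for rid, ans in respondents.items()
           if flags_straight_lining(ans)]
print("flagged for review:", flagged)
```

In practice such flags prompt a closer look (e.g., response times, reversed items) rather than automatic exclusion.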

  • Survey research
  • Survey design
  • Survey research process
  • Measurement theory
  • Common method bias
  • Key informant bias
  • Social desirability
  • Response styles
  • Non-sampling bias
  • Non-response bias
  • Item reversal


Kumar, N., Stern, L. W., & Anderson, J. C. (1993). Conducting interorganizational research using key informants. Academy of Management Journal, 36 (6), 1633–1651.

Kumar, N., Scheer, L. K., & Steenkamp, J. B. E. (1995). The effects of supplier fairness on vulnerable resellers. Journal of Marketing Research, 32 , 54–65.

Kumar, A., Heide, J. B., & Wathne, K. H. (2011). Performance implications of mismatched governance regimes across external and internal relationships. Journal of Marketing, 75 (2), 1–17.

Landwehr, J. R. (2021). Analysis of variance. In C. Homburg, M. Klarmann, & A. Vomberg (Eds.), Handbook of market research . Springer, Cham: Springer. Forthcoming.

LeBreton, J. M., & Senter, J. L. (2008). Answers to 20 questions about interrater reliability and interrater agreement. Organizational Research Methods, 11 (4), 815–852.

Levay, K. E., Freese, J., & Druckman, J. N. (2016). The demographic and political composition of mechanical Turk samples. SAGE Open, 6 (1), 2158244016636433.

Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86 (1), 114.

Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime. com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49 (2), 433–442.

Little, T. D., Lindenberger, U., & Nesselroade, J. R. (1999). On selecting indicators for multivariate measurement and modeling with latent variables: When “good” indicators are bad and “bad” indicators are good. Psychological Methods, 4 (2), 192.

MacKenzie, S. B., & Podsakoff, P. M. (2012). Common method bias in marketing: Causes, mechanisms, and procedural remedies. Journal of Retailing, 88 (4), 542–555.

McElheran, K. (2015). Do market leaders lead in business process innovation? The case (s) of e-business adoption. Management Science, 61 (6), 1197–1216.

McKelvey, B. (1975). Guidelines for the empirical classification of organizations. Administrative Science Quarterly, 20 , 509–525.

Messick, S. (2012). Psychology and methodology of response styles. In R. E. Snow & D. E. Wiley (Eds.), Improving inquiry in social science: A volume in honor of Lee J. Cronbach (pp. 161–200). Hillsdale: Lawrence Erlbaum.

Mick, D. G. (1996). Are studies of dark side variables confounded by socially desirable responding? The case of materialism. Journal of Consumer Research, 23 (2), 106–119.

Mizik, N., & Jacobson, R. (2008). The financial value impact of perceptual brand attributes. Journal of Marketing Research, 45 (1), 15–32.

Moosbrugger, H. (2008). Klassische Testtheorie (KTT). In Testtheorie und Fragebogenkonstruktion (pp. 99–112). Berlin/Heidelberg: Springer.

Naemi, B. D., Beal, D. J., & Payne, S. C. (2009). Personality predictors of extreme response style. Journal of Personality, 77 (1), 261–286.

Nederhof, A. J. (1985). Methods of coping with social desirability bias: A review. European Journal of Social Psychology, 15 (3), 263–280.

Nunnally, J. C. (1967). Psychometric theory . New York: McGraw-Hill.

Ostroff, C., Kinicki, A. J., & Clark, M. A. (2002). Substantive and operational issues of response bias across levels of analysis: An example of climate-satisfaction relationships. Journal of Applied Psychology, 87 (2), 355.

Palmatier, R. W. (2016). Improving and publishing at JAMS: Contribution and positioning. Journal of the Academy of Marketing Science, 44 (6), 655–659.

Paulhus, D. L. (1984). Two-component models of socially desirable responding. Journal of Personality and Social Psychology, 46 (3), 598.

Paulhus, D. L. (2002). Socially desirable responding: The evolution of a construct. In H. Brand, D. N. Jackson, D. E. Wiley, & S. Messick (Eds.), The role of constructs in psychological and educational measurement (pp. 49–69). Mahwah: L. Erlbaum.

Paulhus, D. L., & John, O. P. (1998). Egoistic and moralistic biases in self-perception: The interplay of self-deceptive styles with basic traits and motives. Journal of Personality, 66 (6), 1025–1060.

Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70 , 153–163.

Peterson, R. A. (2001). On the use of college students in social science research: Insights from a second-order meta-analysis. Journal of Consumer Research, 28 (3), 450–461.

Phillips, L. W. (1981). Assessing measurement error in key informant reports: A methodological note on organizational analysis in marketing. Journal of Marketing Research, 18 , 395–415.

Podsakoff, P. M., & Organ, D. W. (1986). Self-reports in organizational research: Problems and prospects. Journal of Management, 12 (4), 531–544.

Podsakoff, P. M., MacKenzie, S. B., Moorman, R. H., & Fetter, R. (1990). Transformational leader behaviors and their effects on followers’ trust in leader, satisfaction, and organizational citizenship behaviors. The Leadership Quarterly, 1 (2), 107–142.

Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88 (5), 879.

Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63 , 539–569.

Presser, S., Couper, M. P., Lessler, J. T., Martin, E., Martin, J., Rothgeb, J. M., & Singer, E. (2004). Methods for testing and evaluating survey questions. Public Opinion Quarterly, 68 (1), 109–130.

Preston, C. C., & Colman, A. M. (2000). Optimal number of response categories in rating scales: Reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, 104 (1), 1–15.

Price, J. L., & Mueller, C. W. (1986). Handbook of organizational measurements . Marshfield: Pittman.

Raval, D. (2020). Whose voice do we hear in the marketplace? Evidence from consumer complaining behavior. Marketing Science, 39 (1), 168–187.

Rindfleisch, A., & Heide, J. B. (1997). Transaction cost analysis: Past, present, and future applications. Journal of marketing, 61 (4), 30–54.

Rindfleisch, A., Malter, A. J., Ganesan, S., & Moorman, C. (2008). Cross-sectional versus longitudinal survey research: Concepts, findings, and guidelines. Journal of Marketing Research, 45 (3), 261–279.

Rogelberg, S. G., & Stanton, J. M. (2007). Introduction: Understanding and Dealing With Organizational Survey Nonresponse. Organizational Research Methods, 10 (2):195–209. https://doi.org/10.1177/1094428106294693

Rogelberg, S. G., Fisher, G. G., Maynard, D. C., Hakel, M. D., & Horvath, M. (2001). Attitudes toward surveys: Development of a measure and its relationship to respondent behavior. Organizational Research Methods, 4 (1), 3–25.

Rossi, P. E. (2014). Even the rich can make themselves poor: A critical examination of IV methods in marketing applications. Marketing Science, 33 (5), 655–672.

Sa Vinhas, A., & Heide, J. B. (2015). Forms of competition and outcomes in dual distribution channels: The distributor’s perspective. Marketing Science, 34 (1), 160–175.

Sande, J. B., & Ghosh, M. (2018). Endogeneity in survey research. International Journal of Research in Marketing, 35 (2), 185–204.

Sarstedt, M., Ringle, C. M., & Hair, J. F. (2021). Partial least squares structural equation modeling. In C. Homburg, M. Klarmann, & A. Vomberg (Eds.), Handbook of market research . Springer, Cham: Springer. Forthcoming.

Schmidt, J., & Bijmolt, T. H. (2019). Accurately measuring willingness to pay for consumer goods: A meta-analysis of the hypothetical bias. Journal of the Academy of Marketing Science, 48 (3), 499–518.

Schuman, H., & Presser, S. (1979). The open and closed question. American Sociological Review, 44 , 692–712.

Schuman, H., & Presser, S. (1981). The attitude-action connection and the issue of gun control. The Annals of the American Academy of Political and Social Science, 455 (1), 40–47.

Schuman, H., & Presser, S. (1996). Questions and answers in attitude surveys: Experiments on question form, wording, and context . New York: Sage.

Schuman, H., Kalton, G., & Ludwig, J. (1983). Context and contiguity in survey questionnaires. Public Opinion Quarterly, 47 (1), 112–115.

Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54 (2), 93.

Schwarz, N. (2003). Self-reports in consumer research: The challenge of comparing cohorts and cultures. Journal of Consumer Research, 29 (4), 588–594.

Schwarz, N., & Scheuring, B. (1992). Selbstberichtete Verhaltens-und Symptomhäufigkeiten: Was Befragte aus Antwortvorgaben des Fragebogens lernen. Zeitschrift für klinische Psychologie.

Schwarz, N., Knäuper, B., Hippler, H. J., Noelle-Neumann, E., & Clark, L. (1991a). Rating scales numeric values may change the meaning of scale labels. Public Opinion Quarterly, 55 (4), 570–582.

Schwarz, N., Strack, F., & Mai, H. P. (1991b). Assimilation and contrast effects in part-whole question sequences: A conversational logic analysis. Public Opinion Quarterly, 55 (1), 3–23.

Seidler, J. (1974). On using informants: A technique for collecting quantitative data and controlling measurement error in organization analysis. American Sociological Review, 39 , 816–831.

Short, J. C., Ketchen, D. J., Jr., & Palmer, T. B. (2002). The role of sampling in strategic management research on performance: A two-study analysis. Journal of Management, 28 (3), 363–385.

Siemsen, E., Roth, A., & Oliveira, P. (2010). Common method bias in regression models with linear, quadratic, and interaction effects. Organizational Research Methods, 13 (3), 456–476.

Skiera, B., Reiner, J., & Albers, S. (2021). Regression analysis. In C. Homburg, M. Klarmann, & A. Vomberg (Eds.), Handbook of market research . Springer, Cham: Springer. Forthcoming.

Steenkamp, J. B. E., De Jong, M. G., & Baumgartner, H. (2010). Socially desirable response tendencies in survey research. Journal of Marketing Research, 47 (2), 199–214.

Sudman, S., & Blair, E. (1999). Sampling in the twenty-first century. Journal of the Academy of Marketing Science, 27 (2), 269–277.

Tellis, G. J., & Chandrasekaran, D. (2010). Extent and impact of response biases in cross-national survey research. International Journal of Research in Marketing, 27 (4), 329–341.

The American Association for Public Opinion Research. (2016). Standard definitions: Final dispositions of case codes and outcome rates for surveys (9th ed.). AAPOR. https://www.aapor.org/AAPOR_Main/media/publications/Standard-Definitions20169theditionfinal.pdf

Thompson, L. F., & Surface, E. A. (2007). Employee surveys administered online: Attitudes toward the medium, nonresponse, and data representativeness. Organizational Research Methods, 10 (2), 241–261.

Tomaskovic-Devey, D., Leiter, J., & Thompson, S. (1994). Organizational survey nonresponse. Administrative Science Quarterly, 39 , 439–457.

Tortolani, R. (1965). Introducing bias intentionally into survey techniques. Journal of Marketing Research, 2 , 51–55.

Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response . Cambridge: Cambridge University Press.

Valli, V., Stahl, F., & Feit, E. (2021). Field experiments. In C. Homburg, M. Klarmann, & A. Vomberg (Eds.), Handbook of market research . Springer, Cham: Springer. Forthcoming.

Van Rosmalen, J., Van Herk, H., & Groenen, P. J. (2010). Identifying response styles: A latent-class bilinear multinomial logit model. Journal of Marketing Research, 47 (1), 157–172.

Visser, P. S., Krosnick, J. A., Marquette, J., & Curtin, M. (1996). Mail surveys for election forecasting? An evaluation of the Columbus Dispatch poll. Public Opinion Quarterly, 60 (2), 181–227.

Vomberg, A., & Wies, S. (2021). Panel data analysis: A non-technical introduction for marketing researchers. In C. Homburg, M. Klarmann, & A. Vomberg (Eds.), Handbook of market research . Springer, Cham: Springer. Forthcoming.

Vomberg, A., Homburg, C., & Bornemann, T. (2015). Talented people and strong brands: The contribution of human capital and brand equity to firm value. Strategic Management Journal, 36 (13), 2122–2131.

Vomberg, A., Homburg, C., & Gwinner, O. (2020). Tolerating and managing failure: An organizational perspective on customer reacquisition management. Journal of Marketing, 84 (5), 117–136.

Warner, S. L. (1965). Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60 , 63–69.

Wathne, K. H., Heide, J. B., Mooi, E. A., & Kumar, A. (2018). Relationship governance dynamics: The roles of partner selection efforts and mutual investments. Journal of Marketing Research, 55 (5), 704–721.

Weijters, B., & Baumgartner, H. (2012). Misresponse to reversed and negated items in surveys: A review. Journal of Marketing Research, 49 (5), 737–747.

Weijters, B., Schillewaert, N., & Geuens, M. (2008). Assessing response styles across modes of data collection. Journal of the Academy of Marketing Science, 36 (3), 409–422.

Weijters, B., Geuens, M., & Schillewaert, N. (2009). The proximity effect: The role of inter-item distance on reverse-item bias. International Journal of Research in Marketing, 26 (1), 2–12.

Weijters, B., Cabooter, E., & Schillewaert, N. (2010a). The effect of rating scale format on response styles: The number of response categories and response category labels. International Journal of Research in Marketing, 27 (3), 236–247.

Weijters, B., Geuens, M., & Schillewaert, N. (2010b). The individual consistency of acquiescence and extreme response style in self-report questionnaires. Applied Psychological Measurement, 34 (2), 105–121.

Weijters, B., Geuens, M., & Baumgartner, H. (2013). The effect of familiarity with the response category labels on item response to Likert scales. Journal of Consumer Research, 40 (2), 368–381.

Weijters, B., Millet, K., & Cabooter, E. (2020). Extremity in horizontal and vertical Likert scale format responses. Some evidence on how visual distance between response categories influences extreme responding. International Journal of Research in Marketing . https://doi.org/10.1016/j.ijresmar.2020.04.002 .

Wessling, K. S., Huber, J., & Netzer, O. (2017). MTurk character misrepresentation: Assessment and solutions. Journal of Consumer Research, 44 (1), 211–230.

Williams, L. J., & Brown, B. K. (1994). Method variance in organizational behavior and human resources research: Effects on correlations, path coefficients, and hypothesis testing. Organizational Behavior and Human Decision Processes, 57 (2), 185–209.

Winkler, J. D., Kanouse, D. E., & Ware, J. E. (1982). Controlling for acquiescence response set in scale development. Journal of Applied Psychology, 67 (5), 555.

Wong, N., Rindfleisch, A., & Burroughs, J. E. (2003). Do reverse-worded items confound measures in cross-cultural consumer research? The case of the material values scale. Journal of Consumer Research, 30 (1), 72–91.

Yammarino, F. J., Skinner, S. J., & Childers, T. L. (1991). Understanding mail survey response behavior a meta-analysis. Public Opinion Quarterly, 55 (4), 613–639.

Yu, J., & Cooper, H. (1983). A quantitative review of research design effects on response rates to questionnaires. Journal of Marketing Research, 20 , 36–44.

Zettler, I., Lang, J. W., Hülsheger, U. R., & Hilbig, B. E. (2015). Dissociating indifferent, directional, and extreme responding in personality data: Applying the three-process model to self-and observer reports. Journal of Personality, 84 (4), 461–472.


Copyright information

© 2021 Springer Nature Switzerland AG

About this entry

Cite this entry.

Vomberg, A., Klarmann, M. (2021). Crafting Survey Research: A Systematic Process for Conducting Survey Research. In: Homburg, C., Klarmann, M., Vomberg, A.E. (eds) Handbook of Market Research. Springer, Cham. https://doi.org/10.1007/978-3-319-05542-8_4-1

DOI: https://doi.org/10.1007/978-3-319-05542-8_4-1

Received: 18 December 2020 | Accepted: 21 December 2020 | Published: 17 April 2021

Publisher: Springer, Cham | Print ISBN: 978-3-319-05542-8 | Online ISBN: 978-3-319-05542-8


Bit Blog

Survey Report: What is it & How to Create it?


So, you’ve conducted a survey and used smart strategies to get as many responses as possible.

But a survey doesn’t end once you’ve captured a lot of responses. Responses are just raw data.

A survey report is how you convert that data into information and put the results to work in your research.

Data left unanalyzed is just a mess of numbers and words that does no one any good. In this article, we’ll show you the ins and outs of writing a fantastic survey report.

Before getting into the step-by-step process, let’s first understand what survey reports are.

What is a Survey Report? (Definition)

A survey report is a document that demonstrates all the important information about the survey in an objective, clear, precise, and fact-based manner.

The report defines:

  • The objective of the survey
  • The number of questions
  • The number of respondents
  • The start and end dates of the survey
  • The demographics of the respondents (geographical area, gender, age, marital status, etc.)
  • Your findings and conclusions from the data

All this data should be presented using graphs, charts, and tables: anything that makes it easy to read and understand.

After reading your survey report, the reader should be clear on:

  • Why you conducted the survey.
  • The time period the survey ran.
  • The channels used to promote and distribute the survey.
  • The demographics of the respondents.
  • What you found out after conducting the survey.
  • How the findings can be implemented to create better results.

Importance of Writing a Survey Report

Here are four compelling reasons why you should always write a survey report:

1. Make a powerful impact

Survey reports reveal the hard numbers behind a scenario, and pairing facts with stats always makes a stronger impact.


For instance, saying “80% of women working in the media sector claim to have faced workplace harassment at some time in their life” is more impactful than saying “Many women face workplace harassment.”

Data is much more powerful when we know its exact proportions. People connect with issues backed by large numbers, as they stir up the room and demand action.

Read more:   How To Create A Customer Survey For Better Insights?

2. Paves the way for decision-making

When you organize large amounts of data into easy-to-read charts and graphs, it becomes useful information. Management then uses this information to make decisions that directly affect the company.

For example, let’s say that you performed a product feedback survey to find out the preferences of your target audience.

Based on their answers, the features of your product can be amended.

After all, it’s the people for whom you are making the product, right?

3. The trend is your friend

This is a common saying among surveyors and researchers. It means that when you ask the same question repeatedly over time, you can spot a trend in how the answers change.

Thus, survey reports help track trends in the market, which also provides insight into what the future could hold.

For instance, if we talk about fashion, bell-bottom jeans were a huge trend in the ’70s. Then, the ’80s saw carrot pants, followed by ankle-high skinny jeans in the ’90s.

Fashion researchers track these changes and bring back some of the old trends. Bell-bottoms and carrot pants came back, and Gen Z is loving it!

4. Helps to gather explicit data

Explicit data, in survey terms, means first-hand data obtained without any tampering along the way. Data often gets contaminated when it is not received first-hand and passes through various mediums.

Survey reports help express information fully and clearly, without beating around the bush. However, keep in mind that a respondent may alter, lie about, or manipulate the data.

First-hand data should not be followed blindly. It must be supported by underlying facts.

With this thought in mind, let’s move to the actual steps required to write a survey report.

Read more:   Formal Reports: What are they & How to Create them!

How to Write a Survey Report in 5 Easy Steps?

If you are here trying to learn how to make a survey report, we can assume that you have already taken two steps:

  • Created & distributed a questionnaire/survey
  • Received responses to it

When it’s time to analyze the collected data, take the following steps:

Step 1: Export Data

Whether you used Google Forms, Typeform, SurveyMonkey, or any other survey platform, the results come in two forms: a spreadsheet filled with data, and graphical and chart representations of that data.

Export this information into your survey report by downloading, printing, or simply copy-pasting the graphical results into your survey report document.

This legitimizes your survey and lets the reader see the exact survey you conducted and its digital results, without any edits or data manipulation.
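As a minimal sketch of this step, assuming your platform exports responses as a CSV file (the file contents and column names below are hypothetical), you can load the raw data with Python’s standard library:

```python
import csv
import io

# Hypothetical CSV export from a survey platform
raw_export = """respondent_id,age,satisfaction
1,34,5
2,29,4
3,41,5
"""

# csv.DictReader turns each row into a dict keyed by the header row,
# so every response can be inspected by question name
responses = list(csv.DictReader(io.StringIO(raw_export)))

print(len(responses))               # number of responses received
print(responses[0]["satisfaction"])  # first respondent's answer
```

In practice you would pass the downloaded file opened with `open(...)` instead of `io.StringIO`, but the parsing step is identical.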

Read more:  How to Embed Google Form to Your Documents?

Step 2: Filter, Analyze and Visualize

The next step is to filter the data. Look out for corrupt, biased, duplicate, or inaccurate responses and filter them out. This step is also called cleaning the data. The cleaned data is then analyzed to infer results.
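A minimal cleaning sketch, continuing with hypothetical field names from a survey export: drop duplicate submissions and rows with missing answers before any analysis.

```python
# Hypothetical raw responses; field names are illustrative only
responses = [
    {"respondent_id": "1", "satisfaction": "5"},
    {"respondent_id": "1", "satisfaction": "5"},  # duplicate submission
    {"respondent_id": "2", "satisfaction": ""},   # incomplete answer
    {"respondent_id": "3", "satisfaction": "4"},
]

seen = set()
cleaned = []
for row in responses:
    if row["respondent_id"] in seen:
        continue  # duplicate: keep only the first submission
    if not row["satisfaction"]:
        continue  # incomplete: no usable answer to analyze
    seen.add(row["respondent_id"])
    cleaned.append(row)

print(len(cleaned))  # rows that survive the cleaning pass
```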

Analyzing the data includes grouping similar aspects, categorizing, and identifying patterns.

For instance, XYZ company conducted an employee satisfaction survey.

In it, they asked: how is the relationship between you and your immediate supervisor?

According to the data, 80% of the workforce said that their supervisor was warm and friendly, while the other 20% gave mixed responses.

On analyzing this, it was determined that most of the workforce helps each other and the chain of command is functioning well.

So XYZ company will group this information and present it like this: ‘4 out of 5 employees working in the XYZ organization claimed to have a warm and friendly relationship with their supervisor.’

From this, it can be concluded that the work environment at XYZ is good. The company can also use this information to attract more talent to the organization.
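The grouping in the XYZ example can be sketched in a few lines of Python; the answers below are illustrative, not real survey data:

```python
from collections import Counter

# Illustrative answers to "How is the relationship between you and
# your immediate supervisor?"
answers = ["warm", "warm", "warm", "warm", "mixed"]

counts = Counter(answers)                   # tally each answer category
share_warm = counts["warm"] / len(answers)  # proportion of "warm" answers

# Express the proportion in reader-friendly statistical terms
print(f"{share_warm:.0%} of employees reported a warm and friendly relationship")
```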

It is always better to put the information in statistical terms, as this makes visualization easier. When visualizing, you present the analyzed data in easily understandable charts, graphs, or figures.

Visualization makes the data come alive!

Step 3: Interpret Data

At this point, all your information is prepped and baked. It’s time to eat!

By eating, we mean drawing inferences from the collected data. Present your findings with supporting facts and figures, and think about the next steps or recommendations for making a change.


But hey, we can’t eat before presenting the dishes first. Presentation is an important element!

However, not every survey report is presented the same way. This leads us to the next step…

Step 4: Recognize the Type of Survey Report

There are various kinds of survey reports, depending on the nature and objective of the survey. Recognize what type your survey falls into. Some common ones include:

  • Employee Satisfaction Survey: This is done to determine employees’ opinions about the workplace. It helps increase employee motivation and makes employees feel heard.
  • Customer Satisfaction Survey: You must have come across one of these in your life. Many restaurants, salons, and companies ask their customers to fill out a feedback form.
  • Market Research Survey: This type of survey is performed to find out the preferences and demographics of the target audience. It also helps with competitor research.
  • Social Survey: This is a survey conducted to gather statistics on social topics such as climate change, waste management, poverty, etc.

Step 5: Structure of the Survey Report

  • Title Page

Add the title of your survey, the name of the organization, the date of submission, the name of your mentor, etc.

  • Background and Objective

Provide a background of the topic at hand, giving insight into why the survey is being conducted. This reason is known as the objective of the report.

  • Methodology

Your methodology should tell the readers about the methods you used to conduct the survey, the channels used to distribute it, and the techniques used to analyze the received data.

  • Conclusion

The conclusion should contain a detailed picture of your findings, listing what you determined from the survey in the form of facts, figures, and statistics.

  • Recommendation

Give the reader the next step or CTA (call to action) in your survey report. After sharing the survey results and their analysis, offer some insightful recommendations to make a change or implement something.

For instance: to improve employee satisfaction and communication, we could hold a weekly town hall meeting.

  • Appendix

Include detailed information that is too long or complicated to put in the body of the report but is used or referred to in it. This may include long mathematical calculations, tables of raw data, in-depth charts, etc.

  • References

Make sure to properly credit every source you drew information from. This is very important, as missing references expose your report to accusations of plagiarism.

  • Table of Contents

Usually, it is included at the beginning of any report or project. However, in the case of the survey report, the table of contents comes at the end. It lists everything present in the report divided into sections and sub-sections.

  • Executive Summary

It is a crucial element of your report. Not everybody has time to go through the entire survey report, so the executive summary should condense the whole survey into one or two pages.

Okay, great! Now that you know how to write a survey report, how about we show you the smartest and fastest way to create yours?

Bit.ai : The Ultimate Tool for Creating Survey Reports


Easily weave any type of digital content into your Bit docs: file attachments, visual web links, cloud files, PDF previews, math equations, videos, and much more. Bit integrates with 80+ popular applications like Google Sheets, OneDrive, Tableau, Typeform, Lucidchart, and more! Now your content, wherever it may reside, can be part of your survey report.

Bit’s impressive editor is collaborative so that you can work with your team at the same time and create smart documents that help you communicate more effectively together.


Your survey report can contain an automated table of contents, embedded Excel tables, interactive Tableau charts, PDF presentations, Google Form surveys, and much more! Having everything in one place lets you easily add context to each element you share, reducing scattered data and tying it all together beautifully.

When you’re ready to share it with the world, you can invite your team members to view your report inside Bit. You can also share it with a live link, embed it on a website, or share a trackable link to monitor the engagement levels of your report. If additional security layers are needed, you can invite your audience as guests who must log in to view your survey report.

Bit will change the way you communicate and is the ultimate tool necessary to create impressive survey reports!

Our team at bit.ai has created a few awesome business templates to make your business processes more efficient. Make sure to check them out before you go; your team might need them!

  • SWOT Analysis Template
  • Business Proposal Template
  • Business Plan Template
  • Competitor Research Template
  • Project Proposal Template
  • Company Fact Sheet
  • Executive Summary Template
  • Operational Plan Template
  • Pitch Deck Template

Conducting a survey isn’t easy.

From preparing the questionnaire to interpreting and presenting the data, it can seem like a heck of a job.

With Bit.ai’s smart document collaboration platform, at least you don’t have to worry about creating an interactive, impressive survey report!

So what are you waiting for?

Try out Bit’s survey report template and let us know how you liked it by tweeting @bit_docs.

Further reads:

How To Create An Effective Status Report?

7 Types of Reports Your Business Certainly Needs!

Incident Report: What is it & How to Write it the Right Way!

Performance Report: What is it & How to Create it? (Steps Included)

Business Report: What is it & How to Write it? (Steps & Format)

Marketing Report: Definition, Types, Benefits & Things to Include!

Sales Report: What is it and How to Create One?


About Bit.ai

Bit.ai is the essential next-gen workplace and document collaboration platform that helps teams share knowledge by connecting any type of digital content. With this intuitive, cloud-based solution, anyone can work visually and collaborate in real time while creating internal notes, team projects, knowledge bases, client-facing content, and more.

The smartest online Google Docs and Word alternative, Bit.ai is used in over 100 countries by professionals everywhere, from IT teams creating internal documentation and knowledge bases, to sales and marketing teams sharing client materials and client portals.

👉👉Click Here to Check out Bit.ai.



Open Access

Peer-reviewed

Research Article

Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices

* E-mail: [email protected]

Affiliation Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada

Affiliations Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada, Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada

Affiliation Canadian Institutes of Health Research, Ottawa, Canada

Affiliation Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada

Affiliations Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada, Department of Medicine, University of Ottawa, Ottawa, Canada

  • Carol Bennett, 
  • Sara Khangura, 
  • Jamie C. Brehaut, 
  • Ian D. Graham, 
  • David Moher, 
  • Beth K. Potter, 
  • Jeremy M. Grimshaw


  • Published: August 2, 2011
  • https://doi.org/10.1371/journal.pmed.1001069


Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.

Methods and Findings

We conducted a three-part project: (1) a systematic review of the literature (including “Instructions to Authors” from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%).

Conclusions

There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.

Please see later in the article for the Editors' Summary

Citation: Bennett C, Khangura S, Brehaut JC, Graham ID, Moher D, Potter BK, et al. (2011) Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices. PLoS Med 8(8): e1001069. https://doi.org/10.1371/journal.pmed.1001069

Academic Editor: Rachel Jewkes, Medical Research Council, South Africa

Received: December 23, 2010; Accepted: June 17, 2011; Published: August 2, 2011

Copyright: © 2011 Bennett et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: Funding, in the form of salary support, was provided by the Canadian Institutes of Health Research [MGC – 42668]. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Editors' Summary

Surveys, or questionnaires, are an essential component of many types of research, including health research. They usually gather information by asking a sample of people questions on a specific topic and then generalizing the results to a larger population. Surveys are especially important when addressing topics that are difficult to assess using other approaches, and they usually rely on self-report: for example, self-reported behaviors such as eating habits, or satisfaction, beliefs, knowledge, attitudes, and opinions. However, the methods used in conducting survey research can significantly affect the reliability, validity, and generalizability of study results, and without clear reporting of the methods used in surveys, it is difficult or impossible to assess these characteristics and therefore to have confidence in the findings.

Why Was This Study Done?

Uncertainty of this kind in other forms of research has given rise to reporting guidelines—evidence-based, validated tools that aim to improve the reporting quality of health research. The STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) Statement covers cross-sectional studies, which often involve surveys. But not all surveys are epidemiological, and STROBE does not include methods and results reporting characteristics that are unique to surveys. Therefore, the researchers conducted this study to help determine whether there is a need for a reporting guideline for health survey research.

What Did the Researchers Do and Find?

The researchers identified any previous relevant guidance for survey research, and any evidence on the quality of reporting of survey research, by: reviewing current guidance for reporting survey research in the “Instructions to Authors” of leading medical journals and in published literature; conducting a systematic review of evidence on the quality of reporting of surveys; identifying key quality criteria for the conduct of survey research; and finally, reviewing how these criteria are currently reported by conducting a review of recently published reports of self-administered surveys.

The researchers found that 154 of the 165 journals searched (93.3%) did not provide any guidance on survey reporting, even though the majority (81.8%) have published survey research. Only three of the 11 journals that provided some guidance gave more than one directive or statement. Five papers and one Internet site provided guidance on the reporting of survey research, but none used validated measures or explicit methods for development. The researchers identified eight papers that addressed the quality of reporting of some aspect of survey research: the reporting of response rates; the reporting of non-response analyses in survey research; and the degree to which authors make their survey instrument available to readers. In their review of 117 published survey studies, the researchers found that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%). Furthermore, three-quarters of papers (88 [75%]) did not include any information on consent procedures for research participants, and one-third (40 [34%]) of papers did not report whether the study had received research ethics board review.

What Do These Findings Mean?

Overall, these results show that guidance is limited and consensus lacking about the optimal reporting of survey research, and they highlight the need for a well-developed reporting guideline specifically for survey research—possibly an extension of the guideline for observational studies in epidemiology (STROBE)—that will provide the structure to ensure more complete reporting and allow clearer review and interpretation of the results from surveys.

Additional Information

Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001069.

  • More than 100 reporting guidelines covering a broad spectrum of research types are indexed on the EQUATOR Network's web site
  • More information about STROBE is available on the STROBE Statement web site

Introduction

Surveys are a research method by which information is typically gathered by asking a subset of people questions on a specific topic and generalising the results to a larger population [1] , [2] . They are an essential component of many types of research including public opinion, politics, health, and others. Surveys are especially important when addressing topics that are difficult to assess using other approaches (e.g., in studies assessing constructs that require individual self-report about beliefs, knowledge, attitudes, opinions, or satisfaction). However, there is substantial literature to show that the methods used in conducting survey research can significantly affect the reliability, validity, and generalisability of study results [3] , [4] . Without clear reporting of the methods used in surveys, it is difficult or impossible to assess these characteristics.

Reporting guidelines are evidence-based, validated tools that employ expert consensus to specify minimum criteria for authors to report their research such that readers can critically appraise and interpret study findings [5] – [7] . More than 100 reporting guidelines covering a broad spectrum of research types are indexed on the EQUATOR Network's website ( www.equator-network.org ). There is increasing evidence that reporting guidelines are achieving their aim of improving the quality of reporting of health research [8] – [11] .

Given the growth in the number and range of reporting guidelines, the need for guidance on how to develop a guideline has been addressed [7] . A well-structured development process for reporting guidelines includes a review of the literature to determine whether a reporting guideline already exists (i.e., a needs assessment) [7] . The needs assessment should also include a search for evidence on the quality of reporting of published research in the domain of interest [7] .

The series of studies reported here was conducted to help determine whether there is a need for survey research reporting guidelines. We sought to identify any previous relevant guidance for survey research, and any evidence on the quality of reporting of survey research. The objectives of our study were:

  • to identify current guidance for reporting survey research in the “Instructions to Authors” of leading medical journals and in published literature;
  • to conduct a systematic review of evidence on the quality of reporting of surveys; and
  • to identify key quality criteria for the conduct of survey research and to review how they are being reported through a review of recently published reports of self-administered surveys.

Part 1: Identification of Current Guidance for Survey Research

Identifying guidance in “Instructions to Authors” sections in peer-reviewed journals.

Using a strategy originally developed by Altman [12] to assess endorsement of CONSORT by top medical journals, we identified the top five journals from each of 33 medical specialties, and the top 15 journals from the general and internal medicine category, using Web of Science citation impact factors (list of journals available on request). The final sample consisted of 165 unique journals (15 appeared in more than one specialty).

We reviewed each journal's “Instructions to Authors” web pages as well as related PDF documents between January 12 and February 9, 2009. We used the “find” features of the Firefox web browser and Adobe Reader software to identify the following search terms: survey, questionnaire, response, response rate, respond, and non-responder. Web pages were hand searched for statements relevant to survey research. We also conducted an electronic search (MEDLINE 1950 – February Week 1, 2009; terms: survey, questionnaire) to identify whether the journals have published survey research.

Any relevant text was summarized by journal into categories: “No guidance” (survey related term found; however, no reporting guidance provided); “One directive” (survey related term(s) found that included one brief statement, directive or reference(s) relevant to reporting survey research); and “Guidance” (survey related term(s) including more than one statement, instruction and/or directive relevant to reporting survey research). Coding was carried out by one coder (SK) and verified by a second coder (CB).

Identifying published survey reporting guidelines.

MEDLINE (1950 – April Week 1, 2011) and PsycINFO (1806 – April Week 1, 2011) electronic databases were searched via Ovid to identify relevant citations. The MEDLINE electronic search strategy ( Text S1 ), developed by an information specialist, was modified as required for the PsycINFO database. For all papers meeting eligibility criteria, we hand-searched the reference lists and used the “Related Articles” feature in PubMed. Additionally, we reviewed relevant textbooks and web sites. Two reviewers (SK, CB) independently screened titles and abstracts of all unique citations to identify English language papers and resources that provided explicit guidance on the reporting of survey research. Full-text reports of all records passing the title/abstract screen were retrieved and independently reviewed by two members of the research team; there were no disagreements regarding study inclusion and all eligible records passing this stage of screening were included in this review. One researcher (CB) undertook a thematic analysis of identified guidance (e.g., sample selection, response rate, background, etc.), which was subsequently reviewed by all members of the research team. Data were summarized as frequencies.

Part 2: Systematic Review of Published Studies on the Quality of Survey Reporting

The results of the above search strategy ( Text S1 ) were also screened by the two reviewers to identify publications providing evidence on the quality of reporting of survey research in the health science literature. We identified the aspects of reporting survey research that were addressed in these evaluative studies and summarized their results descriptively.

Part 3: Assessment of Quality of Survey Reporting

The results from Part 1 and Part 2 identified items critical to reporting survey research and were used to inform the development of a data abstraction tool. Thirty-two items were deemed most critical to the reporting of survey research on that basis. These were compiled and categorized into a draft data abstraction tool that was reviewed and modified by all the authors, who have expertise in research methodology and survey research. The resulting draft data abstraction instrument was piloted by two researchers (CB, SK) on a convenience sample of survey articles identified by the authors. Items were added and removed and the wording was refined and edited through discussion and consensus among the coauthors. The revised final data abstraction tool ( Table S1 ) comprised 33 items.

Aiming for a minimum sample size of 100 studies, we searched the top 15 journals (by impact factor) from each of four broad areas of health research: health science, public health, general/internal medicine, and medical informatics. These categories, identified through Web of Science, were known to publish survey research and covered a broad range of the biomedical literature. An Ovid MEDLINE search of these 57 journals (three were included in more than one topic area) included Medical Subject Heading (MeSH) terms (“Questionnaires,” “Data Collection,” and “Health Surveys”) and keyword terms (“survey” and “questionnaire”). The search was limited to studies published between January 2008 and February 2009.

We defined a survey as a research method by which information is gathered by asking people questions on a specific topic and the data collection procedure is standardized and well defined. The information is gathered from a subset of the population of interest with the intent of generating summary statistics that are generalisable to the larger population [1] , [2] .

Two reviewers (CB, SK) independently screened all citations (title and abstract) to determine whether the study used a survey instrument consistent with our definition. The same reviewers screened all full-text articles of citations meeting our inclusion criteria, and those whose eligibility remained unclear. We included all primary reports of self-administered surveys, excluding secondary analyses, longitudinal studies, or surveys that were administered openly through the web (i.e., studies that lacked a clearly defined sampling frame). Duplicate data extraction was completed by the two reviewers. Inconsistencies were resolved by discussion and consensus.

Part 1: Identification of Current Guidance for Survey Research – “Instructions to Authors”

Of the 165 journals searched, 154 (93.3%) did not provide any guidance on survey reporting. Of these 154, 126 (81.8%) have published survey research, while 28 have not. Of the 11 journals providing some guidance, eight provided a brief phrase, statement of guidance, or reference; and three provided more substantive guidance, including more than one directive or statement. Examples are provided in Table 1. Although no reporting guidelines for survey research were identified, several journals referenced the EQUATOR Network's web site. The EQUATOR Network includes two papers relevant to reporting survey research [13], [14].

[Table 1: https://doi.org/10.1371/journal.pmed.1001069.t001]

The EQUATOR Network also links to the STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) Statement ( www.strobe-statement.org ). Although the STROBE Statement includes cross-sectional studies, a class of studies that subsumes surveys, not all surveys are epidemiological. Additionally, STROBE does not include Methods and Results reporting characteristics that are unique to surveys ( Table S1 ).

Part 1: Identification of Current Guidance for Survey Research - Published Survey Reporting Guidelines

Our search identified 2,353 unique records ( Figure 1 ), which were title-screened. One hundred sixty-four records were included in the abstract screen, from which 130 were excluded. The remaining 34 records were retrieved for full-text screening to determine eligibility. There was substantial agreement between reviewers across all the screening phases (kappa = 0.73; 95% CI 0.69–0.77).
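The agreement statistic reported above, Cohen's kappa, corrects the two reviewers' raw percent agreement for the agreement expected by chance alone. A minimal sketch in Python (the screening decisions below are hypothetical, not data from this study):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' paired categorical judgments."""
    n = len(r1)
    # Observed agreement: proportion of items where the raters agree
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal category frequencies
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical include/exclude decisions from two independent screeners
rev1 = ["inc", "exc", "exc", "inc", "exc", "exc", "inc", "exc"]
rev2 = ["inc", "exc", "exc", "exc", "exc", "exc", "inc", "exc"]
print(round(cohens_kappa(rev1, rev2), 2))  # → 0.71
```

In practice a vetted implementation would be used, and a confidence interval computed (e.g., by bootstrap), but the underlying arithmetic is no more than this.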

[Figure 1: https://doi.org/10.1371/journal.pmed.1001069.g001]

We identified five papers [13] – [17] and one internet site [18] that provided guidance on the reporting of survey research. None of these sources reported using valid measures or explicit methods for development. In all cases, in addition to more descriptive details, the guidance was presented in the form of a numbered or bulleted checklist. One checklist was excluded from our descriptive analysis as it was very specific to the reporting of internet surveys [16] . Two checklists were combined for analysis because one [14] was a slightly modified version of the other [17] .

Amongst the four checklists, 38 distinct reporting items were identified and grouped into eight broad themes: background, methods, sample selection, research tool, results, response rates, interpretation and discussion, and ethics and disclosure ( Table 2 ). Only two items appeared in all four checklists: providing a description of the questionnaire instrument and describing the representativeness of the sample to the population of interest. Nine items appeared in three checklists, 17 in two checklists, and 10 in only one checklist.

[Table 2: https://doi.org/10.1371/journal.pmed.1001069.t002]

Part 2: Systematic Review of Published Studies on the Quality of Survey Reporting

Screening results are presented in Figure 1 . Eight papers were identified that addressed the quality of reporting of some aspect of survey research. Five studies [19] – [23] addressed the reporting of response rates; three evaluated the reporting of non-response analyses in survey research [20] , [21] , [24] ; and two assessed the degree to which authors make their survey instrument available to readers ( Table 3 ) [25] , [26] .

[Table 3: https://doi.org/10.1371/journal.pmed.1001069.t003]

Part 3: Assessment of Quality of Survey Reporting from the Biomedical Literature

Our search identified 1,719 citations; 1,343 were excluded during title/abstract screening because these studies did not use a survey instrument as their primary research tool. Three hundred seventy-six citations were retrieved for full-text review. Of those, 259 did not meet our eligibility criteria; reasons for their exclusion are reported in Figure 2. The remaining 117 articles, reporting results from self-administered surveys, were retained for data abstraction.

[Figure 2: https://doi.org/10.1371/journal.pmed.1001069.g002]

The 117 articles were published in 34 different journals: 12 journals from health science, seven from medical informatics, 10 from general/internal medicine, and eight from public health ( Table S2 ). The median number of pages per study was 8 (range 3–26). Of the 33 items that were assessed using our data abstraction form, the median number of items reported was 18 (range 11–25).

Reporting Characteristics: Title, Abstract, and Introduction

The majority (113 [97%]) of articles used the term “survey” or “questionnaire” in the title or abstract; four articles did not use a term to indicate that the study was a survey. While all of the articles presented a background to their research, 17 (15%) did not identify a specific purpose, aim, goal, or objective of the study (Table 4).

[Table 4: https://doi.org/10.1371/journal.pmed.1001069.t004]

Reporting Characteristics: Methods

Approximately one-third (40 [34%]) of survey research reports did not provide access to the questionnaire items used in the study in either the article, appendices, or an online supplement. Of those studies that reported the use of existing survey questionnaires, the majority (40/52 [77%]) did not report the psychometric properties of the tool (although all but two did reference their sources). The majority of studies that developed a novel questionnaire (91/111 [82%]) failed to clearly describe the development process and/or did not describe the methods used to pre-test the tool; the majority (89/111 [80%]) also failed to report the reliability or validity of a newly developed survey instrument. For those papers which used survey instruments that required scoring (n = 95), 63 (66%) did not provide a description of the scoring procedures.

With respect to a description of sample selection methods, 104 (89%) studies did not describe the sample's representativeness of the population of interest. The majority (110 [94%]) of studies also did not present a sample size calculation or other justification of the sample size.
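For reference, the sample size justification that these reports omit is usually a one-line calculation. A sketch of the conventional formula for estimating a single proportion under simple random sampling (all numbers illustrative; complex survey designs would additionally need a design-effect adjustment):

```python
import math

def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96):
    """Minimum n to estimate a proportion within +/- margin at ~95%
    confidence (z = 1.96), assuming simple random sampling from a large
    population. p = 0.5 is the conservative (worst-case) choice."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

def fpc_adjust(n, population):
    """Finite population correction for small sampling frames."""
    return math.ceil(n / (1 + (n - 1) / population))

n = sample_size_for_proportion()            # 385 for p = 0.5, +/- 5%
print(n, fpc_adjust(n, population=2000))    # 385 323
```

Even a sentence reporting such a calculation (or an explicit statement that the whole accessible population was surveyed) lets readers judge the precision a study could achieve.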

There were 23 (20%) papers for which we could not determine the mode of survey administration (i.e., in-person, mail, internet, or a combination of these). Forty-one (35%) articles did not provide information on either the type (i.e., phone, e-mail, or postal mail) or the number of contact attempts. For 102 (87%) papers, there was no description of who was identified as the organization or group soliciting potential research subjects for their participation in the survey.

Twelve (10%) papers failed to provide a description of the methods used to analyse the data (i.e., a description of the variables that were analysed, how they were manipulated, and the statistical methods used). However, for a further 55 (47%) studies, the data analysis would be a challenge to replicate based on the description provided in the research report. Very few studies provided methods for analysis of non-response error, calculating response rates, or handling missing item data (15 [13%], 5 [4%], and 13 [11%] respectively). The majority (112 [96%]) of the articles did not provide a definition or cut-off limit for partial completion of questionnaires.
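A transparent first step for the missing-data items above, before choosing between complete-case analysis and imputation, is simply to tabulate per-item missingness. A sketch with illustrative data (None marks an unanswered item; these are not responses from any real survey):

```python
# Illustrative survey responses; None marks an unanswered item.
responses = [
    {"q1": 4, "q2": 2, "q3": None},
    {"q1": 5, "q2": None, "q3": None},
    {"q1": 3, "q2": 4, "q3": 1},
]

for item in ("q1", "q2", "q3"):
    missing = sum(r[item] is None for r in responses)
    share = missing / len(responses)
    print(f"{item}: {missing}/{len(responses)} missing ({share:.0%})")
```

Reporting such a table lets readers judge whether the chosen handling strategy (deletion, imputation, or a partial-completion cut-off) is defensible.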

Reporting Characteristics: Results

While the majority (89 [76%]) of papers provided a defined response rate, 28 studies (24%) failed to define the reported response rate (i.e., no information was provided on the definition of the rate or how it was calculated), provided only partial information (e.g., response rates were reported for only part of the data, or some information was reported but not a response rate), or provided no quantitative information regarding a response rate. The majority (104 [87%]) of studies did not report the sample disposition (i.e., describing the number of complete and partial returned questionnaires according to the number of potential participants known to be eligible, of unknown eligibility, or known to be ineligible). More than two-thirds (80 [68%]) of the reports provided no information on how non-respondents differed from respondents.
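Because “response rate” has no single definition, a defined rate means stating exactly which disposition categories enter the numerator and denominator. A simplified sketch with hypothetical counts (real definitions, such as AAPOR's RR1–RR6, differ in how partial completions and unknown-eligibility cases are treated, which is exactly why reports must name the definition used):

```python
def response_rate(complete, partial, eligible_nonresponse):
    """Simplified rate: returned questionnaires (complete + partial)
    over all sampled units known to be eligible. Which categories
    count, and how unknown-eligibility cases are handled, is what a
    report must spell out."""
    returned = complete + partial
    return returned / (returned + eligible_nonresponse)

# Hypothetical disposition for a mailed survey of 500 eligible clinicians
rate = response_rate(complete=310, partial=25, eligible_nonresponse=165)
print(f"{rate:.1%}")  # → 67.0%
```

Two papers reporting the same fieldwork could honestly report quite different rates under different definitions, so the disposition counts themselves are the more informative quantity.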

Reporting Characteristics: Discussion and Ethical Quality Indicators

While all of the articles summarized their results with regard to the objectives, and the majority (110 [94%]) described the limitations of their study, most (90 [77%]) did not outline the strengths of their study and 70 (60%) did not include any discussion of the generalisability of their results.

When considering the ethical quality indicators, reporting was varied. While three-quarters (86 [74%]) of the papers reported their source of funding, approximately the same proportion (88 [75%]) did not include any information on consent procedures for research participants. One-third (40 [34%]) of papers did not report whether the study had received research ethics board review.

Discussion

Our comprehensive review, to identify relevant guidance for survey research and evidence on the quality of reporting of surveys, substantiates the need for a reporting guideline for survey research. Overall, our results show that few medical journals provide guidance to authors regarding survey research. Furthermore, no validated guidelines for reporting surveys currently exist. Previous reviews of survey reporting quality and our own review of 117 published studies revealed that many criteria are poorly reported.

Surveys are common in health care research; we identified more than 117 primary reports of self-administered surveys in 34 high-impact factor journals over a one-year period. Despite this, the majority of these journals provided no guidance to authors for reporting survey research. This may stem, at least partly, from the fact that validated guidelines for survey research do not exist and that recommended quality criteria vary considerably. The recommended reporting criteria that we identified in the published literature are not mutually exclusive, and there is perhaps more overlap if one takes into account implicit and explicit considerations. Regardless of these limitations, the lack of clear guidance has contributed to inconsistency in the literature; both this work and that of others [19] – [26] shows that key survey quality characteristics are often under-reported.

Self-administered sample surveys are a type of observational study and for that reason they can fall within the scope of STROBE. However, there are methodological features relevant to sample surveys that need to be highlighted in greater detail. For example, surveys that use a probability sampling design do so in order to be able to generalise to a specific target population (many other types of observational research may have a more “infinite” target population); this emphasizes the importance of coverage error and non-response error – topics that have received attention in the survey literature. Thus, in our data abstraction tool, we placed emphasis on specific methodological details excluded from STROBE – such as non-response analysis, details of strategies used to increase response rates (e.g., multiple contacts, mode of contact of potential participants), and details of measurement methods (e.g., making the instrument available so that readers can consider questionnaire formatting, question framing, choice of response categories, etc.).

Consistent with previous work [25] , [26] , fully one-third of our sample failed to provide access to any survey questions used in the study. This poses challenges both for critical analysis of the studies and for future use of the tools, including replication in new settings. These challenges will be particularly apparent as the articles age and study authors become more difficult to contact [25] .

Assessing descriptions of the study population and sampling frame posed particular challenges in this study. It was often unclear whom the authors considered to be the population of interest. To standardise our assessment of this item, we used a clearly delineated definition of “survey population” and “sampling frame” [3] , [27] . A survey reporting guideline could help this issue by clearly defining the difference between the terms and descriptions of “population” and “sampling frame.”

Our results regarding reporting of response rates and non-response analysis were similar to previously published studies [19] – [24] . In our sample, 24% of papers assessed did not provide a defined response rate and 68% did not provide results from non-response analysis. The wide variation in how response rates are reported in the literature is perhaps a historical reflection of the limited consensus or explicit journal policy for response rate reporting [22] , [28] , [29] . However, despite lack of explicit policies regarding acceptable standards for response rates or the reporting of response rates, journal editors are known to have implicit policies for acceptable response rates when considering the publication of surveys [17] , [22] , [29] , [30] . Given the concern regarding declining response rates to surveys [31] , there is a need to ensure that aspects of the survey's design and conduct are well reported so that reviewers can adequately assess the degree of bias that may be present and allay concerns over the representativeness of the survey population.

With regard to the ethical quality indicators, sources of study funding were often reported (74%) in this sample of articles. However, research ethics board approval and subject consent procedures were reported far less often. In particular, the reporting of informed consent procedures was often absent in studies where physicians, residents, other clinicians, or health administrators were the subjects. This finding may suggest that researchers do not perceive doctors and other health-care professionals and administrators to be research subjects in the same way they perceive patients and members of the public. It could also reflect a lack of current guidelines that specifically address the ethical use of health services professionals and staff as research subjects.

Our research is not without limitations. With respect to the review of journals' “Instructions to Authors,” the study was cross-sectional in contrast with the dynamic nature of web pages. Since our searches in early 2009, several journals have updated their web pages. It has been noted that at least one has added a brief reference to the reporting of survey research.

A second limitation is that our sample included only the contents of “Instructions to Authors” web pages for higher-impact-factor journals. It is possible that journals with lower impact factors provide guidance for reporting survey research. We chose this approach, which replicates previous similar work [12] , to provide a defensible sample of journals.

Third, the problem of identifying non-randomised studies in electronic searches is well known and often related to the inconsistent use of terminology in the original papers. It is possible that our search strategy failed to identify relevant articles. However, it is unlikely that there is an existing guideline for survey research that is in widespread use, given our review of actual surveys, instructions to authors, and reviews of reporting quality.

Fourth, although we restricted our systematic review search strategy to two health science databases, our hand search did identify one checklist that was not specific to the health science literature [18] . The variation in recommended reporting criteria amongst the checklists may, in part, be due to variation in the different domains (i.e., health science research versus public opinion research).

Additionally, we did not critically appraise the quality of evidence for items included in the checklists nor the quality of the studies that addressed the quality of reporting of some aspect of survey research. For our review of current reporting practices for surveys, we were unable to identify validated tools for evaluation of these studies. While we did use a comprehensive and iterative approach to develop our data abstraction tool, we may not have captured information on characteristics deemed important by other researchers. Lastly, our sample was limited to self-administered surveys, and the results may not be generalisable to interviewer-administered surveys.

Recently, Moher and colleagues outlined the importance of a structured approach to the development of reporting guidelines [7] . Given the positive impact that reporting guidelines have had on the quality of reporting of health research [8] – [11] , and the potential for a positive upstream effect on the design and conduct of research [32] , there is a fundamental need for well-developed reporting guidelines. This paper provides results from the initial steps in a structured approach to the development of a survey reporting guideline and forms the foundation for our further work in this area.

In conclusion, there is limited guidance and no consensus regarding the optimal reporting of survey research. While some key criteria are consistently reported by authors publishing their survey research in peer-reviewed journals, the majority are under-reported. As in other areas of research, poor reporting compromises both transparency and reproducibility, which are fundamental tenets of research. Our findings highlight the need for a well developed reporting guideline for survey research – possibly an extension of the guideline for observational studies in epidemiology (STROBE) – that will provide the structure to ensure more complete reporting and allow clearer review and interpretation of the results from surveys.

Supporting Information

Data abstraction tool items and overlap with STROBE.

https://doi.org/10.1371/journal.pmed.1001069.s001

Journals represented by 117 included articles.

https://doi.org/10.1371/journal.pmed.1001069.s002

Ovid MEDLINE search strategy.

https://doi.org/10.1371/journal.pmed.1001069.s003

Acknowledgments

We thank Risa Shorr (Librarian, The Ottawa Hospital) for her assistance with designing the electronic search strategy used for this study.

Author Contributions

Conceived and designed the experiments: JG DM CB SK JB IG BP. Analyzed the data: CB SK JB DM JG. Contributed to the writing of the manuscript: CB SK JB IG DM BP JG. ICMJE criteria for authorship read and met: CB SK JB IG DM BP JG. Acquisition of data: CB SK.

  • 1. Groves RM, Fowler FJ, Couper MP, Lepkowski JM, Singer E, et al. (2004) Survey Methodology. Hoboken (New Jersey): John Wiley & Sons, Inc.
  • 2. Aday LA, Cornelius LJ (2006) Designing and Conducting Health Surveys. Hoboken (New Jersey): John Wiley & Sons, Inc.
  • 6. EQUATOR Network. Introduction to Reporting Guidelines. Available: http://www.equator-network.org/resource-centre/library-of-health-research-reporting/reporting-guidelines/#what . Accessed 23 November 2009.
  • 18. AAPOR. Home page of the American Association for Public Opinion Research (AAPOR). Available: http://www.aapor.org . Accessed 20 January 2009.
  • 22. Johnson T, Owens L (2003) Survey Response Rate Reporting in the Professional Literature. Available: http://www.amstat.org/sections/srms/proceedings/y2003/Files/JSM2003-000638.pdf . Accessed 11 July 2011.
  • 27. Dillman DA (2007) Mail and Internet Surveys: The Tailored Design Method. Hoboken (New Jersey): John Wiley & Sons, Inc.

Survey Research Design: Definition, How to Conduct a Survey & Examples

Survey research is a quantitative research method that involves collecting data from a sample of individuals using standardized questionnaires or surveys. The goal of survey research is to measure the attitudes, opinions, behaviors, and characteristics of a target population. Surveys can be conducted through various means, including phone, mail, online, or in-person.

If your project involves collecting data directly from a large number of people, you should know the basic rules of survey research beforehand. Today we’ll talk about this research type, review a step-by-step guide on how to do survey research, and weigh its main advantages and potential pitfalls. The following important questions will be discussed below:

  • Purpose and techniques of information collection.
  • Kinds of responses.
  • Analysis techniques, assumptions, and conclusions.

Do you wish to learn best practices for conducting surveys? Stay with our research paper service and get ready for some serious reading!

What Is Survey Research: Definition

Let’s define the notion of survey research first. It revolves around surveys you conduct to retrieve specific data from respondents. Respondents are carefully selected from a population that, for particular reasons, possesses the data necessary for your research; for example, they may be witnesses of an event you are investigating. Surveys contain a set of predefined questions, closed- or open-ended, which are sent to participants; their answers provide the data for your research. There are many methods for organizing surveys and processing the obtained information.

Purpose of Survey Research Design

The purpose of survey research is to collect proper data and thus gain insights for your research. Pick participants with relevant experience so that you can get useful information from them. Formulate the questions in your survey in a way that yields as much useful data as possible, and adjust the format of the survey to the situation so that respondents are willing to answer. It can be a questionnaire sent over email or questions asked during a phone call.

Survey Research Methods

Which survey research method should you choose? Let’s review the most popular approaches and when to use them. Two critical factors define how a survey will be conducted:

  • Tool used to reach respondents
  • Online: using web forms or email questionnaires.
  • Phone: reaching out to respondents individually, sometimes via an automated service.
  • Face-to-face: interviewing respondents in person, which makes room for more in-depth questions.
  • Time frame of the research
  • Short-term periods.
  • Long-term periods.

Let’s explore the time-related methods in detail.

Cross-Sectional Survey Research

The first type is cross-sectional survey research. This design involves collecting insights from an audience within a specific, short time period. It is used for descriptive analysis of a subject and aims to provide quick conclusions or assumptions, which is why it relies on fast data-gathering and processing techniques. Such surveys are typically used in sectors such as retail, education, and healthcare, where the situation tends to change fast and it is important to obtain operational results as soon as possible.

Longitudinal Survey Research

In contrast, longitudinal survey research collects data from the same sample several times over an extended period, which makes it possible to track how opinions and behaviors change over time.

Survey Research Design

Now let’s talk about survey research designs . Planning a design beforehand is crucial, especially if you are pressed for time or have a limited budget. Collecting information with a properly designed survey is typically more effective and productive than a casually conducted study. Preparing a survey design includes the following major steps:

  • Understand the aim of your research so you can plan the entire path of the survey and avoid obvious issues.
  • Pick a good sample from the population. Ensure precision of the results by selecting members who can provide useful insights and opinions.
  • Review available research methods and decide on the one most suitable for your specific case.
  • Prepare a questionnaire. The selection of questions directly affects the quality of your longitudinal analysis , so pick good questions and avoid unnecessary ones to save time and reduce possible errors.
  • Analyze the results and draw conclusions.

Advantages of Survey Research

As a rule, survey research involves getting data from people with first-hand knowledge of the research subject. Therefore, when formulated properly, survey questions should yield unique insights and thus describe the subject better. Other benefits of this approach include:

  • Minimum investment. Online and automated call services require very low investment per respondent.
  • Versatile sources. Data can be collected by numerous means, allowing more flexibility.
  • Safe for respondents. Anonymous surveys protect respondents, who are more likely to answer honestly if they know their responses are confidential.

Types of Survey Research

Let’s review the main types of surveys. Knowing the most popular templates means you won’t have to develop your own from scratch for each specific case. Such studies are usually categorized by the following aspects:

  • Objectives.
  • Data source.
  • Methodology.

We’ll examine each of these aspects below, focusing on the areas where each type is used.

Types of Survey Research Depending on Objective

Depending on your objective and the specifics of the subject’s context, the following survey research types can be used:

  • Predictive. This approach involves questions that suggest the most likely response options based on how they are formulated. As a result, it is often easier for respondents to answer because they already have helpful suggestions.
  • Exploratory. This approach focuses on discovering new ideas and insights rather than collecting statistically accurate information. The results can be difficult to categorize and analyze, but the approach is very useful for finding a general direction for further research.
  • Descriptive. This approach helps you define and describe respondents’ opinions or behavior more precisely. By predefining categories and designing survey questions around them, you obtain statistical data. This descriptive research approach is often used at later research stages to better understand the meaning of insights obtained at the beginning.

Types of Survey Research Depending on Data Source

The following research survey types can be defined based on which sources you obtain the data from:

  • Primary. In this case, you collect information directly from the original source, e.g., learning about a natural disaster from a survivor. Because no intermediaries are involved, no information gets twisted or lost on its way, making this the way to obtain the most valid and trustworthy results. At the same time, such sources are often not easy to access.
  • Secondary. This involves collecting data from existing published research on the same subject. Such information is easier to access, but it is usually too general and not tailored to your specific needs.

Types of Survey Research Depending on Methodology

Finally, let’s review survey research methodologies based on the format of retrieved and processed data. They can be:

  • Quantitative. An approach that focuses on gathering numeric or measurable data from respondents, providing enough material for statistical analysis and, from there, meaningful conclusions. Collecting such data requires properly designed surveys with numeric response options, and it is important to take precautions to ensure the gathered data is valid.
  • Qualitative. Such surveys rely on opinions, impressions, reflections, and typical reactions of target groups. They should include open-ended questions so that respondents can give detailed answers and share the information they consider most relevant. Qualitative research is used to understand, explain, or evaluate ideas and tendencies.

It is essential to differentiate these two kinds of research, which is why we prepared a special blog post about quantitative vs qualitative research .

How to Conduct a Survey Research: Main Steps

Now let’s find out how to do a survey step by step. Regardless of the methods you use to design and conduct your survey, there are general guidelines to follow. The path is quite straightforward:

  • Assess your goals and options for accessing necessary groups.
  • Formulate each question in a way that helps you obtain the most valuable data.
  • Plan and execute the distribution of the questions.
  • Process the results.

Let’s take a closer look at all these stages.

Step 1. Create a Clear Survey Research Question

Each survey research question should add potential value to your expected results. Before formulating your questionnaire, invest some time in analyzing your target population. This will allow you to form proper samples of respondents: big enough to yield insights, but not too big to manage. A good way to prepare questions is by constructing case studies for your subject; analyzing case study examples in detail will help you understand which information about respondents is necessary.
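Once the target population is analyzed, drawing the sample itself can be as simple as taking a simple random sample from the sampling frame, so that every member has the same chance of selection. A minimal sketch using the Python standard library (the frame of 500 names is invented for illustration):

```python
import random

# Hypothetical sampling frame: a list identifying every member of the
# target population we can actually reach.
frame = [f"respondent_{i}" for i in range(1, 501)]

random.seed(42)  # fixed seed only so this illustrative draw is reproducible
sample = random.sample(frame, k=50)  # 10% simple random sample, without replacement

print(len(sample))  # 50
```

Sampling without replacement, as `random.sample` does, guarantees no respondent is selected twice.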

Step 2. Choose a Type of Survey Research

As we’ve already learned, there are several types of survey research. Start with a close analysis of your subject, goals, and available sources to understand which kinds of questions should be distributed. As a researcher, you’ll also need to analyze the characteristics of the selected group of respondents and pick a type that makes it easy to reach them. For example, if you need to survey a group of elderly people, online forms would be less efficient than interviews.

Step 3. Distribute the Questionnaire for Your Survey Research

The next step of survey research is the most decisive one: executing the plan you created earlier and questioning the entire selected group. If this is a group assignment, ask your colleagues or peers for help, especially if you are dealing with a large group of respondents. Stick to the initial scenario, but leave some room for improvisation in case there are difficulties reaching respondents. After you collect all the necessary responses, the data can be processed and analyzed.

Step 4. Analyze the Results of Your Research Survey

The data obtained during survey research should be processed so that you can use it to make assumptions and draw conclusions. If it is qualitative, conduct a thematic analysis to find important ideas and insights that could confirm your theories or expand your knowledge of the subject. Quantitative data can be analyzed manually or with the help of software; the purpose is to extract dependencies and trends that confirm or refute existing assumptions.
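As a minimal illustration of the quantitative side, the sketch below summarizes a batch of rating responses using only the Python standard library; the 1-5 values are invented for illustration.

```python
from collections import Counter
from statistics import mean, median

# Hypothetical responses to a single 1-5 rating question.
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

summary = {
    "n": len(responses),
    "mean": mean(responses),
    "median": median(responses),
    # How many respondents picked each rating, sorted by rating.
    "distribution": dict(sorted(Counter(responses).items())),
}
print(summary)
```

Even this simple summary (central tendency plus the full distribution) reveals patterns that a lone average would hide, such as a cluster of low ratings.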

Step 5. Save the Results of Your Survey Research

The final step is to compose a survey research paper to get your results in order; this way none of them will be lost, especially if you keep copies of the paper. Depending on your assignment and the stage you are at, it can be a dissertation, a thesis, or even an illustrative essay where you explain the subject to your audience. Each survey you conduct must get its own section in the paper where you explain your methods and describe your results.

Survey Research Example

Here are a few research survey examples in case you need real-world cases to illustrate the guidelines and tips above. Below is a sample research case with the population and the researchers’ purposes defined.

Example of survey research design The Newtown Youth Initiative will conduct a qualitative survey to develop a program for mitigating alcohol consumption among adolescent citizens of Newtown. Previously, cultural anthropology research studied mental constructs to understand young people's expectations of alcohol and their views on specific cultural values. Based on its results, a survey was designed to measure expectancies and cultural orientation among the adolescent population. A secure web page has been developed to conduct this survey and ensure the anonymity of respondents. The Newtown Youth Initiative will partner with schools to share the link with students and engage them to participate. Statistical analysis of differences in expectancies and cultural orientation between drinkers and non-drinkers will be performed using the survey data.
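The example above ends with a statistical comparison of drinkers and non-drinkers. As a rough sketch of what such an analysis could look like, the snippet below hand-rolls a Pearson chi-square test of independence on an invented 2x2 table of agreement with a single expectancy statement; all counts are hypothetical, and a real analysis would normally use a statistics package.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], computed with the shortcut formula
    n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)), no continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts of agreement with one expectancy statement:
#                 agree  disagree
# drinkers          30       10
# non-drinkers      15       25
stat = chi_square_2x2(30, 10, 15, 25)
print(round(stat, 2))
```

A statistic above the df = 1 critical value of about 3.84 would suggest the two groups differ at the 5% level; here the invented counts produce a statistic well above that threshold.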

Survey Research: Key Takeaways

Today, we have explored the notion of the research survey and reviewed the main features of this research activity and its usage in social sciences topics . We have covered important techniques and tips and provided a step-by-step guide for conducting such studies.


Finding it difficult to reach your target group? Or just pressed by deadlines? We've got your back! Check out our writing services and leave a ‘ write paper for me ’ request. We are a team of skilled authors with vast experience in various academic fields.

Frequently Asked Questions About Survey Research

1. What is a market research survey?

A market research survey helps a company understand several aspects of its target market. It typically involves picking focus groups of customers and asking them questions to learn about demand for specific products or services and whether that demand is growing. Such feedback is crucial for a company’s development and can help it plan further strategic steps.

2. How does survey research differ from experimental research methods?

The main difference between experimental and survey research methods is that surveys are field research, while experiments are typically performed under controlled laboratory conditions. When conducting surveys, researchers don’t have full control over the process and must adapt to the specific traits of their target groups to obtain answers. Besides, survey results may be harder to quantify and turn into statistical values.

3. What is the difference between survey research and descriptive research?

The purpose of descriptive studies is to explain what the subject is and which features it has. Survey research may include descriptive information but is not limited to it: typically it goes beyond descriptive statistics and includes qualitative research or advanced statistical methods used to draw inferences, find dependencies, or identify trends. Descriptive methods, on the other hand, don’t necessarily involve questioning respondents and may obtain information from other sources.

4. What is a good sample size for a survey?

It always depends on the specific case and the researcher’s goals, but there are some general guidelines and best practices. A good maximum sample size is usually around 10% of the population, as long as this does not exceed 1,000 people. In any case, be mindful of your time and budget limitations when planning; if you have a team to help you, it may be possible to process more data.
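The 10%-capped-at-1,000 rule of thumb above can be written down directly, and set next to a more formal approach such as Cochran's formula with a finite-population correction for comparison; the population figure below is invented for illustration.

```python
import math

def rule_of_thumb(population):
    """Max sample size per the rule of thumb: 10% of the population,
    capped at 1,000 people."""
    return min(round(0.10 * population), 1000)

def cochran(population, z=1.96, p=0.5, e=0.05):
    """Cochran's sample-size formula with finite-population correction.
    z: z-score for the confidence level (1.96 for 95%),
    p: expected proportion (0.5 is the conservative choice),
    e: desired margin of error."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(rule_of_thumb(5000))  # 500
print(cochran(5000))        # 357
```

For a hypothetical population of 5,000, the rule of thumb suggests up to 500 respondents, while Cochran's formula says roughly 357 suffice for a 5% margin of error at 95% confidence, which is one reason the rule of thumb is framed as a maximum rather than a requirement.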


Joe Eckel is an expert on dissertation writing. He makes sure that each student gets valuable insights on composing A-grade academic writing.


ELCOMBLUS

What is a Survey Report?

A survey report is a type of academic writing that uses research to provide information about a topic. It involves questions that are formulated based on the research objective, to be answered by respondents and later analyzed using appropriate data analysis methods. Survey reports involve report writing which is a very important element of the survey research process.

To be able to disseminate the information from the survey, you need good writing skills. Without them, the survey report risks being misrepresented or poorly explained. When this happens, the objective of the survey is not achieved, for the aim of survey reports is to present the survey data in a manner that is engaging and understandable to various readers.

Survey questionnaires are the basic tool of survey reports. They are the forms containing the questions that the researcher will ask during the survey. Just as the objectives of survey reports vary, so do the types of questionnaires that you need to formulate.

The kind of questionnaire you use depends on your research objective. Questionnaires range from the most basic yes-or-no type, which asks respondents to tick off boxes containing the given responses, to the more complex: close-ended questions, which require respondents to choose among given options, and open-ended questions, which require respondents to provide answers to thought-provoking questions.

While surveys in the past were often conducted with pen and paper as the main instruments, the advent of the Internet has made web-based questionnaires possible, which is more convenient as it requires less time and fewer resources for gathering data.

Another way of conducting a survey is through interviews. While it requires basically the same types of structured questions as the questionnaire, the face-to-face interaction between the researcher and respondents in an interview gives more opportunity for an in-depth discussion of open-ended questions, thus allowing a better understanding of the respondent’s answers.


Research Report: Definition, Types + [Writing Guide]

busayo.longe

One of the reasons for carrying out research is to add to the existing body of knowledge. Therefore, when conducting research, you need to document your processes and findings in a research report. 

With a research report, it is easy to outline the findings of your systematic investigation and any gaps needing further inquiry. Knowing how to create a detailed research report will prove useful when you need to conduct research.  

What is a Research Report?

A research report is a well-crafted document that outlines the processes, data, and findings of a systematic investigation. It is an important document that serves as a first-hand account of the research process, and it is typically considered an objective and accurate source of information.

In many ways, a research report can be considered as a summary of the research process that clearly highlights findings, recommendations, and other important details. Reading a well-written research report should provide you with all the information you need about the core areas of the research process.

Features of a Research Report 

So how do you recognize a research report when you see one? Here are some of the basic features that define a research report. 

  • It is a detailed presentation of research processes and findings, and it usually includes tables and graphs. 
  • It is written in a formal language.
  • A research report is usually written in the third person.
  • It is informative and based on first-hand verifiable information.
  • It is formally structured with headings, sections, and bullet points.
  • It always includes recommendations for future actions. 

Types of Research Report 

The research report is classified based on two things: the nature of the research and the target audience.

Nature of Research

  • Qualitative Research Report

This is the type of report written for qualitative research . It outlines the methods, processes, and findings of a qualitative method of systematic investigation. In educational research, a qualitative research report provides an opportunity for one to apply his or her knowledge and develop skills in planning and executing qualitative research projects.

A qualitative research report is usually descriptive in nature. Hence, in addition to presenting details of the research process, you must also create a descriptive narrative of the information.

  • Quantitative Research Report

A quantitative research report is a type of research report that is written for quantitative research. Quantitative research is a type of systematic investigation that pays attention to numerical or statistical values in a bid to find answers to research questions. 

In this type of research report, the researcher presents quantitative data to support the research process and findings. Unlike a qualitative research report that is mainly descriptive, a quantitative research report works with numbers; that is, it is numerical in nature. 

Target Audience

Also, a research report can be said to be technical or popular based on the target audience. If you’re dealing with a general audience, you would need to present a popular research report, and if you’re dealing with a specialized audience, you would submit a technical report. 

  • Technical Research Report

A technical research report is a detailed document that you present after carrying out industry-based research. This report is highly specialized because it provides information for a technical audience; that is, individuals with above-average knowledge in the field of study. 

In a technical research report, the researcher is expected to provide specific information about the research process, including statistical analyses and sampling methods. Also, the use of language is highly specialized and filled with jargon. 

Examples of technical research reports include legal and medical research reports. 

  • Popular Research Report

A popular research report is one for a general audience; that is, for individuals who do not necessarily have any knowledge in the field of study. A popular research report aims to make information accessible to everyone. 

It is written in very simple language, which makes it easy to understand the findings and recommendations. Examples of popular research reports are the information contained in newspapers and magazines. 

Importance of a Research Report 

  • Knowledge Transfer: As already stated above, one of the reasons for carrying out research is to contribute to the existing body of knowledge, and this is made possible with a research report. A research report serves as a means to effectively communicate the findings of a systematic investigation to all and sundry.  
  • Identification of Knowledge Gaps: With a research report, you’d be able to identify knowledge gaps for further inquiry. A research report shows what has been done while hinting at other areas needing systematic investigation. 
  • In market research, a research report would help you understand the market needs and peculiarities at a glance. 
  • A research report allows you to present information in a precise and concise manner. 
  • It is time-efficient and practical because, in a research report, you do not have to spend time detailing the findings of your research work in person. You can easily send out the report via email and have stakeholders look at it. 

Guide to Writing a Research Report

A lot of detail goes into writing a research report, and getting familiar with the different requirements would help you create the ideal research report. A research report is usually broken down into multiple sections, which allows for a concise presentation of information.

Structure and Example of a Research Report

  • Title

This is the title of your systematic investigation. Your title should be concise and point to the aims, objectives, and findings of a research report.

  • Table of Contents

This is like a compass that makes it easier for readers to navigate the research report.

  • Abstract

An abstract is an overview that highlights all important aspects of the research including the research method, data collection process, and research findings. Think of an abstract as a summary of your research report that presents pertinent information in a concise manner. 

An abstract is always brief; typically 100-150 words and goes straight to the point. The focus of your research abstract should be the 5Ws and 1H format – What, Where, Why, When, Who and How. 

  • Introduction

Here, the researcher highlights the aims and objectives of the systematic investigation as well as the problem which the systematic investigation sets out to solve. When writing the report introduction, it is also essential to indicate whether the purposes of the research were achieved or would require more work.

In the introduction section, the researcher specifies the research problem and outlines the significance of the systematic investigation. The researcher is also expected to define any jargon and terminology used in the research.  

  • Literature Review

A literature review is a written survey of existing knowledge in the field of study. In other words, it is the section where you provide an overview and analysis of different research works that are relevant to your systematic investigation. 

It highlights existing research knowledge and areas needing further investigation, which your research has sought to fill. At this stage, you can also hint at your research hypothesis and its possible implications for the existing body of knowledge in your field of study. 

  • An Account of the Investigation

This is a detailed account of the research process, including the methodology, sample, and research subjects. Here, you are expected to provide in-depth information on the research process including the data collection and analysis procedures. 

In a quantitative research report, you’d need to provide information on the surveys, questionnaires, and other quantitative data collection methods used in your research. In a qualitative research report, you are expected to describe the qualitative data collection methods used, including interviews and focus groups. 

  • Findings

In this section, you are expected to present the results of the systematic investigation. 

  • Discussion

This section further explains the findings outlined earlier. Here, you are expected to present a justification for each outcome and show whether the results are in line with your hypotheses or whether other research studies have come up with similar results.

  • Conclusions

This is a summary of all the information in the report. It also outlines the significance of the entire study. 

  • References and Appendices

This section contains a list of all the primary and secondary research sources. 

Tips for Writing a Research Report

  • Define the Context for the Report

As when writing an essay, defining the context for your research report helps you create a detailed yet concise document. This is why you need to create an outline before writing, so that you do not leave anything out. 

  • Define your Audience

Writing with your audience in mind is essential as it determines the tone of the report. If you’re writing for a general audience, you would want to present the information in a simple and relatable manner. For a specialized audience, you would need to make use of technical and field-specific terms. 

  • Include Significant Findings

The idea of a research report is to present an abridged version of your systematic investigation. In your report, you should exclude irrelevant information and highlight only important data and findings. 

  • Include Illustrations

Your research report should include illustrations and other visual representations of your data. Graphs, pie charts, and relevant images lend additional credibility to your systematic investigation.

  • Choose the Right Title

A good research report title is brief, precise, and contains keywords from your research. It should provide a clear idea of your systematic investigation so that readers can grasp the entire focus of your research from the title. 

  • Proofread the Report

Before publishing the document, ensure that you give it a second look to authenticate the information. If you can, get someone else to go through the report, too, and you can also run it through proofreading and editing software. 

How to Gather Research Data for Your Report  

  • Understand the Problem

Every research project aims at solving a specific problem or set of problems, and this should be at the back of your mind when writing your research report. Understanding the problem helps you filter the information you have and include only important data in your report. 

  • Know what your report seeks to achieve

This is somewhat similar to the point above because, in some way, the aim of your research report is intertwined with the objectives of your systematic investigation. Identifying the primary purpose of writing a research report would help you to identify and present the required information accordingly. 

  • Identify your audience

Knowing your target audience plays a crucial role in data collection for a research report. If your research report is specifically for an organization, you would want to present industry-specific information or show how the research findings are relevant to the work that the company does. 

  • Create Surveys/Questionnaires

A survey is a research method that is used to gather data from a specific group of people through a set of questions. It can be either quantitative or qualitative. 

A survey is usually made up of structured questions, and it can be administered online or offline. However, an online survey is a more effective method of research data collection because it helps you save time and gather data with ease. 

You can seamlessly create an online questionnaire for your research on Formplus . With the multiple sharing options available in the builder, you would be able to administer your survey to respondents in little or no time. 

Formplus also has a report summary tool that you can use to create custom visual reports for your research.

Step-by-step guide on how to create an online questionnaire using Formplus  

  • Sign in to Formplus

In the Formplus builder, you can easily create different online questionnaires for your research by dragging and dropping preferred fields into your form. To access the Formplus builder, you will need to create an account on Formplus. 

Once you do this, sign in to your account and click on Create new form to begin. 

  • Edit Form Title : Click on the field provided to input your form title, for example, “Research Questionnaire.”
  • Edit Form : Click on the edit icon to edit the form.
  • Add Fields : Drag and drop preferred form fields into your form in the Formplus builder inputs column. There are several field input options for questionnaires in the Formplus builder. 
  • Edit fields
  • Click on “Save”
  • Form Customization: With the form customization options in the form builder, you can easily change the outlook of your form and make it more unique and personalized. Formplus allows you to change your form theme, add background images, and even change the font according to your needs. 
  • Multiple Sharing Options: Formplus offers various form-sharing options, which enable you to share your questionnaire with respondents easily. You can use the direct social media sharing buttons to share your form link to your organization’s social media pages, or send out your survey form as email invitations to your research subjects. If you wish, you can share your form’s QR code or embed it on your organization’s website for easy access. 

Conclusion  

Always remember that a research report is just as important as the actual systematic investigation because it plays a vital role in communicating research findings to everyone else. This is why you must take care to create a concise document summarizing the process of conducting any research. 

In this article, we’ve outlined essential tips to help you create a research report. When writing your report, you should always have the audience at the back of your mind, as this would set the tone for the document. 



CMAJ, v.187(6); 2015 Apr 7

How to assess a survey report: a guide for readers and peer reviewers

Although designing and conducting surveys may appear straightforward, there are important factors to consider when reading and reviewing survey research. Several guides exist on how to design and report surveys, but few guides exist to assist readers and peer reviewers in appraising survey methods. 1 – 9 We have developed a guide to aid readers and reviewers to discern whether the information gathered from a survey is reliable, unbiased and from a representative sample of the population. In our guide, we pose seven broad questions and specific subquestions to assist in assessing the quality of articles reporting on self-administered surveys ( Box 1 ). We explain the rationale for each question posed and cite literature addressing its relevance in appraising the methodologic and reporting quality of survey research. Throughout the guide, we use the term “questionnaire” to refer to the instrument administered to respondents and “survey” to define the process of administering the questionnaire. We use “readers” to encompass both readers and peer reviewers.

A guide for appraising survey reports

  • 1a. Does the research question or objective specify clearly the type of respondents, the topic of interest, and the primary and secondary research questions to be addressed?
  • 2a. Was the population of interest specified?
  • 2b. Was the sampling frame specified?
  • 3a. Item generation and reduction: Did the authors report how items were generated and ultimately reduced?
  • 3b. Questionnaire formatting: Did the authors specify how questionnaires were formatted?
  • 3c. Pretesting: Were individual questions within the questionnaire pretested?
  • 4a. Pilot testing: Was the entire questionnaire pilot tested?
  • 4b. Clinimetric testing: Were any clinimetric properties (face validity or clinical sensibility testing, content validity, inter- or intra-rater reliability) evaluated and reported?
  • 5a. Was the method of questionnaire administration appropriate for the research objective or question posed?
  • 5b. Were additional details regarding prenotification, use of a cover letter and an incentive for questionnaire completion provided?
  • 6a. Was the response rate reported (alternatively, were techniques used to assess nonresponse bias)?
  • 6b. Was the response rate defined?
  • 6c. Were strategies used to enhance the response rate (including sending of reminders)?
  • 6d. Was the sample size justified?
  • 7a. Does the survey report address the research question(s) posed or the survey objectives?
  • 7b. Were methods for handling missing data reported?
  • 7c. Were demographic data of the survey respondents provided?
  • 7d. Were the analytical methods clear?
  • 7e. Were the results succinctly summarized?
  • 7f. Did the authors’ interpretation of the results align with the data presented?
  • 7g. Were the implications of the results stated?
  • 7h. Was the questionnaire provided in its entirety (as an electronic appendix or in print)?

Questions to ask about a survey report

Was a clear objective posed?

Every questionnaire should be guided by a simple, clearly articulated objective that highlights the topic of interest, the type of respondents and the primary research question to be addressed. 2 Readers should use their judgment to determine whether the survey report answered the research question posed.

Was the target population defined, and was the sample representative of the population?

The population of interest should be clearly defined in the research report. Because administering a questionnaire to an entire population is usually infeasible, researchers must survey a sample of the population. Readers should assess whether the sample of potential respondents was representative of the population. Representativeness refers to how closely the sample reflects the attributes of the population. The “sampling frame” is the target population from which the sample will be drawn. 2

How the sampling frame and potential respondents were identified should be specified in the report. For example, what sources did the authors use to identify the population (e.g., membership lists, registries), and how did they identify individuals within the population as potential respondents? Sample selection can be random (probability design [simple random, systematic random, stratified random or cluster sampling]) or deliberate (nonprobability design), 2 with advantages and disadvantages associated with each sampling strategy ( Table 1 ). 10 Except for cluster sampling, investigators rely on lists of individuals with accurate and up-to-date contact information (e.g., postal or email addresses, telephone numbers) to conduct probability sampling.
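The probability designs named above (simple random, systematic random, stratified random sampling) can be illustrated in a few lines of code. This is a minimal sketch, not drawn from the article: the membership list, strata, and sample sizes are all hypothetical.

```python
import random

# Hypothetical sampling frame: a membership list of 1,000 people.
frame = [f"member_{i:04d}" for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(frame, k=100)

# Systematic random sampling: a random start, then every k-th member.
interval = len(frame) // 100
start = random.randrange(interval)
systematic = frame[start::interval][:100]

# Stratified random sampling: sample proportionally within strata
# (two invented strata of 600 and 400 members).
strata = {"urban": frame[:600], "rural": frame[600:]}
stratified = []
for name, members in strata.items():
    n = round(100 * len(members) / len(frame))  # proportional allocation
    stratified.extend(random.sample(members, k=n))

print(len(simple), len(systematic), len(stratified))  # 100 100 100
```

Each design yields 100 potential respondents, but as Table 1 suggests, they trade off differently: stratified sampling guarantees representation of each stratum, while simple random sampling may under-sample small subgroups by chance.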

Commonly used strategies for probability sampling

Adapted, with permission, from Aday and Cornelius. 10

Readers should consider how similar the individuals invited to participate in the survey were to the target population based on the data sources and sampling strategy used, the technique used to administer the survey questionnaire and the respondents’ demographic data.

Was a systematic approach used to develop the questionnaire?

Questionnaire development includes four phases: item generation, item reduction, formatting and pretesting. Readers should discern whether a systematic approach was used to develop the questionnaire and understand the potential consequences of not using a methodical approach. Use of a systematic approach to questionnaire development reassures readers that key domains were not missed and that the authors carefully considered the phrasing of individual questions to limit the chance of misunderstanding. When evaluating questionnaire development, readers should ask themselves whether the methods used by the authors allowed them to address the research question posed.

First, readers should assess how items for inclusion in the questionnaire were generated (e.g., literature reviews, 11 in-depth interviews or focus group sessions, or the Delphi technique involving experts or potential respondents 2 , 12 , 13 ). In item generation, investigators identify all potential constructs or items (ideas, concepts) that could be included in the questionnaire with the goal of tapping into important domains (categories or themes) of the research question. 14 This step helps investigators define the constructs they wish to explore, group items into domains and start formulating questions within domains. 2 Item generation continues until they cannot identify new items.

Second, readers should assess how the authors identified redundant items and constrained the number of questions posed within domains, without removing important constructs or entire domains, with the goal of limiting respondent burden. Most research questions can be addressed with at least five domains and 25 or fewer items. 12 , 15 To determine how the items were reduced in number, readers should examine the process used (e.g., investigators may have conducted interviews or focus groups), who was involved (e.g., external appraisers or experts) and how items were identified for inclusion or exclusion (e.g., use of binary responses [in/out], ranking [ordinal scales], rating [Likert scales] 16 or statistical methods).

Next, readers should review the survey methods and the appended questionnaire to determine whether measures were taken by the authors to limit ambiguity while formatting question stems and response options. Each question should have addressed a single construct, and the perspective from which each question was posed should be clear. Readers should examine individual question stems and response formats to ensure that the phrasing was simple and easily understood, each question addressed a single item, and the response formats were mutually exclusive and exhaustive. 2 , 17 Closed-ended response formats, whereby respondents are restricted to specific responses (e.g., yes or no) or a limited number of categories, are the most frequently used and the easiest to aggregate and analyze. If necessary, authors may have included indeterminate responses and “other” options to provide a comprehensive list of response options. Readers should note whether the authors avoided using terminology that could be perceived as judgmental, biased or absolute (e.g., “always,” “never”) 15 and avoided using negative and double-barrelled items 17 that may bias responses or make them difficult to interpret.

Lastly, readers should note whether pretesting was conducted. Pretesting ensures that different respondents interpret the same question similarly and that questions fulfill the authors’ intended purpose and address the research question posed. In pretesting, the authors obtain feedback (e.g., written or verbal) from individuals who are similar to prospective respondents on whether to accept, reject or revise individual questions.

Was the questionnaire tested?

Several types of questionnaire testing can be performed, including pilot, clinical sensibility, reliability and validity testing. Readers should assess whether the investigators conducted formal testing to identify problems that may affect how respondents interpret and respond to individual questions and to the questionnaire as a whole. At a minimum, each questionnaire should have undergone pilot testing. Readers should evaluate what process was used for pilot testing the questionnaire (e.g., investigators sought feedback in a semi-structured format), the number and type of people involved (e.g., individuals similar to those in the sampling frame) and what features (e.g., the flow, salience and acceptability of the questionnaire) were assessed. Both pretesting and pilot testing minimize the chance that respondents will misinterpret questions. Whereas pretesting focuses on the wording of the questionnaire, pilot testing assesses the flow and relevance of the entire questionnaire, as well as individual questions, to identify unusual, irrelevant, poorly worded or redundant questions and responses. 18 Through testing, the authors identify problems with questions and response formats so that modifications can be made to enhance questionnaire reliability, validity and responsiveness.

Readers should determine whether additional testing (e.g., clinical sensibility, inter- or intra-rater reliability, and validity testing) was conducted and, if so, the number and type of participants involved in each assessment.

Clinical sensibility testing addresses the comprehensiveness, clarity and face validity of the questionnaire. 2 Readers should assess whether such an assessment was made and how it was done (e.g., use of a structured assessment sheet with either a Likert scale 16 or nominal response format). Clinical sensibility testing reassures readers that the investigators took steps to identify missing or redundant items, evaluated how well the questionnaire addressed the research question posed and assessed whether existing questions and responses were easily understood.

Reliability testing determines whether differences in respondents’ answers were due to poorly designed questions or to true differences within or between respondents. Readers should assess whether any reliability testing was conducted. In intra-rater (or test–retest) reliability testing, investigators assess whether the same respondent answered the same questions consistently when administered at different times, in the absence of any expected change. With internal consistency, they determine whether items within a construct are associated with one another. A variety of statistical tests can be used to assess test–retest reliability and internal consistency. 2 Test–retest reliability is commonly reported in survey articles. Substantial or near-perfect reliability scores (e.g., intraclass correlation coefficient > 0.61 and 0.80, respectively) should reassure readers that the respondents, when presented with the same question on two separate occasions, answered it similarly. 19

Types of validity assessments include face, content, construct and criterion validity. Readers should assess whether any validity testing was conducted. Although the number of validity assessments depends on current or future use of the questionnaire, investigators should have assessed at a minimum the face validity of their questionnaire during clinical sensibility testing. 2 In face validity, experts in the field or a sample of respondents similar to the target population determine whether the questionnaire measures what it aims to measure. 20 In content validity, experts assess whether the content of the questionnaire includes all aspects considered essential to the construct or topic. Investigators evaluate construct validity when specific criteria to define the concept of interest are unknown; they verify whether key constructs were included using content validity assessments made by experts in the field or using statistical methods (e.g., factor analysis). 2 In criterion validity, investigators compare responses to items with a gold standard. 2

Was the questionnaire administered in a manner that limited both response and nonresponse bias?

Although a variety of techniques (e.g., postal, telephone, electronic) can be used to administer questionnaires, postal and electronic (email or Internet) methods are the most common. The technique chosen depends on several factors, such as the research question, the amount and type of information desired, the sample size, available personnel and financial resources. The selected administration technique may result in response bias and ultimately influence how well survey respondents represent the target population. For example, telephone surveys will tend to identify respondents at home, and electronic surveys are more apt to be answered by those with Internet access and computer skills. Moreover, although electronic questionnaires are less labour intensive to conduct than postal questionnaires, their response rates may be lower. 21

Readers should assess the alignment between the administration technique used and the research question posed, the potential for the administration technique to have influenced the survey results and whether the investigators made efforts to contact nonrespondents to limit nonresponse bias.

Was the response rate reported, and were strategies used to optimize the response rate?

Readers should examine the reported survey methods and results to determine whether the response rates (numerators and denominators) align with the definition of the response rates provided, whether a target sample was provided and whether the investigators used specific strategies to enhance the response rate. 22 A high response rate (i.e., > 80%) minimizes the potential for bias due to absent responses, ensures precision of estimates and generalizability of survey findings to the target population and enhances the validity of the questionnaire. 1 , 2 , 23

The “sampling element” refers to the respondents for whom information is collected and analyzed. To compute accurate response rates, readers need information on the number of surveys sent (denominator) and the number of surveys received (numerator). They should then examine characteristics of the surveys returned and identify reasons for nonresponse. For example, returned questionnaires may be classified as eligible (completed) or ineligible (e.g., returned and opted out because it did not meet eligibility criteria or self-reported as ineligible). Questionnaires that were not returned may represent individuals who were eligible but who did not wish to respond or participate in the survey or individuals with indeterminate eligibility. 8 Readers should also determine how specific eligibility circumstances (e.g., return-to-sender questionnaires, questionnaires completed in part or in full) were managed and analyzed by the investigators. Transparent and replicable formulas for calculating response rates are continually being developed and updated by the American Association for Public Opinion Research. 8 The use of standardized formulas enables comparison of response rates across surveys. Investigators should define and report the response rates (overall and for prespecified subgroups) and provide sufficient detail to understand how they were computed. This information will help readers to verify the computations and determine how closely actual and target response rates align.
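The arithmetic behind a response rate is simple once returned questionnaires are classified; what matters, as noted above, is reporting the numerator and denominator transparently. The sketch below uses invented counts and simplified categories loosely in the spirit of the standardized formulas, not the exact published definitions.

```python
# Simplified response-rate calculation with invented counts.
sent = 500          # questionnaires distributed
completed = 260     # eligible, completed in full
partial = 30        # eligible, completed in part
refused = 50        # returned, opted out
ineligible = 40     # returned but did not meet eligibility criteria
not_returned = 120  # eligibility unknown

# Sanity check: every questionnaire sent is accounted for.
assert completed + partial + refused + ineligible + not_returned == sent

# Ineligible returns are excluded from numerator and denominator.
denominator = completed + partial + refused + not_returned

rr_min = completed / denominator                 # completes only
rr_partial = (completed + partial) / denominator # partials counted as responses

print(f"{rr_min:.1%}, {rr_partial:.1%}")  # 56.5%, 63.0%
```

The gap between the two rates (56.5% vs. 63.0%) shows why a report must state how partially completed questionnaires were handled before a reader can compare its response rate with other surveys.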

To optimize the number of valid responses, authors should report a sample size estimate based on anticipated response rates. 10 , 24 An a priori computation of the sample size helps guide the number of respondents sought and, if realized, increases readers’ confidence in the survey results. 25 Response rates below 60%, between 60% and 70%, and 70% or higher have all traditionally been considered acceptable, 12 , 14 , 16 with lower mean rates reported among physicians (54%–61%) 26 , 27 than among nonphysicians (68%). 26 A recent meta-analysis of 48 studies identified an overall survey response rate of 53% among health professionals. 28 A rate of 60% or higher among physicians is reasonable and has face validity. Notwithstanding, some authors feel that there is no scientifically established minimum acceptable response rate and assert that response rates may not be associated with survey representativeness or quality. 29 In such instances, the more important consideration in determining representativeness is the degree to which sampled respondents differ from the target population (or nonresponse bias), which can be assessed using a variety of techniques. 29
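An a priori sample-size estimate of the kind described above can be sketched as follows: compute the number of completed responses needed to estimate a proportion at a given precision, then inflate by the anticipated response rate to get the number of invitations. The confidence level, margin of error, and 60% anticipated rate are illustrative assumptions.

```python
import math

# Completed responses needed to estimate a proportion.
z = 1.96    # 95% confidence
p = 0.5     # conservative assumed proportion (maximizes required n)
e = 0.05    # desired margin of error

n_needed = math.ceil(z**2 * p * (1 - p) / e**2)

# Inflate by the anticipated response rate to size the mailing list
# (60% is in line with the physician rates cited above).
anticipated_rr = 0.60
n_invite = math.ceil(n_needed / anticipated_rr)

print(n_needed, n_invite)  # 385 642
```

Reporting this computation lets readers judge whether the achieved number of respondents, not just the response rate, was adequate for the precision the authors claim.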

Relevant or topical research questions are more likely to garner interest and enthusiasm from potential respondents and favourably influence response rates. 22 Readers should assess whether potential respondents received a prenotification statement and an incentive (monetary or nonmonetary) for completing the questionnaire, and what type and number of reminder questionnaires were issued to nonrespondents. All of these factors can enhance response rates. In Tables 2 and 3, we summarize strategies shown to substantially influence response rates to postal and electronic questionnaires in a large meta-analysis. 22 Reminder strategies may involve multiple follow-up waves. 32 For postal surveys, each additional mailed reminder could increase initial response rates by 30% to 50%. 33 The ordering of survey administration techniques can also be important. In one study, the response rate was higher when an initial postal survey was followed by an online survey to nonrespondents than when the reverse was done. 34

Strategies that enhance or reduce the response rate to mailed questionnaires 22

Note: CI = confidence interval, NA = not applicable.

Strategies that enhance or reduce the response rate to electronic questionnaires 22

Were the results reported clearly and transparently?

The reported results should directly address the primary and secondary research questions posed. Survey findings should be reported with sufficient detail to be clear and transparent. Readers should assess whether the methods used to handle missing data and to conduct analyses have been reported. They should also determine if the authors’ conclusions align with the data presented and whether the authors have discussed the implications of their findings. Readers should review the demographic data on the survey respondents to determine how similar or different the sampling frame is from their own population. Finally, they may wish to review the questionnaire, which ideally should be appended as an electronic supplement, to align specific questions with reported results.

Although several guides have been published for reporting survey findings, 3 – 6 survey methods are often poorly reported, leaving readers to question whether specific steps in questionnaire design were conducted or simply not reported. 9 , 35 In a review of 117 published surveys, few provided the questionnaire or core questions (35%), defined the response rate (25%), reported the validity or reliability of the instrument (19%), discussed the representativeness of the sample (11%) or identified how missing data were handled (11%). 9 In another review of survey reporting in critical care, Duffett and colleagues 35 identified five key features of high-quality survey reporting: a stated question (or objective); identification of the respondent group; the number of questionnaires distributed; the number of returned questionnaires completed in part or in full (or a response rate); and the methods used to deal with incomplete surveys. They identified additional methodologic features that should be reported to aid in interpretation and to limit bias: the unit of analysis (e.g., individual, hospital, city); the number of potential respondents who could not be contacted; the rationale for excluding entire questionnaires from the analysis; and the type of analysis conducted (e.g., descriptive, inferential or higher level analysis). 35 Finally, they highlighted the need for investigators to provide demographic data about respondents and to append a copy of the questionnaire to the published survey report. 35 With transparent reporting, readers will be able to identify the strengths and limitations of survey studies more easily and to determine how applicable the results are to their setting.

The seven broad questions and specific subquestions posed in our guide are designed to help readers assess the quality of survey reports systematically. They assess whether authors have posed a clear research question, gathered information from an unbiased and representative sample of the population and used a systematic approach to develop and test their questionnaire. They also probe whether authors defined and reported response rates (or assessed nonresponse bias), adopted strategies to enhance the response rate and justified the sample size. Finally, our guide prompts readers to assess the clarity and transparency with which survey results are reported. Rigorous design and testing, a high response rate, and appropriate interpretation and reporting of analyses enhance the credibility and trustworthiness of survey findings.

Although surveys may be categorized under the umbrella of observational research, there are important distinctions between surveys and other observational study designs (e.g., case–control, cross-sectional and cohort) that are not captured by the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) framework. 36 , 37 For example, with surveys, authors question respondents to understand their attitudes, knowledge and beliefs, whereas with observational studies, investigators typically observe participants for outcomes. Reasons for nonresponse and nonparticipation in surveys are often unknown, whereas individuals can be accounted for at various stages of observational studies. Survey researchers typically do not have detailed information about the characteristics of nonrespondents, whereas in observational studies, investigators may have demographic data for eligible participants who were not ultimately included in the cohort. Although they bear some similarities to observational studies, surveys have unique features of development, testing, administration and reporting that may justify the development and use of a separate reporting framework.

Our guide has limitations. First, we did not conduct a systematic review with the goal of critically appraising, comparing and synthesizing guidelines and checklists for reporting survey research. As a result, our guide may not include some items thought to be important by other researchers. Second, our guide is better suited to aiding readers in evaluating reports of self-administered questionnaires as opposed to interviewer-administered questionnaires. Third, although it does not bear directly on the methodologic quality of questionnaires, we did not address issues pertaining to research ethics approval and informed consent. Although ethics approval is required in Canada, it is not mandatory for conducting survey research in some jurisdictions. Completion and return of questionnaires typically implies consent to participate; however, written consent may be required to allow ongoing prospective follow-up, continued engagement and future contact. Finally, our intent was to develop a simple guide to assist readers in assessing and appraising a survey report, not a guidance document on how to conduct and report a survey. Although our guide does not supplant the need for a well-developed guideline on survey reporting, 9 it provides a needed framework to assist readers in appraising survey methods and reporting, and it lays the foundation for future work in this area.

  • Seven broad questions and several specific subquestions are posed to guide readers and peer reviewers in systematically assessing the quality of an article reporting on survey research.
  • Questions probe whether investigators posed a clear research question, gathered information from an unbiased and representative sample of the population, used a systematic approach to develop the questionnaire and tested the questionnaire.
  • Additional questions ask whether authors reported the response rate (or used techniques to assess nonresponse bias), defined the response rate, adopted strategies to enhance the response rate and justified the sample size.
  • Finally, the guide prompts readers and peer reviewers to assess the clarity and transparency of the reporting of the survey results.
  • American Association for Public Opinion Research ( www.aapor.org )
  • Best practices for survey research and public opinion ( www.aapor.org/AAPORKentico/Standards-Ethics/Best-Practices.aspx )
  • Eysenbach G. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res 2004;6:e34 ( www.jmir.org/2004/3/e34/ )
  • Burns KE, Duffet M, Kho M, et al.; ACCADEMY Group. A guide for the design and conduct of self-administered surveys of clinicians. CMAJ 2008;179:245–52 ( www.cmaj.ca/content/179/3/245.long )
  • Kelley K, Clark B, Brown V, et al. Good practice in the conduct and reporting of survey research. Int J Qual Health Care 2003;15:261–6 ( http://intqhc.oxfordjournals.org/content/15/3/261.long )

Acknowledgements

Karen Burns holds a Clinician Scientist Award from the Canadian Institutes of Health Research (CIHR) and an Ontario Ministry of Research and Innovation Early Researcher Award. Michelle Kho holds a CIHR Canada Research Chair in Critical Care Rehabilitation and Knowledge Translation.

Competing interests: None declared.

This article has been peer reviewed.

Contributors: Karen Burns performed the literature search, and Michelle Kho provided scientific guidance. Both authors drafted and revised the manuscript, approved the final version submitted for publication and agreed to act as guarantors of the work.


State of the Union 2024: Where Americans stand on the economy, immigration and other key issues

President Joe Biden delivered last year's State of the Union address before a joint session of Congress at the U.S. Capitol on Feb. 7, 2023. Ahead of this year's address, Americans are focused on the economy, immigration and conflicts abroad. (Win McNamee/Getty Images)

President Joe Biden will deliver his third State of the Union address on March 7. Ahead of the speech, Americans are focused on the health of the economy and the recent surge of migrant encounters at the U.S.-Mexico border, as well as ongoing conflicts abroad.

Here’s a look at public opinion on key issues facing the country, drawn from recent Pew Research Center surveys of U.S. adults.

This analysis looks at Americans’ views on a variety of national issues ahead of the annual State of the Union address.

These findings primarily come from a survey of 5,140 U.S. adults conducted Jan. 16-21, 2024. Everyone who took part in this survey is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology .
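The weighting step described above can be illustrated with the simplest possible case: post-stratification on a single variable. Pew's actual ATP weighting rakes across many variables simultaneously, so this is only a sketch; the categories and proportions below are hypothetical.

```python
# Illustrative one-variable post-stratification: each respondent's weight
# is the population share of their category divided by its sample share.
# This is a simplified stand-in for multivariate raking; the categories
# and proportions here are hypothetical, not Pew's.

def poststratify(sample_counts: dict, population_props: dict) -> dict:
    """Return a weight per category: population share / sample share."""
    n = sum(sample_counts.values())
    return {cat: population_props[cat] / (sample_counts[cat] / n)
            for cat in sample_counts}

# Hypothetical sample that over-represents college graduates.
weights = poststratify(
    sample_counts={"college": 600, "no_college": 400},
    population_props={"college": 0.35, "no_college": 0.65},
)
# college respondents are down-weighted (≈ 0.58),
# no_college respondents are up-weighted (≈ 1.63).
print(weights)
```

After weighting, each group's effective share matches its population share, which is the property the text describes when it says the survey "is weighted to be representative."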

Links to additional ATP surveys used for this analysis, including information about their methodologies, are available in the text.

Economy remains top of mind for most Americans

A bar chart showing that Americans' top policy priority this year is strengthening the economy.

Nearly three-quarters of Americans (73%) say strengthening the economy should be a top priority for Biden and Congress this year, according to a Center survey conducted in January . Of the 20 policy goals we asked about, no other issue stands out – as has been the case for the past two years .

This assessment comes amid ongoing worries about high prices . Majorities of U.S. adults say they are very concerned about the price of food and consumer goods (72%) and the cost of housing (64%).

Still, views of the economy overall have warmed a bit in the past year. Slightly more than a quarter of Americans (28%) rate U.S. economic conditions as excellent or good, an increase of 9 percentage points since last April. This shift is driven largely by Democrats and Democratic-leaning independents: 44% rate the economy positively, compared with just 13% of Republicans and GOP leaners.

Despite these sharply different assessments, majorities of both Democrats (63%) and Republicans (84%) say strengthening the economy should be a top policy goal this year. These shares are largely unchanged since last year.

Immigration resonates strongly with Republicans

A horizontal stacked bar chart showing that most Democrats and nearly half of Republicans say boosting resources for quicker decisions on asylum cases would improve situation at Mexico border.

About six-in-ten Americans (57%) say dealing with immigration should be a top policy goal for the president and Congress this year, a share that’s increased 18 points (from 39%) since the start of Biden’s term.

This change is almost entirely due to growing concern among Republicans: 76% now say immigration should be a top priority, up from 39% in 2021. By comparison, the 39% of Democrats who cite immigration as a priority has remained fairly stable since 2021.

Related: Latinos’ Views on the Migrant Situation at the U.S.-Mexico Border

The growing number of migrant encounters at the southern border has emerged as a key issue in the 2024 election cycle. Biden and former President Donald Trump, who is running for the Republican presidential nomination, both visited the border on Feb. 29 .

Eight-in-ten U.S. adults say the federal government is doing a bad job dealing with the large number of migrants at the U.S.-Mexico border, including 45% who say it’s doing a very bad job, according to our January survey . Republicans and Democrats alike fault the federal government for its handling of the border situation – 89% and 73%, respectively, say it’s doing a bad job.

The survey also asked Americans to react to nine potential policies that could address the situation at the border, and found broadly positive views of several. Half or more of U.S. adults say the following would make the situation better:

  • Increasing the number of judges and staff to process asylum applications (60%)
  • Creating more opportunities for people to legally immigrate to the U.S. (56%)
  • Increasing deportations of people who are in the country illegally (52%)

Fewer than two-in-ten say any of these proposals would make the situation worse.

Terrorism and crime are growing concerns, particularly among Republicans

A line chart showing that, since 2021, concerns about crime have grown among both Republicans and Democrats.

About six-in-ten U.S. adults say defending the country from future terrorist attacks (63%) and reducing crime (58%) should be political priorities this year. But Republicans place more emphasis on these issues than Democrats.

Republicans’ concerns about terrorism have risen 11 points since last year – 76% now say it should be a top policy priority, up from 65% then. By comparison, about half of Democrats (51%) say defending against terrorism should be a priority this year, while 55% said this last year.

Concerns about crime have risen somewhat in both parties since the start of Biden’s presidency. About seven-in-ten Republicans (68%) say reducing crime should be a top priority this year, up 13 points since 2021. And 47% of Democrats say the same, up 8 points since 2021.

Most Americans see current foreign conflicts as important to U.S. interests

A dot plot showing that Democrats are more likely than Republicans to see Russia-Ukraine war as important.

As Biden urges Congress to pass emergency foreign aid , about three-quarters of Americans see the war between Israel and Hamas (75%), the tensions between China and Taiwan (75%), and the war between Russia and Ukraine (74%) as somewhat or very important to U.S. national interests, according to a separate Center survey from January .

Democrats and Republicans are about equally likely to see the Israel-Hamas war and China-Taiwan tensions as important to national interests. But Democrats are more likely than Republicans to describe the war in Ukraine this way (81% vs. 69%).

In a late 2023 survey , 48% of Republicans said the U.S. was giving too much support to Ukraine, while just 16% of Democrats said the same. This partisan gap has grown steadily wider since the beginning of the war.

Related: Americans’ Views of the Israel-Hamas War

Republicans and Democrats alike prioritize limiting money in politics

A dot plot showing that Democrats and Republicans alike say major donors, lobbyists have too much influence on Congress.

About six-in-ten Americans (62%) – including similar shares of Democrats (65%) and Republicans (60%) – say reducing the influence of money in politics should be a top policy goal this year.   

Most Americans (72%) favor spending limits for political campaigns, according to a July 2023 Center survey . Eight-in-ten also say major campaign donors have too much influence over decisions that members of Congress make, while 73% say lobbyists and special interest groups have too much influence.

And 81% of Americans, including majorities in both parties, rate members of Congress poorly when it comes to keeping their personal financial interests separate from their work as public servants.

Wide partisan gaps on climate policy

A dot plot showing that Republicans rank climate change at the bottom of their priorities for the president and Congress in 2024.

Democrats are much more likely than Republicans to say protecting the environment (63% vs. 23%) and dealing with climate change (59% vs. 12%) should be top policy priorities for 2024. In fact, addressing climate change ranks last on Republicans’ list of priorities this year.

Views of the Biden administration’s current climate policies also differ sharply by party. Eight-in-ten Democrats say the federal government is doing too little to reduce the effects of climate change, compared with 29% of Republicans, according to a Center survey from spring 2023 .

Overall, a majority of U.S. adults (67%) support prioritizing the development of renewable energy , such as wind and solar, over expanding the production of oil, coal and natural gas. But Democrats are far more likely than Republicans to prefer this (90% vs. 42%). Still, the public overall is hesitant about a full energy transition: Just 31% say the U.S. should phase out fossil fuels completely.

Related: How Republicans view climate change and energy issues




Most teens report feeling happy or peaceful when they go without smartphones, Pew survey finds

FILE - In this March 13, 2019, file photo, Facebook, Messenger and Instagram apps are displayed on an iPhone in New York. A group of 40 state attorneys general sent a letter to Instagram and Facebook parent company Meta expressing concern over what they say is a dramatic uptick of consumer complaints about account takeovers and lockouts. The AGs called on Meta to do a better job preventing account takeovers. (AP Photo/Jenny Kane, File)



Nearly three-quarters of U.S. teens say they feel happy or peaceful when they don’t have their phones with them, according to a new report from the Pew Research Center.

In a survey published Monday , Pew also found that despite the positive associations with going phone-free, most teens have not limited their phone or social media use.

The survey comes as policymakers and children’s advocates are growing increasingly concerned with teens’ relationships with their phones and social media. Last fall, dozens of states, including California and New York, sued Instagram and Facebook owner Meta Platforms Inc. for harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features that addict children. In January, the CEOs of Meta, TikTok, X and other social media companies went before the Senate Judiciary Committee to testify about their platforms’ harms to young people.

Despite the increasing concerns, most teens say smartphones make it easier to be creative and pursue hobbies, while 45% say they help them do well in school. Most teens said the benefits of having a smartphone outweigh the harms for people their age. Nearly all U.S. teens (95%) have access to a smartphone, according to Pew.


Majorities of teens say smartphones make it a little or a lot easier for people their age to pursue hobbies and interests (69%) and be creative (65%). Close to half (45%) say these devices have made it easier for youth to do well in school.

The poll was conducted from Sept. 26-Oct. 23, 2023, among a sample of 1,453 pairs of teens with one parent and has a margin of error of plus or minus 3.2 percentage points.
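For context on that figure, the textbook margin of error for a proportion under simple random sampling can be computed directly. The reported ±3.2 points for a sample of 1,453 is larger than this naive baseline because published margins typically also fold in the design effect of weighting; the formula below is only the unweighted starting point.

```python
# Illustrative: the standard 95% margin of error for a proportion under
# simple random sampling, at the worst case p = 0.5. Real panel surveys
# report a larger margin once the design effect of weighting is included.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for an estimated proportion."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1453)
print(f"Naive SRS margin of error: ±{moe * 100:.1f} points")  # ±2.6 points
```

The gap between the naive ±2.6 and the reported ±3.2 points is one way to see how much precision the weighting adjustments cost.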

Here are some of the survey’s other findings:

— About half of parents (47%) say they limit the amount of time their teen can be on their phone, while a similar share (48%) don’t do this.

— Roughly 4 in 10 parents and teens (38% each) say they at least sometimes argue with each other about how much time their teen spends on the phone. Ten percent in each group said this happens often, with Hispanic Americans the most likely to say they often argue about phone use.

— Nearly two-thirds (64%) of parents of 13- to 14-year-olds say they look through their teen’s smartphone, compared with 41% among parents of 15- to 17-year-olds.

— Forty-two percent of teens say smartphones make learning good social skills harder, while 30% said it makes it easier.

— About half of the parents said they spend too much time on their phone. Higher-income parents were more likely to say this than those in lower income buckets, and white parents were more likely to report spending too much time on their phone than Hispanic or Black parents.


Health Research Institute survey reveals adults under 30 are the most lonely people in Canberra


A recent survey from the Health Research Institute (HRI) has told an inquiry into loneliness in the ACT that people aged between 18 and 29 are the loneliest in Canberra.

The University of Canberra (UC) survey — which suggested some groups are consistently more likely to experience loneliness and social isolation than others — was submitted to an ACT Legislative Assembly inquiry into loneliness and social isolation in the ACT.

The Living Well in the ACT Region survey measured the incidence of loneliness among adults living in the territory at five points in time between late 2019 and early 2023, including during the first COVID-19 lockdown in early 2020 and between lockdowns at the end of 2020.


Lead researcher Associate Professor Jacki Schirmer said the survey found loneliness was "pretty common" in the capital, and some groups in the ACT tend to experience more social isolation than was standard in other parts of Australia.

During the survey period, the proportion of 18- to 29-year-olds reporting frequent loneliness fluctuated between 13 per cent and 24 per cent, aside from a spike to 31.8 per cent during the first COVID-19 lockdown. All of these figures were higher than the ACT average of 8.8 per cent.

"In particular some of our young adults do experience pretty high levels of social isolation — and we think that's in part because a lot of people, when they're in that young adult stage of life, that's when they shift to Canberra," Professor Schirmer said.

"So they'll shift here from somewhere else, they won't know anyone, and it does take a while to get settled in and to make those close connections.

"That's actually a real risk factor for people coming to Canberra that we want to help them address, and we want to help improve that for people who are in those first few years living in the city."

'Living on your own, not knowing anyone'


The struggle to meet new people in an unfamiliar city was one that second-year Australian National University (ANU) students Mikayla Simpson and Cate Hickman faced when they arrived from out of state to start university last year.

Ms Simpson said it was "really tough" making friends in an environment where everyone was a stranger, and Ms Hickman agreed.

"I think it was really daunting to go from living with parents to living on your own, not knowing anyone," Ms Hickman said.

Ms Simpson said she thinks it's likely most young people new to Canberra would feel quite lonely and may choose to isolate themselves further, particularly students living on campus.

"If you're in an on-campus residence, staying in your room, not getting out there, it can put you in a really tough situation where you don't feel supported," Ms Simpson said.

"I know some people that just ended up going home, they didn't really enjoy the uni experience because they didn't have people to support them."

A year later Ms Hickman and Ms Simpson are the president and vice president of the ANU Taylor Swift Society – and great friends with one another, something they attribute to the society.

"I think when you first get here and you don't know anyone it's really useful to have a common interest," Ms Hickman said.

"It doesn't necessarily have to be something you always talk about, it's not like we talk about nothing but Taylor Swift, but it is so comforting to know that you've got something in common — and then from there you can build off it and create meaningful friendships."

Professor Schirmer said the high rate of loneliness in younger Canberrans had other driving factors aside from the high number of people who relocate to the nation's capital.

"Younger adults [also] tend to be much more focused on work, a bit more busy," she said.

"Especially when they're in the early family stages [they're] very focused on children and family, so that can also reduce the number of other social connections that they have."

Older Canberrans experiencing less social isolation than younger ones


Professor Schirmer said part of the research that might be surprising was that Canberrans over 65, on average, experience less social isolation than young people.

"There's a common assumption that it's the elderly that are most at risk of high isolation and high loneliness – [and] absolutely there are some people in those older age groups who experience loneliness, and often really profound loneliness, however, the majority of our elderly people are pretty social," she said.

"It's actually not the case that elderly people are at higher risk of loneliness — even in other areas of Australia, or even internationally."

Professor Schirmer said the research team believed a major factor in older Canberrans not being as lonely was their higher likelihood of having social networks they had "built up over a lifetime".

"They're out and about, they're getting out chatting to friends, and they've often got more social connections than the younger adults," she said.

"There's quite a few studies finding that what we call the 'young old', the 65 to 75-year-olds, they generally are doing pretty well.

"Often they've retired so they've got more time to interact, they've often built up a really good network of friends and they're able to reconnect with them in the early years of retirement."

Higher isolation risk associated with living alone, disability, and renting

According to Professor Schirmer the survey also revealed a number of risk factors for social isolation — the most notable "probably not very surprisingly" being living alone.

"That's partly just because it's simply harder to see people on a regular basis and have those broader social networks; you have to [put in] a lot more effort to get out and to meet people," she said.


"Renting is also associated with higher isolation, and so is having a significant disability — one that maybe prevents you being able to be very mobile and get out socially," Professor Schirmer said.

"Canberra, like a lot of cities, isn't always very friendly for people who have a significant disability to be able to engage socially with other people — and that can really increase rates of loneliness."

Professor Schirmer said she believes loneliness needs to be talked about more given its potential impact on a person's overall health.

"I think we need to talk more about loneliness because the international evidence that's coming out about the effect of loneliness on health is absolutely profound," she said.

"Study after study is showing that if you experience loneliness for an extended period of time and at a very high rate ... that is associated with things like poorer physical health outcomes in the long term, poorer mental health outcomes in the long term.

"We need to talk about it also because the ways we connect socially are changing so fast that, for those that aren't able to adapt to the new ways we connect, that can really increase [their] risk of loneliness potentially."



Haiti prime minister to resign and Robert Hur hearing: Morning Rundown

The war in Gaza casts a shadow in Jerusalem’s Old City on the first full day of Ramadan. Economists say post-pandemic inflation has reached a new phase. And a survey reveals what some teens think of their parents’ phone use.

Here’s what to know today.

Rising tensions in east Jerusalem as Ramadan begins

The streets of the Muslim quarter in Jerusalem’s Old City were markedly quiet yesterday on the first full day of Ramadan as small groups of worshippers made their way to the Al-Aqsa Mosque, the third-holiest site in Islam, for noon prayers. With no cease-fire in sight, the war in Gaza cast a heavy shadow over the start of the holy month.

Some Muslims such as shopkeeper Jamil Halwani said the “joy of Ramadan” is absent. Instead, many fear a rise in tensions and possibly violence in east Jerusalem if Israeli authorities block worshippers from accessing Al-Aqsa during the month of Ramadan.

Jamil Halwani in Jerusalem.

In the weeks leading up to Ramadan, the Israeli government had been unclear about whether it would look to impose new limits on access to the Al-Aqsa compound during the holy month. Last week, Israeli Prime Minister Benjamin Netanyahu’s office said no new restrictions would be enforced.

Already, at least one clash was reported Sunday. The Palestinian Information Ministry said Israeli authorities blocked young men from accessing the mosque’s compound as they made their way there for the first Taraweeh, an evening prayer held each night of Ramadan. Video circulated on social media showed a large crowd of people being blocked from accessing one of the gates leading to the mosque, with at least two officers appearing to strike people with batons as the group pushed back. 

Chantal Da Silva reports from Jerusalem about the mood in the Old City.  Read the full story here.

This is Morning Rundown, a weekday newsletter to start your morning.

Teens on phone use: Parents need to log off

Teens may be attached to their own smartphones but many agree their parents need to put down the devices more often. A new survey from the Pew Research Center reported that 46% of teens said their parents were sometimes or often distracted by their own phones when having conversations. Only 31% of parents said that was the case. 

Teens, meanwhile, have mixed feelings when it comes to their own smartphone use. Many said they feel happy or peaceful when they don’t have their smartphones — but 44% also said they feel anxious without them.  Read more about the survey results.

Why inflation is stuck in a ‘no landing’ scenario

When the Fed raised interest rates post-pandemic, it did so with the goal of a “soft landing” that would see lower inflation without a significant increase in unemployment. But with the inflation rate hovering for the past eight months between 3% and 4%, some experts say the Fed’s 2% goal is farther off than once projected. If the first two phases of post-pandemic inflation were breakneck price increases amid job changes and slowing price growth, experts say a third phase has emerged: a stall. They expect further evidence of that stall this morning, when the Bureau of Labor Statistics announces the inflation figures for February.

Two big reasons for stagnant inflation rates are the costs of rent and homeownership: both were still rising at more than 6% on a 12-month basis as of January. There are other factors in play, too, including supply-chain issues, particularly related to trouble in the Red Sea. While Fed Chairman Jerome Powell previously said the first post-pandemic interest rate cut would come this year, some experts now think the U.S. will see fewer rate cuts than previously forecast.

Haitian prime minister resigns as gang violence intensifies

Ariel Henry, the embattled prime minister of Haiti, has agreed to resign as the nation's capital is wracked by out-of-control gang violence.

Speaking from Puerto Rico, Henry said his government would dissolve once a transitional council had been set up, following a week of "systematic looting and destruction of public buildings and private buildings."


