Grad Coach

How To Write The Methodology Chapter

The what, why & how explained simply (with examples).

By: Jenna Crossley (PhD) | Reviewed By: Dr. Eunice Rautenbach | September 2021 (Updated April 2023)

So, you’ve pinned down your research topic and undertaken a review of the literature – now it’s time to write up the methodology section of your dissertation, thesis or research paper. But what exactly is the methodology chapter all about – and how do you go about writing one? In this post, we’ll unpack the topic, step by step.

Overview: The Methodology Chapter

  • The purpose of the methodology chapter
  • Why you need to craft this chapter (really) well
  • How to write and structure the chapter
  • Methodology chapter example
  • Essential takeaways

What (exactly) is the methodology chapter?

The methodology chapter is where you outline the philosophical underpinnings of your research and detail the specific methodological choices you’ve made. The point of the methodology chapter is to tell the reader exactly how you designed your study and, just as importantly, why you did it this way.

Importantly, this chapter should comprehensively describe and justify all the methodological choices you made in your study. For example, the approach you took to your research (i.e., qualitative, quantitative or mixed), who you collected data from (i.e., your sampling strategy), how you collected your data and, of course, how you analysed it. If that sounds a little intimidating, don’t worry – we’ll explain all these methodological choices in this post.


Why is the methodology chapter important?

The methodology chapter plays two important roles in your dissertation or thesis:

Firstly, it demonstrates your understanding of research theory, which is what earns you marks. A flawed research design or methodology would mean flawed results. So, this chapter is vital as it allows you to show the marker that you know what you’re doing and that your results are credible.

Secondly, the methodology chapter is what helps to make your study replicable. In other words, it allows other researchers to undertake your study using the same methodological approach, and compare their findings to yours. This is very important within academic research, as each study builds on previous studies.

The methodology chapter is also important in that it allows you to identify and discuss any methodological issues or problems you encountered (i.e., research limitations), and to explain how you mitigated the impacts of these. Every research project has its limitations, so it’s important to acknowledge these openly and highlight your study’s value despite its limitations. Doing so demonstrates your understanding of research design, which will earn you marks. We’ll discuss limitations in a bit more detail later in this post, so stay tuned!


How to write up the methodology chapter

First off, it’s worth noting that the exact structure and contents of the methodology chapter will vary depending on the field of research (e.g., humanities, chemistry or engineering) as well as the university. So, be sure to always check the guidelines provided by your institution for clarity and, if possible, review past dissertations from your university. Here we’re going to discuss a generic structure for a methodology chapter typically found in the sciences.

Before you start writing, it’s always a good idea to draw up a rough outline to guide your writing. Don’t just start writing without knowing what you’ll discuss where. If you do, you’ll likely end up with a disjointed, ill-flowing narrative, and you’ll then waste a lot of time rewriting in an attempt to stitch all the pieces together. Do yourself a favour and start with the end in mind.

Section 1 – Introduction

As with all chapters in your dissertation or thesis, the methodology chapter should have a brief introduction. In this section, you should remind your readers what the focus of your study is, especially the research aims. As we’ve discussed many times on the blog, your methodology needs to align with your research aims, objectives and research questions. Therefore, it’s useful to frontload this component to remind the reader (and yourself!) what you’re trying to achieve.

In this section, you can also briefly mention how you’ll structure the chapter. This will help orient the reader and provide a bit of a roadmap so that they know what to expect. You don’t need a lot of detail here – just a brief outline will do.

The intro provides a roadmap to your methodology chapter

Section 2 – The Methodology

The next section of your chapter is where you’ll present the actual methodology. In this section, you need to detail and justify the key methodological choices you’ve made in a logical, intuitive fashion. Importantly, this is the heart of your methodology chapter, so you need to get specific – don’t hold back on the details here. This is not one of those “less is more” situations.

Let’s take a look at the most common components you’ll likely need to cover. 

Methodological Choice #1 – Research Philosophy

Research philosophy refers to the underlying beliefs (i.e., the worldview) regarding how data about a phenomenon should be gathered, analysed and used. The research philosophy will serve as the core of your study and underpin all of the other research design choices, so it’s critically important that you understand which philosophy you’ll adopt and why you made that choice. If you’re not clear on this, take the time to get clarity before you make any further methodological choices.

While several research philosophies exist, two commonly adopted ones are positivism and interpretivism. These two sit roughly on opposite sides of the research philosophy spectrum.

Positivism states that the researcher can observe reality objectively and that there is only one reality, which exists independently of the observer. As a consequence, it is quite commonly the underlying research philosophy in quantitative studies and is oftentimes the assumed philosophy in the physical sciences.

Contrasted with this, interpretivism, which is often the underlying research philosophy in qualitative studies, assumes that the researcher plays a role in observing the world around them and that reality is unique to each observer. In other words, reality is observed subjectively.

These are just two philosophies (there are many more), but they demonstrate significantly different approaches to research and have a significant impact on all the methodological choices. Therefore, it’s vital that you clearly outline and justify your research philosophy at the beginning of your methodology chapter, as it sets the scene for everything that follows.

The research philosophy is at the core of the methodology chapter

Methodological Choice #2 – Research Type

The next thing you would typically discuss in your methodology section is the research type. The starting point for this is to indicate whether the research you conducted is inductive or deductive.

Inductive research takes a bottom-up approach, where the researcher begins with specific observations or data and then draws general conclusions or theories from those observations. Therefore, these studies tend to be exploratory in approach.

Conversely, deductive research takes a top-down approach, where the researcher starts with a theory or hypothesis and then tests it using specific observations or data. Therefore, these studies tend to be confirmatory in approach.

Related to this, you’ll need to indicate whether your study adopts a qualitative, quantitative or mixed approach. As we’ve mentioned, there’s a strong link between this choice and your research philosophy, so make sure that your choices are tightly aligned. When you write this section up, remember to clearly justify your choices, as they form the foundation of your study.

Methodological Choice #3 – Research Strategy

Next, you’ll need to discuss your research strategy (also referred to as a research design). This methodological choice refers to the broader strategy in terms of how you’ll conduct your research, based on the aims of your study.

Several research strategies exist, including experimental research, case studies, ethnography, grounded theory, action research and phenomenology. Let’s take a look at two of these, experimental and ethnographic, to see how they contrast.

Experimental research makes use of the scientific method, where one group is the control group (in which no variables are manipulated) and another is the experimental group (in which a specific variable is manipulated). This type of research is undertaken under strict conditions in a controlled, artificial environment (e.g., a laboratory). By having firm control over the environment, experimental research typically allows the researcher to establish causation between variables. Therefore, it can be a good choice if you have research aims that involve identifying causal relationships.

Ethnographic research, on the other hand, involves observing and capturing the experiences and perceptions of participants in their natural environment (for example, at home or in the office) – in other words, an uncontrolled environment. Naturally, this means that this research strategy would be far less suitable if your research aims involve identifying causation, but it would be very valuable if you’re looking to explore and examine a group culture, for example.

As you can see, the right research strategy will depend largely on your research aims and research questions – in other words, what you’re trying to figure out. Therefore, as with every other methodological choice, it’s essential to justify why you chose the research strategy you did.

Methodological Choice #4 – Time Horizon

The next thing you’ll need to detail in your methodology chapter is the time horizon. There are two options here: cross-sectional and longitudinal. In other words, whether the data for your study were all collected at one point in time (cross-sectional) or at multiple points in time (longitudinal).

The choice you make here depends again on your research aims, objectives and research questions. If, for example, you aim to assess how a specific group of people’s perspectives regarding a topic change over time, you’d likely adopt a longitudinal time horizon.

Another important factor to consider is simply whether you have the time necessary to adopt a longitudinal approach (which could involve collecting data over multiple months or even years). Oftentimes, the time pressures of your degree program will force your hand into adopting a cross-sectional time horizon, so keep this in mind.

Methodological Choice #5 – Sampling Strategy

Next, you’ll need to discuss your sampling strategy. There are two main categories of sampling: probability and non-probability sampling.

Probability sampling involves a random (and therefore representative) selection of participants from a population, whereas non-probability sampling entails selecting participants in a non-random (and therefore non-representative) manner. For example, selecting participants based on ease of access (this is called a convenience sample).

The right sampling approach depends largely on what you’re trying to achieve in your study – specifically, whether you’re trying to develop findings that are generalisable to a population or not. Practicalities and resource constraints also play a large role here, as it can oftentimes be challenging to gain access to a truly random sample. In the video below, we explore some of the most common sampling strategies.
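
To make the contrast concrete, here’s a minimal Python sketch of a probability (simple random) sample versus a convenience sample. The population, sample size and seed are all hypothetical, purely for illustration:

    # A hypothetical population of 1,000 people.
    import random

    population = [f"participant_{i}" for i in range(1, 1001)]

    # Probability sampling: every member has a known, equal chance of selection,
    # so the sample tends to be representative of the population.
    random.seed(42)  # fixed seed so the draw can be reproduced
    probability_sample = random.sample(population, k=100)

    # Convenience (non-probability) sampling: take whoever is easiest to reach,
    # here simply the first 100 names on the list. Quick, but not representative.
    convenience_sample = population[:100]

    print(len(probability_sample), len(convenience_sample))  # 100 100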

Methodological Choice #6 – Data Collection Method

Next up, you’ll need to explain how you’ll go about collecting the necessary data for your study. Your data collection method (or methods) will depend on the type of data that you plan to collect – in other words, qualitative or quantitative data.

Typically, quantitative research relies on surveys, data generated by lab equipment, analytics software or existing datasets. Qualitative research, on the other hand, often makes use of collection methods such as interviews, focus groups, participant observations, and ethnography.

So, as you can see, there is a tight link between this section and the design choices you outlined in earlier sections. Strong alignment between these sections, as well as with your research aims and questions, is therefore very important.

Methodological Choice #7 – Data Analysis Methods/Techniques

The final major methodological choice that you need to address is that of analysis techniques. In other words, how you’ll go about analysing your data once you’ve collected it. Here it’s important to be very specific about your analysis methods and/or techniques – don’t leave any room for interpretation. Also, as with all choices in this chapter, you need to justify each choice you make.

What exactly you discuss here will depend largely on the type of study you’re conducting (i.e., qualitative, quantitative, or mixed methods). For qualitative studies, common analysis methods include content analysis, thematic analysis and discourse analysis. In the video below, we explain each of these in plain language.

For quantitative studies, you’ll almost always make use of descriptive statistics, and in many cases, you’ll also use inferential statistical techniques (e.g., correlation and regression analysis). In the video below, we unpack some of the core concepts involved in descriptive and inferential statistics.
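
If it helps to see the distinction in practice, here’s a minimal sketch in Python; the variables (hours studied versus exam score), the values and the use of SciPy are illustrative assumptions, not part of the original post:

    import numpy as np
    from scipy import stats

    hours_studied = np.array([2, 4, 5, 7, 8, 10, 12, 14])
    exam_score = np.array([51, 55, 60, 64, 70, 74, 81, 88])

    # Descriptive statistics: summarise the sample you actually collected.
    print("mean:", exam_score.mean(), "std:", exam_score.std(ddof=1))

    # Inferential statistics: estimate a relationship and test its significance,
    # here via a simple linear regression of exam score on hours studied.
    result = stats.linregress(hours_studied, exam_score)
    print(f"slope={result.slope:.2f}, r={result.rvalue:.2f}, p={result.pvalue:.4f}")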

In this section of your methodology chapter, it’s also important to discuss how you prepared your data for analysis, and what software you used (if any). For example, quantitative data will often require some initial preparation such as removing duplicates or incomplete responses. Similarly, qualitative data will often require transcription and perhaps even translation. As always, remember to state both what you did and why you did it.
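
As a sketch of what that initial preparation might look like for quantitative data (the column names and responses below are invented):

    import pandas as pd

    raw = pd.DataFrame({
        "respondent": [1, 2, 2, 3, 4],
        "q1": [5, 3, 3, None, 4],  # None marks an unanswered item
        "q2": [4, 2, 2, 5, 1],
    })

    clean = (
        raw.drop_duplicates(subset="respondent")  # remove duplicate submissions
           .dropna()                              # drop incomplete responses
    )
    print(clean)  # respondents 1, 2 and 4 remain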

Section 3 – The Methodological Limitations

With the key methodological choices outlined and justified, the next step is to discuss the limitations of your design. No research methodology is perfect – there will always be trade-offs between the “ideal” methodology and what’s practical and viable, given your constraints. Therefore, this section of your methodology chapter is where you’ll discuss the trade-offs you had to make, and why these were justified given the context.

Methodological limitations can vary greatly from study to study, ranging from common issues such as time and budget constraints to issues of sample or selection bias. For example, you may find that you didn’t manage to draw in enough respondents to achieve the desired sample size (and therefore, statistically significant results), or your sample may be skewed heavily towards a certain demographic, thereby negatively impacting representativeness.

In this section, it’s important to be critical of the shortcomings of your study. There’s no use trying to hide them (your marker will be aware of them regardless). By being critical, you’ll demonstrate to your marker that you have a strong understanding of research theory, so don’t be shy here. At the same time, don’t beat your study to death. State the limitations, why these were justified, how you mitigated their impacts to the best degree possible, and how your study still provides value despite these limitations.

Section 4 – Concluding Summary

Finally, it’s time to wrap up the methodology chapter with a brief concluding summary. In this section, you’ll want to concisely summarise what you’ve presented in the chapter. Here, it can be a good idea to use a figure to summarise the key decisions, especially if your university recommends using a specific model (for example, Saunders’ Research Onion).

Importantly, this section needs to be brief – a paragraph or two maximum (it’s a summary, after all). Also, make sure that when you write up your concluding summary, you include only what you’ve already discussed in your chapter; don’t add any new information.

Keep it simple

Methodology Chapter Example

In the video below, we walk you through an example of a high-quality research methodology chapter from a dissertation. We also unpack our free methodology chapter template so that you can see how best to structure your chapter.

Wrapping Up

And there you have it – the methodology chapter in a nutshell. As we’ve mentioned, the exact contents and structure of this chapter can vary between universities, so be sure to check in with your institution before you start writing. If possible, try to find dissertations or theses from former students of your specific degree program – this will give you a strong indication of the expectations and norms when it comes to the methodology chapter (and all the other chapters!).

Also, remember the golden rule of the methodology chapter – justify every choice! Make sure that you clearly explain the “why” for every “what”, and reference credible methodology textbooks or academic sources to back up your justifications.

If you need a helping hand with your research methodology (or any other component of your research), be sure to check out our private coaching service, where we hold your hand through every step of the research journey. Until next time, good luck!


Dissertation Methodology – Structure, Example and Writing Guide


In any research, the methodology chapter is one of the key components of your dissertation. It provides a detailed description of the methods you used to conduct your research and helps readers understand how you obtained your data and how you plan to analyze it. This section is crucial for replicating the study and validating its results.

Here are the basic elements that are typically included in a dissertation methodology:

  • Introduction: This section should explain the importance and goals of your research.
  • Research Design: Outline your research approach and why it’s appropriate for your study. You might be conducting experimental research, qualitative research, quantitative research, or mixed-methods research.
  • Data Collection: This section should detail the methods you used to collect your data. Did you use surveys, interviews, observations, etc.? Why did you choose these methods? You should also include who your participants were, how you recruited them, and any ethical considerations.
  • Data Analysis: Explain how you intend to analyze the data you collected. This could include statistical analysis, thematic analysis, content analysis, etc., depending on the nature of your study.
  • Reliability and Validity: Discuss how you’ve ensured the reliability and validity of your study. For instance, you could discuss measures taken to reduce bias, how you ensured that your measures accurately capture what they were intended to, or how you will handle any limitations in your study.
  • Ethical Considerations: This is where you state how you have considered ethical issues related to your research, how you have protected the participants’ rights, and how you have complied with the relevant ethical guidelines.
  • Limitations: Acknowledge any limitations of your methodology, including any biases and constraints that might have affected your study.
  • Summary: Recap the key points of your methodology chapter, highlighting the overall approach and rationale of your research.

Types of Dissertation Methodology

The type of methodology you choose for your dissertation will depend on the nature of your research question and the field you’re working in. Here are some of the most common types of methodologies used in dissertations:

Experimental Research

This involves creating an experiment that will test your hypothesis. You’ll need to design an experiment, manipulate variables, collect data, and analyze that data to draw conclusions. This is commonly used in fields like psychology, biology, and physics.

Survey Research

This type of research involves gathering data from a large number of participants using tools like questionnaires or surveys. It can be used to collect a large amount of data and is often used in fields like sociology, marketing, and public health.

Qualitative Research

This type of research is used to explore complex phenomena that can’t be easily quantified. Methods include interviews, focus groups, and observations. This methodology is common in fields like anthropology, sociology, and education.

Quantitative Research

Quantitative research uses numerical data to answer research questions. This can include statistical, mathematical, or computational techniques. It’s common in fields like economics, psychology, and health sciences.

Case Study Research

This type of research involves in-depth investigation of a particular case, such as an individual, group, or event. This methodology is often used in psychology, social sciences, and business.

Mixed Methods Research

This combines qualitative and quantitative research methods in a single study. It’s used to answer more complex research questions and is becoming more popular in fields like social sciences, health sciences, and education.

Action Research

This type of research involves taking action and then reflecting upon the results. This cycle of action-reflection-action continues throughout the study. It’s often used in fields like education and organizational development.

Longitudinal Research

This type of research involves studying the same group of individuals over an extended period of time. This could involve surveys, observations, or experiments. It’s common in fields like psychology, sociology, and medicine.

Ethnographic Research

This type of research involves the in-depth study of people and cultures. Researchers immerse themselves in the culture they’re studying to collect data. This is often used in fields like anthropology and social sciences.

Structure of Dissertation Methodology

The structure of a dissertation methodology can vary depending on your field of study, the nature of your research, and the guidelines of your institution. However, a standard structure typically includes the following elements:

  • Introduction: Briefly introduce your overall approach to the research. Explain what you plan to explore and why it’s important.
  • Research Design/Approach: Describe your overall research design. This can be qualitative, quantitative, or mixed methods. Explain the rationale behind your chosen design and why it is suitable for your research questions or hypotheses.
  • Data Collection Methods: Detail the methods you used to collect your data. You should include what type of data you collected, how you collected it, and why you chose this method. If relevant, you can also include information about your sample population, such as how many people participated, how they were chosen, and any relevant demographic information.
  • Data Analysis Methods: Explain how you plan to analyze your collected data. This will depend on the nature of your data. For example, if you collected quantitative data, you might discuss statistical analysis techniques. If you collected qualitative data, you might discuss coding strategies, thematic analysis, or narrative analysis.
  • Reliability and Validity: Discuss how you’ve ensured the reliability and validity of your research. This might include steps you took to reduce bias or increase the accuracy of your measurements.
  • Ethical Considerations: If relevant, discuss any ethical issues associated with your research. This might include how you obtained informed consent from participants, how you ensured participants’ privacy and confidentiality, or any potential conflicts of interest.
  • Limitations: Acknowledge any limitations in your research methodology. This could include potential sources of bias, difficulties with data collection, or limitations in your analysis methods.
  • Summary/Conclusion: Briefly summarize the key points of your methodology, emphasizing how it helps answer your research questions or hypotheses.

How to Write Dissertation Methodology

Writing a dissertation methodology requires you to be clear and precise about the way you’ve carried out your research. It’s an opportunity to convince your readers of the appropriateness and reliability of your approach to your research question. Here is a basic guideline on how to write your methodology section:

1. Introduction

Start your methodology section by restating your research question(s) or objective(s). This ensures your methodology directly ties into the aim of your research.

2. Approach

Identify your overall approach: qualitative, quantitative, or mixed methods. Explain why you have chosen this approach.

  • Qualitative methods are typically used for exploratory research and involve collecting non-numerical data. This might involve interviews, observations, or analysis of texts.
  • Quantitative methods are used for research that relies on numerical data. This might involve surveys, experiments, or statistical analysis.
  • Mixed methods use a combination of both qualitative and quantitative research methods.

3. Research Design

Describe the overall design of your research. This could involve explaining the type of study (e.g., case study, ethnography, experimental research, etc.), how you’ve defined and measured your variables, and any control measures you’ve implemented.

4. Data Collection

Explain in detail how you collected your data.

  • If you’ve used qualitative methods, you might detail how you selected participants for interviews or focus groups, how you conducted observations, or how you analyzed existing texts.
  • If you’ve used quantitative methods, you might detail how you designed your survey or experiment, how you collected responses, and how you ensured your data is reliable and valid.

5. Data Analysis

Describe how you analyzed your data.

  • If you’re doing qualitative research, this might involve thematic analysis, discourse analysis, or grounded theory.
  • If you’re doing quantitative research, you might be conducting statistical tests, regression analysis, or factor analysis.

6. Ethical Considerations

Discuss any ethical issues related to your research. This might involve explaining how you obtained informed consent, how you’re protecting participants’ privacy, or how you’re managing any potential harms to participants.

7. Reliability and Validity

Discuss the steps you’ve taken to ensure the reliability and validity of your data.

  • Reliability refers to the consistency of your measurements, and you might discuss how you’ve piloted your instruments or used standardized measures (one common consistency check is sketched after this list).
  • Validity refers to the accuracy of your measurements, and you might discuss how you’ve ensured your measures reflect the concepts they’re supposed to measure.
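
As a concrete illustration of the reliability point, here is a minimal Python sketch of Cronbach’s alpha, one common internal-consistency statistic. Both the choice of statistic and the scores below are assumptions for illustration, not something this guide prescribes:

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: rows are respondents, columns are questionnaire items."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    scores = np.array([
        [4, 5, 4],
        [2, 3, 2],
        [5, 5, 4],
        [3, 3, 3],
    ])
    print(f"alpha = {cronbach_alpha(scores):.2f}")  # values near 1 suggest consistent items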

8. Limitations

Every study has its limitations. Discuss the potential weaknesses of your chosen methods and explain any obstacles you faced in your research.

9. Conclusion

Summarize the key points of your methodology, emphasizing how it helps to address your research question or objective.

Example of Dissertation Methodology

An example of a dissertation methodology is as follows:

Chapter 3: Methodology

Introduction

This chapter details the methodology adopted in this research. The study aimed to explore the relationship between stress and productivity in the workplace. A mixed-methods research design was used to collect and analyze data.

Research Design

This study adopted a mixed-methods approach, combining quantitative surveys with qualitative interviews to provide a comprehensive understanding of the research problem. The rationale for this approach is that while quantitative data can provide a broad overview of the relationships between variables, qualitative data can provide deeper insights into the nuances of these relationships.

Data Collection Methods

Quantitative Data Collection: An online self-report questionnaire was used to collect data from participants. The questionnaire consisted of two standardized scales: the Perceived Stress Scale (PSS) to measure stress levels and the Individual Work Productivity Questionnaire (IWPQ) to measure productivity. The sample consisted of 200 office workers randomly selected from various companies in the city.

Qualitative Data Collection: Semi-structured interviews were conducted with 20 participants chosen from the initial sample. The interview guide included questions about participants’ experiences with stress and how they perceived its impact on their productivity.

Data Analysis Methods

Quantitative Data Analysis: Descriptive and inferential statistics were used to analyze the survey data. Pearson’s correlation was used to examine the relationship between stress and productivity.
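
As an illustrative aside, the correlation step described above might be computed along the following lines; the PSS and IWPQ scores here are invented, not data from the example study:

    from scipy.stats import pearsonr

    pss = [12, 18, 25, 30, 22, 15, 28, 20]   # Perceived Stress Scale scores
    iwpq = [78, 70, 55, 48, 60, 75, 50, 66]  # productivity scores

    r, p = pearsonr(pss, iwpq)
    # A negative r would suggest that higher stress goes with lower productivity.
    print(f"r = {r:.2f}, p = {p:.4f}")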

Qualitative Data Analysis: Interviews were transcribed and subjected to thematic analysis using NVivo software. This process allowed for identifying and analyzing patterns and themes regarding the impact of stress on productivity.

Reliability and Validity

To ensure reliability and validity, standardized measures with good psychometric properties were used. In qualitative data analysis, triangulation was employed by having two researchers independently analyze the data and then compare findings.

Ethical Considerations

All participants provided informed consent prior to their involvement in the study. They were informed about the purpose of the study, their rights as participants, and the confidentiality of their responses.

Limitations

The main limitation of this study is its reliance on self-report measures, which can be subject to biases such as social desirability bias. Moreover, the sample was drawn from a single city, which may limit the generalizability of the findings.

Where to Write Dissertation Methodology

In a dissertation or thesis, the Methodology section usually follows the Literature Review. This placement allows the Methodology to build upon the theoretical framework and existing research outlined in the Literature Review, and precedes the Results or Findings section. Here’s a basic outline of how most dissertations are structured:

  • Acknowledgements
  • Literature Review (or it may be interspersed throughout the dissertation)
  • Methodology
  • Results/Findings
  • References/Bibliography

In the Methodology chapter, you will discuss the research design, data collection methods, data analysis methods, and any ethical considerations pertaining to your study. This allows your readers to understand how your research was conducted and how you arrived at your results.

Advantages of Dissertation Methodology

The dissertation methodology section plays an important role in a dissertation for several reasons. Here are some of the advantages of having a well-crafted methodology section in your dissertation:

  • Clarifies Your Research Approach: The methodology section explains how you plan to tackle your research question, providing a clear plan for data collection and analysis.
  • Enables Replication: A detailed methodology allows other researchers to replicate your study. Replication is an important aspect of scientific research because it provides validation of the study’s results.
  • Demonstrates Rigor: A well-written methodology shows that you’ve thought critically about your research methods and have chosen the most appropriate ones for your research question. This adds credibility to your study.
  • Enhances Transparency: Detailing your methods allows readers to understand the steps you took in your research. This increases the transparency of your study and allows readers to evaluate potential biases or limitations.
  • Helps in Addressing Research Limitations: In your methodology section, you can acknowledge and explain the limitations of your research. This is important as it shows you understand that no research method is perfect and there are always potential weaknesses.
  • Facilitates Peer Review: A detailed methodology helps peer reviewers assess the soundness of your research design. This is an important part of the publication process if you aim to publish your dissertation in a peer-reviewed journal.
  • Establishes Validity and Reliability: Your methodology section should also include a discussion of the steps you took to ensure the validity and reliability of your measurements, which is crucial for establishing the overall quality of your research.


How to Write the Methodology for a Dissertation

The ‘methodology’ chapter tells the reader exactly how the research was carried out. So, it needs to be accurate.

After all, the benchmark of a ‘good methodology’ is whether or not the reader feels confident enough to replicate it. To inspire that level of confidence in your reader, you’ll need to be clear and precise.

But how do you write a clear and precise methodology chapter? Well, begin by identifying the key elements of a research methodology.

What should a methodology include?

Methodology chapters do vary, so it’s difficult to provide an exhaustive checklist. Having said that, most good methodologies tend to include:

  • An outline of the research design, including how it will answer the research question(s)
  • A description of the research philosophy
  • A description of the research approach
  • An outline of the research strategy
  • A description of the research methods: This may include sub-sections such as Sampling, Procedure, Data Collection, Data Analysis, Validity & Reliability, Ethics, etc. – this largely depends on the degree you are studying.

Each element can be quite tricky to get your head around, so let’s explore them in a bit more depth.

1. An overview of the research design

Before writing your methodology, you should know which research design you are using. Broadly speaking, there are three types of Research Design: Experimental, Descriptive, and Review. Under these headings, there are various sub-types.

Most methodology chapters begin with a description of the research design. Then, with reference to the research question(s), they explain why the research design was a suitable choice.

2. Define the research philosophy

Secondly, you should clearly describe which research philosophy (or epistemology) you adopted.

This might seem like a waste of time, but it’s not!

If you clearly communicate your research philosophy to the reader, they’ll be able to understand what assumptions you made whilst conducting your research. This will not only make your research simpler to understand, but it’ll also make it easier for someone to replicate.

According to Saunders et al. (2009), there are, broadly speaking, five philosophical approaches.

You should be clear about which ‘world view’ you adopted when you carried out your research. Importantly, this can help you to consider the strengths and weaknesses of your research. It’s this kind of critical thinking that’ll earn you the best grades!

3. Define the research approach

Next, you should explain whether your dissertation took a ‘deductive’ or ‘inductive’ approach. What’s the difference? Well,

  • Deductive research tests a specific theory, often in a novel setting or with a novel population group. It is most compatible with a positivist philosophy, but it works with other philosophies, too.
  • Inductive research explores a particular phenomenon and uses the findings to shape new theory. It is most compatible with interpretivism or post-modernism.

It’s best to think of deductive research as a “top-down” or “theory-led” approach, and inductive research as a “bottom-up” or “findings-led” approach.

If you are not sure, ask yourself whether you formulated a hypothesis or not. If you have a hypothesis, your research is probably deductive.

4. Name the research strategy

The methodology should also define your chosen research strategy (quantitative, qualitative or mixed-methods). In brief:

A quantitative strategy collects numerical data, which is then analysed through statistical methods. In contrast, a qualitative strategy collects textual data, perhaps from interviews or media sources, and analyses it through a qualitative method such as thematic analysis. Finally, there’s the mixed-methods approach that combines both strategies in one dissertation.

When it comes to choosing a research strategy, there’s no ‘one best way’ as it really depends on the aims of your research. If you need help choosing a research strategy, one of our PhD Experts would be glad to assist.

5. Research methods in focus

Once you’ve laid the groundwork, it’s time to get down to the ‘nitty-gritty’. Indeed, most methodologies will cover some or all of the following:

  • Sampling – Clearly explain your sampling method.
  • Procedure – You should provide a clear description of how/where/when the research took place.
  • Data collection – How was the data collected and stored?
  • Data analysis – Provide a clear description of how you analysed the data. If you used a qualitative method like ‘thematic analysis’ (TA), make sure you cite which researcher’s TA method you followed.
  • Validity – Consider, did the results really measure what you intended them to? How did you make sure of this?
  • Reliability – Also, if this study was replicated, would similar results be produced?
  • Ethics – You should discuss the ethical implications of your research, put any Ethics forms in the Appendix, and then refer to these in the methodology.

Often, it helps to use these as subheadings to organise your ideas. But, bear in mind that some of the above headings might not be relevant to your dissertation.

How should I structure my methodology?

One of the most common questions students ask is ‘How do I structure the methodology for my dissertation?’. It’s quite difficult to advise on this because each dissertation varies.

However, as mentioned, most methodologies begin with an overview of the research design and a re-iteration of the research question(s). Then, descriptions of the research philosophy, approach, and strategy are provided. Finally, once all that is out of the way, the procedure, sampling, data collection/analysis, validity and reliability, ethics, etc. are usually discussed.

For further guidance, it’s advisable to:

  • Check your university’s dissertation guide
  • Speak to your supervisor
  • Take a look at dissertation examples from previous years
  • Consult your referencing style guidance (e.g. APA, Harvard) for any specific requirements.

Do all dissertations have a methodology?

If you are studying Natural Sciences, Computer Sciences, Psychology, Business/Management, or a Health-related degree, chances are your dissertation will need a ‘Methodology’ chapter. On the other hand, if you are studying a Humanities or Arts degree, you probably won’t need to include a ‘Methodology’ chapter.

In that case, you’ll probably explain your research design in the Introduction of your dissertation. As always, it’s best to check with your supervisor if you are unsure.

Tips for writing a robust methodology

Here are some final pointers by our dissertation writing service to keep in mind when writing your methodology chapter:

  • A common mistake students make is that they write too many words for the methodology chapter. Generally speaking, the methodology should account for around 15% of the full dissertation. Don’t make the mistake of spending too long evaluating every possible philosophy, design or strategy that you could have chosen. Instead, provide clear and succinct reasoning for the choices you’ve made, and this will allow your critical thinking skills to shine through. If you’re struggling to achieve this, our academic editors can show you how to write in a critical yet concise manner.
  • Use sub-headings as these help to make the methodology much more readable. However, make sure you observe any conventions from your dissertation handbook.
  • Write in the past tense. In a dissertation proposal , the methodology is written in the future tense (e.g. “The research design will be…”). However, when you come to write the methodology for the dissertation, this research has already been completed, so the methodology should be written in the past tense (e.g. “The research design was …”).
  • Don’t fill up your methodology with resources that belong in the Appendices. For example, if you’ve used a questionnaire as part of your research, this should go in the Appendices. When you refer to the questionnaire in the methodology, this can be followed by: “(See Appendix X)”.

Writing the methodology isn’t easy. In fact, it’s probably one of the hardest parts of the dissertation. But if you take it step-by-step and seek regular feedback from your supervisor, you’ll find it a lot easier.


Writing the Dissertation - Guides for Success: The Methodology


Overview of writing the methodology

The methodology chapter precisely outlines the research method(s) employed in your dissertation and considers any relevant decisions you made, and challenges faced, when conducting your research. Getting this right is crucial because it lays the foundation for what’s to come: your results and discussion.

Disciplinary differences

Please note: this guide is not specific to any one discipline. The methodology can vary depending on the nature of the research and the expectations of the school or department. Please adapt the following advice to meet the demands of your dissertation and the expectations of your school or department. Consult your supervisor for further guidance; you can also check out our Writing Across Subjects guide.

Guide contents

As part of the Writing the Dissertation series, this guide covers the most common conventions found in a methodology chapter, giving you the necessary knowledge, tips and guidance needed to impress your markers! The sections are organised as follows:

  • Getting Started - Defines the methodology and its core characteristics.
  • Structure - Provides a detailed walk-through of common subsections or components of the methodology.
  • What to Avoid - Covers a few frequent mistakes you'll want to...avoid!
  • FAQs - Guidance on first- vs. third-person, secondary literature and more.
  • Checklist - Includes a summary of key points and a self-evaluation checklist.

Training and tools

  • The Academic Skills team has recorded a Writing the Dissertation workshop series to help you with each section of a standard dissertation, including a video on writing the method/methodology.
  • For more on methods and methodologies, you can check out USC's methodology research guide and Huddersfield's guide to writing the methodology of an undergraduate dissertation.
  • The dissertation planner tool can help you think through the timeline for planning, research, drafting and editing.
  • iSolutions offers training and a Word template to help you digitally format and structure your dissertation.

What is the methodology?

The methodology of a dissertation is like constructing a house of cards. Having strong and stable foundations for your research relies on your ability to make informed and rational choices about the design of your study. Everything from this point on – your results and discussion –  rests on these decisions, like the bottom layer of a house of cards.

The methodology is where you explicitly state, in relevant detail, how you conducted your study in direct response to your research question(s) and/or hypotheses. You should work through the linear process of devising your study to implementing it, covering the important choices you made and any potential obstacles you faced along the way.

Methods or methodology?

Some disciplines refer to this chapter as the research methods, whilst others call it the methodology. The two are often used interchangeably, but they are slightly different. The methods chapter outlines the techniques used to conduct the research and the specific steps taken throughout the research process. The methodology also outlines how the research was conducted, but is particularly interested in the philosophical underpinning that shapes the research process. As indicated by the suffix -ology, meaning the study of something, the methodology is like the study of research, as opposed to simply stating how the research was conducted.

This guide focuses on the methodology, as opposed to the methods, although the content and guidance can be tailored to a methods chapter. Every dissertation is different and every methodology has its own nuances, so ensure you adapt the content here to your research and always consult your supervisor for more detailed guidance.

What are my markers looking for?

Your markers are looking for your understanding of the complex process behind original research. They are assessing your ability to...

  • Demonstrate an understanding of the impact that methodological choices can have on the reliability and validity of your findings, meaning you should engage with ‘why’ you did that, as opposed to simply ‘what’ you did.
  • Make informed methodological choices that clearly relate to your research question(s).

But what does it mean to engage in 'original' research? Originality doesn’t strictly mean you should be inventing something entirely new. Originality comes in many forms, from updating the application of a theory, to adapting a previous experiment for new purposes – it’s about making a worthwhile contribution.

Structuring your methodology

The methodology chapter should outline the research process undertaken, from selecting the method to articulating the tool or approach adopted to analyse your results. Because you are outlining this process, it's important that you structure your methodology in a linear way, showing how certain decisions have impacted on subsequent choices.


The 'research onion'

To ensure you write your methodology in a linear way, it can be useful to think of the methodology in terms of layers, as shown in the figure below.

Oval diagram with these layers from outside to in: philosophy, approach, methodological choice, strategies, time horizon, and techniques/procedures.

Figure: 'Research onion' from Saunders et al. (2007).

You don't need to follow these exact layers, as some won't be relevant to your research. However, the layered 'out to in' structure developed by Saunders et al. (2007) is appropriate for any methodology chapter because it guides your reader through the process in a linear fashion, demonstrating how certain decisions impacted on others. For example, you need to state whether your research is qualitative, quantitative or mixed before articulating your precise research method. Likewise, you need to explain how you collected your data before you inform the reader of how you subsequently analysed that data.

Using this linear approach from 'outer' layer to 'inner' layer, the next sections will take you through the most common layers used to structure a methodology chapter.

Introduction and research outline

Like any chapter, you should open your methodology with an introduction. It's good to start by briefly restating the research problem, or gap, that you're addressing, along with your research question(s) and/or hypotheses. Following this, it's common to provide a very condensed statement that outlines the most important elements of your research design. Here's a short example:

This study adopted qualitative research through a series of semi-structured interviews with seven experienced industry professionals.

Like any other introduction, you can then provide a brief statement outlining what the chapter is about and how it's structured (e.g., an essay map).

Restating the research problem (or gap) and your research question(s) and/or hypotheses creates a natural transition from your previous review of the literature - which helped you to identify the gap or problem - to how you are now going to address such a problem. Your markers are also going to assess the relevance and suitability of your method and methodological choices against your research question(s), so it's good to 'frame' the entire chapter around the research question(s) by bringing them to the fore.

Research philosophy

A research philosophy is an underlying belief that shapes the way research is conducted. For this reason, as featured in the 'research onion' above, the philosophy should be the outermost layer - the first methodological issue you deal with following the introduction and research outline - because every subsequent choice, from the method employed to the way you analyse data, is directly influenced by your philosophical stance.

You can say something about other philosophies, but it's best to directly relate this to your research and the philosophy you have selected - why the other philosophy isn't appropriate for you to adopt, for instance. Otherwise, explain to your reader the philosophy you have selected (using secondary literature), its underlying principles, and why this philosophy, therefore, is particularly relevant to your research.

The research philosophy is sometimes featured in a methodology chapter, but not always. It depends on the conventions within your school or discipline, so only include this if it's expected.

The reason for outlining the research philosophy is to show your understanding of the role that your chosen philosophy plays in shaping the design and approach of your research study. The philosophy you adopt also indicates your worldview (in the context of this research), which is an important way of highlighting the role you, the researcher, play in shaping new knowledge.

Research method

This is where you state whether you're doing qualitative, quantitative or mixed-methods research before outlining the exact instrument or strategy adopted for research (interviews, case study, etc.). It's also important that you explain why you have chosen that particular method and strategy. You can also explain why you're not adopting an alternate form of research, or why you haven't used a particular instrument, but keep this brief and use it to reinforce why you have chosen your method and strategy.

Your research method, more than anything else, is going to directly influence how effectively you answer your research question(s). For that reason, it's crucial that you emphasise the suitability of your chosen method and instrument for the purposes of your research.                       

Data collection

The data collection part of your methodology explains the process of how you accessed and collected your data. Using an interview as a qualitative example, this might include the criteria for selecting participants, how you recruited the participants, and how and where you conducted the interviews. There is often some overlap between data collection and research method, so don't worry about this. Just make sure you get the essential information across to your reader.

The details of how you accessed and collected your data are important for replicability purposes - the ability for someone to adopt the same approach and repeat the study. It's also important to include this information for reliability and consistency purposes (see  validity and reliability  on the next tab of this guide for more).

Data analysis

After describing how you collected the data, you need to identify your chosen method of data analysis. Inevitably, this will vary depending on whether your research is qualitative or quantitative (see note below).

Qualitative research tends to be narrative-based, with forms of 'coding' employed to categorise and group the data into meaningful themes and patterns (Bui, 2014). Quantitative research deals with numerical data, meaning some form of statistical approach is taken to measure the results against the research question(s).

Tell your reader which data analysis software (such as SPSS or ATLAS.ti) or method you've used and why, using relevant literature. Again, you can mention other data analysis tools that you haven't used, but keep this brief and relate it to your discussion of your chosen approach. This isn't to be confused with the results and discussion chapters, where you actually state and then analyse your results. This is simply a discussion of the approach taken, how you applied this approach to your data and why you opted for this method of data analysis.

Detailing how you analysed your data helps to contextualise your results and discussion chapters. This is also a validity issue (see next tab of guide), as you need to ensure that your chosen method for data analysis helps you to answer your research question(s) and/or respond to your hypotheses. To use an example from Bui (2014: 155), 'if one of the research questions asks whether the participants changed their behaviour before and after the study, then one of the procedures for data analysis needs to be a comparison of the pre- and postdata'.

Validity and reliability

Validity simply refers to whether the research method(s) and instrument(s) applied are directly suited to meet the purposes of your research – whether they help you to answer your research question(s), or allow you to formulate a response to your hypotheses.

Validity can be separated into two forms: internal and external. The difference between the two is defined by what exists inside the study (internal) and what exists outside the study (external).

  • Internal validity is the extent to which ‘the results obtained can be attributed to the manipulation of the independent variable' (Salkind, 2011: 147).
  • External validity refers to the application of your study’s findings outside the setting of your study. This is known as generalisability, meaning the extent to which the results are applicable to a wider context or population.

Reliability

Reliability refers to the consistency with which you designed and implemented your research instrument(s). The idea is to ensure that someone else could replicate your study and, by applying the instrument in exactly the same way, would achieve the same results. This is crucial to quantitative and scientific research, but isn’t strictly the case with qualitative research, given the subjective nature of the data.

With qualitative data, it’s important to emphasise that data was collected in a consistent way to avoid any distortions. For example, let’s say you’ve circulated a questionnaire to participants. You would want to ensure that every participant receives the exact same questionnaire with precisely the same questions and wording, unless different questionnaires are required for different members of the sample for the purposes of the research.

Ethical considerations

Any research involving human participants needs to consider ethical factors. In response, you need to show your markers that you have implemented the necessary measures to cover the relevant ethical issues. These are some of the factors that are typically included:

  • How did you gain the consent of participants, and how did you formally record this consent?
  • What measures did you take to ensure participants had enough understanding of their role to make an informed decision, including the right to withdraw at any stage?
  • What measures did you take to maintain the confidentiality of participants during the research and, potentially, for the write-up?
  • What measures did you take to store the raw data and protect it from external access and use prior to the write-up?

These are only a few examples of the ethical factors you need to write about in your methodology. Depending on the nature of your research, ethical considerations might form a significant part of your methodology chapter, or may only constitute a few sentences. Either way, it’s imperative that you show your markers that you’ve considered the relevant ethical implications of your research.

Limitations

Don’t make the mistake of ignoring the limitations of your study (see the next tab, 'What to Avoid', for more on this) – it’s a common part of research and should be confronted. Limitations of research can be diverse, but tend to be logistical issues relating to time, scope and access . Whilst accepting that your study has certain limitations, the key is to put a positive spin on it, like the example below:

Despite having a limited sample size compared to other similar studies, the number of participants is enough to provide sufficient data, whilst the in-depth nature of the interviews facilitates detailed responses from participants.

  • Bui, Y. N. (2014) How to Write a Master’s Thesis. 2nd edn. Thousand Oaks, CA: Sage.
  • Guba, E. G. and Lincoln, Y. S. (1994) ‘Competing paradigms in qualitative research’, in Denzin, N. K. and Lincoln, Y. S. (eds.) Handbook of Qualitative Research. Thousand Oaks, CA: Sage, pp. 105-117.
  • Salkind, N. J. (2011) ‘Internal and external validity’, in Moutinho, L. and Hutchenson, G. D. (eds.) The SAGE Dictionary of Quantitative Management Research. Thousand Oaks, CA: Sage, pp. 147-149.
  • Saunders, M., Lewis, P. and Thornhill, A. (2007) Research Methods for Business Students. 4th edn. Harlow: Pearson.

What to avoid

This portion of the guide will cover some common missteps you should try to avoid in writing your methodology.

Ignoring limitations

It might seem instinctive to hide any flaws or limitations with your research to protect yourself from criticism. However, you need to highlight any problems you encountered during the research phase, or any limitations with your approach. Your markers are expecting you to engage with these limitations and highlight the kind of impact they may have had on your research.

Just be careful that you don’t overstress these limitations. Doing so could undermine the reliability and validity of your results, and your credibility as a researcher.

Literature review of methods

Don’t mistake your methodology chapter for a detailed review of methods employed in other studies. This level of detail should, where relevant, be incorporated in the literature review chapter instead (see our Writing the Literature Review guide). Any reference to methodological choices made by other researchers should come into your methodology chapter only in support of the decisions you made.

Unnecessary detail

It’s important to be thorough in a methodology chapter. However, don’t include unnecessary levels of detail. You should provide enough detail that allows other researchers to replicate or adapt your study, but don’t bore your reader with obvious or extraneous detail.

Any materials or content that you think is worth including, but that isn’t essential in the chapter, could be included in an appendix. Appendices don’t count towards your word count (unless otherwise stated), and they can provide further detail and context for your reader. For instance, it’s quite common to include a copy of a questionnaire, or a list of interview questions, in an appendix.

Q: Should the methodology be in the past or present tense?

A: The past tense. The study has already been conducted and the methodological decisions have been implemented, meaning the chapter should be written in the past tense. For example...

Data was collected over the course of four weeks.

I informed participants of their right to withdraw at any time.

The surveys included ten questions about job satisfaction and ten questions about familial life (see Appendix).

Q: Should the methodology include secondary literature?

A: Yes, where relevant. Unlike the literature review, the methodology is driven by what you did rather than what other people have done. However, you should still draw on secondary sources, when necessary, to support your methodological decisions.

Q: Do you still need to write a methodology for secondary research?

A: Yes, although it might not form a chapter, as such. Including some detail on how you approached the research phase is always a crucial part of a dissertation, whether primary or secondary. However, depending on the nature of your research, you may not have to provide the same level of detail as you would with a primary-based study.

For example, if you’re analysing two particular pieces of literature, then you probably need to clarify how you approached the analysis process, how you used the texts (whether you focused on particular passages, for example) and perhaps why these texts were scrutinised, as opposed to others from the relevant literary canon.

In such cases, the methodology may not be a chapter, but might constitute a small part of the introduction. Consult your supervisor for further guidance.

Q: Should the methodology be in the first-person or third?

A: It’s important to be consistent, so you should use whatever you’ve been using throughout your dissertation. Third-person is more commonly accepted, but certain disciplines are happy with the use of first-person. Just remember that the first-person pronoun can be a distracting but powerful device, so use it sparingly. Consult your supervisor for further guidance.

It’s important to remember that all research is different and, as such, the methodology chapter is likely to be very different from dissertation to dissertation. Whilst this guide has covered the most common and essential layers featured in a methodology, your methodology might be very different in terms of what you focus on, the depth of focus and the wording used.

What’s important to remember, however, is that every methodology chapter needs to be structured in a linear, layered way that guides the reader through the methodological process in sequential order. Through this, your marker can see how certain decisions have impacted on others, showing your understanding of the research process.

Here’s a final checklist for writing your methodology. Remember that not all of these points will be relevant for your methodology, so make sure you cover whatever’s appropriate for your dissertation. The asterisk (*) indicates any content that might not be relevant for your dissertation. You can download a copy of the checklist to save and edit via the Word document, below.

  • Methodology self-evaluation checklist





Benchmarking your research

Why benchmark?


Benchmarking your research performance against comparable individuals, institutions and research centres can be another method of demonstrating the impact and engagement of your research. This approach can be useful when applying for grant funding or career advancement.

The Library guide Research evidence for grants and promotion is a ‘how to’ guide on information and tools for describing and capturing evidence of research outputs.

Using proprietary tools

The two main bibliometric tools that can be used for benchmarking are SciVal and InCites. SciVal's data is sourced from Elsevier's Scopus database, whilst InCites' data is sourced from Clarivate's Web of Science database.

Access SciVal (login required). Note: an Elsevier account is required to access this resource.

Watch the following video to learn more about using SciVal to benchmark your research performance.   

Benchmarking in SciVal (1:53 mins) by Ana Ranitovic (YouTube)

Access InCites (login required). Note: create an account using your RMIT email.

Watch the following video to learn more about using InCites to benchmark your research performance.

InCites Benchmarking & Analytics: Quick Tour (4:53 mins) by Web of Science Training (YouTube)

Research and Writing Skills for Academic and Graduate Researchers Copyright © 2022 by RMIT University is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.


SMU Libraries

Dissertation / Thesis Research and Writing: 1. Proposal, Lit Search, and Benchmark



Literature search - what is it?

A literature search is a "systematic and thorough search of all types of published literature in order to identify as many items as possible that are relevant to a particular topic" (Gash, 2000).

Reasons for searching the literature according to Hart (2001):

1. to identify work already done or in progress that is relevant to your work;

2. to help to prevent you from duplicating what has already been done;

3. to avoid some of the pitfalls and errors of previous research;

4. to help you design the methodology for your project by identifying the key issues and data collection techniques best suited to your topic;

5. to enable you to find gaps in existing research, thereby giving you a unique topic.

Gash, S. (2000). Effective literature searching for research (2nd ed.). Aldershot, Hampshire: Gower.

Hart, C. (2001). Doing a literature search: A comprehensive guide for the social sciences. London: Sage.

Start your lit search and benchmarking

  • PQDT Open Access Dissertations and Theses – access the full text of open access dissertations and theses free of charge.
  • Ebsco Open Dissertations

Singapore Management University Dissertations and Theses

SMU Dissertations and Theses:

  • Complete list of SMU Dissertations and Theses
  • Lee Kong Chian School of Business (LKCSB) Dissertations and Theses
  • School of Economics Dissertations and Theses
  • School of Information Systems Dissertations and Theses
  • School of Social Sciences Dissertations and Theses

SMU Doctoral Programmes dissertations:

  • SMU Academic Research PhD Dissertations
  • SMU Professional Doctorate Dissertations

Need to benchmark some more?

  • Nanyang Technological University Theses
  • National University of Singapore Theses

National Academies Press: OpenBook

A Methodology for Performance Measurement and Peer Comparison in the Public Transportation Industry (2010)

Chapter 4: Benchmarking Methodology


Introduction

This chapter describes a step-by-step process, in eight steps, for conducting a trend analysis, peer comparison, or full-fledged benchmarking effort. Not all of the steps described below will be needed for every effort.

The first step of the process is to understand the context of the benchmarking exercise. What is the source of the issue being investigated? What is the timeline for providing an answer? Will this be a one-time exercise or a permanent process? The answers to these questions will determine the level of effort for the entire process and will also guide the selection of performance measures and screening criteria in subsequent steps.

In Step 2, performance measures are developed that relate to the performance question being asked. A benchmarking program that is being set up as a regular (e.g., annual) effort will use the same set of measures year after year, while one-time efforts to address specific performance questions or issues will use unique groups of measures each time. This report's peer-grouping methodology screens potential peers based on a number of common factors that influence performance results between otherwise similar agencies; however, additional screening factors may be needed to ensure that the final peer group is appropriate for the performance question being asked. These secondary screening factors are also identified at this stage.

In all applications except a simple trend analysis of the target agency's own performance, a peer group is established in Step 3. This report's basic peer-grouping methodology has been implemented in the freely available Web-based FTIS software. Instructions for using FTIS are provided in Appendix A. FTIS identifies a set of potential peers most like the agency performing the peer review (the "target agency"), based on a variety of criteria. The software provides the results of the screening and the calculations used in the process so that users can inspect the results for reasonableness. Once the initial peer group has been established, the secondary screening factors can be applied to reduce the initial peer group to a final set of peers.

After a peer group is identified, Step 4 compares the performance of the target agency to its peers. A mix of analysis techniques is appropriate—not just looking at a snapshot of the agencies' performance for the most recent year, but also looking at trends in the data. This effort identifies both the areas where the transit agency is doing well relative to its peers (but might be able to do better) and areas where the agency's performance lags behind its peers. Ideally, the process does not focus on producing a "report card" of performance (although one can be useful for supporting the need for performance improvements), but instead is used to raise questions about potential reasons behind the performance and to identify peer group members that the target agency can learn from.

In a true benchmarking application, the process moves on to Step 5, where the target agency contacts its best-practices peers. The intent of these contacts is to (a) verify that there are no external factors unaccounted for that explain the difference in performance and (b) identify practices that could be adopted to improve one's own performance. A transit agency can skip this step, but it loses the value of learning what its peers have tried previously and thus risks spending resources unnecessarily in re-inventing the wheel.

If a transit agency seeks to improve performance in a given area, it moves on to Step 6, developing strategies for improving performance, and Step 7, implementing the strategies. The specifics of these steps depend on the particular performance-improvement need and the agency's resources and operating environment.

Once strategies for improving performance have been implemented, Step 8 monitors results on a regular basis (monthly, quarterly, or annually, depending on the issue) to determine whether the strategies are having a positive effect on performance. As the agency's peers may also be taking steps to improve performance, the transit agency should periodically return to Step 4 to compare its performance against its peers. In this way, a cycle of continuous performance improvement can be created.

Figure 2 summarizes the steps involved in the benchmarking methodology. Places in the methodology where a step can be skipped or the process can end (depending on the application) are shown with dotted connectors.

Figure 2. Benchmarking steps: 1. Understand context; 2. Develop performance measures; 3. Establish a peer group; 4. Compare performance; 5. Contact best-practices peers; 6. Develop implementation strategies; 7. Implement the strategy; 8. Monitor results.

Step 1: Understand the Context of the Benchmarking Exercise

The first step of the process is to clearly identify the purpose of the benchmarking effort since this determines the available timeframe for the effort, the amount and kind of data that can and should be collected, and the expected final outcomes. Examples of the kinds of benchmarking efforts that could be conducted, in order of increasing effort, are:

  • Immediate one-time request, such as a news media inquiry following a proposed increase in fares.
  • Short-range one-time request, such as a management focus to increase the fuel efficiency of the fleet in response to rising fuel costs.
  • Long-range one-time request, such as a regional planning process that is relating levels of service provision to population and employment density, or a state effort to develop a process to incorporate performance into a formula-based distribution of grant funding.
  • Permanent internal benchmarking process, where agency performance will be evaluated broadly on a regular (e.g., annual) basis.
  • Establishment of a benchmarking network, where peer agencies will be sought out to form a permanent group to share information and knowledge to help the group improve its collective performance.

The level of the benchmarking exercise should also be determined at this stage since it determines which of the remaining steps in the methodology will need to be applied:

  • Level 1 (trend analysis): Steps 1, 2, and 4, and possibly Steps 6–8 depending on the question to be answered.
  • Level 2 (peer comparison): Steps 1–4, and possibly Steps 6–8 depending on the question to be answered.
  • Level 3 (direct agency contact): Steps 1–5, and frequently Steps 6–8.
  • Level 4 (benchmarking networks): Steps 1–3 once, Steps 4 and 5 annually, Step 6 through participation in working groups, and Steps 7 and 8 at the discretion of the agency.

Step 2: Develop Performance Measures

Step 2a: Performance Measure Selection

The performance measures used in a peer comparison are, for the most part, dependent on the performance question being asked. For example, a question about the cost-effectiveness of an agency's operations would focus on financial outcome measures, while a question about the effectiveness of an agency's maintenance department could use measures related to maintenance activities (e.g., maintenance expenses), agency investments (e.g., average fleet age), and maintenance outcomes (e.g., revenue miles between failures). Additional descriptive measures that provide context about peer agencies are also valuable to incorporate into a review.

Because each performance question is unique, it is not possible to provide a standard set of measures to use. Instead, use Chapter 3 of this report to identify 6 to 10 outcome measures that are the most applicable to the performance question, plus additional descriptive measures as desired. In addition, Chapter 5 provides case-study applications of the methodology that include examples of performance measures used for each application.
Performance measures not directly available or derivable from the NTD (or from the other standardized data included with FTIS) will require contacting the other transit agencies in the peer group. If peer agency data are needed, be sure to budget plenty of time into the process to contact the peers, to obtain the desired information from them, and to compile the information. Examples of common situations where outside agency data might be required are:

  • Performance questions involving specific service types (e.g., commuter bus routes);
  • Performance questions involving customer satisfaction;
  • Performance questions involving quality-of-service factors such as reliability or crowding; and
  • Performance questions requiring detailed maintenance data.

Significant challenges exist whenever non-standardized data are needed to answer a performance question. Agencies may not collect the desired information at all or may define desired measures differently. If data are available, they may not be compiled in the desired format (e.g., route-specific results are provided, but the requesting agency desires service-specific results). Therefore, the target agency should plan on performing additional analysis to convert the data it receives into a useable form. It is often possible to obtain non-standard data from other agencies, but it does take more time and effort.

Benchmarking networks are a good way for a group of transit agencies to first agree on common definitions for non-NTD measures of interest and then to set up a regular data-collection and reporting process that all can benefit from.

Step 2b: Identify Secondary Screening Measures

This report's recommended peer-grouping methodology incorporates a number of factors that can influence one transit agency's performance relative to another. However, it does not account for all potential factors. Depending on the performance question being asked, a secondary screening might need to be performed on the initial peer group produced by the methodology. These measures should be selected prior to forming peer groups to avoid any perception later on that the peer group was hand-picked to produce a desired result. Examples of factors that might be considered as part of a secondary screening process include:

  • Institutional structure (e.g., appointed board vs. a directly elected board): Available from NTD form B-10. (All NTD forms with publicly released data are viewable through the FTIS software.)
  • Service operator (e.g., directly operated vs. purchased service): Although this factor is included in the peer-grouping methodology, it is not a pass/fail factor. Some performance questions, however, may require a peer group of agencies that purchase or do not purchase service. In other situations, the presence, lack, or mix of contracted service could help explain performance results, and therefore, this factor would not be desirable for secondary screening.
  • Service philosophy [e.g., providing service to as many residents and worksites as possible (coverage) vs. concentrating service where it generates the most ridership (efficiency)]: Determined from an Internet inspection of agency goals and/or route networks.
  • Service area type (e.g., being the only operator in a region): This report's peer-grouping methodology considers eight different service area types in forming peer groups, but allows peers to have somewhat dissimilar service areas. Some performance questions, however, may require exact matches. Service area information is available through FTIS; the Internet can also be used to compare agencies' system maps.
  • Funding sources: Available from NTD form F-10.
  • Vehicles operated in maximum service: Available from NTD form B-10.
  • Peak-to-base ratio: Derivable for larger agencies (at least 150 vehicles in maximum service, excluding vanpool and demand response) from NTD form S-10.
  • FTA population categories for grant funding: An agency may wish to compare itself only to other agencies within its FTA funding category (e.g., <50,000 population; 50,000–200,000 population; 200,000 to 1 million population; >1 million population), or a funding category it expects to move into in the future. Service area populations are available on NTD form B-10, while urban area populations are available through FTIS.
  • Capital facilities (e.g., number of maintenance facilities): Available from NTD form A-10.
  • Right-of-way types: Available from NTD form A-20.
  • Service days and span: Available from NTD form S-10.

Some of the case studies given in Chapter 5 provide examples of secondary screening.

Step 2c: Identify Thresholds

The peer-grouping methodology seeks to identify peer transit agencies that are similar to the target agency. It should not be expected that potential peers will be identical to the target agency, and the methodology allows potential peers to be different from the target agency in some respects. However, if a potential peer is substantially different in one respect from the target agency, it needs to be quite similar in several other respects for the methodology to identify it as a potential peer. The methodology testing determined that not all transit agencies were comfortable with having no thresholds on any given peer-grouping factor—some thought suggested peers were too big or too small in comparison to their agency, for example, despite considerable similarity elsewhere. This report discourages setting thresholds for peer-grouping factors (e.g., the size that constitutes "too big") when not needed to address a particular performance question. However, it is also recognized that the credibility and eventual success of a benchmarking exercise depends in great measure on how its stakeholders (e.g., staff, decision-makers, board, or the public) perceive the peers used in the exercise. If the peers are not perceived to be credible, the results of the exercise will be questioned. Users of the methodology at the local level are in the best position to gauge the factors that might make peers not appear credible to their stakeholders.

If thresholds are to be used, users should review the methodology's peer-grouping factors to determine (a) whether a threshold is needed and (b) what it should be. As with screening measures, it is important to do this work in advance in order to avoid perceptions later on that the peer group was hand-picked.

Step 3: Establish a Peer Group

Overview

The selection of a peer group is a vital part of the benchmarking process. Done well, the selection of an appropriate, credible peer group can provide solid guidance to the agency, point decision-makers towards appropriate directions, and help the agency implement realistic activities to improve its performance. On the other hand, selecting an inappropriate peer group at the start of the process can produce results that are not relevant to the agency's situation, or can produce targets or expectations that are not realistic for the agency's operating conditions. As discussed above, the credibility of the peer group is also important to stakeholders in the benchmarking process—if the peer group appears to be hand-picked to make the agency look good, any recommendations for action (or lack of action) that result from the process will be questioned.

Ideally, between eight and ten transit agencies will ultimately make up the peer group. This number provides enough breadth to make meaningful comparisons without creating a burdensome data-collection or reporting effort. Some agencies have more unique characteristics than others, and it may not always be possible to come up with a credible group of eight peers. However, the peer group should include at least four other agencies to have sufficient breadth. Examples of situations where the ideal number of peers may not be achievable include:

  • Larger transit agencies generally, as there is a smaller pool of similar peers to work with;
  • Largest-in-class transit agencies (e.g., largest bus-only operators), as nearly all potential peers will be smaller or will operate modes that the target agency does not operate;
  • Transit agencies operating relatively uncommon modes (e.g., commuter rail), as there is a smaller pool of potential peers to work with; and
  • Transit agencies with uncommon service types (e.g., bus operators that serve multiple urban areas), as again there is a small pool of potential peers.

The peer-grouping methodology can be applied to a transit agency as a whole (considering all modes operated by that agency), or to any of the specific modes operated by an agency. Larger multi-modal agencies that have difficulty finding a sufficient number of peers using the agency-wide peer-grouping option may consider forming mode-specific peer groups and comparing individual mode performance. Mode-specific groups are also the best choice for mode-specific evaluations, such as an evaluation of bus maintenance performance.
Larger transit agencies that have difficulty finding peers may also consider looking internationally for peers, particularly to Canada. Statistics Canada provides data for most of the peer-grouping methodology's demographic screening factors, including population, population density, low-income population, and 5-year population growth for census metropolitan areas. Many Canadian transit agency websites provide basic budget and service data that can be integrated into the peer-grouping process, and Canadian Urban Transit Association (CUTA) members have access to CUTA's full Canadian Transit Statistics database (28).

For ease of use, this report's basic peer-grouping methodology has been implemented in the Web-based FTIS software, which provides a free, user-friendly interface to the full NTD. However, the methodology can also be implemented in a spreadsheet, and was used that way during the initial testing of the methodology.

Detailed instructions on using FTIS to perform an initial peer grouping are provided in Appendix A, and full details of the calculation process used by the peer-grouping methodology are provided in Appendix B. The following subsections summarize the material in these appendices.

Step 3a: Register for FTIS

The NTD component of FTIS is accessed at http://www.ftis.org/INTDAS/NTDLogin.aspx. The site is password protected, but a free password can be requested from this page. Users typically receive a password within one business day.

Note on Canadian data: An important difference that impacts performance ratios derived from CUTA ridership data is that U.S. ridership data are based on vehicle boardings (i.e., unlinked trips), while CUTA ridership data are based on total trips regardless of number of vehicles used (i.e., linked trips). Thus, a transit trip that includes a transfer counts as two rides in U.S. data, but only one ride in CUTA data. Unlinked trips is the sum of linked trips and number of transfers. Some larger Canadian agencies also report unlinked trip data to APTA.
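Where Canadian peers are included, ridership must first be put on a common basis before performance ratios are computed. Here is a one-function Python sketch of the conversion stated in the note above; the function name is illustrative, not part of FTIS or the NTD:

```python
def unlinked_trips(linked_trips: int, transfers: int) -> int:
    """U.S. NTD ridership counts every vehicle boarding (unlinked trips),
    while CUTA counts complete journeys (linked trips). A journey with one
    transfer is two boardings, so unlinked trips = linked trips + transfers."""
    return linked_trips + transfers
```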

Step 3b: Form an Initial Peer Group

The initial peer-grouping portion of the methodology identifies transit agencies that are similar to the target agency in a number of characteristics that can influence performance results between otherwise similar agencies. "Likeness scores" are used to determine the level of similarity between a potential peer agency and the target agency both with respect to individual factors (e.g., urban area population, modes operated, and service areas) and for the agencies overall. Appendix A provides detailed instructions on using FTIS to form an initial peer group.

Transit agencies should not expect that their peers will be exactly like themselves. The methodology allows peers to differ substantially in one or more respects, but this must be compensated by a high degree of similarity in a number of other respects. (Agencies not comfortable with having a high degree of dissimilarity in a given factor can develop and apply screening thresholds, as described in Step 2c.) The goal is to identify a set of peers that are similar enough to the target agency that credible and useful insights can be drawn from the performance comparison to be conducted in Step 4.

The methodology uses the following three screening factors to help ensure that potential peers operate a similar mix of modes as the target agency:

  • Rail operator (yes/no). A rail operator is defined here as one that operates 150,000 or more rail vehicle miles annually. (This threshold is used to distinguish transit agencies that operate small vintage trolley or downtown streetcar circulators from large-scale rail operators.) This factor helps screen out rail-operating agencies as potential peers for bus-only operators.
  • Rail-only operator (yes/no). A rail-only operator operates rail and has no bus service. This factor is used to screen out multi-modal operators as peers for rail-only operators.
  • Heavy-rail operator (yes/no). A heavy-rail operator operates the heavy rail (i.e., subway or rapid transit) mode. This factor helps identify other heavy-rail operators as peers for transit agencies that operate this mode.

As discussed in more detail in Appendix A, bus-only operators that wish to consider rail operators as potential peers can export a spreadsheet containing the peer-grouping results and then manually recalculate the likeness scores, excluding these three screening factors.

Depending on the type of analysis (rail-specific vs. bus-specific or agency-wide) and the target agency's urban area size, up to 14 peer-grouping factors are used to identify transit agencies similar to the target agency. All of these peer-grouping factors are based on nationally available, consistently defined and reported measures. The factors are:

  • Urban area population. Service area population would theoretically be a preferable variable to use, but it is not yet reported in a consistent way to the NTD. Instead, the methodology uses a combination of urban area population and service area type—discussed below—as a proxy for the number of people served.
  • Total annual vehicle miles operated. This is a measure of the amount of service provided, which reflects service frequencies, service spans, and service types operated.
  • Annual operating budget. Operating budget is a measure of the scale of a transit agency's operations; agencies with similar budgets may face similar challenges.
  • Population density. Denser communities can be served more efficiently by transit.
  • Service area type. Agencies have been assigned one of eight service types, depending on the characteristics of their service (e.g., entire urban area, central city only, commuter service into a central city).
  • State capital (yes/no). State capitals tend to have a higher concentration of office employment than other similarly sized cities.
  • Percent college students. Universities provide a focal point for service and often directly or indirectly subsidize students' transit usage, thus resulting in a higher level of ridership than in other similarly sized communities.
  • Population growth rate. Agencies serving rapidly growing communities face different challenges than either agencies serving communities with moderate growth rates or agencies serving communities that are shrinking in size.
  • Percent low-income population. The amount of low-income population is a factor that has been correlated with ridership levels. Low-income statistics reflect both household size and configuration in determining poverty status and are therefore a more robust measure than either household income or automobile ownership.
  • Annual roadway delay (hours) per traveler. Transit may be a more attractive option for commuters in cities where the roadway network is more congested. This factor is only used for target agencies in urban areas with populations of 1 million or more.
  • Freeway lane miles (thousands) per capita. Transit may be more competitive with the automobile from a travel-time perspective in cities with relatively few freeway lane-miles per capita. This factor is only used for target agencies in urban areas with populations of 1 million or more.
  • Percent service demand-responsive. This factor helps describe the scale of an agency's investment in demand-response service (including ADA complementary paratransit service) as compared with fixed-route service. This factor is only used for agency-wide and bus-mode comparisons.
  • Percent service purchased. Agencies that purchase their service will typically have different organization and cost structures than those that directly operate service.
  • Distance. This factor serves multiple functions. First, it serves as a proxy for other factors, such as climate, that are more difficult to quantify but tend to become more different the farther apart two agencies are. Second, agencies located within the same state are more likely to operate under similar legislative requirements and have similar funding options available to them. Finally, for benchmarking purposes, closer agencies are easier to visit and stakeholders in the process are more likely to be familiar with nearby agencies and regions. This factor is not used for rail-mode-specific peer grouping due to the relatively small number of rail-operating agencies.

Likeness scores for most of these factors are determined from the percentage difference between a potential peer's value for the factor and the target agency's value. A score of 0 indicates that the peer and target agency values are exactly alike, while a score of 1 indicates that one agency's value is twice the amount of the other. For example, if the target agency was in a region with an urbanized area population of 100,000 while the population of a potential peer agency's region was 150,000, the likeness score would be 0.5, as one population is 50% higher than the other. For the factors that cannot be compared by percentage difference (e.g., state capital or distance), the factor likeness scores are based on formulas that are designed to produce similar types of results—a score of 0 indicates identical characteristics, a score of 1 indicates a difference, and a score of 2 or more indicates a substantial difference. Appendix A provides the likeness score calculation details for all of the peer-grouping factors.

The total likeness score is calculated from the individual screening and peer-grouping factor likeness scores as follows:

Total likeness score = [Sum(screening factor scores) + Sum(peer-grouping factor scores)] / Count(peer-grouping factors)

A total likeness score of 0 indicates a perfect match between two agencies (and is unlikely to ever occur). Higher scores indicate greater levels of dissimilarity between two agencies. In general, a total likeness score under 0.50 indicates a good match, a score between 0.50 and 0.74 represents a satisfactory match, and a score between 0.75 and 0.99 represents potential peers that may be usable, but care should be taken to investigate potential differences that may make them unsuitable. Peers with scores greater than or equal to 1.00 are undesirable due to a large number of differences with the target agency, but may occasionally be the only candidates available to fill out a peer group.

A total likeness score of 70 or higher may indicate that a potential peer had missing data for one of the screening factors. (A factor likeness score of 1,000 is assigned for missing data; dividing 1,000 by the number of peer-grouping factors results in scores of 70 and higher.) In some cases, suitable peers may be found in this group by manually re-calculating the total likeness score in a spreadsheet and removing the missing factor from consideration, if the user determines that the factor is not essential for the performance question being asked. Missing congestion-related factors, for example, might be more easily ignored than a missing total operating budget.
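Pulling these rules together, here is a minimal Python sketch of the likeness-score arithmetic described above. It covers only the percentage-difference factors and the total-score formula; the special-case formulas for factors such as state capital and distance, and the missing-data handling, are not reproduced, and the function names are illustrative rather than part of FTIS:

```python
def factor_likeness(target_value: float, peer_value: float) -> float:
    """Percentage-difference likeness score: 0 = identical values,
    1 = one value is twice the other. Assumes positive values."""
    lo, hi = sorted((target_value, peer_value))
    return hi / lo - 1.0

def total_likeness(screening_scores: list, grouping_scores: list) -> float:
    """Total likeness score = (sum of screening-factor scores +
    sum of peer-grouping-factor scores) / count of peer-grouping factors."""
    return (sum(screening_scores) + sum(grouping_scores)) / len(grouping_scores)

# Worked example from the text: urbanized area populations of 100,000
# (target) vs. 150,000 (potential peer) give a factor score of 0.5.
assert abs(factor_likeness(100_000, 150_000) - 0.5) < 1e-9
```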
Step 3c: Performing Secondary Screening

Some performance questions may require looking at a narrower set of potential peers than found in the initial peer group. For example, one case study described in Chapter 5 involves an agency that did not have a dedicated local funding source and was interested in comparing itself to peers that did have one. Another case study involves an agency in a region that was about to reach 200,000 population (thus moving into a different funding category) and wanted to compare itself to peers that were already at 200,000 population or more. Some agencies may simply want to make sure that no peer agency is "too different" to be a potential peer for a particular application.

Data contained in FTIS can often be used to perform these kinds of screenings. Some other kinds of screening, for example based on agency policy or types of routes operated (e.g., commuter bus or BRT), will require Internet searches or agency contacts to obtain the information.

The general process to follow is to first identify how many peers would ideally end up in the peer group. For the sake of this example, this number will be eight. Starting with the highest-ranked potential peer (i.e., the one with the lowest total likeness score), check whether the agency meets the secondary screening criteria. If the agency does not meet the criteria, replace it with the next available agency in the list that meets the screening criteria. For example, if the #1-ranked potential peer does not meet the criteria, check the #9-ranked agency next, then #10, and so forth, until an agency is found that meets the criteria. Repeat the process with the #2-ranked potential peer. Continue until a group of eight peers that meets the secondary screening criteria is formed, or until a potential peer's total likeness score becomes too high (e.g., is 1.00 or higher). A sketch of this replacement process appears below.
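This is a minimal Python sketch of the replacement logic just described, assuming each candidate is a dict carrying its total likeness score and the list is already in rank (ascending-score) order; `meets_criteria` is a user-supplied secondary screening test, and none of these names comes from FTIS:

```python
def select_peers(ranked_candidates, meets_criteria, group_size=8, max_score=1.0):
    """Walk the candidate list in likeness-score order, keeping agencies that
    pass the secondary screening and skipping (i.e., replacing) those that
    fail, until the group is filled or scores become too high to use."""
    peers = []
    for agency in ranked_candidates:
        if agency["likeness"] >= max_score:
            break  # remaining candidates are too dissimilar to the target
        if meets_criteria(agency):
            peers.append(agency)
            if len(peers) == group_size:
                break
    return peers

# Example criterion from the Knoxville case below: a dedicated local
# funding source.
# peers = select_peers(candidates, lambda a: a["dedicated_local_funding"])
```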

Table 15 shows an example of the screening process for Knoxville Area Transit, using "existence of a dedicated local funding source" as a criterion. The top 20 "most similar" agencies to Knoxville are shown in the table in order of their total likeness score. The table also shows whether or not each agency has a dedicated local funding source. In this case, seven of Knoxville's top eight peers have a dedicated local funding source. Connecticut Transit–New Haven Division does not, so it would be replaced by the next-highest peer in the list that does—in this case, Western Reserve Transit Authority. Although it is the 16th-most-similar agency in the list, it still has a good total likeness score of 0.53.

Table 15. Example secondary screening process for Knoxville Area Transit.

Rank | Agency | City | State | Likeness Score | Dedicated Local Funding? | Use as Peer?
– | Knoxville Area Transit (target) | Knoxville | TN | 0.00 | – | –
1 | Winston-Salem Transit Authority | Winston-Salem | NC | 0.25 | Yes | Yes
2 | South Bend Public Transportation Corporation | South Bend | IN | 0.36 | Yes | Yes
3 | Birmingham-Jefferson County Transit Authority | Birmingham | AL | 0.36 | Yes | Yes
4 | Connecticut Transit - New Haven Division | New Haven | CT | 0.39 | No | No
5 | Fort Wayne Public Transportation Corporation | Fort Wayne | IN | 0.41 | Yes | Yes
6 | Transit Authority of Omaha | Omaha | NE | 0.41 | Yes | Yes
7 | Chatham Area Transit Authority | Savannah | GA | 0.42 | Yes | Yes
8 | Stark Area Regional Transit Authority | Canton | OH | 0.44 | Yes | Yes
9 | The Wave Transit System | Mobile | AL | 0.46 | No | No
10 | Capital Area Transit | Raleigh | NC | 0.48 | No | No
11 | Capital Area Transit | Harrisburg | PA | 0.48 | No | No
12 | Shreveport Area Transit System | Shreveport | LA | 0.49 | No | No
13 | Rockford Mass Transit District | Rockford | IL | 0.50 | No | No
14 | Erie Metropolitan Transit Authority | Erie | PA | 0.52 | No | No
15 | Capital Area Transit System | Baton Rouge | LA | 0.52 | No | No
16 | Western Reserve Transit Authority | Youngstown | OH | 0.53 | Yes | Yes
17 | Central Oklahoma Transportation & Parking Auth. | Oklahoma City | OK | 0.53 | No | No
18 | Des Moines Metropolitan Transit Authority | Des Moines | IA | 0.55 | No | No
19 | Mass Transportation Authority | Flint | MI | 0.56 | Yes | No
20 | Escambia County Area Transit | Pensacola | FL | 0.57 | No | No

Although not needed in this example, some user judgment might be needed about the extent of dedicated local funding that would qualify. Some local funding sources might only provide 1% or less of an agency's total operating revenue, for example.

Step 4: Compare Performance

The performance measures to be used in the benchmarking effort were specified during Step 2a. Now that a final peer group has been identified, Step 4 focuses on gathering the data associated with those performance measures and analyzing the data.

Step 4a: Gather Performance Data

NTD Data

Performance measures that are directly collected by the NTD or can be derived from NTD measures can be obtained through FTIS. The process for doing so is described in detail in Appendix A. NTD measures provide both descriptive information such as operating costs and revenue hours and outcome measures such as ridership. Many useful performance measures, however, are ratios of two other measures. For example, cost per trip is a measure of cost-effectiveness, cost per revenue hour is a measure of cost-efficiency, and trips per revenue hour is a measure of productivity. None of these ratios is directly reported by the NTD, but all can be derived from other NTD measures. FTIS provides many common performance ratios, and any ratio derivable from NTD data can be calculated by exporting it from FTIS to a spreadsheet.

One potential concern that users may have with NTD data is the time lag between when data are submitted and when data are officially released, which can be up to 2 years. Rapidly changing external conditions—for example, fuel price increases or a downturn in the economy—may result in the most recent conditions available through the NTD not being reflective of current conditions. There are several ways that these data lag issues can be addressed if they are felt to be a concern:

1. Request NTD viewer passwords directly from the peer agencies. These passwords allow users to view, but not alter, data fields in the various NTD forms. As long as agencies are willing to share their viewer passwords, the agency performing the peer comparison has access to the most up-to-date information available.
2. Request data from state DOTs. Many states require their transit agencies to report NTD data to them at the same time they report it to the FTA.
3. Review trends in NTD monthly data. The following variables are available on a monthly basis, with only an approximate 6-month time lag: unlinked passenger trips, revenue miles, revenue hours, vehicles operated in maximum service, and number of typical days operated in a month.
4. Review trends in one's own data. Are unusual differences between current data and the most-recent NTD data due to external, national factors that would tend to affect all peers (in which case conclusions about the target agency's performance relative to its peers should still be valid), or are they due to agency- or region-specific changes?

With either of the first two options, it should be kept in mind that data obtained prior to their official release from the NTD may not yet have gone through a full quality-control check. Therefore, performing checks on the data as described in Step 4b (e.g., checking for consistent trends) is particularly recommended in those cases.

Peer Agency Data

Transit agencies requesting data for a peer analysis from other agencies should accompany their data request with the following: (a) an explanation of how they plan to use the data and whether the peer agency's data and results can or will be kept confidential, and (b) a request for documentation of how the measures are defined and, if appropriate, how the data for the measures are collected.

Transit agencies may be more willing to share data if they can be assured that the results will be kept confidential. This avoids potential embarrassment to the peer agency if they turn out to be one of the worst-in-group peers in one or more areas, and also saves them the potential trouble of having to explain differences in results to their stakeholders if they do not agree with the study's methodology or result interpretations. In one of the case studies conducted for this project, for example, one agency was not interested in sharing customer-satisfaction data because they disagreed with the way the target agency calculated and used a publicly reported customer-satisfaction index. The potential peer did not want to be publicly compared to the target agency using the target agency's methodology.

Confidentiality can be addressed in a peer-grouping study by identifying which transit agencies were selected as peers but not publicly identifying the specific agency associated with a specific data point in graphs and reports. This information would, of course, be available internally to the agency (to help them identify best-in-group peers), but conclusions about where the target agency stands relative to its peers can still be made and supported when the peer agency results are shown anonymously. The graphs that accompany the examples of data-quality checks in Step 4b give examples of how information can be presented informatively yet confidentially.

It is important to understand how measures are defined and—in some cases—how the data were collected. For example, on-time performance is a commonly used reliability measure. However, there are wide variations in how transit agencies define "on-time" (e.g., 0 to 5 minutes late vs. 1 minute early to 2 minutes late) that influence the measure's value, since a more generous range of time that is considered "on-time" will result in a higher on-time performance value (1). In addition, the location where on-time performance is measured—departure from the start of the route, a mid-route point, or arrival at the route's terminal—can influence the measure results.

For a peer agency's non-NTD data to be useful for a peer comparison, the measure values need to be defined similarly, or the measure values need to be re-calculated from raw data using a common definition. The likelihood of having similar definitions is highest when an industry standard or recommended practice exists for the measure. For example, at the time of writing, APTA was developing a draft standard on defining rail transit on-time performance (32), while TCRP Report 47 (43) provides recommendations on phrasing customer-satisfaction survey questions. The likelihood of being able to calculate measures from raw data is highest when the data are automatically recorded and stored (e.g., data from automatic passenger counter or automated vehicle location equipment) or when a measure is derived from other measures calculated in a standardized way.

Normalizing Cost Data

Transit agencies will often want to normalize cost data to (a) reflect the effects of inflation and (b) reflect differences in labor costs between regions. Adjusting for inflation allows a trend analysis to clearly show whether an agency's costs are changing at a rate faster or slower than inflation. Adjusting for labor cost differences makes it easier to draw conclusions that differences in costs between agencies are due to internal agency efficiency differences rather than external cost differences. Some of the case studies in Chapter 5 provide examples of performing inflation and cost-of-living adjustments; the general process is described below.

The consumer price index (CPI) can be used to adjust costs for inflation. CPIs for the country as a whole, regions of the country, and 26 metropolitan areas are available from the Bureau of Labor Statistics (BLS) website (http://www.bls.gov/cpi/data.htm). FTIS also provides the national CPI. To adjust costs for inflation, multiply the cost by (base-year CPI) / (analysis-year CPI).
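As an illustration, here is a minimal Python sketch of both normalization steps, using the CPI and wage figures quoted in the surrounding paragraphs (the labor-cost adjustment is described just below); the function names are illustrative only:

```python
def adjust_for_inflation(cost, base_year_cpi, analysis_year_cpi):
    """Restate a cost in base-year dollars by multiplying by
    (base-year CPI) / (analysis-year CPI)."""
    return cost * base_year_cpi / analysis_year_cpi

def adjust_for_labor_cost(peer_cost, target_area_wage, peer_area_wage):
    """Scale a peer agency's cost to the target agency's regional wage
    level, using the BLS 'all occupations' average hourly rate."""
    return peer_cost * target_area_wage / peer_area_wage

# Worked examples from the text:
# 2002 costs restated at 2006 price levels (national CPI 179.9 -> 201.6)
factor_2002_to_2006 = adjust_for_inflation(1.0, 201.6, 179.9)       # ~1.121
# TriMet (Portland, $21.66/hr) costs restated at Denver wages ($22.67/hr)
factor_trimet_to_denver = adjust_for_labor_cost(1.0, 22.67, 21.66)  # ~1.047
```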

For example, the national CPI was 179.9 for 2002 and 201.6 for 2006. To adjust 2002 prices to 2006 levels for use in a trend analysis, 2002 costs would be multiplied by (201.6/179.9), or 1.121.

Average labor wage rates can be used to adjust costs for differences in labor costs between regions since labor costs are typically the largest component of operating costs. These data are available from the Bureau of Labor Statistics (http://www.bls.gov/oes/oes_dl.htm) for all metropolitan areas. The "all occupations" average hourly rate for a metropolitan area is recommended for this adjustment because the intent here is to adjust for the general labor environment in each region, over which an agency has no control, rather than for a transit agency's actual labor rates, over which an agency has some control. Identifying differences in a transit agency's labor costs, after adjusting for regional variations, can be an important outcome of a peer-comparison evaluation. Although it is possible to drill down into the BLS wage database to get more-specific data—for example, average wages for "bus drivers, transit and intercity"—the ability to compare agency-controllable costs would be lost because the more-detailed category would be dominated by the transit agency's own workforce. The "all occupations" rates, on the other hand, allow an agency to (a) investigate whether it is spending more or less for its labor relative to its region's average wages, and (b) adjust its costs to reflect differences in a region's overall cost of living (which impacts overall average wages within the region).

To adjust peer agency costs for differences in labor costs, multiply the cost by (target agency metropolitan area labor cost) / (peer agency metropolitan area labor cost). For example, Denver's average hourly wage rate in 2008 was $22.67, while Portland's was $21.66. If Denver RTD is performing the analysis and wants to adjust TriMet costs to reflect the higher wages in the Denver region, it would multiply TriMet costs by (22.67/21.66), or 1.047.

Step 4b: Analyze Performance Data

Data Checking

Before diving into a full analysis of the data, it is useful to create graphs for each measure to check for potential data problems, such as unusually high or low values for a given agency's performance measure for a given year, and for values that bounce up and down with no apparent trend. The following figures give examples of these kinds of checks.

Figure 3 illustrates outlier data points. Peer 4 has an obvious outlier for the year 2003. As it is much higher than the agency's other values (including prior years, if one went back into the database) and is much higher than any other agency's values, that data point could be discarded. The rest of Peer 4's data show consistent trends; however, since this agency had an outlier and would be the best-in-group performer for this measure, it would be worth a phone call to the agency to confirm the validity of the other years' values. Peer 5 also has an outlier for the year 2004. The value is not out of line with other agencies' values, but is inconsistent with Peer 5's overall trend. In this case, a phone call would find out whether the agency tried (and then later abandoned) something in 2004 that would have improved performance, or whether the data point is simply incorrect.
In Figure 4, Peer 2’s values for the percent of breaks and allowances as part of total operating time are nearly zero and 38 Demand Response 0 10 20 30 40 50 60 Peer 1 Peer 2 Peer 3 Peer 4 Peer 5 Tampa Fa re bo x Re co ve ry (% ) 2003 2004 2005 2006 2007 Figure 3. Outlying data points example.

far below those of the other agencies in the peer group. It might be easy to conclude that this is an error, as vehicle oper- ators must take breaks, but this would be incorrect in this case. According to the NTD data definitions, breaks that are taken as part of operator layovers are counted as platform time, whereas paid breaks and meal allowances are consid- ered straight time and are accounted for differently. There- fore, Peer 2’s values could actually be correct (and are, as confirmed by a phone call). Peer 7 is substantially higher than the others and may be treating all layover time as break time. The conclusion to be drawn from this data check is that the measure being used will not provide the desired information (a comparison of schedule efficiency). Direct agency contacts would need to be made instead. Figure 5 shows a graph of spare ratio (the number of spare transit vehicles as a percentage of transit vehicles used in max- imum service). As Figure 5(a) shows, spare ratio values can change significantly from one year to the next as new bus fleets are brought into service and old bus fleets are retired. It can be difficult to discern trends in the data. Figure 5(b) shows the same variable, but calculated as a three-year rolling average (i.e., year 2007 values represent an average of the actual 2005–2007 values). It is easier to discern from this ver- sion of the graph that Denver’s average spare ratio (along with Peer 1, Peer 3, and Peer 4) has held relatively constant over the longer term, while Peer 2’s average spare ratio has decreased over time and the other two peers’ spare ratios have increased over time. In this case, there is no apparent problem with the 39 Agency-wide 0% 2% 4% 6% 8% 10% 12% 14% 16% 18% UTA Br ea ks & A llo w an ce s vs . T ot al O pe ra tin g Ti m e 2002 2003 2004 2005 2006 Peer 1 Peer 2 Peer 3 Peer 4 Peer 5 Peer 6 Peer 7 Figure 4. Outlying peer agency example. Motorbus 0 10 20 30 40 50 60 0 10 20 30 40 50 60 Denver Peer 1 Peer 2 Peer 3 Peer 4 Peer 5 Peer 6 Peer 7Denver Peer 1 Peer 2 Peer 3 Peer 4 Peer 5 Peer 6 Peer 7 Sp ar e Ra tio (% ) 2003 2004 2005 2006 2007 2003 2004 2005 2006 2007 (a) Spare Ratio Annual Values (b) Spare Ratio as a Three-Year Rolling Average Motorbus Sp ar e Ra tio (3 -Y ea r R oll ing A ve rag e) (% ) Figure 5. Data volatility example.
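The three-year rolling average in Figure 5(b) is straightforward to compute; here is a minimal pandas sketch with made-up spare-ratio values (any resemblance to the figure's data is coincidental).

```python
import pandas as pd

# Hypothetical annual spare ratios (%) for one agency, 2001-2007.
spare = pd.Series([18.0, 25.0, 14.0, 22.0, 30.0, 17.0, 21.0],
                  index=range(2001, 2008), name="spare_ratio")

# Three-year rolling average: the 2007 value is the mean of the actual
# 2005-2007 values, mirroring the definition used in Figure 5(b).
print(spare.rolling(window=3).mean().round(1))
```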

Data Interpretation

For each measure selected for the evaluation, the target agency's performance is compared to the performance of the peer agencies. Ideally, this evaluation would look both at the target agency's current position relative to its peers (e.g., best-in-class, superior, average, inferior) and at the agency's trend. Even if a transit agency's performance is better than most of its peers, a trend of declining performance might still be a cause for concern, particularly if the peer trend was one of improving performance. Trend analysis also helps identify whether particularly good (or bad) performance was sustained or was a one-time event, and can also be used for forecasting (e.g., agency performance is below the agency's target at present but, if current trends continue, is forecast to reach the agency's target in 2 years).

Graphing the performance-measure values is a good first step in analyzing and interpreting the data. Any spreadsheet program can be used, and FTIS also provides basic graphing functions. It may be helpful to start by looking at patterns in the data. In Figure 6, for example, it can be seen that the general trend in the data for all peers, except Peer 7, has been an increase in operating costs per boarding over the 5-year period, with Peers 3–6 experiencing steady and significant increases each year. Denver's cost per boarding, in comparison, has consistently been the second-best in its peer group during this time, and Denver's cost per boarding has increased by about half as much as the top-performing peer's. Most of Denver's peers also experienced a sharp increase in costs during at least one of the years included in the analysis, while Denver's year-to-year change has been relatively small and, therefore, more predictable. This analysis would indicate that Denver has done a good job of controlling cost per boarding, relative to its peers.

Sometimes a measure included in the analysis may turn out to be misleading. For example, farebox recovery (the portion of operating costs covered by fare revenue) is a commonly used performance measure in the transit industry and is readily available through FTIS. When this measure is applied to Knoxville, however, Knoxville's farebox recovery ratio is by far the lowest of its peers, as indicated in Figure 7(a). Given that Knoxville's performance is among the best in its peer group on a number of other measures, an analyst should ask why this result occurred. Clues to the answer can be obtained through a closer inspection of the NTD data. NTD form F-10, available within FTIS, provides information about each agency's revenue, broken down by a number of sources. For 2007, this form shows that Knoxville earned nearly as much revenue from "other transportation revenue" as it did from bus fares. A visit to the agency's website, where budget information is available, confirms that the agency receives revenue from the University of Tennessee for operating free shuttle service to the campus and sports venues. Therefore, farebox recovery is not telling the entire story about how much of Knoxville's service is self-supporting. As an alternative, all directly generated non-tax revenue used for operations can be compared to operating costs (a measure known as the operating ratio).
This requires more work, as non-fare revenue should be allocated among the various modes operated (it is only reported on a system-wide basis), but all of the required data to make this allocation are available through FTIS, and the necessary calculations can be readily performed within a spreadsheet (a code sketch of one such allocation follows the figure discussion below).

[Figure 6. Pattern investigation example: motorbus cost per boarding for Denver and seven peers, 2002–2006, with the 2006 peer median indicated.]

Figure 7(b) shows the results of these calculations, where it can be seen that Knoxville used to be at the top of its peer group in terms of operating ratio but is now in the middle of the group, as the university payments apparently dropped substantially in 2006. A comparison of the two graphs also shows that Knoxville is the only agency among its peers (all of whom have dedicated local funding sources) to get much directly generated revenue at present from anything except fares.

[Figure 7. Data interpretation example #1: motorbus (a) farebox recovery (%) and (b) directly generated funds recovery (%) for Knoxville and eight peers, 2003–2007, with the 2007 peer median indicated.]

A final example of data interpretation is shown in Figure 8, comparing agencies' annual casualty and liability costs, normalized by annual vehicle miles operated. This graph tells several stories. First, it can be clearly seen that a single serious accident can have a significant impact on a transit agency's casualty and liability costs in a given year, because many agencies are self-insured. Second, it shows how often the peer group experiences serious accidents. Third, it indicates trends in casualty and liability costs over the 5-year period. Eugene, Peer 3, and Peer 6 were the best performers in this group over the study period, while Peer 7's costs were consistently higher than the group as a whole.

[Figure 8. Data interpretation example #2: agency-wide casualty and liability cost per vehicle mile (cents) for Eugene and eight peers, 2003–2007, with the 2007 peer median indicated.]
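Returning to the operating-ratio calculation above, the sketch below allocates system-wide non-fare directly generated revenue to modes in proportion to each mode's share of operating cost. The dollar figures and the proportional-allocation basis are assumptions for illustration, not an NTD rule; the FTIS data may support other allocation bases.

```python
# Hypothetical operating-ratio calculation in the spirit of Figure 7(b).
operating_cost = {"motorbus": 12.0e6, "demand_response": 3.0e6}  # by mode
fare_revenue = {"motorbus": 2.4e6, "demand_response": 0.2e6}     # by mode
other_directly_generated = 1.5e6  # reported system-wide only (e.g., contract revenue)

total_cost = sum(operating_cost.values())
for mode, cost in operating_cost.items():
    # Allocate system-wide non-fare revenue by the mode's share of operating cost.
    allocated = other_directly_generated * cost / total_cost
    operating_ratio = (fare_revenue[mode] + allocated) / cost
    print(f"{mode}: operating ratio = {operating_ratio:.1%}")
```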

Results Presentation

The results of the data analysis will need to be documented for presentation to the stakeholders in the process. The exact form will depend on the audience, but can include any or all of the following:

• An executive summary highlighting the key findings,
• A summary table presenting a side-by-side comparison of the numeric results for all the measures for all the peers,
• Graphs, potentially including trend indicators (such as arrows) or lines indicating the group average,
• A combination of graph and table, with the table providing the numeric results to accompany the graph,
• A combination of graph and text, with the text interpreting the data shown in the graph,
• Multiple graphs, with one or more secondary graphs showing descriptive data that support the interpretation of the main graph, and
• Graphics that support the interpretation of text, tables, and/or graphs.

Peer group averages can be calculated as either means or medians. Means are more susceptible to being influenced by a transit agency with particularly good or particularly poor performance, while medians provide a good indicator of where the middle of the group lies.

The case studies in Chapter 5 and the material in Appendix C give a variety of examples of how performance information can be presented. TCRP Report 88 (1) contains a section providing guidance on presenting performance results, and publications available on the European benchmarking network websites (14–16, 47) can also be used as examples.

Step 5: Contact Best-Practices Peers

At this point in the process, a transit agency knows where its performance stands with respect to its peers, but not the reasons why. Contacting top-performing peers addresses the "why" aspect and can lead to identifying other transit agencies' practices that can be adopted to improve one's own performance.

In most cases a transit agency will find one or more areas where it is not the best performer among its peers. An agency with superior performance relative to most of its peers, and possessing a culture of continuous improvement, would continue the process to identify what it can learn from its top-performing peers to improve its already good performance. When an agency identifies areas of weakness relative to its peers, it is recommended that it continue the benchmarking process to see what it can learn from its best-performing peers.

For Level 1 and 2 benchmarking efforts, it is possible to skip this step and proceed directly to Step 6, developing an implementation strategy. However, doing so carries a higher risk of failure, since agencies may unwittingly choose a strategy already tried unsuccessfully elsewhere or may choose a strategy that results in a smaller performance improvement than might have been achieved with alternative strategies. Step 5 is the defining characteristic of a Level 3 benchmarking effort, while the working groups used as part of a Level 4 benchmarking effort would automatically build this step into the process. Step 5 would also normally be built into the process when a benchmarking effort is being conducted with an eye toward changing how the agency conducts business.

The kind of information that is desired at this step is beyond what can be found from databases and online sources.
Instead, executive interviews are conducted to determine how the best-practices agencies have achieved their performance, to identify lessons learned and factors that could inhibit implementation or improvement, and to develop suggestions for the target agency. There are several formats for conducting these interviews, which can be tailored for the specific needs of the performance review.

• Blue-ribbon panels of expert staff and/or top management from peer agencies are appropriate to bring in for one-time-only or limited-term reviews, such as a special management focus on security or a large capital project review.
• Site visits can be useful for hands-on understanding of how peer agencies operate. The staff involved could range from line staff to top management, depending on the specific issues being addressed.
• Working groups can be established for topic-specific discussions on performance, such as a working group on preventative maintenance practices. Line staff and mid-level management in the topic area would be most likely to be involved.

The private sector has also used staff exchanges as a way of obtaining a deeper understanding of another organization's business practices by having one or two select staff become immersed in the peer organization's activities for an extended period of time.

Involving staff from multiple levels and functions within the transit agency helps increase the chances of identifying good practices or ideas, helps increase the potential for staff buy-in into any recommendations for change that are made as a result of the contacts, helps percolate the concept of continuous improvement throughout the transit agency, and helps provide opportunities for staff leadership and professional growth.

Step 6: Develop an Implementation Strategy

In Step 6, the transit agency develops a strategy for making changes to the current agency environment, with the goal of improving its performance. Ideally, the strategy development

process will be informed by a study of best practices, which would have been performed in Step 5. The strategy should include performance goals (i.e., quantify the desired outcome), provide a timeline for implementation, and identify any required funding. The strategy also needs to identify the internal (e.g., business practices or agency policies) or external (e.g., regional policies or new revenue sources) changes that would be needed to successfully implement the strategy. Top-level management and transit agency board support is vital to getting the process underway. However, support for the strategy will need to be developed at all levels of the organization: lower-level managers and staff also need to buy into the need for change and understand the potential benefits of change. Therefore, the implementation strategy should also include details on how information will be disseminated to agency staff and external stakeholders and should include plans for developing internal and external stakeholder support for implementing the strategy.

Step 7: Implement the Strategy

TCRP Report 88 (1) identified that once a performance evaluation is complete and a strategy is identified, the process can often halt due to lack of funding or stakeholder support. If actual changes designed to improve performance are not implemented at the end of the process, the peer review risks becoming a paper exercise, and the lack of action can reduce stakeholder confidence in the effectiveness of future performance evaluations. If problems arise during implementation, the agency should be prepared to address them quickly so that the strategy can stay on course.

Step 8: Monitor Performance

As noted in Step 6, the implementation strategy should include a timeline for results. A timeline for monitoring should also be established to make sure that progress is being made toward the established goals. Depending on the goal and the overall strategy timeline, the reporting frequency could range from monthly to annually. If the monitoring effort indicates a lack of progress, the implementation strategy should be revisited and revised if necessary. Hopefully, however, the monitoring will show that performance is improving.

In the longer term, the transit agency should continue its peer-comparison efforts on a regular basis. The process should be simpler the second time around because many or all of the agency's peers will still be appropriate for a new effort, points of contact will have been established with the peers, and the agency's staff will now be familiar with the process and will have seen the improvements that resulted from the first effort. The agency's peers hopefully will also have been working to improve their own performance, so there may be something new to learn from them, either by investigating a new performance topic or by revisiting an old one after a few years. A successful initial peer-comparison effort may also serve as a catalyst for forming more-formal performance-comparison arrangements among transit agencies, perhaps leading to the development of a benchmarking network.

TRB’s Transit Cooperative Research Program (TCRP) Report 141: A Methodology for Performance Measurement and Peer Comparison in the Public Transportation Industry explores the use of performance measurement and benchmarking as tools to help identify the strengths and weaknesses of a transit organization, set goals or performance targets, and identify best practices to improve performance.


Assessing and predicting the quality of research master’s theses: an application of scientometrics

  • Published: 03 June 2020
  • Volume 124, pages 953–972 (2020)


  • Zheng Xie (ORCID: orcid.org/0000-0003-0391-8725) 1,
  • Yanwu Li 2 &
  • Zhemin Li 1


The educational quality of a research master's degree can be partly reflected by the examiner score of the thesis. This study focuses on finding positive predictors of this score, with the aim of developing assessment and prediction methods for the educational quality of postgraduates. The study is based on regression analysis of characteristics extracted from the publications and references of 1038 research master's theses written at three universities in China. The analysis indicates that, for a thesis, the number and the integrated impact factor of its references in Science Citation Index Expanded (SCIE) journals are significantly positive predictors of having publications in such journals. Additionally, the number and the integrated impact factor of a thesis's representative publications (defined as the publications authored by the master's student as first author, or as second author with tutors in the lead position) in SCIE journals are significantly positive predictors of its examiner score. Based on these predictors, a range of indicators is provided to assess thesis quality, to measure the contributions of disciplines to postgraduate education, to predict postgraduates' research outcomes, and to provide benchmarks regarding the quality and quantity of their reading work.



The impact factor (IF) of a journal in a given year is the average number of citations received that year by the items it published in the two preceding years (Garfield 1994, 2006). See https://clarivate.com/webofsciencegroup/essays/impact-factor/.

Science Citation Index Expanded indexes over 9200 major journals across 178 scientific disciplines. In this study, these journals are called SCIE journals for short. See https://clarivate.com/webofsciencegroup/solutions/webofscience-scie/ .

See http://www.moj.gov.cn/Department/content/2004-09/03/592_201359.html

See http://old.moe.gov.cn/publicfiles/business/htmlfiles/moe/s6183/201112/128828.html .

See http://cdgdc.edu.cn/xwyyjsjyxx/sy/glmd/264462.shtml .

See http://cdgdc.edu.cn/xwyyjsjyxx/zlpj/ .

Abt, H. A. (2000). Do important papers produce high citation counts? Scientometrics , 48 (1), 65–70.


Aittola, H. (2008). Doctoral education and doctoral theses-changing assessment practices. In J. Välimaa & O. H. Ylijoki (Eds.), Cultural Perspectives on Higher Education (pp. 161–177). Dordrecht: Springer.

Anderson, C., Day, K., & McLaughlin, P. (2006). Mastering the dissertation: lecturers’ representations of the purposes and processes of master’s level dissertation supervision. Studies in Higher Education , 31 (2), 149–168.

Bornmann, L., & Mutz, R. (2011). Further steps towards an ideal method of measuring citation performance: The avoidance of citation (ratio) averages in field-normalization. Journal of Informetrics , 1 (5), 228–230.

Bourke, S. (2007). Ph.D. thesis quality: the views of examiners. South African Journal of Higher Education , 21 (8), 1042–1053.

Bourke, S., & Holbrook, A. P. (2013). Examining PhD and research masters theses. Assessment & Evaluation in Higher Education , 38 (4), 407–416.

Bouyssou, D., & Marchant, T. (2011). Bibliometric rankings of journals based on impact factors: An axiomatic approach. Journal of Informetrics , 5 (1), 75–86.

Bouyssou, D., & Marchant, T. (2011). Ranking scientists and departments in a consistent manner. Journal of the American Society for Information Science and Technology , 62 (9), 1761–1769.

Braun, T., & Glänzel, W. (1990). United Germany: The new scientific superpower? Scientometrics , 19 , 513–521.

De Bruin, R. E., Kint, A., Luwel, M., & Moed, H. F. (1993). A study of research evaluation and planning: The university of Ghent. Research Evaluation , 3 (1), 25–41.

Böhning, D. (1992). Multinomial logistic regression algorithm. Annals of the Institute of Statistical Mathematics , 44 (1), 197–200.


Eng, J. (2003). Sample size estimation: How many individuals should be studied? Radiology , 227 (2), 309–313.

Fernández-Cano, A., & Bueno, A. (1999). Synthesizing scientometric patterns in Spanish educational research. Scientometrics , 46 (2), 349–367.

Freedman, D. A. (2009). Statistical models: Theory and practice . Cambridge: Cambridge University Press.


Garfield, E. (1970). Citation indexing for studying science. Nature , 227 (5259), 669–671.

Garfield, E. (1994). The impact factor. Current Contents , 25 (20), 3–7.

Garfield, E. (2006). The history and meaning of the journal impact factor. JAMA , 295 (1), 90–93.

Hagen, N. (2010). Deconstructing doctoral dissertations: How many papers does it take to make a PhD? Scientometrics , 85 (2), 567–579.

Hansford, B. C., & Maxwell, T. W. (1993). A masters degree program: Structural components and examiners’ comments. Higher Education Research and Development , 12 (2), 171–187.

Hemlin, S. (1993). Scientific quality in the eyes of the scientist: a questionnaire study. Scientometrics , 27 (1), 3–18.

Holbrook, A., Bourke, S., Fairbairn, H., & Lovat, T. (2014). The focus and substance of formative comment provided by PhD examiners. Studies in Higher Education , 39 (6), 983–1000.

Holbrook, A., Bourke, S., Lovat, T., & Dally, K. (2004). Investigating PhD thesis examination reports. International Journal of Educational Research , 41 , 98–120.

Holbrook, A., Bourke, S., Lovat, T., & Fairbairn, H. (2008). Consistency and inconsistency in PhD thesis examination. Australian Journal of Education , 52 (1), 36–48.

Kamler, B. (2008). Rethinking doctoral publication practices: Writing from and beyond the thesis. Studies in Higher Education , 33 (3), 283–294.

Kyvik, S., & Thune, T. (2015). Assessing the quality of PhD dissertations: a survey of external committee members. Assessment & Evaluation in Higher Education , 40 (5), 768–782.

Larivière, V. (2012). On the shoulders of students? The contribution of PhD students to the advancement of knowledge. Scientometrics, 90(2), 463–481.

Leydesdorff, L., & Bornmann, L. (2011). Integrated impact indicators compared with impact factors: An alternative research design with policy implications. Journal of the American Society for Information Science and Technology , 62 (11), 2133–2146.

Lisee, C., Lariviere, V., & Archambault, E. (2008). Conference proceedings as a source of scientific information: A bibliometric analysis. Journal of the American Society for Information Science and Technology , 59 (11), 1776–1784.

MacRoberts, M. H., & MacRoberts, B. R. (1989). Problems of citation analysis: A critical review. Journal of the American Society for Information Science , 40 (5), 342–349.

MacRoberts, M. H., & MacRoberts, B. R. (2018). The mismeasure of science: Citation analysis. Journal of the American Society for Information Science , 69 (3), 474–482.

Mason, S., Merga, M. K., & Morris, J. E. (2019). Choosing the thesis by publication approach: Motivations and influencers for doctoral candidates. The Australian Educational Researcher ,. https://doi.org/10.1007/s13384-019-00367-7 .


Mason, S., Merga, M. K., & Morris, J. E. (2020). Typical scope of time commitment and research outputs of thesis by publication in Australia. Higher Education Research & Development , 39 (2), 244–258.

Moed, H. F., De Bruin, R. E., & Van Leeuwen, T. N. (1995). New bibliometric tools for the assessment of national research performance: Database description, overview of indicators and first applications. Scientometrics , 33 (3), 381–422.

Mullins, G., & Kiley, M. (2002). It’s a PhD, not a Nobel Prize: How experienced examiners assess research theses. Studies in Higher Education , 27 (4), 369–386.

Nelder, J. A., & Wedderburn, R. W. (1972). Generalized linear models. Journal of the Royal Statistical Society: Series A (General) , 135 (3), 370–384.

Pilcher, N. (2011). The UK postgraduate masters dissertation: An elusive chameleon? Teaching in Higher Education , 16 (1), 29–40.

Prieto, E., Holbrook, A., & Bourke, S. (2016). An analysis of PhD examiners’ reports in engineering. European Journal of Engineering Education , 41 (2), 192–203.

Stracke, E., & Kumar, V. (2010). Feedback and self-regulated learning: insights from supervisors’ and PhD examiners’ reports. Reflective Practice , 11 (1), 19–32.

Tinkler, P., & Jackson, C. (2000). Examining the doctorate: institutional policy and the PhD examination process in Britain. Studies in Higher Education , 25 , 167–180.

Tinkler, P., & Jackson, C. (2004). The doctoral examination process: A handbook for students, examiners and supervisors . Maidenhead: Open University Press.

Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan, A. F. (2011). Towards a new crown indicator: An empirical analysis. Scientometrics , 87 , 467–481.

Winter, R., Griffiths, M., & Green, K. (2000). The academic qualities of practice: What are the criteria for a practice-based PhD? Studies in Higher Education , 25 (1), 25–37.

Xie, Z. (2020). Predicting the number of coauthors for researchers: A learning model. Journal of Informetrics , 14 (2), 101036.

Xie, Z., & Xie, Z. (2019). Modelling the dropout patterns of MOOC learners. Tsinghua Science and Technology , 25 (3), 313–324.

Zong, Q. J., Shen, H. Z., Yuan, Q. J., Hu, X. W., Hou, Z. P., & Deng, S. G. (2013). Doctoral dissertations of Library and Information Science in China: A co-word analysis. Scientometrics , 94 (2), 781–799.


Acknowledgements

The authors are grateful to Professor Shannon Mason in the Nagasaki University and anonymous reviewers for their helpful comments and feedback. LYW is supported by National Education Science Foundation of China (Grant No. DIA180383). XZ is supported by National Natural Science Foundation of China (Grant No. 61773020).

Author information

Authors and Affiliations

College of Liberal Arts and Sciences, National University of Defense Technology, Changsha, Hunan, China

Zheng Xie & Zhemin Li

Graduate School, National University of Defense Technology, Changsha, Hunan, China

Yanwu Li

Contributions

LYW motivated this study and provided empirical data. LZM preprocessed the data. XZ designed the methods to analyze the data, and wrote the manuscript. All authors discussed the research and approved the final version of the manuscript.

Corresponding author

Correspondence to Zheng Xie .

Ethics declarations

Conflicts of interest

The authors declare that they have no conflicts of interest.

Appendix A: Minimum sample size

Assume the size of the group from which a sample is taken to be infinite. Let the confidence level be \(1-\alpha\). Denote the corresponding z-score of \(\alpha\) by \(z_{\alpha/2}\), the expected proportion by \(p\), the population standard deviation by \(\sigma\), and the margin of error by \(E\). If the expected proportion and population standard deviation are not known, the sample proportion and sample standard deviation can be used (Eng 2003).

The formula for the minimum sample size required for estimating the population proportion is

\( n = \dfrac{z_{\alpha/2}^{2}\, p(1-p)}{E^{2}}. \)

Let \(\alpha = 5\%\), \(p =\) the sample proportion, and \(E = 0.15\). For the regression analysis on having representative publications, \(n = 42, 33, 29, 38\) for Biological, Engineering, Information, and Physical sciences, respectively.

The corresponding formula for estimating the population mean is

\( n = \left( \dfrac{z_{\alpha/2}\,\sigma}{E} \right)^{2}. \)

Let \(\alpha = 5\%\), \(\sigma =\) the sample standard deviation, and \(E = 1.5\%\). For the regression analysis on examiner score, \(n = 52, 53, 51, 43\) for Biological, Engineering, Information, and Physical sciences, respectively.
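As an illustrative check (the sample statistics here are hypothetical, chosen only to show the arithmetic): with \(\alpha = 5\%\), \(z_{\alpha/2} = 1.96\); a sample proportion of \(p = 0.58\) with \(E = 0.15\) gives \(n = 1.96^{2} \times 0.58 \times 0.42 / 0.15^{2} \approx 41.6\), which rounds up to 42, and a sample standard deviation of \(\sigma = 5.5\) with \(E = 1.5\) gives \(n = (1.96 \times 5.5 / 1.5)^{2} \approx 51.7\), which rounds up to 52.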

Appendix B: More results of regression

Figure 10 shows linear regression results between the examiner score and the indexes derived from the references of theses. The number and the integrated impact factor of SCIE references are significantly positive predictors of the examiner score in information sciences, and the number is a significantly positive predictor in engineering. There is no significant relationship in the other cases.

Figure 10. The relationship between the examiner score and the indexes derived from references. The panels show the mean examiner score of theses with the same index value (red squares), the predicted score (solid dot lines), and confidence intervals (dashed lines). The p value is that of the \(\chi^2\)-test. (Color figure online)

Figure 11 shows that, for each disciplinary group, the number of representative publications of a thesis follows a Gamma distribution. Therefore, Gamma regression can be utilized to analyse the relationship between the number of representative publications and the indexes derived from references. Gamma regression is a generalized linear model that assumes that the response variable follows a Gamma distribution. The negative reciprocal of the expected value of the Gamma distribution is fitted by a linear combination of predictors (Nelder and Wedderburn 1972).

Figure 12 shows the Gamma regression results. Except for biological sciences, the number of SCIE references is a significantly positive predictor of the number of representative publications, and there is no significant relationship between the number of non-SCIE references and the number of representative publications. These results may be statistically meaningless due to the small sample size of theses having a given number of representative publications.
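As an illustration of the kind of Gamma regression described above, here is a minimal sketch using statsmodels on synthetic data. The data and coefficients are invented, and statsmodels' default Gamma link is the reciprocal (inverse power), the unsigned counterpart of the negative-reciprocal canonical form mentioned in the text.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in data: x = number of SCIE references per thesis,
# y = number of representative publications (positive, Gamma-distributed).
n = 200
x = rng.integers(5, 60, size=n).astype(float)
mu = 0.5 + 0.05 * x                        # mean response grows with x
y = rng.gamma(shape=2.0, scale=mu / 2.0)   # Gamma noise with mean mu

# Gamma generalized linear model of y on x.
fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Gamma()).fit()
print(fit.summary())
```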

Figure 11. The distribution of the number of representative publications. The panels show the empirical distributions (red circles) and Gamma distributions (blue squares). The KS test cannot reject the hypothesis that the number of representative publications follows a Gamma distribution (p value \(>5\%\)). (Color figure online)

Figure 12. The relationship between the number of representative publications and that of SCIE/non-SCIE references. The panels show the average number of representative publications of theses with the same index value (red squares), the predicted value (solid dot lines), and confidence intervals (dashed lines). The p value is that of the \(\chi^2\)-test. (Color figure online)


About this article

Xie, Z., Li, Y. & Li, Z. Assessing and predicting the quality of research master's theses: an application of scientometrics. Scientometrics 124, 953–972 (2020). https://doi.org/10.1007/s11192-020-03489-3


Received: 15 June 2019

Published: 03 June 2020

Issue Date: August 2020

DOI: https://doi.org/10.1007/s11192-020-03489-3


  • Data science applications in education
  • Higher education
  • Assessment methodologies

Efficient Decentralized Learning Methods for Deep Neural Networks

Decentralized learning is the key to training deep neural networks (DNNs) over large distributed datasets generated at different devices and locations, without the need for a central server. It enables next-generation applications that require DNNs to interact with and learn from their environment continuously. The practical implementation of decentralized algorithms brings its own set of challenges. In particular, these algorithms should be (a) compatible with time-varying graph structures, (b) compute- and communication-efficient, and (c) resilient to heterogeneous data distributions. The objective of this thesis is to enable efficient decentralized learning of deep neural networks while addressing the abovementioned challenges. Towards this, first, a communication-efficient decentralized algorithm (Sparse-Push) that supports directed and time-varying graphs with error-compensated communication compression is proposed. Second, a low-precision decentralized training method that aims to reduce memory requirements and computational complexity is proposed. Here, we design "Range-EvoNorm" as the normalization activation layer, which is better suited for low-precision decentralized training. Finally, addressing the problem of data heterogeneity, three impactful advancements, namely Neighborhood Gradient Mean (NGM), Global Update Tracking (GUT), and Cross-feature Contrastive Loss (CCL), are proposed. NGM utilizes extra communication rounds to obtain cross-agent gradient information, whereas GUT tracks global update information with no communication overhead, improving the performance on heterogeneous data. CCL explores an orthogonal direction of using a data-free knowledge distillation approach to handle heterogeneous data in decentralized setups. All the algorithms are evaluated on computer vision tasks using standard image-classification datasets. We conclude this dissertation by presenting a summary of the proposed decentralized methods and their trade-offs for heterogeneous data distributions. Overall, the methods proposed in this thesis address the critical limitations of training deep neural networks in a decentralized setup and advance the state of the art in this domain.
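The algorithms named above (Sparse-Push, NGM, GUT, CCL) are the thesis's own contributions and are not reproduced here. As a generic, minimal sketch of the decentralized setup they build on, the toy loop below performs gossip-style decentralized gradient descent: each agent takes a local gradient step on its own (heterogeneous) objective, then averages parameters with its ring neighbors via a doubly stochastic mixing matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, lr = 4, 3, 0.1

# Ring-topology mixing matrix (doubly stochastic): 1/2 self-weight,
# 1/4 to each of the two ring neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

targets = rng.normal(size=(n_agents, dim))   # heterogeneous local objectives
params = np.zeros((n_agents, dim))

for _ in range(300):
    grads = params - targets                 # gradient of 0.5*||params - target||^2
    params = W @ (params - lr * grads)       # local step, then neighbor averaging

# Agents end up in near-consensus around the average of the local optima.
print(params.round(3))
print(targets.mean(axis=0).round(3))
```

Real decentralized training replaces the quadratic objectives with DNN losses and, as the thesis discusses, must additionally handle directed and time-varying graphs, compressed communication, and low-precision arithmetic.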

Degree Type

  • Doctor of Philosophy
  • Electrical and Computer Engineering

Campus location

  • West Lafayette


  • Deep learning
  • Neural networks

CC BY 4.0


Published: 15 March 2024

Benchmarking spatial clustering methods with spatially resolved transcriptomics data

  • Zhiyuan Yuan (ORCID: orcid.org/0000-0002-9367-4236) 1,2,
  • Fangyuan Zhao 3,4,
  • Senlin Lin (ORCID: orcid.org/0009-0001-6593-4088) 3,4,
  • Yu Zhao (ORCID: orcid.org/0000-0001-8179-4903) 5,
  • Jianhua Yao (ORCID: orcid.org/0000-0001-9157-9596) 5,
  • Yan Cui 1,2,6,
  • Xiao-Yong Zhang (ORCID: orcid.org/0000-0001-8965-1077) 1 &
  • Yi Zhao (ORCID: orcid.org/0000-0001-6046-8420) 3,4

Nature Methods (2024)


  • Computational models
  • Computational platforms and environments
  • RNA sequencing

Spatial clustering, which shares an analogy with single-cell clustering, has expanded the scope of tissue physiology studies from cell-centroid to structure-centroid with spatially resolved transcriptomics (SRT) data. Computational methods have undergone remarkable development in recent years, but a comprehensive benchmark study is still lacking. Here we present a benchmark study of 13 computational methods on 34 SRT samples (7 datasets). The performance was evaluated on the basis of accuracy, spatial continuity, marker gene detection, scalability, and robustness. We found existing methods were complementary in terms of their performance and functionality, and we provide guidance for selecting appropriate methods for given scenarios. On testing an additional 22 challenging datasets, we identified challenges in identifying noncontinuous spatial domains and limitations of existing methods, highlighting their inadequacies in handling recent large-scale tasks. Furthermore, with 145 simulated datasets, we examined the robustness of these methods against four different factors, and assessed the impact of pre- and postprocessing approaches. Our study offers a comprehensive evaluation of existing spatial clustering methods with SRT data, paving the way for future advancements in this rapidly evolving field.
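Several of the accuracy metrics used in benchmark studies like this one (the paper's references include NMI and scikit-learn's homogeneity and completeness scores) can be computed in a few lines. Here is a minimal sketch on toy label vectors; the labels are made up for illustration.

```python
from sklearn.metrics import (adjusted_rand_score, completeness_score,
                             homogeneity_score, normalized_mutual_info_score)

# Toy ground-truth spatial-domain annotations vs. one method's clusters.
truth = [0, 0, 0, 1, 1, 1, 2, 2, 2]
pred  = [0, 0, 1, 1, 1, 1, 2, 2, 0]

print("ARI :", round(adjusted_rand_score(truth, pred), 3))
print("NMI :", round(normalized_mutual_info_score(truth, pred), 3))
print("Hom :", round(homogeneity_score(truth, pred), 3))
print("Comp:", round(completeness_score(truth, pred), 3))
```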



Data availability

Data1 to Data12 were downloaded from ref. 64 . Data13 to Data21 are available from ref. 65 . Data22 to Data24 were downloaded from ref. 66 . Data25 to Data29 were downloaded from ref. 67 . Data30 was downloaded from ref. 68 . Data31 to Data33 are available from ref. 69 . Data34 was downloaded from ref. 69 . Data35 to Data41 were downloaded from ref. 70 . Data42 to Data54 were downloaded from https://www.livercellatlas.org/ . Data55 to Data56 are available at GSE111672 . Data57 to Data87 were downloaded from ref. 71 . Source data are provided with this paper.

Code availability

The code and scripts used for data preprocessing and visualization are available at https://github.com/zhaofangyuan98/SDMBench . Our benchmarking workflow is provided as a reproducible pipeline at https://github.com/zhaofangyuan98/SDMBench/tree/main/SDMBench . We also provide a tutorial at https://github.com/zhaofangyuan98/SDMBench/tree/main/Tutorial .

Vandereyken, K., Sifrim, A., Thienpont, B. & Voet, T. Methods and applications for single-cell and spatial multi-omics. Nat. Rev. Genet. https://doi.org/10.1038/s41576-023-00580-2 (2023).

Seferbekova, Z., Lomakin, A., Yates, L. R. & Gerstung, M. Spatial biology of cancer evolution. Nat. Rev. Genet. https://doi.org/10.1038/s41576-022-00553-x (2022).


Moffitt, J. R., Lundberg, E. & Heyn, H. The emerging landscape of spatial profiling technologies. Nat. Rev. Genet. https://doi.org/10.1038/s41576-022-00515-3 (2022).

Zeng, H. et al. Spatially resolved single-cell translatomics at molecular resolution. Science 380 , eadd3067 (2023).


Shi, H. et al. Spatial atlas of the mouse central nervous system at molecular resolution. Nature https://doi.org/10.1038/s41586-023-06569-5 (2023).


Chen, A. et al. Single-cell spatial transcriptome reveals cell-type organization in the macaque cortex. Cell 186 , 3726–3743 e3724 (2023).

Zhang, M. et al. Spatially resolved cell atlas of the mouse primary motor cortex by MERFISH. Nature 598 , 137–143 (2021).


Zhang, M. et al. Molecularly defined and spatially resolved cell atlas of the whole mouse brain. Nature 624 , 343–354 (2023).

Chang, Y. et al. Define and visualize pathological architectures of human tissues from spatially resolved transcriptomics using deep learning. Comput. Struct. Biotechnol. J. 20 , 4600–4617 (2022).

Dong, K. & Zhang, S. Deciphering spatial domains from spatially resolved transcriptomics with adaptive graph attention auto-encoder. Nat. Commun. https://doi.org/10.1038/s41467-022-29439-6 (2021).

Fu, H. et al. Unsupervised spatial embedded deep representation of spatial transcriptomics. Preprint at bioRxiv https://doi.org/10.1101/2021.06.15.448542 (2021).

Hu, J. et al. SpaGCN: integrating gene expression, spatial location and histology to identify spatial domains and spatially variable genes by graph convolutional network. Nat. Methods 18 , 1342–1351 (2021).

Li, J., Chen, S., Pan, X., Yuan, Y. & Shen, H.-B. Cell clustering for spatial transcriptomics data with graph neural networks. Nat. Comput. Sci. 2 , 399–408 (2022).

Yuan, Z. et al. SOTIP is a versatile method for microenvironment modeling with spatial omics data. Nat. Commun. 13 , 7330 (2022).

Yang, M. et al. Position-informed contrastive learning for spatially resolved omics deciphers hierarchical tissue structure at both cellular and niche levels. Preprint at Research Square https://doi.org/10.21203/rs.3.rs-1067780/v1 (2022).

Cable, D. M. et al. Cell type-specific inference of differential expression in spatial transcriptomics. Nat. Methods 19 , 1076–1087 (2022).


Zeng, H. et al. Integrative in situ mapping of single-cell transcriptional states and tissue histopathology in a mouse model of Alzheimer’s disease. Nat. Neurosci. https://doi.org/10.1038/s41593-022-01251-x (2023).

Palla, G., Fischer, D. S., Regev, A. & Theis, F. J. Spatial components of molecular tissue biology. Nat. Biotechnol. https://doi.org/10.1038/s41587-021-01182-1 (2022).

Rao, A., Barkley, D., Franca, G. S. & Yanai, I. Exploring tissue architecture using spatial transcriptomics. Nature 596 , 211–220 (2021).

Cheng, A., Hu, G. & Li, W. V. Benchmarking cell-type clustering methods for spatially resolved transcriptomics data. Brief. Bioinform. 24 , bbac475 (2023).

Xu, Z. et al. STOmicsDB: a comprehensive database for spatial transcriptomics data sharing, analysis and visualization. Nucleic Acids Res. 52 , D1053–D1061 (2024).

Long, B., Miller, J. & The SpaceTx Consortium. SpaceTx: a roadmap for benchmarking spatial transcriptomics exploration of the brain. Preprint at https://arxiv.org/abs/2301.08436 (2023).

Megill, C. et al. Cellxgene: a performant, scalable exploration platform for high dimensional sparse matrices. Preprint at bioRxiv https://doi.org/10.1101/2021.04.05.438318 (2021).

Fan, Z., Chen, R. & Chen, X. SpatialDB: a database for spatially resolved transcriptomes. Nucleic Acids Res. https://doi.org/10.1093/nar/gkz934 (2019).

Maynard, K. R. et al. Transcriptome-scale spatial gene expression in the human dorsolateral prefrontal cortex. Nat. Neurosci. 24 , 425–436 (2021).

Yuan, Z. et al. SODB facilitates comprehensive exploration of spatial omics data. Nat. Methods https://doi.org/10.1038/s41592-023-01773-7 (2023).

Chen, A. et al. Spatiotemporal transcriptomic atlas of mouse organogenesis using DNA nanoball-patterned arrays. Cell 185 , 1777–1792 e1721 (2022).

Chen, X., Sun, Y.-C., Church, G. M., Lee, J. H. & Zador, A. M. Efficient in situ barcode sequencing using padlock probe-based BaristaSeq. Nucleic Acids Res. 46 , e22 (2018).

Chen, K. H., Boettiger, A. N., Moffitt, J. R., Wang, S. Y. & Zhuang, X. W. Spatially resolved, highly multiplexed RNA profiling in single cells. Science https://doi.org/10.1126/science.aaa6090 (2015).

Codeluppi, S. et al. Spatial organization of the somatosensory cortex revealed by osmFISH. Nat. Methods 15 , 932–935 (2018).

Wang, X. et al. Three-dimensional intact-tissue sequencing of single-cell transcriptional states. Science 361 , eaat5691 (2018).

Ren, H., Walker, B. L., Cang, Z. & Nie, Q. Identifying multicellular spatiotemporal organization of cells with SpaceFlow. Nat. Commun. 13 , 4076 (2022).

Wolf, F. A., Angerer, P. & Theis, F. J. SCANPY: large-scale single-cell gene expression data analysis. Genome Biol. 19 , 15 (2018).

Traag, V. A., Waltman, L. & Van Eck, N. J. From Louvain to Leiden: guaranteeing well-connected communities. Sci. Rep. 9 , 5233 (2019).

Zhao, E. et al. Spatial transcriptomics at subspot resolution with BayesSpace. Nat. Biotechnol. https://doi.org/10.1038/s41587-021-00935-2 (2021).

Pham, D. et al. Robust mapping of spatiotemporal trajectories and cell–cell interactions in healthy and diseased tissues. Nat. Commun. 14 , 7739 (2023).

Cang, Z., Ning, X., Nie, A., Xu, M. & Zhang, J. SCAN-IT: domain segmentation of spatial transcriptomics images by graph neural network. In 32nd British Machine Vision Conference https://www.bmvc2021-virtualconference.com/conference/papers/paper_1139.html (2021).

Dong, K. & Zhang, S. Deciphering spatial domains from spatially resolved transcriptomics with an adaptive graph attention auto-encoder. Nat. Commun. 13 , 1739 (2022).

Zong, Y. et al. conST: an interpretable multi-modal contrastive learning framework for spatial transcriptomics. Preprint at bioRxiv https://doi.org/10.1101/2022.01.14.476408 (2022).

Li, Z. & Zhou, X. BASS: multi-scale and multi-sample analysis enables accurate cell type clustering and spatial domain detection in spatial transcriptomic studies. Genome Biol. 23 , 168 (2022).

Long, Y. et al. Spatially informed clustering, integration, and deconvolution of spatial transcriptomics with GraphST. Nat. Commun. 14 , 1155 (2023).

Li, B. et al. Benchmarking spatial and single-cell transcriptomics integration methods for transcript distribution prediction and cell type deconvolution. Nat. Methods 19 , 662–670 (2022).

Moses, L. & Pachter, L. Museum of spatial transcriptomics. Nat. Methods 19 , 534–546 (2022).

Rosenberg, A. & Hirschberg, J. V-measure: a conditional entropy-based external cluster evaluation measure. In Proc. 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL) 410–420 (2007).

Shang, L. & Zhou, X. Spatially aware dimension reduction for spatial transcriptomics. Nat. Commun. 13 , 7203 (2022).

Zuo, C. et al. Elucidating tumor heterogeneity from spatially resolved transcriptomics data by multi-view graph collaborative learning. Nat. Commun. 13 , 5962 (2022).

Moran, P. A. Notes on continuous stochastic phenomena. Biometrika 37 , 17–23 (1950).


Geary, R. C. The contiguity ratio and statistical mapping. Incorp. Stat. 5 , 115–146 (1954).


Fang, R. et al. Conservation and divergence of cortical cell organization in human and mouse revealed by MERFISH. Science 377 , 56–62 (2022).

Moffitt, J. R. et al. Molecular, spatial, and functional single-cell profiling of the hypothalamic preoptic region. Science 362 , eaau5324 (2018).


Andersson, A. et al. Spatial deconvolution of HER2-positive breast cancer delineates tumor-associated cell type interactions. Nat. Commun. 12 , 6012 (2021).

Guilliams, M. et al. Spatial proteogenomics reveals distinct and evolutionarily conserved hepatic macrophage niches. Cell 185 , 379–396. e338 (2022).

Moncada, R. et al. Integrating microarray-based spatial transcriptomics and single-cell RNA-seq reveals tissue architecture in pancreatic ductal adenocarcinomas. Nat. Biotechnol. 38 , 333–342 (2020).

Lohoff, T. et al. Integration of spatial and single-cell transcriptomic data elucidates mouse organogenesis. Nat. Biotechnol. https://doi.org/10.1038/s41587-021-01006-2 (2021).

Allen, W. E., Blosser, T. R., Sullivan, Z. A., Dulac, C. & Zhuang, X. Molecular and spatial signatures of mouse brain aging at single-cell resolution. Cell https://doi.org/10.1016/j.cell.2022.12.010 (2022).

Korsunsky, I. et al. Fast, sensitive and accurate integration of single-cell data with Harmony. Nat. Methods 16 , 1289–1296 (2019).

Luecken, M. D. et al. Benchmarking atlas-level data integration in single-cell genomics. Nat. Methods 19 , 41–50 (2022).

Saelens, W., Cannoodt, R., Todorov, H. & Saeys, Y. A comparison of single-cell trajectory inference methods. Nat. Biotechnol. 37 , 547–554 (2019).

Wolf, F. A. et al. Louvain usage in Scanpy. Scanpy https://scanpy.readthedocs.io/en/stable/generated/scanpy.tl.louvain.html (2018).

Wolf, F. A. et al. Leiden usage in Scanpy. Scanpy https://scanpy.readthedocs.io/en/stable/generated/scanpy.tl.leiden.html (2018).

Hao, M., Hua, K. & Zhang, X. SOMDE: a scalable method for identifying spatially variable genes with self-organizing map. Bioinformatics https://doi.org/10.1093/bioinformatics/btab471 (2021).

Sun, S., Zhu, J. & Zhou, X. Statistical analysis of spatial expression patterns for spatially resolved transcriptomic studies. Nat. Methods 17 , 193–200 (2020).

Sun, S. et al. SPARK usage for spatially variable gene detection. Xiang Zhou Lab https://xzhoulab.github.io/SPARK/ (2020).

Maynard, K. R. et al. spatialLIBD for hosting dorsolateral prefrontal cortex 10x Visium dataset. spatialLIBD http://research.libd.org/spatialLIBD (2021).

Xu, Z. et al. STOmicsDB database page of mouse embryo Stereo-seq dataset. China National GeneBank https://db.cngb.org/stomics/mosta/ (2022).

Long, B. et al. Webpage of SpaceTx. The SpaceTX Consortium https://spacetx.github.io/ (2023).

Moffitt, J. R. et al. Data from: Molecular, spatial and functional single-cell profiling of the hypothalamic preoptic region. Dryad . https://doi.org/10.5061/dryad.8t8s248 (2018).

Codeluppi, S. et al. Data and code availability. Expression data: loom file with osmFISH data. Linnarsson Lab http://linnarssonlab.org/osmFISH/availability/ (2018).

Wang, X. et al. Data from: Three-dimensional intact-tissue sequencing of single-cell transcriptional states. Deisseroth Lab http://clarityresourcecenter.org/ (2018).

Andersson, A. et al. Spatial deconvolution of HER2-positive breast cancer delineates tumor-associated cell type interactions. Zenodo https://doi.org/10.5281/zenodo.4751624 (2021).

Allen, W. E. et al. Molecular and spatial signatures of mouse brain aging at single-cell resolution. CZ CELLxGENE https://cellxgene.cziscience.com/collections/31937775-0602-4e52-a799-b6acdd2bac2e (2022).

Wang, J. et al. scGNN is a novel graph neural network framework for single-cell RNA-Seq analyses. Nat. Commun. 12 , 1882 (2021).

Kiselev, V. Y. et al. SC3: consensus clustering of single-cell RNA-seq data. Nat. Methods 14 , 483–486 (2017).

Wang, B., Zhu, J. J., Pierson, E., Ramazzotti, D. & Batzoglou, S. Visualization and analysis of single-cell RNA-seq data by kernel-based similarity learning. Nat. Methods 14 , 414–416 (2017).

Pedregosa, F. et al. Homogeneity score usage in scikit-learn. scikit-learn https://scikit-learn.org/stable/modules/generated/sklearn.metrics.homogeneity_score.html (2014).

Pedregosa, F. et al. Completeness score usage in scikit-learn. scikit-learn https://scikit-learn.org/stable/modules/generated/sklearn.metrics.completeness_score.html (2014).

Alexandrov, T. & Bartels, A. Testing for presence of known and unknown molecules in imaging mass spectrometry. Bioinformatics 29 , 2335–2342 (2013).

Guo, L. et al. Data filtering and its prioritization in pipelines for spatial segmentation of mass spectrometry imaging. Anal. Chem. 93 , 4788–4793 (2021).

Rousseeuw, P. J. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 20 , 53–65 (1987).


Miller, B. F., Bambah-Mukku, D., Dulac, C., Zhuang, X. & Fan, J. Characterizing spatial gene expression heterogeneity in spatially resolved single-cell transcriptomic data with nonuniform cellular densities. Genome Res. 31 , 1843–1855 (2021).

Ren, H. et al. SpaceFlow. GitHub https://github.com/hongleir/SpaceFlow (2022).

Virtanen, P. et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Methods 17 , 261–272 (2020).

Harris, C. R. et al. Array programming with NumPy. Nature 585 , 357–362 (2020).

Palla, G. et al. Squidpy: a scalable framework for spatial omics analysis. Nat. Methods https://doi.org/10.1038/s41592-021-01358-2 (2022).

McKinney, W. Data structures for statistical computing in Python. In Proc. 9th Python in Science Conference Vol. 445 (eds van der Walt, S. & Millman, J.) 51–56 (2010).

Pedregosa, F. et al. Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12 , 2825–2830 (2011).


Hunter, J. D. Matplotlib: a 2D graphics environment. Comput. Sci. Eng. 9 , 90–95 (2007).

Waskom, M. L. Seaborn: statistical data visualization. J. Open Source Softw. 6 , 3021 (2021).


Davis, M., Sick, J. & Eschbacher, A. palettable: color palettes for Python. Astrophysics Source Code Library ascl: 2202.2005 (2022).


Acknowledgements

This study was supported by National Nature Science Foundation of China (62303119, Z.Y.), Chenguang Program of Shanghai Education Development Foundation and Shanghai Municipal Education Commission (22CGA02, Z.Y.), Shanghai Science and Technology Development Funds (23YF1403000 Z.Y.), Tencent AI Lab Rhino-Bird Focused Research Program (RBFR2023008, Z.Y.), Innovation Fund of Institute of Computing and Technology, CAS (E161080 and E161030, Yi Zhao) and Beijing Natural Science Foundation Haidian Origination and Innovation Joint Fund (L222007, Yi Zhao). This work was also supported by Shanghai Municipal Science and Technology Major Project (no. 2018SHZDZX01), ZJ Lab, and Shanghai Center for Brain Science and Brain-Inspired Technology, and 111 Project (no. B18015). The authors would like to acknowledge the Nanjing Institute of InforSuperBahn MLOps for providing the training and evaluation platform.

Author information

These authors contributed equally: Zhiyuan Yuan, Fangyuan Zhao.

Authors and Affiliations

Center for Medical Research and Innovation, Shanghai Pudong Hospital, Fudan University Pudong Medical Center, Fudan University, Shanghai, China

Zhiyuan Yuan, Yan Cui & Xiao-Yong Zhang

Institute of Science and Technology for Brain-Inspired Intelligence; MOE Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence; MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China

Zhiyuan Yuan & Yan Cui

Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

Fangyuan Zhao, Senlin Lin & Yi Zhao

University of Chinese Academy of Sciences, Beijing, China

Tencent AI Lab, Shenzhen, China

Yu Zhao & Jianhua Yao

Bioinformatics Center, Institute for Chemical Research, Kyoto University, Kyoto, Japan


Contributions

Yi Zhao and Z.Y. conceived and designed the study. Z.Y. and Yi Zhao designed the metrics and benchmark pipeline, and collected the methods and datasets. F.Z. and Z.Y. implemented the benchmarking pipeline. Z.Y. implemented the divide-and-conquer strategy. Z.Y. and F.Z. analyzed the results and generated the figures. Z.Y., F.Z. and Yi Zhao wrote the manuscript. Yu Zhao, J.Y. and Y.C. helped implement the large-data scalability. X.Z. and J.Y. provided tissue anatomical knowledge. S.L. helped re-implement the methods.

Corresponding authors

Correspondence to Zhiyuan Yuan or Yi Zhao.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Methods thanks Karoline Holler and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Primary Handling Editor: Madhura Mukhopadhyay, in collaboration with the Nature Methods team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 The differences between spatial clustering and cell type clustering.

Spatial clustering and cell type clustering are different tasks; we explain how they differ in their goals, features and representative work, and use an example from mouse motor cortex data to illustrate these differences.

Extended Data Fig. 2 Methods performance on various biotechnologies.

On the heatmap, rows represent biotechnologies, columns represent methods, and each cell shows the corresponding NMI value.
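To make the metric concrete: NMI (normalized mutual information) compares predicted cluster labels against ground-truth annotations, and is invariant to how cluster IDs are numbered. Below is a minimal, illustrative sketch of how such a heatmap could be assembled with scikit-learn, pandas and seaborn; the technology names, method names and label arrays are hypothetical placeholders, not the benchmark's actual data.

```python
# Minimal sketch: summarizing clustering accuracy as an NMI heatmap.
# All dataset/method names and labels below are hypothetical placeholders.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical results: (ground-truth labels, predicted labels) for each
# (biotechnology, method) pair, as might be collected from a benchmark run.
results = {
    ("MERFISH", "BASS"):    (np.array([0, 0, 1, 1, 2]), np.array([0, 0, 1, 2, 2])),
    ("MERFISH", "STAGATE"): (np.array([0, 0, 1, 1, 2]), np.array([0, 1, 1, 1, 2])),
    ("Visium", "BASS"):     (np.array([0, 1, 1, 2, 2]), np.array([0, 1, 1, 2, 2])),
    ("Visium", "STAGATE"):  (np.array([0, 1, 1, 2, 2]), np.array([1, 1, 0, 2, 2])),
}

# Compute NMI for every pair and pivot into a technologies x methods table.
rows = [
    {"technology": tech, "method": method,
     "NMI": normalized_mutual_info_score(truth, pred)}
    for (tech, method), (truth, pred) in results.items()
]
table = pd.DataFrame(rows).pivot(index="technology", columns="method", values="NMI")

# Rows are biotechnologies, columns are methods, cells are NMI values.
sns.heatmap(table, annot=True, fmt=".2f", cmap="viridis")
plt.tight_layout()
plt.savefig("nmi_heatmap.png")
```

Because NMI handles label permutations internally, the predicted cluster IDs do not need to match the annotation IDs for the score to be meaningful.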

Extended Data Fig. 3 User guidance.

Recommended methods for users, according to the data at hand. Note that these recommendations are based on accuracy scores alone; for more specific guidance, users should consult Fig. 4 for other aspects of performance.

Extended Data Fig. 4 Performance on challenging datasets.

A: IoU of all methods across small and non-continuous datasets, where data35–data41 are breast cancer data and data42–data54 are liver data. B: The number of successful identifications (IoU ≥ 0.5) for each method.
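As a concrete illustration of panel B's criterion, the sketch below computes IoU between a predicted domain mask and a ground-truth mask, then counts identifications reaching IoU ≥ 0.5. The masks are hypothetical examples; the benchmark's own procedure for matching clusters to target domains is not reproduced here.

```python
# Minimal sketch: counting "successful identifications" at an IoU
# threshold of 0.5, as in panel B. The masks below are hypothetical.
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over union of two boolean masks over the same spots/cells."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return float(intersection / union) if union > 0 else 0.0

# One boolean mask per dataset: does each spot belong to the target domain?
true_masks = [np.array([1, 1, 0, 0, 1], bool), np.array([0, 1, 1, 0, 0], bool)]
pred_masks = [np.array([1, 1, 0, 1, 1], bool), np.array([0, 0, 1, 1, 0], bool)]

scores = [iou(p, t) for p, t in zip(pred_masks, true_masks)]
successes = sum(score >= 0.5 for score in scores)
print(f"IoU per dataset: {scores}; successful identifications: {successes}")
```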

Extended Data Fig. 5 Limitations of current methods on large-scale datasets.

A large-scale MERFISH dataset was used to illustrate that current methods cannot be applied to it. A: The dataset information. B: Other large-scale datasets available in the field. Each point is a dataset; x is the number of cells and y is the number of slices. Publication information is annotated beside the points, and colors indicate different spatial technologies. C: Issues of each method when applied to the dataset in A. A time issue means the running time exceeds 5 hours; a memory issue means the program reports an "out of memory" error. Computational resources can be found in Methods. D: The running time of BASS and STAGATE as a function of the number of slices of the dataset in A.
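The evaluation protocol implied here (flagging runs that exceed 5 hours or exhaust memory) can be sketched as a simple harness. This is an assumption-laden illustration: `run_method` and the slice-subset inputs are hypothetical, and a production harness would enforce the time cap with a hard subprocess timeout rather than checking elapsed time after completion.

```python
# Illustrative sketch of a scalability probe: run a clustering callable on
# growing slice counts, record wall-clock time, and flag time/memory failures.
# `run_method` and `data_by_slices` are hypothetical stand-ins.
import time

TIME_LIMIT_S = 5 * 3600  # "time issue": running time exceeds 5 hours

def probe(run_method, data_by_slices):
    records = []
    for n_slices, data in data_by_slices:
        start = time.perf_counter()
        try:
            run_method(data)
            elapsed = time.perf_counter() - start
            status = "time issue" if elapsed > TIME_LIMIT_S else "ok"
        except MemoryError:
            elapsed, status = time.perf_counter() - start, "memory issue"
        records.append((n_slices, elapsed, status))
    return records
```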

Source data

Supplementary information

Supplementary Figs. 1–51 and Notes 1–13.

Reporting Summary

Peer Review File

Supplementary Tables 1–3

Supplementary Table 1. Data information. Supplementary Table 2. Running status of benchmarking methods. Supplementary Table 3. Parameter searching range of benchmarking methods.

Source Data Fig. 1

Raw data of bar plots in Fig. 1b.

Source Data Fig. 2

Raw data of methods benchmarking for MERFISH and Visium data in Fig. 2.

Source Data Fig. 3

Raw data of correlation matrix in Fig. 3.

Source Data Fig. 4

Raw data of overall performance comparisons in Fig. 4.

Source Data Fig. 5

Raw data of large-scale scalability in Fig. 5.

Source Data Fig. 6

Raw data of robustness evaluations in Fig. 6.

Source Data Extended Data Fig./Table 5

Raw data of running time in Extended Data Fig. 5.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article

Yuan, Z., Zhao, F., Lin, S. et al. Benchmarking spatial clustering methods with spatially resolved transcriptomics data. Nat Methods (2024). https://doi.org/10.1038/s41592-024-02215-8


Received: 13 March 2023

Accepted: 16 February 2024

Published: 15 March 2024

DOI: https://doi.org/10.1038/s41592-024-02215-8


Creating a Corporate Social Responsibility Program with Real Impact

  • Emilio Marti,
  • David Risi,
  • Eva Schlindwein,
  • Andromachi Athanasopoulou


Lessons from multinational companies that adapted their CSR practices based on local feedback and knowledge.

Exploring the critical role of experimentation in corporate social responsibility (CSR), research on four multinational companies reveals a stark difference in CSR effectiveness. Successful companies integrate an experimental approach, constantly adapting their CSR practices based on local feedback and knowledge. This strategy fosters genuine community engagement and responsive initiatives, as seen in one mining company's impactful HIV/AIDS program. Conversely, companies that rely on standardized, inflexible CSR methods often fail to achieve their goals, as demonstrated by another mining company's partnership that collapsed amid local corruption. The study recommends encouraging broad employee participation in CSR and fostering a culture that values CSR's long-term business benefits. It also suggests that sustainable investors and ESG rating agencies should assess companies' experimental approaches to CSR, going beyond current practices to examine how diverse employees are involved in both developing and adapting CSR initiatives. Overall, embracing a dynamic, data-driven approach to CSR is essential for meaningful social and environmental impact.

By now, almost all large companies are engaged in corporate social responsibility (CSR): they have CSR policies, employ CSR staff, engage in activities that aim to have a positive impact on the environment and society, and write CSR reports. However, the evolution of CSR has brought forth new challenges. In stark contrast to two decades ago, when the primary concern was the sheer neglect of CSR, the current issue lies in the ineffective execution of these practices. Why do some companies implement CSR in ways that create a positive impact on the environment and society, while others fail to do so? Our research reveals that experimentation is critical for impactful CSR, which has implications both for companies that implement CSR and for those that externally monitor these CSR activities, such as sustainable investors and ESG rating agencies.

  • Emilio Marti is an assistant professor at the Rotterdam School of Management (RSM) at Erasmus University Rotterdam.
  • David Risi is a professor at the Bern University of Applied Sciences and a habilitated lecturer at the University of St. Gallen. His research focuses on how companies organize CSR and sustainability.
  • Eva Schlindwein is a professor at the Bern University of Applied Sciences and a postdoctoral fellow at the University of Oxford. Her research focuses on how organizations navigate tensions between business and society.
  • Andromachi Athanasopoulou is an associate professor at Queen Mary University of London and an associate fellow at the University of Oxford. Her research focuses on how individuals manage their leadership careers and make ethically charged decisions.


Title: Benchmarking Image Transformers for Prostate Cancer Detection from Ultrasound Data

Abstract: PURPOSE: Deep learning methods for classifying prostate cancer (PCa) in ultrasound images typically employ convolutional networks (CNNs) to detect cancer in small regions of interest (ROI) along a needle trace region. However, this approach suffers from weak labelling, since the ground-truth histopathology labels do not describe the properties of individual ROIs. Recently, multi-scale approaches have sought to mitigate this issue by combining the context awareness of transformers with a CNN feature extractor to detect cancer from multiple ROIs using multiple-instance learning (MIL). In this work, we present a detailed study of several image transformer architectures for both ROI-scale and multi-scale classification, and a comparison of the performance of CNNs and transformers for ultrasound-based prostate cancer classification. We also design a novel multi-objective learning strategy that combines both ROI and core predictions to further mitigate label noise. METHODS: We evaluate 3 image transformers on ROI-scale cancer classification, then use the strongest model to tune a multi-scale classifier with MIL. We train our MIL models using our novel multi-objective learning strategy and compare our results to existing baselines. RESULTS: We find that for both ROI-scale and multi-scale PCa detection, image transformer backbones lag behind their CNN counterparts. This deficit in performance is even more noticeable for larger models. When using multi-objective learning, we can improve performance of MIL, with a 77.9% AUROC, a sensitivity of 75.9%, and a specificity of 66.3%. CONCLUSION: Convolutional networks are better suited for modelling sparse datasets of prostate ultrasounds, producing more robust features than transformers in PCa detection. Multi-scale methods remain the best architecture for this task, with multi-objective learning presenting an effective way to improve performance.
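The abstract does not spell out the loss, so the following is a purely illustrative sketch: multi-objective strategies of this kind are often implemented as a weighted sum of an ROI-level term (each ROI inheriting its core's weak label) and a core-level term from the MIL aggregate prediction. Everything below, including the weighting `alpha`, is an assumption for illustration, not the paper's actual formulation.

```python
# Illustrative sketch of a multi-objective loss combining ROI-level and
# core-level (bag-level) predictions. The weighting `alpha` is assumed.
import torch
import torch.nn.functional as F

def multi_objective_loss(roi_logits: torch.Tensor,
                         core_logit: torch.Tensor,
                         core_label: torch.Tensor,
                         alpha: float = 0.5) -> torch.Tensor:
    # Weak ROI supervision: every ROI in a core inherits the core's label.
    roi_labels = core_label.expand_as(roi_logits)
    roi_loss = F.binary_cross_entropy_with_logits(roi_logits, roi_labels)
    # Core-level supervision from the MIL aggregate prediction.
    core_loss = F.binary_cross_entropy_with_logits(core_logit, core_label)
    return alpha * roi_loss + (1 - alpha) * core_loss

# Example: 8 ROIs from one benign core (label 0).
roi_logits = torch.randn(8)
core_logit = torch.randn(1)
core_label = torch.zeros(1)
loss = multi_objective_loss(roi_logits, core_logit, core_label)
```

Balancing the two terms lets the noisy per-ROI signal regularize feature learning while the cleaner core-level label anchors the final prediction.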



COMMENTS

  1. How To Write The Methodology Chapter

    Do yourself a favour and start with the end in mind. Section 1 - Introduction. As with all chapters in your dissertation or thesis, the methodology chapter should have a brief introduction. In this section, you should remind your readers what the focus of your study is, especially the research aims. As we've discussed many times on the blog ...

  2. Dissertation Methodology

    The structure of a dissertation methodology can vary depending on your field of study, the nature of your research, and the guidelines of your institution. However, a standard structure typically includes the following elements: Introduction: Briefly introduce your overall approach to the research.

  3. How to Write the Methodology for a Dissertation

    In a dissertation proposal, the methodology is written in the future tense (e.g. "The research design will be…"). However, when you come to write the methodology for the dissertation, this research has already been completed, so the methodology should be written in the past tense (e.g. "The research design was …").

  4. Writing the Dissertation

    Guide contents. As part of the Writing the Dissertation series, this guide covers the most common conventions found in a methodology chapter, giving you the necessary knowledge, tips and guidance needed to impress your markers! The sections are organised as follows: Getting Started – defines the methodology and its core characteristics; Structure – provides a detailed walk-through of common ...

  5. What Is a Research Methodology?

    Step 1: Explain your methodological approach. Step 2: Describe your data collection methods. Step 3: Describe your analysis method. Step 4: Evaluate and justify the methodological choices you made. Tips for writing a strong methodology chapter. Other interesting articles.

  6. PDF Reference Guide: Benchmarking or Research?

    INTENT of benchmarking is to assess and improve established practices (i.e., usual practices) within an organization or unit. INTENT of the activity is to generate knowledge—by generating hypotheses, testing them, and answering research questions—to develop new paradigms or untested methods, or establish standards where none are accepted.

  7. Benchmarking your research

    Benchmarking your research performance against other comparable individuals, institutions, and research centers, can be another method of demonstrating the impact and engagement of your research. This approach can be useful when applying for grant funding or career advancement. The Library guide Research evidence for grants and promotion is a ...

  8. PDF A Complete Dissertation

    DISSERTATION CHAPTERS Order and format of dissertation chapters may vary by institution and department. 1. Introduction 2. Literature review 3. Methodology 4. Findings 5. Analysis and synthesis 6. Conclusions and recommendations Chapter 1: Introduction This chapter makes a case for the signifi-cance of the problem, contextualizes the

  9. Benchmarking organizational resilience: A

    Dissertation or Thesis; Benchmarking organizational resilience: A cross-sectional comparative research study. ... This research extends prior research that developed a methodology and survey tool for measuring and benchmarking organizational resilience. Subsequent research utilized the methodology and survey tool on organizations in New Zealand ...

  10. Benchmarking: A Method for Quality Assessment and ...

    Introduction. This chapter discusses the concept of quality in addition to quality assessment, self-evaluation, benchmarking, and quality enhancement, which comprise the quality spectrum. The chapter explores benchmarking as a method used for quality enhancement and focuses on the definitions, theory, rationale, and best practices of benchmarking.

  11. Benchmarking of thesis research: A case study

    Benchmarking is a suitable methodology to apply to these practices. The authors assisted an engineering-and-design company in adopting this psychotherapists' practice and applied it to a work ...

  12. A Complete Guide To Dissertation Methodology

    The methodology is perhaps the most challenging and laborious part of the dissertation. Essentially, the methodology helps in understanding the broad, philosophical approach behind the methods of research you chose to employ in your study. The research methodology elaborates on the 'how' part of your research.

  13. Dissertations / Theses: 'Benchmarking methodology'

    This methodology consists of three main components: benchmarking measures, benchmarking data collection processes, and benchmarking data collection tool. In this approach results of previous studies from the literature were used too. In order to verify and validate the methodology project data were collected in two middle size software ...

  14. 1. Proposal, Lit Search, and Benchmark

    How to design, write, and present a successful dissertation proposal [ebook] by Elizabeth A. Wentz This concise, hands-on book by author Elizabeth A. Wentz is essential reading for any graduate student entering the dissertation process in the social or behavioral sciences. The book addresses the importance of ethical scientific research, developing your curriculum vitae, effective reading and ...

  15. PDF DISSERTATION HANDBOOK

    Dissertation Benchmarks I-V combine for 12 credits and span over two and a half years. The DPB is normally completed during the first three semesters of a student's doctoral program. DIS 890 Dissertation Plan Benchmark (2 credits) DIS 891 Dissertation Benchmark I - Chapter 1 (2 credits)

  16. Chapter 4

    The level of the benchmarking exercise should also be determined at this stage, since it determines which of the remaining steps in the methodology will need to be applied: • Level 1 (trend analysis): Steps 1, 2, and 4, and possibly Steps 6–8 depending on the question to be answered. • Level 2 (peer comparison): Steps 1–4, and ...

  17. Assessing and predicting the quality of research master's ...

    The educational quality of a research master's degree can be partly reflected by the examiner's score for the thesis. This study focuses on finding positive predictors of this score, with the aim of developing assessment and prediction methods for the educational quality of postgraduates. The study is based on regression analysis of characteristics extracted from publications and references ...

  18. Benchmarking ~ Why it's Important

    Practice benchmarking makes use of data collection. It mainly uses qualitative data to compare how people, technology, or processes conduct activities. Process mapping is vital at this stage to help during the comparison of information. This process helps you to find how and where loopholes in higher learning occur.

  19. PDF A Method for Stakeholder-based Comparative Benchmarking of Airports

    A METHOD FOR STAKEHOLDER-BASED COMPARATIVE BENCHMARKING OF AIRPORTS by Claes Johan David Schaar A Dissertation Submitted to the Graduate Faculty of George Mason University In Partial fulfillment of The Requirements for the Degree of Doctor of Philosophy Information Technology Committee:

  20. Dissertations / Theses: 'Quality benchmarks'

    The research shows that benchmarking as a method will have a significant impact on ordinary quality assurance in higher education. This doctoral dissertation revealed challenges to integrate external quality audits and internally driven benchmarking. ... and 魏志衡. "A Study on Restructuring Evaluation Indicators of the Taipei Quality ...

  21. Dissertations / Theses: 'Benchmarking'

    The scope of the thesis is limited to benchmark the processor only based on assembly coding. The quality check of compiler is not included. The method of the benchmarking was proposed by BDTI, Berkeley Design Technology Incorporations, which is the general methodology used in world wide DSP industry.

  22. Efficient Decentralized Learning Methods for Deep Neural Networks

    We conclude this dissertation by presenting a summary of the proposed decentralized methods and their trade-offs for heterogeneous data distributions. Overall, the methods proposed in this thesis address the critical limitations of training deep neural networks in a decentralized setup and advance the state-of-the-art in this domain.

  23. Benchmarking spatial clustering methods with spatially resolved

    Here we present a benchmark study of 13 computational methods on 34 SRT data (7 datasets). The performance was evaluated on the basis of accuracy, spatial continuity, marker genes detection ...

  24. [2403.20254] Benchmarking the Robustness of Temporal Action Detection

    Temporal action detection (TAD) aims to locate action positions and recognize action categories in long-term untrimmed videos. Although many methods have achieved promising results, their robustness has not been thoroughly studied. In practice, we observe that temporal information in videos can be occasionally corrupted, such as missing or blurred frames. Interestingly, existing methods often ...

  25. Creating a Corporate Social Responsibility Program with Real Impact

    By now, almost all large companies are engaged in corporate social responsibility (CSR): they have CSR policies, employ CSR staff, engage in activities that aim to have a positive impact on the ...

  26. [2403.20150] TFB: Towards Comprehensive and Fair Benchmarking of Time

    To support the integration of different methods into the benchmark and enable fair comparisons, TFB features a flexible and scalable pipeline that eliminates biases. Next, we employ TFB to perform a thorough evaluation of 21 Univariate Time Series Forecasting (UTSF) methods on 8,068 univariate time series and 14 Multivariate Time Series ...

  27. [2403.18233] Benchmarking Image Transformers for Prostate Cancer

    PURPOSE: Deep learning methods for classifying prostate cancer (PCa) in ultrasound images typically employ convolutional networks (CNNs) to detect cancer in small regions of interest (ROI) along a needle trace region. However, this approach suffers from weak labelling, since the ground-truth histopathology labels do not describe the properties of individual ROIs. Recently, multi-scale ...