Essay on Computer and its Uses for School Students and Children

A 500+ word essay on computers.

In this essay on the computer, we discuss some useful facts about computers. The modern computer has become an important part of our daily life, and its usage has increased manifold during the last decade. Nowadays, computers are used in every office, whether private or government. Mankind has been using computers for many decades now, and they are used in many fields like agriculture, designing, machinery making, defense and many more. Above all, they have revolutionized the whole world.


History of Computers

It is very difficult to trace the exact origin of computers, but according to some experts, computers existed at the time of World War II. At that time, they were used for keeping data, but only for government use and not for the public. Above all, the earliest computers were very large and heavy machines.

Working of a Computer 

The computer runs on a three-step cycle: input, process, and output. It follows this cycle in every task it is asked to do. In simple words, the data we feed into the computer is the input, the work the CPU does is the process, and the result the computer gives is the output.

Components and Types of Computer

A simple computer basically consists of a CPU, monitor, mouse, and keyboard. There are also hundreds of other parts that can be attached to it, such as a printer, laser pen, scanner, etc.

Computers are categorized into many different types, such as supercomputers, mainframes, personal computers (desktops), PDAs, laptops, etc. The mobile phone is also a type of computer because it fulfils all the criteria of being a computer.


Uses of Computer in Various Fields

As the usage of computers increased, it became a necessity for almost every field to use them in its operations. They have made working and sorting things easier. Below we mention some of the important fields that use computers in their daily operations.

Medical Field

Doctors use computers to diagnose diseases, run tests, and search for cures for deadly diseases. Many cures have been found because of the research that computers make possible.

Research

Whether it is scientific research, space research, or social research, computers help in all of them. Because of them, we are able to keep a check on the environment, space, and society. Space research has helped us explore the galaxies, while scientific research has helped us locate resources and other useful materials on the earth.

Defence

For any country, defence is most important for the safety and security of its people. Computers help the country's security agencies detect threats that could be harmful in the future. Above all, the defence industry uses them to keep surveillance on the enemy.

Threats from a Computer

While computers have become a necessity, they have also become a threat. This is due to hackers who steal private data and leak it on the internet, where anyone can access it. Apart from that, there are other threats like viruses, spam, bugs, and many other problems.


The computer is a very important machine that has become a useful part of our life. Computers have two faces: on one side they are a boon, and on the other a bane. How they are used depends entirely on us. A day may come when human civilization will not be able to survive without computers, because we depend on them so much. So far, they are a great invention of mankind that has helped save thousands and millions of lives.

Frequently Asked Questions on Computer

Q.1  What is a computer?

A.1 A computer is an electronic machine that makes our work easier and helps us in many ways.

Q.2 Mention various fields where computers are used.

A.2 Computers are mainly used in defence, medicine, and research.


An automated essay scoring systems: a systematic literature review

  • Published: 23 September 2021
  • Volume 55, pages 2495–2527 (2022)


  • Dadi Ramesh (ORCID: orcid.org/0000-0002-3967-8914) &
  • Suresh Kumar Sanampudi


Assessment in the education system plays a significant role in judging student performance. The present evaluation system relies on human assessment. As the student-to-teacher ratio gradually increases, the manual evaluation process becomes complicated; its drawbacks are that it is time-consuming, lacks reliability, and more. In this connection, online examination systems have evolved as an alternative to pen-and-paper methods. Present computer-based evaluation systems work only for multiple-choice questions, but there is no proper evaluation system for grading essays and short answers. Many researchers have worked on automated essay grading and short answer scoring over the last few decades, but assessing an essay by considering all parameters, such as the relevance of the content to the prompt, development of ideas, cohesion, and coherence, remains a big challenge. Few researchers focused on content-based evaluation, while many addressed style-based assessment. This paper provides a systematic literature review on automated essay scoring systems. We studied the artificial intelligence and machine learning techniques used for automatic essay scoring and analyzed the limitations of the current studies and research trends. We observed that essay evaluation is not done based on the relevance of the content and coherence.


1 Introduction

Due to the COVID-19 outbreak, an online educational system has become inevitable. In the present scenario, almost all educational institutions, ranging from schools to colleges, have adopted the online education system. Assessment plays a significant role in measuring the learning ability of the student. Most automated evaluation is available for multiple-choice questions, but assessing short and essay answers remains a challenge. The education system is shifting to online mode, with computer-based exams and automatic evaluation. This is a crucial application in the education domain that uses natural language processing (NLP) and machine learning techniques. The evaluation of essays is impossible with simple programming and simple techniques like pattern matching and shallow language processing. The problem here is that, for a single question, we get many responses from students, each with a different explanation, and all of them must be evaluated with respect to the question.

Automated essay scoring (AES) is a computer-based assessment system that automatically scores or grades student responses by considering appropriate features. AES research started in 1966 with the Project Essay Grader (PEG) by Ajay et al. (1973). PEG evaluates writing characteristics such as grammar, diction, construction, etc., to grade the essay. A modified version of PEG by Shermis et al. (2001) was released, which focuses on grammar checking with a correlation between human evaluators and the system. Foltz et al. (1999) introduced the Intelligent Essay Assessor (IEA), which evaluates content using latent semantic analysis to produce an overall score. E-rater was proposed by Powers et al. (2002), Intellimetric by Rudner et al. (2006), and the Bayesian Essay Test Scoring System (BETSY) by Rudner and Liang (2002); these systems use natural language processing (NLP) techniques that focus on style and content to obtain the score of an essay. The vast majority of essay scoring systems in the 1990s followed traditional approaches like pattern matching and statistical methods. Since the last decade, essay grading systems have started using regression-based and natural language processing techniques. AES systems developed from 2014 onwards, like that of Dong et al. (2017), used deep learning techniques that induce syntactic and semantic features, giving better results than earlier systems.

Ohio, Utah, and several other US states use AES systems in school education, such as the Utah Compose tool and the Ohio standardized test (an updated version of PEG), evaluating millions of student responses every year. These systems work for both formative and summative assessment and give feedback to students on their essays. Utah provided basic essay evaluation rubrics (six characteristics of essay writing): development of ideas, organization, style, word choice, sentence fluency, and conventions. The Educational Testing Service (ETS) has been conducting significant research on AES for more than a decade and has designed algorithms to evaluate essays in different domains, providing an opportunity for test-takers to improve their writing skills. Their current research addresses content-based evaluation.

The evaluation of essays and short answers should consider the relevance of the content to the prompt, development of ideas, cohesion, coherence, and domain knowledge. Proper assessment of these parameters defines the accuracy of the evaluation system, but they do not play an equal role in essay scoring and short answer scoring. In short answer evaluation, domain knowledge is required; for instance, the meaning of "cell" in physics and biology is different. In essay evaluation, the development of ideas with respect to the prompt is required. The system should also assess the completeness of the responses and provide feedback.

Several studies have examined AES systems, from the earliest to the latest. Blood (2011) provided a literature review covering PEG from 1984 to 2010, but it addressed only general aspects of AES systems, such as ethical issues and system performance. It did not cover the implementation details, was not a comparative study, and did not discuss the actual challenges of AES systems.

Burrows et al. (2015) reviewed AES systems on six dimensions: dataset, NLP techniques, model building, grading models, evaluation, and model effectiveness. However, they did not cover feature extraction techniques and their challenges, covered machine learning models only briefly, and did not provide a comparative analysis of AES systems in terms of feature extraction, model building, or the level of relevance, cohesion, and coherence.

Ke et al. (2019) provided a state-of-the-art review of AES systems but covered very few papers, did not list all the challenges, and offered no comparative study of AES models. Hussein et al. (2019) studied two categories of AES systems, four papers on handcrafted features and four on neural network approaches; they discussed a few challenges but did not cover feature extraction techniques or the performance of AES models in detail.

Klebanov et al. (2020) reviewed 50 years of AES systems, listing and categorizing the essential features that need to be extracted from essays, but did not provide a comparative analysis of all the work or discuss the challenges.

This paper aims to provide a systematic literature review (SLR) on automated essay grading systems. An SLR is an evidence-based systematic review that summarizes the existing research; it critically evaluates and integrates the findings of all relevant studies and addresses the research domain's specific research questions. Our research methodology follows the guidelines given by Kitchenham et al. (2009) for conducting the review process; they provide a well-defined approach to identifying gaps in current research and suggesting further investigation.

We describe our research method, research questions, and selection process in Sect. 2; the results of the research questions are discussed in Sect. 3; the synthesis of all the research questions is presented in Sect. 4; and the conclusion and possible future work are discussed in Sect. 5.

2 Research method

We framed the research questions with PICOC criteria.

Population (P) Student essay and answer evaluation systems.

Intervention (I) Evaluation techniques, datasets, feature extraction methods.

Comparison (C) Comparison of various approaches and results.

Outcomes (O) Estimation of the accuracy of AES systems.

Context (C) NA.

2.1 Research questions

To collect and provide research evidence from the available studies in the domain of automated essay grading, we framed the following research questions (RQ):

RQ1 What are the datasets available for research on automated essay grading?

The answer to this question can provide a list of the available datasets, their domains, and access to the datasets. It also gives the number of essays and corresponding prompts per dataset.

RQ2 What are the features extracted for the assessment of essays?

The answer to the question can provide an insight into various features so far extracted, and the libraries used to extract those features.

RQ3 Which are the evaluation metrics available for measuring the accuracy of algorithms?

The answer will provide the different evaluation metrics used for accurate measurement of each machine learning approach and the most commonly used measurement techniques.

RQ4 What are the Machine Learning techniques used for automatic essay grading, and how are they implemented?

It can provide insights into various Machine Learning techniques like regression models, classification models, and neural networks for implementing essay grading systems. The response to the question can give us different assessment approaches for automated essay grading systems.

RQ5 What are the challenges/limitations in the current research?

The answer to this question provides the limitations of existing research approaches with respect to aspects such as cohesion, coherence, completeness, and feedback.

2.2 Search process

We conducted an automated search for the SLR on well-known computer science repositories such as ACL, ACM, IEEE Xplore, Springer, and ScienceDirect. We considered papers published from 2010 to 2020, as much of the work during these years focused on advanced technologies like deep learning and natural language processing for automated essay grading systems. Also, the availability of free datasets like Kaggle (2012) and the Cambridge Learner Corpus-First Certificate in English exam (CLC-FCE) by Yannakoudakis et al. (2011) encouraged research in this domain.

Search Strings : We used search strings like “Automated essay grading” OR “Automated essay scoring” OR “short answer scoring systems” OR “essay scoring systems” OR “automatic essay evaluation” and searched on metadata.

2.3 Selection criteria

After collecting all relevant documents from the repositories, we prepared selection criteria for the inclusion and exclusion of documents. These criteria make the review more accurate and specific.

Inclusion criteria 1 We work with datasets comprising essays written in English; essays written in other languages were excluded.

Inclusion criteria 2 We included papers that implement AI approaches and excluded traditional methods from the review.

Inclusion criteria 3 The study is on essay scoring systems, so we included only research carried out on text datasets rather than other data such as images or speech.

Exclusion criteria We removed papers that are review papers, survey papers, or state-of-the-art papers.

2.4 Quality assessment

In addition to the inclusion and exclusion criteria, we assessed each paper with quality assessment questions to ensure the article's quality. We included the documents that clearly explain the approach used, the result analysis, and the validation.

The quality checklist questions are framed based on the guidelines of Kitchenham et al. (2009). Each quality assessment question was graded as either 1 or 0, so the final score of a study ranges from 0 to 3. The cut-off score is 2 points: papers that scored 2 or 3 points are included in the final evaluation, and the rest are excluded from the review. We framed the following quality assessment questions for the final study.

Quality Assessment 1: Internal validity.

Quality Assessment 2: External validity.

Quality Assessment 3: Bias.

Two reviewers reviewed each paper to select the final list of documents. We used the quadratic weighted kappa score to measure the agreement between the two reviewers; the resulting average kappa score is 0.6942, a substantial agreement. The result of the evaluation criteria is shown in Table 1. After quality assessment, the final list of papers for review is shown in Table 2. The complete selection process is shown in Fig. 1, and the number of selected papers per year is shown in Fig. 2.

Figure 1: Selection process

Figure 2: Year-wise publications

3.1 What are the datasets available for research on automated essay grading?

To work on a problem, especially in the machine learning and deep learning domains, we require a considerable amount of data to train the models. To answer this question, we list all the datasets used for training and testing automated essay grading systems. The Cambridge Learner Corpus-First Certificate in English exam (CLC-FCE) by Yannakoudakis et al. (2011) contains 1244 essays over ten prompts. This corpus evaluates whether a student can write relevant English sentences without grammatical and spelling mistakes, and it helps to test models built for GRE- and TOEFL-type exams. It gives scores between 1 and 40.

Bailey and Meurers (2008) created a dataset (CREE reading comprehension) for language learners and automated short answer scoring systems; the corpus consists of 566 responses from intermediate students. Mohler and Mihalcea (2009) created a dataset for the computer science domain consisting of 630 responses to data structure assignment questions; the scores range from 0 to 5 and were given by two human raters.

Dzikovska et al. (2012) created the Student Response Analysis (SRA) corpus. It consists of two sub-groups: the BEETLE corpus, with 56 questions and approximately 3000 student responses in the electrical and electronics domain, and the SCIENTSBANK (SemEval-2013) corpus (Dzikovska et al. 2013a, b), with 10,000 responses to 197 prompts in various science domains. The student responses are labelled as "correct, partially correct incomplete, contradictory, irrelevant, non-domain."

The Kaggle (2012) competition released three types of corpora under the Automated Student Assessment Prize (ASAP) (https://www.kaggle.com/c/asap-sas/), covering essays and short answers. It has nearly 17,450 essays and provides up to 3000 essays for each prompt. It has eight prompts that test 7th- to 10th-grade US students, with scores ranging between [0–3] and [0–60]. The limitations of these corpora are: (1) the score range differs across prompts; (2) it uses statistical features such as named entity extraction and lexical features of words to evaluate essays. ASAP++ is one more dataset from Kaggle, with six prompts, each having more than 1000 responses, for a total of 10,696 responses from 8th-grade students. Another corpus contains ten prompts from the science and English domains and a total of 17,207 responses. Two human graders evaluated all these responses.

Correnti et al. (2013) created a Response-to-Text Assessment (RTA) dataset used to check student writing skills in all dimensions, such as style, mechanics, and organization. Students in grades 4–8 gave the responses to RTA. Basu et al. (2013) created the Powergrading dataset with 700 responses to ten different prompts from US immigration exams; it contains only short answers for assessment.

The TOEFL11 corpus (Blanchard et al. 2013) contains 1100 essays evenly distributed over eight prompts. It is used to test the English language skills of candidates taking the TOEFL exam, and it scores the language proficiency of a candidate as low, medium, or high.

For the International Corpus of Learner English (ICLE), Granger et al. (2009) built a corpus of 3663 essays covering different dimensions. It has 12 prompts with 1003 essays that test the organizational skill of essay writing, and 13 prompts, each with 830 essays, that examine thesis clarity and prompt adherence.

For Argument Annotated Essays (AAE), Stab and Gurevych (2014) developed a corpus that contains 102 essays with 101 prompts taken from the essayforum site. It tests the persuasive nature of the student essay. The SCIENTSBANK corpus used by Sakaguchi et al. (2015), available on GitHub, contains 9804 answers to 197 questions in 15 science domains. Table 3 lists all datasets related to AES systems.

3.2 RQ2 What are the features extracted for the assessment of essays?

Features play a major role in neural networks and other supervised machine learning approaches. Automatic essay grading systems score student essays based on different types of features, which play a prominent role in training the models. Based on syntax and semantics, features are categorized into three groups: (1) statistical features (Contreras et al. 2018; Kumar et al. 2019; Mathias and Bhattacharyya 2018a, b); (2) style-based (syntactic) features (Cummins et al. 2016; Darwish and Mohamed 2020; Ke et al. 2019); and (3) content-based features (Dong et al. 2017). A good set of features combined with an appropriate model yields a better AES system. The vast majority of researchers use regression models when the features are statistical, while for neural network models both style-based and content-based features are used. Table 4 lists the sets of features used for essay grading in existing AES systems.

We studied all the feature-extraction NLP libraries used in the papers, as shown in Fig. 3. NLTK is an NLP toolkit used to retrieve statistical features like POS tags, word count, sentence count, etc.; with NLTK alone, the essay's semantic features are missed. To find semantic features, Word2Vec (Mikolov et al. 2013) and GloVe (Pennington et al. 2014) are the most widely used libraries for retrieving semantic representations of essays, and in some systems the model is trained directly on word embeddings to find the score. From Fig. 4 we observe that non-content-based feature extraction is used more than content-based feature extraction.

Figure 3: Usage of tools

Figure 4: Number of papers using content-based features
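To make these two feature families concrete, the sketch below (not taken from any of the cited systems) computes a few statistical features with NLTK and a single averaged Word2Vec essay vector with gensim; the sample essay, the model sizes, and the averaging step are invented placeholders.

```python
# A minimal sketch of statistical (style) features via NLTK and a semantic
# essay vector via gensim Word2Vec.  All text and parameters are placeholders.
import nltk
from gensim.models import Word2Vec

nltk.download("punkt", quiet=True)  # tokenizer resources, if not already present

essay = "Computers have changed education. Automated scoring saves teachers time."

# Statistical / style features: token count, sentence count, average word length.
tokens = nltk.word_tokenize(essay)
sentences = nltk.sent_tokenize(essay)
avg_word_len = sum(len(t) for t in tokens if t.isalpha()) / max(1, len(tokens))
stats = {"words": len(tokens), "sentences": len(sentences), "avg_word_len": avg_word_len}

# Content / semantic features: average Word2Vec word vectors into one essay vector.
tokenized = [nltk.word_tokenize(s.lower()) for s in sentences]
w2v = Word2Vec(tokenized, vector_size=50, window=3, min_count=1)
words = [w for s in tokenized for w in s]
essay_vector = sum(w2v.wv[w] for w in words) / len(words)

print(stats, essay_vector[:5])
```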

3.3 RQ3 Which are the evaluation metrics available for measuring the accuracy of algorithms?

The majority of AES systems use three evaluation metrics: (1) quadratic weighted kappa (QWK), (2) Mean Absolute Error (MAE), and (3) Pearson Correlation Coefficient (PCC) (Shehab et al. 2016). The quadratic weighted kappa measures the agreement between the human evaluation score and the system evaluation score and produces a value ranging from 0 to 1. The Mean Absolute Error is the average absolute difference between the human-rated score and the system-generated score. The Mean Square Error (MSE) measures the average of the squares of the errors, i.e., the average squared difference between the human-rated and system-generated scores; MSE always gives positive values. Pearson's Correlation Coefficient (PCC) finds the correlation between two variables and can be interpreted through three reference values (0, 1, −1): "0" means the human-rated and system scores are not related, "1" means the two scores increase together, and "−1" indicates a negative relationship between the two scores.
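As a concrete illustration, the following sketch computes these metrics for made-up human and system scores using scikit-learn and SciPy; the score arrays are invented.

```python
# Sketch: the evaluation metrics listed above, computed on toy human vs. system scores.
from sklearn.metrics import cohen_kappa_score, mean_absolute_error, mean_squared_error
from scipy.stats import pearsonr

human =  [4, 3, 5, 2, 4, 1, 3, 5]
system = [4, 2, 5, 2, 3, 1, 3, 4]

qwk = cohen_kappa_score(human, system, weights="quadratic")  # agreement between raters
mae = mean_absolute_error(human, system)                     # average absolute difference
mse = mean_squared_error(human, system)                      # average squared difference
pcc, _ = pearsonr(human, system)                             # linear correlation in [-1, 1]
print(f"QWK={qwk:.3f}  MAE={mae:.3f}  MSE={mse:.3f}  PCC={pcc:.3f}")
```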

3.4 RQ4 What are the Machine Learning techniques being used for automatic essay grading, and how are they implemented?

After scrutinizing all documents, we categorize the techniques used in automated essay grading systems into four buckets: (1) regression techniques, (2) classification models, (3) neural networks, and (4) ontology-based approaches.

All the existing AES systems developed in the last ten years employ supervised learning techniques. Researchers using supervised methods view the AES task as either regression or classification. The goal of the regression task is to predict the score of an essay; the classification task is to classify essays as, for example, low, medium, or highly relevant to the question's topic. Over the last three years, most AES systems developed have made use of neural networks.

3.4.1 Regression based models

Mohler and Mihalcea (2009) proposed text-to-text semantic similarity to assign a score to student essays. There are two families of text similarity measures: knowledge-based measures and corpus-based measures, and they evaluated eight knowledge-based measures. The shortest-path similarity is determined by the length of the shortest path between two concepts; Leacock & Chodorow find similarity based on the length of the shortest path between two concepts using node counting; the Lesk similarity finds the overlap between the corresponding definitions; and the Wu & Palmer algorithm finds similarity based on the depth of the two given concepts in the WordNet taxonomy. Resnik, Lin, Jiang & Conrath, and Hirst & St-Onge find similarity based on different parameters such as concept probability, normalization factors, and lexical chains. The corpus-based measures include LSA BNC, LSA Wikipedia, and ESA Wikipedia; latent semantic analysis trained on Wikipedia has excellent domain knowledge. Among all similarity measures, LSA Wikipedia gives the highest correlation with human scores. However, these similarity algorithms do not use NLP concepts. These models predate 2010 and are basic conceptual models; research on automated essay grading has since continued with updated algorithms based on neural networks and content-based features.

Adamson et al. (2014) proposed an automatic essay grading system based on a statistical approach. They retrieved features like POS tags, character count, word count, sentence count, misspelled words, and n-gram representations of words to prepare an essay vector. They formed a matrix from all these vectors and applied LSA to assign a score to each essay. It is a statistical approach that does not consider the semantics of the essay. The correlation between the human rater score and the system score is 0.532.
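A minimal sketch in the spirit of this statistical pipeline (not the authors' implementation): TF-IDF vectors reduced with LSA via truncated SVD and fed to a linear regression; the essays and scores are toy data.

```python
# Sketch: statistical scoring -- TF-IDF vectors, LSA via truncated SVD, then regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression

essays = ["good structure and clear ideas",
          "short answer",
          "well organised argument with evidence",
          "poor grammar bad structure"]
scores = [4, 1, 5, 2]  # toy human scores

tfidf = TfidfVectorizer().fit_transform(essays)                 # statistical word weights
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)  # LSA topics
model = LinearRegression().fit(lsa, scores)
print(model.predict(lsa))  # predicted scores for the training essays
```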

Cummins et al. (2016) proposed a Timed Aggregate Perceptron vector model to rank all the essays, and later converted the ranking algorithm to predict essay scores. The model was trained with features like word unigrams and bigrams, POS tags, essay length, grammatical relations, maximum word length, and sentence length. It is a multi-task learning approach that both ranks essays and predicts their scores. The performance evaluated through QWK is 0.69, a substantial agreement between the human rater and the system.

Sultan et al. (2016) proposed a Ridge regression model for short answer scoring with question demoting, a concept included in the final assessment to eliminate words repeated from the question in the response. The extracted features include text similarity (the similarity between the student response and the reference answer), question demoting (the number of question words repeated in the student response), term weights assigned with inverse document frequency, and the sentence length ratio based on the number of words in the student response. With these features, the Ridge regression model achieved an accuracy of 0.887.
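The following is a minimal sketch of ridge regression over such handcrafted short-answer features; the feature values and scores are invented placeholders, not the cited system's data.

```python
# Sketch: Ridge regression over handcrafted short-answer features.
import numpy as np
from sklearn.linear_model import Ridge

# columns: [text_similarity, question_demoted_similarity, length_ratio]  (invented values)
X = np.array([[0.90, 0.85, 1.0],
              [0.40, 0.30, 0.5],
              [0.75, 0.70, 0.9],
              [0.20, 0.10, 0.3]])
y = np.array([5.0, 2.0, 4.0, 1.0])  # toy human scores

reg = Ridge(alpha=1.0).fit(X, y)
print(reg.predict([[0.80, 0.78, 0.95]]))  # predicted score for a new response
```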

Contreras et al. (2018) proposed an ontology-based approach using text mining, in which a score is given to essays in phases. In phase I, they generated ontologies with OntoGen and used an SVM to find the concepts and similarity in the essay. In phase II, from the ontologies they retrieved features like essay length, word counts, correctness, vocabulary, types of words used, and domain information. After retrieving this statistical data, they used a linear regression model to find the score of the essay. The accuracy is on average 0.5.

Darwish and Mohamed (2020) proposed a fusion of fuzzy ontology with LSA. They retrieve two types of features: syntactic and semantic. For the syntactic features, they perform lexical analysis on the tokens and construct a parse tree; if the parse tree is broken, the essay is inconsistent, and a separate grade is assigned to the essay for syntax. The semantic features include similarity analysis and spatial data analysis: similarity analysis finds duplicate sentences, and spatial data analysis finds the Euclidean distance between the centre and the parts. Later, they combine the syntactic and semantic feature scores into a final score. The accuracy achieved with the multiple linear regression model, mostly on statistical features, is 0.77.

Süzen et al. (2020) proposed a text-mining approach for short answer grading. They compare the model answer with the student response by calculating the distance between the two sentences; from this comparison, they determine the completeness of the answer and provide feedback. In this approach, the model vocabulary plays a vital role in grading: with this vocabulary, a grade is assigned to the student's response and feedback is provided. The correlation between the student answers and the model answers is 0.81.

3.4.2 Classification based Models

Persing and Ng (2013) used a support vector machine to score the essays. The extracted features are POS tags, n-grams, and semantic text used to train the model, and keywords identified from the essay give the final score.

Sakaguchi et al. (2015) proposed two methods: response-based and reference-based scoring. In response-based scoring, the extracted features are response length, an n-gram model, and syntactic elements used to train a support vector regression model. In reference-based scoring, features such as sentence similarity computed with word2vec are used, and the cosine similarity of the sentences gives the final score of the response. The scores were first obtained individually and later combined into a final score; combining the two gave a remarkable increase in performance.

Mathias and Bhattacharyya (2018a, b) proposed an automated essay grading dataset with essay attribute scores. The feature selection depends on the essay type; the common attributes are content, organization, word choice, sentence fluency, and conventions. In this system, each attribute is scored individually, identifying the strength of each attribute. The model used is a random forest classifier that assigns scores to the individual attributes. The accuracy obtained with QWK is 0.74 for prompt 1 of the ASAP dataset (https://www.kaggle.com/c/asap-sas/).
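A minimal sketch of this per-attribute idea, assuming one random forest classifier per attribute; the features, attribute names, and labels are invented placeholders.

```python
# Sketch: one random-forest classifier per essay attribute, trained on toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((100, 6))                                   # 100 essays, 6 numeric features
attributes = ["content", "organization", "word_choice"]
labels = {a: rng.integers(1, 5, size=100) for a in attributes}  # attribute scores 1..4

models = {a: RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels[a])
          for a in attributes}

new_essay = rng.random((1, 6))
print({a: int(m.predict(new_essay)[0]) for a, m in models.items()})  # per-attribute scores
```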

Ke et al. (2019) used a support vector machine to find the response score. In this method, features like agreeability, specificity, clarity, relevance to prompt, conciseness, eloquence, confidence, direction of development, justification of opinion, and justification of importance are used. The individual parameter scores are obtained first and later combined into a final response score. The features are also used in a neural network to determine whether a sentence is relevant to the topic or not.

Salim et al. (2019) proposed an XGBoost machine learning classifier to assess essays. The algorithm was trained on features like word count, POS tags, parse tree depth, and coherence in the articles with sentence similarity percentage; cohesion and coherence are considered for training. They implemented k-fold cross-validation, and the average accuracy after the validations is 68.12.

3.4.3 Neural network models

Shehab et al. (2016) proposed a neural network method that uses learning vector quantization trained on human-scored essays. After training, the network can provide a score for ungraded essays. First, the essay is spell-checked, then preprocessing steps like document tokenization, stop-word removal, and stemming are performed, and the result is submitted to the neural network. Finally, the model provides feedback on whether the essay is relevant to the topic. The correlation coefficient between the human rater and the system score is 0.7665.

Kopparapu and De (2016) proposed automatic ranking of essays using structural and semantic features. This approach constructs a super-essay from all the responses; the ranking of a student essay is then done against this super-essay. The structural and semantic features derived help to obtain the scores. Fifteen structural features per paragraph, such as the average number of sentences, the average length of sentences, and the counts of words, nouns, verbs, adjectives, etc., are used to obtain a syntactic score, and a similarity score is used as the semantic feature to calculate the overall score.

Dong and Zhang (2016) proposed a hierarchical CNN model. The first layer uses word embeddings to represent the words. The second layer is a word-level convolution layer with max-pooling to find word vectors, followed by a sentence-level convolution layer with max-pooling to capture the sentence's content and synonyms. A fully connected dense layer produces the output score for an essay. The hierarchical CNN model achieved an average QWK of 0.754.

Taghipour and Ng (2016) proposed an early neural approach for essay scoring in which convolutional and recurrent neural network concepts are combined to score an essay. The network uses a lookup table over the one-hot representation of the words of an essay. The final network with an LSTM layer achieved an average QWK of 0.708.
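The sketch below illustrates the general convolution-plus-recurrent idea in Keras; it is not the authors' architecture, and the vocabulary size, sequence length, layer sizes, and training data are invented placeholders.

```python
# Sketch of a convolution + recurrent essay scorer (illustrative only).
import numpy as np
from tensorflow.keras import layers, models

vocab_size, max_len = 5000, 300
model = models.Sequential([
    layers.Embedding(vocab_size, 50),           # word lookup table
    layers.Conv1D(64, 5, activation="relu"),    # local n-gram features
    layers.LSTM(64),                            # sequence modelling over the essay
    layers.Dense(1, activation="sigmoid"),      # normalised essay score
])
model.compile(optimizer="rmsprop", loss="mse")

X = np.random.randint(0, vocab_size, size=(8, max_len))  # 8 toy essays as word ids
y = np.random.rand(8)                                     # toy normalised scores
model.fit(X, y, epochs=1, verbose=0)
```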

Dong et al. (2017) proposed an attention-based scoring system with CNN + LSTM to score an essay. For the CNN, the input parameters are character and word embeddings (obtained with NLTK), followed by attention pooling layers. The output is a sentence vector that provides sentence weights. After the CNN, an LSTM layer with an attention pooling layer produces the final score of the responses. The average QWK score is 0.764.

Riordan et al. (2017) proposed a neural network with CNN and LSTM layers. Word embeddings are given as input to the network; the LSTM layer retrieves window features and delivers them to the aggregation layer. The aggregation layer is a shallow layer that takes the correct window of words and passes it to successive layers to predict the answer's score. The accuracy of the neural network is a QWK of 0.90.

Zhao et al. (2017) proposed a memory-augmented neural network with four layers: an input representation layer, a memory addressing layer, a memory reading layer, and an output layer. The input layer represents all essays in vector form based on essay length. After converting the words to vectors, the memory addressing layer takes a sample of the essay and weighs all the terms; the memory reading layer takes the input from the memory addressing segment and finds the content to finalize the score. Finally, the output layer provides the final score of the essay. The accuracy of the essay scores is 0.78, which is better than the LSTM neural network.

Mathias and Bhattacharyya (2018a, b) proposed a deep learning network using an LSTM with a CNN layer and GloVe pre-trained word embeddings. They retrieved features like the sentence count of the essay, word count per sentence, number of OOVs in the sentence, language model score, and the text's perplexity. The network predicts the goodness score of each essay: the higher the goodness score, the higher the rank, and vice versa.

Nguyen and Dery (2016) proposed neural networks for automated essay grading. In this method, a single-layer bi-directional LSTM accepts word vectors as input. GloVe vectors used in this method resulted in an accuracy of 90%.

Ruseti et al. (2018) proposed a recurrent neural network capable of memorizing the text and generating a summary of an essay. A Bi-GRU network with a max-pooling layer is built on the word embeddings of each document; it scores the essay by comparing it with a summary of the essay produced by another Bi-GRU network. The result obtained an accuracy of 0.55.

Wang et al. (2018a, b) proposed an automatic scoring system with a bi-LSTM recurrent neural network model and retrieved the features using the word2vec technique. This method generates word embeddings from the essay words using the skip-gram model, and the word embeddings are later used to train the neural network to find the final score. The softmax layer in the LSTM obtains the importance of each word. This method achieved a QWK score of 0.83.
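A minimal sketch of this general recipe, assuming gensim for skip-gram word2vec and Keras for a bidirectional LSTM scorer; the corpus, dimensions, and wiring are invented placeholders rather than the cited system.

```python
# Sketch: skip-gram word2vec embeddings feeding a bidirectional LSTM scorer.
import numpy as np
from gensim.models import Word2Vec
from tensorflow.keras import layers, models

corpus = [["computers", "help", "students", "learn"],
          ["automated", "scoring", "saves", "teachers", "time"]]
w2v = Word2Vec(corpus, vector_size=32, min_count=1, sg=1)   # sg=1 -> skip-gram model

vocab = w2v.wv.index_to_key
word_index = {w: i + 1 for i, w in enumerate(vocab)}        # index 0 reserved for padding
emb = np.zeros((len(vocab) + 1, 32))
for w, i in word_index.items():
    emb[i] = w2v.wv[w]

model = models.Sequential([
    layers.Embedding(len(vocab) + 1, 32, mask_zero=True),
    layers.Bidirectional(layers.LSTM(16)),                  # reads the essay in both directions
    layers.Dense(1, activation="sigmoid"),                  # normalised score
])
model.build(input_shape=(None, 10))                         # build, then load word2vec weights
model.layers[0].set_weights([emb])
model.compile(optimizer="adam", loss="mse")
```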

Dasgupta et al. (2018) proposed a technique for essay scoring that augments textual features with qualitative features. It extracts three types of features associated with a text document: linguistic, cognitive, and psychological. The linguistic features are part of speech (POS), universal dependency relations, structural well-formedness, lexical diversity, sentence cohesion, causality, and informativeness of the text. The psychological features are derived from the Linguistic Inquiry and Word Count (LIWC) tool. They implemented a convolutional recurrent neural network that takes as input word embeddings and sentence vectors retrieved from GloVe word vectors; the second layer is a convolution layer to find local features, and the next layer is a recurrent neural network (LSTM) to capture the correspondence of the text. The accuracy of this method is an average QWK of 0.764.

Liang et al. (2018) proposed a symmetrical neural network AES model with Bi-LSTM. They extract features from sample essays and student essays and prepare an embedding layer as input. The embedding layer output is transferred to a convolution layer, on which the LSTM is trained. Here the LSTM model has a self-feature extraction layer, which finds the essay's coherence. The average QWK score of SBLSTMA is 0.801.

Liu et al. (2019) proposed two-stage learning. In the first stage, a score is assigned based on semantic data from the essay; the second-stage score is based on handcrafted features like grammar correction, essay length, number of sentences, etc. The average score of the two stages is 0.709.

Rodriguez et al. (2019) proposed a sequence-to-sequence learning model for automatic essay scoring. They used BERT (Bidirectional Encoder Representations from Transformers), which extracts the semantics of a sentence from both directions, and the XLNet sequence-to-sequence learning model to extract features like the next sentence in an essay. With these pre-trained models, they captured coherence from the essay to give the final score. The average QWK score of the model is 75.5.

Xia et al. (2019) proposed a two-layer bi-directional LSTM neural network for scoring essays. The features were extracted with word2vec to train the LSTM, and the accuracy of the model is an average QWK of 0.870.

Kumar et al. (2019) proposed AutoSAS for short answer scoring. It uses pre-trained Word2Vec and Doc2Vec models, trained on the Google News corpus and a Wikipedia dump respectively, to retrieve the features. First, they POS-tag every word and find weighted words from the response. They also compute prompt overlap to observe how relevant the answer is to the topic, and define lexical overlaps like noun overlap, argument overlap, and content overlap. The method additionally uses statistical features like word frequency, difficulty, diversity, number of unique words in each response, type-token ratio, sentence statistics, word length, and logical-operator-based features. A random forest model is trained on the dataset, which contains sample responses with their associated scores; the model retrieves features from both graded and ungraded short answers together with the questions. The accuracy of AutoSAS with QWK is 0.78, and it works on topics such as science, arts, biology, and English.

Lun et al. (2020) proposed automatic short answer scoring with BERT, comparing student responses with a reference answer and assigning scores. Data augmentation is done with the neural network: starting from one correct answer in the dataset, the remaining responses are classified as correct or incorrect.

Zhu and Sun (2020) proposed a multimodal machine learning approach for automated essay scoring. First, they compute a grammar score with the spaCy library, and numerical counts such as the number of words and sentences with the same library. With this input, they trained single and bi-directional LSTM neural networks to find the final score. For the LSTM model, they prepared sentence vectors with GloVe and word embeddings with NLTK. The Bi-LSTM checks each sentence in both directions to extract semantics from the essay. The average QWK score over multiple models is 0.70.
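As a small illustration of the count-based inputs mentioned above, the following sketch computes word and sentence counts with spaCy; it assumes the small English pipeline is installed and is not the authors' implementation.

```python
# Sketch: simple surface counts with spaCy.
# Assumes: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Computers help students learn. Automated scoring saves teachers time.")

n_words = sum(1 for tok in doc if not tok.is_punct)   # token count without punctuation
n_sents = len(list(doc.sents))                        # sentence count from the parser
print({"words": n_words, "sentences": n_sents})
```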

3.4.4 Ontology based approach

Mohler et al. (2011) proposed a graph-based method to find semantic similarity in short answer scoring. For ranking the answers, they used a support vector regression model; the bag of words is the main feature extracted in the system.

Ramachandran et al. (2015) also proposed a graph-based approach to find lexically based semantics. Identified phrase patterns and text patterns are the features used to train a random forest regression model to score the essays. The accuracy of the model as QWK is 0.78.

Zupanc et al. (2017) proposed sentence similarity networks to find the essay's score. Ajetunmobi and Daramola (2017) recommended an ontology-based information extraction approach with a domain-based ontology to find the score.

3.4.5 Speech response scoring

Automatic scoring works in two ways: text-based scoring and speech-based scoring. This paper has discussed text-based scoring and its challenges; we now cover speech scoring and the common points between text- and speech-based scoring. Evanini and Wang (2013) worked on speech scoring of non-native school students, extracted features with a speech rater, and trained a linear regression model, concluding that accuracy varies based on voice pitch. Loukina et al. (2015) worked on feature selection from speech data and trained an SVM. Malinin et al. (2016) used neural network models to train the data. Loukina et al. (2017) proposed speech- and text-based automatic scoring: they extracted text-based and speech-based features, including 33 types of features based on acoustic signals, and trained a deep neural network for speech-based scoring. Malinin et al. (2017) and Wu et al. (2020) worked on deep neural networks for spoken language assessment, incorporating and testing different types of models. Ramanarayanan et al. (2017) worked on feature extraction methods, extracting punctuation, fluency, and stress, and trained different machine learning models for scoring. Knill et al. (2018) worked on automatic speech recognizers and how their errors impact speech assessment.

3.4.5.1 The state of the art

This section provides an overview of the existing AES systems with a comparative study with respect to the models, features applied, datasets, and evaluation metrics used for building the automated essay grading systems. We divided all 62 papers into two sets; the first set, the review papers, is presented in Table 5 together with a comparative study of the AES systems.

3.4.6 Comparison of all approaches

In our study, we divided the major AES approaches into three categories: regression models, classification models, and neural network models. The regression models fail to capture cohesion and coherence in the essay because they are trained on Bag-of-Words (BoW) features. In processing data from input to output, regression models are less complicated than neural networks, but they are unable to find intricate patterns in the essay or to capture sentence connectivity. Likewise, if we train a neural network model on BoW features, the model never considers the essay's cohesion and coherence.

To train a machine learning algorithm with essays, all the essays are first converted to vector form. We can form a vector with BoW, TF-IDF, or Word2vec. The BoW and Word2vec vector representations of essays are shown in Table 6. The BoW representation with TF-IDF does not incorporate the essay's semantics; it is just statistical learning from a given vector. A Word2vec vector captures the semantics of the essay in a unidirectional way.

In BoW, the vector contains the frequency of word occurrences in the essay: an entry is 1 or more depending on how often a word appears and 0 when it is absent. Thus, a BoW vector does not maintain any relationship with adjacent words; it only describes single words. In word2vec, the vector represents the relationship of each word with other words and the sentences of the prompt in a multi-dimensional way. However, word2vec prepares vectors in a unidirectional rather than bidirectional way; it fails to produce correct semantic vectors when a word has two meanings and the meaning depends on adjacent words. Table 7 presents a comparison of machine learning models and feature extraction methods.
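The following sketch builds both representations for the same toy essays, a bag-of-words count matrix and averaged Word2Vec vectors, to make the contrast concrete; the essays and dimensions are invented.

```python
# Sketch: BoW count vectors vs. averaged Word2Vec vectors for the same toy essays.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from gensim.models import Word2Vec

essays = ["the student wrote a clear essay", "the essay lacked clear structure"]
tokenized = [e.split() for e in essays]

bow = CountVectorizer().fit_transform(essays).toarray()      # word frequencies, no word order
w2v = Word2Vec(tokenized, vector_size=20, min_count=1)
dense = np.array([np.mean([w2v.wv[w] for w in toks], axis=0) for toks in tokenized])

print(bow.shape, dense.shape)   # sparse count vectors vs. low-dimensional semantic vectors
```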

In AES, cohesion and coherence check the content of the essay with respect to the essay prompt; these can be extracted from the essay in vector form. Two more parameters for assessing an essay are completeness and feedback. Completeness checks whether the student's response is sufficient, even if what the student wrote is correct. Table 8 compares all four parameters for essay grading, and Table 9 compares all approaches based on various features like grammar, spelling, organization of the essay, and relevance.

3.5 What are the challenges/limitations in the current research?

From our study and the results discussed in the previous sections, many researchers have worked on automated essay scoring systems with numerous techniques. There are statistical methods, classification methods, and neural network approaches to evaluate essays automatically. The main goal of an automated essay grading system is to reduce human effort and improve consistency.

The vast majority of essay scoring systems focus on the efficiency of the algorithm, but there are many remaining challenges in automated essay grading. One should assess the essay on parameters such as the relevance of the content to the prompt, development of ideas, cohesion, coherence, and domain knowledge.

No model works on the relevance of content, i.e., whether the student's response or explanation is relevant to the given prompt and, if relevant, how appropriate it is, and there is little discussion about the cohesion and coherence of the essays. Most research concentrated on extracting features with NLP libraries, training models, and testing the results, but the essay evaluation systems give no account of consistency and completeness. Palma and Atkinson (2018) did describe coherence-based essay evaluation, and Zupanc and Bosnic (2014) also used the notion of coherence to evaluate essays; they measured consistency with latent semantic analysis (LSA) to find coherence in essays, in line with the dictionary meaning of coherence, "the quality of being logical and consistent."

Another limitation is that there is no domain-knowledge-based evaluation of essays using machine learning models. For example, the meaning of "cell" differs between biology and physics. Many machine learning models extract features with Word2Vec and GloVe; these NLP libraries cannot properly convert words into vectors when the words have two or more meanings.

3.5.1 Other challenges that influence the Automated Essay Scoring Systems.

All these approaches aim to improve the QWK score of their models, but QWK does not assess the model in terms of feature extraction or constructed irrelevant answers; it does not tell us whether the model is assessing the answer correctly. There are many challenges concerning student responses to automatic scoring systems: no model has examined how to evaluate constructed irrelevant and adversarial answers. In particular, black-box approaches such as deep learning models give students more opportunities to bluff the automated scoring systems.

Machine learning models that work on statistical features are very vulnerable. According to Powers et al. (2001) and Bejar et al. (2014), E-rater failed against the Constructed Irrelevant Response Strategy (CIRS). The studies of Bejar et al. (2013) and Higgins and Heilman (2014) observed that when a student response contains irrelevant content or shell language matching the prompt, it will influence the final score of the essay in an automated scoring system.

In deep learning approaches, most models learn the essay's features automatically; some methods work on word-based embeddings and others on character-based embedding features. The study of Riordan et al. (2019) found that character-based embedding systems do not prioritize spelling correction, even though spelling influences the final score of the essay. The study of Horbach and Zesch (2019) showed that various factors influence AES systems, for example dataset size, prompt type, answer length, training set, and human scorers for content-based scoring.

Ding et al. (2020) showed that automated scoring systems are vulnerable when a student response contains many words from the prompt, i.e., prompt vocabulary repeated in the response. Parekh et al. (2020) and Kumar et al. (2020) tested various neural network AES models by iteratively adding important words, deleting unimportant words, shuffling words, and repeating sentences in an essay and found no change in the final score of the essays. These neural network models failed to recognize common sense in adversarial essays and give students more opportunities to bluff the automated systems.

Beyond NLP and ML techniques for AES, works from Wresch (1993) to Madnani and Cahill (2018) discussed the complexity of AES systems and the standards that need to be followed, such as assessment rubrics to test subject knowledge, handling of irrelevant responses, and ethical aspects of an algorithm like measuring the fairness of scoring student responses.

Fairness is an essential factor for automated systems. In AES, for example, fairness can be measured as the agreement between the human score and the machine score. Besides this, according to Loukina et al. (2019), the fairness standards include overall score accuracy, overall score differences, and conditional score differences between human and system scores. In addition, scoring different responses with respect to constructed relevant and irrelevant content will improve fairness.

Madnani et al. (2017a, b) discussed the fairness of AES systems for constructed responses and presented an open-source tool, RMS, for detecting biases in the models, with which one can adapt fairness standards according to their own fairness analysis.

According to the approach of Berzak et al. (2018), behavioural factors are a significant challenge in automated scoring systems; accounting for them helps to determine language proficiency and word characteristics (essential words from the text), predict the critical patterns in the text, find related sentences in an essay, and give a more accurate score.

Rupp (2018) discussed methodologies for designing, evaluating, and deploying AES systems and provided notable characteristics of AES systems for deployment, such as model performance, evaluation metrics for a model, threshold values, dynamically updated models, and the framework.

First, for operational deployment we should check the model performance on different datasets and parameters. The evaluation metrics selected for AES models are QWK, the correlation coefficient, or sometimes both. Kelley and Preacher (2012) discussed three categories of threshold values: marginal, borderline, and acceptable; the values can vary based on data size, model performance, and type of model (single or multiple scoring models). Once a model is deployed and evaluates millions of responses, we need a dynamically updated model based on the prompt and data to keep the responses optimal. Finally, there is the framework design of the AES model: a framework contains prompts where test-takers can write their responses. One can design two kinds of frameworks: a single scoring model for a single methodology, or multiple scoring models for multiple concepts. When we deploy multiple scoring models, each prompt can be trained separately, or we can provide generalized models for all prompts, with which the accuracy may vary; this is challenging.

4 Synthesis

Our systematic literature review on automated essay grading systems first collected 542 papers with the selected keywords from various databases. After applying the inclusion and exclusion criteria, we were left with 139 articles; on these selected papers, we applied the quality assessment criteria with two reviewers, and finally we selected 62 papers for the final review.

Our observations on automated essay grading systems from 2010 to 2020 are as follows:

The implementation techniques of automated essay grading systems are classified into four buckets: (1) regression models, (2) classification models, (3) neural networks, and (4) ontology-based methodologies. Researchers using neural networks obtain more accurate results than with the other techniques, and the state of the art of all the methods is provided in Table 3.

The majority of the regression and classification models for essay scoring used statistical features to find the final score. This means the systems or models were trained on parameters such as word count, sentence count, etc.; although the parameters are extracted from the essay, the algorithm is not trained directly on the essays. The algorithms are trained on numbers obtained from the essay, and if the numbers match, the composition gets a good score; otherwise, the rating is lower. In these models, the evaluation process is based entirely on numbers, irrespective of the essay itself. So, there is a high chance of missing the coherence and relevance of the essay if we train our algorithm on statistical parameters.

In the neural network approach, some models are trained on Bag-of-Words (BoW) features. The BoW feature misses the word-to-word relationship and the semantic meaning of the sentence. For example, Sentence 1: "John killed Bob." Sentence 2: "Bob killed John." For both sentences, the BoW is "John," "killed," "Bob."
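The sketch below shows that both sentences receive identical bag-of-words vectors, which is why BoW misses word order and hence the difference in meaning.

```python
# Sketch: both sentences get the same BoW vector, so the scorer cannot tell them apart.
from sklearn.feature_extraction.text import CountVectorizer

sents = ["John killed Bob", "Bob killed John"]
vec = CountVectorizer()
X = vec.fit_transform(sents).toarray()
print(vec.get_feature_names_out())  # ['bob' 'john' 'killed']
print(X)                            # both rows are [1 1 1]
```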

With the Word2Vec library, if we prepare word vectors from an essay in a unidirectional way, each vector has a dependency on other words and captures the semantic relationship with them. But if a word has two or more meanings, as in "bank loan" and "river bank," where "bank" has two senses and its adjacent words decide the sentence meaning, Word2Vec cannot find the real meaning of the word from the sentence.

The features extracted from essays in essay scoring systems are classified into three types: statistical features, style-based features, and content-based features, which are explained under RQ2 and in Table 3. Statistical features play a significant role in some systems and a negligible role in others. In the systems of Shehab et al. (2016), Cummins et al. (2016), Dong et al. (2017), Dong and Zhang (2016), and Mathias and Bhattacharyya (2018a, b), the assessment is based entirely on statistical and style-based features; they do not retrieve any content-based features. In other systems that extract content from the essays, statistical features are used only for preprocessing and are not included in the final grading.

In AES systems, coherence is a main feature to consider while evaluating essays. The literal meaning of coherence is to stick together: the logical connection of sentences (local-level coherence) and paragraphs (global-level coherence) in a piece of writing. Without coherence, the sentences in a paragraph are independent and meaningless. In an essay, coherence is a significant feature because it indicates whether everything is explained in a logical flow, and it is a powerful feature for finding the semantics of the essay. With coherence, one can assess whether all sentences are connected in a flow and all paragraphs are related so as to justify the prompt. Retrieving the coherence level from an essay remains a critical task for researchers in AES systems.
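One crude way to approximate local coherence, shown below as an illustrative sketch rather than a method from the reviewed papers, is to average the cosine similarity between adjacent sentences; an off-topic sentence lowers the score. TF-IDF vectors stand in here for proper sentence embeddings.

```python
# Crude local-coherence proxy: average adjacent-sentence cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def local_coherence(sentences):
    if len(sentences) < 2:
        return 1.0
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sims = [
        cosine_similarity(tfidf[i], tfidf[i + 1])[0, 0]
        for i in range(len(sentences) - 1)
    ]
    return sum(sims) / len(sims)

essay = [
    "Computers are used in every school.",
    "Schools use computers to teach programming.",
    "My favourite food is pizza.",  # off-topic sentence lowers the score
]
print(local_coherence(essay))
```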

In automatic essay grading systems, assessing essays with respect to content is critical; that is what yields the actual score for the student. Most of the research used statistical features such as sentence length, word count, and number of sentences. According to the collected results, only 32% of the systems used content-based features for essay scoring. Examples of papers on content-based assessment are Taghipour and Ng ( 2016 ), Persing and Ng ( 2013 ), Wang et al. ( 2018a , 2018b ), Zhao et al. ( 2017 ), and Kopparapu and De ( 2016 ); Kumar et al. ( 2019 ), Mathias and Bhattacharyya ( 2018a ; b ), and Mohler and Mihalcea ( 2009 ) use both content-based and statistical features. The results are shown in Fig. 3 . The content-based features are mainly extracted with the word2vec NLP library. Word2vec can capture the context of a word in a document, semantic and syntactic similarity, and relations with other terms, but it captures a word's context in only one direction, either left or right. If a word has multiple meanings, there is a chance of missing the context in the essay. After analyzing all the papers, we found that content-based assessment amounts to a qualitative assessment of essays.
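A minimal content-based sketch along these lines, assuming a toy Word2Vec model and hypothetical prompt/response tokens rather than any of the cited systems, averages word vectors into an essay vector and compares it to the prompt with cosine similarity.

```python
# Sketch of a content-based representation: averaged word vectors per essay,
# compared to the prompt vector with cosine similarity. Toy data throughout.
import numpy as np
from gensim.models import Word2Vec
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    ["computers", "help", "students", "learn"],
    ["essays", "are", "scored", "by", "machines"],
]
w2v = Word2Vec(sentences=corpus, vector_size=50, min_count=1, epochs=50)

def essay_vector(tokens, model):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

prompt = ["computers", "help", "students"]
response = ["students", "learn", "by", "machines"]
sim = cosine_similarity([essay_vector(prompt, w2v)], [essay_vector(response, w2v)])
print(sim[0, 0])  # higher values suggest more content overlap with the prompt
```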

On the other hand, Horbach and Zesch ( 2019 ), Riordan et al. ( 2019 ), Ding et al. ( 2020 ), and Kumar et al. ( 2020 ) showed that neural network models are vulnerable when a student response contains constructed irrelevant or adversarial answers. A student can easily bluff an automated scoring system by submitting manipulated responses, such as repeating sentences or repeating prompt words throughout an essay. As Loukina et al. ( 2019 ) and Madnani et al. ( 2017b ) argue, the fairness of an algorithm is an essential factor to consider in AES systems.
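The toy example below (a deliberately naive, hypothetical scorer, not any of the cited systems) shows how a response that merely repeats prompt words can reach the maximum score under a length-and-overlap heuristic, which is the kind of gaming behaviour these studies describe.

```python
# Toy illustration of gaming a naive scorer that rewards length and
# prompt-word overlap. The scoring rule and weights are made up.
def naive_score(response, prompt_words, max_score=10):
    tokens = response.lower().split()
    overlap = sum(t in prompt_words for t in tokens)
    return min(max_score, 0.02 * len(tokens) + 0.5 * overlap)

prompt_words = {"computer", "education", "students", "learning"}
genuine = "Computers support learning because students can practise at their own pace."
adversarial = "computer education students learning " * 20  # repeated prompt words

print(naive_score(genuine, prompt_words))      # modest score
print(naive_score(adversarial, prompt_words))  # hits the maximum score
```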

Regarding speech assessment, the datasets contain audio recordings of up to one minute in duration. The feature extraction techniques are entirely different from those for text assessment, and accuracy varies with speaking fluency, pitch, and speaker characteristics such as male versus female and child versus adult voices. The training algorithms, however, are the same for text and speech assessment.
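For instance, a typical first step in speech scoring pipelines is extracting acoustic features such as MFCCs. The snippet below is a hedged sketch using librosa, with "response.wav" as a placeholder file path; it is not the feature set of any specific reviewed system.

```python
# Sketch of acoustic feature extraction for speech scoring: MFCCs from a
# (placeholder) one-minute audio clip.
import librosa

audio, sr = librosa.load("response.wav", sr=16000, duration=60.0)
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
print(mfcc.shape)  # (13, number_of_frames)
```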

Once an AES system can evaluate essays and short answers accurately in all these respects, there will be massive demand for automated systems in education and related areas. AES systems are already deployed in the GRE and TOEFL exams; beyond these, they could be deployed in massive open online courses such as Coursera (“ https://coursera.org/learn//machine-learning//exam ”) and NPTEL ( https://swayam.gov.in/explorer ), which still assess student performance with multiple-choice questions. From another perspective, AES systems could be deployed in information retrieval platforms such as Quora and Stack Overflow to check whether a retrieved response is appropriate to the question and to rank the retrieved answers.

5 Conclusion and future work

As per our systematic literature review, we studied 62 papers. Significant challenges remain for researchers implementing automated essay grading systems, and several researchers are working rigorously on building a robust AES system despite the difficulty of the problem. The existing methods do not evaluate essays on coherence, relevance, completeness, feedback, and domain knowledge. Moreover, 90% of essay grading systems use the Kaggle ASAP (2012) dataset, which contains general essays from students that do not require any domain knowledge, so there is a need for domain-specific essay datasets for training and testing. Feature extraction relies on the NLTK, Word2Vec, and GloVe NLP libraries, which have many limitations when converting a sentence into vector form. Apart from feature extraction and training machine learning models, no system assesses the essay's completeness, no system provides feedback on the student response, and none retrieves coherence vectors from the essay. From another perspective, constructed irrelevant and adversarial student responses still call AES systems into question.

Our proposed future work will focus on content-based assessment of essays with domain knowledge, scoring essays for internal and external consistency. We will also create a new dataset for a single domain. Another area for improvement is the feature extraction techniques.

This study includes only four digital databases for study selection and may therefore miss some relevant studies on the topic. However, we hope that we covered most of the significant studies, as we also manually collected papers published in relevant journals.

References

Adamson, A., Lamb, A., & December, R. M. (2014). Automated Essay Grading.

Ajay HB, Tillett PI, Page EB (1973) Analysis of essays by computer (AEC-II) (No. 8-0102). Washington, DC: U.S. Department of Health, Education, and Welfare, Office of Education, National Center for Educational Research and Development

Ajetunmobi SA, Daramola O (2017) Ontology-based information extraction for subject-focussed automatic essay evaluation. In: 2017 International Conference on Computing Networking and Informatics (ICCNI) p 1–6. IEEE

Alva-Manchego F, et al. (2019) EASSE: Easier Automatic Sentence Simplification Evaluation. ArXiv abs/1908.04567

Bailey S, Meurers D (2008) Diagnosing meaning errors in short answers to reading comprehension questions. In: Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications (Columbus), p 107–115

Basu S, Jacobs C, Vanderwende L (2013) Powergrading: a clustering approach to amplify human effort for short answer grading. Trans Assoc Comput Linguist (TACL) 1:391–402


Bejar, I. I., Flor, M., Futagi, Y., & Ramineni, C. (2014). On the vulnerability of automated scoring to construct-irrelevant response strategies (CIRS): An illustration. Assessing Writing, 22, 48-59.

Bejar I, et al. (2013) Length of Textual Response as a Construct-Irrelevant Response Strategy: The Case of Shell Language. Research Report ETS RR-13-07. ETS Research Report Series

Berzak Y, et al. (2018) Assessing Language Proficiency from Eye Movements in Reading. ArXiv abs/1804.07329

Blanchard D, Tetreault J, Higgins D, Cahill A, Chodorow M (2013) TOEFL11: A corpus of non-native English. ETS Research Report Series, 2013(2):i–15, 2013

Blood, I. (2011). Automated essay scoring: a literature review. Studies in Applied Linguistics and TESOL, 11(2).

Burrows S, Gurevych I, Stein B (2015) The eras and trends of automatic short answer grading. Int J Artif Intell Educ 25:60–117. https://doi.org/10.1007/s40593-014-0026-8

Cader, A. (2020, July). The Potential for the Use of Deep Neural Networks in e-Learning Student Evaluation with New Data Augmentation Method. In International Conference on Artificial Intelligence in Education (pp. 37–42). Springer, Cham.

Cai C (2019) Automatic essay scoring with recurrent neural network. In: Proceedings of the 3rd International Conference on High Performance Compilation, Computing and Communications

Chen M, Li X (2018) Relevance-Based Automated Essay Scoring via Hierarchical Recurrent Model. In: 2018 International Conference on Asian Language Processing (IALP), Bandung, Indonesia, 2018, p 378–383, doi: https://doi.org/10.1109/IALP.2018.8629256

Chen Z, Zhou Y (2019) Research on Automatic Essay Scoring of Composition Based on CNN and OR. In: 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, p 13–18, doi: https://doi.org/10.1109/ICAIBD.2019.8837007

Contreras JO, Hilles SM, Abubakar ZB (2018) Automated essay scoring with ontology based on text mining and NLTK tools. In: 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE), 1-6

Correnti R, Matsumura LC, Hamilton L, Wang E (2013) Assessing students’ skills at writing analytically in response to texts. Elem Sch J 114(2):142–177

Cummins, R., Zhang, M., & Briscoe, E. (2016, August). Constrained multi-task learning for automated essay scoring. Association for Computational Linguistics.

Darwish SM, Mohamed SK (2020) Automated essay evaluation based on fusion of fuzzy ontology and latent semantic analysis. In: Hassanien A, Azar A, Gaber T, Bhatnagar RF, Tolba M (eds) The International Conference on Advanced Machine Learning Technologies and Applications

Dasgupta T, Naskar A, Dey L, Saha R (2018) Augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring. In: Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications p 93–102

Ding Y, et al. (2020) "Don’t take “nswvtnvakgxpm” for an answer–The surprising vulnerability of automatic content scoring systems to adversarial input." In: Proceedings of the 28th International Conference on Computational Linguistics

Dong F, Zhang Y (2016) Automatic features for essay scoring–an empirical study. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing p 1072–1077

Dong F, Zhang Y, Yang J (2017) Attention-based recurrent convolutional neural network for automatic essay scoring. In: Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017) p 153–162

Dzikovska M, Nielsen R, Brew C, Leacock C, Giampiccolo D, Bentivogli L, Clark P, Dagan I, Dang HT (2013a) Semeval-2013 task 7: The joint student response analysis and 8th recognizing textual entailment challenge

Dzikovska MO, Nielsen R, Brew C, Leacock C, Giampiccolo D, Bentivogli L, Clark P, Dagan I, Trang Dang H (2013b) SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge. *SEM 2013: The First Joint Conference on Lexical and Computational Semantics

Educational Testing Service (2008) CriterionSM online writing evaluation service. Retrieved from http://www.ets.org/s/criterion/pdf/9286_CriterionBrochure.pdf .

Evanini, K., & Wang, X. (2013, August). Automated speech scoring for non-native middle school students with multiple task types. In INTERSPEECH (pp. 2435–2439).

Foltz PW, Laham D, Landauer TK (1999) The Intelligent Essay Assessor: Applications to Educational Technology. Interactive Multimedia Electronic Journal of Computer-Enhanced Learning, 1, 2, http://imej.wfu.edu/articles/1999/2/04/ index.asp

Granger, S., Dagneaux, E., Meunier, F., & Paquot, M. (Eds.). (2009). International corpus of learner English. Louvain-la-Neuve: Presses universitaires de Louvain.

Higgins, D., & Heilman, M. (2014). Managing what we can measure: Quantifying the susceptibility of automated scoring systems to gaming behavior. Educational Measurement: Issues and Practice, 33(3), 36–46.

Horbach A, Zesch T (2019) The influence of variance in learner answers on automatic content scoring. Front Educ 4:28. https://doi.org/10.3389/feduc.2019.00028

https://www.coursera.org/learn/machine-learning/exam/7pytE/linear-regression-with-multiple-variables/attempt

Hussein, M. A., Hassan, H., & Nassef, M. (2019). Automated language essay scoring systems: A literature review. PeerJ Computer Science, 5, e208.

Ke Z, Ng V (2019) “Automated essay scoring: a survey of the state of the art.” IJCAI

Ke, Z., Inamdar, H., Lin, H., & Ng, V. (2019, July). Give me more feedback II: Annotating thesis strength and related attributes in student essays. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 3994-4004).

Kelley K, Preacher KJ (2012) On effect size. Psychol Methods 17(2):137–152

Kitchenham B, Brereton OP, Budgen D, Turner M, Bailey J, Linkman S (2009) Systematic literature reviews in software engineering–a systematic literature review. Inf Softw Technol 51(1):7–15

Klebanov, B. B., & Madnani, N. (2020, July). Automated evaluation of writing–50 years and counting. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 7796–7810).

Knill K, Gales M, Kyriakopoulos K, et al. (4 more authors) (2018) Impact of ASR performance on free speaking language assessment. In: Interspeech 2018.02–06 Sep 2018, Hyderabad, India. International Speech Communication Association (ISCA)

Kopparapu SK, De A (2016) Automatic ranking of essays using structural and semantic features. In: 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), p 519–523

Kumar, Y., Aggarwal, S., Mahata, D., Shah, R. R., Kumaraguru, P., & Zimmermann, R. (2019, July). Get it scored using autosas—an automated system for scoring short answers. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 9662–9669).

Kumar Y, et al. (2020) “Calling out bluff: attacking the robustness of automatic scoring systems with simple adversarial testing.” ArXiv abs/2007.06796

Li X, Chen M, Nie J, Liu Z, Feng Z, Cai Y (2018) Coherence-Based Automated Essay Scoring Using Self-attention. In: Sun M, Liu T, Wang X, Liu Z, Liu Y (eds) Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data. CCL 2018, NLP-NABD 2018. Lecture Notes in Computer Science, vol 11221. Springer, Cham. https://doi.org/10.1007/978-3-030-01716-3_32

Liang G, On B, Jeong D, Kim H, Choi G (2018) Automated essay scoring: a siamese bidirectional LSTM neural network architecture. Symmetry 10:682

Liua, H., Yeb, Y., & Wu, M. (2018, April). Ensemble Learning on Scoring Student Essay. In 2018 International Conference on Management and Education, Humanities and Social Sciences (MEHSS 2018). Atlantis Press.

Liu J, Xu Y, Zhao L (2019) Automated Essay Scoring based on Two-Stage Learning. ArXiv, abs/1901.07744

Loukina A, et al. (2015) Feature selection for automated speech scoring. BEA@NAACL-HLT

Loukina A, et al. (2017) “Speech- and Text-driven Features for Automated Scoring of English-Speaking Tasks.” SCNLP@EMNLP 2017

Loukina A, et al. (2019) The many dimensions of algorithmic fairness in educational applications. BEA@ACL

Lun J, Zhu J, Tang Y, Yang M (2020) Multiple data augmentation strategies for improving performance on automatic short answer scoring. In: Proceedings of the AAAI Conference on Artificial Intelligence, 34(09): 13389-13396

Madnani, N., & Cahill, A. (2018, August). Automated scoring: Beyond natural language processing. In Proceedings of the 27th International Conference on Computational Linguistics (pp. 1099–1109).

Madnani N, et al. (2017b) “Building better open-source tools to support fairness in automated scoring.” EthNLP@EACL

Malinin A, et al. (2016) “Off-topic response detection for spontaneous spoken english assessment.” ACL

Malinin A, et al. (2017) “Incorporating uncertainty into deep learning for spoken language assessment.” ACL

Mathias S, Bhattacharyya P (2018a) Thank “Goodness”! A Way to Measure Style in Student Essays. In: Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications p 35–41

Mathias S, Bhattacharyya P (2018b) ASAP++: Enriching the ASAP automated essay grading dataset with essay attribute scores. In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).

Mikolov T, et al. (2013) “Efficient Estimation of Word Representations in Vector Space.” ICLR

Mohler M, Mihalcea R (2009) Text-to-text semantic similarity for automatic short answer grading. In: Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009) p 567–575

Mohler M, Bunescu R, Mihalcea R (2011) Learning to grade short answer questions using semantic similarity measures and dependency graph alignments. In: Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies p 752–762

Muangkammuen P, Fukumoto F (2020) Multi-task Learning for Automated Essay Scoring with Sentiment Analysis. In: Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop p 116–123

Nguyen, H., & Dery, L. (2016). Neural networks for automated essay grading. CS224d Stanford Reports, 1–11.

Palma D, Atkinson J (2018) Coherence-based automatic essay assessment. IEEE Intell Syst 33(5):26–36

Parekh S, et al. (2020) My Teacher Thinks the World Is Flat! Interpreting Automatic Essay Scoring Mechanism. ArXiv abs/2012.13872

Pennington, J., Socher, R., & Manning, C. D. (2014, October). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532–1543).

Persing I, Ng V (2013) Modeling thesis clarity in student essays. In:Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) p 260–269

Powers DE, Burstein JC, Chodorow M, Fowles ME, Kukich K (2001) Stumping E-Rater: challenging the validity of automated essay scoring. ETS Res Rep Ser 2001(1):i–44


Powers, D. E., Burstein, J. C., Chodorow, M., Fowles, M. E., & Kukich, K. (2002). Stumping e-rater: challenging the validity of automated essay scoring. Computers in Human Behavior, 18(2), 103–134.

Ramachandran L, Cheng J, Foltz P (2015) Identifying patterns for short answer scoring using graph-based lexico-semantic text matching. In: Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications p 97–106

Ramanarayanan V, et al. (2017) “Human and Automated Scoring of Fluency, Pronunciation and Intonation During Human-Machine Spoken Dialog Interactions.” INTERSPEECH

Riordan B, Horbach A, Cahill A, Zesch T, Lee C (2017) Investigating neural architectures for short answer scoring. In: Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications p 159–168

Riordan B, Flor M, Pugh R (2019) How to account for misspellings: Quantifying the benefit of character representations in neural content scoring models. In: Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications

Rodriguez P, Jafari A, Ormerod CM (2019) Language models and Automated Essay Scoring. ArXiv, abs/1909.09482

Rudner, L. M., & Liang, T. (2002). Automated essay scoring using Bayes' theorem. The Journal of Technology, Learning and Assessment, 1(2).

Rudner, L. M., Garcia, V., & Welch, C. (2006). An evaluation of IntelliMetric™ essay scoring system. The Journal of Technology, Learning and Assessment, 4(4).

Rupp A (2018) Designing, evaluating, and deploying automated scoring systems with validity in mind: methodological design decisions. Appl Meas Educ 31:191–214

Ruseti S, Dascalu M, Johnson AM, McNamara DS, Balyan R, McCarthy KS, Trausan-Matu S (2018) Scoring summaries using recurrent neural networks. In: International Conference on Intelligent Tutoring Systems p 191–201. Springer, Cham

Sakaguchi K, Heilman M, Madnani N (2015) Effective feature integration for automated short answer scoring. In: Proceedings of the 2015 conference of the North American Chapter of the association for computational linguistics: Human language technologies p 1049–1054

Salim, Y., Stevanus, V., Barlian, E., Sari, A. C., & Suhartono, D. (2019, December). Automated English Digital Essay Grader Using Machine Learning. In 2019 IEEE International Conference on Engineering, Technology and Education (TALE) (pp. 1–6). IEEE.

Shehab A, Elhoseny M, Hassanien AE (2016) A hybrid scheme for Automated Essay Grading based on LVQ and NLP techniques. In: 12th International Computer Engineering Conference (ICENCO), Cairo, 2016, p 65-70

Shermis MD, Mzumara HR, Olson J, Harrington S (2001) On-line grading of student essays: PEG goes on the World Wide Web. Assess Eval High Educ 26(3):247–259

Stab C, Gurevych I (2014) Identifying argumentative discourse structures in persuasive essays. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) p 46–56

Sultan MA, Salazar C, Sumner T (2016) Fast and easy short answer grading with high accuracy. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies p 1070–1075

Süzen, N., Gorban, A. N., Levesley, J., & Mirkes, E. M. (2020). Automatic short answer grading and feedback using text mining methods. Procedia Computer Science, 169, 726–743.

Taghipour K, Ng HT (2016) A neural approach to automated essay scoring. In: Proceedings of the 2016 conference on empirical methods in natural language processing p 1882–1891

Tashu TM (2020) Off-Topic Essay Detection Using C-BGRU Siamese. In: 2020 IEEE 14th International Conference on Semantic Computing (ICSC), San Diego, CA, USA, p 221–225, doi: https://doi.org/10.1109/ICSC.2020.00046

Tashu TM, Horváth T (2019) A layered approach to automatic essay evaluation using word-embedding. In: McLaren B, Reilly R, Zvacek S, Uhomoibhi J (eds) Computer Supported Education. CSEDU 2018. Communications in Computer and Information Science, vol 1022. Springer, Cham

Tashu TM, Horváth T (2020) Semantic-Based Feedback Recommendation for Automatic Essay Evaluation. In: Bi Y, Bhatia R, Kapoor S (eds) Intelligent Systems and Applications. IntelliSys 2019. Advances in Intelligent Systems and Computing, vol 1038. Springer, Cham

Uto M, Okano M (2020) Robust Neural Automated Essay Scoring Using Item Response Theory. In: Bittencourt I, Cukurova M, Muldner K, Luckin R, Millán E (eds) Artificial Intelligence in Education. AIED 2020. Lecture Notes in Computer Science, vol 12163. Springer, Cham

Wang Z, Liu J, Dong R (2018a) Intelligent Auto-grading System. In: 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS) p 430–435. IEEE.

Wang Y, et al. (2018b) “Automatic Essay Scoring Incorporating Rating Schema via Reinforcement Learning.” EMNLP

Zhu W, Sun Y (2020) Automated essay scoring system using multi-model Machine Learning. In: David C. Wyld et al. (eds) MLNLP, BDIoT, ITCCMA, CSITY, DTMN, AIFZ, SIGPRO

Wresch W (1993) The Imminence of Grading Essays by Computer-25 Years Later. Comput Compos 10:45–58

Wu, X., Knill, K., Gales, M., & Malinin, A. (2020). Ensemble approaches for uncertainty in spoken language assessment.

Xia L, Liu J, Zhang Z (2019) Automatic Essay Scoring Model Based on Two-Layer Bi-directional Long-Short Term Memory Network. In: Proceedings of the 2019 3rd International Conference on Computer Science and Artificial Intelligence p 133–137

Yannakoudakis H, Briscoe T, Medlock B (2011) A new dataset and method for automatically grading ESOL texts. In: Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies p 180–189

Zhao S, Zhang Y, Xiong X, Botelho A, Heffernan N (2017) A memory-augmented neural model for automated grading. In: Proceedings of the Fourth (2017) ACM Conference on Learning@ Scale p 189–192

Zupanc K, Bosnic Z (2014) Automated essay evaluation augmented with semantic coherence measures. In: 2014 IEEE International Conference on Data Mining p 1133–1138. IEEE.

Zupanc K, Savić M, Bosnić Z, Ivanović M (2017) Evaluating coherence of essays using sentence-similarity networks. In: Proceedings of the 18th International Conference on Computer Systems and Technologies p 65–72

Dzikovska, M. O., Nielsen, R., & Brew, C. (2012, June). Towards effective tutorial feedback for explanation questions: A dataset and baselines. In  Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies  (pp. 200-210).

Kumar, N., & Dey, L. (2013, November). Automatic Quality Assessment of documents with application to essay grading. In 2013 12th Mexican International Conference on Artificial Intelligence (pp. 216–222). IEEE.

Wu, S. H., & Shih, W. F. (2018, July). A short answer grading system in chinese by support vector approach. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications (pp. 125-129).

Agung Putri Ratna, A., Lalita Luhurkinanti, D., Ibrahim I., Husna D., Dewi Purnamasari P. (2018). Automatic Essay Grading System for Japanese Language Examination Using Winnowing Algorithm, 2018 International Seminar on Application for Technology of Information and Communication, 2018, pp. 565–569. https://doi.org/10.1109/ISEMANTIC.2018.8549789 .

Sharma A., & Jayagopi D. B. (2018). Automated Grading of Handwritten Essays 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2018, pp 279–284. https://doi.org/10.1109/ICFHR-2018.2018.00056



Author information

Authors and Affiliations

School of Computer Science and Artificial Intelligence, SR University, Warangal, TS, India

Dadi Ramesh

Research Scholar, JNTU, Hyderabad, India

Department of Information Technology, JNTUH College of Engineering, Nachupally, Kondagattu, Jagtial, TS, India

Suresh Kumar Sanampudi


Corresponding author

Correspondence to Dadi Ramesh.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file1 (XLSX 80 KB)


About this article

Ramesh, D., Sanampudi, S.K. An automated essay scoring systems: a systematic literature review. Artif Intell Rev 55, 2495–2527 (2022). https://doi.org/10.1007/s10462-021-10068-2


Published: 23 September 2021

Issue Date: March 2022

DOI: https://doi.org/10.1007/s10462-021-10068-2


Keywords

  • Short answer scoring
  • Essay grading
  • Natural language processing
  • Deep learning

An automated essay scoring systems: a systematic literature review

Dadi ramesh.

1 School of Computer Science and Artificial Intelligence, SR University, Warangal, TS India

2 Research Scholar, JNTU, Hyderabad, India

Suresh Kumar Sanampudi

3 Department of Information Technology, JNTUH College of Engineering, Nachupally, Kondagattu, Jagtial, TS India

Associated Data

Assessment in the Education system plays a significant role in judging student performance. The present evaluation system is through human assessment. As the number of teachers' student ratio is gradually increasing, the manual evaluation process becomes complicated. The drawback of manual evaluation is that it is time-consuming, lacks reliability, and many more. This connection online examination system evolved as an alternative tool for pen and paper-based methods. Present Computer-based evaluation system works only for multiple-choice questions, but there is no proper evaluation system for grading essays and short answers. Many researchers are working on automated essay grading and short answer scoring for the last few decades, but assessing an essay by considering all parameters like the relevance of the content to the prompt, development of ideas, Cohesion, and Coherence is a big challenge till now. Few researchers focused on Content-based evaluation, while many of them addressed style-based assessment. This paper provides a systematic literature review on automated essay scoring systems. We studied the Artificial Intelligence and Machine Learning techniques used to evaluate automatic essay scoring and analyzed the limitations of the current studies and research trends. We observed that the essay evaluation is not done based on the relevance of the content and coherence.

Supplementary Information

The online version contains supplementary material available at 10.1007/s10462-021-10068-2.

Introduction

Due to COVID 19 outbreak, an online educational system has become inevitable. In the present scenario, almost all the educational institutions ranging from schools to colleges adapt the online education system. The assessment plays a significant role in measuring the learning ability of the student. Most automated evaluation is available for multiple-choice questions, but assessing short and essay answers remain a challenge. The education system is changing its shift to online-mode, like conducting computer-based exams and automatic evaluation. It is a crucial application related to the education domain, which uses natural language processing (NLP) and Machine Learning techniques. The evaluation of essays is impossible with simple programming languages and simple techniques like pattern matching and language processing. Here the problem is for a single question, we will get more responses from students with a different explanation. So, we need to evaluate all the answers concerning the question.

Automated essay scoring (AES) is a computer-based assessment system that automatically scores or grades the student responses by considering appropriate features. The AES research started in 1966 with the Project Essay Grader (PEG) by Ajay et al. ( 1973 ). PEG evaluates the writing characteristics such as grammar, diction, construction, etc., to grade the essay. A modified version of the PEG by Shermis et al. ( 2001 ) was released, which focuses on grammar checking with a correlation between human evaluators and the system. Foltz et al. ( 1999 ) introduced an Intelligent Essay Assessor (IEA) by evaluating content using latent semantic analysis to produce an overall score. Powers et al. ( 2002 ) proposed E-rater and Intellimetric by Rudner et al. ( 2006 ) and Bayesian Essay Test Scoring System (BESTY) by Rudner and Liang ( 2002 ), these systems use natural language processing (NLP) techniques that focus on style and content to obtain the score of an essay. The vast majority of the essay scoring systems in the 1990s followed traditional approaches like pattern matching and a statistical-based approach. Since the last decade, the essay grading systems started using regression-based and natural language processing techniques. AES systems like Dong et al. ( 2017 ) and others developed from 2014 used deep learning techniques, inducing syntactic and semantic features resulting in better results than earlier systems.

Ohio, Utah, and most US states are using AES systems in school education, like Utah compose tool, Ohio standardized test (an updated version of PEG), evaluating millions of student's responses every year. These systems work for both formative, summative assessments and give feedback to students on the essay. Utah provided basic essay evaluation rubrics (six characteristics of essay writing): Development of ideas, organization, style, word choice, sentence fluency, conventions. Educational Testing Service (ETS) has been conducting significant research on AES for more than a decade and designed an algorithm to evaluate essays on different domains and providing an opportunity for test-takers to improve their writing skills. In addition, they are current research content-based evaluation.

The evaluation of essay and short answer scoring should consider the relevance of the content to the prompt, development of ideas, Cohesion, Coherence, and domain knowledge. Proper assessment of the parameters mentioned above defines the accuracy of the evaluation system. But all these parameters cannot play an equal role in essay scoring and short answer scoring. In a short answer evaluation, domain knowledge is required, like the meaning of "cell" in physics and biology is different. And while evaluating essays, the implementation of ideas with respect to prompt is required. The system should also assess the completeness of the responses and provide feedback.

Several studies examined AES systems, from the initial to the latest AES systems. In which the following studies on AES systems are Blood ( 2011 ) provided a literature review from PEG 1984–2010. Which has covered only generalized parts of AES systems like ethical aspects, the performance of the systems. Still, they have not covered the implementation part, and it’s not a comparative study and has not discussed the actual challenges of AES systems.

Burrows et al. ( 2015 ) Reviewed AES systems on six dimensions like dataset, NLP techniques, model building, grading models, evaluation, and effectiveness of the model. They have not covered feature extraction techniques and challenges in features extractions. Covered only Machine Learning models but not in detail. This system not covered the comparative analysis of AES systems like feature extraction, model building, and level of relevance, cohesion, and coherence not covered in this review.

Ke et al. ( 2019 ) provided a state of the art of AES system but covered very few papers and not listed all challenges, and no comparative study of the AES model. On the other hand, Hussein et al. in ( 2019 ) studied two categories of AES systems, four papers from handcrafted features for AES systems, and four papers from the neural networks approach, discussed few challenges, and did not cover feature extraction techniques, the performance of AES models in detail.

Klebanov et al. ( 2020 ). Reviewed 50 years of AES systems, listed and categorized all essential features that need to be extracted from essays. But not provided a comparative analysis of all work and not discussed the challenges.

This paper aims to provide a systematic literature review (SLR) on automated essay grading systems. An SLR is an Evidence-based systematic review to summarize the existing research. It critically evaluates and integrates all relevant studies' findings and addresses the research domain's specific research questions. Our research methodology uses guidelines given by Kitchenham et al. ( 2009 ) for conducting the review process; provide a well-defined approach to identify gaps in current research and to suggest further investigation.

We addressed our research method, research questions, and the selection process in Sect.  2 , and the results of the research questions have discussed in Sect.  3 . And the synthesis of all the research questions addressed in Sect.  4 . Conclusion and possible future work discussed in Sect.  5 .

Research method

We framed the research questions with PICOC criteria.

Population (P) Student essays and answers evaluation systems.

Intervention (I) evaluation techniques, data sets, features extraction methods.

Comparison (C) Comparison of various approaches and results.

Outcomes (O) Estimate the accuracy of AES systems,

Context (C) NA.

Research questions

To collect and provide research evidence from the available studies in the domain of automated essay grading, we framed the following research questions (RQ):

RQ1 what are the datasets available for research on automated essay grading?

The answer to the question can provide a list of the available datasets, their domain, and access to the datasets. It also provides a number of essays and corresponding prompts.

RQ2 what are the features extracted for the assessment of essays?

The answer to the question can provide an insight into various features so far extracted, and the libraries used to extract those features.

RQ3, which are the evaluation metrics available for measuring the accuracy of algorithms?

The answer will provide different evaluation metrics for accurate measurement of each Machine Learning approach and commonly used measurement technique.

RQ4 What are the Machine Learning techniques used for automatic essay grading, and how are they implemented?

It can provide insights into various Machine Learning techniques like regression models, classification models, and neural networks for implementing essay grading systems. The response to the question can give us different assessment approaches for automated essay grading systems.

RQ5 What are the challenges/limitations in the current research?

The answer to the question provides limitations of existing research approaches like cohesion, coherence, completeness, and feedback.

Search process

We conducted an automated search on well-known computer science repositories like ACL, ACM, IEEE Explore, Springer, and Science Direct for an SLR. We referred to papers published from 2010 to 2020 as much of the work during these years focused on advanced technologies like deep learning and natural language processing for automated essay grading systems. Also, the availability of free data sets like Kaggle (2012), Cambridge Learner Corpus-First Certificate in English exam (CLC-FCE) by Yannakoudakis et al. ( 2011 ) led to research this domain.

Search Strings : We used search strings like “Automated essay grading” OR “Automated essay scoring” OR “short answer scoring systems” OR “essay scoring systems” OR “automatic essay evaluation” and searched on metadata.

Selection criteria

After collecting all relevant documents from the repositories, we prepared selection criteria for inclusion and exclusion of documents. With the inclusion and exclusion criteria, it becomes more feasible for the research to be accurate and specific.

Inclusion criteria 1 Our approach is to work with datasets comprise of essays written in English. We excluded the essays written in other languages.

Inclusion criteria 2  We included the papers implemented on the AI approach and excluded the traditional methods for the review.

Inclusion criteria 3 The study is on essay scoring systems, so we exclusively included the research carried out on only text data sets rather than other datasets like image or speech.

Exclusion criteria  We removed the papers in the form of review papers, survey papers, and state of the art papers.

Quality assessment

In addition to the inclusion and exclusion criteria, we assessed each paper by quality assessment questions to ensure the article's quality. We included the documents that have clearly explained the approach they used, the result analysis and validation.

The quality checklist questions are framed based on the guidelines from Kitchenham et al. ( 2009 ). Each quality assessment question was graded as either 1 or 0. The final score of the study range from 0 to 3. A cut off score for excluding a study from the review is 2 points. Since the papers scored 2 or 3 points are included in the final evaluation. We framed the following quality assessment questions for the final study.

Quality Assessment 1: Internal validity.

Quality Assessment 2: External validity.

Quality Assessment 3: Bias.

The two reviewers review each paper to select the final list of documents. We used the Quadratic Weighted Kappa score to measure the final agreement between the two reviewers. The average resulted from the kappa score is 0.6942, a substantial agreement between the reviewers. The result of evolution criteria shown in Table ​ Table1. 1 . After Quality Assessment, the final list of papers for review is shown in Table ​ Table2. 2 . The complete selection process is shown in Fig. ​ Fig.1. 1 . The total number of selected papers in year wise as shown in Fig. ​ Fig.2. 2 .

Quality assessment analysis

Final list of papers

An external file that holds a picture, illustration, etc.
Object name is 10462_2021_10068_Fig1_HTML.jpg

Selection process

An external file that holds a picture, illustration, etc.
Object name is 10462_2021_10068_Fig2_HTML.jpg

Year wise publications

What are the datasets available for research on automated essay grading?

To work with problem statement especially in Machine Learning and deep learning domain, we require considerable amount of data to train the models. To answer this question, we listed all the data sets used for training and testing for automated essay grading systems. The Cambridge Learner Corpus-First Certificate in English exam (CLC-FCE) Yannakoudakis et al. ( 2011 ) developed corpora that contain 1244 essays and ten prompts. This corpus evaluates whether a student can write the relevant English sentences without any grammatical and spelling mistakes. This type of corpus helps to test the models built for GRE and TOFEL type of exams. It gives scores between 1 and 40.

Bailey and Meurers ( 2008 ), Created a dataset (CREE reading comprehension) for language learners and automated short answer scoring systems. The corpus consists of 566 responses from intermediate students. Mohler and Mihalcea ( 2009 ). Created a dataset for the computer science domain consists of 630 responses for data structure assignment questions. The scores are range from 0 to 5 given by two human raters.

Dzikovska et al. ( 2012 ) created a Student Response Analysis (SRA) corpus. It consists of two sub-groups: the BEETLE corpus consists of 56 questions and approximately 3000 responses from students in the electrical and electronics domain. The second one is the SCIENTSBANK(SemEval-2013) (Dzikovska et al. 2013a ; b ) corpus consists of 10,000 responses on 197 prompts on various science domains. The student responses ladled with "correct, partially correct incomplete, Contradictory, Irrelevant, Non-domain."

In the Kaggle (2012) competition, released total 3 types of corpuses on an Automated Student Assessment Prize (ASAP1) (“ https://www.kaggle.com/c/asap-sas/ ” ) essays and short answers. It has nearly 17,450 essays, out of which it provides up to 3000 essays for each prompt. It has eight prompts that test 7th to 10th grade US students. It gives scores between the [0–3] and [0–60] range. The limitations of these corpora are: (1) it has a different score range for other prompts. (2) It uses statistical features such as named entities extraction and lexical features of words to evaluate essays. ASAP +  + is one more dataset from Kaggle. It is with six prompts, and each prompt has more than 1000 responses total of 10,696 from 8th-grade students. Another corpus contains ten prompts from science, English domains and a total of 17,207 responses. Two human graders evaluated all these responses.

Correnti et al. ( 2013 ) created a Response-to-Text Assessment (RTA) dataset used to check student writing skills in all directions like style, mechanism, and organization. 4–8 grade students give the responses to RTA. Basu et al. ( 2013 ) created a power grading dataset with 700 responses for ten different prompts from US immigration exams. It contains all short answers for assessment.

The TOEFL11 corpus Blanchard et al. ( 2013 ) contains 1100 essays evenly distributed over eight prompts. It is used to test the English language skills of a candidate attending the TOFEL exam. It scores the language proficiency of a candidate as low, medium, and high.

International Corpus of Learner English (ICLE) Granger et al. ( 2009 ) built a corpus of 3663 essays covering different dimensions. It has 12 prompts with 1003 essays that test the organizational skill of essay writing, and13 prompts, each with 830 essays that examine the thesis clarity and prompt adherence.

Argument Annotated Essays (AAE) Stab and Gurevych ( 2014 ) developed a corpus that contains 102 essays with 101 prompts taken from the essayforum2 site. It tests the persuasive nature of the student essay. The SCIENTSBANK corpus used by Sakaguchi et al. ( 2015 ) available in git-hub, containing 9804 answers to 197 questions in 15 science domains. Table ​ Table3 3 illustrates all datasets related to AES systems.

ALL types Datasets used in Automatic scoring systems

Features play a major role in the neural network and other supervised Machine Learning approaches. The automatic essay grading systems scores student essays based on different types of features, which play a prominent role in training the models. Based on their syntax and semantics and they are categorized into three groups. 1. statistical-based features Contreras et al. ( 2018 ); Kumar et al. ( 2019 ); Mathias and Bhattacharyya ( 2018a ; b ) 2. Style-based (Syntax) features Cummins et al. ( 2016 ); Darwish and Mohamed ( 2020 ); Ke et al. ( 2019 ). 3. Content-based features Dong et al. ( 2017 ). A good set of features appropriate models evolved better AES systems. The vast majority of the researchers are using regression models if features are statistical-based. For Neural Networks models, researches are using both style-based and content-based features. The following table shows the list of various features used in existing AES Systems. Table ​ Table4 4 represents all set of features used for essay grading.

Types of features

We studied all the feature extracting NLP libraries as shown in Fig. ​ Fig.3. that 3 . that are used in the papers. The NLTK is an NLP tool used to retrieve statistical features like POS, word count, sentence count, etc. With NLTK, we can miss the essay's semantic features. To find semantic features Word2Vec Mikolov et al. ( 2013 ), GloVe Jeffrey Pennington et al. ( 2014 ) is the most used libraries to retrieve the semantic text from the essays. And in some systems, they directly trained the model with word embeddings to find the score. From Fig. ​ Fig.4 4 as observed that non-content-based feature extraction is higher than content-based.

An external file that holds a picture, illustration, etc.
Object name is 10462_2021_10068_Fig3_HTML.jpg

Usages of tools

An external file that holds a picture, illustration, etc.
Object name is 10462_2021_10068_Fig4_HTML.jpg

Number of papers on content based features

RQ3 which are the evaluation metrics available for measuring the accuracy of algorithms?

The majority of the AES systems are using three evaluation metrics. They are (1) quadrated weighted kappa (QWK) (2) Mean Absolute Error (MAE) (3) Pearson Correlation Coefficient (PCC) Shehab et al. ( 2016 ). The quadratic weighted kappa will find agreement between human evaluation score and system evaluation score and produces value ranging from 0 to 1. And the Mean Absolute Error is the actual difference between human-rated score to system-generated score. The mean square error (MSE) measures the average squares of the errors, i.e., the average squared difference between the human-rated and the system-generated scores. MSE will always give positive numbers only. Pearson's Correlation Coefficient (PCC) finds the correlation coefficient between two variables. It will provide three values (0, 1, − 1). "0" represents human-rated and system scores that are not related. "1" represents an increase in the two scores. "− 1" illustrates a negative relationship between the two scores.

RQ4 what are the Machine Learning techniques being used for automatic essay grading, and how are they implemented?

After scrutinizing all documents, we categorize the techniques used in automated essay grading systems into four baskets. 1. Regression techniques. 2. Classification model. 3. Neural networks. 4. Ontology-based approach.

All the existing AES systems developed in the last ten years employ supervised learning techniques. Researchers using supervised methods viewed the AES system as either regression or classification task. The goal of the regression task is to predict the score of an essay. The classification task is to classify the essays belonging to (low, medium, or highly) relevant to the question's topic. Since the last three years, most AES systems developed made use of the concept of the neural network.

Regression based models

Mohler and Mihalcea ( 2009 ). proposed text-to-text semantic similarity to assign a score to the student essays. There are two text similarity measures like Knowledge-based measures, corpus-based measures. There eight knowledge-based tests with all eight models. They found the similarity. The shortest path similarity determines based on the length, which shortest path between two contexts. Leacock & Chodorow find the similarity based on the shortest path's length between two concepts using node-counting. The Lesk similarity finds the overlap between the corresponding definitions, and Wu & Palmer algorithm finds similarities based on the depth of two given concepts in the wordnet taxonomy. Resnik, Lin, Jiang&Conrath, Hirst& St-Onge find the similarity based on different parameters like the concept, probability, normalization factor, lexical chains. In corpus-based likeness, there LSA BNC, LSA Wikipedia, and ESA Wikipedia, latent semantic analysis is trained on Wikipedia and has excellent domain knowledge. Among all similarity scores, correlation scores LSA Wikipedia scoring accuracy is more. But these similarity measure algorithms are not using NLP concepts. These models are before 2010 and basic concept models to continue the research automated essay grading with updated algorithms on neural networks with content-based features.

Adamson et al. ( 2014 ) proposed an automatic essay grading system which is a statistical-based approach in this they retrieved features like POS, Character count, Word count, Sentence count, Miss spelled words, n-gram representation of words to prepare essay vector. They formed a matrix with these all vectors in that they applied LSA to give a score to each essay. It is a statistical approach that doesn’t consider the semantics of the essay. The accuracy they got when compared to the human rater score with the system is 0.532.

Cummins et al. ( 2016 ). Proposed Timed Aggregate Perceptron vector model to give ranking to all the essays, and later they converted the rank algorithm to predict the score of the essay. The model trained with features like Word unigrams, bigrams, POS, Essay length, grammatical relation, Max word length, sentence length. It is multi-task learning, gives ranking to the essays, and predicts the score for the essay. The performance evaluated through QWK is 0.69, a substantial agreement between the human rater and the system.

Sultan et al. ( 2016 ). Proposed a Ridge regression model to find short answer scoring with Question Demoting. Question Demoting is the new concept included in the essay's final assessment to eliminate duplicate words from the essay. The extracted features are Text Similarity, which is the similarity between the student response and reference answer. Question Demoting is the number of repeats in a student response. With inverse document frequency, they assigned term weight. The sentence length Ratio is the number of words in the student response, is another feature. With these features, the Ridge regression model was used, and the accuracy they got 0.887.

Contreras et al. ( 2018 ). Proposed Ontology based on text mining in this model has given a score for essays in phases. In phase-I, they generated ontologies with ontoGen and SVM to find the concept and similarity in the essay. In phase II from ontologies, they retrieved features like essay length, word counts, correctness, vocabulary, and types of word used, domain information. After retrieving statistical data, they used a linear regression model to find the score of the essay. The accuracy score is the average of 0.5.

Darwish and Mohamed ( 2020 ) proposed the fusion of fuzzy Ontology with LSA. They retrieve two types of features, like syntax features and semantic features. In syntax features, they found Lexical Analysis with tokens, and they construct a parse tree. If the parse tree is broken, the essay is inconsistent—a separate grade assigned to the essay concerning syntax features. The semantic features are like similarity analysis, Spatial Data Analysis. Similarity analysis is to find duplicate sentences—Spatial Data Analysis for finding Euclid distance between the center and part. Later they combine syntax features and morphological features score for the final score. The accuracy they achieved with the multiple linear regression model is 0.77, mostly on statistical features.

Süzen Neslihan et al. ( 2020 ) proposed a text mining approach for short answer grading. First, their comparing model answers with student response by calculating the distance between two sentences. By comparing the model answer with student response, they find the essay's completeness and provide feedback. In this approach, model vocabulary plays a vital role in grading, and with this model vocabulary, the grade will be assigned to the student's response and provides feedback. The correlation between the student answer to model answer is 0.81.

Classification based Models

Persing and Ng ( 2013 ) used a support vector machine to score the essay. The features extracted are OS, N-gram, and semantic text to train the model and identified the keywords from the essay to give the final score.

Sakaguchi et al. ( 2015 ) proposed two methods: response-based and reference-based. In response-based scoring, the extracted features are response length, n-gram model, and syntactic elements to train the support vector regression model. In reference-based scoring, features such as sentence similarity using word2vec is used to find the cosine similarity of the sentences that is the final score of the response. First, the scores were discovered individually and later combined two features to find a final score. This system gave a remarkable increase in performance by combining the scores.

Mathias and Bhattacharyya ( 2018a ; b ) Proposed Automated Essay Grading Dataset with Essay Attribute Scores. The first concept features selection depends on the essay type. So the common attributes are Content, Organization, Word Choice, Sentence Fluency, Conventions. In this system, each attribute is scored individually, with the strength of each attribute identified. The model they used is a random forest classifier to assign scores to individual attributes. The accuracy they got with QWK is 0.74 for prompt 1 of the ASAS dataset ( https://www.kaggle.com/c/asap-sas/ ).

Ke et al. ( 2019 ) used a support vector machine to find the response score. In this method, features like Agreeability, Specificity, Clarity, Relevance to prompt, Conciseness, Eloquence, Confidence, Direction of development, Justification of opinion, and Justification of importance. First, the individual parameter score obtained was later combined with all scores to give a final response score. The features are used in the neural network to find whether the sentence is relevant to the topic or not.

Salim et al. ( 2019 ) proposed an XGBoost Machine Learning classifier to assess the essays. The algorithm trained on features like word count, POS, parse tree depth, and coherence in the articles with sentence similarity percentage; cohesion and coherence are considered for training. And they implemented K-fold cross-validation for a result the average accuracy after specific validations is 68.12.

Neural network models

Shehab et al. ( 2016 ) proposed a neural network method that used learning vector quantization to train human scored essays. After training, the network can provide a score to the ungraded essays. First, we should process the essay to remove Spell checking and then perform preprocessing steps like Document Tokenization, stop word removal, Stemming, and submit it to the neural network. Finally, the model will provide feedback on the essay, whether it is relevant to the topic. And the correlation coefficient between human rater and system score is 0.7665.

Kopparapu and De ( 2016 ) proposed the Automatic Ranking of Essays using Structural and Semantic Features. This approach constructed a super essay with all the responses. Next, ranking for a student essay is done based on the super-essay. The structural and semantic features derived helps to obtain the scores. In a paragraph, 15 Structural features like an average number of sentences, the average length of sentences, and the count of words, nouns, verbs, adjectives, etc., are used to obtain a syntactic score. A similarity score is used as semantic features to calculate the overall score.

Dong and Zhang ( 2016 ) proposed a hierarchical CNN model. The first layer is a word embedding layer that represents the words; the second is a word-level convolution layer with max-pooling that builds word vectors; the next is a sentence-level convolution layer with max-pooling that captures sentence content. A fully connected dense layer produces the output score for an essay. The hierarchical CNN model achieved an average QWK of 0.754.

Taghipour and Ng ( 2016 ) proposed one of the first neural approaches to essay scoring, combining convolutional and recurrent neural network layers. The network uses a lookup-table layer that maps one-hot word representations to word vectors for the essay. The best model, which uses an LSTM, achieved an average QWK of 0.708.
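
A minimal sketch of such a convolutional-plus-recurrent scoring network in Keras is shown below; the vocabulary size, sequence length, and layer widths are placeholder assumptions, not the published hyperparameters:

```python
# A minimal sketch (not the published architecture or hyperparameters) of a
# CNN + LSTM essay-scoring regressor in Keras; essays are padded word-id sequences.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 4000   # assumed vocabulary size
MAX_LEN = 500       # assumed maximum essay length in tokens

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 50),                          # lookup-table word vectors
    layers.Conv1D(100, 5, activation="relu", padding="same"),  # local n-gram features
    layers.LSTM(100),                                          # sequence-level representation
    layers.Dense(1, activation="sigmoid"),                     # normalized score in [0, 1]
])
model.compile(optimizer="rmsprop", loss="mse")

# Dummy data: 8 essays with scores scaled to [0, 1]
X = np.random.randint(1, VOCAB_SIZE, size=(8, MAX_LEN))
y = np.random.rand(8)
model.fit(X, y, epochs=1, verbose=0)
```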

Dong et al. ( 2017 ) proposed an attention-based scoring system with CNN + LSTM. The CNN takes character and word embeddings as input (obtained with NLTK) and uses attention pooling layers; its output is a sentence vector that carries the sentence weight. The CNN is followed by an LSTM layer with an attention pooling layer, and this final layer produces the score of the response. The average QWK score is 0.764.

Riordan et al. ( 2017 ) proposed a neural network with CNN and LSTM layers. Word embeddings are given as input to the network; an LSTM layer retrieves window features and delivers them to an aggregation layer, a shallow layer that combines the window of words and feeds successive layers to predict the answer's score. The network achieved a QWK of 0.90.

Zhao et al. ( 2017 ) proposed a memory-augmented neural network with four layers: an input representation layer, a memory addressing layer, a memory reading layer, and an output layer. The input layer represents every essay in vector form based on essay length. After this conversion, the memory addressing layer takes a sample of the essay and weighs all the terms; the memory reading layer takes the output of the addressing stage and extracts the content needed to finalize the score. Finally, the output layer provides the essay's final score. The accuracy of 0.78 is considerably better than an LSTM baseline.

Mathias and Bhattacharyya ( 2018a ; b ) proposed a deep learning network that combines an LSTM with a CNN layer and GloVe pre-trained word embeddings. They retrieved features such as sentence count per essay, word count per sentence, number of out-of-vocabulary words in a sentence, language model score, and the text's perplexity. The network predicts a goodness score for each essay: the higher the goodness score, the higher the rank, and vice versa.

Nguyen and Dery ( 2016 ) proposed neural networks for automated essay grading. The method uses a single-layer bi-directional LSTM that accepts word vectors as input; with GloVe vectors it reached an accuracy of 90%.

Ruseti et al. ( 2018 ) proposed a recurrent neural network capable of memorizing text and generating a summary of an essay. A Bi-GRU network with a max-pooling layer is built on the word embeddings of each document, and the essay is scored by comparing it with a summary produced by another Bi-GRU network. The result is an accuracy of 0.55.

Wang et al. ( 2018a ; b ) proposed an automatic scoring system based on a bi-LSTM recurrent neural network with features retrieved using word2vec. Word embeddings are generated from the essay words with the skip-gram model and then used to train the neural network to produce the final score; a softmax layer in the LSTM captures the importance of each word. The method achieved a QWK score of 0.83.

Dasgupta et al. ( 2018 ) proposed a technique for essay scoring that augments textual qualitative features. It extracts three types of features, linguistic, cognitive, and psychological, associated with a text document. The linguistic features include part of speech (POS), universal dependency relations, structural well-formedness, lexical diversity, sentence cohesion, causality, and informativeness of the text; the psychological features are derived from the Linguistic Inquiry and Word Count (LIWC) tool. They implemented a convolutional recurrent neural network that takes word embeddings and sentence vectors retrieved from GloVe word vectors as input, followed by a convolution layer to find local features and a recurrent (LSTM) layer to capture the corresponding context of the text. The method achieved an average QWK of 0.764.

Liang et al. ( 2018 ) proposed a Siamese bi-directional LSTM AES model. Features are extracted from sample essays and student essays and fed to an embedding layer; the embedding output passes through a convolution layer, on which the LSTM is trained. The LSTM model includes a self-feature extraction layer that captures the essay's coherence. The average QWK score of SBLSTMA is 0.801.

Liu et al. ( 2019 ) proposed two-stage learning. In the first stage, a score is assigned based on semantic information from the essay; in the second stage, scoring is based on handcrafted features such as grammar correctness, essay length, and number of sentences. The average score over the two stages is 0.709.

Rodriguez et al. ( 2019 ) proposed a sequence-to-sequence learning model for automatic essay scoring. They used BERT (Bidirectional Encoder Representations from Transformers), which extracts sentence semantics in both directions, and XLNet, a sequence-to-sequence learning model, to extract features such as the next sentence in an essay. With these pre-trained models they captured coherence from the essay to give the final score. The average QWK score of the model is 75.5.
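
As an illustration of how a pre-trained bidirectional encoder can supply such features, the following sketch (using the Hugging Face transformers library, not the authors' exact setup) extracts a sentence embedding from BERT that a downstream scoring layer could consume:

```python
# Sketch: obtain a BERT sentence embedding for a downstream scoring head.
# Mean pooling over the last hidden state is a common, simple pooling choice.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "The essay argues that renewable energy reduces long-term costs."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

embedding = outputs.last_hidden_state.mean(dim=1)  # shape: (1, 768)
print(embedding.shape)
```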

Xia et al. ( 2019 ) proposed a two-layer bi-directional LSTM neural network for essay scoring. Features extracted with word2vec are used to train the LSTM, and the model achieves an average QWK of 0.870.

Kumar et al. ( 2019 ) proposed AutoSAS for short answer scoring. Pre-trained Word2Vec and Doc2Vec models, trained on the Google News corpus and a Wikipedia dump respectively, are used to retrieve features. Every word is first POS-tagged and weighted words are identified from the response. Prompt overlap is computed to observe how relevant the answer is to the topic, along with lexical overlaps such as noun overlap, argument overlap, and content overlap. The method also uses statistical features such as word frequency, difficulty, diversity, number of unique words in each response, type-token ratio, sentence statistics, word length, and logical-operator-based features. A random forest model is trained on a dataset of sample responses with their associated scores, and features are retrieved from both graded and ungraded short answers together with the questions. The accuracy of AutoSAS in QWK is 0.78, and it works across topics such as Science, Arts, Biology, and English.

Lun et al. ( 2020 ) proposed automatic short answer scoring with BERT. Student responses are compared with a reference answer and scores are assigned. Data augmentation is performed with the neural network: starting from one correct answer in the dataset, the remaining responses are classified as correct or incorrect.

Zhu and Sun ( 2020 ) proposed a multi-model Machine Learning approach for automated essay scoring. First, a grammar score and numerical counts such as the number of words and sentences are computed with the spaCy library. With this input, they trained single-direction and Bi-LSTM neural networks to find the final score. For the LSTM models, sentence vectors were prepared with GloVe and word embeddings with NLTK. The Bi-LSTM reads each sentence in both directions to extract semantics from the essay. The average QWK score across the multiple models is 0.70.
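
The surface counts mentioned here can be computed with spaCy roughly as follows; this is only a sketch of the counting step, since the authors' grammar-score computation is not published:

```python
# Sketch: sentence and word counts for an essay using spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")   # requires: python -m spacy download en_core_web_sm

essay = "Computers changed education. They make grading faster. Teachers still review results."
doc = nlp(essay)

num_sentences = len(list(doc.sents))
num_words = sum(1 for tok in doc if tok.is_alpha)
print(num_sentences, num_words)
```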

Ontology-based approach

Mohler et al. ( 2011 ) proposed a graph-based method to find semantic similarity for short answer scoring. They used a support vector regression model to rank the answers, with bag of words as the main feature extracted in the system.

Ramachandran et al. ( 2015 ) also proposed a graph-based approach to capture lexical semantics. Identified phrase patterns and text patterns are the features used to train a random forest regression model to score the essays. The accuracy of the model in QWK is 0.78.

Zupanc et al. ( 2017 ) proposed sentence similarity networks to find the essay's score. Ajetunmobi and Daramola ( 2017 ) recommended an ontology-based information extraction approach and domain-based ontology to find the score.

Speech response scoring

Automatic scoring takes two forms: text-based scoring and speech-based scoring. This paper has so far discussed text-based scoring and its challenges; we now cover speech scoring and the points it shares with text-based scoring. Evanini and Wang ( 2013 ) worked on speech scoring of non-native school students, extracted features with a speech rater, and trained a linear regression model, concluding that accuracy varies with voice pitch. Loukina et al. ( 2015 ) worked on feature selection from speech data and trained an SVM. Malinin et al. ( 2016 ) used neural network models to train on the data. Loukina et al. ( 2017 ) proposed combined speech- and text-based automatic scoring: they extracted text-based and speech-based features, including 33 types of features based on acoustic signals, and trained a deep neural network for speech-based scoring. Malinin et al. ( 2017 ) and Wu et al. ( 2020 ) worked on deep neural networks for spoken language assessment, incorporating and testing different types of models. Ramanarayanan et al. ( 2017 ) worked on feature extraction methods, extracting punctuation, fluency, and stress, and trained different Machine Learning models for scoring. Knill et al. ( 2018 ) worked on automatic speech recognizers and how their errors impact speech assessment.

The state of the art

This section provides an overview of the existing AES systems with a comparative study with respect to models, features applied, datasets, and evaluation metrics used for building automated essay grading systems. We divided all 62 papers into two sets; the first set of reviewed papers is compared in Table 5.

Table 5: State of the art

Comparison of all approaches

In our study, we divided the major AES approaches into three categories: regression models, classification models, and neural network models. The regression models fail to capture cohesion and coherence from the essay because they are trained on Bag of Words (BoW) features. In processing data from input to output, the regression models are less complicated than neural networks, but they are unable to find intricate patterns in the essay or to model sentence connectivity. Even in the neural network approach, if the model is trained on BoW features, it never considers the essay's cohesion and coherence.

First, to train a Machine Learning algorithm on essays, all essays are converted to vector form. Vectors can be formed with BoW, TF-IDF, or Word2vec. The BoW and Word2vec vector representations of essays are shown in Table 6. The BoW and TF-IDF representations do not incorporate the essay's semantics; they support only statistical learning from the given vector. A Word2vec vector captures the essay's semantics, but only in a unidirectional way.

Table 6: Vector representation of essays

In BoW, the vector contains the frequency of word occurrences in the essay: an entry is 1 or more depending on how often a word appears and 0 if the word is absent. The BoW vector therefore keeps no relationship with adjacent words; it describes single words only. In word2vec, the vector represents relationships between a word and other words and the prompt sentences in a multi-dimensional way. However, word2vec builds vectors unidirectionally rather than bidirectionally, so it fails to produce a correct semantic vector when a word has two meanings and the meaning depends on adjacent words. Table 7 presents a comparison of Machine Learning models and feature extraction methods.
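
The two representations can be contrasted concretely: the sketch below uses scikit-learn's CountVectorizer for BoW and gensim's Word2Vec (skip-gram) for word embeddings; the tiny two-essay corpus is only for illustration.

```python
# Sketch: BoW count vectors vs. word2vec embeddings for the same tiny corpus.
from sklearn.feature_extraction.text import CountVectorizer
from gensim.models import Word2Vec

essays = [
    "the student wrote a clear essay",
    "the essay was clear and well organized",
]

# Bag of Words: each essay becomes a vector of word frequencies (word order is lost).
bow = CountVectorizer()
print(bow.fit_transform(essays).toarray())
print(bow.get_feature_names_out())

# Word2vec: each word gets a dense vector learned from its context window (skip-gram).
tokenized = [e.split() for e in essays]
w2v = Word2Vec(tokenized, vector_size=50, window=3, min_count=1, sg=1, epochs=50)
print(w2v.wv["essay"][:5])                     # first dimensions of the embedding for "essay"
print(w2v.wv.similarity("essay", "student"))   # cosine similarity between two word vectors
```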

Table 7: Comparison of models

In AES, cohesion and coherence check the content of the essay with respect to the essay prompt, and they can be extracted from the essay in vector form. Two more parameters for assessing an essay are completeness and feedback. Completeness checks whether the student's response is sufficient, even when what the student wrote is correct. Table 8 compares all four parameters for essay grading, and Table 9 compares all approaches on various features such as grammar, spelling, organization of the essay, and relevance.

Table 8: Comparison of all models with respect to cohesion, coherence, completeness, feedback

Table 9: Comparison of all approaches on various features

What are the challenges/limitations in the current research?

From our study and the results discussed in the previous sections, many researchers have worked on automated essay scoring systems with numerous techniques: statistical methods, classification methods, and neural network approaches to evaluate essays automatically. The main goal of an automated essay grading system is to reduce human effort and improve consistency.

The vast majority of essay scoring systems focus on the efficiency of the algorithm, but many challenges remain in automated essay grading. An essay should be assessed on parameters such as the relevance of the content to the prompt, development of ideas, cohesion, coherence, and domain knowledge.

Hardly any model works on relevance of content, that is, whether the student's response or explanation is relevant to the given prompt and, if so, how appropriate it is, and there is little discussion of the cohesion and coherence of essays. Most research concentrates on extracting features with NLP libraries, training models, and testing the results, with no account of consistency and completeness in the essay evaluation system. Palma and Atkinson ( 2018 ) did describe coherence-based essay evaluation, and Zupanc and Bosnic ( 2014 ) also used coherence to evaluate essays, measuring consistency with latent semantic analysis (LSA) to find coherence in essays; the dictionary meaning of coherence is "the quality of being logical and consistent."
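
A rough sketch of how LSA can serve as a coherence proxy, projecting an essay's sentences into a latent space and averaging the similarity of adjacent sentences, is given below; it illustrates the general idea rather than the cited authors' implementations:

```python
# Sketch: an LSA-style coherence proxy = mean cosine similarity of adjacent sentences
# in a truncated-SVD (latent semantic) space built over the essay's sentences.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lsa_coherence(sentences, n_components=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    n_components = min(n_components, tfidf.shape[1] - 1)
    latent = TruncatedSVD(n_components=n_components).fit_transform(tfidf)
    sims = [cosine_similarity(latent[i:i + 1], latent[i + 1:i + 2])[0, 0]
            for i in range(len(sentences) - 1)]
    return float(np.mean(sims))

essay_sentences = [
    "Renewable energy reduces carbon emissions.",
    "Solar and wind power are two common renewable sources.",
    "My favourite food is pizza.",
]
print(lsa_coherence(essay_sentences))   # the off-topic final sentence lowers the score
```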

Another limitation is the absence of domain-knowledge-based evaluation of essays with Machine Learning models. For example, the meaning of "cell" differs between biology and physics. Many Machine Learning models extract features with Word2Vec and GloVe; these embedding techniques cannot produce adequate vectors for words that have two or more meanings.

Other challenges also influence automated essay scoring systems, as discussed below.

All these approaches work to improve the QWK score of their models, but QWK does not assess a model in terms of feature extraction or construct-irrelevant answers; it does not tell us whether the model is assessing the answer correctly. Many challenges concern students' responses to the automatic scoring system: no model has examined how to evaluate construct-irrelevant and adversarial answers. Black-box approaches such as deep learning models in particular give students more ways to bluff the automated scoring systems.

Machine Learning models that rely on statistical features are very vulnerable. According to Powers et al. ( 2001 ) and Bejar et al. ( 2014 ), E-rater failed against construct-irrelevant response strategies (CIRS). The studies of Bejar et al. ( 2013 ) and Higgins and Heilman ( 2014 ) observed that when a student response contains irrelevant content or shell language matching the prompt, it influences the final score of the essay in an automated scoring system.

In deep learning approaches, most models learn the essay's features automatically; some methods work on word-based embeddings and others on character-based embeddings. Riordan et al. ( 2019 ) found that character-based embedding systems do not prioritize spelling correction, even though spelling influences the final score of the essay. Horbach and Zesch ( 2019 ) showed that various factors influence AES systems, such as dataset size, prompt type, answer length, training set, and human scorers for content-based scoring.

Ding et al. ( 2020 ) showed that automated scoring systems are vulnerable when a student response contains many words from the prompt, that is, when prompt vocabulary is repeated in the response. Parekh et al. ( 2020 ) and Kumar et al. ( 2020 ) tested various neural network AES models by iteratively adding important words, deleting unimportant words, shuffling words, and repeating sentences in an essay, and found no change in the final scores. These neural network models fail to recognize common sense in adversarial essays and give students more ways to bluff the automated systems.

Beyond NLP and ML techniques for AES, work from Wresch ( 1993 ) to Madnani and Cahill ( 2018 ) discusses the complexity of AES systems and the standards that need to be followed, such as assessment rubrics to test subject knowledge, handling of irrelevant responses, and ethical aspects of an algorithm such as measuring the fairness of scoring student responses.

Fairness is an essential factor for automated systems. In AES, for example, fairness can be measured as the agreement between human and machine scores. In addition, according to Loukina et al. ( 2019 ), fairness standards include overall score accuracy, overall score differences, and conditional score differences between human and system scores. Scoring responses with construct-relevant and construct-irrelevant content in mind would further improve fairness.

Madnani et al. ( 2017a ; b ) discussed the fairness of AES systems for constructed responses and presented an open-source tool for detecting biases in the models; with it, one can adapt the fairness standards according to one's own fairness analysis.

Berzak et al.'s ( 2018 ) approach shows that behavioral factors are a significant challenge for automated scoring systems. Such factors help to determine language proficiency, identify word characteristics (essential words in the text), predict critical patterns in the text, find related sentences in an essay, and give a more accurate score.

Rupp ( 2018 ) discussed design, evaluation, and deployment methodologies for AES systems and provided notable characteristics to consider for deployment: model performance, evaluation metrics for the model, threshold values, dynamically updated models, and the framework.

First, model performance should be checked on different datasets and parameters before operational deployment. Evaluation metrics for AES models include QWK, the correlation coefficient, or sometimes both. Kelley and Preacher ( 2012 ) discussed three categories of threshold values, marginal, borderline, and acceptable, which can vary with data size, model performance, and the type of model (single scoring or multiple scoring models). Once a model is deployed and evaluates millions of responses, we need a dynamically updated model based on the prompt and the data. Finally, framework design matters: a framework contains the prompts to which test-takers write their responses. One can design either a single scoring model for a single methodology or multiple scoring models for multiple concepts. When multiple scoring models are deployed, each prompt can be trained separately, or a generalized model can be provided for all prompts, in which case accuracy may vary, and that is challenging.
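
QWK itself is straightforward to compute; a minimal sketch with scikit-learn's cohen_kappa_score (quadratic weights) on a small set of assumed human and system scores is shown below:

```python
# Sketch: quadratic weighted kappa (QWK) between human and system scores.
from sklearn.metrics import cohen_kappa_score

human_scores  = [2, 3, 4, 4, 1, 3, 2, 4]   # assumed human ratings
system_scores = [2, 3, 3, 4, 1, 2, 2, 4]   # assumed model predictions

qwk = cohen_kappa_score(human_scores, system_scores, weights="quadratic")
print(f"QWK = {qwk:.3f}")
```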

Our systematic literature review on automated essay grading systems first collected 542 papers with selected keywords from various databases. After applying inclusion and exclusion criteria, 139 articles remained; on these we applied quality assessment criteria with two reviewers and finally selected 62 papers for the final review.

Our observations on automated essay grading systems from 2010 to 2020 are as follows:

  • The implementation techniques of automated essay grading systems are classified into four buckets: (1) regression models, (2) classification models, (3) neural networks, and (4) ontology-based methodologies. Researchers using neural networks obtain higher accuracy than the other techniques; the state of the art for all methods is provided in Table 3.
  • The majority of the regression and classification models for essay scoring use statistical features to determine the final score. These systems are trained on parameters such as word count and sentence count; although the parameters are extracted from the essay, the algorithm does not train directly on the essay itself. The algorithm is trained on numbers derived from the essay, and if those numbers match, the composition gets a good score, otherwise a lower one. In these models the evaluation process rests entirely on numbers, irrespective of the essay, so there is a high chance of missing the coherence and relevance of the essay when the algorithm is trained on statistical parameters.
  • In the neural network approach, some models are trained on Bag of Words (BoW) features. BoW misses the word-to-word relationships and the semantic meaning of the sentence. For example, Sentence 1: "John killed Bob." Sentence 2: "Bob killed John." For both sentences, the BoW representation is simply {"John", "killed", "Bob"}; a minimal sketch after this list illustrates this order-insensitivity.
  • With the Word2Vec library, a word vector prepared from an essay in a unidirectional way captures dependencies and semantic relationships with other words. But if a word has two or more meanings, as in "bank loan" and "river bank", where "bank" has two senses and the adjacent words decide the meaning, Word2Vec does not find the real meaning of the word in the sentence.
  • The features extracted from essays in essay scoring systems fall into three types, statistical features, style-based features, and content-based features, which are explained in RQ2 and Table 3. Statistical features play a significant role in some systems and a negligible role in others. In the systems of Shehab et al. ( 2016 ), Cummins et al. ( 2016 ), Dong et al. ( 2017 ), Dong and Zhang ( 2016 ), and Mathias and Bhattacharyya ( 2018a ; b ), the assessment relies entirely on statistical and style-based features and no content-based features are retrieved. In other systems that do extract content from the essays, statistical features are used only for preprocessing and are not included in the final grading.
  • In AES systems, coherence is the main feature to consider while evaluating essays. Coherence literally means "sticking together", that is, the logical connection of sentences (local coherence) and paragraphs (global coherence) in a story. Without coherence, the sentences in a paragraph are independent and meaningless. In an essay, coherence is a significant feature that keeps everything flowing and meaningful, and it is a powerful feature for finding the semantics of the essay in an AES system. With coherence, one can assess whether all sentences are connected in a flow and all paragraphs are related and justify the prompt. Retrieving the coherence level from an essay is a critical task for all researchers in AES systems.
  • In automated essay grading systems, assessing essays with respect to content is critical, since that gives the actual score for the student. Most research uses statistical features such as sentence length, word count, and number of sentences, but according to the collected results only 32% of the systems used content-based features for essay scoring. Example papers that use content-based together with statistical features are Taghipour and Ng ( 2016 ); Persing and Ng ( 2013 ); Wang et al. ( 2018a , 2018b ); Zhao et al. ( 2017 ); Kopparapu and De ( 2016 ); Kumar et al. ( 2019 ); Mathias and Bhattacharyya ( 2018a ; b ); and Mohler and Mihalcea ( 2009 ). The results are shown in Fig. 3. Content-based features are mainly extracted with the word2vec NLP library; word2vec captures the context of a word in a document, semantic and syntactic similarity, and relations with other terms, but only in a single direction, either left or right. If a word has multiple meanings, there is a chance of missing its context in the essay. After analyzing all the papers, we found that content-based assessment is a qualitative assessment of essays.
  • On the other hand, Horbach and Zesch ( 2019 ), Riordan et al. ( 2019 ), Ding et al. ( 2020 ), and Kumar et al. ( 2020 ) showed that neural network models are vulnerable when a student response contains construct-irrelevant, adversarial answers; a student can easily bluff an automated scoring system by, for example, repeating sentences or repeating prompt words in an essay. As Loukina et al. ( 2019 ) and Madnani et al. ( 2017b ) argue, the fairness of an algorithm is an essential factor to consider in AES systems.
  • In speech assessment, the datasets contain audio clips of up to one minute. The feature extraction techniques are entirely different from text assessment, and accuracy varies with speaking fluency, pitch, male versus female voices, and child versus adult voices, but the training algorithms are the same for text and speech assessment.
  • Once an AES system can evaluate essays and short answers accurately in all respects, there will be massive demand for automated systems in education and related areas. AES systems are already deployed in the GRE and TOEFL exams; beyond these, they could be deployed in massive open online courses such as Coursera (“ https://coursera.org/learn//machine-learning//exam ”) and NPTEL ( https://swayam.gov.in/explorer ), which still assess student performance with multiple-choice questions. From another perspective, AES systems can be deployed in information retrieval systems such as Quora and Stack Overflow to check whether a retrieved response is appropriate to the question and to rank the retrieved answers.
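
As referenced in the Bag of Words bullet above, a minimal sketch shows that BoW assigns identical vectors to "John killed Bob" and "Bob killed John", discarding who did what to whom:

```python
# Sketch: BoW gives the same vector to both sentences, so word order (and meaning) is lost.
from sklearn.feature_extraction.text import CountVectorizer

sentences = ["John killed Bob", "Bob killed John"]
vec = CountVectorizer()
X = vec.fit_transform(sentences).toarray()

print(vec.get_feature_names_out())   # ['bob' 'john' 'killed']
print(X[0], X[1])                    # identical count vectors
print((X[0] == X[1]).all())          # True
```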

Conclusion and future work

In our systematic literature review we studied 62 papers. Significant challenges remain for researchers implementing automated essay grading systems, and several researchers are working rigorously on building robust AES systems despite the difficulty of the problem. Not all evaluation methods are assessed on coherence, relevance, completeness, feedback, and domain knowledge. Around 90% of essay grading systems used the Kaggle ASAP (2012) dataset, which contains general student essays that require no domain knowledge, so there is a need for domain-specific essay datasets for training and testing. Feature extraction relies on the NLTK, Word2Vec, and GloVe NLP libraries, which have many limitations when converting a sentence into vector form. Apart from feature extraction and training Machine Learning models, no system assesses the essay's completeness, provides feedback on the student's response, or retrieves coherence vectors from the essay. From another perspective, construct-irrelevant and adversarial student responses still call AES systems into question.

Our proposed research will pursue content-based assessment of essays with domain knowledge and score essays for internal and external consistency. We will also create a new dataset for a single domain, and another area for improvement is the feature extraction techniques.

This study included only four digital databases for study selection and may therefore miss some relevant studies on the topic. However, we hope that we covered most of the significant studies, as we also manually collected papers published in relevant journals.


Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Dadi Ramesh, Email: dadiramesh44@gmail.com.

Suresh Kumar Sanampudi, Email: sureshsanampudi@jntuh.ac.in.

  • Adamson, A., Lamb, A., & December, R. M. (2014). Automated Essay Grading.
  • Ajay HB, Tillett PI, Page EB (1973) Analysis of essays by computer (AEC-II) (No. 8-0102). Washington, DC: U.S. Department of Health, Education, and Welfare, Office of Education, National Center for Educational Research and Development
  • Ajetunmobi SA, Daramola O (2017) Ontology-based information extraction for subject-focussed automatic essay evaluation. In: 2017 International Conference on Computing Networking and Informatics (ICCNI) p 1–6. IEEE
  • Alva-Manchego F, et al. (2019) EASSE: Easier Automatic Sentence Simplification Evaluation.” ArXiv abs/1908.04567 (2019): n. pag
  • Bailey S, Meurers D (2008) Diagnosing meaning errors in short answers to reading comprehension questions. In: Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications (Columbus), p 107–115
  • Basu S, Jacobs C, Vanderwende L. Powergrading: a clustering approach to amplify human effort for short answer grading. Trans Assoc Comput Linguist (TACL) 2013; 1 :391–402. doi: 10.1162/tacl_a_00236. [ CrossRef ] [ Google Scholar ]
  • Bejar, I. I., Flor, M., Futagi, Y., & Ramineni, C. (2014). On the vulnerability of automated scoring to construct-irrelevant response strategies (CIRS): An illustration. Assessing Writing, 22, 48-59.
  • Bejar I, et al. (2013) Length of Textual Response as a Construct-Irrelevant Response Strategy: The Case of Shell Language. Research Report. ETS RR-13-07.” ETS Research Report Series (2013): n. pag
  • Berzak Y, et al. (2018) “Assessing Language Proficiency from Eye Movements in Reading.” ArXiv abs/1804.07329 (2018): n. pag
  • Blanchard D, Tetreault J, Higgins D, Cahill A, Chodorow M (2013) TOEFL11: A corpus of non-native English. ETS Research Report Series, 2013(2):i–15, 2013
  • Blood, I. (2011). Automated essay scoring: a literature review. Studies in Applied Linguistics and TESOL, 11(2).
  • Burrows S, Gurevych I, Stein B. The eras and trends of automatic short answer grading. Int J Artif Intell Educ. 2015; 25 :60–117. doi: 10.1007/s40593-014-0026-8. [ CrossRef ] [ Google Scholar ]
  • Cader, A. (2020, July). The Potential for the Use of Deep Neural Networks in e-Learning Student Evaluation with New Data Augmentation Method. In International Conference on Artificial Intelligence in Education (pp. 37–42). Springer, Cham.
  • Cai C (2019) Automatic essay scoring with recurrent neural network. In: Proceedings of the 3rd International Conference on High Performance Compilation, Computing and Communications (2019): n. pag.
  • Chen M, Li X (2018) "Relevance-Based Automated Essay Scoring via Hierarchical Recurrent Model. In: 2018 International Conference on Asian Language Processing (IALP), Bandung, Indonesia, 2018, p 378–383, doi: 10.1109/IALP.2018.8629256
  • Chen Z, Zhou Y (2019) "Research on Automatic Essay Scoring of Composition Based on CNN and OR. In: 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, p 13–18, doi: 10.1109/ICAIBD.2019.8837007
  • Contreras JO, Hilles SM, Abubakar ZB (2018) Automated essay scoring with ontology based on text mining and NLTK tools. In: 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE), 1-6
  • Correnti R, Matsumura LC, Hamilton L, Wang E. Assessing students’ skills at writing analytically in response to texts. Elem Sch J. 2013; 114 (2):142–177. doi: 10.1086/671936. [ CrossRef ] [ Google Scholar ]
  • Cummins, R., Zhang, M., & Briscoe, E. (2016, August). Constrained multi-task learning for automated essay scoring. Association for Computational Linguistics.
  • Darwish SM, Mohamed SK (2020) Automated essay evaluation based on fusion of fuzzy ontology and latent semantic analysis. In: Hassanien A, Azar A, Gaber T, Bhatnagar RF, Tolba M (eds) The International Conference on Advanced Machine Learning Technologies and Applications
  • Dasgupta T, Naskar A, Dey L, Saha R (2018) Augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring. In: Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications p 93–102
  • Ding Y, et al. (2020) "Don’t take “nswvtnvakgxpm” for an answer–The surprising vulnerability of automatic content scoring systems to adversarial input." In: Proceedings of the 28th International Conference on Computational Linguistics
  • Dong F, Zhang Y (2016) Automatic features for essay scoring–an empirical study. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing p 1072–1077
  • Dong F, Zhang Y, Yang J (2017) Attention-based recurrent convolutional neural network for automatic essay scoring. In: Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017) p 153–162
  • Dzikovska M, Nielsen R, Brew C, Leacock C, Giampiccolo D, Bentivogli L, Clark P, Dagan I, Dang HT (2013a) Semeval-2013 task 7: The joint student response analysis and 8th recognizing textual entailment challenge
  • Dzikovska MO, Nielsen R, Brew C, Leacock C, Giampiccolo D, Bentivogli L, Clark P, Dagan I, Trang Dang H (2013b) SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge. *SEM 2013: The First Joint Conference on Lexical and Computational Semantics
  • Educational Testing Service (2008) CriterionSM online writing evaluation service. Retrieved from http://www.ets.org/s/criterion/pdf/9286_CriterionBrochure.pdf .
  • Evanini, K., & Wang, X. (2013, August). Automated speech scoring for non-native middle school students with multiple task types. In INTERSPEECH (pp. 2435–2439).
  • Foltz PW, Laham D, Landauer TK (1999) The Intelligent Essay Assessor: Applications to Educational Technology. Interactive Multimedia Electronic Journal of Computer-Enhanced Learning, 1, 2, http://imej.wfu.edu/articles/1999/2/04/ index.asp
  • Granger, S., Dagneaux, E., Meunier, F., & Paquot, M. (Eds.). (2009). International corpus of learner English. Louvain-la-Neuve: Presses universitaires de Louvain.
  • Higgins D, Heilman M. Managing what we can measure: quantifying the susceptibility of automated scoring systems to gaming behavior” Educ Meas Issues Pract. 2014; 33 :36–46. doi: 10.1111/emip.12036. [ CrossRef ] [ Google Scholar ]
  • Horbach A, Zesch T. The influence of variance in learner answers on automatic content scoring. Front Educ. 2019; 4 :28. doi: 10.3389/feduc.2019.00028. [ CrossRef ] [ Google Scholar ]
  • https://www.coursera.org/learn/machine-learning/exam/7pytE/linear-regression-with-multiple-variables/attempt
  • Hussein, M. A., Hassan, H., & Nassef, M. (2019). Automated language essay scoring systems: A literature review. PeerJ Computer Science, 5, e208. [ PMC free article ] [ PubMed ]
  • Ke Z, Ng V (2019) “Automated essay scoring: a survey of the state of the art.” IJCAI
  • Ke, Z., Inamdar, H., Lin, H., & Ng, V. (2019, July). Give me more feedback II: Annotating thesis strength and related attributes in student essays. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 3994-4004).
  • Kelley K, Preacher KJ. On effect size. Psychol Methods. 2012; 17 (2):137–152. doi: 10.1037/a0028086. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kitchenham B, Brereton OP, Budgen D, Turner M, Bailey J, Linkman S. Systematic literature reviews in software engineering–a systematic literature review. Inf Softw Technol. 2009; 51 (1):7–15. doi: 10.1016/j.infsof.2008.09.009. [ CrossRef ] [ Google Scholar ]
  • Klebanov, B. B., & Madnani, N. (2020, July). Automated evaluation of writing–50 years and counting. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 7796–7810).
  • Knill K, Gales M, Kyriakopoulos K, et al. (4 more authors) (2018) Impact of ASR performance on free speaking language assessment. In: Interspeech 2018.02–06 Sep 2018, Hyderabad, India. International Speech Communication Association (ISCA)
  • Kopparapu SK, De A (2016) Automatic ranking of essays using structural and semantic features. In: 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), p 519–523
  • Kumar, Y., Aggarwal, S., Mahata, D., Shah, R. R., Kumaraguru, P., & Zimmermann, R. (2019, July). Get it scored using autosas—an automated system for scoring short answers. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 9662–9669).
  • Kumar Y, et al. (2020) “Calling out bluff: attacking the robustness of automatic scoring systems with simple adversarial testing.” ArXiv abs/2007.06796
  • Li X, Chen M, Nie J, Liu Z, Feng Z, Cai Y (2018) Coherence-Based Automated Essay Scoring Using Self-attention. In: Sun M, Liu T, Wang X, Liu Z, Liu Y (eds) Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data. CCL 2018, NLP-NABD 2018. Lecture Notes in Computer Science, vol 11221. Springer, Cham. 10.1007/978-3-030-01716-3_32
  • Liang G, On B, Jeong D, Kim H, Choi G. Automated essay scoring: a siamese bidirectional LSTM neural network architecture. Symmetry. 2018; 10 :682. doi: 10.3390/sym10120682. [ CrossRef ] [ Google Scholar ]
  • Liua, H., Yeb, Y., & Wu, M. (2018, April). Ensemble Learning on Scoring Student Essay. In 2018 International Conference on Management and Education, Humanities and Social Sciences (MEHSS 2018). Atlantis Press.
  • Liu J, Xu Y, Zhao L (2019) Automated Essay Scoring based on Two-Stage Learning. ArXiv, abs/1901.07744
  • Loukina A, et al. (2015) Feature selection for automated speech scoring.” BEA@NAACL-HLT
  • Loukina A, et al. (2017) “Speech- and Text-driven Features for Automated Scoring of English-Speaking Tasks.” SCNLP@EMNLP 2017
  • Loukina A, et al. (2019) The many dimensions of algorithmic fairness in educational applications. BEA@ACL
  • Lun J, Zhu J, Tang Y, Yang M (2020) Multiple data augmentation strategies for improving performance on automatic short answer scoring. In: Proceedings of the AAAI Conference on Artificial Intelligence, 34(09): 13389-13396
  • Madnani, N., & Cahill, A. (2018, August). Automated scoring: Beyond natural language processing. In Proceedings of the 27th International Conference on Computational Linguistics (pp. 1099–1109).
  • Madnani N, et al. (2017b) “Building better open-source tools to support fairness in automated scoring.” EthNLP@EACL
  • Malinin A, et al. (2016) “Off-topic response detection for spontaneous spoken english assessment.” ACL
  • Malinin A, et al. (2017) “Incorporating uncertainty into deep learning for spoken language assessment.” ACL
  • Mathias S, Bhattacharyya P (2018a) Thank “Goodness”! A Way to Measure Style in Student Essays. In: Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications p 35–41
  • Mathias S, Bhattacharyya P (2018b) ASAP++: Enriching the ASAP automated essay grading dataset with essay attribute scores. In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
  • Mikolov T, et al. (2013) “Efficient Estimation of Word Representations in Vector Space.” ICLR
  • Mohler M, Mihalcea R (2009) Text-to-text semantic similarity for automatic short answer grading. In: Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009) p 567–575
  • Mohler M, Bunescu R, Mihalcea R (2011) Learning to grade short answer questions using semantic similarity measures and dependency graph alignments. In: Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies p 752–762
  • Muangkammuen P, Fukumoto F (2020) Multi-task Learning for Automated Essay Scoring with Sentiment Analysis. In: Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop p 116–123
  • Nguyen, H., & Dery, L. (2016). Neural networks for automated essay grading. CS224d Stanford Reports, 1–11.
  • Palma D, Atkinson J. Coherence-based automatic essay assessment. IEEE Intell Syst. 2018; 33 (5):26–36. doi: 10.1109/MIS.2018.2877278. [ CrossRef ] [ Google Scholar ]
  • Parekh S, et al (2020) My Teacher Thinks the World Is Flat! Interpreting Automatic Essay Scoring Mechanism.” ArXiv abs/2012.13872 (2020): n. pag
  • Pennington, J., Socher, R., & Manning, C. D. (2014, October). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532–1543).
  • Persing I, Ng V (2013) Modeling thesis clarity in student essays. In:Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) p 260–269
  • Powers DE, Burstein JC, Chodorow M, Fowles ME, Kukich K. Stumping E-Rater: challenging the validity of automated essay scoring. ETS Res Rep Ser. 2001; 2001 (1):i–44. [ Google Scholar ]
  • Powers DE, Burstein JC, Chodorow M, Fowles ME, Kukich K. Stumping e-rater: challenging the validity of automated essay scoring. Comput Hum Behav. 2002; 18 (2):103–134. doi: 10.1016/S0747-5632(01)00052-8. [ CrossRef ] [ Google Scholar ]
  • Ramachandran L, Cheng J, Foltz P (2015) Identifying patterns for short answer scoring using graph-based lexico-semantic text matching. In: Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications p 97–106
  • Ramanarayanan V, et al. (2017) “Human and Automated Scoring of Fluency, Pronunciation and Intonation During Human-Machine Spoken Dialog Interactions.” INTERSPEECH
  • Riordan B, Horbach A, Cahill A, Zesch T, Lee C (2017) Investigating neural architectures for short answer scoring. In: Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications p 159–168
  • Riordan B, Flor M, Pugh R (2019) "How to account for misspellings: Quantifying the benefit of character representations in neural content scoring models."In: Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications
  • Rodriguez P, Jafari A, Ormerod CM (2019) Language models and Automated Essay Scoring. ArXiv, abs/1909.09482
  • Rudner, L. M., & Liang, T. (2002). Automated essay scoring using Bayes' theorem. The Journal of Technology, Learning and Assessment, 1(2).
  • Rudner, L. M., Garcia, V., & Welch, C. (2006). An evaluation of IntelliMetric™ essay scoring system. The Journal of Technology, Learning and Assessment, 4(4).
  • Rupp A. Designing, evaluating, and deploying automated scoring systems with validity in mind: methodological design decisions. Appl Meas Educ. 2018; 31 :191–214. doi: 10.1080/08957347.2018.1464448. [ CrossRef ] [ Google Scholar ]
  • Ruseti S, Dascalu M, Johnson AM, McNamara DS, Balyan R, McCarthy KS, Trausan-Matu S (2018) Scoring summaries using recurrent neural networks. In: International Conference on Intelligent Tutoring Systems p 191–201. Springer, Cham
  • Sakaguchi K, Heilman M, Madnani N (2015) Effective feature integration for automated short answer scoring. In: Proceedings of the 2015 conference of the North American Chapter of the association for computational linguistics: Human language technologies p 1049–1054
  • Salim, Y., Stevanus, V., Barlian, E., Sari, A. C., & Suhartono, D. (2019, December). Automated English Digital Essay Grader Using Machine Learning. In 2019 IEEE International Conference on Engineering, Technology and Education (TALE) (pp. 1–6). IEEE.
  • Shehab A, Elhoseny M, Hassanien AE (2016) A hybrid scheme for Automated Essay Grading based on LVQ and NLP techniques. In: 12th International Computer Engineering Conference (ICENCO), Cairo, 2016, p 65-70
  • Shermis MD, Mzumara HR, Olson J, Harrington S. On-line grading of student essays: PEG goes on the World Wide Web. Assess Eval High Educ. 2001; 26 (3):247–259. doi: 10.1080/02602930120052404. [ CrossRef ] [ Google Scholar ]
  • Stab C, Gurevych I (2014) Identifying argumentative discourse structures in persuasive essays. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) p 46–56
  • Sultan MA, Salazar C, Sumner T (2016) Fast and easy short answer grading with high accuracy. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies p 1070–1075
  • Süzen, N., Gorban, A. N., Levesley, J., & Mirkes, E. M. (2020). Automatic short answer grading and feedback using text mining methods. Procedia Computer Science, 169, 726–743.
  • Taghipour K, Ng HT (2016) A neural approach to automated essay scoring. In: Proceedings of the 2016 conference on empirical methods in natural language processing p 1882–1891
  • Tashu TM (2020) "Off-Topic Essay Detection Using C-BGRU Siamese. In: 2020 IEEE 14th International Conference on Semantic Computing (ICSC), San Diego, CA, USA, p 221–225, doi: 10.1109/ICSC.2020.00046
  • Tashu TM, Horváth T (2019) A layered approach to automatic essay evaluation using word-embedding. In: McLaren B, Reilly R, Zvacek S, Uhomoibhi J (eds) Computer Supported Education. CSEDU 2018. Communications in Computer and Information Science, vol 1022. Springer, Cham
  • Tashu TM, Horváth T (2020) Semantic-Based Feedback Recommendation for Automatic Essay Evaluation. In: Bi Y, Bhatia R, Kapoor S (eds) Intelligent Systems and Applications. IntelliSys 2019. Advances in Intelligent Systems and Computing, vol 1038. Springer, Cham
  • Uto M, Okano M (2020) Robust Neural Automated Essay Scoring Using Item Response Theory. In: Bittencourt I, Cukurova M, Muldner K, Luckin R, Millán E (eds) Artificial Intelligence in Education. AIED 2020. Lecture Notes in Computer Science, vol 12163. Springer, Cham
  • Wang Z, Liu J, Dong R (2018a) Intelligent Auto-grading System. In: 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS) p 430–435. IEEE.
  • Wang Y, et al. (2018b) “Automatic Essay Scoring Incorporating Rating Schema via Reinforcement Learning.” EMNLP
  • Zhu W, Sun Y (2020) Automated essay scoring system using multi-model Machine Learning. In: Wyld DC et al. (eds) MLNLP, BDIOT, ITCCMA, CSITY, DTMN, AIFZ, SIGPRO
  • Wresch W. The Imminence of Grading Essays by Computer-25 Years Later. Comput Compos. 1993; 10 :45–58. doi: 10.1016/S8755-4615(05)80058-1. [ CrossRef ] [ Google Scholar ]
  • Wu, X., Knill, K., Gales, M., & Malinin, A. (2020). Ensemble approaches for uncertainty in spoken language assessment.
  • Xia L, Liu J, Zhang Z (2019) Automatic Essay Scoring Model Based on Two-Layer Bi-directional Long-Short Term Memory Network. In: Proceedings of the 2019 3rd International Conference on Computer Science and Artificial Intelligence p 133–137
  • Yannakoudakis H, Briscoe T, Medlock B (2011) A new dataset and method for automatically grading ESOL texts. In: Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies p 180–189
  • Zhao S, Zhang Y, Xiong X, Botelho A, Heffernan N (2017) A memory-augmented neural model for automated grading. In: Proceedings of the Fourth (2017) ACM Conference on Learning@ Scale p 189–192
  • Zupanc K, Bosnic Z (2014) Automated essay evaluation augmented with semantic coherence measures. In: 2014 IEEE International Conference on Data Mining p 1133–1138. IEEE.
  • Zupanc K, Savić M, Bosnić Z, Ivanović M (2017) Evaluating coherence of essays using sentence-similarity networks. In: Proceedings of the 18th International Conference on Computer Systems and Technologies p 65–72
  • Dzikovska, M. O., Nielsen, R., & Brew, C. (2012, June). Towards effective tutorial feedback for explanation questions: A dataset and baselines. In  Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies  (pp. 200-210).
  • Kumar, N., & Dey, L. (2013, November). Automatic Quality Assessment of documents with application to essay grading. In 2013 12th Mexican International Conference on Artificial Intelligence (pp. 216–222). IEEE.
  • Wu, S. H., & Shih, W. F. (2018, July). A short answer grading system in chinese by support vector approach. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications (pp. 125-129).
  • Agung Putri Ratna, A., Lalita Luhurkinanti, D., Ibrahim I., Husna D., Dewi Purnamasari P. (2018). Automatic Essay Grading System for Japanese Language Examination Using Winnowing Algorithm, 2018 International Seminar on Application for Technology of Information and Communication, 2018, pp. 565–569. 10.1109/ISEMANTIC.2018.8549789.
  • Sharma A., & Jayagopi D. B. (2018). Automated Grading of Handwritten Essays 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2018, pp 279–284. 10.1109/ICFHR-2018.2018.00056

Computer Based Information System

Computer Based Information System (CBIS) is an information system in which the computer plays a major role. Such a system consists of the following elements:

  • Hardware: The term hardware refers to machinery. This category includes the computer itself, which is often referred to as the central processing unit (CPU), and all of its support equipment. The support equipment includes input and output devices, storage devices, and communications devices.
  • Software: The term software refers to computer programs and the manuals (if any) that support them. Computer programs are machine-readable instructions that direct the circuitry within the hardware parts of the Computer Based Information System (CBIS) to function in ways that produce useful information from data. Programs are generally stored on some input/output medium, often a disk or tape.
  • Data: Data are facts that are used by programs to produce useful information. Like programs, data are generally stored in machine-readable form on disk or tape until the computer needs them.
  • Procedures: Procedures are the policies that govern the operation of a computer system. “Procedures are to people what software is to hardware” is a common analogy that is used to illustrate the role of procedures in a CBIS.
  • People: Every Computer Based Information System (CBIS) needs people if it is to be useful. Often the most overlooked element of the CBIS is the people, probably the component that most influences the success or failure of an information system.

Types of Computer Based Information Systems: 


Transaction Processing Systems

The most fundamental computer-based system in an organisation pertains to the processing of business transactions. A transaction processing system can be defined as a computer-based system that captures, classifies, stores, maintains, updates, and retrieves transaction data for record keeping and for input to other types of CBIS. Transaction processing systems are aimed at improving the routine business activities on which all organisations depend. A transaction is any event or activity that affects the whole organisation. Placing orders, billing customers, hiring employees, and depositing cheques are some of the common transactions.

The types of transactions that occur vary from organisation to organisation, but it is true that all organisations process transactions as a major part of their daily business activities. The most successful organisations perform this transaction processing in a very systematic way. Transaction processing systems provide speed and accuracy and can be programmed to follow routines without any variance.

Management Information Systems

Data processing by computers has been extremely effective for several reasons.

The main reason is that huge amounts of data relating to accounts and other transactions can be processed very quickly. Earlier, most computer applications were concerned with record keeping and the automation of routine clerical processes. In recent years, however, increasing attention has been focused on computer applications that provide information for policy making, management planning, and control purposes. MIS are more concerned with the management function. An MIS can be described as an information system that provides all levels of management with the information essential to the smooth running of the business.

Decision Support Systems

It is an information system that offers the kind of information that may not be predictable, the kind that business professionals may need only once. These systems do not produce regularly scheduled management reports. Instead, they are designed to respond to a wide range of requests. It is true that all the decisions in an organisation are not of a recurring nature.

Decision support systems assist managers who must make decisions that are not highly structured, often called unstructured or semi-structured decisions. A decision is considered unstructured if there are no clear procedures for making it and if not all the factors to be considered can be readily identified in advance. The judgement of the manager plays a vital role in decision making where the problem is not structured. The decision support system supports, but does not replace, the judgement of the manager.

Office Automation Systems

Office automation systems are among the newest and most rapidly expanding computer-based information systems. They are being developed with the hope and expectation that they will increase the efficiency and productivity of office workers such as typists, secretaries, administrative assistants, staff professionals, and managers. Many organisations have taken the first step toward automating their offices. Often this step involves the use of word processing equipment to facilitate the typing, storing, revising, and printing of textual materials.

Another development is the computer-based communications system, such as electronic mail, which allows people to communicate in an electronic mode through computer terminals. An office automation system can be described as a multi-function, integrated computer-based system that allows many office activities to be performed in an electronic mode.


Computer-based technology and student engagement: a critical review of the literature

Laura A. Schindler, Gary J. Burkholder, Osama A. Morad & Craig Marsh

International Journal of Educational Technology in Higher Education, volume 14, Article number 25 (2017). Review article, open access; published 02 October 2017.


Abstract

Computer-based technology has infiltrated many aspects of life and industry, yet there is little understanding of how it can be used to promote student engagement, a concept receiving strong attention in higher education due to its association with a number of positive academic outcomes. The purpose of this article is to present a critical review of the literature from the past 5 years related to how web-conferencing software, blogs, wikis, social networking sites (Facebook and Twitter), and digital games influence student engagement. We prefaced the findings with a substantive overview of student engagement definitions and indicators, which revealed three types of engagement (behavioral, emotional, and cognitive) that informed how we classified articles. Our findings suggest that digital games provide the most far-reaching influence across different types of student engagement, followed by web-conferencing and Facebook. Findings regarding wikis, blogs, and Twitter are less conclusive and significantly limited in the number of studies conducted within the past 5 years. Overall, the findings provide preliminary support that computer-based technology influences student engagement; however, additional research is needed to confirm and build on these findings. We conclude the article by providing a list of recommendations for practice, with the intent of increasing understanding of how computer-based technology may be purposefully implemented to achieve the greatest gains in student engagement.

Introduction

The digital revolution has profoundly affected daily living, evident in the ubiquity of mobile devices and the seamless integration of technology into common tasks such as shopping, reading, and finding directions (Anderson, 2016 ; Smith & Anderson, 2016 ; Zickuhr & Raine, 2014 ). The use of computers, mobile devices, and the Internet is at its highest level to date and expected to continue to increase as technology becomes more accessible, particularly for users in developing countries (Poushter, 2016 ). In addition, there is a growing number of people who are smartphone dependent, relying solely on smartphones for Internet access (Anderson & Horrigan, 2016 ) rather than more expensive devices such as laptops and tablets. Greater access to and demand for technology has presented unique opportunities and challenges for many industries, some of which have thrived by effectively digitizing their operations and services (e.g., finance, media) and others that have struggled to keep up with the pace of technological innovation (e.g., education, healthcare) (Gandhi, Khanna, & Ramaswamy, 2016 ).

Integrating technology into teaching and learning is not a new challenge for universities. Since the 1900s, administrators and faculty have grappled with how to effectively use technical innovations such as video and audio recordings, email, and teleconferencing to augment or replace traditional instructional delivery methods (Kaware & Sain, 2015 ; Westera, 2015 ). Within the past two decades, however, this challenge has been much more difficult due to the sheer volume of new technologies on the market. For example, in the span of 7 years (from 2008 to 2015), the number of active apps in Apple’s App Store increased from 5000 to 1.75 million. Over the next 4 years, the number of apps is projected to rise by 73%, totaling over 5 million (Nelson, 2016 ). Further compounding this challenge is the limited shelf life of new devices and software combined with significant internal organizational barriers that hinder universities from efficiently and effectively integrating new technologies (Amirault, 2012 ; Kinchin, 2012 ; Linder-VanBerschot & Summers 2015 ; Westera, 2015 ).

Many organizational barriers to technology integration arise from competing tensions between institutional policy and practice and faculty beliefs and abilities. For example, university administrators may view technology as a tool to attract and retain students, whereas faculty may struggle to determine how technology coincides with existing pedagogy (Lawrence & Lentle-Keenan, 2013 ; Lin, Singer, & Ha, 2010 ). In addition, some faculty may be hesitant to use technology due to lack of technical knowledge and/or skepticism about the efficacy of technology to improve student learning outcomes (Ashrafzadeh & Sayadian, 2015 ; Buchanan, Sainter, & Saunders, 2013 ; Hauptman, 2015 ; Johnson, 2013 ; Kidd, Davis, & Larke, 2016 ; Kopcha, Rieber, & Walker, 2016 ; Lawrence & Lentle-Keenan, 2013 ; Lewis, Fretwell, Ryan, & Parham, 2013 ; Reid, 2014 ). Organizational barriers to technology adoption are particularly problematic given the growing demands and perceived benefits among students about using technology to learn (Amirault, 2012 ; Cassidy et al., 2014 ; Gikas & Grant, 2013 ; Paul & Cochran, 2013 ). Surveys suggest that two-thirds of students use mobile devices for learning and believe that technology can help them achieve learning outcomes and better prepare them for a workforce that is increasingly dependent on technology (Chen, Seilhamer, Bennett, & Bauer, 2015 ; Dahlstrom, 2012 ). Universities that fail to effectively integrate technology into the learning experience miss opportunities to improve student outcomes and meet the expectations of a student body that has grown accustomed to the integration of technology into every facet of life (Amirault, 2012 ; Cook & Sonnenberg, 2014 ; Revere & Kovach, 2011 ; Sun & Chen, 2016 ; Westera, 2015 ).

The purpose of this paper is to provide a literature review on how computer-based technology influences student engagement within higher education settings. We focused on computer-based technology given the specific types of technologies (i.e., web-conferencing software, blogs, wikis, social networking sites, and digital games) that emerged from a broad search of the literature, which is described in more detail below. Computer-based technology (hereafter referred to as technology) requires the use of specific hardware, software, and microprocessing features available on a computer or mobile device. We also focused on student engagement as the dependent variable of interest because it encompasses many different aspects of the teaching and learning process (Bryson & Hand, 2007; Fredricks, Blumenfeld, & Paris, 2004; Wimpenny & Savin-Baden, 2013), compared to narrower variables in the literature such as final grades or exam scores. Furthermore, student engagement has received significant attention over the past several decades due to shifts towards student-centered, constructivist instructional methods (Haggis, 2009; Wright, 2011), mounting pressures to improve teaching and learning outcomes (Axelson & Flick, 2011; Kuh, 2009), and promising studies suggesting relationships between student engagement and positive academic outcomes (Carini, Kuh, & Klein, 2006; Center for Postsecondary Research, 2016; Hu & McCormick, 2012). Despite the interest in student engagement and the demand for more technology in higher education, there are no articles offering a comprehensive review of how these two variables intersect. Similarly, while many existing student engagement conceptual models have expanded to include factors that influence student engagement, none highlight the overt role of technology in the engagement process (Kahu, 2013; Lam, Wong, Yang, & Yi, 2012; Nora, Barlow, & Crisp, 2005; Wimpenny & Savin-Baden, 2013; Zepke & Leach, 2010).

Our review aims to address existing gaps in the student engagement literature and seeks to determine whether student engagement models should be expanded to include technology. The review also addresses some of the organizational barriers to technology integration (e.g., faculty uncertainty and skepticism about technology) by providing a comprehensive account of the research evidence regarding how technology influences student engagement. One limitation of the literature, however, is the lack of detail regarding how teaching and learning practices were used to select and integrate technology into learning. For example, the methodology section of many studies does not include a pedagogical justification for why a particular technology was used or details about the design of the learning activity itself. Therefore, it often is unclear how teaching and learning practices may have affected student engagement levels. We revisit this issue in more detail at the end of this paper in our discussions of areas for future research and recommendations for practice. We initiated our literature review by conducting a broad search for articles published within the past 5 years, using the key words technology and higher education, in Google Scholar and the following research databases: Academic Search Complete, Communication & Mass Media Complete, Computers & Applied Sciences Complete, Education Research Complete, ERIC, PsycARTICLES, and PsycINFO. Our initial search revealed themes regarding which technologies were most prevalent in the literature (e.g., social networking, digital games), which then led to several more targeted searches of the same databases using specific keywords such as Facebook and student engagement. After both broad and targeted searches, we identified five technologies (web-conferencing software, blogs, wikis, social networking sites, and digital games) to include in our review.

We chose to focus on technologies for which there were multiple studies published, allowing us to identify areas of convergence and divergence in the literature and draw conclusions about positive and negative effects on student engagement. In total, we identified 69 articles relevant to our review, with 36 pertaining to social networking sites (21 for Facebook and 15 for Twitter ), 14 pertaining to digital games, seven pertaining to wikis, and six pertaining to blogs and web-conferencing software respectively. Articles were categorized according to their influence on specific types of student engagement, which will be described in more detail below. In some instances, one article pertained to multiple types of engagement. In the sections that follow, we will provide an overview of student engagement, including an explanation of common definitions and indicators of engagement, followed by a synthesis of how each type of technology influences student engagement. Finally, we will discuss areas for future research and make recommendations for practice.

Student engagement

Interest in student engagement began over 70 years ago with Ralph Tyler’s research on the relationship between time spent on coursework and learning (Axelson & Flick, 2011 ; Kuh, 2009 ). Since then, the study of student engagement has evolved and expanded considerably, through the seminal works of Pace ( 1980 ; 1984 ) and Astin ( 1984 ) about how quantity and quality of student effort affect learning and many more recent studies on the environmental conditions and individual dispositions that contribute to student engagement (Bakker, Vergel, & Kuntze, 2015 ; Gilboy, Heinerichs, & Pazzaglia, 2015 ; Martin, Goldwasser, & Galentino, 2017 ; Pellas, 2014 ). Perhaps the most well-known resource on student engagement is the National Survey of Student Engagement (NSSE), an instrument designed to assess student participation in various educational activities (Kuh, 2009 ). The NSSE and other engagement instruments like it have been used in many studies that link student engagement to positive student outcomes such as higher grades, retention, persistence, and completion (Leach, 2016 ; McClenney, Marti, & Adkins, 2012 ; Trowler & Trowler, 2010 ), further convincing universities that student engagement is an important factor in the teaching and learning process. However, despite the increased interest in student engagement, its meaning is generally not well understood or agreed upon.

Student engagement is a broad and complex phenomenon for which there are many definitions grounded in psychological, social, and/or cultural perspectives (Fredricks et al., 2004; Wimpenny & Savin-Baden, 2013; Zepke & Leach, 2010). Review of definitions revealed that student engagement is defined in two ways. One set of definitions refers to student engagement as a desired outcome reflective of a student's thoughts, feelings, and behaviors about learning. For example, Kahu (2013) defines student engagement as an "individual psychological state" that includes a student's affect, cognition, and behavior (p. 764). Other definitions focus primarily on student behavior, suggesting that engagement is the "extent to which students are engaging in activities that higher education research has shown to be linked with high-quality learning outcomes" (Krause & Coates, 2008, p. 493) or the "quality of effort and involvement in productive learning activities" (Kuh, 2009, p. 6). Another set of definitions refers to student engagement as a process involving both the student and the university. For example, Trowler (2010) defined student engagement as "the interaction between the time, effort and other relevant resources invested by both students and their institutions intended to optimize the student experience and enhance the learning outcomes and development of students and the performance, and reputation of the institution" (p. 2). Similarly, the NSSE website indicates that student engagement is "the amount of time and effort students put into their studies and other educationally purposeful activities" as well as "how the institution deploys its resources and organizes the curriculum and other learning opportunities to get students to participate in activities that decades of research studies show are linked to student learning" (Center for Postsecondary Research, 2017, para. 1).

Many existing models of student engagement reflect the latter set of definitions, depicting engagement as a complex, psychosocial process involving both student and university characteristics. Such models organize the engagement process into three areas: factors that influence student engagement (e.g., institutional culture, curriculum, and teaching practices), indicators of student engagement (e.g., interest in learning, interaction with instructors and peers, and meaningful processing of information), and outcomes of student engagement (e.g., academic achievement, retention, and personal growth) (Kahu, 2013 ; Lam et al., 2012 ; Nora et al., 2005 ). In this review, we examine the literature to determine whether technology influences student engagement. In addition, we will use Fredricks et al. ( 2004 ) typology of student engagement to organize and present research findings, which suggests that there are three types of engagement (behavioral, emotional, and cognitive). The typology is useful because it is broad in scope, encompassing different types of engagement that capture a range of student experiences, rather than narrower typologies that offer specific or prescriptive conceptualizations of student engagement. In addition, this typology is student-centered, focusing exclusively on student-focused indicators rather than combining student indicators with confounding variables, such as faculty behavior, curriculum design, and campus environment (Coates, 2008 ; Kuh, 2009 ). While such variables are important in the discussion of student engagement, perhaps as factors that may influence engagement, they are not true indicators of student engagement. Using the typology as a guide, we examined recent student engagement research, models, and measures to gain a better understanding of how behavioral, emotional, and cognitive student engagement are conceptualized and to identify specific indicators that correspond with each type of engagement, as shown in Fig. 1 .

Fig. 1 Conceptual framework of types and indicators of student engagement

Behavioral engagement is the degree to which students are actively involved in learning activities (Fredricks et al., 2004; Kahu, 2013; Zepke, 2014). Indicators of behavioral engagement include time and effort spent participating in learning activities (Coates, 2008; Fredricks et al., 2004; Kahu, 2013; Kuh, 2009; Lam et al., 2012; Lester, 2013; Trowler, 2010) and interaction with peers, faculty, and staff (Coates, 2008; Kahu, 2013; Kuh, 2009; Bryson & Hand, 2007; Wimpenny & Savin-Baden, 2013; Zepke & Leach, 2010). Indicators of behavioral engagement reflect observable student actions and most closely align with Pace (1980) and Astin's (1984) original conceptualizations of student engagement as quantity and quality of effort towards learning. Emotional engagement is students' affective reactions to learning (Fredricks et al., 2004; Lester, 2013; Trowler, 2010). Indicators of emotional engagement include attitudes, interests, and values towards learning (Fredricks et al., 2004; Kahu, 2013; Lester, 2013; Trowler, 2010; Wimpenny & Savin-Baden, 2013; Witkowski & Cornell, 2015) and a perceived sense of belonging within a learning community (Fredricks et al., 2004; Kahu, 2013; Lester, 2013; Trowler, 2010; Wimpenny & Savin-Baden, 2013). Emotional engagement often is assessed using self-report measures (Fredricks et al., 2004) and provides insight into how students feel about a particular topic, delivery method, or instructor. Finally, cognitive engagement is the degree to which students invest in learning and expend mental effort to comprehend and master content (Fredricks et al., 2004; Lester, 2013). Indicators of cognitive engagement include: motivation to learn (Lester, 2013; Richardson & Newby, 2006; Zepke & Leach, 2010); persistence to overcome academic challenges and meet/exceed requirements (Fredricks et al., 2004; Kuh, 2009; Trowler, 2010); and deep processing of information (Fredricks et al., 2004; Kahu, 2013; Lam et al., 2012; Richardson & Newby, 2006) through critical thinking (Coates, 2008; Witkowski & Cornell, 2015), self-regulation (e.g., set goals, plan, organize study effort, and monitor learning; Fredricks et al., 2004; Lester, 2013), and the active construction of knowledge (Coates, 2008; Kuh, 2009). While cognitive engagement includes motivational aspects, much of the literature focuses on how students use active learning and higher-order thinking, in some form, to achieve content mastery. For example, there is significant emphasis on the importance of deep learning, which involves analyzing new learning in relation to previous knowledge, compared to surface learning, which is limited to memorization, recall, and rehearsal (Fredricks et al., 2004; Kahu, 2013; Lam et al., 2012).
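
The framework can be made operational when coding studies. The sketch below (indicator labels paraphrased from the framework above; the function name and data structure are our own, not taken from the article) tags a study's reported outcomes with the engagement types they fall under.

```python
# Minimal sketch (illustrative encoding of the framework, not the authors' tool):
# mapping the three engagement types to their indicators so that reviewed
# articles can be tagged consistently during coding.
ENGAGEMENT_INDICATORS = {
    "behavioral": [
        "time and effort spent on learning activities",
        "interaction with peers, faculty, and staff",
    ],
    "emotional": [
        "attitudes, interests, and values towards learning",
        "sense of belonging within a learning community",
    ],
    "cognitive": [
        "motivation to learn",
        "persistence to overcome academic challenges",
        "deep processing of information",
    ],
}

def engagement_types(indicators_found):
    """Return the engagement types covered by a set of coded indicators."""
    return sorted(
        etype
        for etype, indicators in ENGAGEMENT_INDICATORS.items()
        if any(ind in indicators_found for ind in indicators)
    )

# Example: an article reporting interaction and motivation findings.
print(engagement_types({
    "interaction with peers, faculty, and staff",
    "motivation to learn",
}))  # ['behavioral', 'cognitive']
```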

While each type of engagement has distinct features, there is some overlap across cognitive, behavioral, and emotional domains. In instances where an indicator could correspond with more than one type of engagement, we chose to match the indicator to the type of engagement that most closely aligned, based on our review of the engagement literature and our interpretation of the indicators. Similarly, there is also some overlap among indicators. As a result, we combined and subsumed similar indicators found in the literature, where appropriate, to avoid redundancy. Achieving an in-depth understanding of student engagement and associated indicators was an important pre-cursor to our review of the technology literature. Very few articles used the term student engagement as a dependent variable given the concept is so broad and multidimensional. We found that specific indicators (e.g., interaction, sense of belonging, and knowledge construction) of student engagement were more common in the literature as dependent variables. Next, we will provide a synthesis of the findings regarding how different types of technology influence behavioral, emotional, and cognitive student engagement and associated indicators.

Influence of technology on student engagement

We identified five technologies post-literature search (i.e., web-conferencing, blogs, wikis, social networking sites , and digital games) to include in our review, based on frequency in which they appeared in the literature over the past 5 years. One commonality among these technologies is their potential value in supporting a constructivist approach to learning, characterized by the active discovery of knowledge through reflection of experiences with one’s environment, the connection of new knowledge to prior knowledge, and interaction with others (Boghossian, 2006 ; Clements, 2015 ). Another commonality is that most of the technologies, except perhaps for digital games, are designed primarily to promote interaction and collaboration with others. Our search yielded very few studies on how informational technologies, such as video lectures and podcasts, influence student engagement. Therefore, these technologies are notably absent from our review. Unlike the technologies we identified earlier, informational technologies reflect a behaviorist approach to learning in which students are passive recipients of knowledge that is transmitted from an expert (Boghossian, 2006 ). The lack of recent research on how informational technologies affect student engagement may be due to the increasing shift from instructor-centered, behaviorist approaches to student-centered, constructivist approaches within higher education (Haggis, 2009 ; Wright, 2011 ) along with the ubiquity of web 2.0 technologies.

Web-conferencing

Web-conferencing software provides a virtual meeting space where users login simultaneously and communicate about a given topic. While each software application is unique, many share similar features such as audio, video, or instant messaging options for real-time communication; screen sharing, whiteboards, and digital pens for presentations and demonstrations; polls and quizzes for gauging comprehension or eliciting feedback; and breakout rooms for small group work (Bower, 2011 ; Hudson, Knight, & Collins, 2012 ; Martin, Parker, & Deale, 2012 ; McBrien, Jones, & Cheng, 2009 ). Of the technologies included in this literature review, web-conferencing software most closely mimics the face-to-face classroom environment, providing a space where instructors and students can hear and see each other in real-time as typical classroom activities (i.e., delivering lectures, discussing course content, asking/answering questions) are carried out (Francescucci & Foster, 2013 ; Hudson et al., 2012 ). Studies on web-conferencing software deployed Adobe Connect, Cisco WebEx, Horizon Wimba, or Blackboard Collaborate and made use of multiple features, such as screen sharing, instant messaging, polling, and break out rooms. In addition, most of the studies integrated web-conferencing software into courses on a voluntary basis to supplement traditional instructional methods (Andrew, Maslin-Prothero, & Ewens, 2015 ; Armstrong & Thornton, 2012 ; Francescucci & Foster, 2013 ; Hudson et al., 2012 ; Martin et al., 2012 ; Wdowik, 2014 ). Existing studies on web-conferencing pertain to all three types of student engagement.
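
As a simple illustration of the polling feature described above (a generic sketch, not any vendor's API), the snippet below tallies hypothetical student responses to an in-session comprehension check of the kind these tools support.

```python
# Minimal sketch (hypothetical responses, not a web-conferencing vendor's API):
# tallying answers to an in-session comprehension poll.
from collections import Counter

responses = ["A", "C", "A", "B", "A", "C"]  # hypothetical student answers
tally = Counter(responses)
total = len(responses)

for option, count in sorted(tally.items()):
    print(f"Option {option}: {count} ({count / total:.0%})")
```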

Studies on web-conferencing and behavioral engagement reveal mixed findings. For example, voluntary attendance in web-conferencing sessions ranged from 54 to 57% (Andrew et al., 2015 ; Armstrong & Thornton, 2012 ) and, in a comparison between a blended course with regular web-conferencing sessions and a traditional, face-to-face course, researchers found no significant difference in student attendance in courses. However, students in the blended course reported higher levels of class participation compared to students in the face-to-face course (Francescucci & Foster, 2013 ). These findings suggest while web-conferencing may not boost attendance, especially if voluntary, it may offer more opportunities for class participation, perhaps through the use of communication channels typically not available in a traditional, face-to-face course (e.g., instant messaging, anonymous polling). Studies on web-conferencing and interaction, another behavioral indicator, support this assertion. For example, researchers found that students use various features of web-conferencing software (e.g., polling, instant message, break-out rooms) to interact with peers and the instructor by asking questions, expressing opinions and ideas, sharing resources, and discussing academic content (Andrew et al., 2015 ; Armstrong & Thornton, 2012 ; Hudson et al., 2012 ; Martin et al., 2012 ; Wdowik, 2014 ).

Studies on web-conferencing and cognitive engagement are more conclusive than those for behavioral engagement, although they are fewer in number. Findings suggest that students who participated in web-conferencing demonstrated critical reflection and enhanced learning through interactions with others (Armstrong & Thornton, 2012), higher-order thinking (e.g., problem-solving, synthesis, evaluation) in response to challenging assignments (Wdowik, 2014), and motivation to learn, particularly when using polling features (Hudson et al., 2012). There is only one study examining how web-conferencing affects emotional engagement, although its findings are positive, suggesting that students who participated in web-conferences had higher levels of interest in course content than those who did not (Francescucci & Foster, 2013). One possible reason for the positive cognitive and emotional engagement findings may be that web-conferencing software provides many features that promote active learning. For example, whiteboards and breakout rooms provide opportunities for real-time, collaborative problem-solving activities and discussions. However, additional studies are needed to isolate and compare specific web-conferencing features to determine which have the greatest effect on student engagement.

A blog, which is short for Weblog, is a collection of personal journal entries, published online and presented chronologically, to which readers (or subscribers) may respond by providing additional commentary or feedback. In order to create a blog, one must compose content for an entry, which may include text, hyperlinks, graphics, audio, or video, publish the content online using a blogging application, and alert subscribers that new content is posted. Blogs may be informal and personal in nature or may serve as formal commentary in a specific genre, such as in politics or education (Coghlan et al., 2007 ). Fortunately, many blog applications are free, and many learning management systems (LMSs) offer a blogging feature that is seamlessly integrated into the online classroom. The ease of blogging has attracted attention from educators, who currently use blogs as an instructional tool for the expression of ideas, opinions, and experiences and for promoting dialogue on a wide range of academic topics (Garrity, Jones, VanderZwan, de la Rocha, & Epstein, 2014 ; Wang, 2008 ).

Studies on blogs show consistently positive findings for many of the behavioral and emotional engagement indicators. For example, students reported that blogs promoted interaction with others, through greater communication and information sharing with peers (Chu, Chan, & Tiwari, 2012; Ivala & Gachago, 2012; Mansouri & Piki, 2016), and analyses of blog posts show evidence of students elaborating on one another's ideas and sharing experiences and conceptions of course content (Sharma & Tietjen, 2016). Blogs also contribute to emotional engagement by providing students with opportunities to express their feelings about learning and by encouraging positive attitudes about learning (Dos & Demir, 2013; Chu et al., 2012; Yang & Chang, 2012). For example, Dos and Demir (2013) found that students expressed prejudices and fears about specific course topics in their blog posts. In addition, Yang and Chang (2012) found that interactive blogging, where comment features were enabled, led to more positive attitudes about course content and peers compared to solitary blogging, where comment features were disabled.

The literature on blogs and cognitive engagement is less consistent. Some studies suggest that blogs may help students engage in active learning, problem-solving, and reflection (Chawinga, 2017 ; Chu et al., 2012 ; Ivala & Gachago, 2012 ; Mansouri & Piki, 2016 ), while other studies suggest that students’ blog posts show very little evidence of higher-order thinking (Dos & Demir, 2013 ; Sharma & Tietjen, 2016 ). The inconsistency in findings may be due to the wording of blog instructions. Students may not necessarily demonstrate or engage in deep processing of information unless explicitly instructed to do so. Unfortunately, it is difficult to determine whether the wording of blog assignments contributed to the mixed results because many of the studies did not provide assignment details. However, studies pertaining to other technologies suggest that assignment wording that lacks specificity or requires low-level thinking can have detrimental effects on student engagement outcomes (Hou, Wang, Lin, & Chang, 2015 ; Prestridge, 2014 ). Therefore, blog assignments that are vague or require only low-level thinking may have adverse effects on cognitive engagement.

A wiki is a web page that can be edited by multiple users at once (Nakamaru, 2012 ). Wikis have gained popularity in educational settings as a viable tool for group projects where group members can work collaboratively to develop content (i.e., writings, hyperlinks, images, graphics, media) and keep track of revisions through an extensive versioning system (Roussinos & Jimoyiannis, 2013 ). Most studies on wikis pertain to behavioral engagement, with far fewer studies on cognitive engagement and none on emotional engagement. Studies pertaining to behavioral engagement reveal mixed results, with some showing very little enduring participation in wikis beyond the first few weeks of the course (Nakamaru, 2012 ; Salaber, 2014 ) and another showing active participation, as seen in high numbers of posts and edits (Roussinos & Jimoyiannis, 2013 ). The most notable difference between these studies is the presence of grading, which may account for the inconsistencies in findings. For example, in studies where participation was low, wikis were ungraded, suggesting that students may need extra motivation and encouragement to use wikis (Nakamaru, 2012 ; Salaber, 2014 ). Findings regarding the use of wikis for promoting interaction are also inconsistent. In some studies, students reported that wikis were useful for interaction, teamwork, collaboration, and group networking (Camacho, Carrión, Chayah, & Campos, 2016 ; Martínez, Medina, Albalat, & Rubió, 2013 ; Morely, 2012 ; Calabretto & Rao, 2011 ) and researchers found evidence of substantial collaboration among students (e.g., sharing ideas, opinions, and points of view) in wiki activity (Hewege & Perera, 2013 ); however, Miller, Norris, and Bookstaver ( 2012 ) found that only 58% of students reported that wikis promoted collegiality among peers. The findings in the latter study were unexpected and may be due to design flaws in the wiki assignments. For example, the authors noted that wiki assignments were not explicitly referred to in face-to-face classes; therefore, this disconnect may have prevented students from building on interactive momentum achieved during out-of-class wiki assignments (Miller et al., 2012 ).

Studies regarding cognitive engagement are limited in number but more consistent than those concerning behavioral engagement, suggesting that wikis promote high levels of knowledge construction (i.e., evaluation of arguments, the integration of multiple viewpoints, new understanding of course topics; Hewege & Perera, 2013 ), and are useful for reflection, reinforcing course content, and applying academic skills (Miller et al., 2012 ). Overall, there is mixed support for the use of wikis to promote behavioral engagement, although making wiki assignments mandatory and explicitly referring to wikis in class may help bolster participation and interaction. In addition, there is some support for using wikis to promote cognitive engagement, but additional studies are needed to confirm and expand on findings as well as explore the effect of wikis on emotional engagement.

Social networking sites

Social networking is “the practice of expanding knowledge by making connections with individuals of similar interests” (Gunawardena et al., 2009 , p. 4). Social networking sites, such as Facebook, Twitter, Instagram, and LinkedIn, allow users to create and share digital content publicly or with others to whom they are connected and communicate privately through messaging features. Two of the most popular social networking sites in the educational literature are Facebook and Twitter (Camus, Hurt, Larson, & Prevost, 2016 ; Manca & Ranieri, 2013 ), which is consistent with recent statistics suggesting that both sites also are exceedingly popular among the general population (Greenwood, Perrin, & Duggan, 2016 ). In the sections that follow, we examine how both Facebook and Twitter influence different types of student engagement.

Facebook is a web-based service that allows users to create a public or private profile and invite others to connect. Users may build social, academic, and professional connections by posting messages in various media formats (i.e., text, pictures, videos) and commenting on, liking, and reacting to others’ messages (Bowman & Akcaoglu, 2014 ; Maben, Edwards, & Malone, 2014 ; Hou et al., 2015 ). Within an educational context, Facebook has often been used as a supplementary instructional tool to lectures or LMSs to support class discussions or develop, deliver, and share academic content and resources. Many instructors have opted to create private Facebook groups, offering an added layer of security and privacy because groups are not accessible to strangers (Bahati, 2015 ; Bowman & Akcaoglu, 2014 ; Clements, 2015 ; Dougherty & Andercheck, 2014 ; Esteves, 2012 ; Shraim, 2014 ; Maben et al., 2014 ; Manca & Ranieri, 2013 ; Naghdipour & Eldridge, 2016 ; Rambe, 2012 ). The majority of studies on Facebook address behavioral indicators of student engagement, with far fewer focusing on emotional or cognitive engagement.

Studies that examine the influence of Facebook on behavioral engagement focus both on participation in learning activities and interaction with peers and instructors. In most studies, Facebook activities were voluntary and participation rates ranged from 16 to 95%, with an average rate of 47% (Bahati, 2015; Bowman & Akcaoglu, 2014; Dougherty & Andercheck, 2014; Fagioli, Rios-Aguilar, & Deil-Amen, 2015; Rambe, 2012; Staines & Lauchs, 2013). Participation was assessed by tracking how many students joined course- or university-specific Facebook groups (Bahati, 2015; Bowman & Akcaoglu, 2014; Fagioli et al., 2015), visited or followed course-specific Facebook pages (DiVall & Kirwin, 2012; Staines & Lauchs, 2013), or posted at least once in a course-specific Facebook page (Rambe, 2012). The lowest level of participation (16%) arose from a study where community college students were invited to use the Schools App, a free application that connects students to their university's private Facebook community. While the authors acknowledged that building an online community of college students is difficult (Fagioli et al., 2015), downloading the Schools App may have been a deterrent to widespread participation. In addition, use of the app was not tied to any specific courses or assignments; therefore, students may have lacked adequate incentive to use it. The highest level of participation (95%) in the literature arose from a study in which the instructor created a Facebook page where students could find or post study tips or ask questions. Followership to the page was highest around exams, when students likely had stronger motivations to access study tips and ask the instructor questions (DiVall & Kirwin, 2012). The wide range of participation in Facebook activities suggests that some students may be intrinsically motivated to participate, while other students may need some external encouragement. For example, Bahati (2015) found that when students assumed that a course-specific Facebook group was voluntary, only 23% participated, but when the instructor confirmed that the Facebook group was, in fact, mandatory, the level of participation rose to 94%.

While voluntary participation in Facebook activities may be lower than desired or expected (Dyson, Vickers, Turtle, Cowan, & Tassone, 2015 ; Fagioli et al., 2015 ; Naghdipour & Eldridge, 2016 ; Rambe, 2012 ), students seem to have a clear preference for Facebook compared to other instructional tools (Clements, 2015 ; DiVall & Kirwin, 2012 ; Hurt et al., 2012 ; Hou et al., 2015 ; Kent, 2013 ). For example, in one study where an instructor shared course-related information in a Facebook group, in the LMS, and through email, the level of participation in the Facebook group was ten times higher than in email or the LMS (Clements, 2015 ). In other studies, class discussions held in Facebook resulted in greater levels of participation and dialogue than class discussions held in LMS discussion forums (Camus et al., 2016 ; Hurt et al., 2012 ; Kent, 2013 ). Researchers found that preference for Facebook over the university’s LMS is due to perceptions that the LMS is outdated and unorganized and reports that Facebook is more familiar, convenient, and accessible given that many students already visit the social networking site multiple times per day (Clements, 2015 ; Dougherty & Andercheck, 2014 ; Hurt et al., 2012 ; Kent, 2013 ). In addition, students report that Facebook helps them stay engaged in learning through collaboration and interaction with both peers and instructors (Bahati, 2015 ; Shraim, 2014 ), which is evident in Facebook posts where students collaborated to study for exams, consulted on technical and theoretical problem solving, discussed course content, exchanged learning resources, and expressed opinions as well as academic successes and challenges (Bowman & Akcaoglu, 2014 ; Dougherty & Andercheck, 2014 ; Esteves, 2012 Ivala & Gachago, 2012 ; Maben et al., 2014 ; Rambe, 2012 ; van Beynen & Swenson, 2016 ).

There is far less evidence in the literature about the use of Facebook for emotional and cognitive engagement. In terms of emotional engagement, studies suggest that students feel positively about being part of a course-specific Facebook group and that Facebook is useful for expressing feelings about learning and concerns for peers, through features such as the “like” button and emoticons (Bowman & Akcaoglu, 2014 ; Dougherty & Andercheck, 2014 ; Naghdipour & Eldridge, 2016 ). In addition, being involved in a course-specific Facebook group was positively related to students’ sense of belonging in the course (Dougherty & Andercheck, 2014 ). The research on cognitive engagement is less conclusive, with some studies suggesting that Facebook participation is related to academic persistence (Fagioli et al., 2015 ) and self-regulation (Dougherty & Andercheck, 2014 ) while other studies show low levels of knowledge construction in Facebook posts (Hou et al., 2015 ), particularly when compared to discussions held in the LMS. One possible reason may be because the LMS is associated with formal, academic interactions while Facebook is associated with informal, social interactions (Camus et al., 2016 ). While additional research is needed to confirm the efficacy of Facebook for promoting cognitive engagement, studies suggest that Facebook may be a viable tool for increasing specific behavioral and emotional engagement indicators, such as interactions with others and a sense of belonging within a learning community.

Twitter is a web-based service where subscribers can post short messages, called tweets, in real-time that are no longer than 140 characters in length. Tweets may contain hyperlinks to other websites, images, graphics, and/or videos and may be tagged by topic using the hashtag symbol before the designated label (e.g., #elearning). Twitter subscribers may “follow” other users and gain access to their tweets and also may “retweet” messages that have already been posted (Hennessy, Kirkpatrick, Smith, & Border, 2016 ; Osgerby & Rush, 2015 ; Prestridge, 2014 ; West, Moore, & Barry, 2015 ; Tiernan, 2014 ;). Instructors may use Twitter to post updates about the course, clarify expectations, direct students to additional learning materials, and encourage students to discuss course content (Bista, 2015 ; Williams & Whiting, 2016 ). Several of the studies on the use of Twitter included broad, all-encompassing measures of student engagement and produced mixed findings. For example, some studies suggest that Twitter increases student engagement (Evans, 2014 ; Gagnon, 2015 ; Junco, Heibergert, & Loken, 2011 ) while other studies suggest that Twitter has little to no influence on student engagement (Junco, Elavsky, & Heiberger, 2013 ; McKay, Sanko, Shekhter, & Birnbach, 2014 ). In both studies suggesting little to no influence on student engagement, Twitter use was voluntary and in one of the studies faculty involvement in Twitter was low, which may account for the negative findings (Junco et al., 2013 ; McKay et al., 2014 ). Conversely, in the studies that show positive findings, Twitter use was mandatory and often directly integrated with required assignments (Evans, 2014 ; Gagnon, 2015 ; Junco et al., 2011 ). Therefore, making Twitter use mandatory, increasing faculty involvement in Twitter, and integrating Twitter into assignments may help to increase student engagement.
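
To illustrate the tweet format described above (a purely local sketch, not Twitter's API), the snippet below checks the 140-character limit and extracts hashtag topic labels from a message.

```python
# Minimal sketch (illustrative only, not Twitter's API): checking the
# 140-character limit described above and pulling out hashtag topic labels.
import re

def within_limit(tweet, limit=140):
    return len(tweet) <= limit

def hashtags(tweet):
    return re.findall(r"#(\w+)", tweet)

tweet = "Week 3 reading posted - reply with questions before Friday #elearning"
print(within_limit(tweet))  # True
print(hashtags(tweet))      # ['elearning']
```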

Studies pertaining to specific behavioral student engagement indicators also reveal mixed findings. For example, in studies where course-related Twitter use was voluntary, 45-91% of students reported using Twitter during the term (Hennessy et al., 2016 ; Junco et al., 2013 ; Ross, Banow, & Yu, 2015 ; Tiernan, 2014 ; Williams & Whiting, 2016 ), but only 30-36% reported making contributions to the course-specific Twitter page (Hennessy et al., 2016 ; Tiernan, 2014 ; Ross et al., 2015 ; Williams & Whiting, 2016 ). The study that reported a 91% participation rate was unique because the course-specific Twitter page was accessible via a public link. Therefore, students who chose only to view the content (58%), rather than contribute to the page, did not have to create a Twitter account (Hennessy et al., 2016 ). The convenience of not having to create an account may be one reason for much higher participation rates. In terms of low participation rates, a lack of literacy, familiarity, and interest in Twitter , as well as a preference for Facebook , are cited as contributing factors (Bista, 2015 ; McKay et al., 2014 ; Mysko & Delgaty, 2015 ; Osgerby & Rush, 2015 ; Tiernan, 2014 ). However, when the use of Twitter was required and integrated into class discussions, the participation rate was 100% (Gagnon, 2015 ). Similarly, 46% of students in one study indicated that they would have been more motivated to participate in Twitter activities if they were graded (Osgerby & Rush, 2015 ), again confirming the power of extrinsic motivating factors.

Studies also show mixed results for the use of Twitter to promote interactions with peers and instructors. Researchers found that when instructors used Twitter to post updates about the course, ask and answer questions, and encourage students to tweet about course content, there was evidence of student-student and student-instructor interactions in tweets (Hennessy et al., 2016 ; Tiernan, 2014 ). Some students echoed these findings, suggesting that Twitter is useful for sharing ideas and resources, discussing course content, asking the instructor questions, and networking (Chawinga, 2017 ; Evans, 2014 ; Gagnon, 2015 ; Hennessy et al., 2016 ; Mysko & Delgaty, 2015 ; West et al., 2015 ) and is preferable over speaking aloud in class because it is more comfortable, less threatening, and more concise due to the 140 character limit (Gagnon, 2015 ; Mysko & Delgaty, 2015 ; Tiernan, 2014 ). Conversely, other students reported that Twitter was not useful for improving interaction because they viewed it predominately for social, rather than academic, interactions and they found the 140 character limit to be frustrating and restrictive. A theme among the latter studies was that a large proportion of the sample had never used Twitter before (Bista, 2015 ; McKay et al., 2014 ; Osgerby & Rush, 2015 ), which may have contributed to negative perceptions.

The literature on the use of Twitter for cognitive and emotional engagement is minimal but nonetheless promising in terms of promoting knowledge gains, the practical application of content, and a sense of belonging among users. For example, using Twitter to respond to questions that arose in lectures and tweet about course content throughout the term is associated with increased understanding of course content and application of knowledge (Kim et al., 2015 ; Tiernan, 2014 ; West et al., 2015 ). While the underlying mechanisms pertaining to why Twitter promotes an understanding of content and application of knowledge are not entirely clear, Tiernan ( 2014 ) suggests that one possible reason may be that Twitter helps to break down communication barriers, encouraging shy or timid students to participate in discussions that ultimately are richer in dialogue and debate. In terms of emotional engagement, students who participated in a large, class-specific Twitter page were more likely to feel a sense of community and belonging compared to those who did not participate because they could more easily find support from and share resources with other Twitter users (Ross et al., 2015 ). Despite the positive findings about the use of Twitter for cognitive and emotional engagement, more studies are needed to confirm existing results regarding behavioral engagement and target additional engagement indicators such as motivation, persistence, and attitudes, interests, and values about learning. In addition, given the strong negative perceptions of Twitter that still exist, additional studies are needed to confirm Twitter ’s efficacy for promoting different types of behavioral engagement among both novice and experienced Twitter users, particularly when compared to more familiar tools such as Facebook or LMS discussion forums.

Digital games

Digital games are “applications using the characteristics of video and computer games to create engaging and immersive learning experiences for delivery of specified learning goals, outcomes and experiences” (de Freitas, 2006 , p. 9). Digital games often serve the dual purpose of promoting the achievement of learning outcomes while making learning fun by providing simulations of real-world scenarios as well as role play, problem-solving, and drill and repeat activities (Boyle et al., 2016 ; Connolly, Boyle, MacArthur, Hainey, & Boyle, 2012 ; Scarlet & Ampolos, 2013 ; Whitton, 2011 ). In addition, gamified elements, such as digital badges and leaderboards, may be integrated into instruction to provide additional motivation for completing assigned readings and other learning activities (Armier, Shepherd, & Skrabut, 2016 ; Hew, Huang, Chu, & Chiu, 2016 ). The pedagogical benefits of digital games are somewhat distinct from the other technologies addressed in this review, which are designed primarily for social interaction. While digital games may be played in teams or allow one player to compete against another, the focus of their design often is on providing opportunities for students to interact with academic content in a virtual environment through decision-making, problem-solving, and reward mechanisms. For example, a digital game may require students to adopt a role as CEO in a computer-simulated business environment, make decisions about a series of organizational issues, and respond to the consequences of those decisions. In this example and others, digital games use adaptive learning principles, where the learning environment is re-configured or modified in response to the actions and needs of students (Bower, 2016 ). Most of the studies on digital games focused on cognitive and emotional indicators of student engagement, in contrast to the previous technologies addressed in this review which primarily focused on behavioral indicators of engagement.
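
The adaptive-learning principle mentioned above can be sketched generically (this is an illustration of the idea, not any particular game engine): the difficulty of the next task is re-configured in response to the learner's recent performance.

```python
# Minimal sketch (generic illustration of adaptive learning, not a specific game):
# raise the difficulty level after consistent success, lower it after repeated failure.
def next_difficulty(current_level, recent_results, min_level=1, max_level=10):
    """Adjust the level based on the learner's last three task outcomes."""
    if len(recent_results) < 3:
        return current_level  # not enough evidence yet to adapt
    success_rate = sum(recent_results[-3:]) / 3
    if success_rate >= 0.8:
        return min(current_level + 1, max_level)
    if success_rate <= 0.4:
        return max(current_level - 1, min_level)
    return current_level

# Example: a student solves the last three tasks correctly.
print(next_difficulty(4, [True, True, True]))  # 5
```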

Existing studies provide support for the influence of digital games on cognitive engagement, through achieving a greater understanding of course content and demonstrating higher-order thinking skills (Beckem & Watkins, 2012 ; Farley, 2013 ; Ke, Xie, & Xie, 2016 ; Marriott, Tan, & Marriott, 2015 ), particularly when compared to traditional instructional methods, such as giving lectures or assigning textbook readings (Lu, Hallinger, & Showanasai, 2014 ; Siddique, Ling, Roberson, Xu, & Geng, 2013 ; Zimmermann, 2013 ). For example, in a study comparing courses that offered computer simulations of business challenges (e.g, implementing a new information technology system, managing a startup company, and managing a brand of medicine in a simulated market environment) and courses that did not, students in simulation-based courses reported higher levels of action-directed learning (i.e., connecting theory to practice in a business context) than students in traditional, non-simulation-based courses (Lu et al., 2014 ). Similarly, engineering students who participated in a car simulator game, which was designed to help students apply and reinforce the knowledge gained from lectures, demonstrated higher levels of critical thinking (i.e., analysis, evaluation) on a quiz than students who only attended lectures (Siddique et al., 2013 ).

Motivation is another cognitive engagement indicator that is linked to digital games (Armier et al., 2016 ; Chang & Wei, 2016 ; Dichev & Dicheva, 2017 ; Grimley, Green, Nilsen, & Thompson, 2012 ; Hew et al., 2016 ; Ibáñez, Di-Serio, & Delgado-Kloos, 2014 ; Ke et al., 2016 ; Liu, Cheng, & Huang, 2011 ; Nadolny & Halabi, 2016 ). Researchers found that incorporating gamified elements into courses, such as giving students digital rewards (e.g., redeemable points, trophies, and badges) for participating in learning activities or creating competition through the use of leaderboards where students can see how they rank against other students positively affects student motivation to complete learning tasks (Armier et al., 2016 ; Chang & Wei, 2016 ; Hew et al., 2016 ; Nadolny & Halabi, 2016 ). In addition, students who participated in gamified elements, such as trying to earn digital badges, were more motivated to complete particularly difficult learning activities (Hew et al., 2016 ) and showed persistence in exceeding learning requirements (Ibáñez et al., 2014 ). Research on emotional engagement may help to explain these findings. Studies suggest that digital games positively affect student attitudes about learning, evident in student reports that games are fun, interesting, and enjoyable (Beckem & Watkins, 2012 ; Farley, 2013 ; Grimley et al., 2012 ; Hew et al., 2016 ; Liu et al., 2011 ; Zimmermann, 2013 ), which may account for higher levels of student motivation in courses that offered digital games.

Research on digital games and behavioral engagement is more limited, with only one study suggesting that games lead to greater participation in educational activities (Hew et al., 2016 ). Therefore, more research is needed to explore how digital games may influence behavioral engagement. In addition, research is needed to determine whether the underlying technology associated with digital games (e.g., computer-based simulations and virtual realities) produce positive engagement outcomes or whether common mechanisms associated with both digital and non-digital games (e.g., role play, rewards, and competition) account for those outcomes. For example, studies in which non-digital, face-to-face games were used also showed positive effects on student engagement (Antunes, Pacheco, & Giovanela, 2012 ; Auman, 2011 ; Coffey, Miller, & Feuerstein, 2011 ; Crocco, Offenholley, & Hernandez, 2016 ; Poole, Kemp, Williams, & Patterson, 2014 ; Scarlet & Ampolos, 2013 ); therefore, it is unclear if and how digitizing games contributes to student engagement.

Discussion and implications

Student engagement is linked to a number of academic outcomes, such as retention, grade point average, and graduation rates (Carini et al., 2006 ; Center for Postsecondary Research, 2016 ; Hu & McCormick, 2012 ). As a result, universities have shown a strong interest in how to increase student engagement, particularly given rising external pressures to improve learning outcomes and prepare students for academic success (Axelson & Flick, 2011 ; Kuh, 2009 ). There are various models of student engagement that identify factors that influence student engagement (Kahu, 2013 ; Lam et al., 2012 ; Nora et al., 2005 ; Wimpenny & Savin-Baden, 2013 ; Zepke & Leach, 2010 ); however, none include the overt role of technology despite the growing trend and student demands to integrate technology into the learning experience (Amirault, 2012 ; Cook & Sonnenberg, 2014 ; Revere & Kovach, 2011 ; Sun & Chen, 2016 ; Westera, 2015 ). Therefore, the primary purpose of our literature review was to explore whether technology influences student engagement. The secondary purpose was to address skepticism and uncertainty about pedagogical benefits of technology (Ashrafzadeh & Sayadian, 2015 ; Kopcha et al., 2016 ; Reid, 2014 ) by reviewing the literature regarding the efficacy of specific technologies (i.e., web-conferencing software, blogs, wikis, social networking sites, and digital games) for promoting student engagement and offering recommendations for effective implementation, which are included at the end of this paper. In the sections that follow, we provide an overview of the findings, an explanation of existing methodological limitations and areas for future research, and a list of best practices for integrating the technologies we reviewed into the teaching and learning process.

Summary of findings

Findings from our literature review provide preliminary support for including technology as a factor that influences student engagement in existing models (Table 1). One overarching theme is that most of the technologies we reviewed had a positive influence on multiple indicators of student engagement, which may lead to a larger return on investment in terms of learning outcomes. For example, digital games influence all three types of student engagement and six of the seven indicators we identified, surpassing the other technologies in this review. There were several key differences in the design and pedagogical use between digital games and other technologies that may explain these findings. First, digital games were designed to provide authentic learning contexts in which students could practice skills and apply learning (Beckem & Watkins, 2012; Farley, 2013; Grimley et al., 2012; Ke et al., 2016; Liu et al., 2011; Lu et al., 2014; Marriott et al., 2015; Siddique et al., 2013), which is consistent with experiential learning and adult learning theories. Experiential learning theory suggests that learning occurs through interaction with one's environment (Kolb, 2014), while adult learning theory suggests that adult learners want to be actively involved in the learning process and be able to apply learning to real-life situations and problems (Cercone, 2008). Second, students reported that digital games (and gamified elements) are fun, enjoyable, and interesting (Beckem & Watkins, 2012; Farley, 2013; Grimley et al., 2012; Hew et al., 2016; Liu et al., 2011; Zimmermann, 2013), feelings that are associated with a flow-like state where one is completely immersed in and engaged with the activity (Csikszentmihalyi, 1988; Weibel, Wissmath, Habegger, Steiner, & Groner, 2008). Third, digital games were closely integrated into the curriculum as required activities (Farley, 2013; Grimley et al., 2012; Ke et al., 2016; Liu et al., 2011; Marriott et al., 2015; Siddique et al., 2013), as opposed to wikis, Facebook, and Twitter, which were often voluntary and used to supplement lectures (Dougherty & Andercheck, 2014; Nakamaru, 2012; Prestridge, 2014; Rambe, 2012).

Web-conferencing software and Facebook also yielded the most positive findings, influencing four of the seven indicators of student engagement, compared to other collaborative technologies, such as blogs, wikis, and Twitter. Web-conferencing software was unique due to the sheer number of collaborative features it offers, providing multiple ways for students to actively engage with course content (screen sharing, whiteboards, digital pens) and interact with peers and the instructor (audio, video, text chats, breakout rooms) (Bower, 2011; Hudson et al., 2012; Martin et al., 2012; McBrien et al., 2009); this may account for the effects on multiple indicators of student engagement. Positive findings regarding Facebook's influence on student engagement could be explained by a strong familiarity with and preference for the social networking site (Clements, 2015; DiVall & Kirwin, 2012; Hurt et al., 2012; Hou et al., 2015; Kent, 2013; Manca & Ranieri, 2013), compared to Twitter, which was less familiar or interesting to students (Bista, 2015; McKay et al., 2014; Mysko & Delgaty, 2015; Osgerby & Rush, 2015; Tiernan, 2014). Wikis had the lowest influence on student engagement, with mixed findings regarding behavioral engagement, limited but conclusive findings regarding one indicator of cognitive engagement (deep processing of information), and no studies pertaining to other indicators of cognitive engagement (motivation, persistence) or emotional engagement.

Another theme that arose was the prevalence of mixed findings across multiple technologies regarding behavioral engagement. Overall, the vast majority of studies addressed behavioral engagement, and we expected that technologies designed specifically for social interaction, such as web-conferencing, wikis, and social networking sites, would yield more conclusive findings. However, one possible reason for the mixed findings may be that the technologies were voluntary in many studies, resulting in lower than desired participation rates and missed opportunities for interaction (Armstrong & Thornton, 2012; Fagioli et al., 2015; Nakamaru, 2012; Rambe, 2012; Ross et al., 2015; Williams & Whiting, 2016), and mandatory in a few studies, yielding higher levels of participation and interaction (Bahati, 2015; Gagnon, 2015; Roussinos & Jimoyiannis, 2013). Another possible reason for the mixed findings is that measures of variables differed across studies. For example, in some studies participation meant that a student signed up for a Twitter account (Tiernan, 2014), used the Twitter account for class (Williams & Whiting, 2016), or viewed the course-specific Twitter page (Hennessy et al., 2016). The pedagogical uses of the technologies also varied considerably across studies, making it difficult to make comparisons. For example, Facebook was used in studies to share learning materials (Clements, 2015; Dyson et al., 2015), answer student questions about academic content or administrative issues (Rambe, 2012), prepare for upcoming exams and share study tips (Bowman & Akcaoglu, 2014; DiVall & Kirwin, 2012), complete group work (Hou et al., 2015; Staines & Lauchs, 2013), and discuss course content (Camus et al., 2016; Kent, 2013; Hurt et al., 2012). Finally, cognitive indicators (motivation and persistence) drew the fewest studies, which suggests that research is needed to determine whether technologies affect these indicators.

Methodological limitations

While there appears to be preliminary support for the use of many of the technologies to promote student engagement, there are significant methodological limitations in the literature and, as a result, findings should be interpreted with caution. First, many studies used small sample sizes and were limited to one course, one degree level, and one university. Therefore, generalizability is limited. Second, very few studies used experimental or quasi-experimental designs; therefore, very little evidence exists to substantiate a cause-and-effect relationship between technologies and student engagement indicators. In addition, in many studies that did use experimental or quasi-experimental designs, participants were not randomized; rather, participants who volunteered to use a specific technology were compared to those who chose not to use the technology. As a result, there is a possibility that fundamental differences between users and non-users could have affected the engagement results. Furthermore, many of the studies did not isolate specific technological features (e.g., using only the breakout rooms for group work in web-conferencing software, rather than using the chat feature, screen sharing, and breakout rooms for group work). Using multiple features at once could have conflated student engagement results. Third, many studies relied on one source to measure technological and engagement variables (single source bias), such as self-report data (i.e., reported usage of technology and perceptions of student engagement), which may have affected the validity of the results. Fourth, many studies were conducted during a very brief timeframe, such as one academic term. As a result, positive student engagement findings may be attributed to a "novelty effect" (Dichev & Dicheva, 2017) associated with using a new technology. Finally, many studies lack adequate details about learning activities, raising questions about whether poor instructional design may have adversely affected results. For example, an instructor may intend to elicit higher-order thinking from students, but if learning activity instructions are written using low-level verbs, such as identify, describe, and summarize, students will be less likely to engage in higher-order thinking.

Areas for future research

The findings of our literature review suggest that the influence of technology on student engagement is still a developing area of knowledge that requires additional research to build on promising, but limited, evidence, clarify mixed findings, and address several gaps in the literature. As such, our recommendations for future areas of research are as follows:

Examine the effect of collaborative technologies (i.e., web-conferencing, blogs, wikis, social networking sites ) on emotional and cognitive student engagement. There are significant gaps in the literature regarding whether these technologies affect attitudes, interests, and values about learning; a sense of belonging within a learning community; motivation to learn; and persistence to overcome academic challenges and meet or exceed requirements.

Clarify mixed findings, particularly regarding how web-conferencing software, wikis, and Facebook and Twitter affect participation in learning activities. Researchers should make considerable efforts to gain consensus or increase consistency on how participation is measured (e.g., visited Facebook group or contributed one post a week) in order to make meaningful comparisons and draw conclusions about the efficacy of various technologies for promoting behavioral engagement. In addition, further research is needed to clarify findings regarding how wikis and Twitter influence interaction and how blogs and Facebook influence deep processing of information. Future research studies should include justifications for the pedagogical use of specific technologies and detailed instructions for learning activities to minimize adverse findings from poor instructional design and to encourage replication.

Conduct longitudinal studies over several academic terms and across multiple academic disciplines, degree levels, and institutions to determine long-term effects of specific technologies on student engagement and to increase generalizability of findings. Also, future studies should take individual factors into account, such as gender, age, and prior experience with the technology. Studies suggest that a lack of prior experience or familiarity with Twitter was a barrier to Twitter use in educational settings (Bista, 2015 , Mysko & Delgaty, 2015 , Tiernan, 2014 ); therefore, future studies should take prior experience into account.

Compare student engagement outcomes between and among different technologies and non-technologies. For example, studies suggest that students prefer Facebook over Twitter (Bista, 2015; Osgerby & Rush, 2015), but there were no studies that compared these technologies for promoting student engagement. Also, studies are needed to isolate and compare different features within the same technology to determine which might be most effective for increasing engagement. Finally, studies on digital games (Beckem & Watkins, 2012; Grimley et al., 2012; Ke et al., 2016; Lu et al., 2014; Marriott et al., 2015; Siddique et al., 2013) and face-to-face games (Antunes et al., 2012; Auman, 2011; Coffey et al., 2011; Crocco et al., 2016; Poole et al., 2014; Scarlet & Ampolos, 2013) show similar, positive effects on student engagement; therefore, additional research is needed to determine the degree to which the delivery method (i.e., digital versus face-to-face) accounts for positive gains in student engagement.

Determine whether other technologies not included in this review influence student engagement. Facebook and Twitter regularly appear in the literature regarding social networking, but it is unclear how other popular social networking sites, such as LinkedIn, Instagram, and Flickr, influence student engagement. Future research should focus on the efficacy of these and other popular social networking sites for promoting student engagement. In addition, there were very few studies about whether informational technologies, which involve the one-way transmission of information to students, affect different types of student engagement. Future research should examine whether informational technologies, such as video lectures, podcasts, and pre-recorded narrated PowerPoint presentations or screencasts, affect student engagement. Finally, studies should examine the influence of mobile software and technologies, such as educational apps or smartphones, on student engagement.

Achieve greater consensus on the meaning of student engagement and its distinction from similar concepts in the literature, such as social and cognitive presence (Garrison & Arbaugh, 2007).

Recommendations for practice

Despite the existing gaps and mixed findings in the literature, we were able to compile a list of recommendations for when and how to use technology to increase the likelihood of promoting student engagement. What follows is not an exhaustive list; rather, it is a synthesis of both research findings and lessons learned from the studies we reviewed. There may be other recommendations to add to this list; however, our intent is to provide some useful information to help address barriers to technology integration among faculty who feel uncertain or unprepared to use technology (Ashrafzadeh & Sayadian, 2015 ; Hauptman, 2015 ; Kidd et al., 2016 ; Reid, 2014 ) and to add to the body of practical knowledge in instructional design and delivery. Our recommendations for practice are as follows:

Consider context before selecting technologies. Contextual factors such as existing technological infrastructure and requirements, program and course characteristics, and the intended audience will help determine which technologies, if any, are most appropriate (Bullen & Morgan, 2011; Bullen, Morgan, & Qayyum, 2011). For example, requiring students to use a blog that is not well integrated with the existing LMS may prove too frustrating for both the instructor and students. Similarly, integrating Facebook- and Twitter-based learning activities throughout a marketing program may be more appropriate, given the subject matter, compared to doing so in an engineering or accounting program where social media is less integral to the profession. Finally, do not assume that students appreciate or are familiar with all technologies. For example, students who did not already have Facebook or Twitter accounts were less likely to use either for learning purposes and perceived setting up an account to be an increase in workload (Bista, 2015; Clements, 2015; DiVall & Kirwin, 2012; Hennessy et al., 2016; Mysko & Delgaty, 2015; Tiernan, 2014). Therefore, prior to using any technology, instructors may want to determine how many students already have accounts and/or are familiar with the technology.

Carefully select technologies based on their strengths and limitations and the intended learning outcome. For example, Twitter is limited to 140 characters, making it a viable tool for learning activities that require brevity. In one study, an instructor used Twitter for short pop quizzes during lectures, where the first few students to tweet the correct answer received additional points (Kim et al., 2015 ), which helped students practice applying knowledge. In addition, studies show that students perceive Twitter and Facebook to be primarily for social interactions (Camus et al., 2016 ; Ross et al., 2015 ), which may make these technologies viable tools for sharing resources, giving brief opinions about news stories pertaining to course content, or having casual conversations with classmates rather than full-fledged scholarly discourse.
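As a purely hypothetical illustration of the kind of scoring logic such an activity implies, the sketch below awards bonus points to the first few students who tweet the correct answer. The data format, student names, and point values are invented; this is not the procedure reported in Kim et al. (2015).

```python
# Hypothetical sketch of scoring a Twitter pop quiz: the first few students
# to tweet the correct answer receive bonus points. This is not the procedure
# reported in Kim et al. (2015); names and values are invented.
from datetime import datetime

def score_pop_quiz(tweets, correct_answer, n_winners=3, bonus=2):
    """tweets: list of (timestamp, student, text). Returns {student: bonus}."""
    correct = [(ts, student) for ts, student, text in tweets
               if correct_answer.lower() in text.lower()]
    first_correct = sorted(correct)[:n_winners]
    return {student: bonus for _, student in first_correct}

tweets = [
    (datetime(2017, 3, 1, 10, 0, 5), "amy", "I think the answer is photosynthesis"),
    (datetime(2017, 3, 1, 10, 0, 9), "raj", "respiration?"),
    (datetime(2017, 3, 1, 10, 0, 12), "lee", "photosynthesis!"),
]
print(score_pop_quiz(tweets, "photosynthesis"))  # {'amy': 2, 'lee': 2}
```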

Incentivize students to use technology, either by assigning regular grades or giving extra credit. The average participation rate in voluntary web-conferencing, Facebook, and Twitter learning activities in the studies we reviewed was 52% (Andrew et al., 2015; Armstrong & Thornton, 2012; Bahati, 2015; Bowman & Akcaoglu, 2014; DiVall & Kirwin, 2012; Dougherty & Andercheck, 2014; Fagioli et al., 2015; Hennessy et al., 2016; Junco et al., 2013; Rambe, 2012; Ross et al., 2015; Staines & Lauchs, 2013; Tiernan, 2014; Williams & Whiting, 2016). While there were far fewer studies on the use of technology for graded or mandatory learning activities, the average participation rate reported in those studies was 97% (Bahati, 2015; Gagnon, 2015), suggesting that grading may be a key factor in ensuring students participate.

Communicate clear guidelines for technology use. Prior to the implementation of technology in a course, students may benefit from an overview of the technology, including its navigational features, privacy settings, and security (Andrew et al., 2015; Hurt et al., 2012; Martin et al., 2012), and a set of guidelines for how to use the technology effectively and professionally within an educational setting (Miller et al., 2012; Prestridge, 2014; Staines & Lauchs, 2013; West et al., 2015). In addition, giving students examples of exemplary and poor entries and posts may also help to clarify how they are expected to use the technology (Shraim, 2014; Roussinos & Jimoyiannis, 2013). Also, if instructors expect students to use technology to demonstrate higher-order thinking or to interact with peers, there should be explicit instructions to do so. For example, Prestridge (2014) found that students used Twitter to ask the instructor questions but very few interacted with peers because they were not explicitly asked to do so. Similarly, Hou et al. (2015) reported low levels of knowledge construction in Facebook, admitting that the wording of the learning activity (e.g., explore and present applications of computer networking) and the lack of probing questions in the instructions may have been to blame.

Use technology to provide authentic and integrated learning experiences. In many studies, instructors used digital games to simulate authentic environments in which students could apply new knowledge and skills, which ultimately led to a greater understanding of content and evidence of higher-order thinking (Beckem & Watkins, 2012; Liu et al., 2011; Lu et al., 2014; Marriott et al., 2015; Siddique et al., 2013). For example, in one study, students were required to play the role of a stock trader in a simulated trading environment, and they reported that the simulation helped them engage in critical reflection, enabling them to identify their mistakes and weaknesses in their trading approaches and strategies (Marriott et al., 2015). In addition, integrating technology into regularly scheduled classroom activities, such as lectures, may help to promote student engagement. For example, in one study, the instructor posed a question in class, asked students to respond aloud or tweet their response, and projected the Twitter page so that everyone could see the tweets in class, which led to favorable comments about the usefulness of Twitter to promote engagement (Tiernan, 2014).
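To illustrate the general idea of such a simulated trading exercise (the actual software used in Marriott et al. (2015) is not described here), the toy sketch below applies a student's buy and sell orders to a small, invented price series so that the resulting gain or loss can prompt reflection on the strategy.

```python
# Toy sketch of a simulated trading exercise in the spirit of the stock-market
# simulation described above; this is NOT the software used in Marriott et al.
# (2015). The price series, orders, and starting cash are invented.
prices = {"day1": 100.0, "day2": 104.0, "day3": 97.0, "day4": 110.0}

def run_strategy(orders, prices, starting_cash=1000.0, final_day="day4"):
    """Apply (day, action, quantity) orders and return the final portfolio value."""
    cash, shares = starting_cash, 0
    for day, action, qty in orders:
        price = prices[day]
        if action == "buy" and cash >= qty * price:
            cash -= qty * price
            shares += qty
        elif action == "sell" and shares >= qty:
            cash += qty * price
            shares -= qty
    return cash + shares * prices[final_day]  # mark any remaining shares to the last price

student_orders = [("day1", "buy", 5), ("day3", "buy", 3), ("day4", "sell", 8)]
print(run_strategy(student_orders, prices))  # 1089.0, a gain the student can reflect on
```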

Actively participate in using the technologies assigned to students during the first few weeks of the course to generate interest (Dougherty & Andercheck, 2014 ; West et al., 2015 ) and, preferably, throughout the course to answer questions, encourage dialogue, correct misconceptions, and address inappropriate behavior (Bowman & Akcaoglu, 2014 ; Hennessy et al., 2016 ; Junco et al., 2013 ; Roussinos & Jimoyiannis, 2013 ). Miller et al. ( 2012 ) found that faculty encouragement and prompting was associated with increases in students’ expression of ideas and the degree to which they edited and elaborated on their peers’ work in a course-specific wiki.

Be mindful of privacy, security, and accessibility issues. In many studies, instructors took necessary steps to help ensure privacy and security by creating closed Facebook groups and private Twitter pages, accessible only to students in the course (Bahati, 2015 ; Bista, 2015 ; Bowman & Akcaoglu, 2014 ; Esteves, 2012 ; Rambe, 2012 ; Tiernan, 2014 ; Williams & Whiting, 2016 ) and by offering training to students on how to use privacy and security settings (Hurt et al., 2012 ). Instructors also made efforts to increase accessibility of web-conferencing software by including a phone number for students unable to access audio or video through their computer and by recording and archiving sessions for students unable to attend due to pre-existing conflicts (Andrew et al., 2015 ; Martin et al., 2012 ). In the future, instructors should also keep in mind that some technologies, like Facebook and Twitter , are not accessible to students living in China; therefore, alternative arrangements may need to be made.

In 1985, Steve Jobs predicted that computers and software would revolutionize the way we learn. Over 30 years later, his prediction has yet to be fully confirmed in the student engagement literature; however, our findings offer preliminary evidence that the potential is there. Of the technologies we reviewed, digital games, web-conferencing software, and Facebook had the most far-reaching effects across multiple types and indicators of student engagement, suggesting that technology should be considered a factor that influences student engagement in existing models. Findings regarding blogs, wikis, and Twitter, however, are less convincing, given a lack of studies in relation to engagement indicators or mixed findings. Significant methodological limitations may account for the wide range of findings in the literature. For example, small sample sizes, inconsistent measurement of variables, lack of comparison groups, and missing details about specific, pedagogical uses of technologies threaten the validity and reliability of findings. Therefore, more rigorous and robust research is needed to confirm and build upon limited but positive findings, clarify mixed findings, and address gaps particularly regarding how different technologies influence emotional and cognitive indicators of engagement.

Abbreviations

LMS: Learning management system

Amirault, R. J. (2012). Distance learning in the 21st century university. Quarterly Review of Distance Education, 13 (4), 253–265.


Anderson, M. (2016). More Americans using smartphones for getting directions, streaming TV . Washington, D.C.: Pew Research Center Retrieved from http://www.pewresearch.org/fact-tank/2016/01/29/us-smartphone-use/ .

Anderson, M., & Horrigan, J. B. (2016). Smartphones help those without broadband get online, but don’t necessary bridge the digital divide . Washington, D.C.: Pew Research Center Retrieved from http://www.pewresearch.org/fact-tank/2016/10/03/smartphones-help-those-without-broadband-get-online-but-dont-necessarily-bridge-the-digital-divide/ .

Andrew, L., Maslin-Prothero, S., & Ewens, B. (2015). Enhancing the online learning experience using virtual interactive classrooms. Australian Journal of Advanced Nursing, 32 (4), 22–31.

Antunes, M., Pacheco, M. R., & Giovanela, M. (2012). Design and implementation of an educational game for teaching chemistry in higher education. Journal of Chemical Education, 89 (4), 517–521. doi: 10.1021/ed2003077 .


Armier, D. J., Shepherd, C. E., & Skrabut, S. (2016). Using game elements to increase student engagement in course assignments. College Teaching, 64 (2), 64–72 https://doi.org/10.1080/87567555.2015.1094439 .

Armstrong, A., & Thornton, N. (2012). Incorporating Brookfield’s discussion techniques synchronously into asynchronous online courses. Quarterly Review of Distance Education, 13 (1), 1–9.

Ashrafzadeh, A., & Sayadian, S. (2015). University instructors’ concerns and perceptions of technology integration. Computers in Human Behavior, 49 , 62–73. doi: 10.1016/j.chb.2015.01.071 .

Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25 (4), 297–308.

Auman, C. (2011). Using simulation games to increase student and instructor engagement. College Teaching, 59 (4), 154–161. doi: 10.1080/87567555 .

Axelson, R. D., & Flick, A. (2011). Defining student engagement. Change: The magazine of higher learning, 43 (1), 38–43.

Bahati, B. (2015). Extending student discussions beyond lecture room walls via Facebook. Journal of Education and Practice, 6 (15), 160–171.

Bakker, A. B., Vergel, A. I. S., & Kuntze, J. (2015). Student engagement and performance: A weekly diary study on the role of openness. Motivation and Emotion, 39 (1), 49–62. doi: 10.1007/s11031-014-9422-5 .

Beckem, J. I., & Watkins, M. (2012). Bringing life to learning: Immersive experiential learning simulations for online and blended courses. Journal of Asynchronous Learning Networks, 16 (5), 61–70 https://doi.org/10.24059/olj.v16i5.287 .

Bista, K. (2015). Is Twitter an effective pedagogical tool in higher education? Perspectives of education graduate students. Journal of the Scholarship Of Teaching And Learning, 15 (2), 83–102 https://doi.org/10.14434/josotl.v15i2.12825 .

Boghossian, P. (2006). Behaviorism, constructivism, and Socratic pedagogy. Educational Philosophy and Theory, 38 (6), 713–722 https://doi.org/10.1111/j.1469-5812.2006.00226.x .

Bower, M. (2011). Redesigning a web-conferencing environment to scaffold computing students’ creative design processes. Journal of Educational Technology & Society, 14 (1), 27–42.


Bower, M. (2016). A framework for adaptive learning design in a Web-conferencing environment. Journal of Interactive Media in Education, 2016 (1), 11 http://doi.org/10.5334/jime.406 .


Bowman, N. D., & Akcaoglu, M. (2014). “I see smart people!”: Using Facebook to supplement cognitive and affective learning in the university mass lecture. The Internet and Higher Education, 23 , 1–8. doi: 10.1016/j.iheduc.2014.05.003 .

Boyle, E. A., Hainey, T., Connolly, T. M., Gray, G., Earp, J., Ott, M., et al. (2016). An update to the systematic literature review of empirical evidence of the impacts and outcomes of computer games and serious games. Computers & Education, 94 , 178–192. doi: 10.1016/j.compedu.2015.11.003 .

Bryson, C., & Hand, L. (2007). The role of engagement in inspiring teaching and learning. Innovations in Education and Teaching International, 44 (4), 349–362. doi: 10.1080/14703290701602748 .

Buchanan, T., Sainter, P., & Saunders, G. (2013). Factors affecting faculty use of learning technologies: Implications for models of technology adoption. Journal of Computer in Higher Education, 25 (1), 1–11.

Bullen, M., & Morgan, T. (2011). Digital learners not digital natives. La Cuestión Universitaria, 7 , 60–68.

Bullen, M., Morgan, T., & Qayyum, A. (2011). Digital learners in higher education: Generation is not the issue. Canadian Journal of Learning and Technology, 37 (1), 1–24.

Calabretto, J., & Rao, D. (2011). Wikis to support collaboration of pharmacy students in medication management workshops -- a pilot project. International Journal of Pharmacy Education & Practice, 8 (2), 1–12.

Camacho, M. E., Carrión, M. D., Chayah, M., & Campos, J. M. (2016). The use of wiki to promote students’ learning in higher education (Degree in Pharmacy). International Journal of Educational Technology in Higher Education, 13 (1), 1–8 https://doi.org/10.1186/s41239-016-0025-y .

Camus, M., Hurt, N. E., Larson, L. R., & Prevost, L. (2016). Facebook as an online teaching tool: Effects on student participation, learning, and overall course performance. College Teaching, 64 (2), 84–94 https://doi.org/10.1080/87567555.2015.1099093 .

Carini, R. M., Kuh, G. D., & Klein, S. P. (2006). Student engagement and student learning: Testing the linkages. Research in Higher Education, 47 (1), 1–32. doi: 10.1007/s11162-005-8150-9 .

Cassidy, E. D., Colmenares, A., Jones, G., Manolovitz, T., Shen, L., & Vieira, S. (2014). Higher Education and Emerging Technologies: Shifting Trends in Student Usage. The Journal of Academic Librarianship, 40 , 124–133. doi: 10.1016/j.acalib.2014.02.003 .

Center for Postsecondary Research (2016). Engagement insights: Survey findings on the quality of undergraduate education . Retrieved from http://nsse.indiana.edu/NSSE_2016_Results/pdf/NSSE_2016_Annual_Results.pdf .

Center for Postsecondary Research (2017). About NSSE. Retrieved on February 15, 2017 from http://nsse.indiana.edu/html/about.cfm

Cercone, K. (2008). Characteristics of adult learners with implications for online learning design. AACE Journal, 16 (2), 137–159.

Chang, J. W., & Wei, H. Y. (2016). Exploring Engaging Gamification Mechanics in Massive Online Open Courses. Educational Technology & Society, 19 (2), 177–203.

Chawinga, W. D. (2017). Taking social media to a university classroom: teaching and learning using Twitter and blogs. International Journal of Educational Technology in Higher Education, 14 (1), 3 https://doi.org/10.1186/s41239-017-0041-6 .

Chen, B., Seilhamer, R., Bennett, L., & Bauer, S. (2015). Students’ mobile learning practices in higher education: A multi-year study. In EDUCAUSE Review Retrieved from http://er.educause.edu/articles/2015/6/students-mobile-learning-practices-in-higher-education-a-multiyear-study .

Chu, S. K., Chan, C. K., & Tiwari, A. F. (2012). Using blogs to support learning during internship. Computers & Education, 58 (3), 989–1000. doi: 10.1016/j.compedu.2011.08.027 .

Clements, J. C. (2015). Using Facebook to enhance independent student engagement: A case study of first-year undergraduates. Higher Education Studies, 5 (4), 131–146 https://doi.org/10.5539/hes.v5n4p131 .

Coates, H. (2008). Attracting, engaging and retaining: New conversations about learning . Camberwell: Australian Council for Educational Research Retrieved from http://research.acer.edu.au/cgi/viewcontent.cgi?article=1015&context=ausse .

Coffey, D. J., Miller, W. J., & Feuerstein, D. (2011). Classroom as reality: Demonstrating campaign effects through live simulation. Journal of Political Science Education, 7 (1), 14–33.

Coghlan, E., Crawford, J. Little, J., Lomas, C., Lombardi, M., Oblinger, D., & Windham, C. (2007). ELI Discovery Tool: Guide to Blogging . Retrieved from https://net.educause.edu/ir/library/pdf/ELI8006.pdf .

Connolly, T. M., Boyle, E. A., MacArthur, E., Hainey, T., & Boyle, J. M. (2012). A systematic literature review of empirical evidence on computer games and serious games. Computers & Education, 59 , 661–686. doi: 10.1016/j.compedu.2012.03.004 .

Cook, C. W., & Sonnenberg, C. (2014). Technology and online education: Models for change. ASBBS E-Journal, 10 (1), 43–59.

Crocco, F., Offenholley, K., & Hernandez, C. (2016). A proof-of-concept study of game-based learning in higher education. Simulation & Gaming, 47 (4), 403–422. doi: 10.1177/1046878116632484 .

Csikszentmihalyi, M. (1988). The flow experience and its significance for human psychology. In M. Csikszentmihalyi & I. Csikszentmihalyi (Eds.), Optimal experience: Psychological studies of flow in consciousness (pp. 15–13). Cambridge, UK: Cambridge University Press.


Dahlstrom, E. (2012). ECAR study of undergraduate students and information technology, 2012 (Research Report). Retrieved from http://net.educause.edu/ir/library/pdf/ERS1208/ERS1208.pdf

de Freitas, S. (2006). Learning in immersive worlds: A review of game-based learning . Retrieved from https://curve.coventry.ac.uk/open/file/aeedcd86-bc4c-40fe-bfdf-df22ee53a495/1/learning%20in%20immersive%20worlds.pdf .

Dichev, C., & Dicheva, D. (2017). Gamifying education: What is known, what is believed and what remains uncertain: A critical review. International Journal of Educational Technology in Higher Education, 14 (9), 1–36. doi: 10.1186/s41239-017-0042-5 .

DiVall, M. V., & Kirwin, J. L. (2012). Using Facebook to facilitate course-related discussion between students and faculty members. American Journal of Pharmaceutical Education, 76 (2), 1–5 https://doi.org/10.5688/ajpe76232 .

Dos, B., & Demir, S. (2013). The analysis of the blogs created in a blended course through the reflective thinking perspective. Educational Sciences: Theory & Practice, 13 (2), 1335–1344.

Dougherty, K., & Andercheck, B. (2014). Using Facebook to engage learners in a large introductory course. Teaching Sociology, 42 (2), 95–104 https://doi.org/10.1177/0092055x14521022 .

Dyson, B., Vickers, K., Turtle, J., Cowan, S., & Tassone, A. (2015). Evaluating the use of Facebook to increase student engagement and understanding in lecture-based classes. Higher Education: The International Journal of Higher Education and Educational Planning, 69 (2), 303–313 https://doi.org/10.1007/s10734-014-9776-3.

Esteves, K. K. (2012). Exploring Facebook to enhance learning and student engagement: A case from the University of Philippines (UP) Open University. Malaysian Journal of Distance Education, 14 (1), 1–15.

Evans, C. (2014). Twitter for teaching: Can social media be used to enhance the process of learning? British Journal of Educational Technology, 45 (5), 902–915 https://doi.org/10.1111/bjet.12099 .

Fagioli, L., Rios-Aguilar, C., & Deil-Amen, R. (2015). Changing the context of student engagement: Using Facebook to increase community college student persistence and success. Teachers College Record, 17 , 1–42.

Farley, P. C. (2013). Using the computer game “FoldIt” to entice students to explore external representations of protein structure in a biochemistry course for nonmajors. Biochemistry and Molecular Biology Education, 41 (1), 56–57 https://doi.org/10.1002/bmb.20655 .

Francescucci, A., & Foster, M. (2013). The VIRI classroom: The impact of blended synchronous online courses on student performance, engagement, and satisfaction. Canadian Journal of Higher Education, 43 (3), 78–91.

Fredricks, J., Blumenfeld, P., & Paris, A. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74 (1), 59–109. doi: 10.3102/00346543074001059 .

Gagnon, K. (2015). Using twitter in health professional education: A case study. Journal of Allied Health, 44 (1), 25–33.

Gandhi, P., Khanna, S., & Ramaswamy, S. (2016). Which industries are the most digital (and why?) . Retrieved from https://hbr.org/2016/04/a-chart-that-shows-which-industries-are-the-most-digital-and-why .

Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. The Internet and Higher Education, 10 (3), 157–172 http://dx.doi.org/10.1016/j.iheduc.2007.04.001 .

Garrity, M. K., Jones, K., VanderZwan, K. J., de la Rocha, A. R., & Epstein, I. (2014). Integrative review of blogging: Implications for nursing education. Journal of Nursing Education, 53 (7), 395–401. doi: 10.3928/01484834-20140620-01 .

Gikas, J., & Grant, M. M. (2013). Mobile computing devices in higher education: Student perspectives on learning with cellphones, smartphones & social media. The Internet and Higher Education, 19 , 18–26 http://dx.doi.org/10.1016/j.iheduc.2013.06.002 .

Gilboy, M. B., Heinerichs, S., & Pazzaglia, G. (2015). Enhancing student engagement using the flipped classroom. Journal of Nutrition Education and Behavior, 47 (1), 109–114 http://dx.doi.org/10.1016/j.jneb.2014.08.008 .

Greenwood, S., Perrin, A., & Duggan, M. (2016). Social media update 2016 . Washington.: Pew Research Center Retrieved from http://www.pewinternet.org/2016/11/11/social-media-update-2016/ .

Grimley, M., Green, R., Nilsen, T., & Thompson, D. (2012). Comparing computer game and traditional lecture using experience ratings from high and low achieving students. Australasian Journal of Educational Technology, 28 (4), 619–638 https://doi.org/10.14742/ajet.831 .

Gunawardena, C. N., Hermans, M. B., Sanchez, D., Richmond, C., Bohley, M., & Tuttle, R. (2009). A theoretical framework for building online communities of practice with social networking tools. Educational Media International, 46 (1), 3–16 https://doi.org/10.1080/09523980802588626 .

Haggis, T. (2009). What have we been thinking of? A critical overview of 40 years of student learning research in higher education. Studies in Higher Education, 34 (4), 377–390. doi: 10.1080/03075070902771903 .

Hauptman, P.H. (2015). Mobile technology in college instruction. Faculty perceptions and barriers to adoption (Doctoral dissertation). Retrieved from ProQuest. (AAI3712404).

Hennessy, C. M., Kirkpatrick, E., Smith, C. F., & Border, S. (2016). Social media and anatomy education: Using twitter to enhance the student learning experience in anatomy. Anatomical Sciences Education, 9 (6), 505–515 https://doi.org/10.1002/ase.1610 .

Hew, K. F., Huang, B., Chu, K. S., & Chiu, D. K. (2016). Engaging Asian students through game mechanics: Findings from two experiment studies. Computers & Education, 93 , 221–236. doi: 10.1016/j.compedu.2015.10.010 .

Hewege, C. R., & Perera, L. R. (2013). Pedagogical significance of wikis: Towards gaining effective learning outcomes. Journal of International Education in Business, 6 (1), 51–70 https://doi.org/10.1108/18363261311314953 .

Hou, H., Wang, S., Lin, P., & Chang, K. (2015). Exploring the learner’s knowledge construction and cognitive patterns of different asynchronous platforms: comparison of an online discussion forum and Facebook. Innovations in Education and Teaching International, 52 (6), 610–620. doi: 10.1080/14703297.2013.847381 .

Hu, S., & McCormick, A. C. (2012). An engagement-based student typology and its relationship to college outcomes. Research in Higher Education, 53 , 738–754. doi: 10.1007/s11162-012-9254-7 .

Hudson, T. M., Knight, V., & Collins, B. C. (2012). Perceived effectiveness of web conferencing software in the digital environment to deliver a graduate course in applied behavior analysis. Rural Special Education Quarterly, 31 (2), 27–39.

Hurt, N. E., Moss, G. S., Bradley, C. L., Larson, L. R., Lovelace, M. D., & Prevost, L. B. (2012). The ‘Facebook’ effect: College students’ perceptions of online discussions in the age of social networking. International Journal for the Scholarship of Teaching & Learning, 6 (2), 1–24 https://doi.org/10.20429/ijsotl.2012.060210 .

Ibáñez, M. B., Di-Serio, A., & Delgado-Kloos, C. (2014). Gamification for engaging computer science students in learning activities: A case study. IEEE Transactions on Learning Technologies, 7 (3), 291–301 https://doi.org/10.1109/tlt.2014.2329293 .

Ivala, E., & Gachago, D. (2012). Social media for enhancing student engagement: The use of facebook and blogs at a university of technology. South African Journal of Higher Education, 26 (1), 152–167.

Johnson, D. R. (2013). Technological change and professional control in the professoriate. Science, Technology & Human Values, 38 (1), 126–149. doi: 10.1177/0162243911430236 .

Junco, R., Elavsky, C. M., & Heiberger, G. (2013). Putting Twitter to the test: Assessing outcomes for student collaboration, engagement and success. British Journal of Educational Technology, 44 (2), 273–287. doi: 10.1111/j.1467-8535.2012.01284.x .

Junco, R., Heibergert, G., & Loken, E. (2011). The effect of Twitter on college student engagement and grades. Journal of Computer Assisted Learning, 27 (2), 119–132. doi: 10.1111/j.1365-2729.2010.00387.x .

Kahu, E. R. (2013). Framing student engagement in higher education. Studies in Higher Education, 38 (5), 758–773. doi: 10.1080/03075079.2011.598505 .

Kaware, S. S., & Sain, S. K. (2015). ICT Application in Education: An Overview. International Journal of Multidisciplinary Approach & Studies, 2 (1), 25–32.

Ke, F., Xie, K., & Xie, Y. (2016). Game-based learning engagement: A theory- and data-driven exploration. British Journal of Educational Technology, 47 (6), 1183–1201 https://doi.org/10.1111/bjet.12314 .

Kent, M. (2013). Changing the conversation: Facebook as a venue for online class discussion in higher education. Journal of Online Learning & Teaching, 9 (4), 546–565 https://doi.org/10.1353/rhe.2015.0000 .

Kidd, T., Davis, T., & Larke, P. (2016). Experience, adoption, and technology: Exploring the phenomenological experiences of faculty involved in online teaching at once school of public health. International Journal of E-Learning, 15 (1), 71–99.

Kim, Y., Jeong, S., Ji, Y., Lee, S., Kwon, K. H., & Jeon, J. W. (2015). Smartphone response system using twitter to enable effective interaction and improve engagement in large classrooms. IEEE Transactions on Education, 58 (2), 98–103 https://doi.org/10.1109/te.2014.2329651 .

Kinchin. (2012). Avoiding technology-enhanced non-learning. British Journal of Educational Technology, 43 (2), E43–E48.

Kolb, D. A. (2014). Experiential learning: Experience as the source of learning and development (2nd ed.). Upper Saddle River: Pearson Education, Inc..

Kopcha, T. J., Rieber, L. P., & Walker, B. B. (2016). Understanding university faculty perceptions about innovation in teaching and technology. British Journal of Educational Technology, 47 (5), 945–957. doi: 10.1111/bjet.12361 .

Krause, K., & Coates, H. (2008). Students’ engagement in first-year university. Assessment and Evaluation in Higher Education, 33 (5), 493–505. doi: 10.1080/02602930701698892 .

Kuh, G. D. (2009). The National Survey of Student Engagement: Conceptual and empirical foundations. New Directions for Institutional Research, 141 , 5–20.

Lam, S., Wong, B., Yang, H., & Yi, L. (2012). Understanding student engagement with a contextual model. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of Research on Student Engagement (pp. 403–419). New York: Springer.

Lawrence, B., & Lentle-Keenan, S. (2013). Teaching beliefs and practice, institutional context, and the uptake of Web-based technology. Distance Education, 34 (1), 4–20.

Leach, L. (2016). Enhancing student engagement in one institution. Journal of Further and Higher Education, 40 (1), 23–47.

Lester, D. (2013). A review of the student engagement literature. Focus on Colleges, Universities, and Schools, 7 (1), 1–8.

Lewis, C. C., Fretwell, C. E., Ryan, J., & Parham, J. B. (2013). Faculty use of established and emerging technologies in higher education: A unified theory of acceptance and use of technology perspective. International Journal of Higher Education, 2 (2), 22–34 http://dx.doi.org/10.5430/ijhe.v2n2p22 .

Lin, C., Singer, R., & Ha, L. (2010). Why university members use and resist technology? A structure enactment perspective. Journal of Computing in Higher Education, 22 (1), 38–59. doi: 10.1007/s12528-010-9028-1 .

Linder-VanBerschot, J. A., & Summers, L. L. (2015). Designing instruction in the face of technology transience. Quarterly Review of Distance Education, 16 (2), 107–118.

Liu, C., Cheng, Y., & Huang, C. (2011). The effect of simulation games on the learning of computational problem solving. Computers & Education, 57 (3), 1907–1918 https://doi.org/10.1016/j.compedu.2011.04.002 .

Lu, J., Hallinger, P., & Showanasai, P. (2014). Simulation-based learning in management education: A longitudinal quasi-experimental evaluation of instructional effectiveness. Journal of Management Development, 33 (3), 218–244. doi: 10.1108/JMD-11-2011-0115 .

Maben, S., Edwards, J., & Malone, D. (2014). Online engagement through Facebook groups in face-to-face undergraduate communication courses: A case study. Southwestern Mass Communication Journal, 29 (2), 1–27.

Manca, S., & Ranieri, M. (2013). Is it a tool suitable for learning? A critical review of the literature on Facebook as a technology-enhanced learning environment. Journal of Computer Assisted Learning, 29 (6), 487–504. doi: 10.1111/jcal.12007 .

Mansouri, S. A., & Piki, A. (2016). An exploration into the impact of blogs on students’ learning: Case studies in postgraduate business education. Innovations in Education And Teaching International, 53 (3), 260–273 http://dx.doi.org/10.1080/14703297.2014.997777 .

Marriott, P., Tan, S. W., & Marriot, N. (2015). Experiential learning – A case study of the use of computerized stock market trading simulation in finance education. Accounting Education, 24 (6), 480–497 http://dx.doi.org/10.1080/09639284.2015.1072728 .

Martin, F., Parker, M. A., & Deale, D. F. (2012). Examining interactivity in synchronous virtual classrooms. International Review of Research in Open and Distance Learning, 13 (3), 227–261.

Martin, K., Goldwasser, M., & Galentino, R. (2017). Impact of Cohort Bonds on Student Satisfaction and Engagement. Current Issues in Education, 19 (3), 1–14.

Martínez, A. A., Medina, F. X., Albalat, J. A. P., & Rubió, F. S. (2013). Challenges and opportunities of 2.0 tools for the interdisciplinary study of nutrition: The case of the Mediterranean Diet wiki. International Journal of Educational Technology in Higher Education, 10 (1), 210–225 https://doi.org/10.7238/rusc.v10i1.1341 .

McBrien, J. L., Jones, P., & Cheng, R. (2009). Virtual spaces: Employing a synchronous online classroom to facilitate student engagement in online learning. International Review of Research in Open and Distance Learning, 10 (3), 1–17 https://doi.org/10.19173/irrodl.v10i3.605 .

McClenney, K., Marti, C. N., & Adkins, C. (2012). Student engagement and student outcomes: Key findings from “CCSSE” validation research . Austin: Community College Survey of Student Engagement.

McKay, M., Sanko, J., Shekhter, I., & Birnbach, D. (2014). Twitter as a tool to enhance student engagement during an interprofessional patient safety course. Journal of Interprofessional Care, 28 (6), 565–567 https://doi.org/10.3109/13561820.2014.912618 .

Miller, A. D., Norris, L. B., & Bookstaver, P. B. (2012). Use of wikis in pharmacy hybrid elective courses. Currents in Pharmacy Teaching & Learning, 4 (4), 256–261. doi: 10.1016/j.cptl.2012.05.004 .

Morley, D. A. (2012). Enhancing networking and proactive learning skills in the first year university experience through the use of wikis. Nurse Education Today, 32 (3), 261–266.

Mysko, C., & Delgaty, L. (2015). How and why are students using Twitter for #meded? Integrating Twitter into undergraduate medical education to promote active learning. Annual Review of Education, Communication & Language Sciences, 12 , 24–52.

Nadolny, L., & Halabi, A. (2016). Student participation and achievement in a large lecture course with game-based learning. Simulation and Gaming, 47 (1), 51–72. doi: 10.1177/1046878115620388 .

Naghdipour, B., & Eldridge, N. H. (2016). Incorporating social networking sites into traditional pedagogy: A case of facebook. TechTrends, 60 (6), 591–597 http://dx.doi.org/10.1007/s11528-016-0118-4 .

Nakamaru, S. (2012). Investment and return: Wiki engagement in a “remedial” ESL writing course. Journal of Research on Technology in Education, 44 (4), 273–291.

Nelson, R. (2016). Apple’s app store will hit 5 million apps by 2020, more than doubling its current size . Retrieved from https://sensortower.com/blog/app-store-growth-forecast-2020 .

Nora, A., Barlow, E., & Crisp, G. (2005). Student persistence and degree attainment beyond the first year in college. In A. Seidman (Ed.), College Student Retention (pp. 129–154). Westport: Praeger Publishers.

Osgerby, J., & Rush, D. (2015). An exploratory case study examining undergraduate accounting students’ perceptions of using Twitter as a learning support tool. International Journal of Management Education, 13 (3), 337–348. doi: 10.1016/j.ijme.2015.10.002 .

Pace, C. R. (1980). Measuring the quality of student effort. Current Issues in Higher Education, 2 , 10–16.

Pace, C. R. (1984). Student effort: A new key to assessing quality . Los Angeles: University of California, Higher Education Research Institute.

Paul, J. A., & Cochran, J. D. (2013). Key interactions for online programs between faculty, students, technologies, and educational institutions: A holistic framework. Quarterly Review of Distance Education, 14 (1), 49–62.

Pellas, N. (2014). The influence of computer self-efficacy, metacognitive self-regulation, and self-esteem on student engagement in online learning programs: Evidence from the virtual world of Second Life. Computers in Human Behavior, 35 , 157–170. doi: 10.1016/j.chb.2014.02.048 .

Poole, S. M., Kemp, E., Williams, K. H., & Patterson, L. (2014). Get your head in the game: Using gamification in business education to connect with Generation Y. Journal for Excellence in Business Education, 3 (2), 1–9.

Poushter, J. (2016). Smartphone ownership and internet usage continues to climb in emerging economies . Washington, D.C.: Pew Research Center Retrieved from http://www.pewglobal.org/2016/02/22/smartphone-ownership-and-internet-usage-continues-to-climb-in-emerging-economies/ .

Prestridge, S. (2014). A focus on students’ use of Twitter - their interactions with each other, content and interface. Active Learning in Higher Education, 15 (2), 101–115.

Rambe, P. (2012). Activity theory and technology mediated interaction: Cognitive scaffolding using question-based consultation on “Facebook”. Australasian Journal of Educational Technology, 28 (8), 1333–1361 https://doi.org/10.14742/ajet.775 .

Reid, P. (2014). Categories for barriers to adoption of instructional technologies. Education and Information Technologies, 19 (2), 383–407.

Revere, L., & Kovach, J. V. (2011). Online technologies for engagement learning: A meaningful synthesis for educators. Quarterly Review of Distance Education, 12 (2), 113–124.

Richardson, J. C., & Newby, T. (2006). The role of students’ cognitive engagement in online learning. American Journal of Distance Education, 20 (1), 23–37 http://dx.doi.org/10.1207/s15389286ajde2001_3 .

Ross, H. M., Banow, R., & Yu, S. (2015). The use of Twitter in large lecture courses: Do the students see a benefit? Contemporary Educational Technology, 6 (2), 126–139.

Roussinos, D., & Jimoyiannis, A. (2013). Analysis of students’ participation patterns and learning presence in a wiki-based project. Educational Media International, 50 (4), 306–324 https://doi.org/10.1080/09523987.2013.863471 .

Salaber, J. (2014). Facilitating student engagement and collaboration in a large postgraduate course using wiki-based activities. International Journal of Management Education, 12 (2), 115–126. doi: 10.1016/j.ijme.2014.03.006 .

Scarlet, J., & Ampolos, L. (2013). Using game-based learning to teach psychopharmacology. Psychology Learning and Teaching, 12 (1), 64–70 https://doi.org/10.2304/plat.2013.12.1.64 .

Sharma, P., & Tietjen, P. (2016). Examining patterns of participation and meaning making in student blogs: A case study in higher education. American Journal of Distance Education, 30 (1), 2–13 http://dx.doi.org/10.1080/08923647.2016.1119605 .

Shraim, K. Y. (2014). Pedagogical innovation within Facebook: A case study in tertiary education in Palestine. International Journal of Emerging Technologies in Learning, 9 (8), 25–31. doi: 10.3991/ijet.v9i8.3805 .

Siddique, Z., Ling, C., Roberson, P., Xu, Y., & Geng, X. (2013). Facilitating higher-order learning through computer games. Journal of Mechanical Design, 135 (12), 121004–121010.

Smith, A., & Anderson, M. (2016). Online Shopping and E-Commerce . Washington, D.C.: Pew Research Center Retrieved from http://www.pewinternet.org/2016/12/19/online-shopping-and-e-commerce/ .

Staines, Z., & Lauchs, M. (2013). Students’ engagement with Facebook in a university undergraduate policing unit. Australasian Journal of Educational Technology, 29 (6), 792–805 https://doi.org/10.14742/ajet.270 .

Sun, A., & Chen, X. (2016). Online education and its effective practice: A research review. Journal of Information Technology Education: Research, 15 , 157–190.

Tiernan, P. (2014). A study of the use of Twitter by students for lecture engagement and discussion. Education and Information Technologies, 19 (4), 673–690 https://doi.org/10.1007/s10639-012-9246-4 .

Trowler, V. (2010). Student engagement literature review . Lancaster: Lancaster University Retrieved from http://www.lancaster.ac.uk/staff/trowler/StudentEngagementLiteratureReview.pdf .

Trowler, V., & Trowler, P. (2010). Student engagement evidence summary . Lancaster: Lancaster University Retrieved from http://eprints.lancs.ac.uk/61680/1/Deliverable_2._Evidence_Summary._Nov_2010.pdf .

van Beynen, K., & Swenson, C. (2016). Exploring peer-to-peer library content and engagement on a student-run Facebook group. College & Research Libraries, 77 (1), 34–50 https://doi.org/10.5860/crl.77.1.34 .

Wang, S. (2008). Blogs in education. In M. Pagani (Ed.), Encyclopedia of Multimedia Technology and Networking (2nd ed., pp. 134–139). Hershey: Information Sciences Reference.

Wdowik, S. (2014). Using a synchronous online learning environment to promote and enhance transactional engagement beyond the classroom. Campus — Wide Information Systems, 31 (4), 264–275. doi: 10.1108/CWIS-10-2013-0057 .

Weibel, D., Wissmath, B., Habegger, S., Steiner, Y., & Groner, R. (2008). Playing online games against computer-vs. human-controlled opponents: Effects on presence, flow, and enjoyment. Computers in Human Behavior, 24 (5), 2274–2291 https://doi.org/10.1016/j.chb.2007.11.002 .

West, B., Moore, H., & Barry, B. (2015). Beyond the tweet: Using Twitter to enhance engagement, learning, and success among first-year students. Journal of Marketing Education, 37 (3), 160–170. doi: 10.1177/0273475315586061 .

Westera, W. (2015). Reframing the role of educational media technologies. Quarterly Review of Distance Education, 16 (2), 19–32.

Whitton, N. (2011). Game engagement theory and adult learning. Simulation & Gaming, 42 (5), 596–609.

Williams, D., & Whiting, A. (2016). Exploring the relationship between student engagement, Twitter, and a learning management system: A study of undergraduate marketing students. International Journal of Teaching & Learning in Higher Education, 28 (3), 302–313.

Wimpenny, K., & Savin-Baden, M. (2013). Alienation, agency, and authenticity: A synthesis of the literature on student engagement. Teaching in Higher Education, 18 (3), 311–326. doi: 10.1080/13562517.2012.725223 .

Witkowski, P., & Cornell, T. (2015). An Investigation into Student Engagement in Higher Education Classrooms. InSight: A Journal of Scholarly Teaching, 10 , 56–67.

Wright, G. B. (2011). Student-centered learning in higher education. International Journal of Teaching and Learning in Higher Education, 23 (3), 92–97.

Yang, C., & Chang, Y. (2012). Assessing the effects of interactive blogging on student attitudes towards peer interaction, learning motivation, and academic achievements. Journal of Computer Assisted Learning, 28 (2), 126–135 https://doi.org/10.1111/j.1365-2729.2011.00423.x .

Zepke, N. (2014). Student engagement research in higher education: questioning an academic orthodoxy. Teaching in Higher Education, 19 (6), 697–708 http://dx.doi.org/10.1080/13562517.2014.901956 .

Zepke, N., & Leach, L. (2010). Improving student engagement: Ten proposals for action. Active Learning in Higher Education, 11 (3), 167–177. doi: 10.1177/1469787410379680 .

Zickuhr, K., & Raine, L. (2014). E-reading rises as device ownership jumps . Washington, D.C.: Pew Research Center Retrieved from http://www.pewinternet.org/2014/01/16/e-reading-rises-as-device-ownership-jumps/ .

Zimmermann, L. K. (2013). Using a virtual simulation program to teach child development. College Teaching, 61 (4), 138–142. doi: 10.1080/87567555.2013.817377 .


Acknowledgements

Not applicable.

This research was supported in part by a Laureate Education, Inc. David A. Wilson research grant study awarded to the second author, “A Comparative Analysis of Student Engagement and Critical Thinking in Two Approaches to the Online Classroom”.

Availability of data and materials

Authors' contributions

The first and second authors contributed significantly to the writing, review, and conceptual thinking of the manuscript. The third author provided a first detailed outline of what the paper could address, and the fourth author provided input and feedback through critical review. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Ethics approval and consent to participate

The parent study was approved by the University of Liverpool Online International Online Ethics Review Committee, approval number 04-24-2015-01.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Authors and affiliations

University of Liverpool Online, Liverpool, UK

Laura A. Schindler & Osama A. Morad

Laureate Education, Inc., Baltimore, USA

Gary J. Burkholder

Walden University, Minneapolis, USA

University of Lincoln, Lincoln, UK

Craig Marsh


Corresponding author

Correspondence to Laura A. Schindler .

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Schindler, L.A., Burkholder, G.J., Morad, O.A. et al. Computer-based technology and student engagement: a critical review of the literature. Int J Educ Technol High Educ 14, 25 (2017). https://doi.org/10.1186/s41239-017-0063-0


Received : 31 March 2017

Accepted : 06 June 2017

Published : 02 October 2017

DOI : https://doi.org/10.1186/s41239-017-0063-0




A survey of computer-based essay marking (CBEM) systems

Saadiyah Darus

2000, Proceedings of International Conference "Education & ICT in the New Millennium"

The following survey is presented by first looking at the initial research carried out in the area of computer-based essay marking systems. Early researchers, inspired by natural language processing, studied how the computer might play a role in evaluating students' writing. Computer-based essay marking systems developed from the late 1960s attempted to prove that a computer can mark essays as well as a human marker. These are Project Essay Grade 1, 2 and 3. As time went by, more sophisticated systems were developed, such as A Simple Text Automatic Marking System, Intelligent Essay Assessor, e-rater and the program developed by S. L. Larkey. Some of these systems have explored the use of advanced computational linguistics and statistical techniques for automatically scoring a variety of writing responses. Most of these systems are promising at the research level. The paper conclusively indicates that further research needs to be carried out in order to make these programs commercially available to instructors.
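As a rough illustration of the statistical flavour of these early systems, here is a minimal Python sketch that scores essays from a few surface features (length, average word length, number of sentences) using a least-squares fit to human-assigned marks. The features, the tiny training set and all names are invented for illustration; this is a sketch in the spirit of Project Essay Grade, not a reconstruction of it or of any system named above.

    # Toy illustration of statistical essay scoring: a few surface features plus
    # a regression fitted to human-assigned marks. Features and data are invented.
    import numpy as np

    def features(essay: str) -> np.ndarray:
        words = essay.split()
        sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
        return np.array([
            len(words),                                          # essay length
            np.mean([len(w) for w in words]) if words else 0.0,  # average word length
            len(sentences),                                      # number of sentences
        ])

    # Hypothetical training set: essays already marked by a human (score out of 10)
    train_essays = [
        "Computers are used in schools. They help students learn quickly.",
        "The computer is good.",
        "Modern computers support research, defence and medicine, and they have "
        "changed how people work and communicate across the world.",
    ]
    human_scores = np.array([6.0, 3.0, 9.0])

    X = np.vstack([features(e) for e in train_essays])
    X = np.hstack([X, np.ones((X.shape[0], 1))])                # intercept column
    weights, *_ = np.linalg.lstsq(X, human_scores, rcond=None)  # least-squares fit

    def predict_score(essay: str) -> float:
        x = np.append(features(essay), 1.0)
        return float(x @ weights)

    print(round(predict_score("Computers help doctors diagnose diseases and run tests."), 1))

Real systems of this kind add many more features and far larger sets of human-marked essays, but the basic fit-then-predict loop is the same.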

Related Papers

Gema Online Journal of Language Studies

Nazlia Omar , Siti H Stapa , Saadiyah Darus , Tg Nor Rizan Tg Mohd Maasum


Saadiyah Darus

"The main objective of this research is to develop the framework of a Computer-Based Essay Marking (CBEM) system for writing in ESL (English as a Second Language) at Institutions of Higher Learning (IHLs) in Malaysia. An initial study shows that a number of CBEM systems are available. In order to determine whether they are suitable for marking students’ writing in ESL, the study investigated lecturers’ and students’ expectations of the CBEM systems using questionnaire surveys. The study also uses Criterion to mark students’ essays. The results of this study suggest that existing CBEM systems are not suitable for marking ESL writing at IHLs in Malaysia. This is due to the fact that lecturers and students have certain expectations these CBEM systems failed to meet. In this paper, we will describe the proposed framework of a CBEM system for ESL writing at IHLs in Malaysia. Since this framework is intended to be used by the software designer in designing and implementing the system, we will describe the framework in the form of the software requirements."

Nazlia Omar , Saadiyah Darus

Although computers and artificial intelligence have been proposed as tools to facilitate the evaluation of student essays, they are not specifically developed for Malaysian ESL (English as a second language) learners. A marking tool which is specifically developed to analyze errors in ESL writing is very much needed. Though there are numerous techniques adopted in automated essay marking, research on the formation and use of heuristics to aid the construction of computer-based essay marking systems has been scarce. Thus, this paper aims to introduce new heuristics that can be used to mark essays automatically and detect grammatical errors in tenses. This approach, which uses natural language processing techniques, can be applied as part of the software requirements for a CBEM (Computer Based Essay Marking) system for ESL learners. The preliminary result based on the training set shows that the heuristics are useful and can improve the effectiveness of an automated essay marking tool for writing in ESL.

Automated essay marking systems developed from the late 1960s attempted to prove that a computer can mark essays as well as a human marker. This paper discusses an approach for developing an automated marking tool for English as a second language (ESL) and introduces a heuristics and rule-based approach to detect grammatical errors in tenses in ESL essays. The results show that the heuristics and rule-based approach is useful and can improve the effectiveness of an automated essay marking tool for writing in ESL.
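To make the idea of a heuristic, rule-based tense check concrete, here is a toy Python sketch: a single invented rule flags base-form verbs in sentences that contain a past-time signal word. The word lists and the rule itself are assumptions made here for illustration and are far cruder than the heuristics these papers propose.

    # Toy rule-based check for one kind of tense error discussed above:
    # a past-time signal word in a sentence that still uses a base-form verb.
    # The word lists and the single rule are illustrative only.
    import re

    PAST_TIME_SIGNALS = {"yesterday", "ago", "last"}
    PAST_FORMS = {"go": "went", "eat": "ate", "write": "wrote", "see": "saw"}

    def find_tense_errors(sentence: str):
        """Return (verb, suggested past form) pairs when a past-time signal
        appears in a sentence that still uses a base-form verb."""
        tokens = re.findall(r"[a-z']+", sentence.lower())
        if not any(t in PAST_TIME_SIGNALS for t in tokens):
            return []
        return [(t, PAST_FORMS[t]) for t in tokens if t in PAST_FORMS]

    print(find_tense_errors("Yesterday I go to the library and write my essay."))
    # [('go', 'went'), ('write', 'wrote')]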

Proceedings of the …

Norazwa Abas

… dan Peraturan Untuk …

This paper discusses an approach for developing an automatic grammar-checking tool for essay writing in English as a second language using natural language processing. The paper discusses the essay checking and marking methods commonly used by lecturers and the existing essay checking systems. It then discusses heuristic and rule-based methods for detecting grammatical errors in essays written in English as a second language. The findings show that the use of heuristics and rules can improve the results of automatic grammar checking for essay writing in English as a second language.

GEMA Online™ Journal of Language Studies. 12(4): 1089-1107.

The International Journal of Learning Vol 18 Issue 9: Page 69-80

Noriah Ismail

It is important to note that many ESL tertiary students face a lot of problems in writing. Their inability to write well not only affects their grade in their English proficiency class, but their overall course grade as well. In light of this issue, it is important to analyze the students’ writing problems and needs in order to provide a suitable writing module which can enhance their writing ability. This study investigates ESL Tertiary students’ writing problems and needs in the BEL 311 English proficiency course at MARA University of Technology, Johor. In addition, the study also looks at the students’ and lecturers’ suggestions of the important elements for the proposed supplementary online writing program IQ-Write for the course. In this study, 60 Part Three Diploma students taking the BEL 311 course were given a set of questionnaires and four lecturers who have taught the course for more than 5 years were interviewed. Results indicate that the students face a lot of difficulties and problems which include their inability to be inquisitive and critical when writing, the lack of practice time being allocated in class, as well as the dull and ineffective writing module and activities conducted in class. Thus, they feel that an additional online writing program (IQ-Write) should be developed and be made to cater to the students’ writing needs in order to help increase their interest and performance in academic writing.



Essays on Computer System

33 samples on this topic

To many students, composing Computer System papers comes easily; others require help of various kinds. The WowEssays.com collection includes expertly crafted sample essays on Computer System and related issues. Among all these Computer System essay examples, you will most likely find a paper that matches what you consider a worthy model. You can be sure that virtually every Computer System piece presented here can serve as a clear example to follow for the overall structure and for writing the different chapters of a paper: the introduction, main body, and conclusion.

If, however, you have a hard time coming up with a good Computer System essay, or don't have even a minute of spare time to explore our sample database, our essay writing service can still be of great assistance to you. The point is, our experts can tailor a sample Computer System paper to your personal needs and particular requirements within the pre-agreed timeframe. Buy college essays today!

Advanced Information Management And The Application Of Technology Essay Sample

Introduction

Proper Term Paper Example About Redesigning Security Operations

Example Of Essay On Duties Of Purchasing Agents

Proper Research Proposal Example About The Impact Of Computers On Individual Learning

Question & Answer On Scare-Ware And Famous Threats

Human Behavior Effects On Health Care Informatics Essay Template For Faster Writing

Free Research Paper About Les Schwab Company

Example Of Protecting Clients And Servers Research Paper

Computer I/O Architecture And Organization: Example Essay By An Expert Writer To Follow

Expertly Written Essay On Privacy, Security And Ethical Issues In Computer Science – Information System And Internet To Follow

Executive summary

Origins Of Cyber Security Essay

Good Example Of Cyber Crime Research Paper

Good Example Of Computer Vision Literature Review

Free New Story Essay: Top-Quality Sample To Follow

Good Clinic's Auto Reception Literature Review Example

Duties: Exemplar Essay To Follow

Internship Report

This report covers my internship period at the Fastenal Distribution Center. In this organization, I was given the role of assistant manager. The time proved to be very informative, both academically and professionally. I was able to learn a myriad of skills that could not be learned in a classroom setting. The period was a platform upon which I came to know my strengths as a future management professional, while also shedding light on the aspects I should work on to become a better manager in the future.

The Repercussions Of Trojan Horse: A Top-Quality Research Paper For Your Inspiration

Example of cyber law research paper.

(Institution Name)

Free Real Incidents Critical Thinking Example

Incident Report Planning

Free What Is Artificial Intelligence? Essay: Top-Quality Sample To Follow

Operating System Essays Examples

Control Chart For The Process Case Study Examples

Management Case

The Life And Exploits Of Kevin Mitnick Course Work

The Life and Exploits of Kevin Mitnick.

Course Work On Digital Data And File Systems

Research Paper On Social Engineering And Privacy

Taxonomy Of Information Technology Course Work

Research Paper On Information Systems In Business

Information Systems

The Programming Process Term Paper

The objective of computer programming is to develop electronic solutions for a particular company. In today's world, most companies use software systems in order to run their processes efficiently. Companies store their source data in the system and perform transactions on it through computers. The software users expect certain outputs and reports from a computer system.

Steps in the Programming Process

Information Technology Course Work

Information Technology is a collection of hardware and software used for storing (saving), retrieving (opening) and manipulating (editing) information in a computer system. Information in an IT environment refers to the result of processed data, while data is the raw material fed into the computer. The main purpose of IT is to process data and give out the results as information, where the processing may involve saving, editing and manipulation.[1] There has to be an interface linking the different components of an IT system so that data can flow from input, through processing, to output.
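As a minimal illustration of this input, process and output flow, the following hypothetical Python sketch feeds in raw data, processes it, and returns information; the data and function names are invented for illustration.

    # Minimal sketch of the input -> process -> output flow described above:
    # raw data (facts and figures) goes in, is processed, and comes out as
    # information with meaning attached. All names are illustrative.
    from statistics import mean

    def process(raw_marks):
        """Process step: turn raw data (a list of marks) into information."""
        return {
            "count": len(raw_marks),
            "average": mean(raw_marks),
            "highest": max(raw_marks),
            "lowest": min(raw_marks),
        }

    raw_data = [67, 82, 74, 91, 58]     # input: raw, unprocessed facts
    information = process(raw_data)     # process
    print(information)                  # output: the same facts arranged with meaning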

Case Study On The Swimming Pool



Essay: Computer based information system

Essay details and download:.

  • Subject area(s): Information technology essays
  • Reading time: 4 minutes
  • Price: Free download
  • Published: 15 September 2019*
  • File format: Text
  • Words: 906 (approx)
  • Number of pages: 4 (approx)

Text preview of this essay:

This page of the essay has 906 words. Download the full version above.

Assignment: Introduction to Information Systems. Section: 1. Date: 1 May 2017. Topic: Computer-based information system.

Computer-based information system: A computer-based information system (CBIS) is an information-handling framework that organises data into useful information and serves as a tool to support decision making, coordination and control, as well as visualisation and analysis. Several terms related to a CBIS are described below.

Data: Data is a collection of raw facts and figures. The word raw means that the facts have not yet been processed. Data may consist of numbers, characters, symbols or images.

Information: Processed data is called information. When raw facts and figures are processed and arranged in some order, they become information. Information has proper meaning; in effect, we process data in order to convert it into information.

System: A system is an arrangement of elements or components that interact with each other to achieve a common goal. It may be abstract or concrete, and it consists of several interrelated components.

Advantages of a computer-based information system: The benefits of a computer-based information system are many and reach many groups; for example, a CBIS supports centralised communication and information, business generation, social networking and more.

Communication: Without computers, customers can contact you only by telephone, fax or postal mail, or by walking in the door. With computers, they can reach you through email, Facebook and other social media sites, and your website. They can comment on your blog and complete your customer surveys. Staying in touch with your customers helps you learn what you are doing well, what you should improve, and what they want. This ease of interaction is likely to increase as more people use mobile phones to access the Internet.

Information centrality: Access to information via a computer-based information system is central, giving a "one-stop" location to find and access relevant information. Most large-scale businesses and organisations use some kind of central database to manage customer information, manage advertising records, store product information and keep track of orders. Examples of central database solutions are MySQL, PostgreSQL or Microsoft SQL Server, coupled with custom programming.

Types of computer-based information system:

- Transaction processing systems: record day-to-day transactions and help managers by creating databases.
- Management information systems: summarise the detailed data of the transaction processing system and produce standard reports for middle-level managers.
- Decision support systems: draw on the detailed data of the transaction processing system and give middle-level managers a flexible tool for analysis.
- Executive support systems: present information in a highly summarised form, combine internal data from TPS and MIS with external data, and help top-level executives oversee operations and develop strategic plans.

Elements of a computer-based information system:

Hardware: The term hardware refers to equipment. This category includes the computer itself, often referred to as the central processing unit (CPU), and all of its supporting equipment. Among the supporting equipment are input and output devices, storage devices and communications devices.
Software: The term software refers to computer programs and the manuals that support them. Computer programs are machine-readable instructions that direct the circuitry within the hardware parts of the CBIS to work in ways that produce useful information from data. Programs are generally stored on some input/output medium, often a disk or tape.

Data: Data are facts that are used by programs to produce useful information. Like programs, data are generally stored in machine-readable form on disk or tape until the computer needs them. Data may consist of numbers, characters, symbols or images.

Procedures: Procedures are the policies that govern the operation of a computer system. "Procedures are to people what software is to hardware" is a common analogy used to illustrate the role of procedures in a CBIS.

People: People are required for the operation of all information systems. Every computer-based information system needs people if it is to be useful. Often the most overlooked element of the CBIS is the people.

Knowledge, computer-based information systems and the communication of meaning: The proposition that all knowledge remains subjective has significant implications for the extent to which any knowledge, no matter how explicit, can be shared using a CBIS. If all knowledge has tacit components, and tacit knowledge is difficult to codify and communicate, then sharing even explicit knowledge may not be as easy as the objectivist perspective suggests. The importance of the way in which certain kinds of knowledge become socially embedded within areas of professional activity is recognised by Lave and Wenger (1991). They describe this process as the development of "communities of practice", where working and the sharing of knowledge are social, collective activities with many informal aspects.

Conclusion: Finally, there are many different ways to use a computer-based information system; for example, companies collect and analyse customer information and use it to analyse customer opinion about a product in order to identify the best product. In this report I touched on several topics: the advantages of a computer-based information system, the types of computer-based information system, the elements of a computer-based information system, and knowledge, computer-based information systems and the communication of meaning.

References: Khalid Moidu; Warren Davies; Robert P. Bostrom; F. A. Wilson; Josh Wepman.
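To make the transaction processing and central database ideas above concrete, here is a minimal, hypothetical Python sketch of a tiny transaction processing system feeding a management-style daily report. SQLite stands in for the MySQL or PostgreSQL servers the essay mentions, and all table, column and function names are illustrative rather than taken from any real system.

    # Minimal sketch of a transaction processing system (TPS) feeding a
    # management-style summary report, using SQLite as a stand-in for the
    # central database servers mentioned above. All names are illustrative.
    import sqlite3

    conn = sqlite3.connect(":memory:")   # central data store (in-memory for the example)
    conn.execute("""CREATE TABLE transactions (
                        id INTEGER PRIMARY KEY,
                        customer TEXT,
                        amount REAL,
                        day TEXT)""")

    def record_transaction(customer: str, amount: float, day: str) -> None:
        """TPS step: record a single day-to-day transaction (input)."""
        conn.execute("INSERT INTO transactions (customer, amount, day) VALUES (?, ?, ?)",
                     (customer, amount, day))
        conn.commit()

    def daily_report(day: str):
        """MIS step: summarise detailed TPS data into a standard report (output)."""
        cur = conn.execute(
            "SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM transactions WHERE day = ?",
            (day,))
        count, total = cur.fetchone()
        return {"day": day, "transactions": count, "total_sales": total}

    # Input -> process -> output cycle in action
    record_transaction("Alice", 120.0, "2017-05-01")
    record_transaction("Bob", 75.5, "2017-05-01")
    print(daily_report("2017-05-01"))   # {'day': '2017-05-01', 'transactions': 2, 'total_sales': 195.5}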


About this essay:

If you use part of this page in your own work, you need to provide a citation, as follows:

Essay Sauce, Computer based information system . Available from:<https://www.essaysauce.com/information-technology-essays/essay-2017-05-24-000cyy/> [Accessed 04-03-24].

These Information technology essays have been submitted to us by students in order to help you with your studies.

* This essay may have been previously published on Essay.uk.com at an earlier date.



Computer Science > Information Retrieval

Title: Prospect Personalized Recommendation on Large Language Model-based Agent Platform

Abstract: The new kind of Agent-oriented information system, exemplified by GPTs, urges us to inspect the information system infrastructure to support Agent-level information processing and to adapt to the characteristics of Large Language Model (LLM)-based Agents, such as interactivity. In this work, we envisage the prospect of the recommender system on LLM-based Agent platforms and introduce a novel recommendation paradigm called Rec4Agentverse, comprised of Agent Items and Agent Recommender. Rec4Agentverse emphasizes the collaboration between Agent Items and Agent Recommender, thereby promoting personalized information services and enhancing the exchange of information beyond the traditional user-recommender feedback loop. Additionally, we prospect the evolution of Rec4Agentverse and conceptualize it into three stages based on the enhancement of the interaction and information exchange among Agent Items, Agent Recommender, and the user. A preliminary study involving several cases of Rec4Agentverse validates its significant potential for application. Lastly, we discuss potential issues and promising directions for future research.
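As a purely illustrative aside (not something the paper provides), the loop the abstract describes, in which a user asks an Agent Recommender for help and is routed to Agent Items that can also pass feedback back, might be modelled in toy form like this; every class name and the naive keyword-matching logic are assumptions made here for illustration.

    # Purely illustrative toy model of the Rec4Agentverse idea sketched in the
    # abstract: an Agent Recommender selects Agent Items (LLM-based agents) for a
    # user and records feedback. The matching logic and names are invented.
    from dataclasses import dataclass, field

    @dataclass
    class AgentItem:
        name: str
        skills: set[str]

        def serve(self, request: str) -> str:
            # Stand-in for an LLM-based agent handling an interactive request.
            return f"[{self.name}] handling: {request}"

    @dataclass
    class AgentRecommender:
        catalogue: list[AgentItem]
        feedback: list[tuple[str, str]] = field(default_factory=list)

        def recommend(self, user_interest: str) -> AgentItem:
            # Naive matching: pick the agent whose skills overlap the interest most.
            words = set(user_interest.lower().split())
            return max(self.catalogue, key=lambda a: len(a.skills & words))

        def record_feedback(self, agent_name: str, comment: str) -> None:
            # Information flowing back beyond the classic user-recommender loop.
            self.feedback.append((agent_name, comment))

    recommender = AgentRecommender(catalogue=[
        AgentItem("TravelAgent", {"travel", "flights", "hotels"}),
        AgentItem("StudyAgent", {"essay", "writing", "study"}),
    ])
    agent = recommender.recommend("help with essay writing")
    print(agent.serve("outline a 500-word essay on computers"))
    recommender.record_feedback(agent.name, "useful outline, needs examples")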


Computer-Based Inventory System Essay Example


  • Pages: 2 (316 words)
  • Published: August 5, 2018
  • Type: Article

A computer-based system is a complex system in which information technology plays a major role. It makes work easier, faster and more accurate. Because of that, automated schemes have become essential to small and big companies alike, for they are expected to give the best services possible. Nevertheless, some businesses still prefer sticking with systems that are not integrated with technology; probable causes are computer-illiterate staff and a lack of funds.

Companies, especially the big ones, are recommended to switch from manual to automated systems because this will improve the efficiency and productivity of the business, which will uplift the industry's reputation. One of the most sought-after automated systems in different companies is a purchasing and inventory system, and the two go hand in hand. A purchasing and inventory system is very important in every organization because good purchase and inventory management can create excellent productivity.

Primarily, inventory work consists of input, output and restock. Input is the process of bringing new products into the inventory and replacing old products with new ones. Output is the procedure of taking products out of the inventory for sale or usage, and restock is the process of increasing the number of existing products in the inventory in order to cover insufficient stock or escalating demand.
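A minimal, hypothetical Python sketch of the three operations just described (input, output and restock) may make the distinction clearer; the class and method names are illustrative and not part of the essay or of any particular product.

    # Toy inventory model illustrating the three operations described above:
    # input (add new products), output (remove stock for sales/usage), restock (refill).
    # All names are illustrative.
    class Inventory:
        def __init__(self):
            self.stock = {}                      # product name -> quantity on hand

        def input(self, product: str, quantity: int) -> None:
            """Input: bring a new or replacement product into the inventory."""
            self.stock[product] = self.stock.get(product, 0) + quantity

        def output(self, product: str, quantity: int) -> None:
            """Output: take products out of the inventory for sale or usage."""
            if self.stock.get(product, 0) < quantity:
                raise ValueError(f"not enough {product} in stock")
            self.stock[product] -= quantity

        def restock(self, product: str, target_level: int) -> int:
            """Restock: top an existing product back up to a target level."""
            needed = max(0, target_level - self.stock.get(product, 0))
            self.stock[product] = self.stock.get(product, 0) + needed
            return needed                        # quantity that had to be refilled

    inv = Inventory()
    inv.input("copper kettle", 10)
    inv.output("copper kettle", 4)
    print(inv.restock("copper kettle", 10))      # 4 units refilled
    print(inv.stock)                             # {'copper kettle': 10}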

Most of the retail market still uses the traditional way of managing inventory, where a person is assigned to check and record the stock by hand using pen and paper, and where all operations regarding the stock are archived. Investing in an automated system gives the company a bigger chance of acquiring more customers because of its fast and reliable services. It will also ensure the accuracy of operation records, which can be provided to customers and partners upon request. This will develop trust between the company and its clients.



Computer-based Systems


Effective Technical and Human Implementation of Computer-based Systems (ETHICS) is a systems design methodology characterised by a high level of user involvement at the design stage, the setting of clear job satisfaction objectives, and recognition of organisational factors to ensure compatibility and proper functionality. It encompasses the socio-technical view of systems, which states "[in order] for a system to be effective the technology must fit closely with the social and organisational factors" (Avison & Fitzgerald, 1995, pg 353). This method has been further developed, and this has led to the creation of an approach specifically for the requirements determination stage of a project, known as QUICKethics (Quality Information from Considered Knowledge). A useful and correct requirements specification will be greatly aided by a method which "assists the user to think systematically and analytically about his or her information needs" (Mumford, 1995, pg 95).

QUICKethics attempts this by enabling users to work both individually and in groups to consider their roles and responsibilities and relate these to their information needs. This process helps users define their requirements fully and correctly, and also addresses possible conflicts of interest. Discussion groups give users a greater understanding of their needs and of how the system can satisfy them, and encourage co-operation between different levels of the organisation with regard to system demands. Having those affected by the proposed system take part in the decision-making process, something which is common to many other methodologies, is seen as essential to the QUICKethics approach. The procedure 'empowers' users as they see that their interests are respected and that their knowledge is being used.

It provides the means for users to get involved in decisions concerning changes to their work processes and allows them to see how the use of technology might improve their job satisfaction. This course of action is much more likely to stimulate an effective requirements analysis. The main argument against the QUICKethics method is that it is impractical and that unskilled users will have difficulty completing the design process adequately. This is by no means a concrete disadvantage, however, as there are those who believe that, with the proper training and support, users will produce an effective and usable requirements specification.

Method Five: JAD

Joint Application Design (JAD) is another method that has been found to be effective in the collection of detailed system requirements.

The technique revolves around there being a workshop attended by representatives from different departments and different levels of the organisation, at which there is a structured discussion concerning the design of the proposed system. There are stages before and after this, involving firstly preparatory work and lastly collation of data, but the workshop is certainly the main component. This process is seen as being highly beneficial to the requirements determination stage; indeed Kettlehut (1993) states that "organisations that have adopted JAD have reported savings of 40% in project design time." It is an approach that encourages communication between users; it recognises that different viewpoints need to be expressed in order to gain an overall consensus on requirements. This calls for the co-operation of all involved to overcome any conflicts, but success will result in the clarification of requirements and is also likely to mean that decisions will be approved quickly and be supported by management. Any uncertainty over technical capabilities or staff difficulties in expressing their information needs in a clear manner, or indeed any queries whatsoever, can be addressed during the workshop.

This facilitates a better understanding of the system throughout the organisation, may win over unsupportive workers, and of course helps to produce a more effective requirements specification. There are some problems with this approach, most of which can be seen as disadvantages of workshops themselves. Certain people can dominate discussions whilst others contribute very seldom, which can lead to biased or unrepresentative conclusions being drawn. Politics are an especially pertinent factor here, as vendettas and alliances can once again lead to prejudiced decisions being made or unresolved conflict.

Method Six: TQM

Total Quality Management (TQM) is a very common method that has been utilised within business organisations for many years. It can be seen as "commitment to the continuous improvement of work processes with the goal of satisfying internal and external customers" (Ward, 1994).

Within the specific context of IS requirements specification, this approach involves the elimination of the causes of root problems, and places emphasis on the customer understanding and agreeing upon all requirements. General TQM concepts that are likely to be employed include team building, empowerment, and continuous process improvement. Because managers and staff are likely to be experienced with this method, which provides a common basis for problem definition and solving, traditional communication difficulties between system developers and users should be eased. Users commonly cannot express their needs in a form that is helpful to developers, and they in turn have trouble explaining how IS can assist in reaching business goals - the platform of TQM will help both articulate themselves in a more universal manner. The importance placed upon gaining customer acceptance of requirements, inherent within this approach, will address the problems of disparate interpretations of business needs and of requirements that cross organisational boundaries or conflict with each other.

Everybody affected by the proposed system must be aware of and consent to the specified requirements, thus ensuring a relevant and functional specification endorsed by the customer themselves. TQM also stresses that the developer must have an in-depth knowledge of their customer(s), to enable a greater understanding of the business situation and to create a system which is capable of fulfilling the needs of all functions of an organisation. Management should become more familiar with IS capabilities and terminology through this process as well, as there must be interaction between both sides to facilitate a high level of comprehension. Possible drawbacks to the application of this method to an IS project are that TQM is renowned for being a business technique and may be viewed with skepticism by some when it is employed in an IS context. Also, this method is not a structured methodology containing strict guidelines; it is more a philosophy and therefore may be misinterpreted or applied incorrectly by those with little experience.

During the course of this essay, six varying methods that can be used for the collection and clarification of user requirements have been examined. Traditional approaches - interviews, questionnaires and prototypes - have been looked at, along with methodologies designed specifically for the purpose of obtaining requirements in an IS environment - JAD and QUICKethics. A common business technique that has only recently started being employed within this context - TQM - has also been evaluated. A conclusion that can be drawn from the study of these very different approaches is that there is no 'best' method for requirements elicitation in any IS project. There may be certain methods that are more appropriate than others in particular situations, but it would be incorrect to suggest just one technique would produce a fully complete and relevant specification.

There are benefits and drawbacks associated with all six approaches, with emphasis placed upon different factors by each one of them. QUICKethics, for example, is very interested in the social factors affecting organisations. This diversity leads to the conclusion that a combination of methods is the paramount means for obtaining a correct user requirements specification. This is not to say that choosing a random list of currently available methods will lead to success; choices must be carefully made based upon the characteristics of the organisation involved and the type of system desired, amongst other things. An in-depth knowledge of these methods and experience of IS projects will be the biggest advantage a developer can have during the requirements determination stage.


School of Electrical and Computer Engineering

College of Engineering

Critical infrastructure systems are vulnerable to a new kind of cyberattack

Industrial Control Screen (iStock)

Engineers and computer scientists show how bad actors can exploit browser-based control systems in industrial facilities with easy-to-deploy, difficult-to-detect malware.

In recent years, browser and web-based technology has become a powerful tool for operators of infrastructure and industrial systems. But it also has opened a new pathway for bad actors to seize control of these systems, potentially endangering critical power, water, and other infrastructure.

Georgia Tech researchers have found a way to hijack the computers that control these physical systems. Called programmable logic controllers (PLCs), they increasingly have embedded webservers and are accessed on site via web browsers. Attackers can exploit this approach and gain full access to the system.

That means they could spin motors out of control, shut off power relays or water pumps, disrupt internet or telephone communication, or steal critical information. They could even launch weapons — or stop the launch of weapons.

“We think there is an entirely new class of PLC malware that's just waiting to happen. We're calling it web-based PLC malware. And it gives you full device and physical process control,” said Ryan Pickren, a Ph.D. student in the School of Electrical and Computer Engineering (ECE) and the lead author of a new study describing the malware and its implications.

The research team will present their findings Feb. 29 at the 2024 Network and Distributed Systems Security Symposium .

Get the full story on the College of Engineering website.



Guest Essay

Many Americans Believe the Economy Is Rigged

A color photo of a cracked sidewalk with a large puddle at its center. There is a reflection of the top of the U.S. Capitol building in the puddle.

By Katherine J. Cramer and Jonathan D. Cohen

Ms. Cramer is a co-chair of the Commission on Reimagining Our Economy at the American Academy of Arts & Sciences. Mr. Cohen is a senior program officer at the American Academy of Arts & Sciences.

When asked what drives the economy, many Americans have a simple, single answer that comes to mind immediately: “greed.” They believe the rich and powerful have designed the economy to benefit themselves and have left others with too little or with nothing at all.

We know Americans feel this way because we asked them. Over the past two years, as part of a project with the American Academy of Arts and Sciences, we and a team of people conducted over 30 small-group conversations with Americans from almost every corner of the country. While national indicators may suggest that the economy is strong, the Americans we listened to are mostly not thriving. They do not see the economy as nourishing or supporting them. Instead, they tend to see it as an obstacle, a set of external forces out of their control that nonetheless seems to hold sway over their lives.

Take the perceived prevalence of greed. This is hardly a new feeling, but it has been exacerbated recently by inflation and higher housing costs. Americans experience these phenomena not as abstract concepts or political talking points but rather as grocery stores and landlords demanding more money.

Income inequality has been in decline over the last few years. But try explaining that to someone struggling to pay the rent. “I just feel like the underdog can’t get ahead, and it’s all about greed and profit,” one Kentucky participant noted. It is not necessarily the actual distribution of wealth that troubles people. It is the feeling that the economy is rigged against them.

There is a clear disconnect between the macroeconomic story and the micro-American experience. While a tight job market has produced historic gains for lower-income workers, many of the low-income workers we spoke with are unable to accumulate enough money to build a safety net for themselves. “I like the feeling of not living on the edge of disaster,” a special-education teacher in rural Tennessee said. “[I am] at my fullest potential economically” right now, but “I’m still one doctor’s visit away from not being there, and pretty much most people I know are.”

If there is a singular explanation for dissatisfaction with the economy, it is a lack of financial certainty. While direct government assistance early in the pandemic certainly helped many in 2020 and 2021, millions of households still struggled to get food, and many millions fell behind on rent. These feelings of instability do not dissipate quickly, especially when rising prices make trips to the store adventures in budgetary arithmetic and the threat of an accident or a surprise medical bill looms around every corner. “Uncertainty really affects your well-being. It affects what you do. It affects how you behave,” said a unionized airport worker in Virginia who tutors in the evenings.

An absence of economic resilience prevents people from spending time with family, from getting involved in their community and from finding ways to build a safety net. “The way the economy is going right now, you don’t know where it’s going to be tomorrow, next week,” a human resources employee in Indiana said. Well-being “is about being financially stable. It’s not about being rich, but it’s about being able to take care of your everyday needs without stressing.”

Stress is a rampant part of American life, much of it caused by financial insecurity. Some people aspire for the mansion on the hill. Many others are looking just to get their feet on solid ground.

One does not need to look hard beyond traditional metrics to see the prevalence of insecurity. In June an industry report found that auto loan delinquencies were higher than they were at the peak of the Great Recession. Credit card use has swelled, and delinquencies are at among their highest rates in a decade. After hitting a historic low in 2021 thanks to the expansion of the child tax credit, child poverty more than doubled in 2022 after the tax credit’s expansion expired. Also in 2022, rates of food insecurity reached their highest levels since 2015.

Such trends do not affect all Americans equally. Most disproportionately affect Black and Hispanic households, which perhaps helps explain Republicans’ gains in these communities, according to recent polls. Geography plays a major role, too. In some parts of the country — particularly rural areas — many people feel they have been left out of the progress and promise of the high-tech economy. Even if their finances remain in good health, they seem to fear for the future of their community, and they blame the economy.

The political system is supposed to make all this better. Instead, even as both major parties have vied to cast themselves as the standard-bearer of the working class, many Americans see politicians as unable or unwilling to do anything to help them. “In my democracy, I’d like to see us get rid of Republicans, Democrats,” one Kentucky participant told us. “Just stand up there, tell me what you can do. If you can do it, I don’t have to care what you are.” Many Americans seem to see Washington as awash in partisan squabbles over things that have little effect on their lives. Many believe that politicians are looking out for their political party, not the American people.

It should not be surprising, then, that so many are so pessimistic about a seemingly strong economy. A rising gross domestic product lifts lots of boats, but many Americans feel as if they are drowning.

What would make the people we talked to less stressed? The ability to accumulate savings. Low-wage workers have seen their incomes rise only for many of these gains to be wiped out by inflation. And the costs of housing, health care and child care can quickly absorb even a very robust rainy-day fund. Without a safety net that can propel people into security, the threat of these costs will continue to make many Americans feel unstable, uncertain and decidedly unhappy about the economy.

A helpful starting point would be to address benefit cliffs — income eligibility cutoffs built into certain benefits programs. As households earn more money, they can make themselves suddenly ineligible for benefits that would let them build up enough wealth to no longer need any government support. In Kansas, for example, a family of four remains eligible for Medicaid as long as it earns under $39,900. A single dollar in additional income results in the loss of health care coverage — and an alternative will certainly not cost only a buck.

Reforming these types of cliffs for health care, child care, housing and food assistance programs would allow the millions of households receiving state aid to achieve a sense of stability. Take this mother in Chicago who told us that her income is just above the eligibility cutoffs. The cliff “knocks me out of a lot of the opportunity to qualify for a lot of the programs that could assist in benefiting myself and my child,” she said.

The Americans we listened to want resiliency so they can feel that they are in control of their lives and that they have a say in the direction of their community and their nation. They want a system focused less on how the economy is doing and more on how Americans are doing. As one Houston man observed: “We’re so far down on the economic chain that we don’t have nothing. It seems like our voices don’t matter.” But they do matter. The rest of us just need to listen.

Katherine J. Cramer is a political science professor at the University of Wisconsin, Madison. Jonathan D. Cohen is the author of “For a Dollar and a Dream: State Lotteries in Modern America.”



Hacking at UnitedHealth unit cripples a swath of the U.S. health system: What to know

By Darius Tahir

Updated on: February 29, 2024 / 8:48 PM EST / KFF Health News

Early in the morning of Feb. 21, Change Healthcare, a company unknown to most Americans that plays a huge role in the U.S. health system, issued a brief  statement  saying some of its applications were "currently unavailable."

By the afternoon, the company described the situation as a "cybersecurity" problem.

Since then, it has rapidly blossomed into a crisis.

The company, recently purchased by insurance giant UnitedHealth Group, reportedly suffered a cyberattack. The impact is wide and expected to grow. Change Healthcare's business is maintaining health care's pipelines — payments, requests for insurers to authorize care, and much more. Those pipes handle a big load: Change says on its  website , "Our cloud-based network supports 14 billion clinical, financial, and operational transactions annually."

Initial media reports have focused on the impact on pharmacies, but techies say that's understating the issue. The American Hospital Association  says  many of its members aren't getting paid and that doctors can't check whether patients have coverage for care.

But even that's just a slice of the emergency: CommonWell , an institution that helps health providers share medical records, information critical to care, also relies on Change technology. The system contained  records on 208 million individuals as of July 2023. Courtney Baker, CommonWell marketing manager, said the network "has been disabled out of an abundance of caution."

"It's small ripple pools that will get bigger and bigger over time, if it doesn't get solved," Saad Chaudhry, chief digital and information officer at Luminis Health, a hospital system in Maryland, told KFF Health News.

Here's what to know about the hack.

Who did it?

Media reports are fingering ALPHV, a notorious ransomware group also known as Blackcat, which has become the target of numerous law enforcement agencies worldwide. While UnitedHealth Group has said it is a "suspected nation-state associated" attack, some outside analysts  dispute  the linkage. The gang has previously been blamed for hacking casino companies MGM and Caesars, among many other targets.

The Department of Justice  alleged  in December, before the Change hack, that the group's victims had already paid it hundreds of millions of dollars in ransoms.

Is this a new problem?

Absolutely not. A study published in JAMA Health Forum in December 2022 found that the annual number of ransomware attacks against hospitals and other providers  doubled  from 2016 to 2021.

"It's more of the same, man," said Aaron Miri, the chief digital and information officer at Baptist Health in Jacksonville, Florida.

Because the assaults disable the target's computer systems, providers have to shift to paper, slowing them down and making them vulnerable to missing information.

Further, a study published in May 2023 in JAMA Network Open examining the effects of an attack on a health system found that waiting times, median length of stay, and incidents of patients leaving against medical advice all increased — at neighboring emergency departments. The results, the authors wrote , mean cyberattacks "should be considered a regional disaster."

Attacks have devastated rural hospitals, Miri said. And wherever health care providers are hit, patient safety issues follow.

What does it mean for patients?

Year after year, more Americans' health data is breached. That exposes people to identity theft and medical error.

Care can also suffer. For example, a 2017 attack, dubbed "NotPetya," forced a rural West Virginia  hospital  to reboot its operations and hit pharma company Merck so  hard  it wasn't able to fulfill production targets for an HPV vaccine.

Because of the Change Healthcare attack, some patients may be routed to new pharmacies less affected by billing problems. Patients' bills may also be delayed, industry executives said. At some point, many patients are likely to receive notices their data was breached. Depending on the exact data that has been pilfered, those patients may be at risk for identity theft, Chaudhry said. Companies often offer free credit monitoring services in those situations.

"Patients are dying because of this," Miri said. Indeed, an October preprint from researchers at the University of Minnesota  found  a nearly 21% increase in mortality for patients in a ransomware-stricken hospital.

How did it happen?

The Health Information Sharing and Analysis Center, an industry coordinating group that disseminates intel on attacks, has  told  its members that flaws in an application called ConnectWise ScreenConnect are to blame. Exact details couldn't be confirmed.

It's a tool tech support teams use to remotely troubleshoot computer problems, and the attack is "apparently fairly trivial to execute," H-ISAC warned members. The group said it expects additional victims and advised its members to update their technology. When the attack first hit, the AHA  recommended  its members disconnect from systems both at Change and its corporate parent, UnitedHealth's Optum unit. That would affect services ranging from claims approvals to reference tools.

Millions of Americans see physicians and other practitioners employed by UnitedHealth and are covered by the company's insurance plans.

UnitedHealth has said only Change's systems are affected and that it's safe for hospitals to use other digital services provided by UnitedHealth and Optum, which include claims filing and processing systems.

But not many chief information officers "are jumping to reconnect," Chaudhry said. "It's an uneasy feeling."

Miri says Baptist is using the conglomerate's technology and that he trusts UnitedHealth's word that it's safe.

Where's the federal government?

Neither executive was sanguine about the future of cybersecurity in health care. "It's going to get worse," Chaudhry said.

"It's a shame the feds aren't helping more," Miri said. "You'd think if our nuclear infrastructure were under attack the feds would respond with more gusto."

While the departments of Justice and State have targeted the ALPHV group, the government has stayed behind the scenes more in the aftermath of this attack. Chaudhry said the FBI and the Department of Health and Human Services have been attending calls organized by the AHA to brief members about the situation.

Miri said rural hospitals in particular could use more funding for security and that agencies like the Food and Drug Administration should have mandatory standards for cybersecurity.

There's some recognition among officials that improvements need to be made.

"This latest attack is just more evidence that the status quo isn't working and we have to take steps to shore up cybersecurity in the health industry," said Sen. Mark Warner (D-Va.), the chair of the Senate Select Committee on Intelligence and a longtime advocate for stronger cybersecurity, in a statement to KFF Health News.

KFF Health News (formerly known as Kaiser Health News, or KHN) is a national newsroom that produces in-depth journalism about health issues. Together with Policy Analysis and Polling, KHN is one of the three major operating programs at KFF (Kaiser Family Foundation). KFF is an endowed nonprofit organization providing information on health issues to the nation.

COMMENTS

  1. Computer-Based Information Systems

    Computer-based information systems can support a differentiation strategy. For example, they can make communication between clients and the company faster and easier (Davies). Robust online customer support and an online shop can distinguish the company from its competitors, thus aiding in the implementation of ...

  2. An automated essay scoring systems: a systematic literature review

    Automated essay scoring (AES) is a computer-based assessment system that automatically scores or grades student responses by considering appropriate features. AES research started in 1966 with the Project Essay Grader (PEG) by Ajay et al. PEG evaluates writing characteristics such as grammar, diction, construction, etc., to grade ... (a minimal feature-based scoring sketch appears after this list).

  3. Computer Based Information Systems Information Technology Essay

    Computer based information systems are used by managers within an organisation in order to increase productivity at that company and also maximise efficiency. CBIS can be advantageous in many ways; for example, a computer is able to collect and analyse more data than an average human, in which you as a ...

  4. Computer Based Information System Essay Example

    Computer programs are machine-readable instructions that direct the circuitry within the hardware parts of the Computer Based Information System (CBIS) to function in ways that produce useful information from data. Programs are generally stored on some input/output medium, often a disk or tape. Data: Data are facts that are used by programs to ...

  5. Computer-Based Systems Effective Implementation Essay (Critical Writing)

    ETHICS is an acronym that stands for Effective Technical and Human Implementation of Computer-based Systems. It is the overall methodology of the design process guiding the development of information systems in organizations. Under this methodology, there is a provision that gives attention to the needs of the people involved in the ...

  6. Common Computer Based Information Systems Information Technology Essay

    A DSS consists of a user, system software, data (internal and external), and decision models. Three types of decision models are strategic, tactical, and operational. Executive support systems (ESS) assist top-level executives. An executive support system is similar to an MIS or DSS but easier to use.

  7. Computer-based technology and student engagement: a ...

    Computer-based technology has infiltrated many aspects of life and industry, yet there is little understanding of how it can be used to promote student engagement, a concept receiving strong attention in higher education due to its association with a number of positive academic outcomes. The purpose of this article is to present a critical review of the literature from the past 5 years related ...

  8. Computer-Based Writing Instruction

    Automated essay scoring (AES) systems are the most prominent among computer-based writing tools. AES systems are technologies that allow computers to automatically evaluate the content, structure, and quality of written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003, 2013). In line with this goal, AES has been advertised as

  9. A survey of computer-based essay marking (CBEM) systems

    Computer-based essay marking systems, developed from the late 1960s, attempted to prove that a computer can mark essays as well as any human being. These were Project Essay Grade 1, 2, and 3. As time went by, more sophisticated systems were developed, such as A Simple Text Automatic Marking System, Intelligent Essay Assessor, e-rater and the ...

  10. 412 Computer Topics for Essays & Research Topics about ...

    A computer virus is a software program designed to interfere with normal computer functioning by infecting the computer operating system. The learning efficiency of the student is significantly increased by the use of computers, since the student is able to make use of the learning model most suited to him or her.

  11. Computer Based Information System Essay

    A computer-based information system (CBIS) is an information system that uses computer technology to perform different functions. For example, Google uses the internet to accomplish its tasks and reach its customers. Computer Based Information System is a field of studying computers and algorithmic processes including their applications. Such a ...

  12. Computer System Essay Examples

    Essays on Computer System. 33 samples on this topic. To many students, composing Computer System papers comes easily; others require help. The WowEssays.com collection includes expertly crafted sample essays on Computer System and relevant issues. Most definitely, among all those Computer System essay examples, you will find ...

  13. Texas rolls out computer scoring for STAAR essay questions

    The Texas Education Agency rolled out the new "automated scoring engine," a computer-based grading system, ... The grading program analyzes essays based on about 40 different criteria, he said.

  14. Computer Based Information System

    Computer Based Information System (CBIS) is an information system in which the computer plays a major role. Such a system consists of the following elements: * Hardware: The term hardware refers to machinery. This category includes the computer itself, which is often referred to as the central processing unit (CPU), and all of its support ...

  15. Introduction to Computer-Based Information System

    Information is one of five main types of resources to which the manager has access. All the resources, including information, can be managed. The importance of information management increases as business becomes more complex and computer capabilities expand. Computer output is used by managers, non-managers, and persons and organizations ...

  16. Computer-Based Career Information Systems Essay

    Computer-Based Career Information Systems. The adage "information is power" can certainly be applied to the marriage of career information with computers. In an era that is characterized by a rapidly changing employment and occupational outlook, the ability to access computerized career ...

  17. Essay: Computer based information system

    Computer Based Information Systems (CBIS) are information handling frameworks that turn data into useful information and serve as tools supporting decision-making, coordination, and control, as well as visualization and analysis.

  18. Prospect Personalized Recommendation on Large Language Model-based

    The new kind of Agent-oriented information system, exemplified by GPTs, urges us to inspect the information system infrastructure to support Agent-level information processing and to adapt to the characteristics of Large Language Model (LLM)-based Agents, such as interactivity. In this work, we envisage the prospect of the recommender system on LLM-based Agent platforms and introduce a novel ...

  19. Computer-based Learning and Virtual Classrooms

    E-learning covers a wide range of processes, including computer-based learning and virtual classrooms, delivered through the Internet, audio and videotape, satellite broadcast, CD-ROM, and intranets. We can generally describe it as an electronic means of communication, education, and training.

  20. Computer-Based Inventory System Essay Example

    A computer-based system is a complex system wherein information technology plays a major role. It makes the work easier, faster and more accurate. Due ...

  21. Computers scoring Texas students' STAAR essay answers, state ...

    Humans validate about 25% of the answers scored by the computer. Essays are routed to human scorers based on certain conditions or if the scoring engine expresses low confidence about its ...

  22. Computer-based Systems Free Essay Example from StudyTiger

    Effective Technical and Human Implementation of Computer-based Systems (ETHICS) is a systems design methodology characterised by a high level of user ...

  23. Critical Infrastructure Systems Are Vulnerable to a New Kind of

    In recent years, browser and web-based technology has become a powerful tool for operators of infrastructure and industrial systems. But it also has opened a new pathway for bad actors to seize control of these systems, potentially endangering critical power, water, and other infrastructure.

  24. Opinion

    Ms. Cramer is a co-chair of the Commission on Reimagining Our Economy at the American Academy of Arts & Sciences. Mr. Cohen is a senior program officer at the American Academy of Arts & Sciences ...

  25. Hacking at UnitedHealth unit cripples a swath of the U.S. health system

    Early in the morning of Feb. 21, Change Healthcare, a company unknown to most Americans that plays a huge role in the U.S. health system, issued a brief statement saying some of its applications ...
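
Several of the automated essay scoring entries above describe systems that grade writing by weighing measurable features of the text, such as grammar, diction, and construction. The sketch below illustrates that feature-based idea only in the roughest terms: the feature names, weights, and 0-10 scale are hypothetical and are not taken from PEG, e-rater, or any system cited above, all of which use far richer linguistic models trained on human-scored essays.

import re

def extract_features(essay: str) -> dict:
    """Compute a few simple surface features of the essay text."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "vocab_diversity": len({w.lower() for w in words}) / max(len(words), 1),
    }

# Hypothetical weights; a real scoring engine would learn these from a corpus
# of essays already graded by humans.
WEIGHTS = {
    "word_count": 0.005,
    "avg_word_length": 0.4,
    "avg_sentence_length": 0.05,
    "vocab_diversity": 2.0,
}

def score_essay(essay: str) -> float:
    """Return a rough 0-10 score as a weighted sum of the surface features."""
    features = extract_features(essay)
    raw = sum(WEIGHTS[name] * value for name, value in features.items())
    return round(min(raw, 10.0), 2)

if __name__ == "__main__":
    sample = ("Automated scoring systems estimate essay quality from "
              "measurable features of the text rather than reading for meaning.")
    print(score_essay(sample))

Running the script prints a single number for the sample sentence. A deployed system would add a confidence estimate and, as the Texas STAAR entries above note, route low-confidence essays to human scorers rather than rely on the automated grade alone.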