
10.5 Evaluating Websites

Obviously, the same tests of evidence discussed earlier must be applied to information you find on a website. You need to determine the reputation or authority of the information producer, the accuracy of what you find online, the recency of the information, the completeness of the data, and so forth.

There are many more clues and tips to help ensure you thoroughly assess the information you find on a specific site. One of the main considerations when determining how, or whether, information on a website can be used is to understand why the online information producer is making the information available. The website’s main goal might be:

Advocacy: attempting to persuade or influence opinion toward a particular position or point of view

Marketing: using the site to promote a particular product or service by providing information to consumers or to other businesses

News: publishing news and information online as an alternative to media companies’ traditional delivery methods.

Personal expression: creating personal sites, blogs, or social networking pages allows individuals to express their interests and unique perspectives.

The bottom line here is the old computer adage: garbage in, garbage out. So, if you want to ensure that you are not producing garbage, make sure you don’t use garbage as your raw material.

If you ask, and answer, the following three questions when you go to any website, you should be able to avoid misusing or misunderstanding the information you find:

Who is sharing this information?

Why are they sharing it?

How do they know what they claim to know?

Let’s look at Wikipedia as an example of evaluation. Categorizing Wikipedia is difficult because its contributors may be scholarly experts in their fields, private citizens with some professional or personal knowledge about a topic, individuals with misguided information, or even pranksters whose goal is to deface this popular site. Wikipedia also contains obscure information from popular culture and other realms you may want to learn about, and its entries are sometimes highly informative and accurate. But because the sources of the information cannot be easily verified, and the motivation for providing it cannot be determined, Wikipedia entries should be used with skepticism and require “second-sourcing” from more authoritative information contributors.

Reading the “About Us” information on any website will give you valuable information about the site creators’ agenda, backers, and mission. But be sure to critically evaluate the self-description and remember the importance of understanding “ambiguous” terms as discussed earlier in this lesson.

Remember also the contributors to the information search that the information strategy model identifies. Private-sector institutional sources are likely to host sites that advance their business, point of view or support for a particular policy or program. Journalistic organizations host websites to disseminate their news content and gather information from their readers. Scholarly websites produced by universities, research centers or individual scholars are generally designed to highlight the academic work of that institution or individual researcher. Informal sources may host family websites, hobby websites, sites to spread a personal philosophy and for many other purposes. It is your responsibility to identify the type of contributor who created that site when you are deciding whether or not to use any information you find there.

Here is a link to a little cartoon with a good explanation of how to evaluate a website: https://www.youtube.com/watch?v=aem3JahbXfk

Information Strategies for Communicators Copyright © 2015 by Kathleen A. Hansen and Nora Paul is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.

Lesson 2: Developing a Website Evaluation Tool

The purpose of this assignment is to challenge, test, and ultimately come to general agreement on evaluation criteria for websites. This exercise is student-driven, but don't be surprised if your teacher offers some guidance along the way.

Learner Outcomes

At the completion of this exercise:

  • you will demonstrate your ability to critically examine the quality of a website by developing a website evaluation tool.
  • you will be able to communicate to others your ideas about what makes a high-quality website and explain how you would evaluate a site.

Develop a website evaluation tool. Use the knowledge and perspective gained in Lesson 1 to develop a rubric for measuring the quality of websites. Follow these steps:

  • Pair up and take five minutes to share and discuss the merits and problems of the "good" and "bad" websites chosen in the activity of the previous lesson. One site should be a clear example of good design and one an example of poor design. Discuss specific traits that could be used to evaluate sites.
  • Join another pair and, in a group of four, review the lists of traits generated in step one. Synthesize the lists into no fewer than four and no more than seven general traits that could be used to evaluate almost any website. As much as possible, make each trait discrete and clear. Combine similar traits. Eliminate redundant, vague, or invalid traits.
  • Once you reach consensus on the traits, decide on a numeric scale to use for judging how well a website rates for each of the traits.
  • Brainstorm a list of descriptors that define the major point values on the numeric scale. What does a high score look like? What does a low score look like?
  • Now that you have all the components for the evaluation rubric, sketch the complete evaluation rubric with a marker on butcher paper. Write boldly and large enough for others to read from a distance. Your poster (evaluation tool) will be displayed on a wall.
  • Your instructor will now assign you a specific website to evaluate. After receiving the assignment, each person in your group will individually use this evaluation rubric to evaluate the assigned site. It is important that you evaluate the site without collaboration or discussion.
  • After all members have had enough time to evaluate, compare how your group members rated the assigned site on each major trait.
  • If someone in the group rated a trait radically differently from the rest of the group, ask them to explain why. Can the group persuade the radical, or the radical persuade the group? Is a compromise necessary? Try to reach a consensus score for each trait. Does the tool need to be changed somehow to make it more useful?
  • Decide on a reporter or spokesperson. Display your poster. Have the spokesperson share with the rest of the class how well your group's evaluation tool worked when applied to the assigned website.
  • As a class, synthesize the various evaluation tools into a single rubric. Find which traits are most commonly used. Sometimes groups refer to the same trait using different terminology, so the class must agree on which term to use (a group's shared understanding of a term is called nomenclature).

Great! Proceed to Module 2.


5.2.3: Evaluating for Credibility


  • Cheryl Lowry
  • The Ohio State University via Ohio State University Libraries


Next, you’ll be evaluating each of the sources that you deemed relevant.

What are the clues for inferring a source’s credibility? Let’s start with evaluating websites, since we all do so much of our research online. But we’ll also include where to find clues relevant to sources in other formats when they differ from what’s good to use with websites. Looking at specific places in the sources will mean you don’t have to read all of every resource to determine its worth to you.

And remember, the more often you take these steps, the faster the process goes, because examining your sources becomes second nature.

What Used to Help

It used to be easier to draw conclusions about an information source’s credibility depending on whether it was a print source or a web source. We knew we had to be more careful about information on the web, simply because most web publishing lacked the accuracy-promoting filters built into the print publishing process. After all, it takes very little money, skill, or responsible intent to put content on the web, compared with what it takes to convince print publishers that your content is accurate and that they will make money by printing it.

However, many publishers who once provided only print materials have now turned to the web and have brought along their rigorous standards for accuracy. Among them are the publishers of government, university, and scholarly (peer-reviewed) journal websites. Sites for U.S. mainline news organizations also strive for accuracy rather than persuasion–because they know their readers have traditionally expected it. All in all, more websites now take appropriate care for accuracy than what used to be true on the web.

Nonetheless, it still remains very easy and inexpensive to publish on the web without any of the filters associated with print. So we all still need the critical thinking skills you’ll learn here to determine whether websites’ information is credible enough to suit your purpose.

5 Factors to Consider

Evaluating a website for credibility means considering the five factors below in relation to your purpose for the information. These factors are what you should gather clues about and use to decide whether a site is right for your purpose.

  • The source’s neighborhood on the web.
  • Author and/or publisher’s background.
  • The degree of bias.
  • Recognition from others.
  • Thoroughness of the content.

How many factors you consider at any one time depends on your purpose for seeking the information. In other words, you’ll consider all five factors when you’re looking for information for a research project or another high-stakes situation where mistakes have serious consequences. But you might consider only the first three factors at other times.

Principles of Assessing Student Writing

Grading and giving feedback are deeply linked to student educational outcomes. In an online environment, it is especially important for you to offer thoughtful, substantive feedback to your students on their writing–to help them understand where they are communicating their ideas successfully and where they can continue to develop. In a remote course, responding to students in a clear, engaged, and specific manner provides an opportunity to connect with your students and to support their learning both within your online course and beyond.

Assessing writing can and should complement your pedagogy and curriculum. We suggest that you create a plan for assessing student writing that promotes transparency, accessibility, and inclusive pedagogy. This requires some advance preparation and careful thought. Writing scholar John Bean writes that “Because we teachers have little opportunity to discuss grading practice with colleagues, we often develop criteria that seem universal to us but may appear idiosyncratic or even eccentric to others.” Criteria for student success should be fair, consistent, public, and clear.

Your feedback is where students can see and/or hear you engaging with their ideas and acknowledging their labor; it’s also where students feel most vulnerable and where you might feel pressed for time or frustrated at students’ missteps. It can help to remember the purpose of commenting on student assignments: to coach revision and growth (in the present piece and in future work for your course or program). Using audio feedback or screencast feedback can be a great way to articulate these priorities.

You should also be mindful that writing assignments are not a neutral component of students’ experiences in your class. Along with course syllabi and policies, assignments comprise a significant component of larger “ecologies” of assessment – that is, systems of judging student learning and performance (Inoue 2015). The ecology of your course shapes not only what but how students learn, and it does so in ways that can be either inclusionary or exclusionary. Inclusive assessment asks instructors to think about assessment as a way to support student development in writing, rather than as a means of gatekeeping a discipline or profession from “poor performers.”

Other characteristics of inclusive assessment include transparency (the TILT framework foregrounds equity in education); flexibility; alignment among an assignment’s learning goals, its central task, and its evaluation criteria; and linguistic justice (which recognizes that student performance of “standard” English is not a measure of intelligence or effort).

The WAC program’s principles of inclusive assessment follow.

To make your expectations clear, be sure to identify how you will be assessing student writing, and how that assessment fits with your course’s learning objectives.

Beyond a few basics, what makes for effective writing will vary depending on the learning goals for the assignment, the genre of the paper, the subject matter, the specific tasks, the discipline, and the level of the course. It is crucial to develop criteria that match the specific learning goals and the genre of your assignment. What’s valued in one discipline differs in others. 

In addition to sharing your evaluation criteria, spend time in your online class discussing the kinds of feedback you’re giving, and give students the opportunity to ask questions about your responses.

For an example of a writing assignment that ties evaluation to learning objectives, check out Professor Jennifer Gipson’s French 248 assignment.

Whether on a rough draft or a final draft, offer specific, actionable feedback to students with suggestions for improvement that emphasize “global concerns” such as ideas, argument, and organization over “local concerns” such as sentence-level error. In a draft where students will have the opportunity to revise their work, this feedback will likely be more substantial than in a final draft.

Research shows that students are often confused by what we want them to concentrate on in their writing and in their revisions. Our comments on their writing too often lead students to make only superficial revisions to words and sentences, overlooking larger structural revisions that would most improve a paper. So as we design writing assignments, develop evaluation criteria, and comment on and evaluate our students’ final papers, we need to find ways to communicate clearly with students about different levels of revision and about priorities for their writing and revising.

We can help signal priorities if we clearly differentiate between global and local writing concerns. In our assignments, comments, conferences, and evaluation criteria, we can help students by focusing first on conceptual- and structural-level planning and revisions before grammatical- and lexical-level revisions. By no means are we advocating that we ignore language problems in our students’ writing. But we want to offer students clear guidance about how to strengthen their ideas, their analyses, and their arguments, so that students have papers worth editing and polishing. Then we can turn our attention—and our students’—to improving sentences, words, and punctuation. When we respond to their ideas, we signal to students that we care about their development as writers.

To see sample online feedback on a student paper in Sociology, check out this resource.

For more support on global and local concerns, check out the WAC program’s resource, Global and Local Concerns in Student Writing.

A rubric is a criterion-based evaluation guide that communicates an instructor’s expectations for student performance on an assignment, specifically through the identification and description of evaluation criteria. Our resource on Principles of Rubric Design is an excellent one to draw from.

For more information about the limits of broad, general evaluation rubrics, see Anson et al., “Big Rubrics and Weird Genres: The Futility of Using Generic Assessment Tools Across Diverse Instructional Contexts,” The Journal of Writing Assessment 5.1 (2012).

Acknowledge and support (rather than penalize) the range of languages, dialects, and rhetorics used by students for whom white mainstream English (often called “standard” English) is not their accustomed language.

For ideas to support your students’ diverse languages or dialects, here are a few resources:

  • Conference on College Composition and Communication Statement on Anti-Black Racism and Black Linguistic Justice
  • Conference on College Composition and Communication’s Students’ Right to Their Own Language
  • The UW Writing Center, “Translingualism: An Alternative to Restrictive Monolingual Ideologies in Writing Instruction”

When you take time to provide feedback, it is worth taking the additional step of creating an activity or assignment that asks students to review and reflect on your feedback with the goal of identifying priorities for their attention and improvement on future assignments. For example, students can submit short learning journal entries or individual assignments reviewing their strengths, areas for improvement, and plans for their next assignment or draft.

Alternative Ways to Assess Student Writing

Recent conversations in the field of Writing Studies have identified how traditional writing assessment can lead to an overemphasis on letter grades and an underemphasis on feedback and student development. In our current difficult learning and teaching environment, it can be challenging to imagine and implement new assessment practices; at the same time, these practices can allow you to connect more deeply with students who are learning remotely.

We hope to provide more robust resources on alternative assessment practices in the near future. In the meantime, we offer links below to two types of alternative assessment.

Specifications grading is a version of competency-based assessment that uses pass/fail grading paired with feedback and revision. Individual assignments must meet stated specifications in order to receive credit. There is no partial credit and there are no stepped-down grades (A, AB, B, etc.), but students are provided feedback as well as options for revising or dropping assignments that did not meet specs. See Chapters 5, 6, and 8 of Linda B. Nilson’s Specifications Grading for more information, or access the full book online.

  • Linda Nilson in Inside Higher Ed , “Yes, Virginia, There’s a Better Way to Grade”
  • Humboldt State Center for Teaching and Learning, “Empowering Students Through Specs Grading”
  • Johns Hopkins University, “What is Specifications Grading and Why Should You Consider Using It?”

Contract grading determines students’ course grades based on cumulative labor or effort. Individual assignments receive feedback but no grade, and negotiation between instructor and individual students is encouraged. Contracts have been documented to work best in democratic classrooms where they are built into the classroom culture.

  • Asao Inoue, Labor-Based Grading Contracts: Building Equity and Inclusion in the Compassionate Writing Classroom (free e-book)
  • Asao Inoue, Grading Contract for First-Year Writing course
  • Virginia Schwarz, Grading Contract resources


Center for Teaching

Assessing Student Learning


Forms and Purposes of Student Assessment


Student assessment is, arguably, the centerpiece of the teaching and learning process and therefore the subject of much discussion in the scholarship of teaching and learning. Without some method of obtaining and analyzing evidence of student learning, we can never know whether our teaching is making a difference. That is, teaching requires some process through which we can come to know whether students are developing the desired knowledge and skills, and therefore whether our instruction is effective. Learning assessment is like a magnifying glass we hold up to students’ learning to discern whether the teaching and learning process is functioning well or is in need of change.

To provide an overview of learning assessment, this teaching guide has several goals: 1) to define student learning assessment and why it is important, 2) to discuss several approaches that may help to guide and refine student assessment, 3) to address various methods of student assessment, including the test and the essay, and 4) to offer several resources for further research. In addition, you may find helpful this five-part video series on assessment that was part of the Center for Teaching’s Online Course Design Institute.

What Is Student Assessment and Why Is It Important?

In their handbook for course-based review and assessment, Martha L. A. Stassen et al. define assessment as “the systematic collection and analysis of information to improve student learning” (2001, p. 5). An intentional and thorough assessment of student learning is vital because it provides useful feedback to both instructors and students about the extent to which students are successfully meeting learning objectives. In their book Understanding by Design , Grant Wiggins and Jay McTighe offer a framework for classroom instruction — “Backward Design”— that emphasizes the critical role of assessment. For Wiggins and McTighe, assessment enables instructors to determine the metrics of measurement for student understanding of and proficiency in course goals. Assessment provides the evidence needed to document and validate that meaningful learning has occurred (2005, p. 18). Their approach “encourages teachers and curriculum planners to first ‘think like an assessor’ before designing specific units and lessons, and thus to consider up front how they will determine if students have attained the desired understandings” (Wiggins and McTighe, 2005, p. 18). [1]

Not only does effective assessment provide us with valuable information to support student growth, but it also enables critically reflective teaching. Stephen Brookfield, in Becoming a Critically Reflective Teacher, argues that critical reflection on one’s teaching is an essential part of developing as an educator and enhancing the learning experience of students (1995). Critical reflection on one’s teaching has a multitude of benefits for instructors, including the intentional and meaningful development of one’s teaching philosophy and practices. According to Brookfield, referencing higher education faculty, “A critically reflective teacher is much better placed to communicate to colleagues and students (as well as to herself) the rationale behind her practice. She works from a position of informed commitment” (Brookfield, 1995, p. 17). One important lens through which we may reflect on our teaching is our student evaluations and student learning assessments. This reflection allows educators to determine where their teaching has been effective in meeting learning goals and where it has not, allowing for improvements. Student assessment, then, both develops the rationale for pedagogical choices and enables teachers to measure the effectiveness of their teaching.

The scholarship of teaching and learning discusses two general forms of assessment. The first, summative assessment, is one that is implemented at the end of the course of study, for example via comprehensive final exams or papers. Its primary purpose is to produce an evaluation that “sums up” student learning. Summative assessment is comprehensive in nature and is fundamentally concerned with learning outcomes. While summative assessment is often useful for communicating final evaluations of student achievement, it does so without providing opportunities for students to reflect on their progress, alter their learning, and demonstrate growth or improvement; nor does it allow instructors to modify their teaching strategies before student learning in a course has concluded (Maki, 2002).

The second form, formative assessment, involves the evaluation of student learning at intermediate points before any summative form. Its fundamental purpose is to help students during the learning process by enabling them to reflect on their challenges and growth so they may improve. By analyzing students’ performance through formative assessment and sharing the results with them, instructors help students to “understand their strengths and weaknesses and to reflect on how they need to improve over the course of their remaining studies” (Maki, 2002, p. 11). This is what Pat Hutchings refers to as “assessment behind outcomes”: “the promise of assessment—mandated or otherwise—is improved student learning, and improvement requires attention not only to final results but also to how results occur. Assessment behind outcomes means looking more carefully at the process and conditions that lead to the learning we care about…” (Hutchings, 1992, p. 6, original emphasis). Formative assessment includes all manner of coursework with feedback, discussions between instructors and students, and end-of-unit examinations that provide an opportunity for students to identify important areas for necessary growth and development for themselves (Brown and Knight, 1994).

It is important to recognize that both summative and formative assessment indicate the purpose of assessment, not the method. Different methods of assessment (discussed below) can be either summative or formative depending on when and how the instructor implements them. Sally Brown and Peter Knight in Assessing Learners in Higher Education caution against a conflation of the method (e.g., an essay) with the goal (formative or summative): “Often the mistake is made of assuming that it is the method which is summative or formative, and not the purpose. This, we suggest, is a serious mistake because it turns the assessor’s attention away from the crucial issue of feedback” (1994, p. 17). If an instructor believes that a particular method is formative, but he or she does not take the requisite time or effort to provide extensive feedback to students, the assessment effectively functions as a summative assessment despite the instructor’s intentions (Brown and Knight, 1994). Indeed, feedback and discussion are critical factors that distinguish between formative and summative assessment; formative assessment is only as good as the feedback that accompanies it.

It is not uncommon to conflate assessment with grading, but this would be a mistake. Student assessment is more than just grading. Assessment links student performance to specific learning objectives in order to provide useful information to students and instructors about learning and teaching, respectively. Grading, on the other hand, according to Stassen et al. (2001) merely involves affixing a number or letter to an assignment, giving students only the most minimal indication of their performance relative to a set of criteria or to their peers: “Because grades don’t tell you about student performance on individual (or specific) learning goals or outcomes, they provide little information on the overall success of your course in helping students to attain the specific and distinct learning objectives of interest” (Stassen et al., 2001, p. 6). Grades are only the broadest of indicators of achievement or status, and as such do not provide very meaningful information about students’ learning of knowledge or skills, how they have developed, and what may yet improve. Unfortunately, despite the limited information grades provide students about their learning, grades do provide students with significant indicators of their status – their academic rank, their credits towards graduation, their post-graduation opportunities, their eligibility for grants and aid, etc. – which can distract students from the primary goal of assessment: learning. Indeed, shifting the focus of assessment away from grades and towards more meaningful understandings of intellectual growth can encourage students (as well as instructors and institutions) to attend to the primary goal of education.

Barbara Walvoord (2010) argues that assessment is more likely to be successful if there is a clear plan, whether one is assessing learning in a course or in an entire curriculum (see also Gelmon, Holland, and Spring, 2018). Without some intentional and careful plan, assessment can fall prey to unclear goals, vague criteria, limited communication of criteria or feedback, invalid or unreliable assessments, unfairness in student evaluations, or insufficient or even unmeasured learning. There are several steps in this planning process.

  • Define learning goals. An assessment plan usually begins with a clearly articulated set of learning goals.
  • Define assessment methods. Once goals are clear, an instructor must decide what evidence – assignment(s) – will best reveal whether students are meeting the goals. We discuss several common methods below, but these need not be limited by anything but the learning goals and the teaching context.
  • Develop the assessment. The next step is to formulate clear formats, prompts, and performance criteria that ensure students can prepare effectively and provide valid, reliable evidence of their learning.
  • Integrate assessment with other course elements. Then the remainder of the course design process can be completed. In both integrated (Fink, 2013) and backward course design models (Wiggins & McTighe, 2005), the primary assessment methods, once chosen, become the basis for other, smaller reading and skill-building assignments, as well as daily learning experiences such as lectures, discussions, and other activities that prepare students for their best effort in the assessments.
  • Communicate about the assessment. Once the course has begun, it is both possible and necessary to communicate the assignment and its performance criteria to students. This communication may take many, and preferably multiple, forms to ensure student clarity and preparation: assignment overviews in the syllabus, handouts with prompts and assessment criteria, rubrics with learning goals, model assignments (e.g., papers), in-class discussions, and collaborative decision-making about prompts or criteria, among others.
  • Administer the assessment. Instructors then implement the assessment at the appropriate time, collecting evidence of student learning – e.g., receiving papers or administering tests.
  • Analyze the results. Analysis of the results can take various forms – from reading essays to computer-assisted test scoring – but always involves comparing student work to the performance criteria and the relevant scholarly research from the field(s).
  • Communicate the results. Instructors then compose an assessment of areas of strength and improvement and communicate it to students along with grades (if the assignment is graded), ideally within a reasonable time frame. This is also the time to determine whether the assessment itself was valid and reliable and, if not, how to communicate this to students and adjust feedback and grades fairly. For instance, were the test or essay questions confusing, yielding invalid and unreliable assessments of student knowledge?
  • Reflect and revise. Once the assessment is complete, instructors and students can develop learning plans for the remainder of the course to ensure improvements, and the assignment may be changed for future courses as necessary.
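The planning steps above emphasize keeping each assessment tied to explicit learning goals. That linkage can be sketched as a minimal data structure; the goal and assignment names below are invented, hypothetical examples:

```python
from dataclasses import dataclass, field

@dataclass
class Assessment:
    """One assessment method tied to the goals it gathers evidence for."""
    name: str            # e.g., a hypothetical "Policy paper 1"
    goals: list          # learning goals this assessment addresses
    criteria: list       # performance criteria communicated to students
    formative: bool = True  # formative assessments require feedback

@dataclass
class AssessmentPlan:
    learning_goals: list
    assessments: list = field(default_factory=list)

    def uncovered_goals(self):
        """Goals for which no assessment yet gathers evidence."""
        covered = {g for a in self.assessments for g in a.goals}
        return [g for g in self.learning_goals if g not in covered]

# Hypothetical plan for a course with three learning goals.
plan = AssessmentPlan(
    learning_goals=["historical precursors", "policy analysis", "writing"])
plan.assessments.append(
    Assessment("Policy paper 1",
               goals=["historical precursors", "writing"],
               criteria=["thesis clarity", "use of sources"]))

print(plan.uncovered_goals())  # goals still lacking an assessment
```

A check like `uncovered_goals()` mirrors the planning discipline the steps describe: every stated goal should have at least one assessment producing evidence about it.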

Let’s see how this might work in practice through an example. An instructor in a Political Science course on American Environmental Policy may have a learning goal (among others) of students understanding the historical precursors of various environmental policies and how these both enabled and constrained the resulting legislation and its impacts on environmental conservation and health. The instructor therefore decides that the course will be organized around a series of short papers that will combine to make a thorough policy report, one that will also be the subject of student presentations and discussions in the last third of the course. Each student will write about an American environmental policy of their choice, with a first paper addressing its historical precursors, a second focused on the process of policy formation, and a third analyzing the extent of its impacts on environmental conservation or health. This will help students to meet the content knowledge goals of the course, in addition to its goals of improving students’ research, writing, and oral presentation skills. The instructor then develops the prompts, guidelines, and performance criteria that will be used to assess student skills, in addition to other course elements to best prepare them for this work – e.g., scaffolded units with quizzes, readings, lectures, debates, and other activities. Once the course has begun, the instructor communicates with the students about the learning goals, the assignments, and the criteria used to assess them, giving them the necessary context (goals, assessment plan) in the syllabus, handouts on the policy papers, rubrics with assessment criteria, model papers (if possible), and discussions with them as they need to prepare. 
The instructor then collects the papers at the appropriate due dates, assesses their conceptual and writing quality against the criteria and field’s scholarship, and then provides written feedback and grades in a manner that is reasonably prompt and sufficiently thorough for students to make improvements. Then the instructor can make determinations about whether the assessment method was effective and what changes might be necessary.

Assessment can vary widely from informal checks on understanding, to quizzes, to blogs, to essays, and to elaborate performance tasks such as written or audiovisual projects (Wiggins & McTighe, 2005). Below are a few common methods of assessment identified by Brown and Knight (1994) that are important to consider.

Essays

According to Euan S. Henderson, essays make two important contributions to learning and assessment: the development of skills and the cultivation of a learning style (1980). The American Association of Colleges & Universities (AAC&U) has also found that intensive writing is a “high impact” teaching practice likely to help students in their engagement, learning, and academic attainment (Kuh, 2008).

Things to Keep in Mind about Essays

  • Essays are a common form of writing assignment in courses and can be either a summative or formative form of assessment depending on how the instructor utilizes them.
  • Essays encompass a wide array of narrative forms and lengths, from short descriptive essays to long analytical or creative ones. Shorter essays are often best suited to assess students’ understanding of threshold concepts and discrete analytical or writing skills, while longer essays afford assessments of higher order concepts and more complex learning goals, such as rigorous analysis, synthetic writing, problem solving, or creative tasks.
  • A common challenge of essays is that students can use them simply to regurgitate rather than analyze and synthesize information to make arguments. Students need performance criteria and prompts that urge them to go beyond mere memorization and comprehension and encourage the highest levels of learning on Bloom’s Taxonomy. This may open the possibility for essay assignments that go beyond the common summary or descriptive essay on a given topic and demand, for example, narrative or persuasive essays or more creative projects.
  • Instructors commonly assume that students know how to write essays and can encounter disappointment or frustration when they discover that this is sometimes not the case. For this reason, it is important for instructors to make their expectations clear and be prepared to assist, or to direct students to resources that will enhance their writing skills. Faculty may also encourage students to attend writing workshops at university writing centers, such as Vanderbilt University’s Writing Studio.

Exams and time-constrained, individual assessment

Examinations have traditionally been a gold standard of assessment, particularly in post-secondary education. Many educators prefer them because they can be highly effective, they can be standardized, they are easily integrated into disciplines with certification standards, and they are efficient to implement since they can allow for less labor-intensive feedback and grading. They can involve multiple forms of questions, be of varying lengths, and can be used to assess multiple levels of student learning. Like essays, they can be summative or formative forms of assessment.

Things to Keep in Mind about Exams

  • Exams typically focus on assessing students’ knowledge of facts, figures, and other discrete information crucial to a course. While they can involve questioning that demands that students engage in higher order demonstrations of comprehension, problem solving, analysis, synthesis, critique, and even creativity, such exams often require more time to prepare and validate.
  • Exam questions can be multiple choice, true/false, or other discrete answer formats, or they can be essay or problem-solving formats. For more on how to write good multiple choice questions, see this guide.
  • Exams can make significant demands on students’ factual knowledge and therefore can have the side effect of encouraging cramming and surface learning. Further, when exams are offered infrequently, or when they carry high stakes by virtue of their heavy weighting in course grade schemes or in student goals, they may invite violations of academic integrity.
  • In the process of designing an exam, instructors should consider the following questions. What are the learning objectives that the exam seeks to evaluate? Have students been adequately prepared to meet exam expectations? What are the skills and abilities that students need to do well on the exam? How will this exam be utilized to enhance the student learning process?

Self-Assessment

The goal of implementing self-assessment in a course is to enable students to develop their own judgment and the capacities for critical meta-cognition – to learn how to learn. In self-assessment students are expected to assess both the processes and products of their learning. While the assessment of the product is often the task of the instructor, implementing student self-assessment in the classroom ensures students evaluate their performance and the process of learning that led to it. Self-assessment thus provides a sense of student ownership of their learning and can lead to greater investment and engagement. It also enables students to develop transferable skills in other areas of learning that involve group projects and teamwork, critical thinking and problem-solving, as well as leadership roles in the teaching and learning process with their peers.

Things to Keep in Mind about Self-Assessment

  • Self-assessment is not self-grading. According to Brown and Knight, “Self-assessment involves the use of evaluative processes in which judgement is involved, where self-grading is the marking of one’s own work against a set of criteria and potential outcomes provided by a third person, usually the [instructor]” (1994, p. 52). Self-assessment can involve self-grading, but instructors of record retain the final authority to determine and assign grades.
  • To accurately and thoroughly self-assess, students require clear learning goals for the assignment in question, as well as rubrics that clarify different performance criteria and levels of achievement for each. These rubrics may be instructor-designed, or they may be fashioned through a collaborative dialogue with students. Rubrics need not include any grade assignation, but merely descriptive academic standards for different criteria.
  • Students may not have the expertise to assess themselves thoroughly, so it is helpful to build students’ capacities for self-evaluation, and it is important that self-assessments always be supplemented with faculty assessments.
  • Students may initially resist instructor attempts to involve them in the assessment process. This is usually due to insecurities or lack of confidence in their ability to objectively evaluate their own work, or possibly because of habituation to more passive roles in the learning process. Brown and Knight note, however, that when students are asked to evaluate their work, student-determined outcomes are frequently very similar to those of instructors, particularly when the criteria and expectations have been made explicit in advance (1994).
  • Methods of self-assessment vary widely and can be as unique as the instructor or the course. Common forms involve written or oral reflection on a student’s own work, including portfolios, logs, instructor-student interviews, learner diaries and dialogue journals, post-test reflections, and the like.
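A rubric of the kind described above – descriptive academic standards per criterion, with no grade attached – can be sketched as a small lookup table. All criteria, level names, and wording here are invented for illustration:

```python
# A hypothetical rubric: each criterion maps achievement levels to
# descriptive academic standards, with no grade assignation.
rubric = {
    "argument": {
        "developing": "Thesis present but unsupported.",
        "proficient": "Thesis supported with relevant evidence.",
        "advanced": "Nuanced thesis; evidence weighed against alternatives.",
    },
    "use of sources": {
        "developing": "Sources summarized without analysis.",
        "proficient": "Sources analyzed and connected to the argument.",
        "advanced": "Sources synthesized into an original position.",
    },
}

def self_assess(choices):
    """Return the descriptive standard for each level a student selects."""
    return {criterion: rubric[criterion][level]
            for criterion, level in choices.items()}

# A student judges their own draft against the rubric's criteria.
feedback = self_assess({"argument": "proficient",
                        "use of sources": "developing"})
```

Because the table holds only descriptive standards, a student using it is assessing, not grading: the output names what the work does and does not yet do, and the instructor of record still determines any grade.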

Peer Assessment

Peer assessment is a type of collaborative learning technique where students evaluate the work of their peers and, in return, have their own work evaluated as well. This dimension of assessment is significantly grounded in theoretical approaches to active learning and adult learning. Like self-assessment, peer assessment gives learners ownership of learning and focuses on the process of learning as students are able to “share with one another the experiences that they have undertaken” (Brown and Knight, 1994, p. 52). However, it also provides students with other models of performance (e.g., different styles or narrative forms of writing), as well as the opportunity to teach, which can enable greater preparation, reflection, and meta-cognitive organization.

Things to Keep in Mind about Peer Assessment

  • Similar to self-assessment, students benefit from clear and specific learning goals and rubrics. Again, these may be instructor-defined or determined through collaborative dialogue.
  • Also similar to self-assessment, it is important to not conflate peer assessment and peer grading, since grading authority is retained by the instructor of record.
  • While student peer assessments are most often fair and accurate, they sometimes can be subject to bias. In competitive educational contexts, for example when students are graded normatively (“on a curve”), students can be biased or potentially game their peer assessments, giving their fellow students unmerited low evaluations. Conversely, in more cooperative teaching environments or in cases when they are friends with their peers, students may provide overly favorable evaluations. Also, other biases associated with identity (e.g., race, gender, or class) and personality differences can shape student assessments in unfair ways. Therefore, it is important for instructors to encourage fairness, to establish processes based on clear evidence and identifiable criteria, and to provide instructor assessments as accompaniments or correctives to peer evaluations.
  • Students may not have the disciplinary expertise or assessment experience of the instructor, and therefore can issue unsophisticated judgments of their peers. Therefore, to avoid unfairness, inaccuracy, and limited comments, formative peer assessments may need to be supplemented with instructor feedback.

As Brown and Knight assert, utilizing multiple methods of assessment, including more than one assessor when possible, improves the reliability of assessment data. It also ensures that students with diverse aptitudes and abilities can be assessed accurately and have equal opportunities to excel. However, a primary challenge of the multiple-methods approach is how to weigh the scores produced by the different methods. When particular methods produce a higher range of marks than others, instructors can potentially misinterpret and mis-evaluate student learning. Ultimately, Brown and Knight caution that, when multiple methods produce different messages about the same student, instructors should be mindful that the methods are likely assessing different forms of achievement (1994).
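The weighting problem – methods that produce different ranges of marks – can be illustrated with a standardization sketch. The marks below are hypothetical, and z-scoring is one common statistical remedy, not a procedure prescribed by Brown and Knight:

```python
from statistics import mean, stdev

def standardize(scores):
    """Convert raw marks to z-scores so methods with
    different ranges become comparable."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

# Hypothetical marks for five students:
# essays cluster tightly, exams spread widely.
essay_marks = [62, 65, 68, 64, 66]
exam_marks = [40, 85, 55, 95, 70]

# Averaging raw marks lets the wide-range exam dominate differences
# between students; standardizing each method first puts both on a
# common scale before they are combined.
essay_z = standardize(essay_marks)
exam_z = standardize(exam_marks)
combined = [(e + x) / 2 for e, x in zip(essay_z, exam_z)]
```

Even after such an adjustment, the caution above still applies: if the two methods rank students differently, they are likely measuring different forms of achievement, and no weighting scheme should be allowed to paper over that.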

These are only a few of the many forms of assessment that one might use to evaluate and enhance student learning (see also Brown and Knight, 1994). To this list of assessment forms and methods we may add many more that encourage students to produce anything from research papers to films, theatrical productions to travel logs, op-eds to photo essays, manifestos to short stories. The limits of what may be assigned as a form of assessment are as varied as the subjects and skills we seek to empower in our students. Vanderbilt’s Center for Teaching has an ever-expanding array of guides on creative models of assessment, listed below, so please visit them to learn more about other assessment innovations and subjects.

Whatever plan and method you use, assessment often begins with an intentional clarification of the values that drive it. While many in higher education may argue that values do not have a role in assessment, we contend that values (for example, rigor) always motivate and shape even the most objective of learning assessments. Therefore, as in other aspects of assessment planning, it is helpful to be intentional and critically reflective about what values animate your teaching and the learning assessments it requires. There are many values that may direct learning assessment, but common ones include rigor, generativity, practicability, co-creativity, and full participation (Bandy et al., 2018). What do these characteristics mean in practice?

Rigor. In the context of learning assessment, rigor means aligning our methods with the goals we have for students, principles of validity and reliability, ethics of fairness and doing no harm, critical examinations of the meaning we make from the results, and good faith efforts to improve teaching and learning. In short, rigor suggests understanding learning assessment as we would any other form of intentional, thoroughgoing, critical, and ethical inquiry.

Generativity. Learning assessments may be most effective when they create conditions for the emergence of new knowledge and practice, including student learning and skill development, as well as instructor pedagogy and teaching methods. Generativity opens up rather than closes down possibilities for discovery, reflection, growth, and transformation.

Practicability. Practicability recommends that learning assessment be grounded in the realities of the world as it is, fitting within the boundaries of both instructors’ and students’ time and labor. While this may, at times, favor a method of learning assessment that seems to conflict with the other values, we believe that assessment fails to be rigorous, generative, participatory, or co-creative if it is not feasible and manageable for instructors and students.

Full Participation. Assessments should be equally accessible to, and encouraging of, learning for all students, empowering all to thrive regardless of identity or background. This requires multiple and varied methods of assessment that are inclusive of diverse identities – racial, ethnic, national, linguistic, gendered, sexual, class, etcetera – and their varied perspectives, skills, and cultures of learning.

Co-creation. As alluded to above regarding self- and peer-assessment, co-creative approaches empower students to become subjects of, not just objects of, learning assessment. That is, learning assessments may be more effective and generative when assessment is done with, not just for or to, students. This is consistent with feminist, social, and community engagement pedagogies, in which values of co-creation encourage us to critically interrogate and break down hierarchies between knowledge producers (traditionally, instructors) and consumers (traditionally, students) (e.g., Saltmarsh, Hartley, & Clayton, 2009, p. 10; Weimer, 2013). In co-creative approaches, students’ involvement enhances the meaningfulness, engagement, motivation, and meta-cognitive reflection of assessments, yielding greater learning (Bass & Elmendorf, 2019). The principle of students being co-creators of their own education is what motivates the course design and professional development work Vanderbilt University’s Center for Teaching has organized around the Students as Producers theme.

Below is a list of other CFT teaching guides that supplement this one and may be of assistance as you consider all of the factors that shape your assessment plan.

  • Active Learning
  • An Introduction to Lecturing
  • Beyond the Essay: Making Student Thinking Visible in the Humanities
  • Bloom’s Taxonomy
  • Classroom Assessment Techniques (CATs)
  • Classroom Response Systems
  • How People Learn
  • Service-Learning and Community Engagement
  • Syllabus Construction
  • Teaching with Blogs
  • Test-Enhanced Learning
  • Assessing Student Learning (a five-part video series for the CFT’s Online Course Design Institute)

Angelo, Thomas A., and K. Patricia Cross. Classroom Assessment Techniques: A Handbook for College Teachers. 2nd edition. San Francisco: Jossey-Bass, 1993. Print.

Bandy, Joe, Mary Price, Patti Clayton, Julia Metzker, Georgia Nigro, Sarah Stanlick, Stephani Etheridge Woodson, Anna Bartel, & Sylvia Gale. Democratically engaged assessment: Reimagining the purposes and practices of assessment in community engagement . Davis, CA: Imagining America, 2018. Web.

Bass, Randy, and Heidi Elmendorf. “Designing for Difficulty: Social Pedagogies as a Framework for Course Design.” Social Pedagogies: Teagle Foundation White Paper. Georgetown University, 2019. Web.

Brookfield, Stephen D. Becoming a Critically Reflective Teacher . San Francisco: Jossey-Bass, 1995. Print

Brown, Sally, and Peter Knight. Assessing Learners in Higher Education. London; Philadelphia: Routledge, 1994. Print.

Cameron, Jeanne, et al. “Assessment as Critical Praxis: A Community College Experience.” Teaching Sociology 30.4 (2002): 414–429. JSTOR. Web.

Fink, L. Dee. Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses. Second Edition. San Francisco, CA: Jossey-Bass, 2013. Print.

Gelmon, Sherril B., Barbara Holland, and Amy Spring. Assessing Service-Learning and Civic Engagement: Principles and Techniques. Second Edition. Stylus, 2018. Print.

Gibbs, Graham, and Claire Simpson. “Conditions under which Assessment Supports Student Learning.” Learning and Teaching in Higher Education 1 (2004): 3–31. Print.

Henderson, Euan S. “The Essay in Continuous Assessment.” Studies in Higher Education 5.2 (1980): 197–203. Taylor and Francis. Web.

Kuh, George. High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter. American Association of Colleges & Universities, 2008. Web.

Maki, Peggy L. “Developing an Assessment Plan to Learn about Student Learning.” The Journal of Academic Librarianship 28.1 (2002): 8–13. ScienceDirect. Web.

Sharkey, Stephen, and William S. Johnson. Assessing Undergraduate Learning in Sociology . ASA Teaching Resource Center, 1992. Print.

Walvoord, Barbara. Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education. Second Edition . San Francisco, CA: Jossey-Bass, 2010. Print.

Weimer, Maryellen. Learner-Centered Teaching: Five Key Changes to Practice. Second Edition . San Francisco, CA: Jossey-Bass, 2013. Print.

Wiggins, Grant, and Jay McTighe. Understanding By Design. 2nd Expanded edition. Alexandria, VA: Assn. for Supervision & Curriculum Development, 2005. Print.

[1] For more on Wiggins and McTighe’s “Backward Design” model, see our teaching guide here.

In selecting free Web-based information sources, pay attention to the following criteria:

Examine: Credentials of the producer or sponsor delivering the information

  • Look for “about us,” “home,” “biography,” and “credits” pages on the home page
  • Check other publications by the author or sponsor in the Library of Congress Online Catalog or on Amazon.com
  • Examine and decode the URL address (see the Wikipedia definition of a generic top-level domain)
  • Check who owns the domain at Whois.net
  • Search Google or Amazon for other publications or sites by the author
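Part of decoding a URL can be automated with Python’s standard library. This is a minimal sketch; the address shown is invented for illustration:

```python
from urllib.parse import urlparse

def describe_url(url):
    """Split a URL into clues about its producer:
    host, top-level domain, and path."""
    parts = urlparse(url)
    host = parts.hostname or ""
    tld = host.rsplit(".", 1)[-1] if "." in host else ""
    return {"host": host, "tld": tld, "path": parts.path}

info = describe_url("https://www.example.edu/about/credits.html")

# A .edu, .gov, or .org top-level domain hints at the type of sponsor,
# though it is never a guarantee of quality by itself.
print(info["tld"])  # edu
```

The path segment can hint at context too (a `/about/` or `/credits` page is exactly where the producer’s credentials are usually listed), but domain clues only supplement, and never replace, the credential checks above.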

Consider: Information currency at the time of publication

  • Check the frequency of updates
  • Look for dates, updates, revision dates
  • Avoid undated information sources
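Checking currency can also draw on a page’s HTTP headers. A minimal sketch, assuming the server supplies a Last-Modified header (the header value here is invented; in practice the headers come from an HTTP response):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def age_in_days(headers):
    """Return the document's age in days based on its Last-Modified
    header, or None if the server provides no date at all."""
    stamp = headers.get("Last-Modified")
    if stamp is None:
        return None  # an undated source -- treat with extra caution
    modified = parsedate_to_datetime(stamp)
    return (datetime.now(timezone.utc) - modified).days

# Hypothetical response headers for illustration.
headers = {"Last-Modified": "Wed, 01 Jan 2020 00:00:00 GMT"}
print(age_in_days(headers))  # days since the last revision
```

Note that many servers omit or misreport this header, so visible dates on the page itself remain the primary evidence of currency; the header is just one more clue.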

Consider or determine: Why was the site created?

  • To entertain
  • To advertise/sell a product
  • To promote a point of view or belief
  • To spoof, or to perpetrate a sham or hoax

Extra Materials

To see the University of Idaho Libraries criteria for website evaluation, click below:
