STUDENT PORTAL 101
Date Published: August 16, 2018
As a nursing student, we know you’ve got a lot on your plate. To help you navigate the ins and outs of ATI’s student portal, here are the basics.
You’ll be glad you read this, even if this isn’t your first semester using ATI.
The Home Page
Let us introduce you to your new best friend. The home page will be personalized with your recent activity for easy access to items you’ve recently worked on. Pick up where you left off in your modules and reference your Pulse score. If you aren’t sure what that means, check out this article, Have You Checked Your Pulse? How To Understand Your Pulse Score.
Another feature on the home page is the News section, in the left-hand column, which provides updates on opportunities to help develop your nursing skills, such as The Job Hunt Facebook Live event series, plus links to external resources.
Before we move on, there’s one last thing to keep in mind. The homepage is also where you can add products using the Product ID Number that your instructor will give you.

My ATI
My ATI is where you might be spending most of your time in the portal. The Learn, Test, Improve, Apply, and NCLEX Prep tabs are filled with content to learn and practice. Depending on your educator and your institution, what you see here may vary.
- Learn: The Learn tab provides you with flashcards, tutorials, and eBooks to help you learn within different areas. You can go back into any of these areas and remediate within modules to master specific content areas.
- Test: The Test tab is where Practice Assessments and Learning System quizzes are housed, and it’s where you can go to find a Proctored Assessment when it’s in session. It will say at the top of the section if a Proctored Assessment is available so you won’t have to scroll through other Practice Assessments to find what you’re looking for.
- Improve: This section is where you’ll access your post-assessment Focused Review. Because we know everyone is different, these reviews are individualized to each practice and proctored assessment attempt and are intended to help boost your knowledge in areas where you might be struggling. Your Focused Review will look different from your classmates’, depending on how you performed on the assessments from the “Test” tab.
- Apply: The Apply tab gives you the opportunity to apply your knowledge in a simulation setting within the portal. The simulations and Video Case Studies will guide you through a virtual scenario to give you that extra practice you might not get time for in the sim lab.
- NCLEX Prep: NCLEX Prep will help you as you approach the end of your nursing school career as you get closer to taking the NCLEX exam. The questions and adaptive testing will replicate what the actual test will be like when it’s time for your big day. Start looking at this sooner rather than later to ensure you give yourself plenty of time for NCLEX preparation.
Remember that you might not have some of these products available depending on your institution or level in your program, so don’t sweat it if this list included products that you haven’t seen before!

This tab displays your results from previous attempts at assessments, tutorials, quizzes, and more. Check it to track your improvement or to spot tricky areas where you might need more practice.
The Help tab is filled with answers to the questions that arise when using the site. This section contains FAQs that deal with username and password issues, Focused Review access, purchasing materials, and more.
A nine-minute video listed under “Getting Started with ATI” provides additional information about using modules within different content areas and the overall startup of your student account. This section will be your first stop if you have any questions about using the website.
For a more in-depth overview, sign in to your ATI Student Portal and click the following link. http://sitefinity.atitesting.com/docs/default-source/default-document-library/howtocomprehensivestudent-newui.pdf?sfvrsn=2

Miscellaneous Information + Resources
At the very top right side of the website, you can review your account, access the online store for additional materials, find ATI’s contact information for further help, or sign out of your student portal.
Make the most out of this semester and the rest of your nursing school career with some of the best practice tools you can get. Happy studying!


Preparing for graduation
ATI Capstone Comprehensive Content Review
The ATI Capstone Comprehensive Content Review was designed to partner with your nursing program as students prepare for graduation. Capstone is tailored to your program and integrated as a supplement to your current pre-graduation curriculum in preparation for the Comprehensive Predictor® exam.
The program’s customized integration is designed to help students assess and remediate on important content, while increasing their accountability and providing educators more time to focus on curriculum.
Here’s how it works:
- Students spend approximately 4-6 hours per week engaging in their review, depending on their knowledge level.
- A weekly report with students’ progress is sent to schools, identifying those who require additional review time, enabling faculty to provide better remediation.
- Students are paired with an educator to guide them through a pre-set calendar of online content review before sitting for the ATI Comprehensive Predictor.
- Students are provided with a personalized study plan based on the knowledge level determined from their assessments.
Each week, students focus on one specific content area, moving through the following either in class or from home:
- Pre-assessment quiz
- Content area assessment
- Individualized Focused Review®
- Individualized post-assessment assignment provided by the educator
- Weekly review tips available in the classroom

For more information about the ATI Capstone Content Review, contact your Client Executive.


Ch. 15 Teacher-made assessment strategies
Kevin Seifert and Rosemary Sutton

Teacher-made assessment strategies
Kym teaches sixth grade students in an urban school where most of the families in the community live below the poverty line. Each year the majority of the students in her school fail the statewide tests. Kym follows school district teaching guides and typically uses direct instruction in her Language Arts and Social Studies classes. The classroom assessments are designed to mirror those on the statewide tests so the students become familiar with the assessment format.
When Kym is in a graduate summer course on motivation she reads an article called, “Teaching strategies that honor and motivate inner-city African American students” (Teel, Debrin-Parecki, & Covington, 1998) and she decides to change her instruction and assessment in fall in four ways:
- First, she stresses an incremental approach to ability focusing on effort and allows students to revise their work several times until the criteria are met.
- Second, she gives students choices in performance assessments (e.g. oral presentation, art project, creative writing).
- Third, she encourages responsibility by asking students to assist in classroom tasks such as setting up video equipment, handing out papers etc.
- Fourth, she validates student’ cultural heritage by encouraging them to read biographies and historical fiction from their own cultural backgrounds.
Kym reports that the changes in her students’ effort and demeanor in class are dramatic: students are more enthusiastic, work harder, and produce better products. At the end of the year twice as many of her students pass the statewide test than the previous year.
Afterward, Kym still teaches sixth grade in the same school district and continues to modify the strategies described above. Even though the performance of the students she taught improved, the school was closed because, on average, the students’ performance was poor. Kym gained a Ph.D. and teaches Educational Psychology to preservice and inservice teachers in evening classes.
Kym’s story illustrates several themes related to assessment that we explore in this chapter on teacher-made assessment strategies and in the chapter on standardized testing.
First, choosing effective classroom assessments is related to instructional practices, beliefs about motivation, and the presence of statewide standardized testing.
Second, some teacher-made classroom assessments enhance student learning and motivation; some do not.
Third, teachers can improve their teaching through action research. This involves identifying a problem (e.g. low motivation and achievement), learning about alternative approaches (e.g. reading the literature), implementing the new approaches, observing the results (e.g. students’ effort and test results), and continuing to modify the strategies based on their observations.
Best practices in assessing student learning have undergone dramatic changes in the last 20 years. When Rosemary was a mathematics teacher in the 1970s, she did not assess students’ learning; she tested them on the mathematics knowledge and skills she had taught during the previous weeks. The test formats varied little and students always did them individually with pencil and paper.
Many teachers, including mathematics teachers, now use a wide variety of methods to determine what their students have learned and also use this assessment information to modify their instruction. In this chapter, the focus is on using classroom assessments to improve student learning and we begin with some basic concepts.
Basic concepts
Assessment is an integrated process of gaining information about students’ learning and making value judgments about their progress (Linn & Miller, 2005). Information about students’ progress can be obtained from a variety of sources, including projects, portfolios, performances, observations, and tests. The information about students’ learning is often assigned specific numbers or grades and this involves measurement. Measurement answers the question, “How much?” and is used most commonly when the teacher scores a test or product and assigns numbers (e.g. 28/30 on the biology test; 90/100 on the science project).
Evaluation is the process of making judgments about the assessment information (Airasian, 2005). These judgments may be about individual students (e.g. should Jacob’s course grade take into account his significant improvement over the grading period?), the assessment method used (e.g. is the multiple choice test a useful way to obtain information about problem solving), or one’s own teaching (e.g. most of the students this year did much better on the essay assignment than last year so my new teaching methods seem effective).
The primary focus in this chapter is on assessment for learning, where the priority is designing and using assessment strategies to enhance student learning and development. Assessment for learning is often referred to as formative assessment, i.e. it takes place during the course of instruction by providing information that teachers can use to revise their teaching and students can use to improve their learning (Black, Harrison, Lee, Marshall & Wiliam, 2004).
Formative assessment includes both informal assessment involving spontaneous unsystematic observations of students’ behaviors (e.g. during a question and answer session or while the students are working on an assignment) and formal assessment involving pre-planned, systematic gathering of data.
Assessment of learning is formal assessment that involves assessing students in order to certify their competence and fulfill accountability mandates. It is the primary focus of the next chapter on standardized tests but is also considered in this chapter. Assessment of learning is typically summative, that is, administered after the instruction is completed (e.g. a final examination in an educational psychology course). Summative assessments provide information about how well students mastered the material, whether students are ready for the next unit, and what grades should be given (Airasian, 2005).
Assessment for learning: an overview of the process
Using assessment to advance students’ learning, not just to check on learning, requires viewing assessment as a process that is integral to all phases of teaching, including planning, classroom interactions and instruction, communication with parents, and self-reflection (Stiggins, 2002). Essential steps in assessment for learning include:
Step 1: Having clear instructional goals and communicating them to students
Teachers need to think carefully about the purposes of each lesson and unit. This may be hard for beginning teachers. Teachers must communicate the lesson goals and objectives to their students so they know what is important for them to learn. No matter how thorough a teacher’s planning has been, if students do not know what they are supposed to learn they will not learn as much.
Step 2: Selecting appropriate assessment techniques
Selecting and administering assessment techniques that are appropriate for the goals of instruction as well as the developmental level of the students are crucial components of effective assessment for learning. Teachers need to know the characteristics of a wide variety of classroom assessment techniques and how these techniques can be adapted for various content, skills, and student characteristics. They also should understand the role that reliability, validity, and the absence of bias should play in choosing and using assessment techniques. Much of this chapter focuses on this information.
Step 3: Using assessment to enhance motivation and confidence
Students’ motivation and confidence are influenced by the type of assessment used as well as the feedback given about the assessment results. Consider Samantha, a college student who takes a history class in which the professor’s lectures and textbook focus on really interesting major themes. However, the assessments are all multiple-choice tests that ask about facts, and Samantha, who initially enjoys the classes and readings, becomes angry, loses confidence that she can do well, and begins to spend less time on the class material. The type of feedback provided to students is also important and we elaborate on these ideas later in this chapter.
Step 4: Adjusting instruction based on information
An essential component of assessment for learning is that the teacher uses the information gained from assessment to adjust instruction. These adjustments occur during a lesson, when a teacher may decide that students’ responses to questions indicate sufficient understanding to introduce a new topic, or that her observations of students’ behavior indicate that they do not understand the assignment and so need further explanation. Adjustments also occur when the teacher reflects on the instruction after the lesson is over and is planning for the next day. We provide examples of adjusting instruction in this chapter.
Step 5: Communicating with parents and guardians
Students’ learning and development are enhanced when teachers communicate with parents regularly about their children’s performance. Teachers communicate with parents in a variety of ways including newsletters, telephone conversations, email, school district websites, and parent-teacher conferences. Effective communication requires that teachers can clearly explain the purpose and characteristics of the assessment as well as the meaning of students’ performance. This requires a thorough knowledge of the types and purposes of teacher-made and standardized assessments as well as clear communication skills.
We now consider each step in the process of assessment for learning in more detail. In order to be able to select and administer appropriate assessment techniques teachers need to know about the variety of techniques that can be used as well as what factors ensure that the assessment techniques are high quality. We begin by considering high quality assessments.

Selecting appropriate assessment techniques: high quality assessments
For an assessment to be high quality it needs to have good validity and reliability as well as absence of bias.
Validity is the evaluation of the “adequacy and appropriateness of the interpretations and uses of assessment results” for a given group of individuals (Linn & Miller, 2005, p. 68).
For example, is it appropriate to conclude that the results of a mathematics test on fractions given to recent immigrants accurately represents their understanding of fractions?
Is it appropriate for the teacher to conclude, based on her observations, that a kindergarten student, Jasmine, has Attention Deficit Disorder because she does not follow the teacher’s oral instructions?
Obviously, in each situation other interpretations are possible that the immigrant students have poor English skills rather than mathematics skills, or that Jasmine may be hearing impaired.
It is important to understand that validity refers to the interpretation and uses made of the results of an assessment procedure, not of the assessment procedure itself. For example, making judgments about the results of the same test on fractions may be valid if all the students understand English well. A teacher concluding from her observations that the kindergarten student has Attention Deficit Disorder (ADD) may be appropriate if the student has been screened for hearing and other disorders (although the classification of a disorder like ADD cannot be made by one teacher). Validity involves making an overall judgment of the degree to which the interpretations and uses of the assessment results are justified. Validity is a matter of degree (e.g. high, moderate, or low validity) rather than all-or-none (e.g. totally valid vs invalid) (Linn & Miller, 2005).
Three sources of evidence are considered when assessing validity—content, construct and predictive.
Content validity evidence is associated with the question: How well does the assessment include the content or tasks it is supposed to? For example, suppose your educational psychology instructor devises a mid-term test and tells you this includes chapters one to seven in the textbook. Obviously, all the items in the test should be based on the content from educational psychology, not your methods or cultural foundations classes. Also, the items in the test should cover content from all seven chapters and not just chapters three to seven—unless the instructor tells you that these chapters have priority.
Teachers have to be clear about their purposes and priorities for instruction before they can begin to gather evidence related to content validity. Content validation determines the degree to which assessment tasks are relevant and representative of the tasks judged by the teacher (or test developer) to represent their goals and objectives (Linn & Miller, 2005). It is important for teachers to think about content validation when devising assessment tasks, and one way to help do this is to devise a Table of Specifications.
An example, based on Pennsylvania’s State standards for grade 3 geography, follows. In the left-hand column is the instructional content for a 20-item test the teacher has decided to construct with two kinds of instructional objectives: identification and uses or locates. The second and third columns identify the number of items for each content area and each instructional objective. Notice that the teacher has decided that six items should be devoted to the sub area of geographic representations, more than any other sub area. Devising a table of specifications helps teachers determine if some content areas or concepts are over-sampled (i.e. there are too many items) and some concepts are under-sampled (i.e. there are too few items).
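The balance check that a table of specifications supports can be sketched in code. The sketch below keeps the two details the text gives (a 20-item test, six items for geographic representations, two objective types); the other content areas and item counts are hypothetical stand-ins, not Pennsylvania’s actual standards.

```python
# Sketch: checking a table of specifications for over-/under-sampling.
# Content areas and most item counts are hypothetical illustrations.

table_of_specifications = {
    # content area: (identification items, uses-or-locates items)
    "geographic representations": (3, 3),   # six items, per the text
    "physical characteristics": (2, 2),
    "human characteristics": (3, 2),
    "interactions between people and places": (3, 2),
}

planned_total = 20

# Total items planned per content area, across both objective types.
items_per_area = {area: sum(counts) for area, counts in table_of_specifications.items()}
actual_total = sum(items_per_area.values())

# The table should account for exactly the planned number of items.
assert actual_total == planned_total, f"planned {planned_total}, got {actual_total}"

# Report each area's share of the test so lopsided sampling stands out.
for area, n in items_per_area.items():
    print(f"{area}: {n} items ({n / actual_total:.0%} of the test)")
```

Running the check before writing items makes it obvious when one concept would dominate the test or be skipped entirely.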

Construct validity evidence is more complex than content validity evidence. Often, we are interested in making broader judgments about students’ performances than specific skills such as doing fractions. The focus may be on constructs such as mathematical reasoning or reading comprehension.
A construct is a characteristic of a person we assume exists to help explain behavior.
For example, we use the concept of test anxiety to explain why some individuals when taking a test have difficulty concentrating, have physiological reactions such as sweating, and perform poorly on tests but not in class assignments. Similarly, mathematics reasoning and reading comprehension are constructs as we use them to help explain performance on an assessment.
Construct validation is the process of determining the extent to which performance on an assessment can be interpreted in terms of the intended constructs and is not influenced by factors irrelevant to the construct.
For example, judgments about recent immigrants’ performance on a mathematical reasoning test administered in English will have low construct validity if the results are influenced by English language skills that are irrelevant to mathematical problem solving. Similarly, construct validity of end-of-semester examinations is likely to be poor for those students who are highly anxious when taking major tests but not during regular class periods or when doing assignments. Teachers can help increase construct validity by trying to reduce factors that influence performance but are irrelevant to the construct being assessed. These factors include anxiety, English language skills, and reading speed (Linn & Miller 2005).
A third form of validity evidence is called criterion-related validity. Selective colleges in the USA use the ACT or SAT among other criteria to choose who will be admitted because these standardized tests help predict freshman grades, i.e. have high criterion-related validity. Some K-12 schools give students math or reading tests in the fall semester in order to predict which are likely to do well on the annual state tests administered in the spring semester and which students are unlikely to pass the tests and will need additional assistance. If the tests administered in fall do not predict students’ performances accurately, then the additional assistance may be given to the wrong students illustrating the importance of criterion-related validity.
Reliability
Reliability refers to the consistency of the measurement (Linn & Miller, 2005). Suppose Mr Garcia is teaching a unit on food chemistry in his tenth-grade class and gives an assessment at the end of the unit using test items from the teachers’ guide. Reliability is related to questions such as: How similar would the scores of the students be if they had taken the assessment on a Friday or Monday? Would the scores have varied if Mr Garcia had selected different test items, or if a different teacher had graded the test? An assessment provides information about students by using a specific measure of performance at one particular time. Unless the results from the assessment are reasonably consistent over different occasions, different raters, or different tasks (in the same content domain), confidence in the results will be low, and they cannot be useful in improving student learning.
Obviously, we cannot expect perfect consistency. Students’ memory, attention, fatigue, effort, and anxiety fluctuate and so influence performance. Even trained raters vary somewhat when grading assessments such as essays, a science project, or an oral presentation. Also, the wording and design of specific items influence students’ performances. However, some assessments are more reliable than others and there are several strategies teachers can use to increase reliability.
- First, assessments with more tasks or items typically have higher reliability.
To understand this, consider two tests, one with five items and one with 50 items. Chance factors influence the shorter test more than the longer test. If a student misunderstands one of the items on the five-item test, the total score is very heavily influenced (it would be reduced by 20 percent). In contrast, one confusing item on the 50-item test would influence the total score much less (by only 2 percent). Obviously, this does not mean that assessments should be inordinately long, but, on average, enough tasks should be included to reduce the influence of chance variations.
- Second, clear directions and tasks help increase reliability.
If the directions or wording of specific tasks or items are unclear, then students have to guess what they mean, undermining the accuracy of their results.
- Third, clear scoring criteria are crucial in ensuring high reliability (Linn & Miller, 2005).
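The arithmetic behind the first strategy, that longer tests dilute the influence of any single item, can be sketched as:

```python
# Sketch: why longer tests tend to be more reliable. One misunderstood,
# equally weighted item costs far more of the score on a short test.

def impact_of_one_item(num_items: int) -> float:
    """Percentage-point drop in the total score if one item is missed."""
    return 100 / num_items

# The five-item vs 50-item comparison from the text.
print(f"5-item test:  one missed item costs {impact_of_one_item(5):.0f} points")
print(f"50-item test: one missed item costs {impact_of_one_item(50):.0f} points")
```

The same logic generalizes: each added item shrinks the share of the score that any one chance event (a confusing wording, a lapse of attention) can move.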
Absence of bias
Bias occurs in assessment when there are components in the assessment method or the administration of the assessment that distort the performance of the student because of their personal characteristics such as gender, ethnicity, or social class (Popham, 2005).
- Two types of assessment bias are important: offensiveness and unfair penalization.
An assessment is most likely to be offensive to a subgroup of students when negative stereotypes are included in the test. For example, the assessment in a health class could include items, in which all the doctors were men and all the nurses were women. Or, a series of questions in a social studies class could portray Latinos and Asians as immigrants rather than native born Americans. In these examples, some female, Latino or Asian students are likely to be offended by the stereotypes and this can distract them from performing well on the assessment.
Unfair penalization occurs when items disadvantage one group not because they may be offensive but because of differential background experiences. For example, an item on a math assessment that assumes knowledge of a particular sport may disadvantage groups not as familiar with that sport (e.g. American football for recent immigrants). Or an assessment on teamwork that asks students to model their concept of a team on a symphony orchestra is likely to be easier for those students who have attended orchestra performances—probably students from affluent families. Unfair penalization does not occur just because some students do poorly in class. For example, asking questions about a specific sport in a physical education class when information on that sport had been discussed in class is not unfair penalization, as long as the questions do not require knowledge beyond that taught in class that some groups are less likely to have.
It can be difficult for new teachers teaching in multi-ethnic classrooms to devise interesting assessments that do not penalize any groups of students. Teachers need to think seriously about the impact of students’ differing backgrounds on the assessment they use in class. Listening carefully to what students say is important as is learning about the backgrounds of the students.
Selecting appropriate assessment techniques II: types of teacher-made assessments
One of the challenges for beginning teachers is to select and use appropriate assessment techniques. In this section, we summarize the wide variety of types of assessments that classroom teachers use. First, we discuss the informal techniques teachers use during instruction that typically require instantaneous decisions. Then we consider formal assessment techniques that teachers plan before instruction and allow for reflective decisions.

Teachers’ observation, questioning, and record keeping
During teaching, teachers not only have to communicate the information they planned, but also continuously monitor students’ learning and motivation in order to determine whether modifications have to be made (Airasian, 2005). Beginning teachers find this more difficult than experienced teachers because of the complex cognitive skills required to improvise and be responsive to students’ needs while simultaneously keeping in mind the goals and plans of the lesson (Borko & Livingston, 1989).
- The informal assessment strategies teachers most often use during instruction are observation and questioning.
Observation
Effective teachers observe their students from the time they enter the classroom. Some teachers greet their students at the door not only to welcome them, but also to observe their mood and motivation. Are Hannah and Naomi still not talking to each other? Does Ethan have his materials with him? Gaining information on such questions can help the teacher foster student learning more effectively (e.g. suggesting Ethan goes back to his locker to get his materials before the bell rings or avoiding assigning Hannah and Naomi to the same group).
During instruction, teachers observe students’ behavior to gain information about students’ level of interest and understanding of the material or activity. Observation includes looking at non-verbal behaviors as well as listening to what the students are saying. For example, a teacher may observe that a number of students are looking out of the window rather than watching the science demonstration, or a teacher may hear students making comments in their group indicating they do not understand what they are supposed to be doing.
Observations also help teachers decide which student to call on next, whether to speed up or slow down the pace of the lesson, when more examples are needed, whether to begin or end an activity, how well students are performing a physical activity, and if there are potential behavior problems (Airasian, 2005). Many teachers find that moving around the classroom helps them observe more effectively because they can see more students from a variety of perspectives. However, the fast pace and complexity of most classrooms makes it difficult for teachers to gain as much information as they want.
Questioning
Teachers ask questions for many instructional reasons, including keeping students’ attention on the lesson, highlighting important points and ideas, promoting critical thinking, allowing students to learn from each other’s answers, and providing information about students’ learning. Devising good questions and using students’ responses to make effective, on-the-spot instructional decisions is very difficult.
Some strategies to improve questioning include:
- planning and writing down the instructional questions that will be asked
- allowing sufficient wait time for students to respond
- listening carefully to what students say rather than listening for what is expected
- varying the types of questions asked
- making sure some of the questions are higher level
- asking follow-up questions
While informal assessment based on spontaneous observation and questioning is essential for teaching, there are inherent problems with the validity, reliability, and bias of this information (Airasian, 2005; Stiggins, 2005). We summarize these issues and some ways to reduce the problems in Table 1.
Record keeping
Keeping records of observations improves reliability and can be used to enhance understanding of one student, a group, or the whole class’s interactions. Sometimes this requires help from other teachers. For example, Alexis, a beginning science teacher, is aware of the research documenting that longer wait time enhances students’ learning (e.g. Rowe, 2003), but she is unsure of her own behavior, so she asks a colleague to observe and record her wait times during one class period. Alexis learns her wait times are very short for all students, so she starts practicing silently counting to five whenever she asks students a question.
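A colleague's wait-time record like the one Alexis requested can be summarized very simply. The sketch below uses invented timings and an illustrative three-second target (Rowe's research suggests waiting at least about three seconds); the specific numbers are assumptions, not data from the text.

```python
# Hypothetical wait-time log (in seconds) recorded by a colleague
# while observing one class period.
wait_times = [0.8, 1.1, 0.9, 1.4, 0.7, 1.0]

mean_wait = sum(wait_times) / len(wait_times)
target = 3.0  # illustrative research-based minimum

print(f"Mean wait time: {mean_wait:.1f}s (target: {target:.1f}s or more)")
if mean_wait < target:
    print("Wait times are short for all questions; practice pausing longer.")
```

Even a record this small turns a vague impression ("I think I rush students") into something a teacher can act on and re-measure later.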
Teachers can keep anecdotal records about students without help from peers. These records contain descriptions of incidents of a student’s behavior, the time and place the incident takes place, and a tentative interpretation of the incident. For example, the description of the incident might involve Joseph, a second-grade student, who fell asleep during the mathematics class on a Monday morning.
A tentative interpretation could be the student did not get enough sleep over the weekend, but alternative explanations could be the student is sick or is on medications that make him drowsy. Obviously, additional information is needed and the teacher could ask Joseph why he is so sleepy and also observe him to see if he looks tired and sleepy over the next couple of weeks.
Anecdotal records often provide important information and are better than relying on one’s memory, but they take time to maintain and it is difficult for teachers to be objective. For example, after seeing Joseph fall asleep the teacher may now look for any signs of Joseph’s sleepiness—ignoring the days he is not sleepy. Also, it is hard for teachers to sample a wide enough range of data for their observations to be highly reliable.
Teachers also conduct more formal observations, especially for students who have IEPs. An example of the importance of informal and formal observations in a preschool follows:
The class of preschoolers in a suburban neighborhood of a large city has eight special needs students and four students—the peer models—who have been selected because of their well-developed language and social skills. Some of the special needs students have been diagnosed with delayed language, some with behavior disorders, and several with autism.
The students are sitting on the mat with the teacher who has a box with sets of three “cool” things of varying size (e.g. toy pandas) and the students are asked to put the things in order by size, big, medium and small. Students who are able are also requested to point to each item in turn and say “This is the big one”, “This is the medium one” and “This is the little one”.
For some students, only two choices (big and little) are offered because that is appropriate for their developmental level. The teacher informally observes that one of the boys is having trouble keeping his legs still, so she quietly asks the aide for a weighted pad that she places on the boy’s legs to help him keep them still. The activity continues and the aide carefully observes students’ behaviors and records on IEP progress cards whether a child meets specific objectives such as: “When given two picture or object choices, Mark will point to the appropriate object in 80 per cent of the opportunities.”
The teacher and aides keep records of the relevant behavior of the special needs students during the half day they are in preschool. The daily records are summarized weekly. If not enough observations have been recorded for a specific objective, the teacher and aide focus their observations more on that child and, if necessary, try to create specific situations that relate to that objective. At the end of each month the teacher calculates whether the special needs children are meeting their IEP objectives.
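The monthly calculation described above amounts to comparing each child's success rate against the criterion written into the objective. A minimal sketch, using invented tallies and objective names modeled on the "80 per cent of the opportunities" example:

```python
# Hypothetical monthly IEP tallies: (times objective met, opportunities observed).
progress = {
    "Mark points to the appropriate object": (17, 20),
    "Mark orders two sizes (big/little)": (9, 15),
}

CRITERION = 0.80  # "in 80 per cent of the opportunities"

for objective, (met, opportunities) in progress.items():
    rate = met / opportunities
    status = "meeting" if rate >= CRITERION else "not yet meeting"
    print(f"{objective}: {rate:.0%} -> {status} the objective")
```

The same record also flags objectives with too few opportunities, which is the teacher's cue to observe that child more or engineer situations that elicit the behavior.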

Selected response items
Common formal assessment formats used by teachers are multiple choice, matching, and true/false items. In selected response items students must select a response provided by the teacher or the test developer rather than constructing a response in their own words or actions. Selected response items do not require that students recall the information, but rather recognize the correct answer.
Tests with these items are called objective because the results are not influenced by scorers’ judgments or interpretations and so are often machine scored. Eliminating potential errors in scoring increases the reliability of tests, but teachers who only use objective tests are liable to reduce the validity of their assessment because objective tests are not appropriate for all learning goals (Linn & Miller, 2005). Effective assessment for learning as well as assessment of learning must be based on aligning the assessment technique to the learning goals and outcomes.
For example, if the goal is for students to conduct an experiment, then they should be asked to do that rather than being asked about conducting an experiment.
Common problems
Selected response items are easy to score but are hard to devise. Teachers often do not spend enough time constructing items and common problems include:
- Unclear wording in the items
- True or False: Although George Washington was born into a wealthy family, his father died when he was only 11, he worked as a youth as a surveyor of rural lands, and later stood on the balcony of Federal Hall in New York when he took his oath of office in 1789.
- Cues that are not related to the content being examined.
- A common cue is that all the true statements on a true/false test, or the correct alternatives on a multiple-choice test, are longer than the untrue statements or the incorrect alternatives.
- Using negatives (or double negatives) in the items.
- A poor item: “True or False: None of the steps made by the student was unnecessary.”
- A better item: “True or False: All of the steps were necessary.”
Students often do not notice the negative terms or find them confusing so avoiding them is generally recommended (Linn & Miller 2005). However, since standardized tests often use negative items, teachers sometimes deliberately include some negative items to give students practice in responding to that format.
- Taking sentences directly from a textbook or lecture notes.
Removing the words from their context often makes them ambiguous or can change the meaning. For example, a statement from Chapter 3 taken out of context suggests all children are clumsy: “Similarly, with jumping, throwing and catching: the large majority of children can do these things, though often a bit clumsily.” A fuller quotation makes it clearer that this sentence refers to 5-year-olds: “For some fives, running still looks a bit like a hurried walk, but usually it becomes more coordinated within a year or two. Similarly, with jumping, throwing and catching: the large majority of children can do these things, though often a bit clumsily, by the time they start school, and most improve their skills noticeably during the early elementary years.” If the abbreviated form were used as the stem in a true/false item it would obviously be misleading.
- Asking trivial questions
e.g. Jean Piaget was born in what year?
While it is important to know approximately when Piaget made his seminal contributions to the understanding of child development, the exact year of his birth (1896) is not important.
Strengths and weaknesses
All types of selected response items have a number of strengths and weaknesses.
- True/False items are appropriate for measuring factual knowledge such as vocabulary, formulae, dates, proper names, and technical terms. They are very efficient as they use a simple structure that students can easily understand, and take little time to complete. They are also easier to construct than multiple choice and matching items. However, students have a 50 percent probability of getting the answer correct through guessing so it can be difficult to interpret how much students know from their test scores.
In matching items, two parallel columns containing terms, phrases, symbols, or numbers are presented and the student is asked to match the items in the first column with those in the second column. Typically, there are more items in the second column to make the task more difficult and to ensure that if a student makes one error they do not have to make another.
Matching items most often are used to measure lower level knowledge , such as persons and their achievements, dates and historical events, terms and definitions, symbols and concepts, plants or animals and classifications (Linn & Miller, 2005). An example with Spanish language words and their English equivalents is below:
Directions: On the line to the left of the Spanish word in Column A, write the letter of the English word in Column B that has the same meaning.
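Scoring a matching item is mechanical once an answer key exists, which is part of why these items count as objective. A sketch with an invented key (the Spanish/English pairs and the student's answers are illustrative, not from the text):

```python
# Hypothetical answer key: Column A terms mapped to the letters of their
# Column B matches. Column B would contain extra letters so that one
# error does not force a second.
key = {"casa": "d", "gato": "a", "libro": "f", "perro": "b"}
student_answers = {"casa": "d", "gato": "c", "libro": "f", "perro": "b"}

score = sum(student_answers[term] == letter for term, letter in key.items())
print(f"{score}/{len(key)} correct")
```

Because the second column is longer than the first, the one wrong answer here ("gato") does not automatically make a second item wrong, which is exactly the design rationale described above.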
While matching items may seem easy to devise, it is hard to create homogeneous lists. Other problems with matching items and suggested remedies are in Table 37.
Multiple Choice items are the most commonly used type of objective test items because they have a number of advantages over other objective test items.
- Most importantly, they can be adapted to assess higher-level thinking, such as application, as well as lower-level factual knowledge. The first example below assesses knowledge of a specific fact, whereas the second example assesses application of knowledge.
Who is best known for their work on the development of the morality of justice?
- b) Vygotsky
- d) Kohlberg
Which one of the following best illustrates the law of diminishing returns?
- a) A factory doubled its labor force and increased production by 50 per cent
- b) The demand for an electronic product increased faster than the supply of the product
- c) The population of a country increased faster than agricultural self sufficiency
- d) A machine decreased in efficiency as its parts became worn out
(Adapted from Linn & Miller, 2005, p. 193)
There are several other advantages of multiple choice items. Students have to recognize the correct answer, not merely judge that a statement is incorrect as they can in true/false items. Also, the opportunity for guessing is reduced because four or five alternatives are usually provided, whereas in true/false items students only choose between two. Finally, multiple choice items do not require homogeneous material as matching items do.
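The effect of guessing on the two formats is easy to quantify: with random guessing, the expected proportion correct is simply one over the number of options. A short sketch (the 20-item test length is an arbitrary illustration):

```python
# Expected proportion of items answered correctly by blind guessing,
# for an item with a given number of response options.
def chance_correct(num_options: int) -> float:
    return 1 / num_options

print(f"True/False: {chance_correct(2):.0%} per item")
print(f"4-option multiple choice: {chance_correct(4):.0%} per item")

# On a 20-item test, pure guessing yields on average:
print(f"T/F: {20 * chance_correct(2):.0f} items; MC: {20 * chance_correct(4):.0f} items")
```

This is why a true/false score is harder to interpret: a student who knows nothing still averages half the items correct, versus a quarter on four-option multiple choice.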
However, creating good multiple-choice test items is difficult and students (maybe including you) often become frustrated when taking a test with poor multiple choice items. Three steps must be considered when constructing a multiple-choice item: formulating a clearly stated problem, identifying plausible alternatives, and removing irrelevant clues to the answer.
Constructed response items
Formal assessment also includes constructed response items, in which students are asked to recall information and create an answer—not just recognize if the answer is correct—so guessing is reduced.
- Constructed response items can be used to assess a wide variety of kinds of knowledge and two major kinds are discussed: completion or short answer (also called short response) and extended response.
Completion and short answer
Completion and short answer items can be answered in a word, phrase, number, or symbol. These types of items are essentially the same only varying in whether the problem is presented as a statement or a question (Linn & Miller 2005). For example:
Completion: The first traffic light in the US was invented by…………….
Short Answer: Who invented the first traffic light in the US?
These items are often used in mathematics tests, e.g.
3 + 10 = …………..?
If x = 6, what does x(x-1) =……….
Draw the line of symmetry on the following shape
A major advantage of these items is that they are easy to construct. However, apart from their use in mathematics, they are unsuitable for measuring complex learning outcomes and are often difficult to score. Completion and short answer tests are sometimes called objective tests because the intent is that there is only one correct answer, and so there is no variability in scoring; but unless the question is phrased very carefully, there are frequently a variety of correct answers. For example, consider the item:
Where was President Lincoln born?………………..
The teacher may expect the answer “in a log cabin” but other correct answers are also “on Sinking Spring Farm”, “in Hardin County” or “in Kentucky”. Common errors in these items are summarized below.
Extended response
Extended response items are used in many content areas and answers may vary in length from a paragraph to several pages. Questions that require longer responses are often called essay questions. Extended response items have several advantages and the most important is their adaptability for measuring complex learning outcomes — particularly integration and application . These items also require that students write and therefore provide teachers a way to assess writing skills. A commonly cited advantage to these items is their ease in construction; however, carefully worded items that are related to learning outcomes and assess complex learning are hard to devise (Linn & Miller, 2005).
Well-constructed items phrase the question so the task of the student is clear. Often this involves providing hints or planning notes. In the first example below the actual question is clear not only because of the wording, but because of the format (i.e. it is placed in a box). In the second and third examples planning notes are provided:
Example 1 : Third grade mathematics:
The owner of a bookstore gave 14 books to the school. The principal will give an equal number of books to each of three classrooms and the remaining books to the school library. How many books could the principal give to each classroom and the school library?
Show all your work on the space below and on the next page. Explain in words how you found the answer. Tell why you took the steps you did to solve the problem.
From Illinois Standards Achievement Test, 2006;
Example 2 : Fifth grade science: The grass is always greener
Jose and Maria noticed three different types of soil, black soil, sand, and clay, were found in their neighborhood. They decided to investigate the question, “How does the type of soil (black soil, sand, and clay) under grass sod affect the height of grass?”
Plan an investigation that could answer their new question.
In your plan, be sure to include:
- Prediction of the outcome of the investigation
- Materials needed to do the investigation
- Procedure that includes:
- logical steps to do the investigation
- one variable kept the same (controlled)
- one variable changed (manipulated)
- any variables being measured and recorded
- how often measurements are taken and recorded (From Washington State 2004 assessment of student learning)
Example 3: Grades 9-11 English:
Writing prompt
Some people think that schools should teach students how to cook. Other people think that cooking is something that ought to be taught in the home. What do you think? Explain why you think as you do.
Planning notes
Choose One:
□ I think schools should teach students how to cook
□ I think cooking should be taught in the home
I think cooking should be taught in …………………… because ………
(school) or (the home)
(From Illinois Measure of Annual Growth in English)
A major disadvantage of extended response items is the difficulty in reliable scoring. Not only do various teachers score the same response differently but also the same teacher may score the identical response differently on various occasions (Linn & Miller 2005). A variety of steps can be taken to improve the reliability and validity of scoring.
First, teachers should begin by writing an outline of a model answer. This helps make it clear what students are expected to include. Second, a sample of the answers should be read. This assists in determining what the students can do and if there are any common misconceptions arising from the question. Third, teachers have to decide what to do about irrelevant information that is included (e.g. is it ignored or are students penalized) and how to evaluate mechanical errors such as grammar and spelling. Then, a point scoring or a scoring rubric should be used.
In point scoring components of the answer are assigned points. For example, if students were asked:
What are the nature, symptoms, and risk factors of hyperthermia?
Point Scoring Guide:
Definition (nature) 2 pts
Symptoms (1 pt for each) 5 pts
Risk Factors (1 point for each) 5 pts
Writing 3 pts
This provides some guidance for evaluation and helps consistency, but point scoring systems often lead the teacher to focus on facts (e.g. naming risk factors) rather than higher-level thinking, which may undermine the validity of the assessment if the teacher’s purposes include higher-level thinking. A better approach is to use a scoring rubric that describes the quality of the answer or performance at each level.
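The arithmetic of a point-scoring guide like the hyperthermia one is just a component-wise sum against the maximums. In this sketch the guide mirrors the example above, while the awarded points are invented for illustration:

```python
# Point-scoring guide from the hyperthermia example, with one invented
# set of awarded points to show how a total is computed.
guide = {"definition": 2, "symptoms": 5, "risk factors": 5, "writing": 3}
awarded = {"definition": 2, "symptoms": 4, "risk factors": 3, "writing": 2}

# Sanity check: no component can earn more than its maximum.
assert all(awarded[part] <= guide[part] for part in guide)

total = sum(awarded.values())
possible = sum(guide.values())
print(f"Score: {total}/{possible} ({total / possible:.0%})")
```

Note what the tally cannot capture: whether the listed symptoms were connected to the underlying physiology, which is exactly the higher-level thinking a point system tends to miss.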
Scoring rubrics
Scoring rubrics can be holistic or analytical. In holistic scoring rubrics, general descriptions of performance are made and a single overall score is obtained. An example from grade 2 language arts in Los Angeles Unified School District, which classifies responses into four levels (not proficient, partially proficient, proficient, and advanced), is shown in Exhibit 4.
EXHIBIT 4: EXAMPLE OF HOLISTIC SCORING RUBRIC: ENGLISH LANGUAGE ARTS GRADE 2
Assignment. Write about an interesting, fun, or exciting story you have read in class this year. Some of the things you could write about are:
- What happened in the story (the plot or events)
- Where the events took place (the setting)
- People, animals, or things in the story (the characters)
In your writing make sure you use facts and details from the story to describe everything clearly. After you write about the story, explain what makes the story interesting, fun or exciting.
Analytical rubrics provide descriptions of levels of student performance on a variety of characteristics. For example, the six characteristics used for assessing writing developed by the Northwest Regional Education Laboratory (NWREL) are:
- ideas and content
- organization
- voice
- word choice
- sentence fluency
- conventions
Holistic rubrics have the advantage that they can be developed more quickly than analytical rubrics. They are also faster to use, as there is only one dimension to examine. However, they do not provide students feedback about which aspects of the response are strong and which aspects need improvement (Linn & Miller, 2005). This means they are less useful for assessment for learning. An important use of rubrics is as teaching tools: providing them to students before the assessment lets them know what knowledge and skills are expected.
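The trade-off between the two rubric types shows up clearly if an analytic rubric is treated as a per-trait record. In this sketch the trait names follow the NWREL writing model, but the 1-4 scale and the student's scores are invented:

```python
# Sketch of an analytic rubric: each writing trait is scored separately
# on a 1-4 scale, so feedback can point to strong and weak dimensions.
traits = ["ideas and content", "organization", "word choice",
          "sentence fluency", "conventions"]
scores = {"ideas and content": 4, "organization": 3, "word choice": 3,
          "sentence fluency": 2, "conventions": 4}

for trait in traits:
    level = scores[trait]
    note = "needs work" if level <= 2 else "strong" if level == 4 else "developing"
    print(f"{trait}: {level}/4 ({note})")

# A holistic rubric would collapse all of this into one overall score,
# losing the information that sentence fluency is the weak dimension.
```

The per-trait breakdown is what makes analytic scoring useful for assessment for learning: the student above knows to work on sentence fluency, not "writing" in general.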
Teachers can use scoring rubrics as part of instruction by giving students the rubric during instruction, providing several responses, and analyzing these responses in terms of the rubric. For example, use of accurate terminology is one dimension of the science rubric in Table 40. An elementary science teacher could discuss why it is important for scientists to use accurate terminology, give examples of inaccurate and accurate terminology, provide that component of the scoring rubric to students, distribute some examples of student responses (maybe from former students), and then discuss how these responses would be classified according to the rubric.
- This strategy of assessment for learning should be more effective if the teacher:
(a) emphasizes to students why using accurate terminology is important when learning science rather than how to get a good grade on the test (we provide more details about this in the section on motivation later in this chapter)
(b) provides an exemplary response so students can see a model
(c) emphasizes that the goal is student improvement on this skill not ranking students.
Performance assessments
Typically, in performance assessments, students complete a specific task while teachers observe the process or procedure (e.g. data collection in an experiment) as well as the product (e.g. completed report) (Popham, 2005; Stiggins, 2005). The tasks that students complete in performance assessments are not simple—in contrast to selected response items—and include the following:
- playing a musical instrument
- athletic skills
- artistic creation
- conversing in a foreign language
- engaging in a debate about political issues
- conducting an experiment in science
- repairing a machine
- writing a term paper
- using interaction skills to play together
These examples all involve complex skills but illustrate that the term performance assessment is used in a variety of ways. For example, the teacher may not observe all of the process (e.g. she sees a draft paper but the final product is written during out-of-school hours) and essay tests are typically classified as performance assessments (Airasian, 2000). In addition, in some performance assessments there may be no clear product (e.g. the performance may be group interaction skills).
Two related terms, alternative assessment and authentic assessment are sometimes used instead of performance assessment but they have different meanings (Linn & Miller, 2005).
Alternative assessment refers to tasks that are not pencil-and-paper, and while many performance assessments are not pencil-and-paper tasks, some are (e.g. writing a term paper, essay test).
- Alternative assessment also refers to an assessment system that is used to assess students with the most significant cognitive disabilities or multiple disabilities that significantly impact intellectual functioning and adaptive behavior.
Click here to watch the video on Dynamic Learning Maps assessment system (DLM) (8:50 minutes)
Authentic assessment is used to describe tasks that students do that are similar to those in the “real world”. Classroom tasks vary in level of authenticity (Popham, 2005). For example, for a Japanese language class taught in a high school in Chicago, conversing in Japanese in Tokyo is highly authentic—but only possible in a study abroad program or trip to Japan. Conversing in Japanese with native Japanese speakers in Chicago is also highly authentic, and conversing with the teacher in Japanese during class is moderately authentic. Much less authentic is a matching test on English and Japanese words. In a language arts class, writing a letter to an editor or a memo to the principal is highly authentic, as letters and memos are common work products.
- However, writing a five-paragraph paper is not as authentic as such papers are not used in the world of work.
- However, a five-paragraph paper is a complex task and would typically be classified as a performance assessment.
Internet Resource on Performance Assessment
The Inside Mathematics website has Performance Assessment Tasks for grades 2 through 8 and high school math (algebra, functions, geometry, statistics and probability, and number and quantity). The assessments are aligned to the Common Core Standards for Mathematics: http://www.insidemathematics.org/performance-assessment-tasks You may download and use these tasks for professional development purposes without modifying the tasks.
Advantages and disadvantages
There are several advantages of performance assessments (Linn & Miller, 2005). First, the focus is on complex learning outcomes that often cannot be measured by other methods. Second, performance assessments typically assess process or procedure as well as the product. For example, the teacher can observe if the students are repairing the machine using the appropriate tools and procedures as well as whether the machine functions properly after the repairs. Third, well-designed performance assessments communicate the instructional goals and meaningful learning clearly to students. For example, if the topic in a fifth-grade art class is one-point perspective, the performance assessment could be drawing a city scene that illustrates one-point perspective. This assessment is meaningful and clearly communicates the learning goal. This performance assessment is a good instructional activity and has good content validity—common with well-designed performance assessments (Linn & Miller, 2005).
- One major disadvantage with performance assessments is that they are typically very time consuming for students and teachers. This means that fewer assessments can be gathered so if they are not carefully devised fewer learning goals will be assessed—which can reduce content validity.
State curriculum guidelines can be helpful in determining what should be included in a performance assessment. For example, Eric, a dance teacher in a high school in Tennessee, learns that the state standards indicate that dance students at the highest level should be able to demonstrate consistency and clarity in performing technical skills by:
- performing complex movement combinations to music in a variety of meters and styles
- performing combinations and variations in a broad dynamic range
- demonstrating improvement in performing movement combinations through self-evaluation
- critiquing a live or taped dance production based on given criteria
Eric devises the following performance task for his eleventh-grade modern dance class.
In groups of 4-6, students will perform a dance at least 5 minutes in length. The dance selected should be multifaceted so that all the dancers can demonstrate technical skills, complex movements, and a dynamic range (Items 1-2). Students will videotape their rehearsals and document how they improved through self-evaluation (Item 3). Each group will view and critique the final performance of one other group in class (Item 4). Eric would need to scaffold most steps in this performance assessment. The groups probably would need guidance in selecting a dance that allowed all the dancers to demonstrate the appropriate skills, critiquing their own performances constructively, working effectively as a team, and applying criteria to evaluate a dance.
- Another disadvantage of performance assessments is they are hard to assess reliably which can lead to inaccuracy and unfair evaluation. As with any constructed response assessment, scoring rubrics are very important.
An example of holistic and analytic scoring rubrics designed to assess a completed product are in Exhibit 4 and Table 4. A rubric designed to assess the process of group interactions is in Table 5.
This rubric was devised for middle grade science, but could be used in other subject areas when assessing group process. In some performance assessments, several scoring rubrics should be used. In the dance performance example above, Eric should have scoring rubrics for the performance skills, the improvement based on self-evaluation, the teamwork, and the critique of the other group.
Obviously, devising a good performance assessment is complex and Linn and Miller (2005) recommend that teachers should:
- Create performance assessments that require students to use complex cognitive skills. Sometimes teachers devise assessments that are interesting and that the students enjoy but do not require students to use higher level cognitive skills that lead to significant learning. Focusing on high level skills and learning outcomes is particularly important because performance assessments are typically so time consuming.
- Ensure that the task is clear to the students. Performance assessments typically require multiple steps so students need to have the necessary prerequisite skills and knowledge as well as clear directions. Careful scaffolding is important for successful performance assessments.
- Specify expectations of the performance clearly by providing students scoring rubrics during the instruction. This not only helps students understand what is expected, but it also ensures that teachers are clear about what they expect. Thinking this through while planning the performance assessment can be difficult for teachers, but is crucial as it typically leads to revisions of the actual assessment and directions provided to students.
- Reduce the importance of unessential skills in completing the task. What skills are essential depends on the purpose of the task. For example, for a science report, is the use of publishing software essential? If the purpose of the assessment is for students to demonstrate the process of the scientific method including writing a report, then the format of the report may not be significant. However, if the purpose includes integrating two subject areas, science and technology, then the use of publishing software is important. Because performance assessments take time it is tempting to include multiple skills without carefully considering if all the skills are essential to the learning goals.
Portfolios
“A portfolio is a meaningful collection of student work that tells the story of student achievement or growth” (Arter, Spandel, & Culham, 1995, p. 2).
Portfolios are a purposeful collection of student work not just folders of all the work a student does.
Portfolios are used for a variety of purposes and developing a portfolio system can be confusing and stressful unless the teachers are clear on their purpose.
The varied purposes can be illustrated as four dimensions (Linn & Miller 2005):
When the primary purpose is assessment for learning, the emphasis is on student self-reflection and responsibility for learning.
Students not only select samples of their work they wish to include, but also reflect on and interpret their own work. Portfolios containing this information can be used to aid communication as students can present and explain their work to their teachers and parents (Stiggins, 2005). Portfolios focusing on assessment of learning contain students’ work samples that certify accomplishments for a classroom grade, graduation, state requirements, etc.
Typically, students have less choice in the work contained in such portfolios as some consistency is needed for this type of assessment. For example, the writing portfolios that fourth and seventh graders are required to submit in Kentucky must contain a self-reflective statement and an example of three pieces of writing (reflective, personal experience or literary, and transactive). Students do choose which of their pieces of writing in each type to include in the portfolio.
Portfolios can be designed to focus on student progress or current accomplishments.
For example, audio tapes of English language learners speaking could be collected over one year to demonstrate growth in learning. Student progress portfolios may also contain multiple versions of a single piece of work. For example, a writing project may contain notes on the original idea, outline, first draft, comments on the first draft by peers or teacher, second draft, and the final finished product (Linn & Miller, 2005). If the focus is on current accomplishments, only recently completed work samples are included.
Portfolios can focus on documenting student activities or highlighting important accomplishments.
Documentation portfolios are inclusive, containing all the work samples rather than focusing on one special strength, best work, or progress. In contrast, showcase portfolios focus on best work. The best work is typically identified by students. One aim of such portfolios is that students learn how to identify products that demonstrate what they know and can do. Students are not expected to identify their best work in isolation but also use the feedback from their teachers and peers.
A final distinction can be made between a finished portfolio, perhaps used for a job application, and a working portfolio that typically includes day-to-day work samples.
Working portfolios evolve over time and are not intended to be used for assessment of learning. The focus in a working portfolio is on developing ideas and skills so students should be allowed to make mistakes, freely comment on their own work, and respond to teacher feedback (Linn & Miller, 2005). Finished portfolios are designed for use with a particular audience and the products selected may be drawn from a working portfolio. For example, in a teacher education program, the working portfolio may contain work samples from all the courses taken. A student may develop one finished portfolio to demonstrate she has mastered the required competencies in the teacher education program and a second finished portfolio for her job application.
Portfolios used well in classrooms have several advantages. They provide a way of documenting and evaluating growth in a much more nuanced way than selected response tests can. Also, portfolios can be integrated easily into instruction, i.e. used for assessment for learning. Portfolios also encourage student self-evaluation and reflection, as well as ownership of learning (Popham, 2005). Using classroom assessment to promote student motivation is an important component of assessment for learning, which is considered in the next section.
However, there are some major disadvantages of portfolio use. First, good portfolio assessment takes an enormous amount of teacher time and organization. Time is needed to help students understand the purpose and structure of the portfolio, decide which work samples to collect, and self-reflect. Some of this time needs to be spent in one-to-one conferences. Reviewing and evaluating the portfolios outside class time is also enormously time consuming. Teachers have to weigh whether the time spent is worth the benefits of portfolio use.
Second, evaluating portfolios reliably and eliminating bias can be even more difficult than in a constructed response assessment because the products are more varied. The experience of the statewide use of portfolios for assessment in writing and mathematics for fourth and eighth graders in Vermont is sobering. Teachers used the same analytic scoring rubric when evaluating the portfolios. In the first two years of implementation, samples from schools were collected and scored by an external panel of teachers. In the first year the agreement among raters (i.e. inter-rater reliability) was poor for both mathematics and writing; in the second year the agreement among raters improved for mathematics but not for writing. However, even with the improvement in mathematics, the reliability was too low to use the portfolios for individual student accountability (Koretz, Stecher, Klein, & McCaffrey, 1994).
- When reliability is low, validity is also compromised because unstable results cannot be interpreted meaningfully.
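Inter-rater agreement of the kind reported in the Vermont study can be quantified. Below is a minimal sketch (the rubric scores are invented for illustration, not taken from the study) that computes Cohen's kappa, a common chance-corrected agreement statistic, for two raters scoring the same set of portfolios on a 1-4 rubric.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of portfolios given the same score.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters scored independently,
    # each at their own base rates for every rubric level.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical rubric scores (1-4) given to ten portfolios by two raters.
rater_1 = [1, 2, 2, 3, 4, 2, 3, 1, 4, 2]
rater_2 = [1, 2, 3, 3, 4, 2, 2, 1, 4, 3]
print(round(cohens_kappa(rater_1, rater_2), 2))  # kappa of about 0.59
```

Values near 1.0 indicate strong agreement; values near 0 mean the raters agree no more often than chance, which is the situation that makes portfolio scores unusable for individual accountability.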
If teachers do use portfolios in their classroom, the series of steps needed for implementation is outlined in Table 36. If the school or district has an existing portfolio system, these steps may have to be modified.
Steps in implementing a classroom portfolio program
1. Make sure students own their portfolios.
- Talk to your students about your ideas of the portfolio, the different purposes, and the variety of work samples. If possible, have them help make decisions about the kind of portfolio you implement.
2. Decide on the purpose.
- Will the focus be on growth or current accomplishments? Best work showcase or documentation? Good portfolios can have multiple purposes, but the teacher and students need to be clear about the purpose.
3. Decide what work samples to collect.
- For example, in writing, is every writing assignment included? Are early drafts as well as final products included?
4. Collect and store work samples.
- Decide where the work samples will be stored. For example, will each student have a file folder in a file cabinet, or a small plastic tub on a shelf in the classroom?
5. Select criteria to evaluate samples.
- If possible, work with students to develop scoring rubrics. This may take considerable time, as different rubrics may be needed for the variety of work samples. If you are using existing scoring rubrics, discuss possible modifications with students after the rubrics have been used at least once.
6. Teach and require students to conduct self-evaluations of their own work.
- Help students learn to evaluate their own work using agreed upon criteria. For younger students, the self-evaluations may be simple (strengths, weaknesses, and ways to improve); for older students, a more analytic approach is desirable including using the same scoring rubrics that the teachers will use.
7. Schedule and conduct portfolio conferences.
- Teacher-student conferences are time consuming, but they are essential for the portfolio process to significantly enhance learning. These conferences should aid students’ self-evaluation and should take place frequently.
8. Involve parents.
- Parents need to understand the portfolio process. Encourage parents to review the work samples. You may wish to schedule parent-teacher-student conferences in which students talk about their work samples.
Source: Adapted from Popham (2005)
Assessment that enhances motivation and student confidence
Studies on testing and learning conducted more than 20 years ago demonstrated that tests promote learning and that more frequent tests are more effective than less frequent ones (Dempster & Perkins, 1993). Frequent smaller tests encourage continuous effort rather than last-minute cramming and may also reduce test anxiety because the consequences of errors are reduced. College students report preferring more frequent testing to infrequent testing (Bangert-Drowns, Kulik, & Kulik, 1991).
More recent research indicates that teachers’ assessment purposes and beliefs, the type of assessment selected, and the feedback given all contribute to the assessment climate in the classroom, which in turn influences students’ confidence and motivation. The use of self-assessment is also important in establishing a positive assessment climate.
Teachers’ purposes and beliefs
Student motivation can be enhanced when the purpose of assessment is promoting student learning and this is clearly communicated to students by what teachers say and do (Harlen, 2006). This approach to assessment is associated with what the psychologist Carol Dweck (2000) calls an incremental view of ability or intelligence. An incremental view assumes that ability increases whenever an individual learns more. This means that effort is valued, because effort leads to knowing more and therefore having more ability. Individuals with an incremental view also ask for help when needed and respond well to constructive feedback, as their primary goal is increased learning and mastery.
In contrast, a fixed view of ability assumes that some people have more ability than others and nothing much can be done to change that. Individuals with a fixed view of ability often view effort in opposition to ability (“Smart people don’t have to study”) and so do not try as hard, and are less likely to ask for help as that indicates that they are not smart. While there are individual differences in students’ beliefs about their views of intelligence, teachers’ beliefs and classroom practices influence students’ perceptions and behaviors.
Teachers with an incremental view of intelligence communicate to students that the goal of learning is mastering the material and figuring things out. Assessment is used by these teachers to understand what students know so they can decide whether to move to the next topic, re-teach the entire class, or provide remediation for a few students. Assessment also helps students understand their own learning and demonstrate their competence. Teachers with these views say things like, “We are going to practice over and over again. That’s how you get good. And you’re going to make mistakes. That’s how you learn.” (Patrick, Anderman, Ryan, Edelin, & Midgley, 2001, p. 45).
In contrast, teachers with a fixed view of ability (a fixed mindset) are more likely to believe that the goal of learning is doing well on tests, especially outperforming others. These teachers are more likely to say things that imply fixed abilities, e.g. “This test will determine what your math abilities are,” or to stress the importance of interpersonal competition: “We will have a speech competition and the top person will compete against all the other district schools; last year the winner got a big award and their photo in the paper.”
When teachers stress interpersonal competition, some students may be motivated, but there can be only a few winners, so there are many more students who know they have no chance of winning. Another problem with interpersonal competition in assessment is that the focus can become winning rather than understanding the material.
Teachers who communicate to their students that ability is incremental and that the goal of assessment is promoting learning, rather than ranking students, awarding prizes to those who did very well, or catching those who did not pay attention, are likely to enhance students’ motivation.
Choosing assessments
The choice of assessment task also influences students’ motivation and confidence. First, assessments that have clear criteria that students understand and can meet, rather than assessments that pit students against each other in interpersonal competition, enhance motivation (Black, Harrison, Lee, Marshall, & Wiliam, 2004). This is consistent with the point we made in the previous section about the importance of focusing on enhancing learning for all students rather than ranking students.
Second, meaningful assessment tasks enhance student motivation. Students often want to know why they have to do something and teachers need to provide meaningful answers. For example, a teacher might say, “You need to be able to calculate the area of a rectangle because if you want new carpet you need to know how much carpet is needed and how much it would cost.” Well-designed performance tasks are often more meaningful to students than selected response tests, so students will work harder to prepare for them.
Third, providing choices of assessment tasks can enhance students’ sense of autonomy and motivation, according to self-determination theory. Kym, the sixth-grade teacher whose story began this chapter, reports that giving students choices was very helpful. Another middle school social studies teacher, Aaron, gives his students a choice of performance tasks at the end of the unit on the US Bill of Rights. Students have to demonstrate specified key ideas, but can do that by making up a board game, presenting a brief play, composing a rap song, etc.
Aaron reports that students work much harder on this performance assessment, which allows them to use their strengths, than they did previously when he gave a more traditional assignment without any choices. Measurement experts caution that a danger of giving choices is that the assessment tasks are no longer equivalent, so the reliability of scoring is reduced; it is therefore particularly important to use well-designed scoring rubrics. Fourth, assessment tasks should be challenging but achievable with reasonable effort (Elliott, McGregor, & Thrash, 2004). This is often hard for beginning teachers, who may give assessment tasks that are too easy or too hard, because they have to learn to match their assessments to the skills of their students.
Providing feedback
When the goal is assessment for learning, providing constructive feedback that helps students know what they do and do not understand, as well as encouraging them to learn from their errors, is fundamental. Effective feedback should be given as soon as possible: the longer the delay between students’ work and the feedback, the longer students will continue to hold misconceptions. Delays also weaken the relationship between students’ performance and the feedback, as students can forget what they were thinking during the assessment.
Effective feedback should also inform students clearly what they did well and what needs modification. General comments such as “good work, A” or “needs improvement” do not help students understand how to improve their learning. Giving feedback using well-designed scoring rubrics helps communicate strengths and weaknesses clearly.
Obviously, grades are often needed, but teachers can minimize their prominence by placing the grade after the comments or on the last page of a paper. It can also be helpful to allow students to keep their grades private: make sure when returning assignments that the grade is not prominent (e.g. not written in red ink on the top page), and never ask students to read their scores aloud in class. Some students choose to share their grades, but that should be their decision, not their teacher’s.
When grading, teachers often become angry at the mistakes that students make. It is easy for teachers to think something like: “With all the effort I put into teaching, this student could not even be bothered to follow the directions or spell check!” Many experienced teachers believe that communicating their anger is not helpful, so rather than saying: “How dare you turn in such shoddy work”, they rephrase it as, “I am disappointed that your work on this assignment does not meet the standards set” (Sutton, 2003). Research evidence also suggests that comments such as “You are so smart” for a high-quality performance can be counterproductive.
This is surprising to many teachers, but if students are told they are smart when they produce a good product, then if they do poorly on the next assignment the conclusion must be they are “not smart” (Dweck, 2000). More effective feedback focuses on positive aspects of the task (not the person), as well as strategies, and effort. The focus of the feedback should relate to the criteria set by the teacher and how improvements can be made.
When the teacher and student are from different racial/ethnic backgrounds, providing feedback that enhances motivation and confidence but also includes criticism can be particularly challenging, because students of color may have historical reasons to distrust negative comments from a white teacher. Research by Cohen, Steele, and Ross (1999) indicates that “wise” feedback from teachers needs three components: positive comments, criticism, and an assurance that the teacher believes the student can reach higher standards. We describe this research in more detail in “Deciding for yourself about the research” found in Appendix #2.
Self and peer assessment
In order to reach a learning goal, students need to understand the meaning of the goal, the steps necessary to achieve it, and whether they are making satisfactory progress toward it (Sadler, 1989). This involves self-assessment, and recent research has demonstrated that well-designed self-assessment can enhance student learning and motivation (Black & Wiliam, 2006). For self-assessment to be effective, students need explicit criteria, such as those in an analytical scoring rubric. These criteria are either provided by the teacher or developed by the teacher in collaboration with students. Because students seem to find it easier to understand criteria for assessment tasks if they can examine other students’ work alongside their own, self-assessment often involves peer assessment.
An example of a strategy used by teachers involves asking students to use “traffic lights” to indicate their confidence in their assignment or homework. Red indicates that they were unsure of their success, orange that they were partially unsure, and green that they were confident of their success. The students who labeled their own work orange or green worked in mixed groups to evaluate their own work, while the teacher worked with the students who had chosen red (Black & Wiliam, 2006).
If self and peer assessment is used, it is particularly important that the teachers establish a classroom culture for assessment that is based on incremental views of ability and learning goals. If the classroom atmosphere focuses on interpersonal competition, students have incentives in self and peer assessment to inflate their own evaluations (and perhaps those of their friends) because there are limited rewards for good work.
Adjusting instruction based on assessment
Using assessment information to adjust instruction is fundamental to the concept of assessment for learning. Teachers make these adjustments “in the moment” during classroom instruction as well as during reflection and planning periods. Teachers use the information they gain from questioning and observation to adjust their teaching during classroom instruction.
If students cannot answer a question, the teacher may need to rephrase the question, probe understanding of prior knowledge, or change the way the current idea is being considered. It is important for teachers to learn to identify when only one or two students need individual help because they are struggling with the concept, and when a large proportion of the class is struggling so whole group intervention is needed.
After the class is over, effective teachers spend time analyzing how well the lessons went, what students did and did not seem to understand, and what needs to be done the next day. Evaluation of student work also provides important information for teachers.
If many students are confused about a similar concept the teacher needs to reteach it and consider new ways of helping students understand the topic. If the majority of students complete the tasks very quickly and well, the teacher might decide that the assessment was not challenging enough. Sometimes teachers become dissatisfied with the kinds of assessments they have assigned when they are grading—perhaps because they realize there was too much emphasis on lower level learning, that the directions were not clear enough, or the scoring rubric needed modification.
Teachers who believe that assessment data provides information about their own teaching and that they can find ways to influence student learning have high teacher efficacy or beliefs that they can make a difference in students’ lives. In contrast, teachers who think that student performance is mostly due to fixed student characteristics or the homes they come from (e.g. “no wonder she did so poorly considering what her home life is like”) have low teacher efficacy (Tschannen-Moran, Woolfolk Hoy, & Hoy, 1998).
Communication with parents and guardians
Clear communication with parents about classroom assessment is important—but often difficult for beginning teachers. The same skills that are needed to communicate effectively with students are also needed when communicating with parents and guardians. Teachers need to be able to explain to parents the purpose of the assessment, why they selected this assessment technique, and what the criteria for success are. Some teachers send home newsletters monthly or at the beginning of a major assessment task explaining the purpose and nature of the task, any additional support that is needed (e.g. materials, library visits), and due dates. Some parents will not be familiar with performance assessments or the use of self and peer assessment so teachers need to take time to explain these approaches carefully.
Many school districts now communicate through websites that offer a mixture of public information available to all parents in the class (e.g. curriculum and assessment details) as well as information restricted to the parents or guardians of specific students (e.g. attendance and grades). Teachers report this is helpful, as parents have immediate access to their child’s performance and, when necessary, can talk to their child and teacher quickly.
The recommendations we provided above on the type of feedback that should be given to students also apply when talking to parents. That is, the focus should be on students’ performance on the task, what was done well and what needs work, rather than general comments about how “smart” or “weak” the child is. If possible, comments should focus on strategies that the child uses well or needs to improve (e.g. reading test questions carefully, organization in a large project). When the teacher is white and the student or parents are from a minority background, trust can be an issue, so using “wise” feedback when talking to parents may help.
Action research: studying yourself and your students
Assessment for learning emphasizes devising assessments and using the resulting data to improve teaching and learning, and so is related to action research (also called teacher research). Action research can lead to decisions that improve a teacher’s own teaching or the teaching of colleagues. Kym, the teacher we described at the beginning of this chapter, conducted action research in her own classroom: she identified a problem of poor student motivation and achievement, investigated solutions during the course on motivation, tried new approaches, and observed the results.
Cycles of planning, acting and reflecting
Action research is usually described as a cyclical process with the following stages (Mertler, 2006).
- Planning stage. Planning has three components. First, planning involves identifying and defining a problem. Problems sometimes start with an ill-defined unease or feeling that something is wrong, and it can take time to identify the problem clearly so that it becomes a researchable question. The next step is reviewing the related literature, and this may occur within a class or workshop that the teachers are attending. Teachers may also explore the literature on their own or in teacher study groups. The third step is developing a research plan. The research plan includes what kind of data will be collected (e.g. student test scores, observations of one or more students) as well as how and when it will be collected (e.g. from files, in collaboration with colleagues, in the spring or fall semester).
- Acting stage. During this stage, the teacher collects and analyzes data. The data collected and the analyses do not need to be complex because action research, to be effective, has to be manageable.
- Developing an action plan. In this stage, the teacher develops a plan to make changes and implements these changes. This is the action component of action research and it is important that teachers document their actions carefully so that they can communicate them to others.
- Communicating and reflecting. An important component of all research is communicating information. Results can be shared with colleagues in the school or district, in an action research class at the local college, at conferences, or in journals for teachers. Action research can also involve students as active participants, and if this is the case, communication may include students and parents. Communicating with others helps refine ideas and so typically aids reflection. During reflection teachers/researchers ask such questions as: “What did I learn?” “What should I have done differently?” “What should I do next?” Questions such as these often lead to a new cycle of action research, beginning with planning and then moving through the other steps.

Ethical issues—privacy, voluntary consent
Teachers are accustomed to collecting students’ test scores, data about performances, and descriptions of behaviors as an essential component of teaching. However, if teachers are conducting action research and they plan to collect data that will be shared outside the school community then permission from parents (or guardians) and students must be obtained in order to protect the privacy of students and their families. Typically, permission is obtained by an informed consent form that summarizes the research, describes the data that will be collected, indicates that participation is voluntary, and provides a guarantee of confidentiality or anonymity (Hubbard & Power, 2005).
Many large school districts have procedures for establishing informed consent, as well as a person in the central office who is responsible for the district guidelines and the specific application process. If the action research is supported in some way by a college or university (e.g. through a class), then the informed consent procedures of that institution must be followed.
One common area of confusion for teachers is the voluntary nature of student participation in research. If the data being collected are for a research study, students can choose not to participate. This is contrary to much regular classroom instruction where teachers tell students they have to do the work or complete the tasks.

Grading and reporting
Assigning students grades is an important component of teaching and many school districts issue progress reports, interim reports, or midterm grades as well as final semester grades. Traditionally, these reports were printed on paper and sent home with students or mailed to students’ homes. Increasingly, school districts are using web-based grade management systems that allow parents to access their child’s grades on each individual assessment as well as the progress reports and final grades.
Grading can be frustrating for teachers as there are many factors to consider. In addition, report cards typically summarize in brief format a variety of assessments and so cannot provide much information about students’ strengths and weaknesses. This means that report cards focus more on assessment of learning than assessment for learning. There are a number of decisions that have to be made when assigning students’ grades and schools often have detailed policies that teachers have to follow. In the next section, we consider the major questions associated with grading.
How are various assignments and assessments weighted?
Students typically complete a variety of assignments during a grading period, such as homework, quizzes, performance assessments, etc. Teachers have to decide—preferably before the grading period begins—how each assignment will be weighted. For example, a sixth-grade math teacher may decide to weight the grades in the following manner:
Deciding how to weight assignments should be done carefully, as it communicates to students and parents what teachers believe is important and may also influence how much effort students exert (e.g. “If homework is only worth 5 per cent, it is not worth completing twice a week”).
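A weighting scheme like the one the sixth-grade math teacher might use amounts to a weighted average of category scores. The sketch below is purely illustrative: the category names and percentages are hypothetical assumptions, not the weights from the text (the example table itself is not reproduced here).

```python
# Hypothetical weights -- chosen for illustration only; they must sum to 100%.
weights = {
    "homework": 0.15,
    "quizzes": 0.25,
    "performance_tasks": 0.35,
    "final_exam": 0.25,
}

def weighted_grade(scores, weights):
    """Combine category averages (each on a 0-100 scale) into one final grade."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(scores[category] * w for category, w in weights.items())

# A hypothetical student's category averages for the grading period.
scores = {"homework": 90, "quizzes": 82, "performance_tasks": 88, "final_exam": 75}
print(round(weighted_grade(scores, weights), 2))
```

Note how the choice of weights changes the outcome: raising the homework weight rewards steady daily effort, while raising the exam weight rewards a single high-stakes performance, which is exactly the signal the text says weighting sends to students.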
Should social skills or effort be included?
Elementary school teachers are more likely than middle or high school teachers to include some social skills into report cards (Popham, 2005). These may be included as separate criteria in the report card or weighted into the grade for that subject. For example, the grade for mathematics may include an assessment of group cooperation or self-regulation during mathematics lessons. Some schools and teachers endorse including social skills arguing that developing such skills is important for young students and that students need to learn to work with others and manage their own behaviors in order to be successful. Others believe that grades in subject areas should be based on the cognitive performances—and that if assessments of social skills are made they should be clearly separated from the subject grade on the report card. Obviously, clear criteria such as those contained in analytical scoring rubrics should be used if social skills are graded.
Teachers often find it difficult to decide whether effort and improvement should be included as a component of grades. One approach is for teachers to ask students to submit drafts of an assignment and make improvements based on the feedback they received. The grade for the assignment may include some combination of the score for the drafts, the final version, and the amount of improvement the students made based on the feedback provided.
A more controversial approach is basing grades on effort when students try hard day after day but still cannot complete their assignments well. These students could have identified special needs or be recent immigrants who have limited English skills. Some school districts have guidelines for handling such cases. One disadvantage of using improvement as a component of grades is that the most competent students in the class may do very well initially and have little room for improvement, unless teachers are skilled at providing additional assignments that will challenge these students.
Teachers often use “hodgepodge grading”, i.e. a combination of achievement, effort, growth, attitude or class conduct, homework, and class participation. A survey of over 8,500 middle and high school students in the US state of Virginia supported the hodgepodge practices commonly used by their teachers (Cross & Frary, 1999).
How should grades be calculated?
Two options are commonly used: absolute grading and relative grading. In absolute grading, grades are assigned based on criteria the teacher has devised. If an English teacher has established a level of proficiency needed to obtain an A and no student meets that level, then no A’s will be given. Alternatively, if every student meets the established level, then all the students will get A’s (Popham, 2005). Absolute grading systems may use letter grades or pass/fail.
In relative grading, the teacher ranks the performances of students from worst to best (or best to worst) and those at the top get high grades, those in the middle moderate grades, and those at the bottom low grades. This is often described as “grading on the curve” and can be useful to compensate for an examination or assignment that students find much easier or harder than the teacher expected.
However, relative grading can be unfair to students because the comparisons are typically within one class, so an A in one class may not represent the level of performance of an A in another class. Relative grading systems may discourage students from helping each other improve as students are in competition for limited rewards. In fact, Bishop (1999) argues that grading on the curve gives students a personal interest in persuading each other not to study as a serious student makes it more difficult for others to get good grades.
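The contrast between the two systems can be made concrete with a small sketch. Everything here is an illustrative assumption, not a standard from the text: the percentage cutoffs, the equal-sized grade bands for the curve, and the sample scores.

```python
def absolute_grades(scores, cutoffs=((90, "A"), (80, "B"), (70, "C"), (60, "D"))):
    """Grade against fixed criteria: every student who meets a cutoff earns that grade."""
    return [next((g for cut, g in cutoffs if s >= cut), "F") for s in scores]

def relative_grades(scores, quota=("A", "B", "C", "D", "F")):
    """'Grading on the curve': rank students and spread a fixed set of grades over them."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    grades = [None] * len(scores)
    for rank, i in enumerate(ranked):
        # Slice the ranking into as many equal bands as there are grades in the quota.
        band = rank * len(quota) // len(scores)
        grades[i] = quota[band]
    return grades

scores = [95, 91, 88, 87, 60]
print(absolute_grades(scores))  # ['A', 'A', 'B', 'B', 'D']
print(relative_grades(scores))  # ['A', 'B', 'C', 'D', 'F']
```

The sample run shows the unfairness the text describes: under absolute grading the student scoring 87 earns a B, but under the curve the same score lands in the fourth band and earns a D, simply because classmates scored slightly higher.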
What kinds of grade descriptions should be used?
Traditionally, a letter grade system (e.g. A, B, C, D, F) is used for each subject. The advantages of these grade descriptions are that they are convenient, simple, and can be averaged easily. However, they do not indicate what objectives the student has or has not met, nor students’ specific strengths and weaknesses (Linn & Miller, 2005). Elementary schools often use a pass-fail (or satisfactory-unsatisfactory) system, and some high schools and colleges do as well. Pass-fail systems in high school and college allow students to explore new areas and take risks on subjects for which they may have limited preparation, or that are not part of their major (Linn & Miller, 2005). While a pass-fail system is easy to use, it offers even less information about students’ level of learning.
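The claim that letter grades "can be averaged easily" rests on mapping each letter to a number first. A minimal sketch, using the common 4-point convention (the exact mapping varies by school and is an assumption here, not something specified in the text):

```python
# Hypothetical 4-point scale; schools vary in the exact mapping they use.
POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def grade_point_average(letter_grades):
    """Average letter grades by first mapping each letter to a numeric value."""
    return sum(POINTS[g] for g in letter_grades) / len(letter_grades)

print(grade_point_average(["A", "B", "B", "C"]))  # (4 + 3 + 3 + 2) / 4 = 3.0
```

The same ease of averaging is what a pass-fail system gives up: "pass" carries no numeric value, so no comparable summary of the level of learning can be computed.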
A pass-fail system is also used in classes that are taught under a mastery-learning approach in which students are expected to demonstrate mastery on all the objectives in order to receive course credit. Under these conditions, it is clear that a pass means that the student has demonstrated mastery of all the objectives.
Some schools have implemented a checklist of objectives in subject areas to replace the traditional letter grade system; students are rated on each objective using descriptors such as Proficient, Partially Proficient, and Needs Improvement. For example, the checklist for students in a fourth-grade class in California may include the four types of writing that are required by the English language state content standards:
- writing narratives
- writing responses to literature
- writing information reports
- writing summaries
The advantages of this approach are that it communicates students’ strengths and weaknesses clearly and reminds students and parents of the school’s objectives. However, if too many objectives are included, the lists can become so long that they are difficult to understand.
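A checklist of objectives like the fourth-grade example above amounts to a simple mapping from objectives to descriptors. The sketch below is a hypothetical illustration; the `checklist_report` helper and its formatting are assumptions, not part of any school’s system.

```python
# Hypothetical sketch of a standards-based checklist report: each
# objective is rated with a descriptor instead of being folded into
# one letter grade. Objective names follow the fourth-grade writing
# example above; the function and layout are illustrative.

DESCRIPTORS = ("Needs Improvement", "Partially Proficient", "Proficient")

def checklist_report(ratings):
    """Render one student's ratings, validating each descriptor."""
    lines = []
    for objective, rating in ratings.items():
        if rating not in DESCRIPTORS:
            raise ValueError(f"unknown descriptor: {rating!r}")
        lines.append(f"{objective:<35} {rating}")
    return "\n".join(lines)

print(checklist_report({
    "Writing narratives": "Proficient",
    "Writing responses to literature": "Partially Proficient",
    "Writing information reports": "Proficient",
    "Writing summaries": "Needs Improvement",
}))
```

Unlike a single letter grade, the report shows exactly which objectives a student has and has not met, at the cost of a longer document as objectives accumulate.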
Chapter summary
The purpose of classroom assessment can be assessment for learning or assessment of learning. Essential steps of assessment for learning include communicating instructional goals clearly to students; selecting appropriate, high-quality assessments that match the instructional goals and students’ backgrounds; using assessments that enhance student motivation and confidence; adjusting instruction based on assessment; and communicating assessment results to parents and guardians. Action research can help teachers understand and improve their teaching. A number of questions are important to consider when devising grading systems.
References:
Seifert, K. and Sutton, R. (2009). Educational Psychology. Saylor Foundation. ( Chapter 11) Retrieved from http://home.cc.umanitoba.ca/~seifert/EdPsy2009.pdf (CC BY)
[Dynamic Learning Maps]. (2013, Nov. 26). The DLM System. [Video File]. Retrieved from https://youtu.be/Ltr6SV8zbn0
Ch. 15 Teacher made assessment strategies by Kevin Seifert and Rosemary Sutton is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.
Assessment and Curriculum Support Center
Using portfolios in program assessment.
On this page:
- What is a portfolio?
- Portfolios as a data-collection method for assessment
- Advantages and disadvantages
- Creating and designing portfolios
- Questions to ask before adopting portfolios
- E-Portfolios
- Links: universities implementing portfolios; online portfolios
- E-portfolio software and review
1. What is a portfolio?
A portfolio is a systematic collection of student work that represents student activities, accomplishments, and achievements over a specific period of time in one or more areas of the curriculum. There are two main types of portfolios:
- Showcase portfolios: Students select and submit their best work. The showcase portfolio emphasizes the products of learning.
- Developmental portfolios: Students select and submit pieces of work that show evidence of growth or change over time. The developmental portfolio emphasizes the process of learning.
Students’ reflective essay: In both types of portfolio, students write a reflective essay or introductory memo to the faculty/assessment committee that explains the work and reflects on how the collection demonstrates their accomplishments, why they selected the particular examples, and/or how their knowledge/ability/attitude changed.
2. Portfolios as a data-collection method for assessment
Portfolios can be created for course assessment as well as program assessment. Although the content may be similar, the assessment process is different.
3. Advantages and disadvantages
Advantages of a portfolio
- Enables faculty to assess a set of complex tasks, including interdisciplinary learning and capabilities, with examples of different types of student work.
- Helps faculty identify curriculum gaps and a lack of alignment with outcomes.
- Promotes faculty discussions on student learning, curriculum, pedagogy, and student support services.
- Encourages student reflection on their learning. Students may come to understand what they have and have not learned.
- Provides students with documentation for job applications or applications to graduate school.
Disadvantages of a portfolio
- Requires faculty time to prepare the portfolio assignment and to assist students as they prepare their portfolios. Logistics are challenging.
- Students must retain and compile their own work, usually outside of class. Motivating students to take the portfolio seriously may be difficult.
- Transfer students may have difficulties meeting program-portfolio requirements.
- Storage demands can be overwhelming (one reason why e-portfolios are often chosen).
4. Using portfolios in assessment
TIP: START SMALL.

Showcase portfolio: Consider starting with one assignment plus a reflective essay from a senior-level course as a pilot project. A faculty group evaluates the “mini-portfolios” using a rubric. Use the results from the pilot project to guide faculty decisions on adding to or modifying the portfolio process.

Developmental portfolio: Consider starting by giving a similar assignment in two sequential courses: e.g., students write a case study in a 300-level course and again in a 400-level course. In the 400-level course, students also write a reflection based on their comparison of the two case studies. A faculty group evaluates the “mini-portfolios” using a rubric. Use the results to guide the faculty members as they modify the portfolio process.
Suggested steps:
- Determine the purpose of the portfolio. Decide how the results of a portfolio evaluation will be used to inform the program.
- Identify the learning outcomes the portfolio will address. Tip: Identify at least 6 course assignments that are aligned with the outcomes the portfolio will address. Note: When planning to implement a portfolio requirement, the program may need to modify activities or outcomes in courses, the program, or the institution.
- Decide what students will include in their portfolio. Portfolios can contain a range of items: plans, reports, essays, résumés, checklists, self-assessments, references from employers or supervisors, and audio and video clips. In a showcase portfolio, students include work completed near the end of their program. In a developmental portfolio, students include work completed early and late in the program so that development can be judged. Tip: Limit the portfolio to 3-4 pieces of student work and one reflective essay/memo.
- Identify or develop the scoring criteria (e.g., a rubric) to judge the quality of the portfolio. Tip: Include the scoring rubric with the instructions given to students (#6 below).
- Establish standards of performance and examples (e.g., examples of a high, medium, and low scoring portfolio).
- Create student instructions that specify how students collect, select, reflect, format, and submit. Tip: Emphasize to students the purpose of the portfolio and that it is their responsibility to select items that clearly demonstrate mastery of the learning outcomes. Emphasize to faculty that it is their responsibility to help students by explicitly tying course assignments to portfolio requirements.
  - Collect – Tell students where in the curriculum or co-curricular activities they will produce evidence related to the outcomes being assessed.
  - Select – Ask students to select the evidence. Instruct students to label each piece of evidence according to the learning outcome being demonstrated.
  - Reflect – Give students directions on how to write a one- or two-page reflective essay/memo that explains why they selected the particular examples, how the pieces demonstrate their achievement of the program outcomes, and/or how their knowledge/ability/attitude changed.
  - Format – Tell students the format requirements (e.g., type of binder, font and style guide requirements, online submission requirements).
  - Submit – Give submission (and pickup) dates and instructions.
- A faculty group scores the portfolios using the scoring criteria. Use examples of the standards of performance to ensure consistency across scoring sessions and readers. Tip: In large programs, select a random sample of portfolios to score (i.e., do not score every portfolio).
- Share the results and use them to improve the program.
5. Questions to consider before adopting a portfolio requirement
- What is the purpose of the portfolio requirement? To document student learning? Demonstrate student development? Learn about students’ reflections on their learning? Create a document useful to students? Help students grow through personal reflection on their personal goals?
- Will portfolios be showcase or developmental?
- When and how will students be told about the requirement, including what materials they need to collect or to produce for it?
- What are the minimum and maximum lengths or sizes for portfolios?
- Who will decide which materials will be included in portfolios: faculty or students?
- What elements will be required in the portfolio: evidence only from courses in the discipline, other types of evidence, evidence directly tied to learning outcomes, previously graded products, or clean copies?
- Will students be graded on the portfolios? If so, how and by whom?
- How will the portfolios be assessed to evaluate and improve the program?
- What can be done for students who have inadequate evidence through no fault of their own? (E.g., transfer students)
- What will motivate students to take the portfolio requirement seriously?
- How will the portfolio be submitted–hard copy or electronic copy?
- Who “owns” the portfolios–students or the program/university? If the program/university owns them, how long will the portfolios be retained after the students graduate?
- Who has access to the portfolios and for what purposes?
- How will student privacy and confidentiality be protected?
6. E-portfolios (electronic portfolios)
Traditional portfolios consist of papers in a folder. Electronic or “e-portfolios” consist of documents stored electronically. Electronic portfolios offer rich possibilities for learning and assessment, with the added dimension of technology. Questions to ask before adopting e-portfolios:
- What about an electronic portfolio is central to the assessment?
- Who is the audience for the portfolio? Will that audience have the hardware, software, skills, time, and inclination to access the portfolio electronically?
- Does the institution have the hardware and software in place to create portfolios electronically? If not, what will it cost and who will install it? Does the institution have the IT/technical staff to support e-portfolios?
- What is the current level of computer skills of the students and faculty members involved in this project? Who will teach them how to use the technology necessary to create and view electronic portfolios?

Advantages of e-portfolios:

- Easy to share with multiple readers simultaneously.
- Allows for asynchronous use for both students and faculty.
- Allows for multi-media product submissions.
- Offers search strategies for easy access to materials.
- Makes updating entries easier.
- Creating navigational links may help students see how their experiences interrelate.
- Provides students the opportunity to improve as well as demonstrate their technology skills.
- Allows faculty to remain in touch with students after graduation if the portfolio can become students’ professional portfolio.

Disadvantages of e-portfolios:

- Time is needed to master the software. Students may not have sufficient computer skills to showcase their work properly.
- Faculty and students may be reluctant to learn a new software program.
- Requires IT expertise and support for both students and faculty.
- Developing an in-house platform or purchasing a commercial product may be expensive.
- Cost associated with maintaining portfolio software. Ongoing support and training are necessary.
- An external audience may not have access to proprietary software. Proprietary software may hinder portability.
- Requires large amounts of computer space.
- Privacy and security concerns: who will have access to the portfolio?
7. Links to universities implementing portfolios
Truman State University: http://assessment.truman.edu/components/portfolio/
Penn State: http://portfolio.psu.edu/
University of Denver: https://portfolio.du.edu/pc/index
8. Electronic portfolio software
Laulima Open Source Portfolio: Laulima has an Open Source Portfolio (OSP) tool option. Contact UH ITS for information about turning on this tool.
List of E-Portfolio Software & Tools: ePortfolio-related Tools and Technologies wiki.
Low-Stakes Assignments
As Vincent Tinto writes in Completing College: Rethinking Institutional Action (2012), "To be effective, assessments must be frequent, early, and formative." At colleges and universities recognized by the National Survey of Student Engagement for their success in promoting students' active engagement in their learning, "Feedback from faculty to students is timely and frequent, as documented both by NSSE data and by interviews with students and faculty members" (Kuh et al., 2010).
Low-stakes assignments tend to work best when they generate formative feedback regarding where students are in the course, what they are doing well, and where they may need development to ultimately succeed in the class. At DePaul, there is a general expectation that students will be given feedback early in the quarter or term, and low-stakes assignments are a powerful method of doing so.
For students in their first year, whether as freshmen or transfers, such feedback is especially important in order to help them understand the expectations at DePaul. Early feedback also enables faculty to use DePaul's BlueStar system to identify students who may need additional assistance and share that information with the students and their academic advisors.
Benefits of Low-Stakes Assignments
Low-stakes assignments
- Give students a realistic idea of their performance early in the term, enabling them to seek appropriate resources as needed
- Open up lines of communication between students and their instructors, and may increase students' willingness to ask for help
- Provide feedback for instructors on how well students are absorbing information and progressing in their skill development
- Allow instructors to direct students to resources if they need further assistance or support
- Give students an opportunity to be active participants in the evaluation of their own learning
- Increase the likelihood that students will attend class and be active and engaged
Low-Stakes Assignments in Online and Flex Modality Courses
While all classes can benefit from frequent low-stakes assignments, they are especially important in online and Flex modalities. In a face-to-face classroom, instructors may assess student learning in informal ways using nonverbal cues—a puzzled look or an affirming nod, for example. In online and Flex courses, instructors may have more difficulty picking up these cues, especially if students don't always have their cameras on or the class doesn't meet synchronously. In Flex classes, it can also be more challenging to pick up on nonverbal cues among in-person students while simultaneously monitoring engagement and participation among remote students.
D2L Tools for Low-Stakes Assignments
D2L tools can help you implement low-stakes assignments in all types of courses and course modalities.
Quizzes
Quizzes are often used for multiple-choice or other objective question types because they can be graded automatically, allowing students to get feedback immediately. For low-stakes assignments, you may want to allow students to review the questions and their answers after they take the quiz. You can set up Submission Views to allow for this. Instructors can also view quiz statistics to see whether the class is struggling with a particular concept and review that topic with them.
Discussions
Discussions allow students to share short pieces of writing or other media with you and their classmates. They work well for sharing rough drafts or other early-stage components of larger projects for peer review or for certain journaling assignments.
Submissions
Submission folders allow students to submit work privately to you. You may be used to using Submissions for major assignments like research papers. For shorter, less formal pieces of writing, you can change the Submission Type to “Text” to allow students to type their response directly into D2L rather than uploading a file from their computer.
Examples of Low-Stakes Assignments
Though given less weight, low-stakes assignments are often similar in kind to high-stakes assignments: they tend to reflect the kind of work students will be expected to do for a final exam, paper, or other summative project.
Breaking Down Larger Assignments
Components of a Larger Project
When assigning students a writing or research project, break down the elements of the project and use one or more as a low-stakes assignment. Require students to submit their works-in-progress so that they can receive early written feedback and a small grade, which could consist simply of a check or check-minus. Any one (or more) of the following elements could be collected and used as a low-stakes assignment:
- Prospectus or proposal
- Thesis statement
- Annotated bibliography
- Specific sections of the final project (e.g., introduction, methods, lit. review)
- Early-stage drafts of a paper
Early Drafts and Peer Review
Midway through a writing project, have students bring a full or partial draft of their paper to class and then exchange feedback with a peer. Students will receive valuable feedback that they can use to revise their work.
Try providing students with a rubric to help them give their peers targeted, assignment-specific feedback. Also, consider inviting Writing Center tutors to your class to model how peer review can be conducted effectively.
Mid-Project/Quarter Conferences
Around midterm or during the planning stages of a major course project, ask students to meet with you for conferences. If the class is too large for you to meet with each student individually, assign students to meet with you in groups, or one-on-one with their TA. The conference time might be dedicated to discussing the students’ progress towards course goals or providing feedback on a particular project.
To get the most out of your conference time, ask students to complete and bring with them a self-assessment form or project proposal. Alternatively, ask students to write down one question they have about the course, content covered in class, or an assignment they are currently working on. Having a document to reference will keep the conversation on track and help put both you and your students at ease.
Group Work Planning and Reflection
When students are working in groups, require them to submit a statement outlining each group member's responsibilities and/or a timeline that tracks their progress. Periodically throughout the group project, ask students to submit short reflections detailing their progress and any areas where they need support.
Weekly Quizzes
For a course where exams are the primary means of summative assessment, give students a quiz at the end of each week with questions based on content covered up to that point. Although the quiz might not count for credit (or might only count for a very small portion of the final grade), it will give students an idea of what they already learned and which concepts they need to spend more time with. Quizzes can be paper-based, conducted during class using Polling or posted online in D2L Quizzes .
Class Problem Solving
In a course where students learn new computational or mathematical concepts, have a problem posted on the board or screen at the start of each class, or in a collaborative document . Students can work individually, in pairs, or in small groups to find the solution. Spend part of class going over the problem as a big group, or review the problem in a short video .
Course Concept Journals
Consider having students keep a regular journal where they can engage with and apply course concepts. For example, in an Introduction to Political Science course, ask students to read the politics section of the New York Times and keep a weekly concept-application journal. For each entry, students should select one article that they read, summarize it, and show how the article demonstrates a theoretical concept discussed that week in class. Review students’ journals every other week or so and give each entry brief feedback.
Reading Journals
In a course where students are required to do weekly readings, assign a reading journal. Entries might require students to summarize and respond to the source, or to answer a set of questions provided by the instructor. Students can submit their journal entries through D2L , by handing in hard copies every other week, or by posting them on a course blog .
Short Writing Assignments
Give students a prompt that corresponds to a class reading assignment, concept, or activity and have them turn in a short written response. Return the students' responses with your brief feedback indicating whether the students are on track (a simple “check plus” or “check minus” could be used). In large classes where you might not have time to give individualized written feedback to each student, share and discuss one or two anonymous student responses at the next class meeting or in a D2L News item or weekly introduction.
Discussion Questions
Post open-ended questions designed to promote engagement with the current course content or an upcoming project or assignment. Invite students to ask questions and identify areas of confusion, and answer the questions posted by their peers. See Facilitating Discussions for additional suggestions.
Support for Low-Stakes Assignments at DePaul
Discuss how you can incorporate writing in a low-stakes assignment by contacting Matthew Pearson , Director of the UCWbL. You can also invite peer tutors to your classroom to demonstrate effective peer review before having your own students give feedback to one another.
If you’re interested in using an element of a research project as a means of giving early feedback to students, connect with a university librarian . University librarians can offer feedback and resources for designing research projects for students.
Have a question about integrating discussion forums in your classes? Not sure which platform might be best for setting up a course blog? Contact the CTL for help identifying and implementing a technology tool to help meet your goals.
Further Reading
Angelo, T., & Cross, K.P. (1993). Classroom assessment techniques . San Francisco: Jossey-Bass, Inc.
Bean, J. C. (1996). Engaging ideas: The professor's guide to integrating writing, critical thinking, and active learning in the classroom. San Francisco: Jossey-Bass Publishers.
Kuh, G. D., Kinzie, J., Schuh, J.S., and Whitt, E.J. (2010). Student success in college: Creating conditions that matter. San Francisco, CA: Jossey-Bass.
Lang, J. M. (2013). Cheating Lessons: Learning from Academic Dishonesty . Cambridge, MA: Harvard University Press.
Nilson, L. B. (2010). Teaching at its best: A research-based resource for college instructors . San Francisco: Jossey-Bass.
Tinto, Vincent. (2012). Completing college: Rethinking institutional action . Chicago, IL: University of Chicago Press.
Warnock, Scott. (2013). Frequent, low-stakes grading: Assessment for communication, confidence . Faculty Focus. Madison, WI: Magna Publications.
Rubrics for Assessment
A rubric is an explicit set of criteria used for assessing a particular type of work or performance (TLT Group, n.d.) and provides more details than a single grade or mark. Rubrics, therefore, will help you grade more objectively.
Have your students ever asked, “Why did you grade me that way?” or stated, “You never told us that we would be graded on grammar!”? As a grading tool, rubrics can address these and other issues related to assessment: they reduce grading time, increase objectivity, convey timely feedback to students, and improve students’ ability to include the required elements of an assignment (Stevens & Levi, 2005). Grading rubrics can be used to assess a range of activities in any subject area.
Elements of a Rubric
Typically designed as a grid-type structure, a grading rubric includes criteria, levels of performance, scores, and descriptors, which together form a unique assessment tool for any given assignment. The table below illustrates a simple grading rubric with each of the four elements for a history research paper.
Criteria
Criteria identify the trait, feature, or dimension to be measured and include a definition and example to clarify the meaning of each trait being assessed. Each assignment or performance will determine the number of criteria to be scored. Criteria can be derived from assignments, checklists, grading sheets, or colleagues.
Examples of Criteria for a term paper rubric
- Introduction
- Arguments/analysis
- Grammar and punctuation
- Internal citations
Levels of performance
Levels of performance are often labeled with adjectives that describe how well the criterion has been met. They determine the degree of performance achieved and provide for consistent, objective assessment and better feedback to students. These levels tell students what they are expected to do. Levels of performance can be used without descriptors, but descriptors help in achieving objectivity. The words used for levels of performance (such as superior, moderate, poor, or above/below average) can influence a student’s interpretation of their performance.
Examples to describe levels of performance
- Excellent, Good, Fair, Poor
- Master, Apprentice, Beginner
- Exemplary, Accomplished, Developing, Beginning, Undeveloped
- Complete, Incomplete
Levels of performance determine the degree of performance which has been met and will provide for consistent and objective assessment and better feedback to students.
Scores
Scores make up the system of numbers or values used to rate each criterion and are often combined with levels of performance. Begin by asking how many points are needed to adequately describe the range of performance you expect to see in students’ work. Consider the range of possible performance levels.
Example of scores for a rubric
1, 2, 3, 4, 5 or 2, 4, 6, 8
Descriptors
Descriptors are explicit descriptions of the performance that show how the score is derived and what is expected of the students. Descriptors spell out each level (gradation) of performance for each criterion and describe what performance at a particular level looks like. They help you distinguish one student’s work from another’s, and they should be detailed enough to differentiate between levels and to increase the objectivity of the rater.
Descriptors...describe what performance at a particular level looks like.
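The four elements described above can be combined into a working analytical rubric. The sketch below is hypothetical: the descriptors and point values attached to the term-paper criteria listed earlier are illustrative assumptions, not a prescribed rubric.

```python
# Hypothetical sketch: the four rubric elements (criteria, levels of
# performance, scores, descriptors) as a data structure. All
# descriptors and point values are illustrative assumptions.

RUBRIC = {
    "Introduction":            {3: "States a clear, focused thesis",
                                2: "Thesis present but vague",
                                1: "No identifiable thesis"},
    "Arguments/analysis":      {3: "Claims consistently supported by evidence",
                                2: "Some claims unsupported",
                                1: "Assertions without evidence"},
    "Grammar and punctuation": {3: "Virtually error-free",
                                2: "Occasional errors that do not impede meaning",
                                1: "Frequent errors that impede meaning"},
    "Internal citations":      {3: "All sources cited in a consistent style",
                                2: "Minor citation errors",
                                1: "Sources missing or uncited"},
}

def score_paper(ratings):
    """Sum the per-criterion scores of an analytical rubric and
    return the total plus each criterion's descriptor as feedback."""
    total, feedback = 0, {}
    for criterion, level in ratings.items():
        total += level
        feedback[criterion] = RUBRIC[criterion][level]
    return total, feedback

total, feedback = score_paper({"Introduction": 3, "Arguments/analysis": 2,
                               "Grammar and punctuation": 3, "Internal citations": 2})
print(total)  # → 10 (out of a possible 12)
```

Because each criterion is scored separately and carries its own descriptor, the student receives both a total score and specific feedback on each dimension of the work.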
Developing a Grading Rubric
First, consider using any of a number of existing rubrics available online. Many rubrics can be used “as is.” Or, you could modify a rubric by adding or deleting elements or combining others for one that will suit your needs. Finally, you could create a completely customized rubric using specifically designed rubric software or just by creating a table with the rubric elements. The following steps will help you develop a rubric no matter which option you choose.
- Select a performance/assignment to be assessed. Begin with a performance or assignment which may be difficult to grade and where you want to reduce subjectivity. Is the performance/assignment an authentic task related to learning goals and/or objectives? Are students replicating meaningful tasks found in the real world? Are you encouraging students to problem solve and apply knowledge? Answer these questions as you begin to develop the criteria for your rubric.
Begin with a performance or assignment which may be difficult to grade and where you want to reduce subjectivity.
- List criteria. Begin by brainstorming a list of all criteria, traits, or dimensions associated with the task. Reduce the list by chunking similar criteria and eliminating others until you produce a range of appropriate criteria. A rubric designed for formative and diagnostic assessments might have more criteria than a rubric rating summative performances (Dodge, 2001). Keep the list of criteria manageable and reasonable.
- Write criteria descriptions. Keep criteria descriptions brief, understandable, and in a logical order for students to follow as they work on the task.
- Determine level of performance adjectives. Select words or phrases that will explain what performance looks like at each level, making sure they are discrete enough to show real differences. Levels of performance should match the related criterion.
- Develop scores. The scores determine the ranges of performance in numerical value. Make sure the values make sense in terms of the total points possible: what is the difference between getting 10 points versus 100 points versus 1,000 points? The best and worst performance scores are placed at the ends of the continuum and the other scores are placed appropriately in between. It is suggested to start with fewer levels and to distinguish work that does not meet the criteria. Also, it is difficult to make fine distinctions using qualitative levels such as never, sometimes, usually; limited acceptance, proficient; or NA, poor, fair, good, very good, excellent. How will you make the distinctions?
It is suggested to start with fewer [score] levels and to distinguish between work that does not meet the criteria.
- Write the descriptors. As a student is judged to move up the performance continuum, previous level descriptions are considered achieved in subsequent description levels. Therefore, it is not necessary to include “beginning level” descriptors in the same box where new skills are introduced.
- Evaluate the rubric. As with any instructional tool, evaluate the rubric each time it is used to ensure it matches instructional goals and objectives. Be sure students understand each criterion and how they can use the rubric to their advantage. Consider providing more details about each of the rubric’s areas to further clarify these sections to students. Pilot test new rubrics if possible, review the rubric with a colleague, and solicit students’ feedback for further refinements.
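The structure the steps above describe can be sketched as a simple data model. This is purely an illustration; the criteria names, level labels, and point values below are hypothetical examples, not prescribed by the guide.

```python
# A minimal sketch of a rubric: criteria, performance levels, scores,
# and brief descriptors. All names and point values are illustrative.

RUBRIC = {
    "Organization": {                      # criterion
        "Excellent": (4, "Ideas flow logically; transitions are clear."),
        "Good":      (3, "Mostly logical order; some transitions missing."),
        "Fair":      (2, "Order is hard to follow in places."),
        "Poor":      (1, "No discernible organization."),
    },
    "Evidence": {
        "Excellent": (4, "Claims are fully supported by cited sources."),
        "Good":      (3, "Most claims are supported."),
        "Fair":      (2, "Support is thin or uneven."),
        "Poor":      (1, "Claims are unsupported."),
    },
}

def max_score(rubric):
    """Total points possible: the best level of each criterion."""
    return sum(max(pts for pts, _ in levels.values())
               for levels in rubric.values())

print(max_score(RUBRIC))  # 8 points possible with these example values
```

Laying the rubric out this way makes the "total points possible" question from the steps above concrete: the ends of each criterion's continuum are explicit, and the scale can be rebalanced before the rubric is ever used with students.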
Types of Rubrics
Determining which type of rubric to use depends on what and how you plan to evaluate. There are several types of rubrics, including holistic, analytical, generic, and task-specific. Each of these is described below.
Holistic
All criteria are assessed as a single score. Holistic rubrics are good for evaluating overall performance on a task. Because only one score is given, holistic rubrics tend to be easier to score. However, holistic rubrics do not provide detailed information on student performance for each criterion; the levels of performance are treated as a whole.
- “Use for simple tasks and performances such as reading fluency or response to an essay question . . .
- Getting a quick snapshot of overall quality or achievement
- Judging the impact of a product or performance” (Arter & McTighe, 2001, p. 21)
Analytical
Each criterion is assessed separately, using different descriptive ratings. Each criterion receives a separate score. Analytical rubrics take more time to score but provide more detailed feedback.
- “Judging complex performances . . . involving several significant [criteria] . . .
- Providing more specific information or feedback to students . . .” (Arter & McTighe, 2001, p. 22)
Generic
A generic rubric contains criteria that are general across tasks and can be used for similar tasks or performances. Criteria are assessed separately, as in an analytical rubric.
- “[Use] when students will not all be doing exactly the same task; when students have a choice as to what evidence will be chosen to show competence on a particular skill or product.
- [Use] when instructors are trying to judge consistently in different course sections” (Arter & McTighe, 2001, p. 30)
Task-specific
Assesses a specific task. Unique criteria are assessed separately. However, it may not be possible to account for every criterion involved in a particular task, and a task-specific rubric could overlook a student’s unique solution (Arter & McTighe, 2001).
- “It’s easier and faster to get consistent scoring
- [Use] in large-scale and “high-stakes” contexts, such as state-level accountability assessments
- [Use when] you want to know whether students know particular facts, equations, methods, or procedures” (Arter & McTighe, 2001, p. 28)
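The practical difference between the holistic and analytical types described above comes down to how scores are produced. A brief sketch, using invented example values:

```python
# Contrast of holistic vs. analytical scoring (all values are examples).

# Analytical: each criterion is scored separately, then combined,
# so the student sees where points were gained or lost.
analytical = {"Organization": 3, "Evidence": 2, "Mechanics": 4}
analytical_total = sum(analytical.values())  # 9 of 12 possible

# Holistic: a single overall judgment on one scale; faster to assign,
# but offers no per-criterion breakdown.
holistic = 3  # e.g., "Good" on a 1-4 overall scale

print(analytical_total, holistic)
```

The trade-off the guide notes is visible here: the analytical version costs three judgments per student instead of one, but the per-criterion dictionary is itself the detailed feedback.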
Grading rubrics are effective and efficient tools that allow for objective and consistent assessment of a range of performances, assignments, and activities. Rubrics can help clarify your expectations and show students how to meet them, making students accountable for their performance in an easy-to-follow format. The feedback that students receive through a grading rubric can help them improve their performance on revised or subsequent work. Rubrics can help to rationalize grades when students ask about your method of assessment. They also allow for consistency in grading among those who team teach the same course and for TAs assigned to grading, and they serve as good documentation for accreditation purposes. Several online sources can be used to create customized grading rubrics; a few of these are listed below.
References
Arter, J., & McTighe, J. (2001). Scoring rubrics in the classroom: Using performance criteria for assessing and improving student performance. Thousand Oaks, CA: Corwin Press, Inc.
Stevens, D. D., & Levi, A. J. (2005). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Sterling, VA: Stylus.
The Teaching, Learning, and Technology Group (n.d.). Rubrics: Definition, tools, examples, references. http://www.tltgroup.org/resources/flashlight/rubrics.htm
Selected Resources
Dodge, B. (2001). Creating a rubric on a given task. http://webquest.sdsu.edu/rubrics/rubrics.html
Wilson, M. (2006). Rethinking rubrics in writing assessment. Portsmouth, NH: Heinemann.
Rubric Builders and Generators
eMints.org (2011). Rubric/scoring guide. http://www.emints.org/webquest/rubric.shtml
General Rubric Generator. http://www.teach-nology.com/web_tools/rubrics/general/
RubiStar (2008). Create rubrics for your project-based learning activities. http://rubistar.4teachers.org/index.php

Suggested citation
Northern Illinois University Center for Innovative Teaching and Learning. (2012). Rubrics for assessment. In Instructional guide for university faculty and teaching assistants. Retrieved from https://www.niu.edu/citl/resources/guides/instructional-guide
