I was young, unsure, naïve and inexperienced. Perhaps this is true for all new teachers, but either way, it was true for me. I knew I had a lot to learn about what it meant to teach my students in my Spanish world language classroom and to assess them on the content they were learning. Years later, I wonder about my first groups of students and what they actually learned. What did they really walk out of my classroom knowing? What did their final grade truly reflect?

Two things jump out at me as I think back. First, my gradebook consisted of a list of items ranging from classroom participation and attendance to homework, exams, quizzes and extra credit. All of those things, just to determine whether or not a student could speak Spanish at their level. It wasn’t until about three years into teaching that I realised my grading was pointless. What did it really mean when a student earned a B+? That they were there daily, did their homework and extra credit but didn’t do well on assessments? Or that they did well on assessments but never participated in class or did homework? What did the B+ really reflect about that student and their ability to speak Spanish? One thing was for sure: my emphasis was not on the most critical factor. Did the students really learn the content and, if so, how could they use it in the real world?

The second thing that jumped out at me was that my poor grading structure was reflected in how I scored assessments. Assessments are used to judge a student’s performance against standards. Without them, students cannot know how to grow, and teachers are blind in choosing the next steps in a lesson. The assessments I planned were not clear indicators of what students knew about Spanish. Nothing about my assessments applied to the real world, nor did they require higher-order thinking. For instance, in real communication, we most commonly negotiate meaning in one-on-one or small-group conversations. That was not going to happen for my students if all they could do was list off verb conjugations. In reality, I was never actually preparing my students to use Spanish beyond the classroom. I would grade a student based on how many verbs were conjugated correctly, how many vocabulary words they had memorised or how perfect their grammar was. But is that how communication works? Who, when they first start learning a language, speaks with perfect grammar? Who speaks in a list of vocabulary words? I began to wonder: how do I assess my students on their ability to use Spanish in a way that mimics what they would do with it in the real world? This is a fundamental part of learning: we gain knowledge to enhance our lives and to use what we learn outside the classroom.

Two things were evident to me. One, my gradebook needed to be cleaned up, and two, I needed to learn how to assess students’ knowledge of the content more accurately, which meant I needed to rethink my focus. In other words, I needed to rethink the standards I was using.

My first action step was to collaborate with colleagues and learn what more experienced educators were doing. That is when I learned about standards-based grading. Here I was, an expert student with my bachelor’s, master’s and certificates in other fields, and I had never heard of nor experienced standards-based grading. But I was intrigued. I began reading articles about it. My first two were Seven Reasons for Standards-Based Grading by Patricia Scriffiny (2008) and Tips from Dr. Marzano: Formative Assessments and Standards-Based Grading by Robert Marzano (n.d.).

I had no idea how inaccurately I had been assigning grades to students, so I began to act. First, I cleaned up my gradebook. I got rid of anything that did not reflect what a student could do in the language and focused solely on the standards for the classroom. This meant I was no longer grading whether a student was on time to class or participated; I was exclusively focused on their ability to use Spanish spontaneously, as assessed by performance tasks (i.e. standards) that reflected the use of the language in a real-world context. That was it. The only two categories in my gradebook were summative and formative assessments.

Next, since formative and summative assessments were the only two sections in my gradebook, I had to learn the difference between them. Here is what I learned from comparing the two. Formative assessments occur during learning and guide my instruction as I gain insight into what students can and cannot do with the content. Examples of formative assessments I used include quizzes, exit slips, Think-Pair-Share, Poll Everywhere, Kahoot and Fist of Five. They did not include participation, attendance or behaviour. Summative assessment is done at the end of a learning segment (i.e. a unit) to gauge what a student has learned, such as an end-of-unit exam or a cumulative project. In other words, it measures what they can do with the knowledge they have gained. Before, I gave one written or oral test per unit, and that was it. There were no retakes, and we did not go over the topic again. Those summative assessments were autopsies, grading students after the process of learning had already occurred. These kinds of assessments were stealing from my students the opportunity to assess, reflect, refine and re-assess. Instead, I began assessing their ability to actually use the language, such as by having a conversation with a random partner in class about a variety of the topics we had covered.

I also learned how formative and summative assessments play an important role together. Summative assessments will never be a fair measurement of a student’s knowledge if they are not preceded by several (and I mean several) formative assessments. Our job as teachers is to guide students towards the end goal: mastering the standard. Not using formative assessments is like having a person run a race blindfolded. They have no idea how to adjust or shift their direction; no pathway is straight, there are obstacles, and they need to know how much farther they have to go and what skills or knowledge are required for a safe journey. As educators, we are on the sidelines of the race, coaching our students toward the finish line. How can we expect students to reach the end goal if we too have our blinders on? Formative assessments take off the blindfold and are almost sneaky if you think about it. Imagine now that the student is running the race without a blindfold and you are on the sidelines, adjusting their pace, calling out the upcoming obstacles, highlighting the positives and guiding them toward the finish line. With formative assessments, we get a sneak peek into students’ understanding, inside their brains. We can find out what they know, do not know and are confused about, and see just how close they are to reaching the goal. We can then adjust our lessons to meet their needs and get them closer and closer to the end goal. That is the definition of quality teaching.

Now, with the distinction between formative and summative in place, I had to address what measure I was using to represent students’ mastery of a standard. This meant a shift from a 100% grading scale to a 4-point scale based on a rubric. A rubric defines levels of performance shaped around what it looks like to exceed, meet, approach or fall below the standard. Each level corresponds to a point (1 for below, 4 for exceeding). I knew I had to update my standards to ensure they mimicked real-world application of Spanish. I was fortunate to discover the NCSSFL-ACTFL Can-Do Statements (ACTFL, n.d.), which were explicitly designed for this purpose.

With a focus on a standard and a 4-point scale, my assessments shifted from simple recall or verb conjugation to real-world tasks asking the student to show what they could do with Spanish spontaneously, in the moment. I began asking students to describe pictures to me, sit down and have spontaneous conversations with peers, or write emails and letters to a real audience. I was genuinely assessing what they could do with the language, and the rubric was the guide. The rubric also mitigated the subjectivity in my grading because I had a list of bulleted items per level to guide the scoring. I was no longer getting out my red pen, marking every single grammar error and assigning an arbitrary letter grade, or assigning one mathematically (16 right out of 25 = 64%).

Assessing student work off a rubric sparked a desire for more collaboration with other world language teachers. We now had accountability in our grading because the vast majority of us were using the same rubric and gradebook outline. This started a shift: our world language teachers began meeting to practice scoring assessments together, using the rubric as our guide.

Shifting from a 0-100% scale to a 1-4 scale did not work perfectly in the gradebook. I still lived in a world where letter grades and a percentage were assigned as a final grade. At this point, the entire school community had adopted standards-based grading, so we came up with our own scale of what the 4-point scale looked like as percentages. It is important to note that this was not an easy shift. It took a lot of critical conversations, learning, trial and error, reflection and trust between staff. We ended up assigning percentages as follows: 1-Below = 55%, 2-Approaching = 70%, 3-Meets = 85% and 4-Exceeding = 100%.
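For readers who want to see the conversion concretely, here is a minimal sketch in Python of how a gradebook might translate 1-4 rubric scores into the percentages we agreed on. The function name and the simple averaging rule are my own illustration for this article, not the actual gradebook software or policy we used.

```python
# Hypothetical sketch: converting 1-4 rubric levels to the agreed percentages.
RUBRIC_TO_PERCENT = {
    1: 55,   # Below the standard
    2: 70,   # Approaching the standard
    3: 85,   # Meets the standard
    4: 100,  # Exceeds the standard
}

def gradebook_percent(rubric_scores):
    """Average the converted percentages for a list of 1-4 rubric scores."""
    percents = [RUBRIC_TO_PERCENT[score] for score in rubric_scores]
    return sum(percents) / len(percents)

# Example: a student who meets the standard twice, exceeds once and approaches once.
print(gradebook_percent([3, 3, 4, 2]))  # 85.0
```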

The shift to standards-based grading also helped me make another significant change. In the real world, job applicants such as teacher candidates take one of several national assessments to earn an ACTFL proficiency level (ACTFL, 2012) that represents their ability to use a language in each skill area (reading, listening, speaking, writing). There are proficiency level benchmarks candidates must reach to be eligible for a position. I wanted to do the same in my classroom by creating proficiency level benchmarks for each course. Our focus was on students’ ability to use the language, and solely that, as the only basis of assessment. It was amazing to hear students talk about what they needed to do to move to the next level, remind each other to level up and, better yet, understand their language journey and where they were headed. My dream was to no longer assign a letter grade at all, only proficiency levels, and that students would then strive to earn the proficiency levels required for career and college readiness.

The learning journey continued, and I needed to tackle another essential piece of grading: assigning a zero to missing or late work. Many of us had never experienced anything else, and the idea of teaching a student responsibility still lingered in our minds. Why shouldn’t a student earn a 0 if they did nothing? It was not until I watched a series of Rick Wormeli videos on YouTube (Stenhouse, 2010) that it made logical sense to me. Basically, if I assign Student A a 55% because they took the test but could not show evidence of knowing the standard, then why do I assign Student B a 0% because they weren’t there? Neither instance shows evidence of knowing the standard, but one student gets penalised for their absence. The question, then, is whether I am grading attendance or knowledge of the standard. To learn more, I’d recommend reading The Case Against the Zero by Doug Reeves (2004) and watching some of Rick Wormeli’s videos.
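The arithmetic behind that argument is worth seeing. The numbers below are my own hypothetical illustration (not taken from Reeves or Wormeli) of how a single zero on a 0-100% scale distorts an average far more than a 55% floor does, even though both record the same missing evidence.

```python
# Hypothetical illustration: one missing assignment, scored as 0% versus a 55% floor.
scores_with_zero  = [85, 90, 88, 0]    # a single zero among otherwise strong scores
scores_with_floor = [85, 90, 88, 55]   # the same missing work, recorded at the 55% floor

print(sum(scores_with_zero) / len(scores_with_zero))    # 65.75 - roughly a D on many scales
print(sum(scores_with_floor) / len(scores_with_floor))  # 79.5  - roughly a C+ on many scales
```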

As I continued to learn, read, reflect and process my grading shift with colleagues, key factors began to stick out. First, assessments are not secretive. The teacher should never be the only one who knows the end goal, and we aren’t here to trick our students. Students should know, from the beginning of the learning, what the standard is and how they will be expected to show their knowledge of it. Moreover, students should not see the rubric for the first time as they sit down to take the summative exam. Bust out the rubric at the beginning and make it part of the learning process! Second, assessments are not autopsies. Learning is live and ongoing, so I must provide descriptive and timely feedback to my students, after which they may work in groups to correct errors or be given a retake opportunity. Third, students need to self-assess and set goals. The rubrics provide a perfect outlet for this, and I often had students grade themselves on the rubric during formative and summative assessments. I would ask them to tell me what level of mastery they were at and what they needed to do to move up to the next level.

I am no longer a brand-new teacher. As I enter my second decade in education, I continue to learn, grow and improve my practice. My journey to strengthen my grading and assessment practices not only taught me a great deal; I also witnessed firsthand the impact it had on my students. They learned more, advanced to higher levels of Spanish and shared countless stories with me of using Spanish in real-world contexts! In the beginning, by overloading my gradebook and creating assessments that did not apply to the real world, I never gave my students the opportunity to learn and earn a grade based on their knowledge. That meant, many times, I either held them back from achieving more or provided them with a false sense of security about their true capability. Think of an 8th grader taking Algebra 1. That student earns a B- because they came to class, cleaned the teacher’s room for extra credit and were nice, yet were only ever approaching the standard. They still scored a B- because of all the non-content-based items that were graded. Now that student is entering high school as a freshman, trying to manoeuvre through the minefield of older adolescents, puberty and identity formation. You place them in Algebra 2, and why not? They earned a B- last year. However, that student earned that B- not because they knew the material at that level, but because of everything else in the gradebook. The result is students who feel insecure, frustrated and behind in their class. We cannot keep putting students in positions where their grades reflect them as knowing more or less than they actually do. A grade should reflect only one thing: what the student has learned about the content being taught.

About the Author: 

Alissa Farias

Alissa Farias is a Data Coach from Tacoma, WA, and a former high school Spanish teacher. She is also an ASCD Emerging Leader, a WA-ASCD OYEA recipient and an ACTFL LILL participant, and she was awarded the Inspirational Leadership Award from WAFLT. She presents locally and nationally and has led curriculum alignment and the development of world language frameworks. She believes that when one stops being reflective, they are no longer effective, and she strives to be a lifelong learner. She is passionate about equitable education, world and dual language programs, travelling and her two beautiful daughters.

 

Citations

  1. ACTFL. (n.d.). NCSSFL-ACTFL Can-Do Statements. Retrieved from https://www.actfl.org/publications/guidelines-and-manuals/ncssfl-actfl-can-do-statements
  2. ACTFL. (2012). ACTFL Proficiency Guidelines 2012. Retrieved from https://www.actfl.org/sites/default/files/pdfs/public/ACTFLProficiencyGuidelines2012_FINAL.pdf
  3. Reeves, D. B. (2004). The Case Against the Zero. Phi Delta Kappan, 86(4), pp. 324-325.
  4. Marzano, R. (n.d.). Tips from Dr. Marzano: Formative Assessments and Standards-Based Grading. Retrieved from https://www.marzanoresearch.com/resources/tips/fasbg_tips_archive
  5. Scriffiny, P. (2008). Seven Reasons for Standards-Based Grading. Educational Leadership, 66(2), pp. 70-74.
  6. Stenhouse Publishers (Producer). (2010, November 10). Rick Wormeli: Standards-based grading [Video file]. Retrieved from https://www.youtube.com/watch?v=h-QF9Q4gxVM&t=6s
