
Wednesday, January 10, 2018

How do you know what they know?

All day today, I kept thinking, "I'm totally going to blog tonight... here's idea A, idea B, idea C, etc.," but then 4pm came and the thoughts just disappeared, or my after-school brain decided, "That was a dumb idea... why did I think that would be a good blog post??"  It's so frustrating, but one of my goals this year is to #PushSend, even when I'm struggling.


Over the past few weeks, Pam and I have been slowly reading and discussing Embedding Formative Assessment by Dylan Wiliam and Siobhan Leahy (abbreviated EFA2). While parts of the book have been fairly difficult to read, I've also been able to walk away with several usable strategies - and we're only on Chapter 4! :)

This year's theme of "What did you teach? What did they learn?" was inspired by this book, and I have really tried to focus on that question each day.  A similar statement from the book was a reminder to stop worrying about the label of "formative assessment" and start thinking about whether your classroom activities will help your students learn more.  As a result, I've tried to be intentional about asking "How do I know what they know?"

In Chapter 3, the authors focus on learning intentions (aka objectives) and success criteria (aka rubrics), and it is this chapter that I've really focused on this week.

Exit Tickets:  I've renamed my exit tickets "Daily Reflections".  This week, when students arrived at class on Monday, I had a half sheet in their folder with space for this week's Daily Reflections.  Each day, I have used my theme question of "What did you teach? What did they learn?" to guide me as I write that day's question.  I've tried to be intentional with the Reflection prompt, thinking about the day's objective and what I hoped students would be able to do or understand by the end of the lesson.  Today, I tried a suggestion from EFA2 where students generate their own test/quiz questions, hoping to trigger what the authors call the "generation effect": students remember responses they generate themselves better than responses given to them.

Success Criteria:  Another strategy I tried from Chapter 3 involved analyzing student work.  The authors spend quite a bit of time in Chapter 3 discussing "success criteria", or rubrics, for student work.  In general, I use rubrics for quizzes and tests that mimic AP Exam-style grading, but I remember trying an online rubric maker several years ago and really struggling to understand the difference between "few / some / many" and the other descriptors those rubrics often use.  The authors instead suggest that we communicate quality work to our students by providing student work samples rather than rubrics, because rubrics rarely mean the same thing to students as they do to us.  They recommend showing students examples of varying quality and having the students give feedback.  For their justification, the authors give two main reasons: (1) students are better at spotting mistakes in the work of others than in their own work, and (2) when students notice mistakes in others' work, they are less likely to make the same mistakes in their own.

I tried this strategy yesterday and today in my Stat class.  I like to start the 2nd semester with simulations because they're kind of fun and ease students back into our course; however, students often struggle with the written portion of a simulation.  I modified a previous AP problem and created two student samples of vastly different quality and, thanks to hubby's help, vastly different handwriting as well!  I asked students to pretend they were the teacher, read the two samples, and write feedback to Student A and Student B about what each student did well and what each needs to improve.  After time to individually assess the student work, I had them share out in their table groups and then finish with a class share-out of a Star and a Wish.  Finally, I had them close their eyes and use their fingers to show me what they felt each sample would have scored on the AP Exam's 0-4 scale.  While I was pleased with the level of engagement each student showed and the quality of their discussions and critiques, the real power of this strategy showed up later in the hour when they had time to work on a problem set.  Many of them referenced the success criteria from the student samples, really putting effort into mimicking the work of the "good" paper and giving each other feedback on the quality of their written descriptions.  This is definitely a strategy I will use again!!

