On Wednesday, I was at the Board Office for our monthly Reading Specialist Meeting. I sent out some tweets during my time there, but these two are the ones that I’ve returned to often.
It’s reporting season, so there is even more talk about assessment, evaluation, and instruction than usual. With greater focus on the Science of Reading and a larger emphasis on decoding, educators are trying to figure out what this means when it comes to assessment and evaluation. Many of us in our Board used to use DRA, but these texts are not what you would usually consider “decodable texts.” So should we still be using this resource? What might we use instead? There’s a lot of uncertainty, and educators everywhere are wondering aloud together. A couple of teachers replied to my DRA tweet, and I promised to blog to share more of my thinking. I think I was hoping to come to some conclusions before blogging, but today, I decided on a different approach. This post is all about my wonders.
As I shared during the Reading Specialist meeting, when educators ask me about using DRA or not using it, I always ask them the same question. This is a question that I’ve asked myself over the years when it comes to using any assessment, and I think that it’s an important one to think about.
- What will [name of the assessment] tell you about your students that you do not already know about them?
For a while now, we’ve often relied on formal assessments of some sort to help with determining success and deciding on marks. My worry is that, by focusing primarily on these types of assessment, educators might have lost faith in their own professional judgement. In the past couple of weeks alone, I’ve talked to so many Kindergarten and Grade 1 educators. They all know their kids. They all understand where these kids are in terms of reading and writing skills, and what they need next. They also know the year-end benchmarks and how their students’ current skills align with these. Knowing all of this, what will a formal assessment tell them versus what might it just confirm? Do we need this confirmation? Why?
This takes me to part two of my wonders:
- What about Growing Success?
I know that every elementary and secondary teacher has heard about this document and quickly thinks about the “triangulation of data” (observations, conversations, and work products), but why does this always seem like such a new idea when it comes to evaluation? Maybe the worry comes down to substantiating a mark, since Growing Success does not emphasize standardized tools alone. But even when we are using a standardized option, I think that we need to consider how this data compares to other data collected, and how this might give us a picture of the whole child. I think that the following tweet sums up so much, and again makes me grateful for the amazing educators that I work with every day.
From here, I need to move to my third big wonder:
- If DRA was initially considered a formative assessment, why are we using it for summative purposes?
I’ll admit that as a classroom educator, I probably used DRA more frequently for summative assessment. In Kindergarten, I only did DRA at the end of the school year, and only for those students that I knew would be a Level 4 or above. Why? They were meeting or exceeding benchmarks, so this data could be valuable for Grade 1 educators. It might also tell us what skills we need to focus on next to move to the next level (e.g., exposure to more challenging high frequency words, a closer look at certain vowel sounds, or a greater focus on comprehension). For those that were not meeting benchmark, I was worried that a DRA would just highlight that these students were “below.” What impact would this have on instruction? Our Board’s Phonological Awareness Screener would better highlight areas of need and next steps, so I chose to use this instead. Now, as I write about this process, I wonder if DRA, just like the screener, could be used for both summative and formative purposes. Maybe when used prior to reporting, it’s helping me determine a mark or comment for that child, but it’s also helping me plan next steps for upcoming small group instruction. It might also highlight some important starting points for next year’s teacher. Does this count as formative then?
DRA might not be valuable on its own for determining a mark, but …
- could it be part of the puzzle, especially if used for the right child at the right time?
Last week, I blogged about my Grade 1 PD session and the decodable text that we wrote together. This takes me back to my cognitive dissonance tweet, and the follow-up conversation that I had with Jodie Howcroft about decodable texts vs. DRA. Please note here that at no point did Jodie give me any answers, but what she did do was provide me with some wonders that allowed us to come to a conclusion together. Or at least it was a conclusion at that moment, which might continue to change as we talk and explore more.
The big part of all of this that continues to make a lot of sense to me is that a decodable text is not levelled. Now this becomes somewhat tricky, for technically any text is decodable if you know the code. Uncovering the Logic of English will make you re-think a lot of irregular words. So maybe it comes down to the type of decodable text that it is. When we created that text to use with Grade 1 students, the focus was on vowel sounds, blends, digraphs, and high frequency words that had been taught and reinforced in class. Classroom instruction, though, is based on observing students and determining the needs of that group of students, so even if students can all read that text, does that mean that they’re all at grade level?
In the past for Grade 1, one piece of data that most educators likely would have used to determine a reading mark is DRA. Knowing that a Level 16 is the benchmark for the end of Grade 1, we would want students to be at least a Level 8 (likely a Level 10 or 12) to get a B right now. Technically, those students that are reading a Level 8 text can decode, so if students are meeting benchmark at this time of the school year, wouldn’t it still hold true that they could read this text? The problem is, knowing what we know now about decoding, is a 16 still a reasonable year-end benchmark? I really don’t know.
The other interesting thing that came out of using a decodable text in this way (regardless of whether it’s an instructional tool or an assessment one) is that even instructional tools can provide us with data. When using this text with students, educators noticed …
- which students have difficulty with vowel sounds.
- which students do not know digraphs.
- which students struggle with blends.
- which students are still reading word-by-word (which we know can impact comprehension).
- which students need a visual or two to connect with text in order to support comprehension.
- which students do not know the high frequency words that have been taught in class.
- which students are not applying their knowledge of phonics skills, taught in isolation, when reading a text.
This then led to us discussing next steps for both full class and small group instruction. If assessment informs practice, then could a decodable text be used for assessment, even if not in a standardized way?
With that then, I get to the end of a long blog post that does everything but come up with a definitive answer on DRA and the role that it might play in assessment. Maybe there isn’t one answer, and maybe that’s okay. While this was likely not intentional, I will say that the DRA and decodable text conundrum has led to some of the best conversations and reflections with educators that I’ve ever been involved in. As a Reading Specialist, these opportunities to co-problem solve and co-plan are incredible, so for now, maybe I’m okay with a little uncertainty. Knowing that learning happens through wonder, discomfort, and change, could this whole debate provide just that? I hope that others can weigh in and share their thinking and conversations, for I wonder what role our shared thinking will play in this assessment conundrum.
Aviva