Accountability
Accountability was integrated into the education system over two decades ago. As a policy initiative, it brought high stakes with it. Students and teaching practices have evolved over that time, but do we know more about how schools are doing?
As a former assessment director, I have several opinions about assessment data and how it is used in accountability systems. Over time, we have placed many burdens on a single test score. Frankly, the tests were not designed to provide the information required for all the ways they are now being used. The first step in test design and development is to determine the purpose and desired data outcomes of the assessment; that purpose shapes the type of evidence later needed to construct the validity argument. In many cases, the data from roughly 40 test items is used to determine whether a student demonstrates proficiency on the given content standards. This assumes that the student received quality instruction on the content and that the assessment was developed in a way that elicits a student's understanding. Both of those components, along with the scoring of the assessment, rely on human engagement and interaction.
The tests in and of themselves are not flawed, nor are they, to use my least favorite term, "invalid". They were, however, always designed to be limited in scope. There is only so much we can learn about student understanding from a single point in time, on a given day, with a small sample of test items. As we contemplate the future of accountability, there needs to be another way forward.
Accountability systems based on assessment data rely on lag indicators. The concept of a lag indicator implies that we wait several months, in this case a full year, to document whether performance increased or decreased. The better question is whether other lag indicators might offer insight, or better yet, whether we should use leading indicators instead. School and corporation leaders should use leading indicators to determine whether they are implementing the strategies that move the needle. As I mentioned in a previous post, chasing a number does not actually move the needle.
I don't have a silver bullet for solving accountability. However, after 20 years of evidence, it appears that the original policy guardrails of accountability have not delivered what we once hoped. The stakes remain high, the summative assessment remains the focus, and we are still questioning whether we are doing what is best for students. We need to consider whether other information can better reveal what we are really trying to learn: how students are being served.