
Implementing purpose-driven assessment

ESSA has set new guidelines for school assessment. Here's how to make sure your assessment practices are intentional and effective.


This post is sponsored by Northwest Evaluation Association

The Every Student Succeeds Act (ESSA), signed into law by President Obama in December 2015, introduced a number of changes to federal policy on school assessments. What are those changes, and how do schools navigate them? NWEA Senior Research Manager Andy Hegedus outlines what schools need to know.

ESSA has opened the way for multiple forms of assessment throughout the year. What are some examples of these measures? What insight do we hope to gain from this new strategy?

ESSA gives states and districts more latitude and control over their assessment decisions. That’s one reason we emphasize the importance of districts defining their own assessment systems that meet their needs. We want districts to realize that they now have more choices and that they can use this control to improve learning.

What are the potential risks of this approach? How can schools prepare for this?

No Child Left Behind (NCLB) created the impression that students were overtested. However, according to our Gallup surveys, students and parents don’t believe students are overtested — they just want assessments that are instructionally useful and the assurance that the results will be used for the student’s benefit. The risk in reacting to perceived overtesting is that districts simply throw out assessments wholesale without considering their utility to students and teachers.

Educators will have more input on assessments. How can they make sure their purposes are aligned to their assessments? What are some ways they can avoid redundancies and gaps?

Let’s start by agreeing that everyone is trying to build an efficient assessment system that meets all their critical requirements while spending the least amount of testing time and money. The secret is to start by identifying the purposes for assessment. There are many: assessment data is used to set goals with students, identify appropriate reading material, group students, identify strengths and weaknesses in the curriculum, communicate overall district performance to the school board and the public, and more.

Most educators start their review by looking at the assessments they already use. The problem is that this approach makes it easy to miss the gap between what they need and what they have.

Here’s what we recommend.

  1. Define the purposes. Start by gathering a diverse group of stakeholders and have them clarify the meaning of assessment terminology, answering questions such as “What is a benchmark assessment?” Next, identify all the purposes for which they need assessment data. Be sure to consider how the data will apply to different student populations (such as English-language learners and students with disabilities), academic subjects, organizational departments, and stakeholders, including students, teachers, parents, school board members and the community. Finally, rate each purpose on how critical it is to the district’s mission.
  2. Examine the existing assessments. Identify the purposes each assessment can fulfill based on its design. This takes some savvy, since vendor marketing materials and an assessment’s technical manual can make different claims. Remember: what each assessment can fulfill and what you are actually using it for might be two different things.
  3. Identify gaps and redundancies. A gap of concern occurs when an important purpose exists and no assessment meets that need. A redundancy occurs when multiple assessments meet the same purpose. Gaps require you to find an assessment to fill them. Redundancies can be reduced by eliminating assessments whose purposes are already covered by other assessments. At a minimum, when you are done, there needs to be an assessment providing the data required to meet each critical purpose (see the sketch after this list).
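
To make this audit concrete, here is a minimal sketch in Python of steps 1 through 3. Every purpose, assessment name and coverage mapping below is invented for illustration; this is not NWEA’s inventory or tooling, just one way to model the bookkeeping.

```python
# Step 1: purposes the district rated as critical (invented examples).
critical_purposes = {
    "set student goals",
    "group students",
    "evaluate curriculum",
    "report to school board",
}

# Step 2: what each existing assessment can fulfill by design
# (invented assessment names and coverage).
coverage = {
    "interim assessment": {"set student goals", "group students"},
    "classroom formative": {"group students"},
    "state summative": {"report to school board"},
}

# Step 3: a gap is a critical purpose no assessment meets; a
# redundancy is a purpose met by more than one assessment.
covered = set().union(*coverage.values())
gaps = critical_purposes - covered
redundancies = {
    purpose: [name for name, fulfilled in coverage.items() if purpose in fulfilled]
    for purpose in covered
    if sum(purpose in fulfilled for fulfilled in coverage.values()) > 1
}

print("Gaps:", sorted(gaps))          # ['evaluate curriculum']
print("Redundancies:", redundancies)  # {'group students': ['interim assessment', 'classroom formative']}
```

Treating purposes and coverage as sets keeps the two failure modes symmetric: a gap is a critical purpose left uncovered, and a redundancy is a purpose covered more than once.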

Working through this process rigorously helps everyone understand the time invested in assessments and how the data will be used. Clarifying the purposes of assessment data goes a long way toward ensuring the data are used as intended.

What are some effective ways schools can support teachers in administering assessments and applying the data to their instruction? How can we ensure we’re serving all learners, including students with special needs, English-language learners and at-risk students?

Teachers are the ones who use assessment data most directly to benefit students. It’s important that they understand the purpose of the tool, how to administer it properly, and how to interpret and act on results of the assessment in the classroom.

The importance of educators understanding the purpose is obvious: teachers and their students are less likely to take a test seriously if they don’t understand how it is useful to them.

Properly administering the test means understanding the appropriateness of an assessment for different groups of students. For example, assume a teacher is giving a test of cognitive ability to a group of students that includes English-language learners. The teacher must know if that test is appropriate for students who have not mastered English. If it is not, the consequences can be serious; it could result in underestimating the ability of these learners.

The right structures can help teachers use data properly. One way to do this is to provide protocols for professional learning communities to use in their meetings. This creates opportunities for informal conversations where teachers can engage each other on ways they use data to guide instruction in their classroom. Structuring time so teachers can work in teams and learn from each other can also support professional learning and deepen their knowledge of the appropriate uses and limitations of assessment data.

Hopefully, with better implementation and knowledge about the data, teachers will use the data in ways that help students see the connection to their learning and recognize the value of taking the assessment.

Parents have become influential stakeholders in the assessment conversation. What are their expectations?

Most educators and parents agree that assessments are necessary and useful tools for meeting students’ needs. Both groups support assessments that aim to improve classroom instruction, according to our Gallup survey. For example, parents consider multiple types of assessments — including interim and formative assessments — helpful to their children’s learning. However, they are skeptical that state accountability tests improve the quality of teaching. The report also found that while parents generally were less supportive of high-stakes assessments, support for summative assessments among minority parents was strong, indicating a belief that these assessments are essential to monitoring and closing achievement gaps.

What kinds of data do parents want? A 2014 survey of parents found that the vast majority (95%) want data that help them monitor their child’s progress and alert them to potential issues. Seventy-seven percent want data that tell them what kinds of activities they can do at home to support their child. This might include books on the child’s reading level in his or her area of interest, or drills parents can run during their commutes.

Testing integrity has been an issue in some districts. How can schools demonstrate transparency to their stakeholders?

One of the great lessons of NCLB was that the metrics and incentives used to measure learning influence instruction. Basically, what gets measured is what gets taught. Under NCLB, the metric representing student achievement was the proportion of students who scored proficient or above on their state assessment. The incentive was the punitive consequences associated with failing to meet the required Adequate Yearly Progress benchmarks each year.

The impact was devastating. The most widely reported issue was that of measuring progress relative to a proficiency bar. This encouraged schools to focus on “bubble students” — children whose performance was close enough to the proficiency bar that teachers could likely move them above the bar with timely intervention. This practice hurt students who were deemed too far below the proficiency bar to be likely to change status during the school year. It also ignored the needs of high-performing students who were already exceeding proficiency cut scores.
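
As a toy illustration of that incentive (the cut score, band width and student scores are all invented), here is how a proficiency-only metric implicitly triages students:

```python
# Invented values for illustration; no real cut scores or student data.
PROFICIENCY_CUT = 240
BUBBLE_BAND = 10  # points below the cut within which intervention seems "worth it"

def triage(score: int) -> str:
    """Return how a proficiency-only metric implicitly classifies a student."""
    if score >= PROFICIENCY_CUT:
        return "already proficient: no incentive to push further"
    if score >= PROFICIENCY_CUT - BUBBLE_BAND:
        return "bubble student: gets the timely intervention"
    return "far below the bar: unlikely to flip status, so deprioritized"

for score in [212, 233, 238, 251]:
    print(score, "->", triage(score))
```

Only the 233 and 238 students (the “bubble”) move the metric, which is exactly why the other two groups tend to be neglected under it.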

Another common problem in many schools arose when growth on the MAP assessment was measured as the difference between fall and spring scores. When stakes for schools and teachers were attached to that measure, however, some schools reported an increase in summer loss. In many cases, we found that what schools were characterizing as summer loss was actually inadvertent alteration — or, in extreme cases, purposeful gaming — of testing conditions, such as encouraging more engagement in spring testing than in fall.
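
A toy calculation, using invented scores rather than real MAP data, shows how depressed fall engagement both inflates apparent growth and manufactures apparent summer loss:

```python
# All scores invented for illustration.
prior_spring = 209  # last school year's spring score
fall_true = 210     # what the student would score this fall under normal conditions
spring_true = 218   # this spring's score

true_growth = spring_true - fall_true  # 8 points of genuine growth

# If fall testing is deemphasized, the observed fall score is depressed.
fall_observed = fall_true - 5  # 205

apparent_growth = spring_true - fall_observed        # 13 points: growth looks inflated
apparent_summer_loss = fall_observed - prior_spring  # -4: a decline that never happened

print(true_growth, apparent_growth, apparent_summer_loss)  # 8 13 -4
```

The same depressed fall score does double duty: it pads fall-to-spring growth and shows up as a summer decline when compared with the prior spring.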

Be careful about metrics that measure only one dimension of your program or focus solely on one group of students. Given the potential that metrics and incentives have to change behavior, school systems need to consider the right way to deploy them and make sure they don’t impede educators’ ability to maintain focus on the success of all students.

Interested in this topic? Learn more at the upcoming webinar, “Multiple Measures Done Right: The 7 Principles of Coherent Assessment Systems,” on September 21.

Andy Hegedus is senior research manager at NWEA. He supports the development of new research reporting and consulting services, and manages research projects focused on understanding the drivers of growth in schools facing differing levels of challenge. Prior to joining NWEA, Hegedus held senior leadership positions in the Christina School District in Delaware. Before working in education, he spent nearly 20 years in the nuclear-power industry. He holds an Ed.D. in Education Leadership from the University of Delaware and is a Broad Superintendents Academy Fellow.