Mary Anne Mather explains how careful analysis of test data can facilitate changes to help close the achievement gap
As schools face the reality of the Common Core State Standards (CCSS), assumptions about the effect they will have on struggling students are a prominent part of the conversation. The standards are meant to be achievement benchmarks that raise the bar for all students and provide guidance through the grade levels for the development of key learning goals. In particular, the CCSS for English Language Arts and Literacy in History/Social Studies, Science, and Technical Subjects explicitly address the use of challenging nonfiction texts at all grade levels and across content areas (see Appendix B for exemplar texts and sample performance tasks:
http://www.corestandards.org/assets/Appendix_B.pdf). This area concerns many educators because it is subject to assumptions about what struggling students can realistically achieve.
Explore Assumptions
Although it is often overlooked, taking the time to engage in an in-depth conversation about these concerns and assumptions is an essential first step toward ensuring success for the students who struggle most. While educators and administrators generally believe that all students are capable of learning, what students learn and how they should learn it have long been debated.
Facilitating conversations about vision and expectations in targeted professional development sessions is a good place to start. One activity called “Variations on a Theme,” drawn from the work of the DuFours, Eaker, and Karhanek,* gets people talking… and thinking. Based on the commonly held premise that all kids can learn, participants congregate at one of four posters that they feel best represents the majority opinion at their schools:
1. “We believe all kids can learn… based on their ability.”
2. “We believe all kids can learn… if they take advantage of the opportunity we give them to learn.”
3. “We believe all kids can learn… something, and we will help all students experience academic growth in a warm and nurturing environment.”
4. “We believe all kids can learn… and we will work to help all students achieve high standards of learning.”
Once assembled, participants discuss the reasons for their initial selections before a more in-depth description is revealed about what each selection really means. They read further about what is typically done at a school when students do not learn, as well as causes for students not learning that relate to each selection. The additional information can change participants’ outlooks, leading some to move to other posters. Because people visibly move from one station to another, they can see who thinks as they do and who thinks differently. The end result is a much clearer picture of where the school stands in its current thinking about student success and where it really wants to be.
Predict Results Then Analyze, Analyze, Analyze
After exploring assumptions and negotiating a shared vision for excellence, the next steps focus on systematic analyses of multiple data sources. If we accept the CCSS as achievement benchmarks for all students, understanding how data can reveal practices to scaffold success is essential. Data sources range from standardized state tests and district benchmark tests to common grade-level assessments and examples of student work. Test data is most useful when analyzed at the aggregate, disaggregate, strand, and item levels. Data can also include teacher observations, attendance, student input, and informal assessments to check for understanding. The range of data is important as teachers use a collaborative inquiry process to pinpoint specific student learning problems that surface as trends across data sources, student populations, school years, and grade levels.
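For teams that receive raw score exports, a minimal sketch of what these four levels of analysis can look like in practice, assuming a hypothetical item-level spreadsheet; every column name and score below is invented for illustration and is not drawn from the Using Data program or any particular state or district system:

```python
import pandas as pd

# Hypothetical item-level test export: one row per student per test item.
# All column names and values here are illustrative placeholders.
scores = pd.DataFrame({
    "student":  ["A", "A", "B", "B", "C", "C", "D", "D"],
    "subgroup": ["ELL", "ELL", "ELL", "ELL", "Gen", "Gen", "Gen", "Gen"],
    "strand":   ["Fractions", "Geometry"] * 4,
    "item":     [1, 2, 1, 2, 1, 2, 1, 2],
    "correct":  [0, 1, 0, 1, 1, 1, 1, 0],
})

# Aggregate level: one overall percent-correct figure for the whole group.
print("Aggregate:", scores["correct"].mean())

# Disaggregate level: the same figure broken out by student population.
print(scores.groupby("subgroup")["correct"].mean())

# Strand level: performance by content strand (e.g., fractions vs. geometry).
print(scores.groupby("strand")["correct"].mean())

# Item level: percent correct on each individual test question.
print(scores.groupby(["strand", "item"])["correct"].mean())
```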
We suggest a four-phase data analysis process that starts with making predictions before even looking at the data. Predictions help data analysis teams to:
• Activate their thinking about the data.
• Raise and further explore assumptions embedded in the predictions.
• Deepen understanding of the perspectives and orientations colleagues bring to the data, which can help or hinder success.
• Contextualize the type of data that will be analyzed.
Predictions invite speculation and anticipation about what the data will say, based on what content has been taught and how. Do teachers of English language learners (ELLs) predict differently than regular classroom teachers do, based on how content has been offered to varied student populations? What accommodations have been put in place for ELLs and other academically challenged students, and how might those accommodations affect outcomes?
The prediction phase is followed by creating a visual of the data set, studying the visual together to make factual observations, and then capturing inferences about why results appear as they do. The process sounds straightforward, but the revelations are often astounding. Facilitating collaborative data inquiry conversations using the four-phase process often gets at understanding rigor and the importance of setting shared high expectations across student populations. When the four-phase dialogue is used to study disaggregate data, teachers sometimes find that all students are struggling with a particular concept, even those who are most often proficient, or that some students in historically underachieving groups are actually holding their own. The data analysis conversations are always interesting.
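To make the “create a visual” phase concrete, here is one short sketch of how a team might chart disaggregated proficiency rates before making factual observations; the groups and percentages are invented placeholders, not results reported in this article:

```python
import matplotlib.pyplot as plt

# Invented proficiency rates for illustration only; a real team would pull
# these from its own disaggregated state or benchmark test results.
groups = ["All students", "ELL", "Special ed", "Gen ed"]
percent_proficient = [62, 41, 38, 71]

fig, ax = plt.subplots()
ax.bar(groups, percent_proficient)
ax.axhline(70, linestyle="--", label="Shared expectation (70%)")
ax.set_ylabel("Percent proficient")
ax.set_title("Grade 4 fractions strand, by student group")
ax.legend()
plt.show()

# The team first records only what it sees ("two groups trail the general
# population by roughly 30 points"), then moves to inferences about why,
# as the four-phase process prescribes.
```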
At one school, teachers were shocked when not a single student scored “proficient” in fractions on the fourth-grade state standardized test, not just the struggling students they had predicted would fall short. The same pattern was revealed when district benchmark tests were analyzed and cross-referenced. Yet the teachers were convinced that students understood all the concepts, basing this on what they believed they had taught, along with informal classroom assessments, quizzes, and homework.
It wasn’t until they administered an open-response common assessment drawn from released state test items that they understood the issue. In deconstructing the math problem, they collaboratively ascertained the skills, concepts, vocabulary, and level of cognitive demand required to successfully answer the question. It became clear that the fractions content was not being taught at the same level of rigor that the state and district test questions required and that professional development was needed to unpack the curriculum and adjust classroom practices.
Investigate Causes
Once close data analysis reveals the problem, there is a natural tendency to jump to solutions and skip the third step in the process: investigating and verifying causes. Refrain from purchasing a new reading series, sending teachers to professional development sessions, or completely re-teaching a unit until you know the “why” behind the disappointing data. As one principal noted, “[Our school is under pressure to raise achievement.] We can’t afford to spend our time doing things that aren’t going to get results.”
Many factors can figure into a successful formula for higher achievement. Sometimes causes reveal the need for teachers to adjust practice and integrate specific new strategies into their instruction. Other times, causes relate to curriculum/assessment alignment, scheduling, attendance, or the need for increased teacher content knowledge.
The Using Data program scaffolds the causal analysis process with a pack of “Cause Cards,” which address reasons for low achievement in English language arts, mathematics, and science, as well as issues related to students from low socio-economic backgrounds and students receiving special education services. One set of cards in the pack focuses specifically on ELLs. These cards suggest causes that are often overlooked or historically absent from professional conversations in certain settings.
The causal analysis process helped an elementary school in Florida verify a root cause for low student achievement in mathematics that needed a relatively simple fix. Special activities such as award ceremonies, assemblies, and even extra pull-out instruction for certain students (often the case for ELLs) were scheduled during class time and were taking a profound toll on the school’s most challenged students. Once these activities were moved outside of regular classroom instruction time, results improved: 76% of the lowest quartile students made measurable gains in math in the first year of the change.
At another school, the causal analysis process revealed that a problem with geometry actually related to academic vocabulary: a question referring to “vertical angles” confused students who were used to the term “opposite angles.” The vocabulary being used in the classroom was not aligned with the vocabulary on statewide assessments.
At a special needs high school, teachers regularly used simplified texts, often below grade level, to give students access to required curriculum content. Even with grading accommodations, few students passed the New York State Regents exam in English language arts. As part of a professional development initiative with the Education Alliance at Brown University, the teachers began by setting a shared vision for success and outlined a new path to attain it. Instead of offering simpler texts, they presented students with the more challenging texts along with many scaffolds for accessing them. After a year, the results spoke for themselves: seven students in the small group passed the English exam.
In the previous two cases, teachers initially tried to help students by avoiding academic material they deemed too difficult or confusing. Like so many others who work with struggling students, these teachers wanted all their students to experience success, and many believed they were helping students by not introducing academic vocabulary or challenging texts considered beyond a student’s ability. “Easing off” of curriculum and content rigor to help students who may be more academically challenged is counter to the tenets of the CCSS, and counter to the “all kids can learn” commitment.
Once teachers have verified the causes of student learning problems that surface across multiple data sources, they are ready to develop a plan for success. The plan outlines what actions will be taken, when, and by whom. The plan is intended as a touchstone for a continuous process that monitors the impact of targeted changes on intended outcomes. The monitoring includes ongoing collaborative inquiry and conversations that always end with a sequence of questions: “What have we learned? What are the next steps? Who needs to know?”
At a time in education reform when some express concerns about where the CCSS will leave our most challenged learners, we welcome the standards as an opportunity to close the achievement gap, a pathway to success rather than a cause for concern. The Using Data program promotes an important core belief: “Significant improvement in student learning and in closing achievement gaps is a moral responsibility and a real possibility in a relatively short amount of time.”
As a moral responsibility, it is a call educators must answer. And, in truth, we are already seeing the fruits of new thinking and new approaches under the shared assumption: “All kids can learn when we establish standards all students are expected to achieve, and we continue to work with them until they have done so.”
Reference
* Used with permission and adapted from DuFour, R., DuFour, R., Eaker, R., & Karhanek, G. (2004). Whatever It Takes: How Professional Learning Communities Respond When Kids Don’t Learn (pp. 29–33). Bloomington, IN: National Educational Service.
Mary Anne Mather is a former classroom teacher. She is a professional development designer and consultant to nonprofits focused on integrating research-based strategies to improve teaching and learning. Recent clients include TERC Using Data, the Education Alliance at Brown University, the Regional Educational Laboratory: Northeast and Islands (REL-NEI), and Foundations, Inc.