Profiles of Effective PD Initiatives: Owen J. Roberts Middle School

We often hear of schools and districts that have built large-scale PD programs around Stenhouse books and videos. We wondered how they developed these initiatives and what sort of impact they were having on professional growth and student learning. So we asked Stenhouse editor (and longtime education journalist) Holly Holland to interview the innovative staff developers and administrators behind these initiatives and write a series of case studies. In the first installment of the series, Holly writes about how the staff of the Owen J. Roberts Middle School in Pottstown, Pennsylvania, restructured their thinking about assessment and grading through their work with Rick Wormeli’s book, Fair Isn’t Always Equal.

Be sure to leave a comment or ask a question — five lucky commenters will get a free copy of Fair Isn’t Always Equal!

Owen J. Roberts Middle School, Pottstown, PA

In his previous school administrative job, Robert Salladino led a faculty study of Fair Isn’t Always Equal (Stenhouse, 2006) and “felt this incredible connection” to author Rick Wormeli’s message about effective assessment and grading in the differentiated classroom. So in 2007, when Salladino became principal of Owen J. Roberts Middle School in Pottstown, Pennsylvania, he made sure every teacher had a copy of the book.

Salladino and his leadership team also attended a two-day workshop with Wormeli and began encouraging teachers to implement recommended strategies such as letting students redo assignments and ensuring that all recorded grades were accurate, consistent, meaningful, and supportive of learning. The research supporting those practices is so strong that Salladino was surprised when many faculty members resisted the changes.

“We were hearing every argument that Rick mentions in the book,” Salladino said. “We were living it.”

Like Wormeli, he believes grades should indicate progress toward learning instead of reflecting an arbitrary and inconsistent collection of academic and nonacademic factors that might include test results, compliance with homework policies, subjective evaluations of effort, and points for class participation.

“The traditional way of thinking about grades is they reward kids or punish kids. We really have to say that grades are informational,” Salladino said. “One of the things we took away from Rick’s work is that in order for grades to be meaningful we have to focus on mastery learning. It should be about how well I learned, not how I turned in assignments. For late work and redoing work, for instance, we said to teachers: ‘The way we currently structure school, it’s set up so everybody should demonstrate mastery at the same time. If we let go of grades used to rank kids, what should it matter if you learned something after the teacher retaught it in a different way? It’s getting to the destination. It doesn’t matter if you needed a different route than the rest of the class. Ultimately, did you learn what we wanted you to learn?’”

Some teachers protested that letting students redo work would encourage them to shirk responsibility, whereas Wormeli and other assessment experts claim the opposite: If students have to keep revising their work until they meet high standards, they develop persistence and respect for excellence.

History teacher Michael Brilla and science teacher Stephen DeRafelo likened the philosophy to how they coach wrestling. Just as they don’t stop guiding kids who perform poorly in a competition, which is itself a form of assessment of skill development, they shouldn’t give up on students who need more time or instruction to understand subject content.

“If a kid bombed a test, why would I just move on?” asked DeRafelo. “If I’m structuring my class to build a knowledge base, it’s negligent of me to move on. The only way to offer the opportunity and encouragement for kids who didn’t get a lab right or a test is, ‘Let’s do this again.’”

Both teachers acknowledge they were initially skeptical about the value of shifting from traditional assessment and grading practices that expect everyone to learn at the same pace. Brilla had an “aha” moment when a colleague used the metaphor of two families traveling on the same day to Disney World. Must one family cancel the entire trip just because car trouble caused a delay in reaching the destination?

“As far as retests and redos, we talked about how as adults all the high-stakes tests we take you have the opportunity to do them over—the SAT, the LSAT, even the driver’s license test,” Brilla said. “The idea that you could learn from your mistakes from your first evaluation made sense to me then.”

Brilla and DeRafelo said they don’t offer retakes without reinforcing accountability. Working with students, they carefully analyze test results and design strategies that will help them do better the second time. Students and their parents must sign off on the plans, assuming ownership of the process. Brilla also asks students to complete a self-evaluation and reflection after every social studies project, which they then use to craft a plan to correct mistakes.

“I think kids are more willing to take risks than before because they know they will have the opportunity to fix things the next time,” Brilla said.

Salladino believes some teachers at Owen J. Roberts Middle School have resisted making similar changes because grading is one of the few areas in education they can control, and many are reluctant to open their practices to scrutiny. To persuade the skeptics, Salladino encourages teachers who’ve shifted to standards-based grading to share their successes with colleagues. Krista Venza, the school’s instructional support facilitator, said she also guides her colleagues to free resources, including explanatory videos and answers to common questions, which Wormeli has provided at a companion website.

“It’s really helpful for me to use Rick’s words to share with teachers,” Venza said. “It’s a different way of hearing it than maybe what I’ve been saying, another way for them to get it.”

While guiding the school’s veteran teachers toward fair and consistent grading practices, Salladino said he also questions job candidates to determine whether they would be supportive of the shift toward mastery learning. Additionally, new teachers receive copies of Day One and Beyond (Stenhouse, 2003), Wormeli’s guide for new middle-grades teachers, as part of the school’s induction program.

Salladino said he tries to model principles he expects teachers to use in the classroom in his own work with the faculty. For example, when new teachers turn in lesson plans, he does not offer a cursory and meaningless review. Instead, he suggests specific changes and asks them to resubmit the lesson plans after reflecting and revising their work. Before he distributes school communications, Salladino also seeks feedback from the assistant principal.

“We need to have the idea that all of our work needs polishing,” he said.

Math teacher Matthew Charleston took that message to heart last school year when he instituted a policy that students could retake any unit exam or major test. This school year he extended it to include all assignments, quizzes, and tests—with an important caveat. To take a similar but more difficult second assessment, students must correct and explain all errors in the previous version. High-performing students are easily motivated by the chance to improve their grades, he said. For struggling students, Charleston provides time during class or during breaks throughout the day to offer guided corrections.

“You see that ‘aha,’” he said. “They have more confidence.”

Charleston and other teachers who have adopted the changes recommended in Fair Isn’t Always Equal said they frequently encounter colleagues—including their own teaching spouses—who disagree with the different expectations. They believe evidence of their own successes will eventually sway the doubters. Having collegial conversations about difficult issues and giving teachers access to good professional resources are part of the plan to change the school’s culture one mind at a time.

“I want them to believe this is right for kids, not because my boss told me I had to do it,” Salladino said. “This has been no easy journey, but we continue to forge ahead. With each month and marking period I think we are bringing more people on board.”

March 21st, 2013

Preparing students for open-response test questions

Though the Common Core assessments are still being developed, it’s clear that they will involve various kinds of “constructed response” questions beyond traditional multiple-choice items. Students will need to know how to read a series of excerpts, gather evidence from each piece, and formulate a coherent response. We asked Ardie Cole, author of Better Answers: Written Performance That Looks Good and Sounds Smart (2nd ed.), for her advice on preparing students for the upcoming tests.

As a teacher and a literacy coach, I lived through New York State’s early implementation of open-response tests—as well as the results. When those results arrived, I was asked, as a member of the correction committee, to “fix the writing problem.” Fortunately, our state returned copies of the actual tests that I could closely analyze. I was surprised to discover it was actually students’ creativity that could ruin a response! That aha moment occurred while I was reading hundreds of test responses. In class, students were being encouraged to imagine and visualize and to make their writing creative. But this test writing demanded more structure and specificity.

Students needed experience in another, more structured genre, one that demands factual evidence from acceptable sources. So we implemented that approach, and it worked. And when all was said and done, I sat down and wrote a book called Better Answers.

“The ‘Answer Sandwich’?” a math teacher asked me. “What the heck is that? Sounds like something I wouldn’t buy.”

I explained, “If you teach math, science, social studies, or technology and are starting to use assessments that have written responses—not just multiple-choice items—I bet you’ll be using the Better Answer Sandwich or something like it someday soon. And when you do, its protocol can be a lifesaver—and a time-saver! Plus I wager that you, yourself, will borrow it the next time you respond to an administrative memo, or when you return an item to a manufacturer, or when you write a letter to the editor in defense of some idea. In other words, it’s not only a school tool—it’s a real-world tool.”

It is the teachers of subjects like science, math, and social studies who quickly understand this sandwich structure and then build it into their lessons and assessments. Now, more English teachers are embracing the approach as well, in response to the emphasis the Common Core Language Arts Standards place on nonfiction reading and writing, argument, and use of evidence.

There’s a little more to that sandwich, though, than a couple of buns and the facts layered between them. For some, the Better Answer Sandwich itself may be enough—or at least an entrée. However, students taking the new Common Core assessments would definitely benefit from an expanded perspective, because they’d learn to analyze prompts before starting to write and they’d experience evaluating their completed responses. All of this is explained in Better Answers. Plus, the book’s CD provides lessons with digital PowerPoint slides, resources, real-world venues, and other goodies.

Across the country, students will see more constructed-response items on the tests they take in school, and research supports their inclusion. Still, school tests are but one reason to use this sandwich structure. Another reason to keep it up and running is that it’s even more valuable in the real world. Anytime—in school or out—that you’re asked to explain, analyze, compare, or describe, why not let the Better Answer Sandwich be your guide—a GPS that leads writers down the right-answer road?

February 27th, 2013

Now Online: So What Do They Really Know?

“My hope is that teachers will recognize that many of the tools they already use, when given a slight tweak, can serve as powerful assessments that will inform instruction and improve achievement.”

How are students progressing?
What do they need next?
How do I plan my instruction to get students to the next level?

These are the core questions that Cris Tovani asks when assessing students. Her new book So What Do They Really Know? shows teachers how to expand their definition of assessment and make it a powerful part of everyday instruction.

Drawing on her roots as an elementary teacher, Cris explains how she adapted the workshop model to the realities of secondary school—multiple classrooms full of skeptical, struggling adolescent readers and writers. Throughout the book, she shares real student responses to surveys and conversations, a play-by-play description of her English class block, and sample lessons that vividly demonstrate successful practices.

Readers will discover how to:

  • use formative assessments to differentiate instruction;
  • maximize student work time and immediately assess student learning within the workshop model;
  • get trustworthy data from annotations—the most important assessment tool for reading;
  • give students timely and useful feedback;
  • assign grades that accurately reflect what students learn and what teachers value.

So What Do They Really Know? will start shipping in mid-July. You can pre-order and preview the entire book now!

June 29th, 2011

Check in with Rick Wormeli

We have been regularly updating the special section of the Stenhouse website dedicated to Rick Wormeli’s book Fair Isn’t Always Equal. After a FREE registration, you will find a wealth of resources assembled by Rick to answer all of your questions about assessing and grading in the differentiated classroom.

In the latest video we posted, Rick discusses how to set up gradebooks.

You can also e-mail your questions to Rick. He will not be able to answer all e-mails, but check back on our Q&A page to see if your question has been answered.

December 22nd, 2010

Quick Tip Tuesday: How to design a rubric

“Rubrics are a popular approach for focusing learning and for assessing and reporting student achievement,” writes Rick Wormeli in his recent book Fair Isn’t Always Equal: Assessing and Grading in the Differentiated Classroom. “Designing rubrics may be more complex than teachers realize,” Rick continues, “but we get better at it with each one we do.” And to help with that practice, he outlines seven steps to designing an effective, useful rubric.

How to Design a Rubric
1. Identify the essential and enduring content and skills you will expect students to demonstrate. Be specific.

2. Identify what qualifies as acceptable evidence that students have mastered the content and skills. This will usually be your summative assessments, and from these you can create your pre-assessments.

3. Write a descriptor for the highest performance possible. This usually begins with the standard you’re trying to address. Be very specific, and be willing to adjust this descriptor as you generate the other levels of performance and as you teach the same unit over multiple years. Remember, there is no such thing as the perfect rubric. We will more than likely adjust rubrics every year they’re used.

4. At this point, you’ll have to make a decision: holistic or analytic? If you want to assess content and skills within the larger topic being addressed, go with analytic rubrics. They break tasks and concepts down for students so that students are assessed in each area. Analytic rubrics also require you to consider the relative weights (influences) of different elements. For example, in an essay, if “Quality of the Ideas” is more important than “Correct Spelling,” then it gets more influence in the final score (see the sketch at the end of this step). If you want to keep everything as a whole, go with holistic rubrics. Holistic rubrics take less time to use while grading, but they don’t provide as much specific feedback to students. In some cases, though, the difference in feedback is minor and doesn’t warrant the extra time an analytic rubric takes to design and use, especially at the secondary level, where teachers can serve more than 200 students.
Another way of looking at the difference is this: The more analytic and detailed the rubric, the more subjective the scores can be. The more gradations and shades of gray in a rubric, the more the score is up to the discretion of the teacher, and the more likely it is to differ from teacher to teacher, and even from day to day. The more holistic the rubric, the fewer the gradations and shades of gray, and thereby the more objective and reliable the scores can be. Of course, the more detailed the rubric, the more specific the feedback for both teacher and student. It’s very rare to generate a rubric that is highly detailed and analytical while remaining objective and reliable from teacher to teacher and over time.

Here are two examples: In a holistic rubric, we might ask students to write an expository paragraph, and the descriptor for the highest score lists all the required elements and attributes. With the same task in an analytical rubric, however, we create separate rubrics (levels of accomplishment with descriptors) within the larger one for each subset of skills, all outlined in one chart. In this case, the rubric might address: Content, Punctuation and Usage, Supportive Details, Organization, Accuracy, and Use of Relevant Information.

In a chemistry class’s holistic rubric, we might ask students to create a drawing and explanation of atoms, and the descriptor for the highest score lists all the features we want them to identify accurately. With the same task using an analytical rubric, however, we create separate rubrics for each subset of features—Anatomical Features: protons, neutrons, electrons and their ceaseless motion, ions, valence; Periodic Chart Identifiers: atomic number, mass number, period; Relationships and Bonds with Other Atoms: isotopes, molecules, shielding, metal/non-metal/metalloid families, bonds (covalent, ionic, and metallic).

Remember how powerful this becomes when students help design the rubric themselves. After working with a few rubrics that you design, make sure to give students the opportunity to design one. Determining what’s important in the lesson moves that knowledge to the front of students’ minds, where they can access it while they’re working. This happens when they have a chance to create the criteria with which their performances will be assessed.
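To make the weighting idea in this step concrete, here is a minimal sketch in Python. It is my illustration, not an example from the book: the categories, weights, and student scores are invented, and each category is scored on a 0–4 scale.

```python
# A sketch of weighted scoring in an analytic rubric. "Quality of the Ideas"
# carries three times the influence of "Correct Spelling" in the final score.
# All categories, weights, and scores are invented for illustration.

weights = {"Quality of the Ideas": 3, "Organization": 2, "Correct Spelling": 1}
scores = {"Quality of the Ideas": 4, "Organization": 3, "Correct Spelling": 2}

# Weighted average: each category's score counts in proportion to its weight.
total = sum(weights[c] * scores[c] for c in weights)
final_score = total / sum(weights.values())

print(f"Final rubric score: {final_score:.2f} out of 4")
# (3*4 + 2*3 + 1*2) / (3 + 2 + 1) = 20 / 6 = 3.33
```

A holistic rubric, by contrast, would collapse all of these categories into a single descriptor and a single score.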

5. Determine your label for each level of the rubric. Consider using three, four, or six levels instead of five for two reasons: 1) They are flexible and easily allow for gradations within each one, and 2) a five-level tiering quickly equates in most students’ and parents’ minds to letter grades (A, B, C, D, F) and such assumptions come with associative interpretations—the third level down is average or poor, depending on the community, for instance. The following list shows collections of successful rubric descriptor labels. Though most are written in groups of five, which I advise teachers not to use, they are provided in such groupings because that is what educators most commonly find on their district assessments. Look at the list’s entries as a sample reservoir of word choices.

  • Proficient, capable, adequate, limited, poor
  • Sophisticated, mature, good, adequate, naïve
  • Exceptional, strong, capable, developing, beginning, emergent
  • Exceeds standard, meets standard, making progress, getting started, no attempt
  • Exemplary, competent, satisfactory, inadequate, unable to begin effectively, no attempt

Descriptor terms need to be parallel; it’s important to keep the part of speech consistent. Use all adjectives or all adverbs, for example, not a mixture of parts of speech. Notice how this sequence on a rubric could be awkward for assessment and confusing to students:

  • Top, adequately, average, poorly, zero

6. Write your descriptors for each level, keeping in mind what you’ll accept as evidence of mastery. Once again, be specific, but understand that there is no perfect rubric. Alternative: Focus on the highest performance descriptor, writing it out in detail, and then indicate relative degrees of accomplishment for each of the other levels. For example, scoring 3.5 on a 5.0 rubric would indicate adequate understanding but with significant errors in some places. The places of confusion would be circled for the student in the main descriptor for the 5.0 level. (A sketch of this alternative appears at the end of this step.)

In my own teaching experience, this alternative has great merit. When students are given full descriptions for each level of a rubric, many of them steer themselves toward the second or third level’s requirements. They reason that there’s no need to be “exemplary”— the top level—when they’d be happy with the label “good” or “satisfactory.” These students either don’t believe themselves capable of achieving the top score’s criteria, or they see the requirements as too much work when compared with the lower level’s requirements. To lessen the workload, they are willing to settle for the lower-level score.

Don’t let them do this; don’t let them lose sight of full mastery. When all that is provided to students is the detailed description of full mastery, they focus on those requirements—it’s the only vision they have. All of their efforts rally around those criteria and, as a result, they achieve more of it.
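As a rough illustration of the alternative described in this step, here is a minimal sketch in Python. The score bands and their wording are my inventions, not Wormeli’s; the point is only that every score is reported as a relative degree of the one fully written-out 5.0 descriptor.

```python
# A sketch of the "alternative" approach: only the 5.0 performance gets a
# detailed descriptor; every other score is reported as a relative degree
# of that top level. The bands and phrases are invented for illustration.

def degree_of_mastery(score: float, top: float = 5.0) -> str:
    """Translate a numeric rubric score into a relative-degree phrase."""
    ratio = score / top
    if ratio >= 0.9:
        return "full mastery of the 5.0 descriptor"
    if ratio >= 0.7:
        return "adequate understanding, with significant errors in places"
    if ratio >= 0.5:
        return "partial understanding; several criteria not yet met"
    return "beginning understanding; most criteria not yet met"

# 3.5 out of 5.0 lands in the "adequate understanding" band, matching the
# example in the text.
print(degree_of_mastery(3.5))
```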

7. “Test drive” the rubric with real student products. See whether it accounts for the variable responses students make, ensuring those who demonstrate mastery get high scores and those who don’t demonstrate mastery earn lower scores. Ask yourself: “Does this rubric provide enough feedback to students to help them grow? Does it assess what I want it to assess? Does it help me make instructional decisions regarding students’ learning?” If it doesn’t do one or more of these things, the rubric may need to be reworked.

July 20th, 2010

