Analyzing Analysis: How the Parts Contribute to the Whole

The late, great writer Ursula Le Guin believed that “We read books to find out who we are. What other people, real or imaginary, do and think and feel… is an essential guide to our understanding of what we ourselves are and may become.” I believe this, too, which is why I made a case in my last post for bringing interpretation back into classrooms, as the means through which we can reap reading’s ultimate benefit. But here’s the other thing about interpretation: In addition to helping us develop moral compasses, empathy, and self-awareness, I think interpretation also helps us academically; it helps us analyze. In fact, I see interpretation as the too often unrecognized behind-the-scenes work needed for real analysis.

Think about it for a moment: Interpretation involves putting pieces of a text together to construct an understanding of its deeper meaning. It’s an act of construction; analysis, on the other hand, deconstructs, separating a whole into its component parts ostensibly to see how the parts affect the whole. But how can readers analyze the function of the parts if they don’t really have a vision of the whole?

I suppose it’s possible to do this if both the whole and its parts are known or familiar, like the dog and its disassembled parts above. But as I wrote in Dynamic Teaching for Deeper Reading, readers who don’t have a vision of the whole beyond the gist can wind up like the blind men in the old Indian tale, who attempted to understand what an elephant was by analyzing a part of it. One man touched the trunk and thought an elephant was a snake; another felt the tail and concluded it was a rope; a third stroked the ear and thought an elephant was a fan. No one was able to make sense of the whole by analyzing a part.

When you have a deeper vision of the whole, however, analysis can be far more insightful. The third graders I wrote about in my last post, for instance, who were reading The Old Woman Who Named Things, didn’t notice every detail or initially understand every word. But once they’d developed an interpretation that encapsulated the whole, they were able to go back to a passage like this and have lots to say about why the writer had decided to have the old woman read this particular book.

In this way, these students were analyzing without explicitly being taught to do so. No learning to use acronyms like RAFT or ACE or sentence starters and templates. Instead, their analysis was a natural outgrowth of having meaningfully interpreted the text. And if you’re wondering whether what I’m describing is actually analysis, just imagine this example reframed as a question on a standardized Common Core test: “How does this paragraph contribute to the author’s message (or the theme or the character’s development)?”

Questions like this form the bulk of both the multiple-choice questions and short constructed responses that students encounter on the PARCC, Smarter Balanced and New York State/EngageNY assessments. And in my work with teachers, I’ve been recommending that once students have been able to thoroughly discuss and interpret whatever texts they’ve read as interactive read-alouds, whole-class novels, or book club books, you invite them to consider a few analysis questions that either you or the students themselves can create by combining one word or phrase from each column (like the Chinese restaurant menus of my childhood):

I keep finding new words to add to this chart, so it’s a work in progress. But one thing I know for sure is that while students might need to learn the meaning of and nuances between these verbs, they’ll be far more ready to answer these kinds of questions if they’ve thought deeply about and interpreted what they’ve read, rather than staying on the surface or, as many students do, not really starting to think until they hit the questions. And interestingly enough, I’m not the only one who believes this.

Last month, I came across a blog post by Timothy Shanahan called “If You Really Want Higher Test Scores: Rethink Reading Comprehension Instruction.” In the early days of the Common Core, Shanahan spent much time promoting the teaching of close reading by having students answer text-dependent questions over the course of three readings, the first to consider what the text says, the second how it says it, and the third what it means. More recently, however, he’s recognized that this has led many teachers to have a warped view of what it means to read. “Simply put,” he writes,

Reading is NOT the ability to answer certain kinds of questions about a text. . .  Not knowledge, comprehension, analysis, synthesis or evaluation questions. Not “right there,” “think and search,” “author and me,” or “on my own” questions. Not main idea, detail, inference, structure or author’s tone questions.

[Instead] reading is the ability to make sense of the ideas expressed in a text [through] the ability to negotiate the linguistic and conceptual barriers of a text (or what I call ‘the problems’ a given text poses). Students who can make sense of a text’s ideas will be able to answer any kind of question about that text, while students who fail to scale those linguistic and conceptual barriers (i.e., to solve those problems) will struggle with the simplest of questions.

And how does he propose teaching kids to do this? Basically, once they’ve learned to decode, by teaching them how to interpret.

Of course, the title of the blog post suggests that Shanahan sees higher test scores as the end goal of interpreting, whereas I see them as the by-product of more authentic and meaningful work. But just think about it: If we provided students with lots of opportunities to interpret right from the start of the year, with time set aside to regularly practice and experience moving from interpretation to analysis, we wouldn’t have to drive ourselves and our students crazy with test prep at this point in the year. So let’s trade in all those literary analysis sentence stems, acronyms and worksheets and focus on supporting student interpretations as the backbone of analysis.


Pushing Back on the United States of Pearson


Last week I attended this year’s IRA Convention, where every registered participant not associated with an exhibitor’s booth had to wear a name badge around their neck emblazoned with Pearson’s name and logo—which, in effect, made each and every one of us a walking advertisement for the corporate giant that seems to be taking over public education. Also last week, third- through eighth-grade students throughout New York State were sitting at their desks with sharpened pencils, bubble sheets and test booklets published by Pearson, trying to make it through the three-day ordeal that was this year’s state ELA exam.

Pearson created the tests as part of a $32 million, five-year contract with New York State to design Common Core-aligned assessments, and the word on the street was that they were going to be hard. New York City had, in fact, already warned schools and parents to expect a dramatic drop in scores, and the city spent $240,000 on what the New York Daily News called “a splashy ad campaign” explaining the drop to parents through posters that appeared in the subway and on ferries.

What all that money couldn’t buy, however, was any peace of mind, as reports from parents and teachers attest on sites such as WNYC’s Schoolbook, the New York City Public School Parents blog, and the Teachers College Reading and Writing Project’s “Responses to the NYS ELA Exam” page. There you’ll find stories of students in tears, vomiting and even soiling themselves as their stress and anxiety levels mounted. And you’ll hear many tales of students running out of time, which was in short supply. According to testing expert Fred Smith, whose piece on the New York State tests appeared in the Washington Post’s “The Answer Sheet,” students had 7% less time per item than last year, when the passages and questions weren’t as difficult. Not only does this make no sense, it’s also profoundly ironic: One of the Standards’ Six Instructional Shifts specifically tells teachers to be “patient [and] create more time in the curriculum for close and careful reading,” yet this year’s tests seemed to value speed over thoughtfulness and depth. And students had to waste what precious time they had on passages and questions that Pearson was field testing—that is, trying out for use on future tests—which served Pearson’s purposes, not students’.

As Smith says, such field testing “raises legal and ethical questions about forcing children to serve as subjects for commercial research purposes without their parents’ knowledge and informed consent.” And this wasn’t the only ethical question this year’s test brought up. As reported in the New York Post, At the Chalk Face and Diane Ravitch’s blog, several teachers noticed passages on the 6th and 8th grade tests that were in Pearson textbooks, giving students who’d read those texts in class an unfair advantage—and perhaps encouraging schools to buy additional Pearson products to up their students’ chances of scoring well.

There were also reports of other kinds of product placement, with brand names, such as Nike, IBM and Mug Root Beer, appearing in many of the passages. Pearson has said this is an inevitable consequence of using ‘authentic’ texts. But while brand names do, of course, appear in lots of books and articles, you usually don’t see trademark symbols or footnotes such as the one that supposedly explained that “Mug Root Beer is the leading brand of Root Beer” beneath a passage that referred to the brand.

I say supposedly because the tests are kept under lock and key, with teachers jeopardizing their careers if they reveal specific details of the contents. This lack of transparency again raises questions about corporate versus citizens’ rights—though parents exercised their right to have their children ‘opt out’ of the test in record numbers this year, and a petition has started circulating online demanding that the State cancel its contract with Pearson.

The lack of transparency also means that parents and other taxpayers who have financed the tests cannot judge for themselves how well, or how poorly, the tests lived up to Education Secretary Arne Duncan’s claim:

“For the first time, many teachers will have the state assessments they have longed for—tests of critical thinking skills and complex student learning that are not just fill-in-the-bubble tests of basic skills but support good teaching in the classroom.”

The full battery of what Duncan calls these “game-changer” tests is not due out until the 2014-15 school year, but New York State and Pearson have said that this year’s assessments are in line with what’s to come—and Pearson’s in a position to know. They’ve been deeply involved in developing test items for PARCC, one of the two consortia that have received $360 million in federal funds to create the new assessments. Yet according to The National Center for Fair and Open Testing, these ‘game-changer’ exams will be “only marginally better than current tests” and will waste an enormous amount of time and money for everyone except Pearson.

As for IRA, it was heartening to hear (at least in the sessions I attended) more emphasis placed on best practice than on data and more talk about meeting the needs of students than the needs of the test. There was even a little insurrection going on with those Pearson name badges: My fellow presenter Mary Lee Hahn of the A Year of Reading blog bought some clear packing tape and used it to cover Pearson’s logo with her own business card, and several people used magic markers and editing marks to change PEARSON to A PERSON.

All that, plus the volume of online chatter I discovered about New York’s tests once I got home, made me think that there might still be a chance to raise our voices, flex our muscles, and reclaim the conversation from Pearson about where education is going.


Educator, author and songwriter Barry Lane pushing Pearson out of the way at the 2013 International Reading Association Convention