Tuesday, September 6, 2016

When It Makes More Sense to Eat the Marshmallow

Almost everyone has heard of the marshmallow test. Researchers left small children in a room with a marshmallow. They told them – if you wait to eat the marshmallow, I’ll give you two when I get back. Then they watched what happened. Some children ate the marshmallow as soon as the door clicked behind the researcher. Others resisted the urge with a variety of (adorable) strategies. A follow-up study of the children showed that the marshmallow resisters – the ones who could delay gratification for a better reward through self-control – were more successful in the future. Schools have begun to spend a lot of time parsing how best to develop this capacity in children.

Then Celeste Kidd thought differently. She was volunteering at a homeless shelter and began to wonder – what if one of these children were given a treat and told to wait before eating it? If the child gobbled it quickly, would a lack of self-control really explain why? She thought that expectations would play a bigger role. These children might expect to have their treat stolen – a real risk in a homeless shelter – and they might not expect adults to follow through on their promises – a real risk when adults are suffering. For these children, then, the most rational choice would be to eat the marshmallow right away. That is, it isn’t that they lack self-control, but rather that they are making the most sensible choice given the situation.

She decided to test her hypothesis by adding another element to the marshmallow test. She began with an art project. The children were given a package of old, used crayons and told they could use those to draw a picture or wait until the researcher returned with a brand-new set of exciting art supplies. All the children waited. After a brief delay, the researcher returned either with the promised set or without it, apologizing and saying, “I’m sorry, but I made a mistake. We don’t have any other art supplies after all. But why don’t you just use these used ones instead?” Then came the marshmallow test: the children were given a marshmallow and told that they would get two if they could wait. Children who had experienced the unreliable researcher – the one who did not bring art supplies – waited on average only 3 minutes. Children with the reliable researcher waited an average of 12 minutes. In other words, the children quickly adapted their expectations to their experiences and acted accordingly.

It makes me wonder how many of the conclusions we draw about children are misguided. We keep trying to look inside for their motivations, aptitudes, and abilities, when we often need only look outside and ask what prompts their actions. Perhaps then, we might begin to break the cycle of expectations that closes around the children who can expect little (why wait? why ask for help? why try?); teachers, seeing their “lack of self-control,” expect less of them. And so it goes, unless we see things differently.

Thursday, September 1, 2016

What is the cost of "best practice"?

In his book The End of Average, Todd Rose begins with a story of mysterious crashes of United States Air Force planes in the late 1940s. After multiple inquiries led nowhere, researchers wondered if the pilots had gotten bigger since 1926, when the cockpit had been designed around average body sizes. Using the ten dimensions of size most relevant to flying, one of the researchers made a startling discovery – out of the 4,063 pilots measured, not one airman fit within the average range on all ten dimensions. Even more surprising, he found that even using only three dimensions, less than 3.5 percent of pilots were average-sized on all three. In other words, there is no such thing as an average pilot. As Rose puts it, “If you’ve designed a cockpit to fit the average pilot, you’ve actually designed it to fit no one.” In an environment where split-second reaction times are demanded, a lever just out of reach can have deadly consequences. The fix was adjustable seating. Not only did it prevent deaths, but it opened the possibility for people who aren’t even close to “average” – like women – to become pilots.
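The arithmetic behind that finding is easy to see with a quick simulation. This sketch assumes each body dimension is an independent, normally distributed measurement and that “average” means falling in the middle 30% of the range on that dimension (the band reportedly used in the original Air Force study); real body measurements are correlated, which would nudge the fractions upward, but the pattern holds:

```python
import random

random.seed(0)

def fraction_average_on_all(n_pilots, n_dims, cutoff=0.385):
    """Fraction of simulated pilots landing in the middle band on every dimension.

    For a standard normal variable, |z| < 0.385 captures roughly the
    middle 30% of the distribution.
    """
    hits = 0
    for _ in range(n_pilots):
        if all(abs(random.gauss(0, 1)) < cutoff for _ in range(n_dims)):
            hits += 1
    return hits / n_pilots

# With 3 dimensions, roughly 0.3 ** 3, i.e. a few percent, are "average" on all.
print(fraction_average_on_all(4063, 3))
# With 10 dimensions, roughly 0.3 ** 10 – essentially no one qualifies.
print(fraction_average_on_all(4063, 10))
```

Each added requirement multiplies the odds by about 0.3, so being “average” on all ten dimensions is, for practical purposes, impossible in a group of four thousand.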

The dimensions of a learner are even more multifaceted and complex, and they diverge, we know, at least as widely. Yet we continue to measure our children according to averages that fit no one, to apply solutions based on averages, to focus on “best practice” gleaned, of course, through averages. Consider John Hattie’s widely touted list, a synthesis of now more than 1200 meta-analyses about influences on learning, ranked according to their effects on student achievement. How is the effect size calculated? By dividing the observed change in average scores by the standard deviation of the scores. Hattie chooses 0.4 as the threshold at which an effect size is significant enough to make a difference to students. How did he choose that number? It is the average effect size of the thousands of interventions studied.
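To make the averaging concrete, here is a minimal sketch of that kind of effect-size calculation (a standardized mean difference in the style of Cohen’s d); the scores are hypothetical, invented for illustration, not drawn from Hattie’s data:

```python
from math import sqrt
from statistics import mean, stdev

def effect_size(before, after):
    """Standardized mean difference: change in the average score,
    expressed in units of the pooled standard deviation."""
    s_pooled = sqrt((stdev(before) ** 2 + stdev(after) ** 2) / 2)
    return (mean(after) - mean(before)) / s_pooled

# Hypothetical class scores before and after an intervention:
# every student gains 4 points against a spread of about 8 points.
before = [40, 45, 50, 55, 60]
after = [44, 49, 54, 59, 64]
print(round(effect_size(before, after), 2))  # → 0.51
```

Notice what the number hides: it summarizes the shift in the class average, and says nothing about which individual students moved, by how much, or in which direction.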

Our focus on “best practice” is like lavishing all our time on refining the fixed pilot seat, fitting it ever more precisely to the average. The trouble is, no matter how effective our strategies are “on average,” they don’t necessarily (or even likely) fit the children in front of us in our classrooms. Perhaps it’s time to spend our time thinking in a different direction entirely. Who knows what possibilities might open.