Monday, November 18, 2013

Lessons for Our Schools from Apple and Microsoft

Let me begin by saying that I am emphatically NOT suggesting we run our public schools “more like a business.” Businesses exist to make a profit, not to provide a service. And manufacturers practice strict quality control over which “raw materials” they allow to pass their doors, unlike traditional public schools, which welcome every child — no matter how poorly equipped, prepared, or supported at home.

I am suggesting, however, that the examples of both great and poor techniques used by two of our largest and most successful companies can teach us which mistakes to avoid and which practices to emulate.

First, What NOT to Do

An insightful analysis by Kurt Eichenwald (Vanity Fair, August 2012) called “Microsoft’s Lost Decade” explains how a company that once dominated the tech industry has “fallen flat in every arena it entered: e-books, music, search, social networking, etc.” Again and again, it blew long leads on competitors, and now even its strengths in operating systems and Office are being threatened by the free Google Chrome OS and Google Docs. A single Apple product, the iPhone, now produces higher sales than the entire Microsoft Corporation.

How did it come to this?

Eichenwald found intriguing and instructive answers in “interviews with dozens of current and former executives, as well as in thousands of pages of internal documents and legal records.” They might be summarized as (1) emphasizing immediate profits and losses killed innovation and design, and (2) force-ranking employees against one another killed collaboration and actively undermined the core business.

The first change came when a brilliant technical guy, Bill Gates, was replaced as CEO by Steve Ballmer — not a “product guy” but rather “a businessman with a background in deal-making, finance, and product marketing.” If great marketing could produce more revenue, the products themselves were less important. Everyone started watching the daily stock price, and long-range research and development suffered. Even with huge leads on e-readers and mobile operating systems, Microsoft was left in the dust by competitors.

But the real damage was done by forcing employee evaluations to fit the bell curve of a normal distribution. Microsoft called this “stack ranking,” and the resulting corporate culture of “self-immolating chaos” nearly sank the company. No matter how good staffers were, only ten percent of each unit could be ranked as excellent, and ten percent would be ranked as poor, with three other ranks between. Not surprisingly, they learned not to collaborate, to withhold vital help and information, and even to actively sabotage one another. After all, they “were rewarded not just for doing well but for making sure that their colleagues failed” — and fail they did. One pernicious feature of the system was that “outcomes were never predictable.” Even achieving all your objectives was no guarantee of a high ranking; crippling the competition, meaning your own coworkers, was the safest way to stay afloat. And “worse, because the reviews came every six months, employees and their supervisors — who were also ranked — focused on their short-term performance, rather than on longer efforts to innovate.”
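
To make the mechanics concrete, here is a minimal sketch in Python of a forced curve. The ten-percent top and bottom figures come from the description above; the three middle buckets, their labels, the function name, and the sample scores are illustrative assumptions of mine, not Microsoft’s actual formula.

```python
# Minimal sketch of forced-curve ("stack") ranking. Only the 10% top /
# 10% bottom figures come from the article; the middle buckets, labels,
# and sample scores are illustrative assumptions.

def stack_rank(scores, buckets=((0.10, "excellent"), (0.30, "good"),
                                (0.20, "average"), (0.30, "below average"),
                                (0.10, "poor"))):
    """Label people purely by rank order: a fixed fraction of every unit
    lands in each bucket, regardless of anyone's absolute performance."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    n, labels, start, cumulative = len(ordered), {}, 0, 0.0
    for i, (fraction, label) in enumerate(buckets):
        cumulative += fraction
        end = n if i == len(buckets) - 1 else round(cumulative * n)
        for name in ordered[start:end]:
            labels[name] = label
        start = end
    return labels

# Even a uniformly strong unit still gets its mandated "poor" performer:
unit = {"Ana": 97, "Ben": 96, "Cho": 96, "Dev": 95, "Eli": 95,
        "Fay": 94, "Gus": 94, "Hana": 93, "Ivan": 93, "Jo": 92}
for person, label in stack_rank(unit).items():
    print(person, label)
```

A 92 and a 97 differ by almost nothing here, yet someone must wear the “poor” label; that is exactly the dynamic Eichenwald’s sources describe.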

Sound familiar?

Spurred by the incentives of the federal Race to the Top grant program, Michigan (and most other states) adopted laws requiring teachers and schools to be similarly “stack-ranked” against one another in a normal distribution. No matter how well they perform objectively, only a few will be designated as top performers, and a fixed percentage will be labeled as failures. And such teacher evaluations have proven to be capricious, if not random, from year to year: one year’s “teacher of the year” will be rated as ineffective the next. Just doing your individual job effectively is not enough to guarantee a high ranking. And the test scores of the moment are The Most Important Thing, crowding out the serious intellectual work it takes to perfect one’s professional craft.
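
A toy simulation illustrates the year-to-year churn. Everything below is invented for illustration (the “true skill” numbers and the amount of noise standing in for which students happen to land in a class); it is not any state’s actual growth or value-added model, only a sketch of why rankings built on a noisy measure reshuffle themselves every year.

```python
# Toy simulation: rank "teachers" on true skill plus measurement noise.
# All numbers are invented; this is not any state's actual evaluation
# or value-added formula.
import random

random.seed(2013)

# Ten teachers whose true effectiveness barely differs.
true_skill = {f"Teacher {i}": 70 + i for i in range(10)}

def yearly_ranking(skill, noise_sd=8.0):
    """Observed score = true skill + noise (which students you happened
    to get, test-day conditions, scoring quirks); rank on the result."""
    observed = {t: s + random.gauss(0, noise_sd) for t, s in skill.items()}
    return sorted(observed, key=observed.get, reverse=True)

year1 = yearly_ranking(true_skill)
year2 = yearly_ranking(true_skill)

top, bottom = year1[0], year1[-1]
print(f"{top}: rank 1 in year 1, rank {year2.index(top) + 1} in year 2")
print(f"{bottom}: rank {len(year1)} in year 1, rank {year2.index(bottom) + 1} in year 2")
```

When the noise is large relative to the real differences in skill, whoever tops the list one year can land almost anywhere the next, which is the instability described above.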

To absolutely no one’s surprise, the “poorest” performers are those who teach special education students, English language learners, and economically disadvantaged students. These students start from behind, have greater needs and fewer supports, and tend to cluster in the more poorly resourced schools with the least experienced teachers. Ask yourself: if you were a teacher, and your job depended upon how much progress your students made on standardized tests, why would you want to teach those destined to perform more poorly? Sure, there are intrinsic rewards in helping those who most need the help, but you still have a family to support.

Teachers may not sink to actively sabotaging one another, as Microsoft workers did, but will they be inclined to share their secrets, their reliable tricks for helping kids to learn better? Will they actively collaborate to help one another and their schools and districts reach organization-wide excellence? Will they put in the time to study, test, evaluate, and share best practices, getting ever better at what they do, rather than doing endless test-prep for the only measure that seems to count? We had better hope so, but this discredited management system does everything possible to prevent and undermine that vital collaboration and professional development.

Epilog

Microsoft has finally learned better. A few days ago, its head of human resources announced, “No more curve…. No more ratings.” The performance evaluation system had to change in order to foster the teamwork and long-range perspective that once made the company great. “We are optimizing for more timely feedback and meaningful discussions to help employees learn in the moment, grow and drive great results.” Wow. Do you think we could once again do that in our public schools? Microsoft had a “lost decade” under this misguided system. It appears we are headed for a “lost generation.”

What TO Do

Apple’s “design wizard” insists that the company will always choose product quality over any strictly numerical measure of it. By that, he means that product specifications are not good proxies for how good a product is or how satisfying it is to use. Writing in USA Today in September 2013, Marco della Cava profiled Jonathan Ive, “the fertile and detail-obsessed mind behind culture-shaping products such as the lollipop-colored iMacs (1998), the iPod (2001), iPhone (2007) and iPad (2010).” Ive’s hardware design group and Craig Federighi’s software group collaborated to produce the new iOS 7 and the latest iPhones, the 5s and 5c.

This duo notes that people care more about the quality of photos they take than the megapixels their phone boasts. The price and the screen size are similarly easy to measure but imperfectly aligned with perceived quality. As Ive notes, “There’s a more difficult path, and that’s to make better products, ones where maybe you can’t measure their value empirically. This is terribly important and at the heart of what we do.”

Apply THAT to schools!

Imagine if, instead of focusing seemingly all of our time and energies on “attributes that you can measure with a number,” in Ive’s words, we looked at students — our product — more holistically. Test scores are always a proxy for something else: knowledge, abilities, likelihood of future success, “college readiness.” Because they are simple numbers, they are very easy to compare across nations, states, districts, schools, and individuals. But test scores neither accurately measure nor reliably correlate with those real outcomes. Just as Microsoft found its personnel evaluations unpredictable, test scores vary from one iteration to the next in inexplicable ways. That is because they are actually quite poor measures. Using them to evaluate teachers and schools compounds the error, since the inputs that produced that outcome in a child include many, many more factors than the ones teachers and schools control.

They don’t measure what we pretend they do, and they are unreliable besides, varying in ways that we cannot explain. We rely upon them solely because they are numbers and therefore carry an unwarranted cachet of “objectivity.” As Apple’s successful designers assert, numbers do not begin to tell the real story. What we really want for our children’s education, for our end “product,” is graduates who know how they learn, know how to find and evaluate information, can acquire skills on their own, know how to find and ask for appropriate help, are intellectually curious, take initiative and responsibility in their own learning, are self-directed and self-disciplined, can work collaboratively with others, and are confident in their written and presentation skills. Isn’t that what you want in your coworkers, your hires, your managers … your own children?

The Apple folks say that they “care about how to design the inside of something you’ll never see, because we think it’s the right thing to do.” At its core, schooling should be about forming and molding the inside of our children, to help them become the top-quality products the world recognizes as the best. I fervently wish we’d catch a clue from the geniuses amongst us on how to do that.

1 comment:

  1. A cultural anthropologist I know studies people by interviewing them. She doesn't do "quantitative" analysis: she doesn't have people fill in blanks on prepared questionnaires and then make graphs of the answers and draw conclusions from the pretty numbers. Instead, she does what is called "qualitative" research (I think mostly to distinguish it from the other kind). She changes her next question based on a prior response and then digests far-ranging discussions into analysis and recommendations. Her approach is considered less "scientific" because it's not stuffed with statistics like test scores.
    This tension between easily measured outcomes and deeper, more complex observations seems to span disciplines. Great post and great analysis of what doesn't work and why. You should send this to the people you reference in your post. Your approach should be more widely discussed.
