
notes on the double-edged sword: on Jaron Lanier on algorithmic approaches to educational policy

I generally find Jaron Lanier a bit too reductionist, a bit too either/or, for my tastes. His recent New York Times column arguing for a return to innovative, creative educational approaches and a turn away from problematic assumptions inherent in algorithmic approaches to assessment ("Does the Digital Classroom Enfeeble the Mind?" Sept. 16, 2010) is characteristically both reductionist and either/or. This makes me worried, because the piece is also--characteristically--poetic and moving, which means we education-y types have been slinging it around like Tea Party candidates sling xenophobia and hate. Because it took me a little while to realize Lanier's message should worry us, I sent it on to my Twitter followers and drafted a glowing review of the piece to post here before realizing that the piece makes its own problematic assumptions about education and technologies and therefore calls for a much more critical read.

Lanier's biggest concern, one with which I sympathize, is that turning issues of educational accountability over to computers and computer-scored tests results in a double-edged sword that pushes both the most creative teachers and the most unimaginative teachers out of the classroom. Reflecting on his father's decision, in middle age, to become an elementary school teacher, Lanier writes that he

would have been unable to teach to the test. He once complained about errors in a sixth-grade math textbook, so he had the class learn math by designing a spaceship. My father would have been spat out by today's test-driven educational regime.

But this is not the whole story.... It's a romantic notion, the magic of teaching, but magic always has a dark side. Trusting teachers too much also has its perils. For every good teacher who is too creative to survive in the era of No Child Left Behind, there's probably another tenacious, horrid teacher who might be dethroned only because of unquestionably bad outcomes on objective tests.

No matter where you stand on NCLB and the use of standardized tests, you have to admit that Lanier has a point. Using standardized testing statistics to make decisions, at a distance, about the quality of a teacher may very well help us push the terrible educators out of the classroom, but it's likely to also push out the most innovative teachers, the ones whose creativity, whose ability to foster deep and lifelong commitments to learning, don't show up in test scores.

The problem, though, is that Lanier connects this real, worrisome concern to the windmill he's been tilting at for some time: his conviction that internet technologies dehumanize us.

Lanier argues that while algorithmic, predictive approaches to some human experiences are "heartless," they're at least better than the alternative. As an example, he describes his frustration with algorithms that predict what sort of music he'd be interested in hearing, based on his previous musical selections. Lanier, a musician himself, writes that

(n)othing kills music for me as much as having some algorithm calculate what music I will want to hear. That seems to miss the whole point. Inventing your musical taste is the point, isn't it? Bringing computers into the middle of that is like paying someone to program a robot to have sex on your behalf so you don't have to.

And yet it seems we benefit from shining an objectifying digital light to disinfect our funky, lying selves once in a while. It's heartless to have music chosen by digital algorithms. But at least there are fewer people held hostage to the tastes of bad radio D.J.s than there once were. The trick is being ambidextrous, holding one hand to the heart while counting on the digits of the other.

Of course, this argument ignores the fact that "bad D.J.s" are often themselves the products of a different set of algorithms, numbers calculated by music producers, radio conglomerates, and the FCC. In fact, as most of us know (or at least suspect), a pretty significant proportion of our daily experiences are managed by algorithms--by computers. When we need to quickly learn about an event, a term, a date, a location, we Google it. We don't go to Yahoo or Ask. How come? Because Google's algorithms produce better, easier-to-navigate search results. When we add a new friend on Facebook, algorithms point us to other people we might know--and often, these suggestions help us broaden our social circles in useful, productive ways. Certainly we should worry about net neutrality and the dominance of Google, Facebook, and similar algorithmically driven tools; but in my view net neutrality is a political concern and not a strictly algorithmic one.

That's the first bone I have to pick with Lanier. The second is with what he lists as the deeper concern: what he thinks is the underlying message of algorithmic, statistically driven tools. He writes that

(s)ome of the top digital designs of the moment, both in school and in the rest of life, embed the underlying message that we understand the brain and its workings. That is false. We don't know how information is represented in the brain. We don't know how reason is accomplished by neurons.

I don't think he's quite accurate in this assessment. It seems to me that the real message is not "we understand how the brain works" but "we understand how people behave." In other words, the algorithms used by Google, Facebook, Twitter, Pandora and the like couldn't care less about how our brains are wired; what matters to them, what makes for "good," useful results, is making sense of the social operations that drive our participation online. Pandora's algorithm, for example, relies on the Music Genome Project, but the good folks at Pandora assume that musical tastes are about much more than DNA. Based on my musical preferences in the channel I call "Ani DiFranco Radio," it's entirely possible that I might get offered a Britney Spears song. I don't like Britney Spears, and she certainly doesn't belong on Ani DiFranco Radio. Why not? Not because the musical structures of a Britney Spears song are opposed to my expressed musical tastes but simply because I don't like Britney Spears. Pandora lets me register a "thumbs down" and thus makes it less likely that I will be offered another Britney Spears song.
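For readers who like to see the mechanics, here is a minimal sketch of that kind of feedback loop. This is not Pandora's actual algorithm--the class name, the artist names, and the penalty factor are all invented for illustration--but it shows the basic behavioral idea: a "thumbs down" doesn't ban an artist, it just down-weights the chance they come up again.

```python
import random

class StationSketch:
    """Toy recommender: explicit feedback reshapes future picks."""

    def __init__(self, artists, penalty=0.1):
        # Every artist starts equally likely; penalty is the factor
        # applied to an artist's weight on each thumbs-down.
        self.weights = {artist: 1.0 for artist in artists}
        self.penalty = penalty

    def thumbs_down(self, artist):
        # Down-weight rather than remove: "less likely," not "never."
        self.weights[artist] *= self.penalty

    def next_artist(self, rng=random):
        # Pick the next artist in proportion to current weights.
        artists = list(self.weights)
        weights = [self.weights[a] for a in artists]
        return rng.choices(artists, weights=weights, k=1)[0]

station = StationSketch(["Ani DiFranco", "Britney Spears"])
station.thumbs_down("Britney Spears")
# Britney Spears now carries a tenth of her original weight: still
# possible on the station, but far less likely to be offered.
```

Note that nothing in this sketch models brains or neurons; it only models behavior, which is exactly the point above.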

Likewise, when people argue that, for example, the SAT is a more accurate predictor of first-year college success than extracurricular involvement, parents' education levels, or other benchmarks, they're not arguing that the SAT understands how the brain works. They're making an argument about validity--basically, they argue that the SAT accurately measures what it's intended to measure.
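Predictive validity, as used in that argument, is typically reported as a correlation between the test score and the outcome it is supposed to predict. The sketch below uses entirely invented numbers, just to make the shape of the claim concrete: a high correlation says the test predicts the outcome, and nothing more.

```python
from statistics import mean

def pearson_r(xs, ys):
    # Standard Pearson correlation coefficient, written out explicitly.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented data for six hypothetical students: SAT score and
# first-year college GPA.
sat_scores = [900, 1050, 1100, 1200, 1350, 1450]
first_year_gpa = [2.1, 2.6, 2.8, 3.0, 3.4, 3.7]

r = pearson_r(sat_scores, first_year_gpa)
# A correlation near 1.0 means the test predicts the outcome well --
# it says nothing about whether the outcome is a fair or complete
# measure of "learning."
```

The design point is the one made below about bias: the correlation can be perfectly real while both variables reflect the same underlying advantages.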

It's fairly well established that if you want to do well on the SAT, you should do your best to be rich, white or East Asian, and male. It also turns out that being rich, white or East Asian, and male makes you more likely to succeed in your first year of college. In this respect, the SAT is making a perfectly valid prediction of college success. The issue, then, is not with the SAT itself but with the assumptions about what "counts" as learning--assumptions that lead to gender, racial, and class biases in both the SAT and in institutions of higher education.

Lanier is right that we should worry about the use of standardized tests to make accountability decisions, but it's not because the algorithms behind these tests erroneously claim to know how our brains work. It's because those algorithms erroneously claim to know beyond a doubt what "counts" as good learning, what "counts" as good teaching, and what "counts" as success. These social claims are far more dangerous, far more potentially destructive, than any biological or neuroscientific claims could ever be.



You are right on in your analysis. Lanier's writing and persuasive skill make him a very dangerous person in many ways. Demonizing online community due to a lack of understanding of what it is really like and of the value to be derived is a pervasive problem. That he is smart enough to see beyond the stereotype but does not - that is downright scary.

The topic of how we really should be addressing education reform definitely needs a lot more discussion, and as you said, a path to move away from reductionist tendencies.

Liz D.


Jenna, you might enjoy this (critical) post about Lanier.

Howard Rheingold surfaced it on FriendFeed a couple of weeks ago.

Liz D.


Jenna, you are exactly right that Lanier is right about the dehumanizing impacts on students and teachers of standardized computerized testing and you are also exactly right that Lanier is wrong about the dehumanizing impact of "the digital." At the P3 conference last week, the lovely, human David Gibson did a brilliant job showing why current item-response testing is so impoverished but also said that, with new forms of computation based on game-challenge metrics, if standardized, computer-gradable tests are seen as important, they are coming. All that needs to change is the institutions that profit from current forms of testing. In other words, there are algorithms for new kinds of testing that combine human and machine grading and that give students in-time, progressive feedback on their learning while rewarding innovation, creativity, and learning from mistakes and that do not reduce all the world's possible inferences to six silly bubbles of a, b, c, d, none of the above, all of the above. Lanier has a genius for insight and also for being blind to the implications of his own insights. But the conversation is the thing and thank you so much for getting one started here.


Thank you, Jenna, for such a clear and cogent post. When I read Lanier's column in the NYTimes, I found it quite troubling but was having difficulty imagining what I would say in response. Lanier is so thoughtful and persuasive - and, as you point out, who could argue with his ode to his father's magnificent teaching? But to leap from that to his claim that information technology dehumanizes us is short-sighted and, well, just wrong. You did such a good job of thinking it through. And thanks to Liz and Cathy for adding to the conversation.


Thanks, Robin! And Liz and Cathy, too. I think Jaron Lanier's approach to technologies is informative in at least two ways: It helps keep us on our toes about the challenges of thinking of technology as panacea; and it reminds us of the value of considering alternative viewpoints on issues we think we have figured out. I suppose that makes his work worth the read.