Blog Post

Is Raging Against the Machine Our Only Option?

Today's Chronicle of Higher Education includes a very interesting article on a new program, written by a former MIT professor, that generates syntactically complex and grammatically correct yet nonsensical "babel" in order to test automated machine-grading tools.  The babel-filled but grammatically correct essay gets read by a popular machine-grading device and scores a 5.4 out of 6 . . .

What do we make of this Fight Fire With Fire fable? Is it like MOOCs?  Decry the beast, rage against the machine, and then go back to business as usual?  Or is there something interesting to learn here?


HASTAC readers will guess that the "interesting learning" bit is what I'll suggest.  In fact, I hope people use this study and program well and don't just do a dopey "See, humans do it better." Yes. And no.

If the machine-generated test essay were a syntactically clumsy and grammatically incorrect bit of babble, research suggests machine readers might well offer better and more thorough advice on writing better-constructed sentences than would a human in stressed circumstances--say, a teacher with 80 papers to grade in 24 hours.

Why is this important? Because other research shows that, if you are a teacher with 80 students, and you are responsible but overburdened and underpaid, you might be tempted to (a) do a quick-and-dirty job of offering feedback that helps no one or (b) resort to a multiple-choice format instead.


This auto-generator reminds us that tools are made to be used well by humans . . . not to replace us. And the reverse is also often true.  Instead of raging against the machine, why not use it to help us see what we, as humans, do extraordinarily well and what machines do extraordinarily well, and what we do poorly and what machines do poorly?  The point of all the recent behavioral economics research (thanks again, Dan Ariely!) is that we think we are better at lots of things than we actually are.  Hey! It turns out machines can be "overconfident" too.  And we certainly know our faith in machines can be overblown--as well as knee-jerk skeptical.


As usual, the machinic truth lies somewhere in between rage and romance . . . Technorealism, I call it.



I think it is important to understand what CANNOT be programmed, or rather where and when programming is too complex. For example, I find this online dating thing a complete joke when it comes to connecting two people. No software or iris-reading device can do what two people feel when the mood is there.

Your book is inspiring me to write something on this. I feel we, the '70s and '80s generation, have something significant to offer. No, I am not a Luddite, nor am I an app-crazy millennial. Something in between . . .

breathe IN


breathe OUT



I agree . . . except that online dating sites offer people the chance to pitch in and do the human part, whereas, without them, there are only missed opportunities . . .


I'm about to write a comment to the Crowducate conversation along those lines too.   If I were completely anti-machine, I wouldn't be typing . . . or using a pen . . . or reading a mass-produced book . . .    But to think the machine will solve human-made problems is fantasy.  Take care and thanks for writing.


I learnt the hard way. My first marriage started online, and I subsequently realised what was missing. The most sophisticated machine would be invisible and would do more than just fill gaps in space and time. I am researching a very interesting aspect of technology, something that resonates in this comment: if a teacher can be replaced by technology, such a teacher SHOULD be, which means that there are special human functions that cannot be quantified. The theme was also explored in The Book.


This reminds me of a recent article in Slate by Will Oremus (@WillOremus), ‘The First News Report on the L.A. Earthquake Was Written by a Robot’. If a particular task can be performed better by an algorithm than by a human, then perhaps it should be. Some worry that teachers might be replaced by computers. Others point out that, if teachers are teaching in such a way that the work they are doing could be performed by a computer, then perhaps they should step aside and let the computer do the job. Alternatively, they could incorporate technology into their work, using it to do what it can do efficiently and well, and change how they teach so that they are engaging with students as only humans can.
The difficulty comes when decisions regarding the introduction and use of technology are driven by a desire to maximise profits and competitiveness by replacing human communication and engagement with a cheaper electronic simulation. Using an algorithm to assemble and present information in a coherent form is one thing, but expecting an algorithm to replace human-to-human communication that enables experiences that are meaningful because they are shared with other human beings is something else. For contrary views, see Cynthia Breazeal's TEDTalk, ‘Will Man's Best Friend Be A Robot?’, and Abraham Verghese's TEDTalk, ‘A Doctor's Touch’.

The terms artificial intelligence and virtual reality are self-explanatory. Take a look at this:


Thanks very much for the link. It's an impressive short film. In creating simulations and digital aids, we build in only what we are aware of and what we think matters. It is what we leave out that is most revealing.