Blog Post

Smart Machines, Bad Design

Below is a hilarious article reblogged from the NY Times online about design expert Donald Norman checking out this year's hot Christmas item, the digital photo frame. As someone who is easily frustrated by bad design or by design inconsistencies (just ask the cast of thousands it has taken to help me make the transition from PC life to a Mac: bottom line, so-called "intuitive" decisions on a Mac are not intuitive if you spent ten hours a day for the last decade on a PC), I laughed out loud over this piece. It's fine if humans do what we do and machines do what they do, but do we really have to discuss it with one another, negotiate who has the better way of doing things, work out our likes and dislikes and compatibilities? Norman predicts we are entering an era (semantic web and AI) where more and more of our interactions with machines will lead to frustration, since automating our choices means simplifying our desires. I'm gonna repeat that line: automating our choices means simplifying our desires. It's like a lot of philosophy of mind these days. Were the human mind as simple as some evolutionary biologists think, wouldn't we have gone extinct long ago? If nothing else, from simple boredom with one another.
Hey! I did it! I finally wrote a snarky blog post. I have entered my genre. Finally. |* _ *|
December 18, 2007
Findings

Why Nobody Likes a Smart Machine

At a Best Buy store in Midtown Manhattan, Donald Norman was previewing a scene about to be re-enacted in living rooms around the world.

He was playing with one of this year's hot Christmas gifts, a digital photo frame from Kodak. It had a wondrous list of features: it could display your pictures, send them to a printer, put on a slide show, play your music. And there was probably no consumer on earth better prepared to put it through its paces.

Dr. Norman, a cognitive scientist who is a professor at Northwestern, has been the maestro of gizmos since publishing "The Design of Everyday Things," his 1988 critique of VCRs no one could program, doors that couldn't be opened without instructions and other technologies that seemed designed to drive humans crazy.

Besides writing scholarly analyses of gadgets, Dr. Norman has also been testing and building them for companies like Apple and Hewlett-Packard. One of his consulting gigs involved an early version of this very technology on the shelf at Best Buy: a digital photo frame developed for a startup company that was later acquired by Kodak.

"This is not the frame I designed," Dr. Norman muttered as he tried to navigate the menu on the screen. "It's bizarre. You have to look at the front while pushing buttons on the back that you can't see, but there's a long row of buttons that all feel the same. Are you expected to memorize them?"

He finally managed to switch the photo in the frame to vertical from horizontal. Then he spent five minutes trying to switch it back.

"I give up," he said with a shrug. "In any design, once you learn how to do something once, you should be able to do it again. This is really horrible."

So the bad news is that despite two decades of lectures from Dr. Norman on the virtue of "user-centered" design and the danger of a disease called "featuritis," people will still be cursing at their gifts this Christmas.

And the worse news is that the gadgets of Christmas future will be even harder to command, because we and our machines are about to go through a rocky transition as the machines get smarter and take over more tasks. As Dr. Norman says in his new book, "The Design of Future Things," what we'll have here is a failure to communicate.

"It would be fine," he told me, "if we had intelligent devices that would work well without any human intervention. My clothes dryer is a good example: it figures out when the clothes are dry and stops. But we are moving toward intelligent machines that still require human supervision and correction, and that is where the danger lies: machines that fight with us over how to do things."

Can this relationship be saved? Until recently, Dr. Norman believed in the favorite tool of couples therapists: better dialogue. But he has concluded that dialogue isn't the answer, because we're too different from the machines.

You can't explain to your car's navigation system why you dislike its short, efficient route because the scenery is ugly. Your refrigerator may soon know exactly what food it contains, what you've already eaten today and what your calorie limit is, but it won't be capable of an intelligent dialogue about your need for that piece of cheesecake.

To get along with machines, Dr. Norman suggests we build them using a lesson from Delft, a town in the Netherlands where cyclists whiz through crowds of pedestrians in the town square. If the pedestrians try to avoid an oncoming cyclist, they're liable to surprise him and collide, but the cyclist can steer around them just fine if they ignore him and keep walking along at the same pace. "Behaving predictably, that's the key," Dr. Norman said. "If our smart devices were understandable and predictable, we wouldn't dislike them so much." Instead of trying to anticipate our actions, or debating the best plan, machines should let us know clearly what they're doing.

Instead of beeping and buzzing mysteriously, or flashing arrays of red and white lights, machines should be more like Dr. Norman's ideal of clear communication: a tea kettle that burbles as the water heats and lets out a steam whistle when it's finished. He suggests using natural sounds and vibrations that don't require explanatory labels or a manual no one will ever read.

But no matter how clearly the machines send their signals, Dr. Norman expects that we'll have a hard time adjusting to them. He wasn't surprised when I took him on a tour of the new headquarters of The New York Times and he kept hearing complaints from people about the smart elevators and window shades, or the automatic water faucets that refuse to dispense water. (For Dr. Norman's analysis of our office building of the future, go to nytimes.com/tierneylab.)

As he watched our window shades mysteriously lowering themselves, having detected some change in cloud cover that eluded us, Dr. Norman recalled the fight that he and his colleagues at Northwestern waged against the computerized shades that kept letting sunlight glare on their computer screens.

"It took us a year and a half to get the administration to let us control the shades in our own offices," he said. "Badly designed so-called intelligent technology makes us feel out of control, helpless. No wonder we hate it." (For all our complaining, at The Times we have nicer shades that let us override the computer.)

Even when the bugs have been worked out of a new technology, designers will still turn out junk if they don't get feedback from users, a common problem when their customer is a large bureaucracy. Engineers have known how to build a simple alarm clock for more than a century, so why can't you figure out how to set the one in your hotel room? Because, Dr. Norman said, the clock was bought by someone in the hotel's purchasing department who has never tried to navigate all those buttons at 1 in the morning.

"Our frustrations with machines are not going to be solved with better machines," Dr. Norman said. "Most of our technological difficulties come from the way we interact with our machines and with other people. The technology part of the problem is usually pretty simple. The people part is complicated."

