Blog Post

Are Multitaskers Worse at Multitasking?

Are multitaskers really worse at multitasking than those who don't multitask? If you read the popular media accounts of the new study out in the Proceedings of the National Academy of Sciences by Eyal Ophir, Clifford Nass, and Anthony D. Wagner, "Cognitive Control in Media Multitaskers," you would believe this to be the case. Now, go back and read the scientific study itself. The results are far less clear than the headlines. Multitasking may make you worse at taking attentional tests. But, hey! I'm a blogger. I know you multitasking, media-stacking, netsurfing, always-on readers lose attention quickly. Does that make you worse at multitasking--or better?


Before I come back to that question, I'm going to make an analogy. If you are a high school student hoping to ace the SATs, or a college student hoping to improve last year's score on the LSATs or MCATs or GREs, everyone knows you don't simply study. What a waste of time that would be! The Kaplan family hasn't made millions on all those prep books and practice tests because they don't get results. No, if you want to do well on a certain test, you prep for it by learning how to take that test. You learn its form, its implicit assumptions about how to answer a question, the implicit body of knowledge from which it draws, its pacing, and even how best to guess if you don't have a clue how to answer. Myriad studies have shown that these things all improve your test scores. Do these prep methods make you smarter in the field? Not necessarily. They make you better at taking the particular test for which you are studying.


I make this analogy because from all the popular reports on this study, including the quotes by one of the PI's, Clifford Nass, you would believe that people who engage in media multitasking are those least able to do so well at multitasking.   Or, as Nass is quoted as saying, "The shocking discovery of this research is that [high multitaskers] are lousy at everything that's necessary for multitasking."


Well, when I read the study itself, that conclusion is far from obvious. In the summary, the authors conclude that "heavy media multitaskers are more susceptible to interference from irrelevant environmental stimuli and from irrelevant representations in memory. . . . Media multitaskers performed worse on a test of task-switching ability, likely due to reduced ability to filter out interference from the irrelevant task set." That makes me curious because, intuitively, I would think that susceptibility to distraction is precisely what makes one a good multitasker. That is, going back to Linda Stone's paradigm-changing studies of "continuous partial attention," it seems that multitasking is never entirely synchronous or entirely symmetrical. We move in and out of attention constantly, and with a certain fickleness of attention. Even when we think we are attending to only one thing, we are constantly being pulled (as we know from Raichle's work on neuronal brain activity) in different directions.


There is very little way to tell, in a given moment, if our focus is distracted or not, even when all we are supposed to be doing is telling the psychologists whether the two red rectangles or the two blue ones have changed. Judging by many of my HASTAC readers, I would guess that a screen with flashing rectangles wouldn't keep your attention very long. What does that screen tell you about attention in the real world? Or is it like the Kaplan tests: you get better at the test by prepping for the test? In other words, multitasking does not in any way prepare you to ace this experiment, even if you are a Stanford student (like the test subjects) who got into a fine university at least partly by mastering standardized forms of test-taking.


My skepticism about studies of multitasking arises from experiments in the science of self-distraction. We are in a moment of enormous technological change and, as in all previous such moments, those of us old enough to remember the "before" are distracted by the "after." That means we see distraction everywhere. "Multitasking" is the synonym for our anxieties about change. But 80% of all brain energy is consumed by the brain's ability to distract itself. Sleep, for example (and I don't mean insomnia; I mean sleep itself), is a marvelous instance of multiple forms of distraction. Try to recount a dream and you know this, but there are more quantified methods too, based on rapid eye movements, respiration, perspiration, heartbeats, and so forth.


Here's the bottom line: Distraction does not come only from media, nor is it, as we believe in this moment, always external. Heartburn or heartache is as distracting as the radio in the next room, or the beep of incoming email, or the next tweet, or the IM in the corner of the screen. In fact, heartburn or heartache interferes with attention even more when there aren't a lot of other distractions around us. There is a lot we don't know about pain, for example, but we know that distraction can minimize it.


Multitasking studies measure only the tasks we see as multiple. I fervently believe texting while driving puts one at risk of accidents. It scares me to see people texting while driving. But the accident statistics after trauma--notice of an illness, a Dear John letter from one's lover, an unemployment notice--are even higher than the statistics for texting. Internal distractions, as Buddhism has known for eons, are more powerful even than those things that, in this technophiliac/technophobic moment, we "count" as distractions and multitasking.


Incidentally, in tracking down the original of this article in PNAS, I was delighted to use the "chat" feature supplied by the able reference librarians at Perkins Library. (A subject for another post: I tried to find this online, but of course PNAS is subscription only, which means the vast majority of people won't have my privilege of checking the actual study against the popular account.) In any case, I put my research into the able hands of Duke's marvelous full-service reference librarians. (Thank you again!) The whole exchange--from my logging in, trying to find the article myself, then turning to the Ask Ref function, using the chat function, and then actually receiving the pdf in my email inbox--took only seven and a half minutes. I kept count. In that time, I was watching my Twitter account, checking in on Facebook, and listening to M.I.A. (boom boom boom . . . ). (I charted that too.) But I was also looking out the window, finishing a yogurt, looking out the window again to see if the hummingbird was back, looking at the clock (keeping time), flexing my toes (a new Pilates exercise), typing, noticing that my coffee cup was empty, thinking about a meeting this afternoon, reminding myself to send the air conditioner repairman a check, and thinking about writing this blog . . . and I am positive I have not listed one one-hundredth of the external and internal distractions in my life. I sort. We all do. All the time. And we sort among a range of sensory, emotional, physical, and media options, most of which we aren't aware of because we sort so effectively.


We have not remotely created the kind of complex tests for multitasking that our era needs.  It is interesting to me that there is a significant difference in the performance of those who consider themselves "high" media multitaskers and those who consider themselves "low" multitaskers, but I also wonder what all is embodied by that self-definition.  The authors write:  "These results suggest that heavy media multitaskers are distracted by the multiple streams of media they are consuming, or, alternatively, that those who infrequently multitask are more effective at volitionally allocating their attention in the face of distractions.  . . . It remains possible that future tests of higher-order cognition will uncover benefits, other than cognitive control, of heavy media multitasking, or will uncover skills specifically exhibited by HHHs not involving cognitive control."


Hear, hear! That just might be the case. Now, my question: why didn't that open-ended, wise, speculative sentence make it into the popular media accounts about how multitasking is bad for you?





You wrote:

why didn't that open-ended, wise, speculative sentence make it into the popular media accounts about how multitasking is bad for you?

The authors of the O'Reilly Mind Hacks book and blog seem to agree with you in their post on the same study today:

It's [the study] also been picked up by hundreds of news sources, almost all of which miss the subtlety of what it's actually telling us.


Yes, I think that is true. I also bet that the PI was quoted for his flamboyant, catchy comment and not for the other, more measured and open-ended comments he made. This is why I always go back to the source, the studies themselves, and not reports about the studies. Thanks for this, Steve.


Sure, lab experiments aren't the end of the story. But the study--which I've read in the original--clearly shows that heavy multitaskers are deficient in voluntary cognitive control while trying to perform simple task-switching. In other words, they are poor at controlling their focus, and "suckers for irrelevancy," as Clifford Nass attests. That's a disturbing finding, whatever cognitive benefits we may later discover in multitasking. Certainly, attention is so all-encompassing and so crucial to our survival that we're often not aware, so to speak, of how much information we're processing in our environment and within what William James called our "stream of consciousness." We are in many senses born interrupt-driven; to survive we have to be ever-alert to new stimuli in our environment. (And work by Jonathan Schooler indicates that mind-wandering may be good for creativity.) But at the same time, effortful attention, along with working memory, is key to pursuing our goals. If we can't sort out the irrelevant in a simple, brief lab task, chances are we're not doing such a great job in the wider, complex world. (And given the plasticity of our brains, it's not unimaginable that heavy multitasking shapes and even undermines our ability to focus deeply, evaluate, and assess the information around us.) Drinking from the fire hose of media today may have its benefits, just as videogaming has been found to boost some types of visual attention. But let's keep the big picture in mind. If we sacrifice cognitive control in the name of high-speed, reactive, distracted living, then the costs of multitasking will be steep indeed.


Studies of the middle-aged and elderly also show they are "suckers for distraction," even without multitasking . . . but they do better at higher-level synthesizing and generalizing when given time to sort. That same kind of multi-pronged approach to distraction is implied in this very well executed study but not in the popular-press jeremiads against the horrors of multitasking. I do not share your conclusion. It is far too value-laden, and those values are not in themselves supported by the findings. I know Jonathan Schooler's work well, by the way, and "creativity" is another of those values that has to be factored into the final and total equation.


I started posting a comment on this, but I ended up turning it into a blog post.


The basic idea I was working with was that all this discussion of how the mind works seems (I can't find the article, so I can't say for sure) to be taking place with no attention paid to how the mind is managed. I recently saw Memento, whose premise is that a man living for revenge has no short-term memory, and the whole story revolves around his disciplined system of tattooing notes on his body and writing notes on photographs so he can function. What I was reminded of was that a couple of months ago I started using a system for managing a bunch of simultaneous creative efforts that I might almost describe as the Memento system for multitasking. Whenever I have an idea for any program I'm coding, song I'm writing, comic I'm creating, or anything, I immediately drop what I'm doing, get the most salient words of the idea into one of my perpetually open and carefully organized note documents (if I stop to work out the idea as I notate it, I sometimes forget parts before I get them down), add some note on the context of my thinking of it (a song I was listening to, a URL I was browsing), and afterwards go right back to what I was doing initially. With those notes down I can almost always reconstruct exactly the thoughts I was having when I first had the idea and work with it later. I've had some great ideas this way, each of them taking me no actual time spent working on the project.
I mention this partly because of what you seem to get at in your post (namely, that even if there are no single areas in which one would rate a multitasker as cognitively ahead (how does the study define which end of the spectrum is positive performance?), the collection of traits in the right context could make someone better at multitasking) and partly because I want to really think about the assumption that the sum of cognitive superiorities equals a cognitively superior entity: if my cognitive deficiencies inspire better behaviors, am I deficient? I guess that's the point of GATTACA, and don't quote me on this, but I think deaf drivers have better accident records than hearing drivers.


Yes, studies of hearing-impaired drivers show their accident rates to be equal to or better than those of drivers who can hear. And your practice of switching, jotting down an idea, and then returning is a very common one. There's a study by Mary Czerwinski of the workflow of software writers in Silicon Valley. She found their screens covered with that very analog technology known as the Post-it Note, on which they had scrawled messages to themselves: "Test PB patch DAN's PC--Waiting for AL," and so forth. That is, we disrupt and interrupt and distract ourselves all the time, even without technology. It's great to have something to blame, though, for our distraction, and technology has been the scapegoat at least since the Industrial Revolution and perhaps since the invention, say, of the wheel.


I wonder, though, about the process of note-taking, because I wouldn't intuitively see a Post-it reminder to test a patch as a comparable action. What I've been trying to do is capture and store a mindset to be recalled, which I see as qualitatively different from a process-organizational tool, though I think either has things to say about the management of mind, as per my initial post. Thinking about it now, I actually cannot picture doing what I've been doing while programming. This has mainly been in the context of songwriting, where the relevant details that will be useful to me later cannot really be described so much as (re)imagined. I wonder if this is really just a function of the medium: to-dos for workflows, poetics to recall poetics. I tried writing down things I wanted to write songs about, but those notes all just got thrown away; they code as semantically empty to me when I return to them. It's really just the scattered image-words, scenes I see, the music or words that triggered it initially. I'm trying to notate states of mind.


"I'm trying to notate states of mind."  Gorgeous.   I'd love to hear an update sometime.  Really intriguing, Evan.


Science hasn't shed much light yet on attention, but the conventional wisdom about what it is and how it works dominates all media discussion of it. It's something like Freudian psychoanalytic theory or the nutrition guidelines about low-fat, high-fiber diets: these ideas seemed tantalizingly real but never panned out in practice.


When I read about studies like this, I always wonder about the missing historical context. Is our propensity for distraction heightened by having a myriad of technological devices that make us constantly accessible to friends, family, and work? Or is the expectation that we can sit down and concentrate for hours without interruption an artifact of the late-capitalist society we live in? Scholars like E.P. Thompson have argued that workers had a profoundly different approach to working time prior to the Industrial Revolution and the forceful interventions of early capitalists. How much multitasking did workers in pre-capitalist societies do?



Exactly! We are measuring "task" against a highly specific and historicized standard of productivity and labor, and I personally believe most of our models are Taylorist; they may have worked for the twentieth century, but they don't work for the present. We need new models of labor, productivity, task, and attention. Thanks so much for your comment.