If I were to give you the tall order of finding me a set of integers in the range of 1 through 20, inclusive, that sum to 34, I bet you that it would be easier for me to verify your answer (16, 18) than it would be for you to produce one. I imagine that you would do some factoring in your head, find that 34 is twice 17, realize that you can't give me the same number twice by my rules, then simply subtract one from 17 and add one to it and get a set that works. All I would have to do, on the other hand, is add your numbers together in one step and give you a big thumbs up. Computer scientists call the class of problems that can be solved quickly, like my one-step verification, P; the class of problems whose answers can be verified quickly but may be very hard to find, like your set-finding task, is NP. Here's the (literally) million-dollar question: must we resign ourselves to the asymmetry between such verification tasks (P) and search tasks (NP)? There are scores and scores of NP problems that, under current methods, could not possibly be solved even with all of the atoms of the universe devoting themselves to rote computation. Proving that these NP tasks could be solved with P quickness would shatter, in profound ways, the distinction between creation and interpretation (though many of my English professors would yawn at this proposition and tell me that this was figured out 30 years ago)! Those who look at such problems computationally intuitively balk at the idea that the two classes are the same, but proving that they are different has, well, proven to be very difficult.
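The toy problem above can be made concrete in a few lines. This sketch (the function names are mine, and the search is restricted to pairs for brevity) contrasts the one-step check with the brute-force search:

```python
from itertools import combinations

TARGET = 34
NUMBERS = range(1, 21)  # the integers 1 through 20, inclusive

def verify(candidate, target=TARGET):
    """The easy direction: check distinctness, range, and the sum in one pass."""
    return (len(set(candidate)) == len(candidate)
            and all(n in NUMBERS for n in candidate)
            and sum(candidate) == target)

def find_pair(target=TARGET):
    """The hard(er) direction: search candidate sets until one sums to the target."""
    for pair in combinations(NUMBERS, 2):
        if sum(pair) == target:
            return pair
    return None

print(verify((16, 18)))  # -> True
print(find_pair())       # -> (14, 20), the first pair the search stumbles on
```

Searching pairs is cheap, of course; the general version of this problem (subset sum, over sets of any size) must contend with all 2^20 subsets, and that exponential blow-up is exactly the asymmetry at stake.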
Over the weekend, a brave researcher from HP, one Vinay Deolalikar, wrote in an email to researchers, triumphantly, "I am pleased to announce a proof that P is not equal to NP, which is attached in 10pt and 12pt fonts." Forums of hackers immediately began circulating the paper, heralding the discovery as the most significant proof since Wiles proved Fermat's Last Theorem in 1995. Even those of us with only a casual interest in deep computer science problems, yours truly included, marveled at the introduction:
In order to apply this analysis to the space of solutions of random constraint satisfaction problems, we utilize and expand upon ideas from several fields spanning logic, statistics, graphical models, random ensembles, and statistical physics.
Statistical physics? WOW! As news of the candidate proof circulated on the night of Sunday, August 8, numerous experts chimed in with commentary, some quick to temper expectations by positing potential flaws in the approach. One, put forth by "randomwalker", ran: "since the author is using physics-based methods, there's the possibility that he is using something that's a 'theorem' in physics even though it is technically only a conjecture and hasn't actually been proven." Within a single day, Dick Lipton, Professor of Computer Science at Georgia Tech, had assembled a list of major potential technical flaws in the paper; that list linked to a crowdsourcing experiment, a Google Doc, which has finally settled into a wiki discussing the paper. All within the span of three days.
Indeed, it would seem as if Deolalikar's proof--and the Wikipedian frenzy surrounding its verification--has become a manifestation of the very asymmetry it sets out to destroy. With some speculating that it took Deolalikar five years of strenuous creation, revision, inspiration, and epiphany to concoct the document, it's striking that it has taken all of three days for our global network to probe its robustness. Regardless of the eventual outcome of this exercise (all signs point, regrettably, to unfixable flaws in Deolalikar's approach), it's probably one of the best case studies we've come across in the last decade of the Internet proving its worth as a vehicle for mathematical discussion and the critique of logic.