In keeping with issues of economics and computational power, there is a very neat paper out of Princeton by Arora, Barak, Brunnermeier, and Ge entitled “Computational Complexity and Information Asymmetry in Financial Products”. Can you embed an NP-hard problem into the pricing problem for a financial instrument? As the authors point out, the answer is clearly “yes”; the real issue is how naturally you can do it. For instance, I could have an instrument that pays $1000 if the optimal tour for a particular, large traveling salesman instance is less than 1 million miles, and $0 otherwise. Pricing such an instrument involves solving an NP-complete problem, but no one would argue that this implies anything about real financial instruments. The authors give another, similarly artificial, example:
Consider for example a derivative whose contract contains a 1000 digit integer n and has a nonzero payoff if the unemployment rate next January, when rounded to the nearest integer, is the last digit of a factor of n. A relatively unsophisticated seller can generate such a derivative together with a fairly accurate estimate of its yield (to the extent that unemployment rate is predictable), yet even Goldman Sachs would have no idea what to pay for it. This example shows both the difficulty of pricing arbitrary derivatives and the possible increase in asymmetry of information via derivatives.
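Just to make the asymmetry in that example concrete, here is a toy sketch in Python (mine, not the authors’; the tiny stand-in primes, the function names, and the $1000 notional are all made up for illustration). The seller, who constructed n from primes it knows, can list the payoff-triggering digits immediately; the buyer, who sees only n, would have to factor a 1000-digit number to do the same calculation.

```python
# Toy sketch of the quoted derivative (illustration only, not the paper's construction).
from itertools import combinations
from math import prod

def trigger_digits(primes):
    """Last digits of the nontrivial factors of n = product(primes).
    Easy for the seller, who knows the primes; requires factoring n otherwise."""
    digits = set()
    for r in range(1, len(primes) + 1):
        for combo in combinations(primes, r):
            digits.add(prod(combo) % 10)
    return digits

def payoff(unemployment_rate, digits, notional=1000):
    """Nonzero payoff iff the rounded unemployment rate is a trigger digit."""
    return notional if round(unemployment_rate) in digits else 0

# Seller's view: tiny stand-in primes here; the paper's example uses a 1000-digit n.
seller_primes = [100003, 100019]
digits = trigger_digits(seller_primes)
print(digits, payoff(7.4, digits))

# Buyer's view: only n = prod(seller_primes) appears in the contract.  Recovering the
# trigger digits from n alone means factoring it, which is where the buyer is stuck.
```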
Nobody is actually interested in embedding an NP-hard problem in a financial instrument in that way. If the pricing is not clear and there is obvious information asymmetry, buyers will simply assume they are getting ripped off and walk away (see adverse selection, below).
This paper does something much more interesting. It develops a setting in which the firm offering financial assets appears to divide the assets across multiple products fairly but actually does so in a biased way. To distinguish this tampering from an honest random division, an outside analyst has to solve a densest subgraph problem (an NP-hard problem).
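To give a flavor of what that means (this is my own toy illustration, with made-up sizes and names, not the authors’ construction), think of the allocation of assets to products as a bipartite graph. An honest seller spreads assets around more or less uniformly; a cheating seller concentrates its lemon assets in a few chosen products, which shows up as an unusually dense block in that graph. Finding such a block is where densest subgraph comes in, and brute force dies quickly as the numbers grow:

```python
# Toy illustration (not the paper's actual construction or parameters).
import itertools
import random

def make_allocation(n_assets=30, n_products=10, per_product=6,
                    lemons=None, targets=None, seed=0):
    """Assign per_product assets to each product.  If `lemons` and `targets`
    are given, the target products are stuffed with the lemon assets (tampering);
    otherwise assets are assigned at random (the honest division)."""
    rng = random.Random(seed)
    alloc = {}
    for p in range(n_products):
        if targets and p in targets:
            alloc[p] = set(rng.sample(lemons, min(per_product, len(lemons))))
        else:
            alloc[p] = set(rng.sample(range(n_assets), per_product))
    return alloc

def densest_block(alloc, k_products, k_assets):
    """Brute force: the most (product, asset) incidences between any k_products
    products and any k_assets assets.  This enumeration is exactly what blows up
    combinatorially on realistically sized instances."""
    best = 0
    for prods in itertools.combinations(alloc, k_products):
        counts = {}
        for p in prods:
            for a in alloc[p]:
                counts[a] = counts.get(a, 0) + 1
        best = max(best, sum(sorted(counts.values(), reverse=True)[:k_assets]))
    return best

honest   = make_allocation()
tampered = make_allocation(lemons=list(range(6)), targets={0, 1, 2})
# The tampered allocation contains a much denser block than the honest one.
print(densest_block(honest, 3, 6), densest_block(tampered, 3, 6))
```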
In finance and economics, there is an issue called adverse selection. Why am I hesitant to buy a used car? People will only sell their lemons. Why should we be cautious in hiring faculty who work at other schools? The other school knows more about them and will choose not to compete if it wants to get rid of them. Why should I be cautious when a bank chooses 10 mortgages to resell out of its portfolio of 10,000? The bank knows where the lemons are and will choose to dump them given the chance.
What the authors are saying is that even when a company sells all of its assets in groups assembled in an apparently random way, it is possible to hide manipulation. So adverse selection can occur in a way that is hard to detect. Maybe the buyers will assume no adverse selection and we get to dump our lemons! You can read the paper (or this blog posting from Andrew Appel) for more information.
I have somewhat mixed feelings about this result (though I have only lived with it for a few hours: my views may take more time to settle). On one hand, I really think that people in economics and finance need to be more aware of computational limits in their models. On the other hand, it is not clear that NP-hardness is a particularly good defense here. The authors have a somewhat breathless gloss on what NP-hardness means:
Computational complexity studies intractable problems, those that require more computational resources than can be provided by the fastest computers on earth put together.
Ummm… OK. Sanjeev Arora is one of the world’s experts in complexity. He is a Gödel Prize winner and a Fellow of the ACM. He even, most tellingly, has a Wikipedia entry. I still don’t think this is the world’s best definition of intractable in the NP-hardness sense. In particular, if a firm put out 1000 groupings of financial instruments, and I needed to solve the densest subgraph problem on the resulting instance, I would work very hard at getting an integer program, constraint program, dynamic program, or other program to actually solve the instance (particularly if someone is willing to pay me millions to do so). If the firm then responded with 10,000 groupings, I would simply declare that it was tampering and invoke whatever level of adverse-selection correction you like (including just refusing to have anything to do with it). Intractable does not mean unsolvable, and not every instance size needs more computing than “the fastest computers on earth put together”.
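For what it is worth, here is roughly what “getting an integer program” for densest subgraph looks like. This is a minimal sketch of the standard MIP linearization of densest-k-subgraph, written with the open-source PuLP modeller and its bundled CBC solver purely as one convenient choice of tooling; none of this comes from the paper, and it is meant to show the shape of the attack rather than to handle adversarially large instances.

```python
# Minimal sketch: densest-k-subgraph as a mixed-integer program (PuLP + CBC).
import pulp

def densest_k_subgraph(nodes, edges, k):
    """Choose k nodes maximizing the number of induced edges (standard linearization)."""
    prob = pulp.LpProblem("densest_k_subgraph", pulp.LpMaximize)
    x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in nodes}   # is node v chosen?
    y = {e: pulp.LpVariable(f"y_{e[0]}_{e[1]}", lowBound=0, upBound=1)
         for e in edges}                                              # is edge e induced?
    prob += pulp.lpSum(y.values())            # objective: count induced edges
    for (u, v) in edges:
        prob += y[(u, v)] <= x[u]             # an edge counts only if both
        prob += y[(u, v)] <= x[v]             # of its endpoints are chosen
    prob += pulp.lpSum(x.values()) == k       # choose exactly k nodes
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    chosen = [v for v in nodes if x[v].value() > 0.5]
    return chosen, pulp.value(prob.objective)

# Tiny example: two triangles joined by an edge; for k = 3 either triangle is optimal.
nodes = list(range(6))
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
print(densest_k_subgraph(nodes, edges, 3))
```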
NP-hardness is about worst-case behavior as problem size grows. Those of us in operations research spend a lot of time solving NP-hard problems like routing and timetabling because we really want solutions to instances of a particular size and with particular structure. Bill Cook will solve practically any instance of the NP-hard Traveling Salesman Problem that you like (particularly if the financial incentives are right), as long as you keep it to no more than 100,000 cities or so. I’d be happy to help a financial firm solve densest subgraph instances if it really mattered to them, NP-hardness be damned!
Of course, if this paper takes off, there might be real demand for operations researchers to look at NP-hard problems in the financial industry. And that would be great for our field!
Thanks to my finance colleague Bryan Routledge for pointing out the paper, and for providing the excellent (IMHO) title to this entry.