A *Guardian* (London) article has the provocative title “Bacteria make computers look like pocket calculators” (thanks Hamish for the pointer). They report on how bacteria can be used to search through feasible solutions to the traveling salesman problem to find optimal solutions:

The research, published today in the Journal of Biological Engineering, proves that bacteria can be used to solve a puzzle known as the Hamiltonian Path Problem. Imagine you want to tour the 10 biggest cities in the UK, starting in London (number 1) and finishing in Bristol (number 10). The solution to the Hamiltonian Path Problem is the shortest possible route you can take.

This simple problem is surprisingly difficult to solve. There are over 3.5 million possible routes to choose from, and a regular computer must try them out one at a time to find the shortest. Alternatively, a computer made from millions of bacteria can look at every route simultaneously. The biological world also has other advantages. As time goes by, a bacterial computer will actually increase in power as the bacteria reproduce.

Programming such a computer is no easy task, however. The researchers coded a simplified version of the problem, using just three cities, by modifying the DNA of Escherichia coli bacteria. The cities were represented by a combination of genes causing the bacteria to glow red or green, and the possible routes between the cities were explored by the random shuffling of DNA. Bacteria producing the correct answer glowed both colours, turning them yellow.

Where to begin? Does that article really talk about the 3-city TSP?

The belief that NP-completeness means that you have to search through every possibility fundamentally misunderstands fifty years of research in computational combinatorial optimization. The “three city” TSP was solved essentially as soon as it was defined. Even the 10 city TSP was solved long before people wanted to look through 3.5 million choices.
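For scale: with the start and end fixed, as in the Guardian’s example, a 10-city instance leaves only 8! = 40,320 orderings of the middle cities, and complete enumeration is instantaneous even in interpreted Python. A minimal sketch, using made-up random coordinates in place of the actual UK cities:

```python
import itertools
import math
import random

# Illustrative only: random coordinates stand in for the 10 UK cities.
random.seed(0)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(10)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def path_length(order):
    return sum(dist(cities[u], cities[v]) for u, v in zip(order, order[1:]))

start, end = 0, 9  # "London" and "Bristol" in the Guardian's example
middle = [c for c in range(10) if c not in (start, end)]

# With both endpoints fixed, only 8! = 40,320 orderings remain; a pure
# Python loop checks them all in well under a second.
best = min(((start,) + p + (end,) for p in itertools.permutations(middle)),
           key=path_length)
print(round(path_length(best), 2))
```

Complete enumeration at this size is a curiosity, not a computation.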

Even given the premise that bacteria can look through all possibilities, where does that leave us? Computationally, I really think that 300 cities is a good lower bound for exact work on the TSP (see Concorde, and the discussion given in a previous post). If someone has a bacterial technique that might work on 10 cities, how will it work on a system with 10^600 times the number of solutions? Will you have 10^600 bacteria? For those confused by orders of magnitude, the human body has about 10^14 cells (perhaps) and the universe has perhaps 10^80 atoms. I am glad that Bill Cook and his coauthors did not need to find 10^520 other universes to work with when solving 300 city TSPs, let alone the number needed for solving the 85,900 city problem they have solved (about 10^386,522 possible tours).
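These magnitudes are easy to reproduce without forming any factorial explicitly: `math.lgamma(n)` returns ln((n − 1)!), so the base-10 logarithm of the number of distinct undirected tours on n cities, (n − 1)!/2, falls out directly. A quick sanity-check sketch:

```python
import math

def log10_tours(n):
    """log10 of (n - 1)!/2, the number of distinct undirected tours on n cities."""
    return (math.lgamma(n) - math.log(2)) / math.log(10)

# 10 cities is about 10^5 tours; 300 cities about 10^612;
# the 85,900 city instance has on the order of 10^386,522 tours.
for n in (10, 300, 85900):
    print(n, round(log10_tours(n)))
```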

Now, maybe the *Guardian* is not the best place to look for scientific detail. Going to the original paper, however, does not make one very confident:

The serial approach that most silicon computer algorithms use is not well suited for solving NP-complete problems because the number of potential solutions that must be evaluated grows combinatorially with the size of the problem. For example, a Hamiltonian Path Problem for a directed graph on ten nodes may require as many as 10! = 3,628,800 directed paths to be evaluated. A static number of computer processors would require time proportional to this number to solve the problem. Doubling the number of nodes to 20 would increase the possible number of directed paths to 20! = 2.43 x 10^18, increasing the computational time by 12 orders of magnitude. Improvement in computational capability could come from parallel processing and an increase in the number of processors working on a problem. Significant breakthroughs in this regard may be possible with the development of biological computing, because the number of processors grows through cell division.
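The paper’s arithmetic, at least, checks out: 20!/10! is about 6.7 x 10^11, so the quoted 12 orders of magnitude is right, even if the premise that every path must be enumerated is not. A quick check:

```python
import math

# Verify the quoted figures: 10! routes for 10 nodes, 20! for 20 nodes.
assert math.factorial(10) == 3_628_800

ratio = math.factorial(20) // math.factorial(10)
print(f"{ratio:.2e}")  # 6.70e+11: about 12 orders of magnitude, as quoted
```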

OK, biologists, tell us about 10^600 parallelism. The “serial approach that most silicon computer algorithms use is not well suited for solving NP-complete problems” handles 300 cities pretty easily.

Don’t get me wrong. I think it is really cool that bacteria can solve traveling salesman problems. I think it would be equally cool if you could put together a Lego system that can solve 10 city traveling salesman problems. I have an issue, though, when the researchers believe they are really saying anything about solving NP-hard problems. Talk about relaxations and heuristic solutions, and then I am fascinated. Talk about comparisons to complete enumeration, and I have much less interest.
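The kind of heuristic work being alluded to here is not exotic: nearest-neighbor construction followed by 2-opt improvement is the textbook starting point, fits in a few dozen lines, and copes with instances far larger than 10 cities. A minimal sketch on 100 random points (illustrative only, not anyone’s published method):

```python
import math
import random

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(100)]

def d(i, j):
    return math.dist(pts[i], pts[j])

def tour_length(t):
    return sum(d(t[k], t[(k + 1) % len(t)]) for k in range(len(t)))

# Nearest-neighbor construction: start anywhere, always move to the
# closest unvisited city.
def nearest_neighbor(start=0):
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: d(tour[-1], j))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# 2-opt improvement: reverse a segment whenever swapping two edges
# shortens the tour, until no improving move remains.
def two_opt(tour):
    improving = True
    n = len(tour)
    while improving:
        improving = False
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                if d(a, c) + d(b, e) < d(a, b) + d(c, e) - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improving = True
    return tour

greedy = nearest_neighbor()
better = two_opt(greedy[:])
print("nearest neighbor:", round(tour_length(greedy), 3))
print("after 2-opt:     ", round(tour_length(better), 3))
```

No optimality guarantee, of course, which is exactly the distinction between heuristics and the exact methods Concorde implements.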

Contrary to the *Guardian* title, I think computers are still looking like computers, and biological systems have a *long* way to go to compete with calculators, at least as far as the TSP goes.

It really gives a new dimension to the time vs. space trade-offs that you learn about when you study algorithms. *imagining a building overrun by bacteria* “But it will complete the problem really fast!”

I still find it somewhat surprising, or at least interesting, the extent to which the “effort” (I’ll call it) is invariant for NP-hard problems. It will either (A) take forever, (B) require infinite space, or (C) require infinite energy (I’m being very loose with the terminology, but you get the idea). For some reason it reminds me of Kleiber’s law.

Ah, it’s déjà vu all over again. Apparently in the last seven years we’ve evolved from raw DNA to bacteria, but we haven’t made much headway, either in making these tools work or in explaining them effectively to the public.