Computronium is claimed to be unobtainium

Quantum computing researcher Suzanne Gildert claims that computronium is unobtainium.

Computronium is defined by some as a substance that approaches the theoretical limit of computational power we can achieve through engineering the matter around us. Every atom of a piece of matter would be put to useful work doing computation. [Note: right here I have an issue, because if there are limitations on using all matter for computation, that just moves where the theoretical limit sits; computronium would still exist, since it is defined as the best that is possible.] Such a system would reside at the ultimate limits of efficiency, wasting the smallest possible amount of energy as heat.
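To make "theoretical limit" concrete, one hard physical floor is Landauer's bound: erasing a single bit of information must dissipate at least kT·ln 2 of heat. A minimal Python sketch (my own illustration, not from Gildert's post):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, joules per kelvin

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum heat dissipated by erasing one bit of information:
    E = k_B * T * ln(2). Any physical computronium is bounded by this."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K) the floor is about 2.9e-21 J per bit;
# today's transistors dissipate orders of magnitude more per switch.
print(f"{landauer_limit_joules(300.0):.2e} J per bit erased")
```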

The computational power of a piece of matter really depends upon what you want it to do.

Arranging matter usefully

Even if in the future we can make atomic-size transistors, we still need to decide how to arrange them. If we arrange them in a slightly different way, the programs we run on them may suddenly become much more efficient. Constructing serial and parallel processors is just one way to think about rearranging your matter to compute differently. There are many other trade-offs, for example how the processor accesses memory, and whether it is analog or digital in nature.
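As a toy illustration of why arrangement matters (my own sketch, not from the original post): a serial recurrence cannot exploit extra cores because each step depends on the last, while an independent map can use as many cores as you give it.

```python
def serial_recurrence(x0: float, n: int) -> float:
    """Each step needs the previous result; one fast core wins here."""
    x = x0
    for _ in range(n):
        x = 0.5 * x + 1.0  # toy recurrence: x_{k+1} = x_k / 2 + 1
    return x

def parallel_map(values):
    """Every element is independent; many small cores win here."""
    return [v * v for v in values]

def ideal_runtime(work_units: int, cores: int, parallelizable: bool) -> float:
    """Idealized runtime model: parallel work divides across cores,
    serial work cannot (the extra cores sit idle)."""
    return work_units / cores if parallelizable else float(work_units)

print(ideal_runtime(1000, cores=8, parallelizable=False))  # 1000.0: no gain
print(ideal_runtime(1000, cores=8, parallelizable=True))   # 125.0: 8x gain
```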

Perhaps, then, to get around this problem, we could create a piece of matter that reprograms itself, rearranging its atoms depending upon what you want to do. Then you could have a processor with one large core if it is running a serial algorithm (like the recursive equation), and many small cores if it is running a parallel algorithm (like the game engine). Aha! That would get around the problem. Then my computronium can compute anything once more, in the most efficient way possible for each particular task.

However, we find that you still cannot win, even with this method. The 'reprogramming of the matter' stage would require a program all of its own to be run on the matter. The cleverer your reprogramming program, the more time your processor would spend reconfiguring itself, and the less time it would spend actually solving problems! You also have to somehow know in advance how to write the program that reprograms your matter, which again requires knowledge of what problems you might want to solve.
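The overhead argument can be put in numbers with a hypothetical break-even model (my own sketch): reconfiguring costs time up front, so it only pays off when the task runs long enough to amortize that cost.

```python
def time_with_reconfig(task_time: float, speedup: float, reconfig_time: float) -> float:
    """Total runtime if we pause to rearrange the matter, then run faster."""
    return reconfig_time + task_time / speedup

def break_even_task_time(speedup: float, reconfig_time: float) -> float:
    """Task length at which reconfiguring just pays for itself.
    Solve reconfig + t/speedup = t  =>  t = reconfig * speedup / (speedup - 1)."""
    return reconfig_time * speedup / (speedup - 1.0)

# A 4x speedup that costs 30 s to set up only helps for tasks over 40 s:
print(break_even_task_time(speedup=4.0, reconfig_time=30.0))     # 40.0
print(time_with_reconfig(20.0, speedup=4.0, reconfig_time=30.0)) # 35.0 > 20.0
```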

Note: I do not see an issue with needing some idea of the problem sets in advance. It is like putting together a series of computronium tools in a toolbox, each optimized for a class of problems distinct enough to justify a specialized approach.

Computing limits: are Turing machines practical?

We find, when we try to build Turing machines in real life, that not everything is realistically computable. A Turing machine in practice and a Turing machine in principle are two very different beasts. This is because we are always limited by the resources that our real-world Turing machine has access to (it is obvious in our analogy that there is a limit to how quickly we can move the beads on the abacus). The efficiency of a computer is ALWAYS related to how you assemble it in the real world, what you are trying to do with it, and what resources you have available. One should be careful when dealing with models that assume no resource constraints.
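To illustrate the principle-versus-practice gap, here is a minimal Turing machine simulator (the machine itself is a toy of my own devising), where the real-world constraints show up as explicit step and tape budgets:

```python
def run_tm(rules, tape, state="start", max_steps=10_000, max_cells=1_000):
    """Simulate a Turing machine with finite resources: the ideal model
    assumes an unbounded tape and unlimited steps; hardware cannot."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos, steps = 0, 0
    while state != "halt":
        if steps >= max_steps:
            raise RuntimeError("step budget exhausted (out of time)")
        if len(cells) > max_cells:
            raise RuntimeError("tape budget exhausted (out of space)")
        symbol = cells.get(pos, "_")                 # "_" means blank
        write, move, state = rules[(state, symbol)]  # transition function
        cells[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return "".join(cells[i] for i in sorted(cells)), steps

# Toy machine: flip every bit, halting at the first blank cell.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(rules, "10110"))  # ('01001_', 6)
```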

Not one computronium but many

We find that we always have to cajole matter into computing the things we care about, investing extra resources and energy into the computation. We find that the best way to arrange computing elements depends upon what we want them to do. There is no magic 'arrangement of matter' which is all things to all people, no fabled 'computronium'. We have to configure the matter in different ways to do different tasks, and the most efficient configuration varies vastly from task to task.

I think that there are relatively optimal near-computronium toolboxes (a toy sketch follows the list below):

Quantum computer computronium
Classical computer computronium
etc…
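A hypothetical sketch of what such a toolbox might look like in code (all entries are illustrative, not a real catalogue): dispatch each problem class to the substrate optimized for it, with a general-purpose fallback.

```python
# Illustrative mapping from problem classes to optimized substrates.
TOOLBOX = {
    "factoring":              "quantum computronium (Shor-style circuits)",
    "dense_linear_algebra":   "classical parallel computronium (wide SIMD arrays)",
    "pointer_chasing":        "classical serial computronium (one deep, fast core)",
    "annealing_optimization": "quantum annealer computronium",
}

def pick_tool(problem_class: str) -> str:
    """Select the optimized configuration for a problem class; fall back
    to a general-purpose substrate when no specialist tool exists."""
    return TOOLBOX.get(problem_class, "general-purpose reconfigurable matter")

print(pick_tool("factoring"))
print(pick_tool("protein_folding"))  # no specialist: use the fallback
```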