Foresight Global Weather Machine Theoretical Weapon

J Storrs Hall discusses his weather machine designs and their theoretical capabilities.

Weather Machine Mark I – many small aerostats, each a hydrogen balloon somewhere between a millimeter and a centimeter in diameter (an educated guess at the optimal size), forming a continuous layer in the stratosphere. Each aerostat contains a mirror and a control unit consisting of a radio receiver, a computer, and a GPS receiver. Building the system would take about 100 billion tons of material with conventional technology, or about 10 million tons with more advanced nanotechnology.

Each aerostat has a very thin shell of diamond, perhaps just a nanometer thick. It is round, and inside it (and possibly extending outside, giving it the shape of the planet Saturn) is an equatorial plane that acts as a mirror. If you squashed one flat, you would have a disc only a few nanometers thick. Although you could build such a balloon out of the materials we make balloons from now, it would not be economical for this purpose. Because the total amount of material in each aerostat is so small, we can inflate them with hydrogen so that they float at an altitude of twenty miles or so, well into the stratosphere and above the weather and the jet streams.
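These numbers can be sanity-checked with a quick back-of-envelope calculation. The sketch below assumes a 5 mm diameter (the middle of the size range above), a 3 nm mirror disc, and a float altitude near 32 km; none of these are exact figures from the text, just plausible fill-ins.

```python
import math

DIAMOND_DENSITY = 3500.0   # kg/m^3
AIR_DENSITY_32KM = 0.013   # kg/m^3, US Standard Atmosphere (approximate)
H2_AIR_RATIO = 2.0 / 29.0  # molar mass of H2 vs. mean molar mass of air

d = 5e-3                   # aerostat diameter (m): mid-range guess
r = d / 2
shell_thickness = 1e-9     # m, from the text
mirror_thickness = 3e-9    # m, assumed ("a few nanometers" when flattened)

shell_mass = 4 * math.pi * r**2 * shell_thickness * DIAMOND_DENSITY
mirror_mass = math.pi * r**2 * mirror_thickness * DIAMOND_DENSITY
structure_mass = shell_mass + mirror_mass

volume = (4.0 / 3.0) * math.pi * r**3
displaced_air = volume * AIR_DENSITY_32KM
hydrogen_mass = displaced_air * H2_AIR_RATIO  # H2 at ambient pressure/temperature
net_lift = displaced_air - hydrogen_mass

print(f"structure: {structure_mass:.2e} kg, net lift: {net_lift:.2e} kg")
print("floats" if net_lift > structure_mass else "sinks")

# Scale to a continuous layer over the whole Earth and compare with the
# ~10 million ton nanotech figure quoted above.
EARTH_AREA = 5.1e14        # m^2
count = EARTH_AREA / (math.pi * r**2)
total_tons = count * (structure_mass + hydrogen_mass) / 1e3
print(f"{count:.1e} aerostats, ~{total_tons:.1e} tons of material")
```

With these assumptions the net lift comfortably exceeds the structural mass, and the total material for a single-layer global shell comes out around 10^7 tons, consistent with the nanotech figure quoted above.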

The radiative forcing associated with CO2 as a greenhouse gas, the quantity generally cited in the theory of global warming, is on the order of one watt per square meter. The weather machine would allow direct control of a substantial fraction of the total insolation, on the order of a kilowatt per square meter: a thousand times as much.
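For scale, here is a rough comparison of the two figures, with the solar constant (about 1361 W/m² at the top of the atmosphere) standing in for the kilowatt-per-square-meter insolation:

```python
SOLAR_CONSTANT = 1361.0        # W/m^2 at the top of the atmosphere
CO2_FORCING = 1.0              # W/m^2, the order-of-magnitude figure above
EARTH_CROSS_SECTION = 1.27e14  # m^2, pi * (6371 km)^2

print(f"leverage over CO2 forcing: ~{SOLAR_CONSTANT / CO2_FORCING:.0f}x")
print(f"total intercepted sunlight: {SOLAR_CONSTANT * EARTH_CROSS_SECTION:.2e} W")
```

The total intercepted power works out to roughly 1.7 × 10^17 W, which is the budget a full-coverage machine would be gating.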

The Weather Machine Mark II – take the same aerostat, but fill it with an aerogel composed of electronically switchable optical-frequency antennas; these are beginning to be studied in labs now under the name nantennas. The aerostat can then be tuned to absorb or transmit radiation at any desired frequency, in any desired direction (and, if we are really good, with any desired phase). It is all solid state, with no need to control the aerostat's physical attitude. Once we have that, the Weather Machine essentially becomes an enormous directional video screen, or, with phase control, a hologram.
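As an illustration of the phase-control idea, here is a minimal phased-array sketch: given element positions and a target direction, it computes the phase offset each element must emit so all contributions add coherently along that direction. The element layout and wavelength are hypothetical placeholders, not an actual nantenna design.

```python
import numpy as np

WAVELENGTH = 5e-7           # m, visible light (assumed)
K = 2 * np.pi / WAVELENGTH  # wavenumber

# Hypothetical flat grid of emitter positions (m); a real aerogel would
# have vastly more elements, distributed in three dimensions.
positions = np.array([[x, y, 0.0]
                      for x in np.linspace(0.0, 1e-5, 4)
                      for y in np.linspace(0.0, 1e-5, 4)])

def steering_phases(direction, positions, k):
    """Phase each element must emit so its contribution arrives in phase
    along `direction` (standard phased-array beam steering)."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    # Cancel the geometric path-length difference of each element.
    return (-k * positions @ d) % (2 * np.pi)

phases = steering_phases([0.0, 0.5, 1.0], positions, K)  # beam tilted off-normal
print(phases.reshape(4, 4))
```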

At closest approach to Mars, with an active spot 10,000 km in diameter (Weather Machine Mark II) using violet light for the beam, you could focus a petawatt beam on a 2.7 mm spot on Phobos. A petawatt is about a quarter of a megaton per second; 2.7 mm is about a tenth of an inch. In other words, you could blow Phobos up, write your name on it at about Sharpie handwriting size, or ablate its surface in a controlled way, creating reaction jets and sending it scooting around in curlicues like a bumper car.
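These figures follow from the Rayleigh diffraction limit and the energy content of TNT. A quick check, assuming a 400 nm wavelength and an Earth-Mars closest approach of roughly 56 million km:

```python
import math

WAVELENGTH = 4e-7    # m, violet light
APERTURE = 1e7       # m, the 10,000 km active spot
DISTANCE = 5.6e10    # m, approximate Earth-Mars closest approach

# Rayleigh criterion for the diffraction-limited spot diameter
spot = 1.22 * WAVELENGTH * DISTANCE / APERTURE
print(f"spot diameter: {spot * 1e3:.1f} mm")  # about 2.7 mm

MEGATON_J = 4.184e15  # joules per megaton of TNT
print(f"1 PW = {1e15 / MEGATON_J:.2f} megatons of TNT per second")  # about 0.24
```

Both quoted numbers check out: the 2.7 mm spot is exactly the diffraction limit for that aperture and distance, and a petawatt delivers about 0.24 megatons of TNT-equivalent per second.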

2. Shane Legg discusses the brain as an adequate Artificial General Intelligence design (H/T J Storrs Hall)

Having dealt with computation, we now get to the algorithm side of things. One of the big things influencing me this year has been learning how much we understand about how the brain works, and in particular how much we know that should be of interest to AGI designers. I won't get into it all here; suffice it to say that even a brief outline of this information would be a 20-page journal paper (there is currently a suggestion that I write such a paper next year with some Gatsby Unit neuroscientists, but for the time being I have too many other things to attend to). At a high level, what we see in the brain is a fairly sensible-looking AGI design: hierarchical temporal abstraction for perception and action, combined with more precise timing for motor control, over an underlying system for reinforcement learning. The reinforcement learning system is essentially a type of temporal difference learning, though unfortunately at the moment there is evidence in favour of actor-critic, Q-learning, and also Sarsa-type mechanisms; this picture should clear up in the next year or so. The system contains a long list of features you would expect to see in a sophisticated reinforcement learner: pseudo-rewards for informative cues, inverse reward computations, uncertainty and environmental change modelling, dual model-based and model-free modes of operation, things to monitor context; it even seems to have mechanisms that reward the development of conceptual knowledge. When I ask leading experts in the field whether we will understand reinforcement learning in the human brain within ten years, the answer I get back is "yes, in fact we already have a pretty good idea how it works and our knowledge is developing rapidly."
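For readers unfamiliar with the distinction drawn above, here are minimal tabular sketches of the Q-learning and Sarsa update rules (actor-critic instead maintains separate policy and value estimates). The states, actions, and parameters are placeholders; this illustrates the algorithms themselves, not any claim about how the brain implements them.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(float)  # Q[(state, action)] -> value estimate

def epsilon_greedy(state, actions):
    """Behaviour policy: mostly greedy, occasionally random."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_learning_update(s, a, reward, s_next, actions):
    # Off-policy: bootstrap from the best next action,
    # regardless of what the behaviour policy actually does.
    target = reward + GAMMA * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def sarsa_update(s, a, reward, s_next, a_next):
    # On-policy: bootstrap from the action the policy actually takes next.
    target = reward + GAMMA * Q[(s_next, a_next)]
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
```

Both rules move the estimate toward a temporal difference target; the evidence Legg mentions is about which of these targets the brain's dopamine system most resembles.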

Shane Legg's human-level AGI prediction is the blue line; the red line is Vernor Vinge's.

Shane's mode is about 2025; his expected value is a bit higher, at 2028. This is not because he has become more pessimistic during the year, but because this time he tried to quantify his beliefs more systematically and found that the probability he assigns to the 2030-2040 period drags the expectation up. Perhaps more useful is his 90% credibility region, which from his current belief distribution comes out at 2018 to 2036.
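To see how tail probability drags an expectation above the mode, here is a toy discretized belief distribution over arrival years. The weights are invented for illustration (a skewed bump with a fatter right tail) and are not Shane Legg's actual numbers.

```python
import numpy as np

years = np.arange(2015, 2061)
# Split-normal bump peaking at 2025: tighter on the left, fatter on the right
sigma = np.where(years < 2025, 4.0, 8.0)
weights = np.exp(-0.5 * ((years - 2025) / sigma) ** 2)
p = weights / weights.sum()  # normalize to a probability distribution

mode = years[np.argmax(p)]
mean = float((years * p).sum())
cdf = np.cumsum(p)
lo, hi = years[np.searchsorted(cdf, 0.05)], years[np.searchsorted(cdf, 0.95)]
print(f"mode {mode}, mean {mean:.1f}, 90% region {lo}-{hi}")
```

With this toy shape the mode stays at 2025 while the extra right-tail mass pulls the mean out to roughly 2028, the same qualitative effect Legg describes.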

Note: Eric Drexler reminded me that he no longer has any connection with the Foresight Institute and asked that I split the original post to avoid confusion.