December 26, 2009

World Nuclear 2009

The IEA nations currently generate about 80% of the world's nuclear power, and their output for the first nine months of the year is down 1.2% from 2008.

The OECD total for Jan-Sep 2009 is 1602.9 TWh. For 2008 the OECD total was 2171 TWh. The world total was 2601 TWh in 2008.

This is an update of a November, 2009 report

Japan generated 22.41 TWh of nuclear power in November 2009 and 21.78 TWh in October 2009.

Hokkaido Electric Power Co (9509.T) started generating power to sell from a new nuclear reactor on Tuesday, the first new unit in over three years in Japan, and a move that will limit fuel purchases for the firm. Commercial operations at the 912-megawatt No.3 nuclear generator at its Tomari plant on Japan's northernmost island started in the late afternoon.

China's electricity output surged 26.9% year on year to 323.4 billion kWh in November 2009. Nuclear power output increased 2.7% year on year to 5.51 billion kWh.

China's newly installed power generating capacity totaled 69 million kW (69 GW) in the first 11 months, of which hydropower accounted for 21.8 percent.

China plans to reach 20 million kW (20 GW) of installed solar power capacity by 2020.

China would establish 7 wind power bases, each with over 10 million kW (10 GW) of capacity. By 2020, China's wind power capacity will reach 150 million kW (150 GW). By 2020, hydropower is expected to reach 300 million kW (300 GW) and biomass 30 million kW (30 GW). Nuclear power will "see considerable growth."

China's electricity generation capacity will reach 860 million kW (860 GW) by the end of this year, the second largest after the United States.

Through November, 2009 in the USA year-to-date nuclear generation was 727.6 billion kilowatt-hours compared to 733.3 bkWh for the same period in 2008 and 734.5 bkWh in 2007 (the record year for nuclear generation).

China GDP Growth revised upwards

China's National Bureau of Statistics (NBS) revised its GDP growth figure for 2008 up from 9.0% to 9.6%.

China raised its GDP figure for 2008 to 31.4045 trillion yuan from the previous figure of 30.067 trillion yuan. The new figure amounts to about 4.59 trillion U.S. dollars at the Dec. 31, 2008 exchange rate, making China's 2008 gross domestic product 4.5% larger than previously estimated.

The revised figure for the agriculture sector was 3.3702 trillion yuan, accounting for 10.7 percent of GDP, down from 3.4 trillion yuan.

The figure for the industrial sector was put at 14.9003 trillion yuan, accounting for 47.5 percent of GDP, larger than the earlier figure of 14.6183 trillion yuan.

China will likely revise its economic growth rates for the first three quarters of 2009 higher, with overall economic output for the year likely to reach 33.5 trillion yuan. If China hits 8.9% growth, it should end 2009 with a 34.2 trillion yuan economy. China is running an economic census in 2010, and the last census caused an upward revision of about 17%.

That would leave China "just shy" of Japan as the world's second-biggest economy trailing the U.S. this year, and in position to overtake Japan in 2010.

China's economy may reach $5.5 trillion next year, Merrill Lynch said. That would displace Japan's economy, which is estimated by the International Monetary Fund to reach $5.19 trillion next year.

China's economy expanded 8.9% in the third quarter from a year earlier, accelerating from a 6.1% on-year expansion in the first quarter. Beijing has forecast that its economy will expand 8% in 2009.

The statistics bureau also on Friday revised higher the nation's energy consumption for 2008 by 2.1%. The revision means the amount of energy China used per unit of GDP fell by 5.2% from 2007 levels.

Year GDP(trillion yuan) Growth(%) USD/CNY China GDP($T) China+HK($T) US GDP($T) US Growth(%) Note
2007 25.8 13 7.3 3.5 3.7 13.8 1.1 Past Germany
2008 31.4 9.6 6.85 4.6 4.8 14.3 -3.1
2009 34.2 8.9 6.83 5.0 5.2 13.9 0.7 Passing Japan
2010 37.6 10 6.5 5.8 6.0 14.0 1.5 Past Japan
2011 41.0 9 6.0 6.8 7.0 14.2 1.9
2012 44.7 9 5.6 8.0 8.2 14.4 2.1
2013 48.7 9 5.1 9.6 9.8 14.7 3
2014 53.1 9 4.7 11.3 11.5 15.2 3
2015 57.9 9 4.2 13.8 14.0 15.6 3
2016 63.1 9 3.8 16.6 16.8 16.1 3 Past USA
2017 68.1 8 3.5 19.5 19.7 16.6 3
2018 73.6 8 3.2 23 23.2 17.1 3
2019 79.5 8 3 26.5 26.8 17.6 3
2020 86 8 3 28.6 28.9 18.1 3
2021 93 8 3 30.9 31.2 18.7 3
2022 100 8 3 33.4 33.7 19.2 3
2023 108 8 3 36.0 36.3 19.8 3
2024 117 8 3 38.9 39.2 20.4 3
2025 126 8 3 42.0 42.3 21.0 3
2026 136 7 3 45.4 45.7 21.6 3
2027 146 7 3 48.6 48.9 22.3 3
2028 156 7 3 52.0 52.3 23.0 3
2029 167 7 3 55.6 55.9 23.6 3
2030 179 7 3 59.5 59.8 24.4 3
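The projection rows above are just compound growth applied to the yuan GDP, converted at the assumed exchange-rate path; a minimal sketch of that arithmetic (the growth rates and exchange rates are the table's own assumptions, not data):

```python
# Compound-growth projection behind the table: grow GDP one year at the
# assumed rate, then convert to trillion USD at the assumed exchange rate.
def project(gdp_trillion_yuan, growth_pct, usd_cny):
    nxt = gdp_trillion_yuan * (1 + growth_pct / 100)
    return round(nxt, 1), round(nxt / usd_cny, 1)

# 2008 -> 2009 row: 31.4 trillion yuan growing 8.9%, at 6.83 yuan per dollar
project(31.4, 8.9, 6.83)  # -> (34.2, 5.0), matching the 2009 row
```

Iterating this year by year reproduces the table, which is why the far-out rows are so sensitive to the assumed growth and currency-appreciation path.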

Apple Tablet or iSlate Rumor Roundup

Steve Jobs is extremely happy with the rumored tablet

Jan 26, 2010 is the expected announcement date and sales should start before the end of March, 2010.

Appleinsider speculates on the features of the iSlate tablet

- tactile feedback bumps that appear and disappear
- The hand-based system was said to allow "unprecedented integration of typing, resting, pointing, scrolling, 3D manipulation, and handwriting into a versatile, ergonomic computer input device."

Macrumors: Apple purchased the domain in 2007

Techcrunch found that Apple also holds iSlate trademarks, filed in Europe and America in Nov 2006, and has registered other iSlate domain names.

Cnet is among several outlets that expect Apple to succeed with the iSlate tablet

Eric Drexler Metamodern on Nanotech Development, Progress and Pathways

Drexler has some comments about nanotechnology development, progress in nanotechnology and development pathways.

Drexler indicates that "basement development" is not possible.

Note : Eric Drexler reminded me that he no longer has any connection with the Foresight Institute and asked that I split the original post to avoid confusion.

Molecular nanotechnology/nanofactories will not be developed in a basement. The implementation problems will involve complex devices and extensive laboratory facilities, and they will be most effectively addressed by large teams of diverse and well-funded specialists.

Another idea that should be dropped is that of a direct leap from accessible, starting-point technologies to the most advanced technologies that have been discussed.

Molecular Manufacturing: Where’s the progress?

Problems of perception and organization are the chief obstacles to more rapid progress in developing molecular machine technologies on the critical path to fulfilling the promise that launched the field of nanotechnology.

Available technologies now enable the design and fabrication of intricate, atomically precise nanometer-scale objects made from a versatile engineering polymer, together with intricate, atomically precise, 100-nanometer scale frameworks that can be used to organize these objects to form larger 3D structures. These components can be, and have been, designed to undergo spontaneous, atomically precise self-assembly. Together, they provide an increasingly powerful means for organizing atomically precise structures of million-atom size, with the potential of incorporating an even wider range of functional components.

The nanometer-scale objects that I mentioned above have nylon-like backbones that link and organize an extraordinarily diverse set of molecular components to form structural elements, electronic devices, and machines. The problem is that because they are traditionally called “protein molecules” their nature is obscured by a powerful association with food. The frameworks have a similar representativeness-heuristic problem: “DNA” makes one think of genetic information in cells, but structural DNA nanotechnology uses it as a construction material.

We now have in hand the engineering materials for a new, breakthrough class of nanosystems, yet the bug in our minds whispers “meat” and “genes”. And even in more sophisticated minds, the biological origin of these materials encourages the seductive idea that their engineering is a task that can be left to biologists. Developing complex, functional systems, however, is quite unlike studying complex, functional systems that already exist. In science, nature provides the pattern. In engineering, human beings provide the pattern. The difference in tasks and mindsets is profound.

The molecular machine path to molecular manufacturing

Researchers working at Caltech and IBM have taken the first steps toward combining nanosystems [nanolithography, quantum dots, carbon nanotubes, DNA nanotechnology, self-assembly, etc.] of this kind with nanoscale circuitry to produce a new class of digital devices.

The productive nanosystems I refer to above are, of course, ribosomes and nucleic acid polymerases, the programmable molecular machines that assemble polypeptide and polynucleotide chains. In making these polymers, productive nanosystems assemble monomers of different kinds in sequences specified by information encoded in the sequences of (other) polynucleotides, and these sequences determine how, for example, a foldamer product will fold and the functions that the resulting component can perform.
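The sequence-to-product logic described above can be sketched with a toy model: a code table maps each three-letter unit of an information-carrying chain to the monomer appended next, the way a ribosome translates codons into amino acids. The three entries shown are a tiny subset of the standard genetic code, used only for illustration.

```python
# Toy sketch of sequence-directed assembly: the template sequence, not the
# machine, determines which monomer is added at each step.
CODE = {"AAA": "Lys", "UUU": "Phe", "GGC": "Gly"}

def assemble(template):
    """Read the template three letters at a time and chain the monomers."""
    monomers = [CODE[template[i:i + 3]] for i in range(0, len(template), 3)]
    return "-".join(monomers)

assemble("AAAUUUGGC")  # -> "Lys-Phe-Gly"
```

The point of the analogy is that swapping the code table (or widening it with new monomers, as the next paragraph discusses) changes the product space without changing the machine.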

Advances along these lines can support the development of artificial productive nanosystems that are specialized to produce complex, atomically precise components of new kinds. The most accessible advances in this direction would be devices that expand the range of available foldamers. Clever exploitation of existing productive nanosystems has already expanded the range of products by enabling the use of a wider range of monomeric building blocks; new productive nanosystems could add the ability to build foldamers of wholly new kinds that offer (for example) stiffer backbones and greater chemical and physical stability.

The productive nanosystems in use today can operate only in aqueous environments, and their products are usually (but not always) used under the same conditions. I expect that next-generation productive nanosystems built from components of this sort will also be constrained to operate in aqueous environments. For chemical reasons, the presence of water limits the range of fabrication operations that these devices can perform, but these constraints allow more than one might suppose. Within the scope are not only novel foldamers and highly cross-linked 2D and 3D polymeric nanostructures, but also high-modulus inorganic solids, such as metal oxides and pyrite. Even metals and semiconductors are within the scope of aqueous synthesis.

The ability to make better and more robust components will, of course, enable the fabrication of better and more robust products, including better and more robust productive nanosystems that are not constrained to operate in aqueous environments. And these, of course, will provide means for working with a wider range of materials, enabling the production of components and systems that are even better. The expanded scope of component fabrication can be applied to improve Brownian assembly, but constrained Brownian assembly will become more practical and desirable as fabrication technologies advance.

Note that graphenes, carbon nanotubes, and related structures are locally 2D, and can be regarded as extreme cases of “highly cross-linked polymeric nanostructures”. The useful electronic and mechanical properties of these materials are legendary, and they also work well as low-friction nanoscale moving parts. With the aid of nanoscale arrays of catalytic particles, materials of this class have been synthesized at room temperature.

Beyond Ribosome Level Complexity

A productive nanosystem can build chains by controlling positions in just one dimension, extending a 1D chain by adding monomers to the end. A mechanism of a similar kind, with essentially 1D control could extend a 2D sheet by stepping along an edge, adding monomers to the end of a row. In some implementations, devices of this sort could be simpler than a ribosome.

To extend a complex structure with a 3D bond network, however, will typically require adding building blocks of specific kinds at specific locations across a 2D surface. This will typically require a mechanism that can step through a series of positions with two degrees of freedom — a step toward greater complexity.

December 25, 2009

Video From the Manhattan Beach Project Longevity Summit

The Manhattan Beach Project was a conference of leading scientists, entrepreneurs, and anti-aging doctors held in Manhattan Beach, California on November 13-15, 2009. The goal of the event was to create real timelines and real budgets designed to completely change the face of aging.

The Summit agenda lists a mix of well known names from the longevity advocacy and aging research communities, speaking on a range of interesting topics:
I. Intro - David Kekich
II. The Law of Accelerating Returns - Ray Kurzweil via video
III. Caloric Restriction - Stephen Spindler
IV. Evolutionary Genomics of Life Extension - Michael Rose
V. Telomere Maintenance - William Andrews
VI. Aging Genes and Manipulation - Stephen Coles
VII. Restoring Your Immune Function - Derya Unutmaz
VIII. Extracellular Aging and Regeneration - John Furber
IX. Stem Cells/Regenerative Medicine - Michael West
X. Tissue/Organ Storage - Gregory Fahy
XI. SENS/Mitochondrial Rejuvenation - Aubrey de Grey (Two topics - 30 minutes)
XII. Artificial General Intelligence - Peter Voss
XIII. Nanomedicine - Robert Freitas/Ralph Merkle
XIV. Genome Reengineering - Robert Bradbury (Multiple topics - 35 minutes)
XV. 7 Steps to Help Ensure Your Longevity - David Kekich
XVI. Cryonics - Ralph Merkle

Videos are now up. Fightingaging has the links to the videos.

Stackable Nanoionic memory - the return of Programmable Metallization Cell Memory

Older image of nanoionic memory

Scientists at Arizona State University have developed an elegant method for significantly improving the memory capacity of electronic chips.

The researchers have shown that they can build stackable memory based on “ionic memory technology,” which could make them ideal candidates for storage cells in high-density memory. Best of all, the new method uses well-known electronics materials.

“This opens the door to inexpensive, high-density data storage by ‘stacking’ memory layers on top of one another inside a single chip,” said Michael Kozicki, who led the research. “This could lead to hard drive data storage capacity on a chip, which enables portable systems that are smaller, more rugged and able to go longer between battery charges.”

Center for applied nanoionics

Coverage of nanoionics/programmable metallization cell from two years ago. Deployment of this and other new computer memory technologies has not kept to earlier schedules.

Kozicki said stacking memory cells has not been achieved before because the cells could not be isolated. Each memory cell has a storage element and an access device; the latter allowing you to read, write or erase each storage cell individually.

“Before, if you joined several memory cells together you wouldn’t be able to access one without accessing all of the others because they were all wired together,” Kozicki said. “What we did was put in an access, or isolation device, that electrically splits all of them into individual cells.”

Up until now, people built these access elements into the silicon substrate.

“But if you do that for one layer of memory and then you build another layer, where will you put the access device?” Kozicki asked. “You already used up the silicon on the first layer, and since it’s a single crystal, it is very difficult to have multiple layers of single-crystal material.”

The new approach does use silicon, but not single crystal silicon, which can be deposited in layers as part of the three-dimensional memory fabrication process. Kozicki said his team was wrestling with how to find a way to build an electrical element, called a diode, into the memory cell. The diode would isolate the cells.

Kozicki said this idea usually involves several additional layers and processing steps when making the circuit, but his team found an elegant way of achieving diode capability by substituting one known material for another, in this case replacing a layer of metal with doped silicon.

“We can actually use a number of different types of silicon that can be layered,” he said. “We get away from using the substrate altogether for controlling the memory cells and put these access devices in the layers of memory above the silicon substrate.”

“Rather than having one transistor in the substrate controlling each memory cell, we have a memory cell with a built-in diode (access device) and since it is built into the cell, it will allow us to put in as many layers as we can squeeze in there,” Kozicki said. “We’ve shown that by replacing the bottom electrode with silicon it is feasible to go any number of layers above it.”

With each layer applied, memory capacity significantly expands.

“It turns out to be a ridiculously simple idea, but it works,” Kozicki said of his stackable memory advance. “It works better than the complicated ideas work.”

“The key was the diodes, and making a diode that was simple and essentially integrated in with the memory cell. Once you do that, the rest is pretty straightforward.”
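Why the diode matters can be seen in a toy model of a 2x2 resistive cross-point array (all resistance and voltage values below are illustrative, not from the article). Without per-cell isolation, reading a high-resistance cell also senses a "sneak path" through the three unselected cells, one of which is traversed in reverse; an ideal diode in each cell blocks that reverse leg.

```python
# Toy 2x2 cross-point array: reading cell (r, c) without diodes picks up
# a parasitic sneak-path current through the three other cells in series.
R_ON, R_OFF = 1e3, 1e6   # ohms, low- and high-resistance states
V_READ = 0.2             # volts applied across the selected cell

def read_current(cells, r, c, with_diode):
    i_cell = V_READ / cells[r][c]
    if with_diode:
        return i_cell    # reverse-biased diode blocks the sneak path
    rr, cc = 1 - r, 1 - c
    r_sneak = cells[r][cc] + cells[rr][cc] + cells[rr][c]  # series path
    return i_cell + V_READ / r_sneak

cells = [[R_OFF, R_ON], [R_ON, R_ON]]
# Without diodes, ~67 uA of sneak current swamps the 0.2 uA OFF-cell
# reading, so the OFF cell is misread as ON; with diodes it reads correctly.
```

This is the isolation problem Kozicki describes: building the diode into each cell, rather than a transistor into the substrate, is what lets the layers stack.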

Foresight Global Weather Machine Theoretical Weapon and Drexler on Nanotech Development

J. Storrs Hall discusses his weather machine designs and their theoretical capabilities.

Weather Machine Mark I - many small aerostats (hydrogen balloons), at a guess optimally somewhere between a millimeter and a centimeter in diameter, forming a continuous layer in the stratosphere. Each aerostat contains a mirror, plus a control unit consisting of a radio receiver, computer, and GPS receiver. Building it would take about 100 billion tonnes of material with regular technology, or 10 million tons with more advanced nanotechnology.

It has a very thin shell of diamond, maybe just a nanometer thick. It’s round, and it has inside it (and possibly extending outside, giving it the shape of the planet Saturn) an equatorial plane that is a mirror. If you squashed it flat, you would have a disc only a few nanometers thick. Although you could build such a balloon out of materials that we build balloons of now, it would not be economical for our purposes. Given that we can build these aerostats so that the total amount of material in one is actually very, very small, we can inflate them with hydrogen in such a way that they will float at an altitude of twenty miles or so—well into the stratosphere and above the weather and the jet streams.
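An order-of-magnitude check of the nanotech mass figure is straightforward: a continuous layer at stratospheric altitude has roughly Earth's surface area, and flattening each aerostat gives a disc a few nanometers of diamond thick. The specific thickness and density below are assumed values, not from the article.

```python
# Rough mass of a continuous, few-nanometer diamond aerostat layer.
earth_area = 5.1e14        # m^2, approximate surface area of Earth
thickness = 3e-9           # m, "a few nanometers" when squashed flat (assumed)
diamond_density = 3500     # kg/m^3
mass_tonnes = earth_area * thickness * diamond_density / 1000
# ~5 million tonnes: the same order as the quoted ~10 million tons for
# the advanced-nanotechnology version of the machine.
```

The 100-billion-tonne figure for conventional technology follows from the same geometry with micron-scale films and heavier support structure.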

The radiative forcing associated with CO2 as a greenhouse gas, as generally mentioned in the theory of global warming, is on the order of one watt per square meter. The weather machine would allow direct control of a substantial fraction of the total insolation, on the order of a kilowatt per square meter—1000 times as much.

The Weather Machine, Mark II -Take the same aerostat, but inside put an aerogel composed of electronically switchable optical-frequency antennas—these are beginning to be looked at in the labs now under the name of nantennas. We can now tune the aerostat to be an absorber or transmitter of radiation in any desired frequency, in any desired direction (and if we’re really good, with any desired phase). It’s all solid state, with no need to control the aerostat’s physical attitude. Once we have that, the Weather Machine essentially becomes an enormous directional video screen, or with phase control, hologram.

At closest approach, with an active spot of 10,000 km diameter (Weather Machine Mark II), using violet light for the beam, you could focus a petawatt beam onto a 2.7 mm spot on Phobos. A petawatt is about a quarter megaton of TNT per second, and 2.7 mm is about a tenth of an inch. That is, you could blow Phobos up, write your name on it at about Sharpie handwriting size, or ablate the surface in a controlled way, creating reaction jets and sending it scooting around in curlicues like a bumper car.
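Those numbers check out as a diffraction-limit calculation, treating the 10,000 km active spot as a single phased optic. The wavelength and Earth-Mars distance below are assumed round values consistent with "violet light" and "closest approach".

```python
# Diffraction-limited spot size: theta ~ 1.22 * lambda / D, spot = theta * L.
wavelength = 4.0e-7    # m, violet light (assumed)
aperture = 1.0e7       # m, the 10,000 km active spot
distance = 5.6e10      # m, ~56 million km, a close Earth-Mars approach (assumed)
spot = 1.22 * wavelength / aperture * distance   # ~2.7e-3 m, i.e. ~2.7 mm

# Power conversion: 1 megaton of TNT is 4.184e15 joules, so
mt_per_second = 1e15 / 4.184e15                  # a petawatt ~ 0.24 Mt/s
```

Both quoted figures (the 2.7 mm spot and the quarter-megaton-per-second rate) fall straight out of these two standard formulas.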

Shane Legg discusses the brain as an adequate Artificial General Intelligence design (H/T J. Storrs Hall)

Having dealt with computation, now we get to the algorithm side of things. One of the big things influencing me this year has been learning about how much we understand about how the brain works, in particular, how much we know that should be of interest to AGI designers. I won’t get into it all here, but suffice to say that just a brief outline of all this information would be a 20 page journal paper (there is currently a suggestion that I write such a paper next year with some Gatsby Unit neuroscientists, but for the time being I’ve got too many other things to attend to). At a high level what we are seeing in the brain is a fairly sensible looking AGI design. You’ve got hierarchical temporal abstraction formed for perception and action combined with more precise timing motor control, with an underlying system for reinforcement learning. The reinforcement learning system is essentially a type of temporal difference learning, though unfortunately at the moment there is evidence in favour of actor-critic, Q-learning and also Sarsa type mechanisms — this picture should clear up in the next year or so. The system contains a long list of features that you might expect to see in a sophisticated reinforcement learner, such as pseudo rewards for informative cues, inverse reward computations, uncertainty and environmental change modelling, dual model-based and model-free modes of operation, things to monitor context; it even seems to have mechanisms that reward the development of conceptual knowledge. When I ask leading experts in the field whether we will understand reinforcement learning in the human brain within ten years, the answer I get back is “yes, in fact we already have a pretty good idea how it works and our knowledge is developing rapidly.”
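The mechanisms Legg contrasts (Q-learning versus Sarsa, as candidate temporal-difference accounts of the brain's reward system) differ in a single line of their tabular update rules. A minimal sketch, with illustrative learning-rate and discount hyperparameters:

```python
# Two tabular TD action-value updates; Q maps state -> {action: value}.
alpha, gamma = 0.1, 0.9   # learning rate and discount (illustrative)

def q_learning_update(Q, s, a, r, s2):
    """Off-policy TD: bootstrap from the best action in the next state."""
    target = r + gamma * max(Q[s2].values())
    Q[s][a] += alpha * (target - Q[s][a])

def sarsa_update(Q, s, a, r, s2, a2):
    """On-policy TD: bootstrap from the action actually taken next."""
    target = r + gamma * Q[s2][a2]
    Q[s][a] += alpha * (target - Q[s][a])
```

Actor-critic methods use the same TD error but split the learner into a separate value estimator (critic) and policy (actor), which is part of why the three are hard to distinguish from neural data alone.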

Shane Legg's human-level AGI prediction is the blue line; the red line is Vernor Vinge's.

Shane's mode is about 2025; my expected value is actually a bit higher, at 2028. This is not because I’ve become more pessimistic during the year, rather it’s because this time I’ve tried to quantify my beliefs more systematically and found that the probability I assign between 2030 and 2040 drags the expectation up. Perhaps more useful is my 90% credibility region, which from my current belief distribution comes out at 2018 to 2036.


December 24, 2009

Scientists Create World’s First Molecular Transistor


Scanning electron microscope image (false color) illustrating a full pattern of the devices. The whole structure was defined on an oxidised Si wafer. The yellow regions show portions of the multi-layered Au electrodes (a thin Au layer with a thickness of ~15 nm; a thick Au layer with a thickness of ~100 nm), and the purple region represents the oxidised Al gate electrode. Au wires broken by the electromigration technique (Fig. 1a, inset) are placed on the top of the bottom-gate electrode. The contact pads to which a connection is made by wire bonding are not visible because they are located far from the active part of the device. Insets display the schematic images of Au-ODT-Au (right) and Au-BDT-Au (left) junctions. This is a conceptual diagram only, as only one molecular junction type at a time can be fabricated with the present process.

A group of scientists has succeeded in creating the first transistor made from a single molecule. The team, which includes researchers from Yale University and the Gwangju Institute of Science and Technology in South Korea, published their findings in the December 24 issue of the journal Nature.

The team, including Mark Reed, the Harold Hodgkinson Professor of Engineering & Applied Science at Yale, showed that a benzene molecule attached to gold contacts could behave just like a silicon transistor.

The researchers were able to manipulate the molecule’s different energy states depending on the voltage they applied to it through the contacts. By manipulating the energy states, they were able to control the current passing through the molecule.

“It’s like rolling a ball up and over a hill, where the ball represents electrical current and the height of the hill represents the molecule’s different energy states,” Reed said. “We were able to adjust the height of the hill, allowing current to get through when it was low, and stopping the current when it was high.” In this way, the team was able to use the molecule in much the same way as regular transistors are used.

The work builds on previous research Reed did in the 1990s, which demonstrated that individual molecules could be trapped between electrical contacts. Since then, he and Takhee Lee, a former Yale postdoctoral associate and now a professor at the Gwangju Institute of Science and Technology, developed additional techniques over the years that allowed them to “see” what was happening at the molecular level.

Being able to fabricate the electrical contacts on such small scales, identifying the ideal molecules to use, and figuring out where to place them and how to connect them to the contacts were also key components of the discovery. “There were a lot of technological advances and understanding we built up over many years to make this happen,” Reed said.

There is a lot of interest in using molecules in computer circuits because traditional transistors are not feasible at such small scales. But Reed stressed that this is strictly a scientific breakthrough and that practical applications such as smaller and faster “molecular computers”—if possible at all—are many decades away.

“We’re not about to create the next generation of integrated circuits,” he said. “But after many years of work gearing up to this, we have fulfilled a decade-long quest and shown that molecules can act as transistors.”

Nature: Observation of molecular orbital gating

The control of charge transport in an active electronic device depends intimately on the modulation of the internal charge density by an external node. For example, a field-effect transistor relies on the gated electrostatic modulation of the channel charge produced by changing the relative position of the conduction and valence bands with respect to the electrodes. In molecular-scale devices a longstanding challenge has been to create a true three-terminal device that operates in this manner (that is, by modifying orbital energy). Here we report the observation of such a solid-state molecular device, in which transport current is directly modulated by an external gate voltage. Resonance-enhanced coupling to the nearest molecular orbital is revealed by electron tunnelling spectroscopy, demonstrating direct molecular orbital gating in an electronic device. Our findings demonstrate that true molecular transistors can be created, and so enhance the prospects for molecularly engineered electronic devices.

25 page pdf with supplemental information

1. Device fabrication and characterization
2. Transfer characteristics of a 1,4-benzenedithiol molecular transistor
3. Low temperature transport and IET spectroscopy measurements
4. Experimental estimation of tunnelling barrier height in molecular junctions
5. LUMO-mediated electron tunnelling through Au-BDCN-Au junction
6. Vibrational mode assignments in IET spectra
7. Linewidth broadening in IET spectra of Au-BDT-Au junction
8. Projected density of states near HOMO levels of phenyl and alkyl molecules
9. Theoretical model on resonantly enhanced IET spectra

Engineers adjusted the voltage applied via gold contacts to a benzene molecule, allowing them to raise and lower the molecule’s energy states and demonstrate that it could be used exactly like a traditional transistor at the molecular level. Credit: Hyunwook Song and Takhee Lee

a, Representative I(V) curves measured at 4.2 K for different values of VG. Inset, the device structure and schematic. S, source; D, drain; G, gate. Scale bar, 100 nm. b, Fowler–Nordheim plots corresponding to the I(V) curves in a, exhibiting the transition from direct to Fowler–Nordheim tunnelling with a clear gate dependence. The plots are offset vertically for clarity. The arrows indicate the boundaries between transport regimes (corresponding to Vtrans). c, Linear scaling of Vtrans in terms of VG. The error bars denote the s.d. of individual measurements for several devices and the solid line represents a linear fit. Inset, the schematic of the energy band for HOMO-mediated hole tunnelling, where eVG,eff describes the actual amount of molecular orbital shift produced by gating. d, Two-dimensional colour map of dln(I/V2)/d(1/V) (from Fowler–Nordheim plots). Energy-band diagrams corresponding to four different regions (points A–D) are also shown. FN, Fowler–Nordheim tunnelling; DT, direct tunnelling.
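The Fowler–Nordheim plot described in the caption can be sketched numerically: plotting ln(I/V²) against 1/V, the rising direct-tunnelling branch meets the falling field-emission branch at a minimum, which is read off as the transition voltage Vtrans. The two-channel current model below is a crude synthetic stand-in (all parameters illustrative), not the paper's data or fitting procedure.

```python
import math

# Crude two-channel current: an ohmic direct-tunnelling term c*V plus a
# Fowler-Nordheim term V^2 * exp(-barrier/V), in arbitrary units.
barrier, c = 1.0, 1e-3
V = [0.05 + 0.01 * i for i in range(200)]
I = [v * v * math.exp(-barrier / v) + c * v for v in V]

# Fowler-Nordheim plot: y = ln(I/V^2) versus x = 1/V. The minimum of y
# marks the crossover from direct to FN tunnelling, i.e. Vtrans.
y = [math.log(i / (v * v)) for v, i in zip(V, I)]
v_trans = V[y.index(min(y))]   # ~0.14 V for these toy parameters
```

In the actual experiment this minimum shifts linearly with gate voltage (panel c), which is the signature of the molecular orbital being gated.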

Editor's summary

The ultimate in electronic device miniaturization would be the creation of circuit elements consisting of an individual molecule. A single-molecule transistor exploiting the electrostatic modulation of a molecule's orbital energy is a theoretical possibility. Now Hyunwook Song and colleagues report the successful realization of such a device, a proof of concept that should enhance the practical prospects for molecularly engineered electronics.

View of James Kushmerick, National Institute of Standards and Technology

Transistors have been made from single molecules, where the flow of electrons is controlled by modulating the energy of the molecular orbitals. Insight from such systems could aid the development of future electronic devices.

Transistors, the fundamental elements of integrated circuits, control the flow of current between two electrodes (the source and drain electrodes) by modifying the voltage applied at a third electrode (the gate electrode). As manufacturers compete to produce ever smaller devices, one logical limit to circuit miniaturization is transistors whose channels are defined by a single molecule...



China Will Be Making Bigger Localized AP1000 Reactors

China will build a 1.4 GW reactor, starting in 2013, that expands on the Westinghouse AP1000 design -- a simplified, "advanced passive" (AP) design that reduces the need for human action in the event of an accident.

Westinghouse, based in Cranberry, is building the first of four AP1000 nuclear-power plants for China under an agreement signed in 2007. Westinghouse finished its first major concrete pour Thursday on the first plant, which is scheduled to come on line in 2013.

Westinghouse is also slated to build six AP1000 plants in the United States, the first of which would come on line in 2016.

Westinghouse, which was acquired by Japan's Toshiba Group in 2006, has received orders worth $9.8 billion to build the four AP1000 plants in China.

The power plant, to be located in Weihai, a coastal city in eastern China's Shandong province, would start operating by late 2017, it said.

After the CAP1400 project, work will begin on a CAP1700 project, which uses similar technology but has a larger capacity of 1,700 MW, according to SNPTC.

Both projects still need the final approval of the government, according to the company.

"Construction of such demonstration projects will bring technology upgrades to China's nuclear power industry, which is vital to the future of the sector," said Tang Zide, an expert with SNPTC.

Reactors using the indigenous technology can have larger capacity than those with AP1000 technology, said Tang. What's more, several components of the reactor have been improved with domestic technology.

China is now putting much focus on the use of advanced technology in the country's nuclear power industry. The country decided to use the AP1000 technology to build four reactors. Two of these are in Sanmen in Zhejiang province and the others are in Haiyang, Shandong province.

The four reactors are also the first in the world to use the third generation technology. So far, construction of three has already started, two in Sanmen and one in Haiyang.

Taishan Areva EPR
Construction of the first-phase project of Taishan nuclear power plant started on the morning of December 21. Four pressurized water reactor (PWR) nuclear power units are scheduled to be built for the project and it is planned to have a total of six electricity generating units in the future. Currently, the government has approved building two units in the first phase project. The two power generating units have the world's largest single-unit installed capacity of 1.75 million kW.

The project adopts the advanced third-generation European pressurized water reactor (EPR) technology, which was jointly designed and developed by France's Areva Group and Germany's Siemens Company. Double-layer safety shells and four security design programs are adopted to ensure a higher security level after taking severe accident prevention and mitigation measures into full consideration.

According to the project plans, the No. 1 unit will be put into operation and generate electricity to the power grid in December 2013 and the No. 2 unit is scheduled to be put into operation in October 2014. After the two units are built, they will deliver about 26 billion kWh of electricity to the power grid annually, enough to meet the electricity demand of a medium-sized city, with an output value of more than 12 billion yuan.
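As a rough consistency check (a back-of-the-envelope sketch, not from the project documents), the quoted 26 billion kWh per year implies a capacity factor in the mid-80% range for the two 1.75 GW units:

```python
# Back-of-the-envelope check of the quoted Taishan figures.
# Inputs: two units of 1.75 million kW each, 26 billion kWh/year quoted.
unit_capacity_kw = 1.75e6      # 1.75 million kW per unit
units = 2
quoted_annual_kwh = 26e9       # 26 billion kWh per year

max_annual_kwh = unit_capacity_kw * units * 8760  # 8760 hours in a year
capacity_factor = quoted_annual_kwh / max_annual_kwh
print(f"Theoretical maximum: {max_annual_kwh/1e9:.1f} billion kWh")
print(f"Implied capacity factor: {capacity_factor:.0%}")
```

A theoretical maximum of about 30.7 billion kWh against the quoted 26 billion kWh gives an implied capacity factor of roughly 85%, a plausible figure for modern nuclear units.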

Digital Quantum Batteries Managing Risk of a Quantum Engineering Age

This is a follow up on digital quantum batteries

Dr. Alfred Hubler discusses managing the risks of the high energy densities that digital quantum batteries would entail if they are developed.

Could one use nano capacitors for energy storage, where dielectric breakthrough is prevented by quantization phenomena?

This is an exciting prospect. The introduction of digital batteries could spark an exciting transition from the nuclear age to a quantum engineering age.

The theoretical limit for electrostatic energy storage in quantum devices is very high. The energy density in heavy atoms, i.e. the ratio between the stored energy and atomic volume for an excitation from the ground state of heavy metal ions is 10,000 times higher than in hydrogen atoms. Based on this theoretical limit, the maximum density of retrievable energy in nano capacitors is comparable with the density of retrievable energy from nuclear reactions.

The rapid energy release of nano capacitors discharged by an electrical short makes them potent explosives, potentially exceeding the power of any chemical explosive. In contrast to nuclear explosives, the production of nano capacitors requires not radioactive substances but common, nonpoisonous, environmentally friendly chemicals. Without electrical charge, nano capacitors have no explosive power. In contrast to both nuclear and chemical explosives, simply using nano capacitors as batteries and slowly removing their charge as electrical current could easily and safely discharge them. In the discharged state, they could be shipped without safety concerns. At their destination they could be easily charged with electrical current. Even if an explosion were to occur, it would produce no radioactive waste, no long term radiation, and probably could be designed to produce no chemicals.

Technology to produce nano capacitors may already exist because similar quantum effects are used to keep charge separated in MTJ capacitors (used in MRAM), in flash drives and in laser diodes. In laser diodes, the recombination of electrons and holes is a forbidden transition. Therefore, the holes are long lived even if they are very close to the electrons. This leads to large electric fields. Literature data suggest that in standard laser diodes the electric field is above the breakthrough field of regular capacitors, and could be much larger. Conceivably, conventional flash drives could be used for energy storage, but they are not designed for this purpose: the substrates are thick and the energy to weight ratio is small. One could design large arrays of individually connected nano capacitors, to be charged and discharged one-by-one, similar to flash drives. In contrast to regular batteries, the output voltage would remain constant until the last nano capacitor is discharged. One could call these arrays of nano capacitors digital batteries. A digital battery’s stable output voltage is of key importance for sensors and other sensitive devices. However, the high energy density of such devices may pose a challenge in terms of safety. Charged digital batteries could explode if all nano capacitors were shorted simultaneously. In the electrically discharged state digital batteries probably pose no safety hazard whatsoever.
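The scaling Hubler describes can be illustrated with the textbook electrostatic energy density u = ½ε₀εᵣE². The field values below are illustrative assumptions, not measured device numbers; they show why pushing the tolerable field far past conventional dielectric breakdown (roughly 10^8-10^9 V/m) is what would make such storage interesting:

```python
# Electrostatic energy density u = 0.5 * eps0 * eps_r * E^2 (SI units).
# eps_r = 1 and the listed field strengths are illustrative assumptions.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def energy_density(field_v_per_m, eps_r=1.0):
    """Energy density in J/m^3 for a uniform field E in a dielectric."""
    return 0.5 * EPS0 * eps_r * field_v_per_m**2

for E in (1e8, 1e9, 1e10, 1e11):
    print(f"E = {E:.0e} V/m -> u = {energy_density(E):.2e} J/m^3")
```

Because u grows as E², each tenfold increase in tolerable field buys a hundredfold increase in stored energy; only at fields of order 10^11 V/m, far beyond ordinary dielectric breakdown, does the density reach tens of GJ/m^3 and begin to rival chemical fuels, which is the regime the quantization argument targets.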

Vikas Shah, a student of Hubler, adds:

Although the capacitors would be better for the environment, it should be taken into consideration that they are somewhat dangerous. While they hold no charge, or if they are used responsibly and discharged slowly, they are harmless. On the other hand, if one were fractured because of the design limitations stated above, or because of improper use, the consequences could be catastrophic. If a charged capacitor were broken, the rapid release of electrons, depending on the charge held within the plates, could cause an explosion like that of an atomic bomb.

One problem Hubler faced is that in larger capacitors, namely parallel-plate capacitors, when many electrons are stored on the negative plate they tend to repel each other, pushing each other off the plate and over to the positive plate. To work around this, Hubler looked to the atomic level, where quantum phenomena prevent such behavior. For example, in an atom the nucleus and electrons remain separate despite their opposite charges; they coexist without collapsing into each other. Similarly, if very small capacitors, known as nano capacitors, were made, this problem of electrons pushing each other off the plates would not occur. So why not just make nano capacitors? Technology to create them already exists today; similar structures can be seen in MRAM and in flash drives. The reason such capacitors aren't already in widespread use is limitations in design, such as "tunneling and large tensions which may fracture the material" of the capacitors (Hubler). Also, capacitors today do not hold much charge. Theoretically, capacitors can store much more energy than batteries, and the maximum charge held by a capacitor is increasing at a faster rate than that of batteries, making capacitors progressively better suited for commercial use. Even so, it will be a few years before capacitors can actually hold as much or more energy than a battery. When they do replace batteries, they will not only be superior at holding a charge, they will be better for the environment.

December 23, 2009

Carnival of Space 134 - GL 1214b, Lakes on Titan and more

Carnival of Space 134 is up at Cambrian Sky

This site supplied an article about maxing out the VASIMR plasma rocket with nuclear fission or nuclear fusion reactors. Note: nuclear fusion could have a superior option in the Bussard QED propulsion system (direct heating of electrons).

Centauri Dreams looked at A ‘Super-Earth’ with an Atmosphere

The new world is GJ 1214b, about 6.5 times as massive as the Earth, orbiting a small M-dwarf about a fifth the size of the Sun some forty light years from Earth.

But there’s more, a good deal more. At a distance of 1.3 million miles, the planet orbits its star every 38 hours, with an estimated temperature a little over 200 degrees Celsius. Because GJ 1214b transits the star, astronomers are able to measure its radius, which turns out to be 2.7 times that of Earth. The density derived from this suggests a composition of about three-fourths water and other ices and one-fourth rock.
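The low density behind that composition estimate follows directly from the quoted mass and radius (a sketch; Earth's mean density is the only added input):

```python
# Bulk density of GJ 1214b from the quoted mass (6.5 Earth masses)
# and radius (2.7 Earth radii), scaled from Earth's mean density.
EARTH_DENSITY = 5514.0  # kg/m^3, Earth's mean density

mass_ratio = 6.5     # planet mass in Earth masses
radius_ratio = 2.7   # planet radius in Earth radii

density = EARTH_DENSITY * mass_ratio / radius_ratio**3
print(f"Bulk density: {density:.0f} kg/m^3 (~{density/1000:.1f} g/cm^3)")
```

Roughly 1.8 g/cm^3, about a third of Earth's density and close to that of water-ice mixtures, which is why a largely water/ice composition is inferred rather than a rocky one.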

Alan Boyle also looked at GJ 1214b: "An alien water world has been found."

Astronomers say they have detected a planet just six and a half times as massive as Earth - at a distance so close its atmosphere could be studied, and with a density so low it's almost certain to have abundant water.

The alien world known as GJ 1214b orbits a red dwarf star one-fifth the size of our own sun, 40 light-years away in the constellation Ophiuchus, the astronomers reported in Thursday's issue of the journal Nature. News release from eurekalert

Alan Boyle looks back at 50 years of science in space, and looks forward as well.

Bad Astronomer looks at Titan. Cassini took a gorgeous shot of Titan casting its shadow on Saturn

Planetary Society Blog also reported that Cassini VIMS saw the long-awaited glint off a Titan lake

Check out Carnival of Space 134 at Cambrian Sky for more

Nuclear Winter and City Firestorms

Robin Hanson at Overcoming Bias looks at nuclear winter again

Robin looks at the Scientific American article that attempts to make the case that a regional nuclear war between India and Pakistan would be sufficient to trigger a nuclear winter.

Nuclear bombs dropped on cities and industrial areas in a fight between India and Pakistan would start firestorms that would put massive amounts of smoke into the upper atmosphere.

The nuclear winter case is predicated on getting 150 million tons (the 150-teragram case) of soot and smoke into the stratosphere and having it stay there. The assumption seems to be that cities will be targeted and that those cities will burn in massive firestorms.

The burning characteristics of forest fires may not be a perfect analog for cities, but firestorms with injection of smoke into the upper atmosphere were observed in previous cases of burning cities, after the earthquake-induced fire in San Francisco in 1906 [London, 1906] and the firebombing of Dresden in 1945 [Vonnegut, 1969].
Note: The pro-nuclear-winter side needs city firestorms for this case, because there have been massive forest fires that have not produced the claimed effects.

The steps needed to prove nuclear winter:
1. Prove that enough cities will have firestorms (the claim here is that this does not happen).
2. Prove that when enough cities in a sufficient area have firestorms, enough smoke and soot gets into the stratosphere (this claim is in trouble because of the Kuwait oil fires).
3. Prove that the condition persists and affects climate as the models predict (others have questioned this, but that issue is not addressed here).

Nuclear war is definitely to be avoided, but we should be precise about its effects for proper planning, policy and civil defense.

Nextbigfuture has looked at Nuclear war effects before.

Here we look at some more information on city firestorms.

This Glasstone blog post has a lot of links on the effects of fires and thermal radiation. Basically, there would need to be an analysis of the building density, composition and fuel loading in cities in Pakistan and India, plus an analysis of likely war targeting: would it be cities or military installations? After the discussion of firestorm prerequisites, I look at the composition of cities in India and Pakistan and do not see the conditions for a firestorm.

If there are not multiple citywide firestorms, then there is no trigger for nuclear winter, and the later modeling (which is itself uncertain) would not even need to be considered. It also shows that civil defense that reduces the likelihood of fires and firestorms is relevant and useful.

Firestorms have always required at least 50% of buildings to be ignited. A 71-page report by Robert M. Rodden, Floyd I. John, and Richard Laurino (Exploratory Analysis of Fire Storms, Stanford Research Institute, California, report AD616638, May 1965) identified the following parameters required by all firestorms:

(1) More than 8 pounds of fuel per square foot (40 kg per square metre) of ground area. Hence firestorms occurred in wooden buildings, like Hiroshima or the medieval part of Hamburg. The combustible fuel load in London is just 24 kg/m2, whereas in the firestorm area of Hamburg in 1943 it was 156 kg/m2. The real reason for all the historical fire conflagrations was only exposed in 1989 by the analysis of L. E. Frost and E.L. Jones, ‘The Fire Gap and the Greater Durability of Nineteenth-Century Cities’ (Planning Perspectives, vol.4, pp. 333-47). Each medieval city was built cheaply from inflammable ‘tinderbox’ wooden houses, using trees from the surrounding countryside. By 1800, Britain had cut down most of its forests to build wood houses and to burn for heating, so the price of wood rapidly increased (due to the expense of transporting trees long distances), until it finally exceeded the originally higher price of brick and stone; so from then on all new buildings were built of brick when wooden ones decayed. This rapidly reduced the fire risk. Also, in 1932, British Standard 476 was issued, which specified the fire resistance of building materials. In addition, new cities were built with wider streets and rubbish disposal to prevent tinder accumulation in alleys, which created more effective fire breaks.

(2) More than 50% of structures ignited initially.

(3) Initial surface winds of less than 8 miles per hour.

(4) Initial ignition area exceeding 0.5 square mile.

The fuel loading per unit ground area is equal to fuel loading per unit area of a building, multiplied by the builtupness fraction of the area. E.g., Hamburg had a 45% builtupness (45% of the ground area was actually covered by buildings), and the buildings were multistorey medieval wooden constructions containing 70 pounds of fuel per square foot. Hence, in Hamburg the fuel loading of ground area was 0.45*70 = 32 pounds per square foot, which was enough for a firestorm.

By contrast, modern cities have a builtupness of only 10-25% in most residential areas and 40% in commercial and downtown areas. Modern wooden American houses have a fuel loading of 20 pounds per square foot of building area with a builtupness below 25%, so the fuel loading per square foot of ground is below 20*0.25 = 5 pounds per square foot, and would not produce a firestorm. Brick and concrete buildings contain on average about 3.5 pounds per square foot of floor area, so they can't produce firestorms either, even if they are all ignited.
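The report's 8 lb/ft² threshold can be checked against these numbers directly (ground-area fuel loading = per-building loading × builtupness; all figures are from the paragraphs above, with the brick/concrete case treated as single-storey for simplicity):

```python
# Ground-area fuel loading = building fuel loading x builtupness fraction.
# Firestorm threshold from the Rodden/John/Laurino report: 8 lb/ft^2.
THRESHOLD = 8.0  # lb of fuel per square foot of ground area

def ground_loading(building_load_lb_ft2, builtupness):
    """Fuel load per square foot of ground, given per-building load and coverage."""
    return building_load_lb_ft2 * builtupness

cases = {
    "Hamburg 1943 (medieval wood)": (70.0, 0.45),
    "Modern US wooden suburb":      (20.0, 0.25),
    "Brick/concrete downtown":      (3.5, 0.40),
}

for name, (load, built) in cases.items():
    g = ground_loading(load, built)
    verdict = "firestorm possible" if g >= THRESHOLD else "below threshold"
    print(f"{name}: {g:.1f} lb/ft^2 -> {verdict}")
```

Only the medieval Hamburg case clears the threshold; modern wooden suburbs and brick/concrete districts fall well short, which is the core of the argument that modern cities do not support firestorms.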

Theodore Postol, in his paper 'Possible Fatalities from Superfires following Nuclear Attacks in or Near Urban Areas', in the 1986 U.S. National Academy of Sciences book The Medical Implications of Nuclear War, assumes falsely that brick and concrete cities can burn like the small areas of medieval German cities, and like Brode and Small he simply ignores the mechanism for the firestorm in Hiroshima, which had nothing to do with thermal radiation but was due to overturned breakfast charcoal braziers. Postol also complains that wooden houses exposed to nuclear tests didn't burn because they had white paint on them and shutters over the windows. That complaint undermines his whole anti-civil-defense argument by actually PROVING the value of simple civil defense countermeasures; and in fact most wooden houses are painted white, and in a real city - unlike the empty Nevada desert - few windows have a line of sight to the fireball anyway.

The Material of the Houses in India and Pakistan do not Appear to be Right for Firestorms

India Housing census of 2001

Material of Roofs in India

Concrete 19.8%
Tiles 32.6%
Grass, thatch,bamboo, wood, mud 21.9%
Other 25.7%

Material of Walls in India

Burnt Brick 43.7%
Mud, Unburnt Brick 32.2%
Grass, thatch, bamboo, Wood, etc. 10.2%
Other 13.9%

Material Used in Houses in India : Floors

Mosaic, Floor tiles 7.3%
Cement 26.5%
Mud 57.1%
Other 9.1%

Kerosene was used in 43% of homes for lighting. Kerosene is flammable, but how much kerosene is kept per home? And how much more electrification has occurred since the 2001 census?

Fuel was also used for cooking (in developed countries this is often piped natural gas). The stored cooking fuel would burn, but again, how much is there per house?

Pakistan also has a lot of mud and brick homes.

Kuwait Oil Fires

The Kuwaiti oil fires were a result of the scorched-earth policy of Iraqi military forces retreating from Kuwait in 1991 after being driven out by Coalition military forces during the Persian Gulf War. The fires showed the effects of vast emissions of particulate matter into the atmosphere in a geographically limited area: directly underneath the smoke plume, constrained model calculations suggested that daytime temperature may have dropped by ~10°C within ~200 km of the source.

Carl Sagan of the TTAPS study warned in January 1991 that so much smoke from the fires "might get so high as to disrupt agriculture in much of South Asia...." Sagan later conceded in his 1995 book The Demon-Haunted World: Science as a Candle in the Dark that this prediction did not turn out to be correct: "it was pitch black at noon and temperatures dropped 4°-6°C over the Persian Gulf, but not much smoke reached stratospheric altitudes and Asia was spared."

The 2007 study discussed above noted that modern computer models have been applied to the Kuwait oil fires, finding that individual smoke plumes are not able to loft smoke into the stratosphere, but that smoke from fires covering a large area - like some forest fires, or the burning of cities that would be expected to follow a nuclear strike - would loft significant amounts of smoke into the stratosphere.

Older Analysis

Nuclear winter reappraised (1986 Foreign Affairs)

Nuclear war survival skills book from 1987

° Myth: Because some modern H-bombs are over 1000 times as powerful as the A-bomb that destroyed most of Hiroshima, these H-bombs are 1000 times as deadly and destructive.

° Facts: A nuclear weapon 1000 times as powerful as the one that blasted Hiroshima, if exploded under comparable conditions, produces equally serious blast damage to wood-frame houses over an area up to about 130 times as large, not 1000 times as large.
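The "about 130 times" figure reflects cube-root blast scaling: the radius at which a given blast overpressure occurs grows as the cube root of yield, so the damaged area grows as yield to the two-thirds power. A quick check of the idealized scaling (the book's slightly higher 130× figure folds in burst-height and damage-criterion details not modeled here):

```python
# Cube-root blast scaling: radius ~ W^(1/3), so area ~ W^(2/3).
yield_ratio = 1000.0  # an H-bomb 1000x the yield of the Hiroshima bomb

radius_factor = yield_ratio ** (1 / 3)
area_factor = yield_ratio ** (2 / 3)
print(f"Blast radius scales by ~{radius_factor:.0f}x")
print(f"Blast-damage area scales by ~{area_factor:.0f}x, not {yield_ratio:.0f}x")
```

Idealized scaling gives a 100-fold area increase for a 1000-fold yield increase, the same order as the book's "up to about 130 times" and far short of the mythical 1000-fold.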

° Myth: A Russian nuclear attack on the United States would completely destroy all American cities.

° Facts: As long as Soviet leaders are rational they will continue to give first priority to knocking out our weapons and other military assets that can damage Russia and kill Russians. To explode enough nuclear weapons of any size to completely destroy American cities would be an irrational waste of warheads. The Soviets can make much better use of most of the warheads that would be required to completely destroy American cities; the majority of those warheads probably already are targeted to knock out our retaliatory missiles by being surface burst or near-surface burst on their hardened silos, located far from most cities and densely populated areas.

Unfortunately, many militarily significant targets - including naval vessels in port and port facilities, bombers and fighters on the ground, air base and airport facilities that can be used by bombers, Army installations, and key defense factories - are in or close to American cities. In the event of an all-out Soviet attack, most of these "soft" targets would be destroyed by air bursts. Air bursting (see Fig. 1.4) a given weapon subjects about twice as large an area to blast effects severe enough to destroy "soft" targets as does surface bursting (see Fig. 1.1) the same weapon. Fortunately for Americans living outside blast and fire areas, air bursts produce only very tiny particles. Most of these extremely small radioactive particles remain airborne for so long that their radioactive decay and wide dispersal before reaching the ground make them much less life-endangering than the promptly deposited larger fallout particles from surface and near-surface bursts. However, if you are a survival-minded American you should prepare to survive heavy fallout wherever you are. Unpredictable winds may bring fallout from unexpected directions. Or your area may be in a "hot spot" of life-endangering fallout caused by a rain-out or snow-out of both small and tiny particles from distant explosions. Or the enemy may use surface or near-surface bursts in your part of the country to crater long runways or otherwise disrupt U.S. retaliatory actions by producing heavy local fallout.

Today few if any of Russia's largest intercontinental ballistic missiles (ICBMs) are armed with a 20-megaton warhead. A huge Russian ICBM, the SS-18, typically carries 10 warheads each having a yield of 500 kilotons, each programmed to hit a separate target. See "Jane's Weapon Systems. 1987-1988." However, in March 1990 CIA Director William Webster told the U.S. Senate Armed Services Committee that ".... The USSR's strategic modernization program continues unabated," and that the SS-18 Mod 5 can carry 14 to 20 nuclear warheads. The warheads are generally assumed to be smaller than those of the older SS-18s.

° Myth: A heavy nuclear attack would set practically everything on fire, causing "firestorms" in cities that would exhaust the oxygen in the air. All shelter occupants would be killed by the intense heat.

° Facts: On a clear day, thermal pulses (heat radiation that travels at the speed of light) from an air burst can set fire to easily ignitable materials (such as window curtains, upholstery, dry newspaper, and dry grass) over about as large an area as is damaged by the blast. It can cause second-degree skin burns to exposed people who are as far as ten miles from a one-megaton (1 MT) explosion. (See Fig. 1.4.) (A 1-MT nuclear explosion is one that produces the same amount of energy as does one million tons of TNT.) If the weather is very clear and dry, the area of fire danger could be considerably larger. On a cloudy or smoggy day, however, particles in the air would absorb and scatter much of the heat radiation, and the area endangered by heat radiation from the fireball would be less than the area of severe blast damage.

"Firestorms" could occur only when the concentration of combustible structures is very high, as in the very dense centers of a few old American cities. At rural and suburban building densities, most people in earth- covered fallout shelters would not have their lives endangered by fires.



A fire storm is characterized by strong to gale force winds blowing toward the fire everywhere around the fire perimeter and results from the rising column of hot gases over an intense, mass fire drawing in the cool air from the periphery. These winds blow the fire brands into the burning area and tend to cool the unignited fuel outside so that ignition by radiated heat is more difficult, thus limiting fire spread. The conditions which give rise to a fire storm appear to be low natural wind velocity, flat terrain and a uniform distribution of high-surface density, highly combustible fuels which burn rapidly, coalescing individual fires into one burning mass within the fire perimeter.

Such fire storms have been observed in forest fires and were frequently experienced in the mass incendiary air raids in Europe and Japan during World War II. In fact, such fire storms were the most frequent type observed in Japan during mass raids. It was typical in such cases that the fire was mainly confined to the areas initially seeded with incendiary bombs, but within these areas fire destruction was virtually complete. In Hiroshima, hundreds of fires were burning throughout the area ultimately burned over within ten minutes after the bomb exploded. Each of these spread rapidly to adjacent structures during the first half hour, by which time the fire storm was well developed. Practically all fire spread had ceased after two hours, at which time the fire storm was approaching its peak intensity, with centrally directed winds of 30-40 mi/hr.

In Nagasaki, in spite of the similar yield, altitude of burst and weather conditions, a fire storm did not develop, probably because of the uneven terrain, the irregular layout of the city and the location of ground zero in a long, relatively narrow river valley north of the center of the city. Such spread of the fire beyond the area of initial ignition as was observed was to the southeast, against the wind direction at the time of the explosion. Because the rate of spread was slower, the fire burned longer. Here also, the combination of terrain, city layout, position of ground zero and wind direction limited the spread of fire primarily to areas seriously damaged by blast.

Civil defense document from 1973 (74 pages)

India's nuclear arsenal

Though India has not made any official statements about the size of its nuclear arsenal, the NRDC estimates that India has a stockpile of approximately 30-35 nuclear warheads and claims that India is producing additional nuclear materials. Joseph Cirincione at the Carnegie Endowment for International Peace estimates that India has produced enough weapons-grade plutonium for 50-90 nuclear weapons and a smaller but unknown quantity of weapons-grade uranium. Weapons-grade plutonium production takes place at the Bhabha Atomic Research Center, which is home to the Cirus reactor acquired from Canada, to the indigenous Dhruva reactor, and to a plutonium separation facility.

Shakti 1 claimed yield 43-60 kiloton, yield reported 12-25 kiloton
Shakti 2 Fission device, claimed yield 12 kiloton

Wikipedia lists India's current nuclear arsenal as 40-80 bombs.

Pakistan has weapons of comparable yield

boosted device 1998 claimed 25-36 kiloton, reported 9-12 kiloton

Wikipedia indicates that Pakistan has 70-90 nuclear weapons.

Analysts assume Pakistan could have developed operational 'tactical' warheads of 20 to 25 kt and 150 kilotons as well as heavy warheads with a yield of 300–500 kt. The low-yield weapons are probably designed to be carried by fighter bombers, such as Pakistan's F-16 Fighting Falcon or French Mirage 5 aircraft. Furthermore, these warheads are presumably fitted to Pakistan's short-range ballistic missiles. The higher-yield warheads are probably fitted to the Shaheen and Ghauri ballistic missiles.

Indian and Pakistan nuclear weapons appear to be mostly in the 20-25 kiloton range with a few larger ones. If Pakistan has a few larger nuclear weapons in the 150-500kt range then India probably does as well.

Nuclear proliferation analysis report from 2001


Simple and affordable defences against nuclear bombs and hurricanes

Further improvement of buildings for more resistance to nuclear bomb effects

Not included: white paint helps to reduce thermal effects by reflecting heat away.

Re-inventing civil defense with zero soft targets

Dark Matter Particles and Dark Galaxy

1. Scientists working on the Cryogenic Dark Matter Search (CDMS), in a disused iron ore mine in Minnesota, have announced that they have detected two events consistent with weakly interacting massive particles (WIMPs), the particles thought to make up dark matter.

If the events are confirmed by further observations, which will begin next year, the detection would rank as one of the most important recent advances in physics and in our understanding of the cosmos.

The particles showed as two tiny pulses of heat deposited over the course of two years in chunks of germanium and silicon that had been cooled to a temperature near absolute zero.

The detectors are placed half a mile underground to keep them from being affected by background radiation.

But the scientists still said there was more than a 20 percent chance that the pulses were caused by fluctuations in the background radioactivity of their cavern, so the results were tantalizing, but not definitive.

This is confirmation of the rumor that started about two weeks ago

The results summary from Dec 17, 2009 from the Berkeley Cryogenic Dark Matter Search (CDMS) experiment (2-page PDF):

In this new data set we indeed see two events with characteristics consistent with those expected from WIMPs. However, there is also a chance that both events could be due to background particles. Scientists have a strict set of criteria for determining whether a new discovery has been made. The ratio of signal to background events must be large enough that there is no reasonable doubt. Typically there must be fewer than one chance in a thousand of the signal being due to background. In this case, a signal of about five events would have met those criteria. We estimate that there is about a one in four chance to have seen two backgrounds events, so we can make no claim to have discovered WIMPs. Instead we say that the rate of WIMP interactions with nuclei must be less than a particular value that depends on the mass of the WIMP. The numerical values obtained for these interaction rates from this data set are more stringent than those obtained from previous data for most WIMP masses predicted by theories. Such upper limits are still quite valuable in eliminating a number of theories that might explain dark matter.

What comes next? While the same set of detectors could be operated at Soudan for many more years to see if more WIMP events appear, this would not take advantage of new detector developments and would try the patience of even the most stalwart experimenters (not to mention theorists). A better way to increase our sensitivity to WIMPs is to increase the number (or mass) of detectors that might see them, while still maintaining our ability to keep backgrounds under control. This is precisely what CDMS experimenters (and many other collaborations worldwide) are now in the process of doing. By summer of 2010, we hope to have about three times more germanium nuclei sitting near absolute zero at Soudan, patiently waiting for WIMPs to come along and provide the perfect billiard ball shots that will offer compelling evidence for the direct detection of dark matter in the laboratory.
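The "one in four chance" of seeing two background events follows from Poisson statistics. The sketch below assumes an expected background of about 0.9 events; the exact expectation is not stated in the excerpt, and ~0.8-0.9 is simply an assumption that reproduces the quoted odds:

```python
import math

def p_at_least(k, mu):
    """Poisson probability of observing k or more events when mu are expected."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

mu_background = 0.9  # assumed expected background events (not from the excerpt)
p_two = p_at_least(2, mu_background)
print(f"P(>=2 background events | mu={mu_background}) = {p_two:.2f}")
```

With an expected background near 0.9 events, the chance of two or more background events is about 0.23, i.e. roughly one in four, which is why the collaboration claims a limit rather than a discovery.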

Soudan Underground Lab

Cryogenic Dark Matter Search Posters

Dark Galaxy Evidence: New evidence has been discovered by an international team led by astronomers from the National Science Foundation’s Arecibo Observatory and from Cardiff University in the United Kingdom that VIRGOHI 21, a mysterious cloud of hydrogen in the Virgo Cluster 50 million light-years from the Earth, is a Dark Galaxy, emitting no starlight. Their results not only indicate the presence of a dark galaxy but also explain the long-standing mystery of its strangely stretched neighbour. Skeptics of the dark-matter interpretation argue that VIRGOHI21 is simply a tidal tail of the nearby galaxy NGC 4254

December 22, 2009

Tiny nano-electromagnets turn a cloak of invisibility into a possibility

A team of researchers at the FOM institute AMOLF has succeeded for the first time in powering an energy transfer between nano-electromagnets with the magnetic field of light.

This breakthrough is of major importance in the quest for magnetic 'meta-materials' with which light rays can be deflected in every possible direction. This could make it possible to produce perfect lenses and, in the fullness of time, even 'invisibility cloaks'.

The artificial 'meta-materials' studied by the researchers consist of very small U-shaped metal 'nano-rings'. The electromagnetic field of light drives charges back and forth, thereby inducing an alternating current in each U shape. The tiny opening at the top of the ring makes sure that the current zooms around at the frequencies of light. In this way, each ring becomes a small but strong electromagnet, with its north and south poles alternating 500 trillion times per second.
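The U-shaped ring with a small gap behaves essentially as a nanoscale LC circuit: the loop supplies inductance, the gap supplies capacitance, and the ring resonates at f = 1/(2π√(LC)). A minimal sketch of that scaling; the inductance and capacitance values below are illustrative guesses, not numbers from the paper:

```python
import math

C_LIGHT = 3.0e8  # speed of light, m/s

def lc_resonance_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an ideal LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical values for a nanoscale split ring (illustrative only):
L_ring = 1.0e-13   # 0.1 pH of loop inductance
C_gap = 5.5e-18    # 5.5 aF of gap capacitance

f = lc_resonance_hz(L_ring, C_gap)
wavelength = C_LIGHT / f
print(f"resonance: {f / 1e12:.0f} THz, free-space wavelength: {wavelength * 1e6:.2f} um")
```

With these assumed values the resonance lands near 215 THz (about 1.4 μm), the near-infrared band the PRL abstract reports; attofarad-scale gap capacitances are what push ordinary circuit resonance up to optical frequencies.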

Physical Review Letters: Electric and Magnetic Dipole Coupling in Near-Infrared Split-Ring Metamaterial Arrays

We present experimental observations of strong electric and magnetic interactions between split ring resonators (SRRs) in metamaterials. We fabricated near-infrared planar metamaterials with different inter-SRR spacings along different directions. Our transmission measurements show blueshifts and redshifts of the magnetic resonance, depending on SRR orientation relative to the lattice. The shifts agree well with simultaneous magnetic and electric near-field dipole coupling. We also find large broadening of the resonance, accompanied by a decrease in effective cross section per SRR with increasing density due to superradiant scattering. Our data shed new light on Lorentz-Lorenz approaches to metamaterials.

4 page pdf on arxiv

In conclusion, we have measured large resonance shifts as a function of density in SRR arrays resonant at λ = 1.4 μm. These shifts are due to strong near-field electrostatic and magnetostatic dipole coupling. Furthermore, we observe electrodynamic superradiant damping that causes resonance broadening and an effective reduction of the extinction cross section per SRR. Since the data show that the response of SRR arrays is not simply given by the product of the density and polarizability of single constituents, we conclude that a Lorentz-Lorenz analysis to explain effective media parameters of metamaterials 'atomistically' is not valid. The fact that the Lorentz-Lorenz picture is invalid has important repercussions: it calls for a shift away from the paradigm that the highest polarizability per constituent is required to obtain the strongest electric or magnetic response from arrays of electric or magnetic scatterers. Our experiments show that increasing the density of highly polarizable constituents to raise the effective medium response is ineffective, since superradiant damping limits the achievable response. To strengthen ε or μ, we propose that one ideally finds constituents that have both a smaller footprint and a smaller polarizability per constituent. We stress that even if constituent coupling modifies ε and μ, we do not call into question reported effective medium parameters or the conceptual validity thereof per se. The effective medium regime only breaks down when constituent coupling is so strong that collective modes of differently shaped macroscopic objects carved from the same SRR array have very different resonance frequencies or widths. In this regime interesting physics comes into view, particularly regarding active devices. Specific examples are array antennas for spontaneous emission and 'lasing spasers', where the lowest-loss array mode will lase most easily.
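The Lorentz-Lorenz (Clausius-Mossotti) picture the authors test predicts the effective permittivity from just the constituent density N and single-constituent polarizability α. A minimal sketch in normalized units (the product Nα is treated as dimensionless; the values are illustrative):

```python
def lorentz_lorenz_eps(n_alpha: float) -> float:
    """Effective permittivity from the Lorentz-Lorenz (Clausius-Mossotti)
    relation, with n_alpha = N * alpha in dimensionless (normalized) units:
        eps_eff = (1 + 2*N*alpha/3) / (1 - N*alpha/3)
    """
    return (1.0 + 2.0 * n_alpha / 3.0) / (1.0 - n_alpha / 3.0)

# In this picture, packing in more (or more polarizable) constituents always
# strengthens the response -- exactly the premise the measurements contradict:
for n_alpha in (0.0, 0.1, 0.3, 0.6):
    print(f"N*alpha = {n_alpha:.1f}  ->  eps_eff = {lorentz_lorenz_eps(n_alpha):.3f}")
```

The measured superradiant broadening means the cross section per SRR drops as density rises, so this monotonic density scaling fails for real SRR arrays; that failure is the paper's central point.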

Very strong interaction
The researchers made an important discovery by measuring how much light passes through a thick grid of these electromagnets. It appears that when the tiny currents of the rings are actuated by light, the nano-magnets also influence each other and can power each other.

The researchers have also shown for the first time that the interaction with the magnetic field of light is very strong in these materials; just as strong as the interaction with the electrical field in the best 'classical' optical materials. This improved understanding of the nano-magnets and their interaction with light gives the researchers all the ingredients they need to disperse light along arbitrary paths.

Fast magnetic fields
We are all familiar with rod-shaped magnets: they are described as 'dipolar', with a north pole and a south pole, and the tendency to attract each other’s opposite poles and repel similar poles. We also know that, just like a compass, magnets align themselves along a magnetic field. This is how you can manipulate magnets with magnetic fields, and – vice versa – you can exercise control over magnetic fields using magnets. This commonplace intuition works particularly well for slowly changing magnetic fields, but not for those in a state of rapid flux.

Handicap for optics
Light is an electromagnetic wave consisting of a very rapidly fluctuating electrical field and an associated magnetic field. In principle, you can direct electromagnetic waves at will by manipulating both the electrical field and the magnetic field. But at the very high frequencies of light (500 THz, or 500 trillion vibrations per second), atoms scarcely respond to magnetic fields. This is why normal materials only control the electrical field of light and not the magnetic field, and is also why normal optical devices (lenses, mirrors and glass fibres) are handicapped in the way they work. But this type of control is actually possible with these artificial 'meta-materials'.

Electron-Cyclotron Resonance Thruster

Experimental ECR-GDM Thruster As a Model for Fusion Propulsion by Jerome J. Brainerd and Al Reisz

A small experimental electric thruster, in which power is supplied via Electron-Cyclotron Resonance (ECR) absorption of microwaves by the propellant gas, has been tested at the NASA Marshall Space Flight Center (MSFC). The plasma generated is confined radially by an axial dc magnetic field applied through a series of water-cooled coils around the resonance chamber. The B-field is maintained at about the strength required for ECR in the central part of the chamber (the ECR zone), with stronger magnetic fields acting as magnetic Gas Dynamic Mirrors (GDM) at the ends. The plasma is ejected into the vacuum chamber through the downstream magnetic mirror, which acts as a magnetic nozzle. Of the magnetic field configurations tested, the one most similar to the GDM device proposed by Kammash produced the most significant results. The Radio Frequency (RF) waves, or microwaves, that power the thruster are launched axially into the resonance chamber and couple very strongly with the argon plasma. Electron densities of 10^11 to 10^13 per cm^3 have been measured in the plume downstream of the magnetic nozzle. Bimodal velocity profiles have been measured in the plume via Laser Induced Fluorescence (LIF).

Reisz Engineers info on the ECR

More Engineers info on the ECR

Reisz Engineer's Super-Accelerated Electromagnetic Engine for Space Exploration

Experimental 10 GHz ECR Thruster by Jerome J. Brainerd and Al Reisz, presented at the 42nd AIAA/ASME/SAE/ASEE Joint Propulsion Conference in Sacramento, California in July of 2006. (word doc)

An experimental electric thruster has been built for testing at the NASA Marshall Space Flight Center (MSFC). The plasma is formed by Electron-Cyclotron Resonance (ECR) absorption of microwaves at 10.1 GHz frequency. The plasma is confined radially (theta pinch) by an applied axial dc magnetic field, with a strength of 0.36 tesla (3600 gauss) for ECR. The field is shaped by a strong magnetic mirror on the upstream end and a magnetic nozzle on the downstream end. Argon is used as the test propellant, although a variety of gases may be utilized.
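The quoted 0.36 tesla follows directly from the electron-cyclotron resonance condition, f_ce = eB/(2π m_e), solved for B at the 10.1 GHz drive frequency. A quick check, using only standard physical constants and the frequencies quoted in these abstracts:

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def ecr_field_tesla(freq_hz: float) -> float:
    """Magnetic field at which electrons gyrate at freq_hz:
    f_ce = e*B / (2*pi*m_e)  =>  B = 2*pi*f*m_e / e."""
    return 2.0 * math.pi * freq_hz * M_ELECTRON / E_CHARGE

print(f"10.1 GHz -> {ecr_field_tesla(10.1e9):.3f} T")   # matches the 0.36 tesla quoted above
print(f"2.45 GHz -> {ecr_field_tesla(2.45e9):.4f} T")   # the lower drive frequency also used
```

The same relation explains why stronger coil fields at the chamber ends detune the plasma away from resonance there: ECR heating happens only in the zone where B sits at the resonant value.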

ECR-GDM Thruster for Fusion Propulsion

The concept of the Gasdynamic Mirror (GDM) device for fusion propulsion was proposed by Kammash and Lee (1995) over a decade ago, and several theoretical papers have supported the feasibility of the concept. A new ECR plasma source has been built to supply power to the GDM experimental thruster previously tested at the Marshall Space Flight Center (MSFC). The new plasma generator, powered by microwaves at 2.45 or 10 GHz, is currently being tested. This ECR plasma source operates in a number of distinct plasma modes, depending upon the strength and shape of the local magnetic field. Of particular interest is the compact plasma jet issuing from the plasma generator when operated in a mirror configuration. The measured velocity profile in the jet plume is bimodal, possibly as a result of the GDM effect in the ECR chamber of the thruster.

7 page word doc of ECR-GDM Thruster for Fusion Propulsion

Wikipedia on Electron cyclotron resonance

Electron cyclotron resonance related spacecraft thruster patent

A thruster has a chamber defined within a tube. The tube has a longitudinal axis which defines an axis of thrust; an injector injects ionizable gas within the tube, at one end of the chamber. A magnetic field generator with two coils generates a magnetic field parallel to the axis; the magnetic field has two maxima along the axis; an electromagnetic field generator has a first resonant cavity between the two coils generating a microwave ionizing field at the electron cyclotron resonance in the chamber, between the two maxima of the magnetic field. The electromagnetic field generator has a second resonant cavity on the other side of the second coil. The second resonant cavity generates a ponderomotive accelerating field accelerating the ionized gas. The thruster ionizes the gas by electron cyclotron resonance, and subsequently accelerates both electrons and ions by the magnetized ponderomotive force.

The thruster relies on electron cyclotron resonance for producing a plasma, and on magnetized ponderomotive force for accelerating this plasma for producing thrust.
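The ponderomotive force in the patent's second cavity pushes charges down the gradient of the oscillating field intensity; for an unmagnetized electron it scales as F = e²∇(E²)/(4 m_e ω²). A rough order-of-magnitude sketch; the field amplitude and gradient length below are assumed values, since the patent text here gives no numbers:

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def ponderomotive_force_n(e_amp_v_per_m: float, grad_len_m: float, freq_hz: float) -> float:
    """Unmagnetized ponderomotive force on one electron,
    F = e^2 * grad(E^2) / (4 * m_e * omega^2),
    approximating grad(E^2) ~ E^2 / L over a gradient length L."""
    omega = 2.0 * math.pi * freq_hz
    grad_e2 = e_amp_v_per_m ** 2 / grad_len_m
    return E_CHARGE ** 2 * grad_e2 / (4.0 * M_ELECTRON * omega ** 2)

# Assumed: a 100 kV/m microwave field decaying over 1 cm at 2.45 GHz
force = ponderomotive_force_n(1.0e5, 0.01, 2.45e9)
print(f"force per electron: {force:.2e} N")
```

The magnetized version the patent invokes gains a resonant enhancement for drive frequencies near the cyclotron frequency, which is why the accelerating cavity is paired with the coil field; the scaling above is only the field-free limit.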

Development of the Electrodeless Plasma Thruster at High Power: Investigations on the Microwave-Plasma Coupling

Improving the performance of the Electrodeless Plasma Thruster at higher power levels requires a deeper understanding of the microwave-plasma energy coupling. In order to investigate in detail the dynamics of this coupling and its variations under diverse operational conditions, the Elwing Company has developed specific, highly versatile components that can be fitted on the current 8-12 kW model of the electrodeless plasma thruster. This article fully exposes the development process of these components, which involves finite element modeling, along with the design characteristics and operational capabilities of the microwave applicator and the magnetic structure.

Bussard's paper on using IEC fusion discusses ECR.

12 page pdf that details the IEC fusion spacecraft designs for achieving up to one million ISP

New concepts for electrostatic confinement and control of reactions between fusionable fuels offer the prospect of clean, nonhazardous nuclear fusion propulsion systems of very high performance. These can use either direct-electric heating by relativistic electron beams, or propellant dilution of fusion products. If feasible, QED rocket engine systems may give F = (4000-10,000)/Isp, for 1500 < Isp < 1E6 s; two to three orders of magnitude higher than from any other conventional nuclear or electric space propulsion system concept.
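Bussard's band F = (4000-10,000)/Isp expresses the fixed-power trade-off between thrust and specific impulse; reading it as a thrust-to-weight-like parameter (an interpretation on this blog's part, not spelled out in the quote) makes the scaling easy to tabulate:

```python
def qed_thrust_param(isp_s: float, k_low: float = 4000.0, k_high: float = 10000.0):
    """Bussard's quoted band F = (4000-10000)/Isp, valid for 1500 < Isp < 1e6 s.
    Interpreted here (an assumption) as a thrust-to-weight-like parameter that
    falls inversely with Isp at fixed engine power."""
    if not (1500.0 <= isp_s <= 1.0e6):
        raise ValueError("Isp outside the range Bussard quotes")
    return k_low / isp_s, k_high / isp_s

for isp in (1500.0, 1.0e4, 1.0e6):
    lo, hi = qed_thrust_param(isp)
    print(f"Isp = {isp:>9.0f} s  ->  F between {lo:.4f} and {hi:.4f}")
```

Even at the extreme million-second end of the range, the parameter stays finite, which is the sense in which these engines are claimed to beat conventional nuclear or electric concepts by two to three orders of magnitude.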

All of the systems offer payloads of 14.4%-20.6%, with transit times to Mars' orbit of 33 to 54 days, over a flight distance of about 90 million km. The mean angle of the flight vector to the tangent to the planetary orbit path is about 40 degrees: this is true "point-and-go" navigation! All vehicle force accelerations are well above the solar gravity field throughout their flight, thus all flights are high-thrust in character. This eliminates the need for added vehicle characteristic velocity required to lift propellant mass out through the solar field, as is the case for low-thrust systems (a < 0.1 mg) that must spiral slowly out from the Sun.
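The "well above the solar gravity field" claim is easy to benchmark: at Earth's distance the Sun's pull is about 6 mm/s², roughly 0.6 milligee, so the paper's low-thrust cutoff of 0.1 mg sits well below it (reading "mg" as milligee, an assumption). A quick check from standard constants:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m
G_EARTH = 9.81       # standard gravity, m/s^2

def solar_gravity(r_m: float) -> float:
    """Sun's gravitational acceleration at distance r: G*M_sun / r^2."""
    return G * M_SUN / r_m ** 2

g_sun = solar_gravity(AU)
print(f"solar gravity at 1 AU: {g_sun * 1e3:.2f} mm/s^2 = {g_sun / G_EARTH * 1e3:.2f} milligee")
```

A vehicle sustaining a few milligee or more therefore flies a nearly straight "point-and-go" trajectory, while a sub-0.1-milligee electric stage must spiral outward as the text describes.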

Tonight on Fast Forward Radio

Economist Robin Hanson and futurist Brian Wang join us as we continue our special series leading up to Foresight 2010. The conference, January 16-17 in Palo Alto, California, provides a unique opportunity to explore the convergence of nanotechnology and artificial intelligence and to celebrate the 20th anniversary of the founding of the Foresight Institute.

10:00 Eastern/9:00 Central/8:00 Mountain/7:00 Pacific.

Today Robin Hanson wrote Meh Transhumanism

I was going to title this “Against Transhumanism,” but then realized I’m more indifferent than against; it distracts some folks from what I think important, but probably attracts others; I can’t really tell if there is much overall effect.

Truth be told, folks who analyze the future but don't frame their predictions or advice in terms of standard ideological categories are largely ignored, because few folks actually care much about the future except as a place to tell morality tales about who today is naughty vs. nice. It would be great if those who really cared more directly about the future could find each other and work together, but alas too many others want to pretend to be these folks to make this task anything but very hard.

Blog talk radio Call-in Number: (347) 215-8972

Liquid fluoride thorium reactor in Wired

Wired covers the Liquid fluoride thorium reactor.

The thorium energy blog (which is where Kirk Sorensen and the others write and gather for discussion) has some additions.

Fluoride salt is NOT highly corrosive if it's put in the right container material. The high-nickel-alloy Hastelloy-N was proven by Oak Ridge scientists and engineers to be compatible with fluoride salt at the elevated temperatures at which LFTR would operate. Discovering Hastelloy-N and proving it would work was one of their great accomplishments.

Also, LFTR is tightly controlled--but it is predominantly self-controlled.

As Sorensen and his pals began delving into this history, they discovered not only an alternative fuel but also the design for the alternative reactor. Using that template, the Energy From Thorium team helped produce a design for a new liquid fluoride thorium reactor, or LFTR (pronounced "lifter"), which, according to estimates by Sorensen and others, would be some 50 percent more efficient than today's light-water uranium reactors. If the US reactor fleet could be converted to LFTRs overnight, existing thorium reserves would power the US for a thousand years.
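The "thousand years" scale comes from how energy-dense fully fissioned thorium is. A back-of-envelope sketch; the 200 MeV per fission and 45% thermal-to-electric efficiency are textbook round numbers, not figures from the Wired piece:

```python
AVOGADRO = 6.022e23           # atoms per mole
MOLAR_MASS_TH232 = 232.0      # g/mol
MEV_PER_FISSION = 200.0       # typical fission energy release, MeV
J_PER_MEV = 1.602e-13         # joules per MeV
SECONDS_PER_YEAR = 3.156e7

def gwe_years_per_tonne_th(efficiency: float = 0.45) -> float:
    """Electric gigawatt-years from completely fissioning one tonne of
    thorium (via bred U-233), at the given thermal-to-electric efficiency."""
    atoms = 1.0e6 / MOLAR_MASS_TH232 * AVOGADRO        # atoms in one tonne
    thermal_j = atoms * MEV_PER_FISSION * J_PER_MEV
    return thermal_j * efficiency / (1.0e9 * SECONDS_PER_YEAR)

print(f"~{gwe_years_per_tonne_th():.1f} GWe-year per tonne of thorium")
```

At roughly one gigawatt-year of electricity per tonne, because a liquid-fuel reactor can burn essentially all of its thorium rather than a few percent, even modest reserves stretch a very long way; that is the arithmetic behind the estimates Sorensen cites.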

Overseas, the nuclear power establishment is getting the message. In France, which already generates more than 75 percent of its electricity from nuclear power, the Laboratoire de Physique Subatomique et de Cosmologie has been building models of variations of Weinberg’s design for molten salt reactors to see if they can be made to work efficiently. The real action, though, is in India and China, both of which need to satisfy an immense and growing demand for electricity. The world’s largest source of thorium, India, doesn’t have any commercial thorium reactors yet. But it has announced plans to increase its nuclear power capacity: Nuclear energy now accounts for 9 percent of India’s total energy; the government expects that by 2050 it will be 25 percent, with thorium generating a large part of that. China plans to build dozens of nuclear reactors in the coming decade, and it hosted a major thorium conference last October. The People’s Republic recently ordered mineral refiners to reserve the thorium they produce so it can be used to generate nuclear power.

You can Digg the Wired article

Or share it on Reddit