November 28, 2009

Guangdong Nuclear Power Plans for More Uranium

The Wall Street Journal reports that China Guangdong Nuclear Power Holdings Co., one of the country's two nuclear-energy firms, said it will need more than 100,000 metric tons of uranium between 2009 and 2020 to feed its growing fleet of nuclear-power plants, a huge jump from current demand levels that underscores the scope of China's nuclear-energy ambitions.

Guangdong Nuclear Power's uranium needs will jump to 10,000 tons a year in 2020 from 2,000 tons this year. Guangdong Nuclear Power is expected to have 34 gigawatts of nuclear-power capacity in operation by 2020, accounting for more than 50% of China's total capacity, up from 3.94 gigawatts currently operational.
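As a rough consistency check on these figures, here is a back-of-the-envelope sketch using only the numbers quoted above; the linear demand ramp and the interpretation of the gap are my own assumptions, not from the article.

```python
# Rough consistency check on the quoted uranium figures (illustrative only).
capacity_2020_gw = 34          # planned nuclear capacity in 2020 (from the article)
demand_2020_tpy = 10_000       # tonnes of uranium per year in 2020 (from the article)
demand_2009_tpy = 2_000        # tonnes of uranium per year in 2009 (from the article)

# Implied uranium use per gigawatt-year in 2020.
tonnes_per_gw_year = demand_2020_tpy / capacity_2020_gw
print(f"Implied demand: ~{tonnes_per_gw_year:.0f} t of uranium per GW per year")

# If annual demand ramped linearly from 2009 to 2020, the cumulative total would be:
years = 12  # 2009 through 2020 inclusive
linear_ramp_total = years * (demand_2009_tpy + demand_2020_tpy) / 2
print(f"Linear-ramp cumulative 2009-2020: ~{linear_ramp_total:,.0f} t")
# ~72,000 t, well below the stated 100,000 t, so the headline figure presumably also
# covers initial reactor cores and stockpiling (my assumption; the article does not say).
```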



Guangdong Nuclear Power plans to build plants overseas and is targeting Southeast Asia, Mr. Zhou said. Several nations there, including Vietnam, Thailand, Malaysia and Indonesia, are considering building nuclear reactors. The company is also looking to invest in overseas uranium companies as well as ramping up domestic production to meet uranium needs, he said.

Guangdong Nuclear Power is cooperating with foreign companies such as French government-controlled nuclear group Areva, as well as uranium miners Cameco Corp. and Paladin Energy Ltd., on long-term uranium supplies, he said.

"We are willing to take a controlling or minority stake in uranium companies, cooperating in uranium exploration, production or processing," Mr. Zhou said.

Last November, China's Ministry of Commerce gave Guangdong Nuclear Power permission to import uranium for civil use. The only other importer is China National Nuclear Corp. (CNNC).

With an eye on future uranium needs, Guangdong Nuclear in early September 2009 offered 83.6 million Australian dollars (US$76.9 million) for control of Australian uranium-exploration company Energy Metals Ltd.




China and the USA Announce Greenhouse Gas Targets for 2020 That Are Weaker than the Kyoto Protocol

China is going to reduce its carbon dioxide emissions per unit of GDP (carbon intensity) in 2020 by 40 to 45 percent compared with the level of 2005.

"In 2020, the country's GDP will at least double that of now, so will the emissions of greenhouse gases (GHG). But the required reduction of emissions intensity by 40 to 45 percent in 2020 compared with the level of 2005 means the emissions of GHG in 2020 has to be roughly the same as emissions now," Qi Jianguo said.

In order to achieve the target, more efforts must be made besides strictly abiding by the principle of "energy-saving and emissions reductions," he said.

The government would devote major efforts to developing renewable and nuclear energies to ensure the consumption of non-fossil-fuel power accounted for 15 percent of the country's total primary energy consumption by 2020, said the State Council statement.

More trees would be planted and the country's forest area would increase by 40 million hectares and forest volume by 1.3 billion cubic meters from the levels of 2005.


India might come up with a plan to cut emissions in response to those announced by the US and China.

Desai also said that China's steps are not drastic, because it talks about reducing carbon intensity -- the amount of carbon dioxide emitted for every dollar of GDP it generates. "These announcements are not legally binding and China's GDP will continue to grow," he added.

The United States had earlier announced that it could offer a target reduction of 17% in greenhouse gas emissions by 2020 as compared to 2005 levels.




Another expert feels that the steps announced by US are meaningless. "The US has announced an absolute reduction target of 17% below 2005 levels, by 2020. This means a mere 3% reduction below 1990 levels. Science demands that developed countries cut emission by 40% below 1990 levels. In fact, the US proposal is a death-knell for the Kyoto Protocol, which in its first commitment period had asked for more -- 5.2% reduction over 1990 levels," said Sunita Narain of the Centre for Science and Environment.
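The conversion from a 2005 baseline to a 1990 baseline works out roughly as Narain says, assuming US greenhouse gas emissions in 2005 were about 16% above 1990 levels (a figure from EPA inventory data; the assumption is mine, not stated in the article).

```python
# Translate a cut measured against 2005 into a cut measured against 1990.
ratio_2005_to_1990 = 1.16   # assumption: 2005 US GHG emissions ~16% above 1990
cut_vs_2005 = 0.17          # the announced 17% reduction below 2005 by 2020

emissions_2020_vs_1990 = (1 - cut_vs_2005) * ratio_2005_to_1990
print(f"2020 emissions vs 1990: {emissions_2020_vs_1990:.2f} "
      f"(about {1 - emissions_2020_vs_1990:.0%} below 1990)")
# ~0.96, i.e. only a few percent below 1990 -- consistent with the '3%' quoted above.
```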


Open and Transparent Data Needed for Reproducibility and Verification of the Climate Models



Watts Up With That explains the core of the Climategate issues

CRU’s decision to withhold data and code from public inspection is not only against the scientific method, given the impact their work has on governmental policies and taxpayer funded programs, it is, in my opinion, unethical. – Anthony Watts


(H/T J. Storrs Hall at Foresight, who also has two articles that explain more about how science works and why raw data is important.)

George Monbiot, a climate change activist and author, says the following:

But there are some messages that require no spin to make them look bad. There appears to be evidence here of attempts to prevent scientific data from being released and even to destroy material that was subject to a freedom of information request.

Worse still, some of the emails suggest efforts to prevent the publication of work by climate sceptics or to keep it out of a report by the Intergovernmental Panel on Climate Change. I believe that the head of the unit, Phil Jones, should now resign. Some of the data discussed in the emails should be re-analysed.


Those who are making the case about climate change need to do the extra work to address the doubts about the data and about the lack of transparency.

Some of the raw data has been dumped

Scientists at the University of East Anglia (UEA) have admitted throwing away much of the raw temperature data on which their predictions of global warming are based. The data were gathered from weather stations around the world and then adjusted to take account of variables in the way they were collected. The revised figures were kept, but the originals — stored on paper and magnetic tape — were dumped to save space when the CRU moved to a new building.

The CRU is the world’s leading centre for reconstructing past climate and temperatures. Climate change sceptics have long been keen to examine exactly how its data were compiled. That is now impossible.


There are also reports that data was deleted on purpose in 2009.

Steven McIntyre had sought release of CRU's data under the UK Freedom of Information Act. At first, Jones simply refused this and several similar requests from other parties. Then on July 27, 2009, CRU erased three key files from its public database, as Mr. McIntyre can prove easily because he has before-and-after screenshots. CRU followed up, in short order, with what McIntyre and some of his readers called an "unprecedented" "purge" of its public data directory. McIntyre's screenshots tell a breathtaking story of wholesale removal of files previously made available to the public, including, at one point, the deletion of every single listing in Phil Jones' public directory. Anthony Watts summed up the situation in one word: "panic."


CRU has posted its response that over 95% of the raw data is public

Our global temperature series tallies with those of other, completely independent, groups of scientists working for NASA and the National Climate Data Center in the United States, among others. Even if you were to ignore our findings, theirs show the same results. The facts speak for themselves; there is no need for anyone to manipulate them.

We have been bombarded by Freedom of Information requests to release the temperature data that are provided to us by meteorological services around the world via a large network of weather stations. This information is not ours to give without the permission of the meteorological services involved. We have responded to these Freedom of Information requests appropriately and with the knowledge and guidance of the Information Commissioner.


Wall Street Journal assessment of the CRU response: The response from the defenders of Mr. Mann and his circle has been that even if they did disparage doubters and exclude contrary points of view, theirs is still the best climate science. The proof for this is circular. It's the best, we're told, because it's the most-published and most-cited—in that same peer-reviewed literature. The public has every reason to ask why they felt the need to rig the game if their science is as indisputable as they claim.


Accuracy and Quality of the Climate Model Code


Ronald Bailey at Reason.com summarizes some of the analysis of the CRU models.

British statistician William Briggs is not impressed by the CRU climatologists' statistical acumen:

Eight out of nine statistical factors that increase statistical uncertainty are not accounted for by CRU. Just one of those factors increases the uncertainty by a factor of two to ten.

Detailed analysis of the climate modelling program code has begun

Any large computer program will have bugs. The climate modelling code should be open source and public, so that everyone knows exactly what is being done to produce the climate models and bugs can be found and corrected.

Climate change models were used by the EPA (Environmental Protection Agency) to set policy and are being used to set international policy that affects many billions and even trillions of dollars in projects.

Absolute openness and transparency are needed for the data and computer models on which the discussions and decisions are based.

Many of the policy changes being made based on the climate change case can also be justified based on air pollution. I think the air pollution case is more solid: air pollution has been correlated with increased health risks and deaths. Climate science needs to be accurate, and the policy decisions should then follow from that accurate science.

Micropillars With Quantum Dots for Firing Single Targeted Photons


Tiny towers, a hundred times thinner than a human hair, with special properties: such nanostructures are produced by the Department of Applied Physics at the University of Würzburg. (Image: Monika Emmerling / Adriana Wolf)

[from Nanowerk] What is special about the Würzburg quantum dot towers is that "with them it is possible to 'fire off' single photons in a targeted fashion. It is structural elements like these that are needed for the tap-proof transmission of data in the field of quantum cryptography," explains Würzburg physicist Stephan Reitzenstein.

Embedded in the center of the towers are some 100 quantum dots made from the semiconducting material indium gallium arsenide.


Non-resonant dot–cavity coupling and its potential for resonant single-quantum-dot spectroscopy

Non-resonant emitter–cavity coupling is a fascinating effect recently observed as unexpected pronounced cavity resonance emission even in strongly detuned single quantum dot–microcavity systems. This phenomenon indicates strong, complex light–matter interactions in these solid-state systems, and has major implications for single-photon sources and quantum information applications. We study non-resonant dot–cavity coupling of individual quantum dots in micropillars under resonant excitation, revealing a pronounced effect over positive and negative quantum dot mode detunings. Our results suggest a dominant role of phonon-mediated dephasing in dot–cavity coupling, giving a new perspective to the controversial discussions ongoing in the literature. Such enhanced insight is essential for various cavity-based quantum electrodynamic systems using emitters that experience phonon coupling, such as colour centres in diamond and colloidal nanocrystals. Non-resonant coupling is demonstrated to be a versatile 'monitoring' tool for observing relevant quantum dot s-shell emission properties and background-free photon statistics.

5 pages of supplemental information



Fujitsu Labs Can Form Graphene Transistors on Silicon Wafers



Physorg reports that Fujitsu Laboratories has developed a novel technology for forming graphene transistors directly on the entire surface of large-scale insulating substrates at low temperatures while employing chemical-vapor deposition (CVD) techniques which are in widespread use in semiconductor manufacturing.

Fujitsu Laboratories developed novel technology that allows for graphene to be formed on insulating film substrate via CVD at the low fabrication-temperature of 650°C, enabling graphene-based transistors to be directly formed on the entire surface of substrates. Although the test substrate employed was a 75-mm silicon substrate (wafer) with oxide film, the new technique is applicable to larger 300-mm wafers as well.

1. Low-temperature synthesis of multi-layer graphene with controlled thickness via CVD, over the entire surface of the substrate


2. Process for forming transfer-free graphene transistors






Fujitsu Laboratories also developed a process for forming transistors that use graphene as the channel material, as outlined in the picture. This process is independent of wafer size, so it can be applied to large-scale substrates.

1. First, an iron catalyst is formed into the desired channel shape, using a conventional photolithographic process.

2. Graphene is then formed on the iron layer via CVD.

3. Source and drain electrodes of titanium-gold film are formed at both ends of the graphene, thereby "fixing" the graphene.

4. Next, just the iron catalyst is removed using acid, leaving the graphene suspended between the source and drain electrodes, with the graphene "bridged" between the electrodes.

5. Using atomic-layer deposition (ALD), a method for forming thin films, a layer of hafnium dioxide (HfO2) is grown on top of the graphene to stabilize the graphene.

6. Finally, a gate electrode is formed above the graphene, on top of the HfO2 layer, resulting in the formation of a graphene transistor.



November 27, 2009

Supramolecular Polymers and Compartmentalized Nanoparticles

A two-million-euro grant has been awarded to Dr. Bert Meijer of the Eindhoven University of Technology. The grant allows Meijer to explore the area of non-covalent synthesis of functional supramolecular systems. These supramolecular systems can be seen as small molecular factories built from molecules connected via weak interactions. They possess unique properties and have a range of possible applications.

The research project will start early in 2010 and has two main directions. By studying the mechanisms of the formation of supramolecular polymers, the scope and limitations of this new class of polymer systems will be investigated. This knowledge will be used to design, synthesize and self-assemble materials that dynamically adapt their properties in response to external stimuli. These materials will also be applied as biomaterials, in close cooperation with Dr. Patricia Dankers of the TU/e, to make parts of a prototype bioartificial kidney. Hopefully this will lead to an improvement of current dialysis techniques and later maybe also to portable dialysis equipment.

Another challenge is the realization of compartmentalized nanoparticles: molecular factories with multiple functionalities united in one very long and folded molecule. In order to achieve the best possible polymer systems, novel techniques to synthesize well-defined polymers with controlled sequence are introduced.




Meijer's group introduced a new class of materials, called supramolecular polymers. He showed for the first time that polymers whose repeating units are connected by weaker (non-covalent) interactions, rather than bound into long chains by strong (covalent) bonds, can have very good and unique material properties. The research foreseen will allow him to make the next step in the development of these materials and to introduce more complex functionalities to these supramolecular systems.


November 26, 2009

Some Parts at 20 Kelvin and Others at Room Temperature and the Whole Wire Still Superconducts

Yonatan Dubi and Massimiliano Di Ventra of the University of California, San Diego, have calculated that provided some points along the wire's length stay below the threshold temperature, the material will superconduct.

For this to work, the wire's surface must be extremely clean, allowing electrons to move freely and spread along the wire to create a uniform temperature. A material with a critical temperature of -193 °C could superconduct at room temperature, provided some sections were kept to -253 °C, they found. In principle, the colder these refrigeration points are, the fewer you need, Dubi says.


Physical Review B has the paper



Supercomputer SC09 Conference Highlights

HPCWire highlights of SC09

* 15 of 60 press releases were GPU related
* China's GPU-CPU "Tianhe" supercomputer making it into the number 5 position on the TOP500
* Japan's 3 petaflop TSUBAME 2.0 Fermi-equipped supercomputer scheduled for deployment in October 2010
* announcement of a new GPU computing collaboration network
* HPC-capable Fermi GPUs from NVIDIA coming online in 2010
* 10 Gigabit Ethernet and InfiniBand networking progressing
* continued InfiniBand dominance in high performance computing
* more applications beyond oil and gas modelling and stock trading



* Virtualized HPC: The popularity of commodity hardware in HPC has encouraged a growing cadre of vendors to employ virtualization schemes to create big powerful machines from industry-standard building blocks. Unlike traditional virtualization, which splits a server for multiple OS environments, the model in HPC virtualization is to aggregate CPU, memory, and I/O across a cluster to create a unified resource under a single OS. The goal is to provide an alternative to the expense of the SMP mainframe and the complexity of a compute cluster, while at the same time offering the ability to reconfigure hardware dynamically.
* ScaleMP and 3Leaf aggregate CPUs and memory for up to 16 cluster nodes, making them appear as an SMP machine to the application. RNA Networks and NextIO focus on virtualized memory and I/O, respectively.




Possible Multiple Sclerosis Breakthrough

The Globe and Mail reports: Using ultrasound to examine the vessels leading in and out of the brain, Dr. Zamboni made a startling find: In more than 90 per cent of people with multiple sclerosis, including his spouse, the veins draining blood from the brain were malformed or blocked. In people without MS, they were not.

He hypothesized that iron was damaging the blood vessels and allowing the heavy metal, along with other unwelcome cells, to cross the crucial blood-brain barrier. (The barrier keeps blood and cerebrospinal fluid separate. In MS, immune cells cross the blood-brain barrier, where they destroy myelin, a crucial sheathing on nerves.)

More striking still was that, when Dr. Zamboni performed a simple operation to unclog veins and get blood flowing normally again, many of the symptoms of MS disappeared. The procedure is similar to angioplasty, in which a catheter is threaded into the groin and up into the arteries, where a balloon is inflated to clear the blockages. His wife, who had the surgery three years ago, has not had an attack since.

The researcher's theory is simple: that the underlying cause of MS is a condition he has dubbed “chronic cerebrospinal venous insufficiency.” If you tackle CCSVI by repairing the drainage problems from the brain, you can successfully treat, or better still prevent, the disease.

More radical still, the experimental surgery he performed on his wife offers hope that MS, which afflicts 2.5 million people worldwide, can be cured and even largely prevented.





“If this is proven correct, it will be a very, very big discovery because we'll completely change the way we think about MS, and how we'll treat it,” said Bianca Weinstock-Guttman, an associate professor of neurology at the State University of New York at Buffalo.

The initial studies done in Italy were small but the outcomes were dramatic. In a group of 65 patients with relapsing-remitting MS (the most common form) who underwent surgery, the number of active lesions in the brain fell sharply, to 12 per cent from 50 per cent; in the two years after surgery, 73 per cent of patients had no symptoms.




Superconducting Heat Shield



Flight Global reports that European researchers developing a magnetic heat shield that could augment or replace traditional ablative materials hope to make a test flight in the next decade.

Under development by EADS Astrium, with support from German aerospace centre DLR and the European Space Agency, the magnetic field-protected vehicle will be launched from a submarine on a suborbital trajectory to land in the Russian Kamchatka region.

As a capsule re-enters the atmosphere, the air around it heats up (mostly from compression of the air ahead of the vehicle, often loosely called friction), and usually a high-temperature-resistant material is needed to absorb that heat. A magnetic field is able to deflect the hot atmospheric air away from the vehicle's surface, reducing or eliminating the need for a heat-absorbing material.

A superconducting coil would generate the magnetic field, which would extend out beyond the leading edge of the vehicle. Assessment of the coil is ongoing.

Other issues to be tackled include the ability of the coil to withstand the launch and flight environment; trajectory modification to compensate for the increased drag caused by the atmospheric gas deflection; and telemetry data recovery, when radio signal blocking ionised gases form around the vehicle.


EADS Astrium website



European Space (EADS) website

Spintronics in Silicon at Room Temperature

Electrical creation of spin polarization in silicon at room temperature. Eventual commercialization would mean faster electronics that are far more energy efficient.

The control and manipulation of the electron spin in semiconductors is central to spintronics which aims to represent digital information using spin orientation rather than electron charge. Such spin-based technologies may have a profound impact on nanoelectronics, data storage, and logic and computer architectures. Recently it has become possible to induce and detect spin polarization in otherwise non-magnetic semiconductors (gallium arsenide and silicon) using all-electrical structures but so far only at temperatures below 150 K and in n-type materials, which limits further development. Here we demonstrate room-temperature electrical injection of spin polarization into n-type and p-type silicon from a ferromagnetic tunnel contact, spin manipulation using the Hanle effect and the electrical detection of the induced spin accumulation. A spin splitting as large as 2.9 meV is created in n-type silicon, corresponding to an electron spin polarization of 4.6%. The extracted spin lifetime is greater than 140 ps for conduction electrons in heavily doped n-type silicon at 300 K and greater than 270 ps for holes in heavily doped p-type silicon at the same temperature. The spin diffusion length is greater than 230 nm for electrons and 310 nm for holes in the corresponding materials. These results open the way to the implementation of spin functionality in complementary silicon devices and electronic circuits operating at ambient temperature, and to the exploration of their prospects and the fundamental rules that govern their behaviour.
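As a consistency check on the quoted numbers, spin diffusion length and spin lifetime are commonly related by λ = sqrt(D·τ), where D is the carrier diffusion constant. Back-calculating D from the reported values is my own estimate and not taken from the paper:

```python
# Back out an effective diffusion constant from the reported spin diffusion lengths
# and lifetimes, assuming the standard relation  length = sqrt(D * lifetime).
cases = {
    "electrons, n-type Si": (230e-9, 140e-12),  # (spin diffusion length in m, spin lifetime in s)
    "holes, p-type Si":     (310e-9, 270e-12),
}

for label, (length_m, lifetime_s) in cases.items():
    D = length_m**2 / lifetime_s            # m^2/s
    print(f"{label}: D ~ {D*1e4:.1f} cm^2/s")
# A few cm^2/s is a reasonable order of magnitude for carriers in heavily doped silicon,
# so the quoted lengths and lifetimes are mutually consistent.
```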


EETimes and Nanowerk have coverage of this news




Digital electronics is almost universally based on the detection and control of the movement of electrons through the electrical charge associated with them. However, electrons also have the property of spin, and transistors that function by controlling an electron's spin orientation, instead of its charge, would use less energy, generate less heat and operate at higher speeds. That idea has given rise to a field of research called spintronics. However, until now this has required low temperatures for operation.

Indeed, as the University of Twente authors comment, the ability to detect spin polarization in otherwise non-magnetic semiconductors — including gallium arsenide and silicon — using all-electrical structures has only been achieved at temperatures below 150 K and in n-type materials, which has limited further development.

The authors state that they have demonstrated room-temperature electrical injection of spin polarization into n-type and p-type silicon from a ferromagnetic tunnel contact, spin manipulation using the Hanle effect and the electrical detection of the induced spin accumulation.

The spin lifetime is greater than 140 ps for conduction electrons in heavily doped n-type silicon at 300 K and greater than 270 ps for holes in heavily doped p-type silicon at the same temperature.

Nonetheless, the results open up the possibility of embedding spintronic operation in complementary silicon operating at ambient temperature.



2 page pdf with supplemental information

NASA's Wide-field Infrared Survey Explorer


WISE is a NASA-funded Explorer mission that will provide a vast storehouse of knowledge about the solar system, the Milky Way, and the Universe. Among the objects WISE will study are asteroids, the coolest and dimmest stars, and the most luminous galaxies.

WISE is an unmanned satellite carrying an infrared-sensitive telescope that will image the entire sky.

* The spacecraft is 2.85 m (9.35 feet) tall, 2.0 m (6.56 feet) wide, 1.73 m (5.68 feet) deep
* There are 4 infrared sensitive detector arrays, each with 1024 X 1024 pixels (1 megapixel array). The near infrared bands (3.4 and 4.6 microns) use Mercury-Cadmium-Telluride (MCT). The mid-infrared bands (12 and 22 microns) use Arsenic-doped Silicon (Si:As).

WISE is expected to find about 100,000 new asteroids in the asteroid belt between Mars and Jupiter and hundreds of asteroids that pass close to Earth. It will be especially good at seeing dark objects that are nearly impossible to find using existing ground-based telescopes, as the objects radiate heat that WISE will see.

The telescope will also be able to spot Jupiter-sized objects up to 60,000 astronomical units away (1 AU equals the Earth-sun distance). The distribution of comet paths has suggested that a very large planet could be lurking at 25,000 AU, says WISE project scientist Peter Eisenhardt of NASA's Jet Propulsion Laboratory in Pasadena, California.
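For scale, 60,000 AU is most of a light-year. A quick unit conversion (my own, not a figure from the article):

```python
AU_PER_LIGHT_YEAR = 63_241  # astronomical units in one light-year

for distance_au in (25_000, 60_000):
    print(f"{distance_au:,} AU = {distance_au / AU_PER_LIGHT_YEAR:.2f} light-years")
# 25,000 AU ~ 0.40 ly (the suggested lurking planet); 60,000 AU ~ 0.95 ly (WISE's reach
# for Jupiter-sized objects) -- a substantial fraction of the distance to the nearest star.
```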




NASA's Wide-field Infrared Survey Explorer, or Wise, is chilled out, sporting a sunshade and getting ready to roll. [this link to the NASA news conference] NASA's newest spacecraft is scheduled to roll to the pad on Friday, Nov. 20, its last stop before launching into space to survey the entire sky in infrared light.

Wise is scheduled to launch no earlier than 9:09 a.m. EST on Dec. 9 from Vandenberg Air Force Base in California. It will circle Earth over the poles, scanning the entire sky one-and-a-half times in nine months. The mission will uncover hidden cosmic objects, including the coolest stars, dark asteroids and the most luminous galaxies.

"The eyes of Wise are a vast improvement over those of past infrared surveys," said Edward "Ned" Wright, the principal investigator for the mission at UCLA. "We will find millions of objects that have never been seen before."

The mission will map the entire sky at four infrared wavelengths with sensitivity hundreds to hundreds of thousands of times greater than its predecessors, cataloging hundreds of millions of objects. The data will serve as navigation charts for other missions, pointing them to the most interesting targets. NASA's Hubble and Spitzer Space Telescopes, the European Space Agency's Herschel Space Observatory, and NASA's upcoming SOFIA and James Webb Space Telescope will follow up on Wise finds.

November 25, 2009

Dark Matter Rocket

Here is speculation on top of speculation, as we do not yet know what dark matter is or even have full agreement that it exists. The dark matter starship assumes that there is dark matter around and that it can be scooped up for fuel, much as a Bussard ramjet would scoop up hydrogen. The idea relies on one of the leading dark matter candidates (neutralinos) being its own antiparticle, so that it can annihilate with itself and release energy.

His plan is to drive the rocket using the energy released when dark matter particles annihilate each other. Here's where Liu's idea depends on more speculative physics. No one knows what dark matter is actually made of, though there are numerous theories of the subatomic world that contain potential dark matter candidates. One of the frontrunners posits that dark matter is made of neutralinos, particles which have no electric charge. Neutralinos are curious in that they are their own antiparticles: two neutralinos colliding under the right circumstances will annihilate each other.

If dark matter particles do annihilate in this way, they will convert all their mass into energy. A kilogram of the stuff will give out about 10^17 joules, more than 10 billion times as much energy as a kilogram of dynamite, and plenty to propel the rocket forwards.
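The 10^17 joule figure is just E = mc² for one kilogram. Comparing it to dynamite (the ~4.2 MJ/kg energy density below is a typical textbook value, my assumption rather than the article's):

```python
# Energy from complete annihilation of 1 kg of matter, via E = m c^2.
m = 1.0                      # kg
c = 2.998e8                  # speed of light, m/s
E_annihilation = m * c**2    # joules
print(f"Annihilating 1 kg releases ~{E_annihilation:.1e} J")   # ~9e16 J, i.e. ~1e17 J

E_dynamite_per_kg = 4.2e6    # assumed ~4.2 MJ/kg for dynamite/TNT-class explosives
print(f"Ratio to 1 kg of dynamite: ~{E_annihilation / E_dynamite_per_kg:.1e}x")
# ~2e10, i.e. more than 10 billion times as much energy, matching the claim above.
```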

Even less certain is the detail of how a dark matter rocket might work. Liu imagines the engine as a "box" with a door that is open in the direction of the rocket's motion (see diagram). As dark matter enters, the door is closed and the box is shrunk to compress the dark matter and boost its annihilation rate. Once the annihilation occurs, another door opens and the products rocket out. The whole cycle is repeated, over and over again.


The New Scientist looks at the dark matter rocket and the black hole starship. The black hole starship has already been reviewed on this site.

An interesting new point was raised about black hole starships: there appears to be a sweet spot where a black hole of just the right size makes for fast spaceship propulsion.

Is the Universe Optimized for Black Hole Travel?

Crane then wondered what would happen if intelligent civilisations could make black holes. This would mean that life in these universes played a key role in the proliferation of baby universes. Smolin felt the idea was too outlandish and left it out of his book. But Crane has been thinking about it on and off for the last decade.

He believes we are seeing Darwinian selection operating on the largest possible scale: only universes that contain life can make black holes and then go on to give birth to other universes, while the lifeless universes are an evolutionary dead end.

His latest calculations made him realise how uncanny it was that there could be a black hole at just the right size for powering a starship. "Why is there such a sweet spot?" he asks. The only reason for an intelligent civilisation to make a black hole, he sees, is so it can travel the universe.

"If this hypothesis is right," he says, "we live in a universe that is optimised for building starships!"


Dark Matter Rocket
Dark Matter as a Possible New Energy Source for Future Rocket Technology

Current rocket technology can not send the spaceship very far, because the amount of the chemical fuel it can take is limited. We try to use dark matter (DM) as fuel to solve this problem. In this work, we give an example of DM engine using dark matter annihilation products as propulsion. The acceleration is proportional to the velocity, which makes the velocity increase exponentially with time in non-relativistic region. The important points for the acceleration are how dense is the DM density and how large is the saturation region. The parameters of the spaceship may also have great influence on the results. We show that the (sub)halos can accelerate the spaceship to velocity 10^−5 c to 10^−3 c. Moreover, in case there is a central black hole in the halo, like the galactic center, the radius of the dense spike can be large enough to accelerate the spaceship close to the speed of light.


The dark matter spaceship could reach relativistic speed in about 2 days, and the length needed for acceleration is about 10^−4 parsec.
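The "acceleration proportional to velocity" claim follows from the scooping geometry: the faster the ship moves, the more dark matter it sweeps up per second, and (assuming the annihilation products leave at roughly the speed of light) the more thrust it gets. A minimal non-relativistic sketch; the density, collection area and ship mass below are illustrative round numbers chosen for readability, not the paper's values:

```python
import math

# With a scoop, swept-up mass per second ~ rho*A*v; if annihilation products leave at ~c,
# thrust ~ rho*A*v*c, so dv/dt = k*v with k = rho*A*c/m and v(t) = v0 * exp(k*t).
def growth_rate(rho, area, mass, c=3e8):
    """Exponential growth rate k (1/s) for the idealized dark-matter scoop engine."""
    return rho * area * c / mass

# Illustrative numbers only (NOT the paper's): a dense dark-matter region, a 1 km^2 scoop
# and a 100-tonne ship, chosen so the doubling time comes out to a few hours.
k = growth_rate(rho=2.2e-14, area=1e6, mass=1e5)
v0 = 1e-5                     # starting velocity, in units of c
print(f"k = {k:.1e} /s, doubling time = {math.log(2)/k/3600:.1f} h")

t = 2 * 86400                 # two days, in seconds
v = v0 * math.exp(k * t)      # non-relativistic estimate, in units of c
print(f"after 2 days: v ~ {min(v, 1.0):.2f} c (exponential growth; relativity ignored)")
```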



We have used two assumptions on DM in this work. First, we have assumed static DM for simplicity. But the DM particle may have velocity as large as O(10^−3 c). Once we know the velocity distribution of DM, it can be solved by programming the direction of the spaceship. Second, we have assumed the DM particle and the annihilation products can not pass through the wall of the engine. For the annihilation products, they may be SM fermions which have electric charges. Thus we can make them go into certain direction by the electromagnetic force. The most serious problem comes from DM which are weakly interacting with matter. Current direct searches of DM have given stringent bound on cross-section of DM and matter. It may be difficult using matter to build the containers for the DM, because the cross-section is very small. However, the dark sector may be as complex as our baryon world, for example the mirror world. Thus the material from dark sector may build the container, since the interactions between particles in dark sector can be large.

Sometimes, when looking at the N-body simulation pictures of DM, I think it may describe the future human transportation in some sense. In the picture, there are bright big points which stand for large dense halos, and the dim small points for small sparse halos. Interestingly, these halos have some common features with the cities on the Earth. The dense halos can accelerate the spaceship to higher speed which make it the important nodes for the transportation. However, the sparse halos can not accelerate the spaceship to very high speed, so the spaceship there would better go to the nearby dense halo to get higher speed if its destination is quite far from the sparse halos. Similarly, if we want to take international flight, we should go to the nearby big cities. The small cities usually only have flights to the nearby big cities, but no international flights. Thus we can understand the dense halos may be very important nodes in the future transportation, like the big cities on the Earth.


FURTHER READING
Nextbigfuture highlights from weeks 39-45 cover 13 space-related highlights, including several advanced propulsion articles.

* Mach Effect Propulsion
* antimatter rockets
* Gravitational field propulsion
* Winterberg's advanced deuterium fusion rocket design
* Vasimr, EMDrive and more

Lawrenceville Plasma Physics Criticisms and Responses

There are two main criticisms of focus fusion.

1. Hydrogen-boron fuel allows too much x-ray cooling

* The hot plasma of this fuel will emit x-rays too quickly,
* the energy lost through the x-rays will always be more than the energy gained by fusion reactions,
* the fusion reactions would not heat the plasma sufficiently,
* thus the very high temperatures required for burning hydrogen-boron completely would not be reached.
* The rate of radiation depends on the square of the electrical charge on the nucleus involved,
* thus boron, with 5 charges, causes 25 times more radiation than, for example, deuterium, with one charge.
* This x-ray emission process is termed bremsstrahlung.

Response in regards to bremsstrahlung

Fortunately, there is a way to reduce the bremsstrahlung cooling with the dense plasma focus by using the magnetic field effect. This effect, which critics of hydrogen-boron fusion do not take into account, slows down the transfer of energy from the ions to the electrons by as much as a factor of twenty in the presence of extremely high magnetic fields, without affecting the transfer of energy from the electrons to the ions. This means that the ions could be 20 times hotter than the electrons. In turn, this would reduce x-ray emission by a factor of about 5.

The magnetic field effect was discovered theoretically by R. McNally in 1975 and it has been studied extensively in theoretical and observational studies of neutron stars, which have enormous magnetic fields. For example, in 1987 G. S. Miller, E. E. Salpeter, and I. Wasserman published in the prestigious Astrophysical Journal an analysis showing that energy transfer to electrons could indeed drop by as much as 20-fold in strong magnetic fields.


With magnetic fields of around 15 gigagauss (GG, billion gauss), ion temperatures should be 10-20 times higher than electron temperatures.

Fields of 0.4 GG have been observed in plasma focus devices. A six-fold increase could be obtained by using smaller electrodes and higher currents.

Magnification of the magnetic field, as the energy compresses itself into the plasmoid, increases with the mass and charge of the nuclei in the fuel. This would provide another factor of 6-7, bringing the field up to 15 GG.
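Those factors multiply out to roughly the 15 GG figure; this is just simple arithmetic on the numbers quoted above:

```python
# Multiply the quoted factors together to see how 0.4 GG gets to ~15 GG.
observed_field_gg = 0.4        # gigagauss already observed in plasma focus devices
electrode_factor = 6           # smaller electrodes and higher currents
plasmoid_factor = 6.5          # compression into the plasmoid (quoted as "6-7"; midpoint used)

projected_field_gg = observed_field_gg * electrode_factor * plasmoid_factor
print(f"Projected field: ~{projected_field_gg:.0f} GG")   # ~16 GG, in line with the 15 GG target
```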

Ongoing experiments might confirm these projections.

2. Plasmoids can’t exist

A second objection is that dense, self-confined plasmoids can’t exist, and therefore the very high magnetic fields needed for the magnetic field effect also can’t exist.



Response on Plasmoids

Plasmoids have been observed in the plasma focus, in other fusion devices and in nature, for decades, by many groups of researchers. Winston Bostick and Vittorio Nardi reported in the 1970s observations of plasmoids with magnetic fields of up to 200 megagauss, lasting for tens of nanoseconds. Many scientists have observed plasmoids in the magnetosphere of the earth. In recent years, J. Yee and P. M. Bellan at the California Institute of Technology studied in detail how plasmoids form in the laboratory and their stability. Plasmoids are contained by their own magnetic fields and currents through the pinch effect, in which currents moving in the same direction attract each other.

Over the past 50 years, other groups of researchers have shown mathematically that plasmoids can be stable, at least for times very long compared with the time it takes particles to cross them. For this reason, there is no theoretical problem with the existence of plasmoids with fields of billions of gauss. Whether such fields can in fact be produced practically is what ongoing experiments will test.


Bezos Rocket Company - Blue Origin Has 2011 and 2012 Targets



Picture from 2006 tests of a DC-X like rocket

Jeff Bezos, CEO of Amazon.com, has a rocket company called Blue Origin. The website for Blue Origin indicates plans for unmanned flights in 2011 and manned flights in 2012.

Flight testing of prototype New Shepard vehicles began in 2006. Blue Origin expects the first opportunities for experiments requiring an accompanying researcher astronaut to be available in 2012. Flight opportunities in 2011 may be available for autonomous or remotely-controlled experiments on an uncrewed flight test.




More pictures of Blue Origin here



Intelligence and Technology Achievement and Productivity

There are some rare individuals with IQs in the 200s, and their brains are not larger than those of regular people.

Highest IQs Ever

The dominant, rigorously researched, and documented answer (who had the highest IQ) is German polymath Johann von Goethe (IQ = 210), second to Shakespeare in literature, with a vocabulary of over 90,000 words, inspiration to Darwin, with his theories on maxilla bone evolution, mental compatriot to Newton, with his theory of colors, and founder of the science of human chemistry, with his 1809 treatise Elective Affinities, wherein a human chemical reaction view of life is presented, some two-hundred years ahead of its time.

Other individuals, to note, can be found to have had stated IQs above 210 (either verbalized, e.g. Leonardo da Vinci (IQ = 225) or William Sidis (IQ = 250-300), or based on childhood ratios, e.g. Michael Kearney (IQ = 325) or Marilyn vos Savant (IQ = 225)), but these values are generally found to be over-estimates, based on oversimplification, when compared to that person's actual adult IQ or when each person is fitted into a robust comparative historical study (collective of 300 geniuses or more).

To give an example, at age four, American Michael Kearney scored 168-plus on an IQ test for six-year-olds and via mathematical juggling, using his age and test score, he was said to have what is called a "ratio-IQ" of 325. Twenty years later, however, although he turned out to be a relatively smart individual, completing a BS in anthropology (age 10), MS in biochemistry (age 14), and MS in computer science (age 17), he had difficulty getting past the half-way point on the pop-intelligence game show Who Wants to be a Millionaire?, leaving with only $25,000, implying that ratio or estimated IQs are not as accurate as historically-determined IQs.
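For reference, a childhood "ratio IQ" is just mental age divided by chronological age, times 100. A minimal sketch; the mental-age figure below is purely illustrative and not a claim about Kearney's actual test results:

```python
def ratio_iq(mental_age, chronological_age):
    """Classic ratio IQ: 100 * mental age / chronological age."""
    return 100 * mental_age / chronological_age

# Illustrative only: a 4-year-old performing at the level of a 13-year-old would score
# a ratio IQ of 325, showing how extreme ratio IQs arise from very young test-takers
# rather than from adult-level ability.
print(ratio_iq(mental_age=13, chronological_age=4))   # 325.0
```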


More intelligence is clearly useful. Higher intelligence has been correlated with higher incomes and better job performance. However, national and regional systems and how a country or institution is run can have a big impact on scientific productivity and technological achievement. Russia has a lot of geniuses, as do China and India, but for many decades they were held back by inferior equipment and facilities.

Someone could be a "genius" race car driver (Schumacher), but if you are driving a Pinto and your opponent is driving a Ferrari, the guy in the Pinto will lose.

Someone could be a "genius" physicist, but without equipment and resources it will be far tougher to make the breakthroughs.

There can also be societal and other challenges to be overcome. There is the current resistance to investigation of low energy nuclear reactions (cold fusion).

Genius, Intelligence and Brain Size and Structure

There have been many studies of brain structure and size and their correlations with intelligence.

A 2004 study at the University of California, Irvine found that the volume of gray matter in parts of the cerebral cortex had a greater impact on intelligence than the brain's total volume. The findings suggest that the physical attributes of many parts of the brain -- rather than a centralized "intelligence center" -- determine how smart a person is.

A 1999 analysis of Albert Einstein's brain also seems to support this theory. Einstein's brain was slightly smaller than the average brain. However, parts of his parietal lobe were wider than most people's brains. The larger areas in Einstein's brain are related to mathematics and spatial reasoning. Einstein's parietal lobe was also nearly missing a fissure found in most people's brains. Analysts theorized that the absence of the fissure meant that different regions of his brain could communicate better.

A 2006 paper in the journal "Nature" theorized that the way the brain develops is more important than the size of the brain itself. A person's cerebral cortex gets thicker during childhood and thinner during adolescence. According to the study, the brains of children with higher IQs thickened faster than those of other children.


There was a CNN special by Sanjay Gupta on Genius: quest for extreme brain power

Some say that brain size does not matter at all. However, looking at comparisons across the animal kingdom, brain size does come into play for intelligence.

The analogy is a car engine: it is far tougher to achieve high horsepower from something the size of a baseball than from a fridge-sized engine. With fine-tuning, it is possible to get ten times more horsepower out of a Formula One engine than out of an engine of the same size but more modest design.

It is known that nutritional deficiency and environmental pollution can damage brains and reduce intelligence. These factors (examples of people with apparently superior brain structure, and the many common things that reduce intelligence) suggest that there are many things that can be improved to raise intelligence without increasing the size of the brain.

There is recent work where stem cells are introduced into the brain to correct certain brain diseases. There are situations where more brain cells have advantages.

How much brain enhancement is possible, and how soon, depends on how far from optimal we really are now. Is everyone losing 20-150 IQ points because of rampant, fixable problems that analysis could reveal to be endemic? Air pollution is probably costing 20 IQ points and imperfect nutrition another 20-30 IQ points. How easy is that damage to fix after living that way for 20-50 years?

Brain and productivity enhancement would have the most impact on the overall betterment of humanity and civilization.

For example, eat right and exercise and everyone could be 2-4 times stronger and healthier; why wouldn't the same thing apply to brain function? And if we can make the corrections with pills and other adjustments, then maybe it is easy to get people to what is now a 200-400 IQ. The other caveat is that the IQ score itself is an imperfect measure.

Super virtual-reality training and wearable computer cognitive aids could make everyone test out great.

The measure that I think is more important is expertise and productivity rather than IQ.



Programs for Increasing Chances of Breakthroughs
There is some support via DARPA for militarily relevant scientific breakthroughs, but there is far less of a formal system to encourage other types of scientific breakthroughs, even if such breakthroughs might not take that much effort or involvement of genius.

The Department of Energy is now trying to fund some breakthroughs in Energy Technology, but there are many pre-existing biases as to what breakthroughs are possible or likely.

For space technology breakthroughs, if you have one trillion dollars and spend it all on defense satellites, communication satellites, TV satellites and chemical rockets and buildings on the ground then this will not result in breakthroughs into faster than light travel or wormholes. This is especially the case if studying faster than light travel or wormholes has formal or informal funding bans.

More funding for efforts to increase intelligence would also likely improve results and chances for success. It is not just intelligence but also research into breakthroughs in education, training and systems and tools for enhancing scientific productivity.

Better Education on Specific Topics Makes a Larger Pool of Researchers

There are now videos of better teachers available online for topics like general relativity and quantum mechanics. This helps lower the barriers to understanding what is already known, what the unknowns are, and where the frontiers of knowledge lie.

There are also the Richard Feynman videos on physics.

Teraflop or Better Computers Cheap

The availability of multi-teraflop servers for a few thousand dollars removes certain barriers for computational research.

Making More Experts
All expertise theorists agree that it takes enormous effort to build these structures in the mind. Simon coined a psychological law of his own, the 10-year rule, which states that it takes approximately a decade of heavy labor to master any field. Even child prodigies, such as Gauss in mathematics, Mozart in music and Bobby Fischer in chess, must have made an equivalent effort, perhaps by starting earlier and working harder than others.

What matters is not experience per se but "effortful study," which entails continually tackling challenges that lie just beyond one's competence. That is why it is possible for enthusiasts to spend tens of thousands of hours playing chess or golf or a musical instrument without ever advancing beyond the amateur level and why a properly trained student can overtake them in a relatively short time.

Automated and virtual reality systems can be created to enable easy access to effortful study (widespread effective mentoring and coaching).

More experts and enhanced experts should enable interesting gains in productivity across society.

Electromagnetic Waveguides on Diamond-Based Chips Used to Manipulate Quantum States at Gigahertz Rates

Physicists at UC Santa Barbara have made an important advance in electrically controlling quantum states of electrons, a step that could help in the development of quantum computing.

In separate research, another group controlled spin electronically.

The researchers have demonstrated the ability to electrically manipulate, at gigahertz rates, the quantum states of electrons trapped on individual defects in diamond crystals. This could aid in the development of quantum computers that could use electron spins to perform computations at unprecedented speed.

Using electromagnetic waveguides on diamond-based chips, the researchers were able to generate magnetic fields large enough to change the quantum state of an atomic-scale defect in less than one billionth of a second. The microwave techniques used in the experiment are analogous to those that underlie magnetic resonance imaging (MRI) technology.




The key achievement in the current work is that it gives a new perspective on how such resonant manipulation can be performed. "We set out to see if there is a practical limit to how fast we can manipulate these quantum states in diamond," said lead author Greg Fuchs, a postdoctoral researcher at UCSB. "Eventually, we reached the point where the standard assumptions of magnetic resonance no longer hold, but to our surprise we found that we actually gained an increase in operation speed by breaking the conventional assumptions."

While these results are unlikely to change MRI technology, they do offer hope for the nascent field of quantum computing.


Science Magazine: Gigahertz Dynamics of a Strongly Driven Single Quantum Spin

Two-level systems are at the core of numerous real-world technologies such as magnetic resonance imaging and atomic clocks. Coherent control of the state is achieved with an oscillating field that drives dynamics at a rate determined by its amplitude. As the strength of the field is increased, a different regime emerges where linear scaling of the manipulation rate breaks down and complex dynamics are expected. Employing a single spin as a canonical two-level system, we have measured the room-temperature "strong-driving" dynamics of a single nitrogen vacancy center in diamond. Using an adiabatic passage to calibrate the spin rotation, we observe dynamics on subnanosecond time scales. Contrary to conventional thinking, this breakdown of the rotating wave approximation provides opportunities for time-optimal quantum control of a single spin.


13 page pdf with supplemental information


November 24, 2009

Air Pollution Maps of the United States


Map of coal power by state. Note: about a third of the air pollution can travel thousands of miles from the plant, though the impact on air quality and health is greater for those near the plants. Air pollution in the USA has improved since the 1950s and 1960s, but there is still a negative effect: roughly 24,000 deaths per year are attributed to coal plant pollution and about 60,000 to air pollution overall, out of roughly 2.5 million US deaths from all causes. Cigarette smoking and obesity have larger negative effects, which is seen in West Virginia's health statistics. The states with bad air pollution end up at or near the bottom of state health rankings.
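Putting those figures in proportion, using only the numbers quoted above:

```python
# Share of annual US deaths attributed to air pollution, per the figures above.
total_deaths = 2_500_000
coal_deaths = 24_000
air_pollution_deaths = 60_000

print(f"coal-related: {coal_deaths / total_deaths:.1%} of all deaths")               # ~1.0%
print(f"all air pollution: {air_pollution_deaths / total_deaths:.1%} of all deaths")  # ~2.4%
```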

State of the Air (American Lung Association) has an interactive map of the air pollution in different states.

SourceWatch has a list of the states with the most coal power plants


Rank  State           # of Plants  Total Capacity  2005 Power Production
 1    Texas               20        21,238 MW       148,759 GWh
 2    Ohio                35        23,823 MW       137,457 GWh
 3    Indiana             31        21,551 MW       123,985 GWh
 4    Pennsylvania        40        20,475 MW       122,093 GWh
 5    Illinois            32        17,565 MW        92,772 GWh
 6    Kentucky            21        16,510 MW        92,613 GWh
 7    West Virginia       19        15,372 MW        91,601 GWh
 8    Georgia             16        14,594 MW        87,624 GWh
 9    North Carolina      25        13,279 MW        78,854 GWh
10    Missouri            24        11,810 MW        77,714 GWh
11    Michigan            33        12,891 MW        71,871 GWh
12    Alabama             11        12,684 MW        70,144 GWh
13    Florida             15        11,382 MW        66,378 GWh
14    Tennessee           13        10,290 MW        59,264 GWh
15    Wyoming             10         6,168 MW        43,421 GWh
16    Wisconsin           28         7,116 MW        41,675 GWh


The dirtiest plants are in Indiana, Ohio, Pennsylvania, Virginia, and Alabama.

Minnesota is farther away from the big coal-burning states, does not have that many cars, and has very good air quality.

In 1997, Minnesota had the fewest deaths due to heart disease of any state.

America's Health Rankings allows comparisons of the health of different states. Thousands of in-depth studies separate out the impacts of different levels of smoking from the air pollution effects.

Air pollution ranking by state
Pennsylvania 47th
Ohio 44th
Indiana 42nd
Minnesota 13th

Ranking States by Cancer Deaths
Kentucky 50th
Pennsylvania 37th
Ohio 42nd
Indiana 40th
Minnesota 9th
Pennsylvania air quality and Health Impacts

Pennsylvania air pollution study from 2006

• Soot pollution causes about 5,000 premature deaths in Pennsylvania annually.

• At this rate, air pollution ranks as the third highest risk factor for premature death, behind smoking and poor diet/ physical inactivity.

• Smog pollution leads to an estimated 7,000 hospital admissions for respiratory disease and soot pollution contributes to roughly 4,000 admissions for cardiovascular disease annually.

• In addition, air pollution causes approximately 4,000 new cases of chronic bronchitis in adults every year.

• Among asthmatics, soot pollution causes an estimated 500,000 asthma attacks annually, with an additional 300,000 asthma attacks caused by smog.

There was an older "death-dealing smog over Donora, Pennsylvania"

An air pollution disaster over the two southwestern Pennsylvania towns of Donora and Webster in October 1948 took dozens of lives, left thousands literally gasping for breath, and motivated the United States Public Health Service (PHS) to enter the arena of air pollution policy.


PBS history of "deadly smog".

Pennsylvania particulate air pollution study

Summary of the science on particulate air pollution links to disease and deaths

LONG-TERM EXPOSURES
American Cancer Society Cohort Study: This study of half a million people in 100 American cities over 16 years has been audited, replicated, re-analyzed, extended and ultimately reconfirmed. The latest results show that long-term exposure to fine particulate matter is associated with premature death from cardio-respiratory causes and lung cancer. Increased risk of premature death is evident at concentrations below current standards.

Harvard Six Cities Study: This long-term cohort study has also been subject to an independent audit, review, and re-analysis and the original findings have been confirmed: long term exposure to fine particle pollution shortens lives and contributes to an increased risk of early death from heart and lung disease, even at air pollution levels far below the current standards. Extended follow up of the Harvard six city study

Children’s Health Study: A study of school-age children in 12 southern California communities reported increased cough, bronchitis, and decreased lung function in children living in more polluted areas. The long-term mean fine particle concentration was at the level of the current standard.

2009 follow up Fine-Particulate Air Pollution and Life Expectancy in the United States

Ohio air pollution study

Impact of particulates on Ohio

Indiana has Orange level bad air days about 10% of the time in its counties.

At-risk populations from bad air pollution include those with asthma, chronic bronchitis, emphysema, cardiovascular disease and diabetes.

Coal plants near residential areas

FURTHER READING
Air pollution in Cairo

Worldwide city by city comparison of air pollution

Harvard study of particulates

Researchers comparing air quality in six cities across the United States were stunned when their data showed that people living in cities with the dirtiest air died on average two years earlier than residents of cities with the cleanest air. The difference in death rates was linked to elevated levels of fine-particle pollution.

In public-health terms, a two-year shift in life expectancy is enormous—comparable to the protective effects of proper diet and exercise—so that the researchers themselves had doubts at first about their findings. But the association held up.

Lung diseases like cancer, emphysema, fibrosis, and asthma are almost all initiated or aggravated by the inhalation of particles and gases, says center director Joseph Brain, Drinker professor of environmental physiology.

“97 to 98 percent of lung cancer would be eliminated if people didn’t smoke cigarettes and avoided environmental and workplace exposures to air pollution.”

Sometimes the structure of the lungs leads to different levels of particle exposure. Says Rogers, “We know if you have a mom and her seven-year-old standing at a bus stop and they get a blast of diesel exhaust, the child is going to get relatively much greater particle deposition.” Because of differences in surface to lung volume, metabolic rate, and activity, the seven-year-old’s lungs will get two and a half times the dose of particles as the mother’s lungs. “We first predicted this theoretically,” says Rogers, “and the experimental evidence supports it. The seven-year-old has a fully alveolated lung with an enormous surface area, but a small chest volume, so there is a greater particle deposition relative to the adult, who has a much larger chest volume and a slower metabolism.”

But what they found in 1990, a full 15 years into the Six Cities study, was something entirely unexpected. People were losing lung function, but what was killing them were cardiovascular events such as heart attacks and dysrhythmias. And it was fine particles from power plants and other combustion sources, such as automobiles and home heating, that showed the strongest associations with these deaths.

In 1993, the group published their Six Cities findings in the New England Journal of Medicine. It is the most cited air-pollution paper in existence.

Dockery and his colleagues had the integrity of their science questioned; all their data were later independently examined—and ultimately validated. “We are in an interesting quandary,” he says. “Congress wrote the law to protect even the most sensitive individuals, at a time when we thought we could define a level at which nobody would be adversely affected. But as we have become more sophisticated in our epidemiologic studies, it has become clear that…this concept—that there is a safe level at which you can protect everybody in the public against health effects—is not holding up. There are detectable health effects at even the lowest levels.”

But how do fine particles cause heart attacks? “One hypothesis,” says Godleski, “since some of the effects are almost immediate, is that they must be neurally mediated.” Particles may stimulate nerve fibers in the lung. Signals relayed to the central nervous system may change the autonomic balance of the heart in ways that “make it more prone to arrhythmia and other effects, which in turn create the potential for a fatality.”

Another hypothesis suggests that, because particles cause inflammation of the lungs, inflammatory agents produced there may affect the heart in a negative way. Vasoconstrictors such as endothelin, for example, are secreted by the lungs when inflamed. The fact that mortality peaks 18 to 20 hours after the peak in a particle-pollution event (such as a smoggy day in summer) lends some support to this possibility; think of the way a sunburn can develop over time, after you leave the beach.

Even after laboratory studies validated the epidemiology of the Six Cities study, questions were raised about the nature of susceptible individuals. So began what Dockery calls “a horrible discussion” that asked, “If people are dying from air pollution, are they people who were going to die soon anyway? Are we just advancing their date of death by a day, and if so [getting back to economic considerations], is that really worth a million dollars [to change]?” After much investigation, it now appears that air pollution is in fact shortening lives by many years.

“If you get sick with influenza or pneumonia,” Dockery explains, “you might be in trouble for a few days, but if you recover, you can go on and live for another 20 or 30 years. But if, during the period when you are sick, air pollution pushes you over the edge, then you are talking about substantial decreases in life expectancy.”




Carnival of Space 130

Carnival of Space 130 is up at Chandra Blog

This site provided the technology highlights for weeks 39-45, which included 13 space-related highlights.

Colony Worlds speculates on the wonders of the solar system 200 years after humans set foot on Mars. The colonization speculation is based on the actual conditions of the different moons and planets.

Universe Today considers using the Vasimr plasma rocket to clean up orbiting space trash


Check out the Carnival of Space 130 at Chandra Blog for more on the Kepler mission and other space topics

MIT Designing Optical Chips That Can Be Built with Existing Chip-Making Processes


In the prototype optical chip shown here, the circles in the top two rows are "ring resonators" that can filter out light of different wavelengths.
Image courtesy of Vladimir Stojanovic


Computer chips that transmit data with light instead of electricity consume much less power than conventional chips, but so far, they’ve remained laboratory curiosities. Professors Vladimir Stojanović and Rajeev Ram and their colleagues in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratory hope to change that, by designing optical chips that can be built using ordinary chip-manufacturing processes.

In addition to saving power, they could make computers much faster. “If you just focus on the processor itself, you maybe get a 4x advantage with photonics,” Stojanović says. “But if you focus on the whole connectivity problem, we’re talking 10, 20x improvements in system performance.”


Optical data transmission could solve what will soon be a pressing problem in chip design. As chips’ computational capacity increases, they need higher-bandwidth connections to send data to memory; otherwise, their added processing power is wasted. But sending more data over an electrical connection requires more power.


MIT researchers have demonstrated that they can put large numbers of working optical components and electronics on the same chip. This winter they expect to be able to control the optics directly with the electronics.



TI has produced two sets of prototypes for the MIT researchers, one using a process that can etch chip features as small as 65 nanometers, the other using a 32-nanometer process. To keep light from leaking out of the polysilicon waveguides, the researchers hollowed out the spaces under them when they got the chips back — the sole manufacturing step that wasn’t possible using TI’s in-house processes. But “that can probably be fixed more elegantly in the fabrication house if they see that by fixing that, we get all these benefits,” Watts says. “That’s a pretty minor modification, I think.”

The MIT researchers’ design uses light provided by an off-chip laser. But in addition to guiding the beam, the chip has to be able to load information onto it and pull information off of it. Both procedures use ring resonators, tiny rings of silicon carved into the chip that pull light of a particular frequency out of the waveguide. Rapidly activating and deactivating the resonators effectively turns the light signal on and off, and bursts of light and the gaps between them can represent the ones and zeroes of digital information.

To meet the bandwidth demands of next-generation chips, however, the waveguides will have to carry 128 different wavelengths of light, each encoded with its own data. So at the receiving end, the ring resonators provide a bank of filters to disentangle the incoming signals. On the prototype chips, the performance of the filter banks was “the most amazing result to us,” Stojanović says, “which kind of said that, okay, there’s still hope, and we should keep doing this.” The wavelength of light that the resonators filter is determined by the size of their rings, and no one — at either TI or MIT — could be sure that conventional manufacturing processes were precise enough to handle such tiny variations.
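As a rough sketch of why ring sizing is so demanding, assume a generic silicon ring resonator with an effective index of about 2.5 and a 5-micrometre radius (both are illustrative assumptions, not MIT's design values). A ring passes wavelengths for which a whole number of wavelengths fits around its circumference, so the filtered wavelength scales directly with the radius:

import math

n_eff = 2.5   # assumed effective index of the waveguide mode
R = 5e-6      # assumed ring radius in metres (5 micrometres)

def resonant_wavelength(radius, order):
    """Resonance condition: order * wavelength = n_eff * circumference."""
    return n_eff * 2 * math.pi * radius / order

m = round(n_eff * 2 * math.pi * R / 1.55e-6)   # azimuthal order nearest 1550 nm
lam = resonant_wavelength(R, m)
lam_err = resonant_wavelength(R + 1e-9, m)      # same order, with a 1 nm radius error
print(f"resonance: {lam * 1e9:.1f} nm; a 1 nm radius error shifts it by {(lam_err - lam) * 1e9:.2f} nm")

With these assumed numbers a one-nanometre radius error moves a resonance by roughly 0.3 nm, comparable to the channel spacing needed to pack on the order of 128 wavelengths into a few tens of nanometres of spectrum, which is why the precision of the as-fabricated filter banks was such a pleasant surprise.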



Henry Markram Calls the IBM Cat Scale Brain Simulation a Hoax

Henry Markram (Blue Brain Project) says IBM's claim is a HOAX.

Henry is referring to IBM's claim of achieving a cat-scale brain simulation.

Henry Markram was interviewed by Sander Olson for Nextbigfuture in August, 2009

This is a mega public relations stunt - a clear case of scientific deception of the public. These simulations do not even come close to the complexity of an ant, let alone that of a cat. IBM allows Modha to mislead the public into believing that they have simulated a brain with the complexity of a cat - sheer nonsense.

Here are the scientific reasons why it is a hoax:

How complex is their model?
They claim to have simulated over a billion neurons interacting. Their so-called "neurons" are the tiniest points you can imagine, a microscopic dot. Over 98% of the volume of a neuron is branches (like a tree). They just cut off all the branches and roots and took a point in the middle of the trunk to represent an entire neuron. In real life, each segment of a neuron's branches contains dozens of ion channels that powerfully control the information processing in a neuron. They have none of that.

Neurons contain tens of thousands of proteins that form a network with tens of millions of interactions. These interactions are incredibly complex and would require solving millions of differential equations. They have none of that. Neurons contain around 20,000 genes that produce products called mRNA, which build the proteins. The way neurons build proteins and transport them to all the corners of the neuron where they are needed is an even more complex process, which also controls what a neuron is, its memories, and how it will process information. They have none of that.

They use an alpha function (up fast, down slow) to simulate a synaptic event. This is a completely inaccurate representation of a synapse. There are at least six types of synapses that are highly non-linear in their transmission (i.e., they transform inputs rather than merely transmit them). Synapses are also extremely complex molecular machines; you would need tens of thousands of differential equations to simulate just one. They simulated none of this.

There are complex differential equations that must be solved to simulate the ionic flow in the branches, the biophysics of the ion channels, the protein-protein interactions, the complete biochemical and genetic machinery, and the synaptic transmission between neurons: hundreds of thousands more differential equations. They have none of this. Then there are glia, ten times more numerous than neurons, and the blood supply, and more and more. These "points" they simulated, and the synapses they use for communication, are literally millions of times simpler than a real cat brain. So they have not even simulated a cat's brain at one millionth of its complexity.
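A crude way to see the scale of the gap Markram describes is to count equations per neuron, treating his figures as loose order-of-magnitude assumptions. The synapse count per neuron below is a commonly cited ballpark, not a number from his letter, and the sketch deliberately ignores ion channels, protein networks, and gene expression, so it understates his point:

# Order-of-magnitude sketch only; inputs are rough assumptions, not measurements.
odes_per_point_neuron = 2        # a two-variable spiking "point" model
synapses_per_neuron = 5_000      # assumed ballpark for a mammalian neuron
odes_per_synapse = 10_000        # "tens of thousands" per synapse, per Markram

odes_detailed_neuron = synapses_per_neuron * odes_per_synapse
ratio = odes_detailed_neuron / odes_per_point_neuron
print(f"detailed neuron vs point model: roughly {ratio:,.0f}x more equations")  # ~25,000,000x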



Is it nonetheless the biggest simulation ever run?
No. These people simulated 1.6 billion points interacting. They used a formulation called the Izhikevich model to represent the summing and threshold spiking of the "points". Prof. Eugene Izhikevich himself had already run a simulation in 2005 with 100 billion such points interacting, just for the fun of it (a simulation over 60 times larger). That simulation ran on a cluster of desktop PCs that any graduate student could have built. This is no technical achievement. That model exhibited oscillations, but that always happens, so even simulating 100 billion such points interacting is light years away from a brain. See: http://www.izhikevich.org/human_brain_simulation
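For reference, the Izhikevich formulation Markram refers to is a deliberately simple two-variable model. The sketch below uses the standard "regular spiking" parameters from Izhikevich's 2003 paper as a minimal illustration of the model class; it is not code from either the IBM or the 2005 simulation.

def izhikevich_spike_times(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, t_max_ms=1000.0, dt=0.5):
    """Simulate one Izhikevich 'point' neuron and return its spike times (ms)."""
    v, u = -65.0, b * -65.0          # membrane potential (mV) and recovery variable
    spikes = []
    for step in range(int(t_max_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)   # membrane equation
        u += dt * a * (b * v - u)                            # recovery equation
        if v >= 30.0:                # spike: record time and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(f"{len(izhikevich_spike_times())} spikes in 1 second of simulated time")

The entire "neuron" is two coupled equations and a reset rule, which is exactly Markram's point about how little of a real neuron such a point model captures.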

Is the simulator they built a big step?
Not even close. There are numerous proprietary and peer-reviewed neurosimulators (e.g., NCS, pNEURON, SPLIT, NEST) out there that can handle very large parallel models that are essentially only bound by the available memory. The bigger the machine you have available, the more neurons you can simulate. All these simulators apply optimizations for the particular platform in order to make optimal use of the available hardware. Without any comparison to existing simulators, their publication is a non-peer reviewed claim.

Did they learn anything about the brain?
They got very excited because they saw oscillations. Oscillations are an obligatory artifact that one always gets when many points interact. These findings that they claim on the neuroscience side may excite engineers, but not neuroscientists.

Why did they get the Gordon Bell Prize?
They submitted a non-peer reviewed paper to the Gordon Bell Committee and were awarded the prize almost instantly after they made their press release. They seem to have been very successful in influencing the committee with their claim, which technically is not peer-reviewed by the respective community and is neuroscientifically outrageous.

But is there any innovation here?
The only innovation here is that IBM has built a large supercomputer - which is irrelevant to the press release.

Why did IBM let Modha make such a deceptive claim to the public?
I don't know. Perhaps this is a publicity stunt to promote their supercomputer. The supercomputer industry is suffering from the financial crisis and they are probably desperate to boost their sales. It is so disappointing to see this truly great company allow the deception of the public on such a grand scale.

But have you not said you can simulate the Human brain in 10 years?
I am a biologist and neuroscientist who has studied the brain for 30 years. I know how complex it is. I believe that with the right resources and the right strategy it is possible. We have so far only simulated a small part of the brain at the cellular level of a rodent and I have always been clear about that.

Would other neuroscientists agree with you?
There is no neuroscientist on earth that would agree that they came even close to simulating the cat's brain - or any brain.

But did Modha not collaborate with neuroscientists?
I would be very surprised if any neuroscientists he may have had in his DARPA consortium realized he was going to make such an outrageous claim. I can imagine that it is remotely possible that the San Francisco neuroscientists knew he was going to make such a stupid claim.

But did you not collaborate with IBM?
I was collaborating with IBM on the Blue Brain Project at the very beginning because they had the best available technology to allow us to faithfully integrate the diversity and complexity found in brain tissue into a model. This for me is a major endeavor to advance our insights into the brain and drug development. Two years ago, when the same Dharmendra Modha claimed the “mouse-scale simulations”, I cut all neuroscience collaboration with IBM because this is an unethical claim and it deceives the public.

Aren't you afraid they will sue you for saying that they have deceived the public?
Well, there is right and wrong, and what they have done is not only wrong, but outrageous. They deceived you and millions of other people.

Henry Markram
Blue Brain Project

IEEE Spectrum has also published a letter from Henry Markram that details his criticism.



November 23, 2009

Nextbigfuture will be at Foresight 2010 - the Synergy of Molecular Manufacturing and AGI

There is an upcoming conference focused on the synergy of molecular manufacturing and artificial general intelligence (AGI), which will also celebrate the 20th anniversary of the founding of Foresight. Brian Wang of nextbigfuture.com will be one of the speakers at this conference.

Several rapidly-developing technologies have the potential to undergo an exponential takeoff in the next few decades, causing as much of an impact on economy and society as the computer and networking did in the past few. Chief among these are molecular manufacturing and artificial general intelligence (AGI). Key in the takeoff phenomenon will be the establishment of strong positive feedback loops within and between the technologies. Positive feedback loops leading to exponential growth are nothing new to economic systems. At issue is the value of the exponent: since the Industrial Revolution, economies have expanded at rates of up to 7% per year; however, computing capability has been expanding at rates up to 70% per year, in accordance with Moore’s Law. If manufacturing and intellectual work shifted into this mode, the impact on the economy and society would be profound. The purpose of this symposium is to examine the mechanisms by which this might happen, and its likely effects.
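A trivial compounding calculation, included here only to make the quoted exponents concrete, shows how different 7% per year and 70% per year really are:

# Compound growth at the two rates mentioned above (illustrative arithmetic only).
for years in (10, 20, 30):
    economy = 1.07 ** years    # ~7%/yr, industrial-era economic growth
    moore = 1.70 ** years      # ~70%/yr, Moore's-Law-style improvement
    print(f"{years:2d} years: 7%/yr -> {economy:8.1f}x    70%/yr -> {moore:16,.0f}x")

Over 30 years the 7% path multiplies output by less than 8x, while the 70% path multiplies it by millions, which is the gap the symposium proposes to examine.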




Registration is open for the conference, which runs from Sat, Jan 16, 2010, 08:30 AM through Sun, Jan 17, 2010, 03:00 PM,

at the Sheraton Palo Alto Hotel, 625 El Camino Real, Palo Alto, CA, USA

conference page

Economist - The World in 2010

The Economist magazine has published its annual projections for the coming year. This year it is The World in 2010.

Projected statistics for 80 countries in 2010 (9 page pdf)

Statistics for 15 industries in 2010 (4 page pdf)

The Economist's long-term projection for China's economy is still a positive one.

Over the next decade, China’s annual growth will slow from the 10%-plus pace of the past few years to perhaps 7%—still one of the fastest rates in the world. But future growth will be less dependent on exports. As China’s share of world exports hits 10% in 2010, up from 4% in 2000, Japan’s experience will be instructive. It suggests that there are limits to a country’s global market share: after reaching 10%, its share of world markets fell as the yen strengthened. Likewise, China will be under foreign pressure to allow the yuan to resume its climb against the dollar in 2010.


China's aging population and population in general are discussed

China is running out of children to look after the elderly, a state of affairs often summed up by the formula “4-2-1”: four grandparents, two parents, one child. The country has about 20 years to get its act together. Although its workforce will start shrinking from 2010 relative to the population, in absolute terms both its number of workers and its population as a whole will grow until about 2030, when the population will peak at around 1.46 billion. After that it will begin to decline gently.

If the government really wants to rejuvenate the population, it will need to loosen its policy. More children would increase the dependency ratio until they were old enough to join the labour force. But if it were done soon, some of those children would reach working age just before the crunch time of 2030, easing the labour shortage from then on.

Most officials are adamant that the policy remains in place. But in Shanghai, where the birth rate is well below the national average, the city government is now encouraging couples entitled to more than one child to take full advantage. Where it leads, others may follow.


China's 2010 census could turn up an undercount; the previous census undercounted by 22 million. The current population estimate for China could be low by 20-30 million: instead of 1.339 billion, the population of China could be 1.36 or 1.37 billion.




Country             2010 GDP (US$)   2010 growth   Per-capita GDP   GDP (PPP)   Per-capita GDP (PPP)
1. USA 14.84 trillion 2.4% $47,920 14.84 t $47,920
2. China 5.59 trillion 8.6% $ 4,170 9.85 t $ 7,350
3. Japan 5.13 trillion 1.3% $40,440 4.23 t $33,340
4. Germany 3.20 trillion 0.5% $38,520 2.81 t $33,840
5. France 2.72 trillion 0.9% $43,240 2.16 t $34,310
6. UK 2.26 trillion 0.6% $36,250 2.16 t $34,730
7. Italy 2.14 trillion 0.4% $36,820 1.72 t $29,630
8. Brazil 1.67 trillion 3.8% $ 8,480 2.11 t $10,740
9. Canada 1.48 trillion 2.0% $43,450 1.32 t $38,850
10. India 1.47 trillion 6.3% $ 1,240 3.88 t $ 3,270
11. Spain 1.44 trillion -0.8% $31,250 1.39 t $30,360
12. Russia 1.41 trillion 2.5% $10,030 2.16 t $15,330
13. Australia 1.13 trillion 1.7% $52,290 .84 t $39,020
14. Mexico .89 trillion 3.0% $ 7,890 1.67 t $14,830
15. South Korea .88 trillion 2.8% $17,810 1.42 t $28,700
16. Netherlands .81 trillion 0.4% $49,250 .66 t $40,080

add to China
Hong Kong .22 trillion 2.8% $30,720 .31 t $43,180

Purposely Adding Defects in Carbon Nanotubes increases Energy Stored in Supercapacitors by up to 200%


University of California San Diego researchers have developed a method to enhance the capacitance (up to three times) of carbon nanotube (CNT) electrode-based electrochemical capacitors by controllably incorporating extrinsic defects into the CNTs.

“Batteries have large storage capacity but take a long time to charge, while electrostatic capacitors can charge quickly but typically have limited capacity. Supercapacitors/electrochemical capacitors, however, incorporate the advantages of both,” Bandaru said.

Defects on nanotubes create additional charge sites enhancing the stored charge. The researchers have also discovered methods which could increase or decrease the charge associated with the defects by bombarding the CNTs with argon or hydrogen.

Carbon nanotubes could serve as supercapacitor electrodes with enhanced charge and energy storage capacity.

“It is important to control this process carefully as too many defects can deteriorate the electrical conductivity, which is the reason for the use of CNTs in the first place. Good conductivity helps in efficient charge transport and increases the power density of these devices,” Bandaru added.

The researchers think that the energy density and power density achieved through their approach could in practice exceed those of existing capacitor configurations, which suffer from problems with reliability, cost, and poor electrical characteristics.




We characterize the methodology of, and a possible way to enhance, the capacitance of carbon nanotube (CNT) electrode-based electrochemical capacitors. Argon irradiation was used to controllably incorporate extrinsic defects into CNTs and increase the magnitude of both the pseudocapacitance and double-layer capacitance by as much as 50% and 200%, respectively, compared to untreated electrodes. Our work has implications in analyzing the prospects of CNT-based electrochemical capacitors, through investigating ways and means of improving their charge storage capacity and energy density.
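Since the energy stored in a capacitor at a fixed voltage is E = ½CV², a 200% capacitance increase translates directly into roughly three times the stored energy. The numbers below are placeholders for illustration only, not values from the UCSD paper:

def capacitor_energy_joules(capacitance_f, voltage_v):
    """E = 1/2 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

C_untreated = 1.0   # farads (assumed baseline for illustration)
V = 2.7             # volts (a typical organic-electrolyte cell voltage, assumed)

for label, C in (("untreated CNT electrode", C_untreated),
                 ("after +200% capacitance", 3.0 * C_untreated)):
    print(f"{label}: {capacitor_energy_joules(C, V):.2f} J")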

Solid-State, Rechargeable, Long Cycle Life Lithium–Air Battery With High Energy Density and Improved Safety and Stability

Engineers at the University of Dayton Research Institute (UDRI) have developed a solid-state, rechargeable lithium-air battery. When fully developed, the battery could exceed specific energies of 1,000 Wh/kg in practical applications.

We have successfully fabricated and tested the first totally solid-state lithium-air battery, which represents a major advancement in the quest for a commercially viable, safe rechargeable battery with high energy and power densities and long cycle life.

In addition to increasing the battery’s energy density, the development is designed to mitigate the volatile nature of traditional lithium rechargeables, such as those used in cell phones and laptops, which can overheat and catch fire or rupture.




A Solid-State, Rechargeable, Long Cycle Life Lithium–Air Battery that could have over 1000 Wh/kg

This paper describes a totally solid-state, rechargeable, long cycle life lithium–oxygen battery cell. The cell is comprised of a Li metal anode, a highly Li-ion conductive solid electrolyte membrane laminate fabricated from glass–ceramic (GC) and polymer–ceramic materials, and a solid-state composite air cathode prepared from high surface area carbon and ionically conducting GC powder. The cell exhibited excellent thermal stability and rechargeability in the 30–105°C temperature range. It was subjected to 40 charge–discharge cycles at current densities ranging from 0.05 to 0.25 mA/cm2. The reversible charge/discharge voltage profiles of the Li–O2 cell with low polarizations between the discharge and charge are remarkable for a displacement-type electrochemical cell reaction involving the reduction of oxygen to form lithium peroxide. The results represent a major contribution in the quest of an ultrahigh energy density electrochemical power source. We believe that the Li–O2 cell, when fully developed, could exceed specific energies of 1000 Wh/kg in practical configurations.
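To put the projected 1,000 Wh/kg in perspective, a quick comparison against an assumed ~200 Wh/kg for contemporary lithium-ion cells (that figure is an assumption for illustration, not from the paper) shows how much cell mass the chemistry could save for a fixed energy budget:

# Cell mass needed for a fixed energy budget at different specific energies.
# The lithium-ion figure is an assumed ballpark; the lithium-air figure is the
# paper's projected ceiling, not a demonstrated value.
pack_energy_wh = 20_000   # hypothetical 20 kWh requirement
for chemistry, wh_per_kg in (("lithium-ion (assumed ~200 Wh/kg)", 200),
                             ("lithium-air (projected 1000 Wh/kg)", 1000)):
    print(f"{chemistry}: {pack_energy_wh / wh_per_kg:.0f} kg of cells")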