February 20, 2010

Implied PPP and Big Mac Exchange Rates for Brazil, India, Russia, and China

Goldman Sachs has predicted that Brazil, Russia, India and China (the BRICs) are the countries with the most promising economic growth prospects. The BRICs also performed the best during the latest financial crisis.

China has been singled out by economists and the US government as having an undervalued currency. One implication is that, as exchange rates approach fair value over the coming years, the economies with undervalued currencies will be larger when measured at those future exchange rates.

The Big Mac index approximates PPP (Purchasing Power Parity), but it does not cover India because the beef-based Big Mac is not sold there.

By PPP measures, the BRIC currencies should appreciate considerably: over the next few years for China, Brazil and Russia, and over a somewhat longer period for India, which is at an earlier development stage. Brazil has less room for PPP-driven appreciation, but with increasing oil production it will get a boost from the strength of commodity prices (Russia gets a similar boost from commodities).

China: implied exchange rate of 3.5 yuan to 1 USD (48% undervalued)
Russia: implied exchange rate of 18.8 rubles to 1 USD (37% undervalued)
India: implied exchange rate of 16.16 rupees to 1 USD (65% undervalued)
Brazil: implied exchange rate of 1.51 reais to 1 USD (20% undervalued)

At fair-value exchange rates, China's economy would be about double its current dollar size, Russia's about 60% larger, India's nearly triple, and Brazil's 20-25% larger.
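As a sanity check, the undervaluation and fair-value figures above follow from simple arithmetic on the implied versus market exchange rates. The market rates below are approximate early-2010 spot rates and are my assumptions, not figures from the article:

```python
# Undervaluation = (market - implied) / market; at fair value, dollar GDP
# grows by the ratio market / implied. Implied PPP rates are from the article.
rates = {
    # currency: (implied PPP rate per USD, approx. Feb 2010 market rate per USD)
    "China (yuan)":   (3.5,   6.83),
    "Russia (ruble)": (18.8,  30.0),
    "India (rupee)":  (16.16, 46.0),
    "Brazil (real)":  (1.51,  1.80),
}

for country, (implied, market) in rates.items():
    undervaluation = (market - implied) / market       # how far below fair value
    fair_value_multiple = market / implied             # dollar GDP gain at fair value
    print(f"{country}: {undervaluation:.0%} undervalued, "
          f"economy {fair_value_multiple:.2f}x larger at fair value")
```

With these assumed market rates, the script reproduces the article's numbers closely (China ~49% undervalued and roughly doubling, India ~65% undervalued and nearly tripling).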

The Implied PPP Conversion Rate for India is 16.165

GDP Per Capita (Current Prices, National Currency) INR 47,402.87
GDP Per Capita (Current Prices, US Dollars) US$ 1,032.71
GDP (PPP), US Dollars US$ 3,528.61 Billion
GDP Per Capita (PPP), US Dollars US$ 2,932.49
GDP Share of World Total (PPP) 4.946 %

The current US dollar to Indian rupee exchange rate is about 46 rupees to one dollar.

Economy Watch has a profile of Russia.

GDP Per Capita (Current Prices, National Currency) RUB 290,703.77
GDP Per Capita (Current Prices, US Dollars) US$ 8,873.61
GDP (PPP), US Dollars US$ 2,126.39 Billion
GDP Per Capita (PPP), US Dollars US$ 15,039.05
GDP Share of World Total (PPP) 3.327 %
Implied PPP Conversion Rate 19.33

Economy Watch profile of Brazil

GDP Per Capita (Current Prices, National Currency) BRL 15,796.24
GDP Per Capita (Current Prices, US Dollars) US$ 7,737.32
GDP (PPP), US Dollars US$ 2,002.04 Billion
GDP Per Capita (PPP), US Dollars US$ 10,455.60
GDP Share of World Total (PPP) 2.868 %
Implied PPP Conversion Rate 1.511

Economy Watch profile of China

GDP Per Capita (Current Prices, National Currency) RMB 24,379.10
GDP Per Capita (Current Prices, US Dollars) US$ 3,565.73
GDP (PPP), US Dollars US$ 8,734.71 Billion
GDP Per Capita (PPP), US Dollars US$ 6,546.30
GDP Share of World Total (PPP) 12.054 %
Implied PPP Conversion Rate 3.724

BRICs in the Future

BRIC GDP forecast from Goldman Sachs

Gross Domestic Product (nominal), 2010-2040 (in US$ trillions)
Country 2010  2015   2020   2025   2030   2035   2040    
USA    14.5  16.2    18.0   20.1   22.8   26.1   29.8
China   4.7   8.1    12.6   18.4   25.6   34.3   45.0 
Japan   4.6   4.8     5.2    5.6    5.8    5.9    6.0 
Germany 3.1   3.3     3.5    3.6    3.8    4.0    4.4
UK      2.5   2.8     3.1    3.3    3.6    3.9    4.3
France  2.4   2.6     2.8    3.1    3.3    3.6    3.9 
Italy   1.9   2.1     2.2    2.3    2.4    2.4    2.6 
Russia  1.4   1.9     2.6    3.3    4.3    5.3    6.3 
India   1.3   1.9     2.8    4.3    6.7    10.5  16.5
Brazil  1.3   1.7     2.2    2.8    3.7     5.0   6.6 
Canada  1.4   1.6     1.7    1.9    2.1     2.3   2.6 

Microfluidics and Optics Could Advance Lab-on-a-Chip Devices, and a Lab on a Chip Detects Herpes Virus

This is an example of multiple zone-plates placed over individual microfluidic channels. Credit: Laboratory of Ken Crozier, Harvard School of Engineering and Applied Sciences.

1. With a silicone rubber "stick-on" sheet containing dozens of miniature, powerful lenses, engineers at Harvard are one step closer to putting the capacity of a large laboratory into a micro-sized package.

Microfluidics, the ability to manipulate tiny volumes of liquid, is at the heart of many lab-on-a-chip devices. Such platforms can automatically mix and filter chemicals, making them ideal for disease detection and environmental sensing.

The performance of these devices, however, is typically inferior to larger scale laboratory equipment. While lab-on-a-chip systems can deliver and manipulate millions of liquid drops, there is not an equally scalable and efficient way to detect the activity, such as biological reactions, within the drops.

The Harvard team's zone-plate array optical detection system, described in an article appearing in Lab on a Chip (Issue 5, 2010), may offer a solution. The array, which integrates directly into a massively parallel microfluidic device, can analyze nearly 200,000 droplets per second; is scalable and reusable; and can be readily customized.

Unlike a typical optical detection system that uses a microscope objective lens to scan a single laser spot over a microfluidic channel, the team's zone-plate array is designed to detect light from multiple channels simultaneously. In their demonstration, a 62-element zone-plate array measured a fluorescence signal from drops traveling down 62 channels of a highly parallel microfluidic device.

The device works by creating a focused excitation spot inside each channel in the array and then collects the resulting fluorescence emission from water drops traveling through the channels, literally taking stop-motion pictures of the drops as they pass.

"Water drops flow through each channel of the device at a rate of several thousand per second," explains lead author Ethan Schonbrun, a graduate student at SEAS. "Each channel is monitored by a single zone plate that both excites and collects fluorescence from the high speed drops. By using large arrays of microfluidic channels and zone plate lenses, we can speed up microfluidic measurements."

The series of images are then recorded by a digital semiconductor (CMOS) camera, allowing high speed observation of all the channels simultaneously. Moreover, the array is designed so that each zone plate collects fluorescence from a well-defined region of the channel, thereby avoiding cross talk between adjacent channels. The end result is a movie of the droplets dancing through the channels.

"Our approach allows us to make measurements over a comparatively large area over the chip. Most microscopes have a relatively limited view and cannot see how the whole system is working. With our device, we can place lenses wherever we want to make a measurement," adds Crozier.

The system can detect nearly 200,000 drops per second, about four times the rate of existing state-of-the-art detection systems. Further, the lens array is scalable without any loss in efficiency and can be peeled on and off like a reusable sticker. Ultimately, the integrated design offers the sensitivity of a larger confocal microscope and the ability to measure over a larger area, all in a much smaller, cheaper package.

"Because we have this massively parallel approach—effectively like 62 microscopes—we can get very high measurement or data rates," says Crozier. "This device has shown we can measure up to 200,000 drops per second, but I think we can push it even further."

Nanophotonics experts Schonbrun and Crozier originally developed the zone-plate technology to enhance optical tweezers so they could grab particles in a liquid using light. Using the high numerical aperture that makes efficient optical tweezers, they realized that arrays of zone plates could also be used to implement an efficient and scalable optical detection platform.

The researchers, who have filed a patent on their invention, are optimistic that with further research and development, the device could enhance a range of microfluidic and microfluidic-based lab-on-chip devices and speed the advance of using them for applications such as in-the-field biological assays.

2. A team of Brigham Young University (BYU) engineers and chemists has created an inexpensive silicon microchip that reliably detects viruses, even at low concentrations.

The chips work like coin sorters, only they are much, much smaller. Liquids flow until they hit a wall where big particles get stuck and small particles pass through a super-thin slot at the bottom. Each chip’s slot is set a little smaller than the size of the particle to be detected. After the particles get trapped against the wall, they form a line visible with a special camera.

“One of the goals in the ‘lab on a chip’ community is to try to measure down to single particles flowing through a tube or a channel,” said Hawkins, who is also writing a book about aspects of lab-on-a-chip development.

Capturing single particles has important applications besides simply knowing if a particular virus or protein is present.

“One of the things I hope to see is for these chips to become a tool for virus purification,” said David Belnap, an assistant professor of chemistry and co-author on the paper.

He explained that a tool like the BYU chip would advance the pace of his research, allowing him and other researchers to consistently obtain pure samples essential for close inspection of viruses.

Overcoming obstacles to make the chips

A huge barrier to making chips that can detect viruses is cost: machinery precise enough to make chips with the nano-sized parts necessary for medical and biological applications runs about $100 million.

The BYU group developed an innovative solution. First they used a simpler machine to form two dimensions in micrometers (1,000 times larger than a nanometer). They formed the third dimension by placing a 50-nanometer-thin layer of metal onto the chip, then topping that with glass deposited from gases. Finally they used an acid to wash away the thin metal, leaving the narrow gap in the glass as a virus trap.

So far, the chips have one slot size. Hawkins says his team will make chips soon with progressively smaller slots, allowing a single channel to screen for particles of multiple sizes. Someone “reading” such a chip would easily be able to determine which proteins or viruses are present based on which walls have particles stacked against them.

After perfecting the chips' capabilities, the next step, Hawkins says, is to engineer an easy-to-use way for a lab technician to introduce the test sample into the chip.

Gadolinium-Nanodiamond Improves MRI Contrast Ten Times, and a New Cancer Biomarker

1. A Northwestern University study shows that coupling a magnetic resonance imaging (MRI) contrast agent to a nanodiamond results in dramatically enhanced signal intensity and thus vivid image contrast.

Thomas Meade, Dean Ho and their colleagues developed a gadolinium(III)-nanodiamond complex that, in a series of tests, demonstrated a significant increase in relaxivity and, in turn, a significant increase in contrast enhancement. The Gd(III)-nanodiamond complex demonstrated a greater than 10-fold increase in relaxivity -- among the highest per-Gd(III) values reported to date. This represents an important advance in the efficiency of MRI contrast agents.

2. Scientists have found that cancer patients produce antibodies that target abnormal glycoproteins (proteins with sugar molecules attached) made by their tumors. The result of this work suggests that antitumor antibodies in the blood may provide a fruitful source of sensitive biomarkers for cancer detection.

An antibody is a type of protein that the body's immune system produces when it detects harmful substances called antigens. Antigens include microorganisms such as bacteria, fungi, parasites, and viruses. Antibodies are also produced when the immune system mistakenly considers healthy tissue a harmful substance. These antibodies, called autoantibodies, target a person's own molecules and tissues. Research has shown that cancer patients sometimes make autoantibodies against their own malignant cells and tissues, as part of an immune response against their cancers. It is unclear why some cancer cells evade immune defenses. Scientists hope that such antibodies may ultimately have the potential to help doctors detect cancer by a simple blood test.

They found distinct abnormal mucin-type O-glycopeptide epitopes (parts of molecules that antibodies will recognize and bind to) that were targeted by autoantibodies in cancer patients--but such antibodies were absent in healthy controls.

Although larger sets of specimens will have to be analyzed to fully appreciate the clinical value of this technology, the preliminary results are very promising.

Closer to Pig Lungs for Human Transplants and Canada Approving Genetically Modified Enviropigs

1. Scientists in Melbourne, Australia, used a ventilator and pump to keep pig lungs alive and "breathing" while human blood flowed in them.

Experts estimated the work could lead to the first animal-human transplants within five years.

The breakthrough came after scientists were able to remove a section of pig DNA, which had made the pig organs incompatible with human blood.

Previous attempts to combine unmodified pig lungs and human blood ended abruptly two years ago when blood clots began forming almost immediately, causing the organs to become so blocked no blood could pass through.

Human DNA is now added to the pigs as they are reared to reduce clotting and the number of lungs which are rejected.

2. Environment Canada is poised to approve genetically modified pigs for the food supply. The pigs' phytase, produced in the salivary glands and secreted in the saliva, increases the digestion of the phosphorus contained in feed grains.

Enviropigs would then need approval from Health Canada before the pigs enter the food market.

The Yorkshire pigs were developed by researchers in Ontario at the University of Guelph, who spliced in genes from mice to decrease the amount of phosphorus produced in the pigs' dung.

The genetic modification means the new strain of pigs produce 30 to 65 percent less phosphorus in their waste, which has been problematic in surface and groundwater around large livestock operations.

Because the Enviropig is able to digest cereal grain phosphorus, there is no need to supplement its diet with either mineral phosphate or commercially produced phytase, and there is less phosphorus in the manure. When the phosphorus-depleted manure is spread on land in areas of intense swine production, there is less potential for phosphorus to leach into freshwater ponds, streams and rivers. Phosphorus is the major nutrient enabling the algal growth that leads to fish kills (through anoxic conditions) and reduced water quality, so the low-phosphorus manure from Enviropigs has a reduced environmental impact in areas where soil phosphorus exceeds desirable levels. The Enviropig biotechnology therefore has two beneficial attributes: it reduces feed cost and reduces the potential for water pollution.

Enviropig Technology

What is an Enviropig™?

Enviropig™ is a trademark used to designate a genetically modified (or genetically engineered) line of Yorkshire pigs that produces phytase in the salivary glands (parotid, submaxillary and sublingual), and secretes the enzyme in the saliva.

How does it work?

A transgene construct containing the murine (mouse) parotid secretory protein promoter gene sequence and the Escherichia coli phytase gene was introduced into the pig chromosome by pronuclear microinjection. This technique does not involve the use of either viral DNA or antibiotic resistance genes. The transgene construct was integrated in a single site in the genome, and shown to be stably transmitted to offspring in a Mendelian fashion through 8 generations. The promoter directs constitutive (continuous) production of the active phytase enzyme in secretory cells of the salivary glands including the parotid, submaxillary and sublingual glands. The phytase is secreted in the saliva and enters the mouth where it mixes with feed consumed. The phytase is most active in the acidic environment of the stomach (pH range of 2.0 to 5.5 during food consumption). There the enzyme digests the phosphorus rich phytate molecules releasing phosphate molecules that are readily absorbed from the small intestine.

The phytase is highly resistant to pepsin, the major protease in the stomach, but is destroyed by trypsin and chymotrypsin in the small intestine, and none is detected in the ileal contents. The phytase produced by the Enviropig™ is as active as the enzyme produced in Escherichia coli. Only one protein is produced from the transgene, and that is the phytase enzyme.

How was an Enviropig™ created?

The Enviropig™ was developed by the introduction of a transgene construct composed of the promoter segment of the murine parotid secretory protein gene and the Escherichia coli phytase gene (Golovan et al 2001) into a fertilized porcine embryo by pronuclear microinjection, and this embryo along with other embryos was surgically implanted into the reproductive tract of an estrous synchronized sow. After a 114 day gestation period, the sow farrowed and piglets born were checked for the presence of the transgene and for phytase enzyme activity in the saliva. When the mature genetically modified pig was crossed with a conventional pig, approximately half of the pigs contained the phytase transgene. This showed that the transgene was stably inserted into one of the chromosomes of the pigs and was inherited in a Mendelian fashion. Through breeding, this line of pigs is in the 8th generation.

Details on Organova and Invetech Printing Body Parts

The 3D bioprinter was covered in December 2009, but there are more details about how the bioprinters work and the development path.

Regenerative medicine is a rapidly advancing field that has the potential to transform human health care. The potential now exists to develop tissue constructs for tissue repair and organ replacement. The US Dept. of Health and Human Services predicts that "within 20 years regenerative medicine will be the standard of care for replacing all tissue/organ systems in the body in addition to extensive industrial use for pharmaceutical testing." Organovo, Inc. is dedicated to applying its breakthrough NovoGen tissue printing technology to make those goals a reality.

Dec 1, 2009 - Invetech, an innovator in new product development and custom automation for the biomedical, industrial and consumer markets, today announced that it has delivered the world's first production model 3D bio-printer to Organovo, developers of the proprietary NovoGen bioprinting technology. Organovo will supply the units to research institutions investigating human tissue repair and organ replacement.

The printer, developed by Invetech, fits inside a standard biosafety cabinet for sterile use. It includes two print heads, one for placing human cells, and the other for placing a hydrogel, scaffold, or support matrix. One of the most complex challenges in the development of the printer was being able to repeatedly position the capillary tip, attached to the print head, to within microns. This was essential to ensure that the cells are placed in exactly the right position. Invetech developed a computer controlled, laser-based calibration system to achieve the required repeatability.

Invetech plans to ship a number of 3D bio-printers to Organovo during 2010 and 2011 as part of the instrument development program. Organovo will be placing the printers globally with researchers in centers of excellence for medical research.

The Economist magazine reports on the arrival of the first commercial 3D bio-printer for manufacturing human tissue and organs.

The new machine, which costs around $200,000, has been developed by Organovo, a company in San Diego that specialises in regenerative medicine, and Invetech, an engineering and automation firm in Melbourne, Australia. One of Organovo’s founders, Gabor Forgacs of the University of Missouri, developed the prototype on which the new 3D bio-printer is based. The first production models will soon be delivered to research groups which, like Dr Forgacs’s, are studying ways to produce tissue and organs for repair and replacement. At present much of this work is done by hand or by adapting existing instruments and devices.

To start with, only simple tissues, such as skin, muscle and short stretches of blood vessels, will be made, says Keith Murphy, Organovo’s chief executive, and these will be for research purposes. Mr Murphy says, however, that the company expects that within five years, once clinical trials are complete, the printers will produce blood vessels for use as grafts in bypass surgery. With more research it should be possible to produce bigger, more complex body parts. Because the machines have the ability to make branched tubes, the technology could, for example, be used to create the networks of blood vessels needed to sustain larger printed organs, like kidneys, livers and hearts.

In 2006 Anthony Atala and his colleagues at the Wake Forest Institute for Regenerative Medicine in North Carolina made new bladders for seven patients. These are still working.

Dr Atala’s process starts by taking a tiny sample of tissue from the patient’s own bladder (so that the organ that is grown from it will not be rejected by his immune system). From this he extracts precursor cells that can go on to form the muscle on the outside of the bladder and the specialised cells within it. When more of these cells have been cultured in the laboratory, they are painted onto a biodegradable bladder-shaped scaffold which is warmed to body temperature. The cells then mature and multiply. Six to eight weeks later, the bladder is ready to be put into the patient.

The advantage of using a bioprinter is that it eliminates the need for a scaffold, so Dr Atala, too, is experimenting with inkjet technology. The Organovo machine uses stem cells extracted from adult bone marrow and fat as the precursors. These cells can be coaxed into differentiating into many other types of cells by the application of appropriate growth factors. The cells are formed into droplets 100-500 microns in diameter and containing 10,000-30,000 cells each. The droplets retain their shape well and pass easily through the inkjet printing process.

A second printing head is used to deposit scaffolding—a sugar-based hydrogel. This does not interfere with the cells or stick to them. Once the printing is complete, the structure is left for a day or two, to allow the droplets to fuse together. For tubular structures, such as blood vessels, the hydrogel is printed in the centre and around the outside of the ring of each cross-section before the cells are added. When the part has matured, the hydrogel is peeled away from the outside and pulled from the centre like a piece of string.

Mach Effect Propulsion Research Update

If the Mach Effect is real (mass fluctuations) and behaves as theorized by James Woodward (with some experimental confirmation), and if the effect scales up as expected, then a propellantless space drive can be built. The latest research work by James Woodward appears to be validating the existence of the effect.

STAIF 2007 - Mach-Lorentz Thruster (MLT) Applications presentation by Paul March.

An overview of the Mach Effect and interview with Paul March

All nextbigfuture articles related to Mach Effect Propulsion

1-G Space Drive

One-G constant acceleration and deceleration space drive would mean Earth-to-moon in 4 hours, Earth to Mars in 2-5 days, Earth to Saturn in 8-9 days.
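The quoted trip times follow from the standard constant-acceleration (brachistochrone) trajectory: accelerate at 1 g to the midpoint, flip, then decelerate. A minimal sketch, with distances that are my own approximations (Mars in particular ranges from roughly 0.4 to 2.5 AU, which is why the article quotes 2-5 days):

```python
import math

G = 9.81        # 1 g acceleration, m/s^2
AU = 1.496e11   # astronomical unit, metres

def brachistochrone_time(distance_m, accel=G):
    """Trip time when accelerating to the midpoint then decelerating:
    each half covers d/2 = (1/2)*a*t_half^2, so total t = 2*sqrt(d/a)."""
    return 2.0 * math.sqrt(distance_m / accel)

# Assumed distances: Moon at 384,400 km, Mars near close approach (~0.52 AU),
# Saturn at ~8.5 AU.
for name, d in [("Earth-Moon", 3.844e8),
                ("Earth-Mars (close)", 0.52 * AU),
                ("Earth-Saturn", 8.5 * AU)]:
    t = brachistochrone_time(d)
    print(f"{name}: {t/3600:.1f} hours ({t/86400:.1f} days)")
```

This yields roughly 3.5 hours to the Moon, about 2 days to Mars at close approach, and a bit over 8 days to Saturn, matching the figures above. Peak velocity even on the Saturn leg stays near 1% of light speed, so the non-relativistic formula is adequate.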

The Woodward effect is a hypothesis proposed by James F. Woodward, a physicist at California State University, Fullerton, that energy-storing ions experience transient mass fluctuations when accelerated.

Paul March updates -
Dr. Woodward's work is based on NO new physics. His mass fluctuation conjecture rests squarely on accepted and experimentally verified theories such as Newton's three laws of motion, Einstein's special and general relativity, Lorentz invariance, and of course Einstein's famous mass = Energy / c^2. And no, it's NOT E = m*c^2, for that version came later. The only element in Woodward's theoretical foundations still in dispute is how to integrate Mach's principle and its effects on the origins of inertia into GRT.

Now you want to know what Jim has produced of late in regards to his latest shuttler test program. I don’t want to steal Dr. Woodward’s thunder, but I’ll append a typical, but still very preliminary data plot for your review with the understanding that Dr. Woodward is still wringing out this new shuttler test set up looking for false positives that might contaminate this test series using this particular type of “soft” PZT material as the energy storage capacitor material. And as usual, using high-k cap dielectric materials makes the result time dependent and a tad flakey, so bear with Jim’s teething pains in bringing this new test article up to its full potential, but M-E potential it has.

Woodward's scaling rules appear to work: given the ~100 nanoNewtons Jim's device is generating at 47 kHz and the fact that the M-E predicts cubic frequency scaling, it fits right in with [Paul March's] results operating at 2.2 and 3.8 MHz. Jim needs to increase his operating frequency by a couple of orders of magnitude to see much more impressive results, measured in milliNewtons.
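The cubic scaling claim can be checked with a few lines of arithmetic. Using the figures quoted in the text (~100 nN at 47 kHz for Woodward's device; ~1,000 uN at 2.15 MHz and ~5,000 uN at 3.8 MHz for March's), a naive f^3 extrapolation lands within roughly an order of magnitude of March's measurements; closer agreement would depend on device details the bare scaling rule ignores:

```python
# Naive cubic-frequency extrapolation of Mach Effect thrust (thrust ~ f^3).
# Baseline figures are from the article; this is an order-of-magnitude sketch only.
base_thrust_nN = 100.0   # ~100 nN measured at the baseline frequency
base_freq_hz = 47e3      # 47 kHz

def scaled_thrust_nN(freq_hz):
    """Predicted thrust if thrust scales with the cube of operating frequency."""
    return base_thrust_nN * (freq_hz / base_freq_hz) ** 3

for f_hz, measured_uN in [(2.15e6, 1000.0), (3.8e6, 5000.0)]:
    predicted_uN = scaled_thrust_nN(f_hz) / 1e3
    print(f"{f_hz/1e6:.2f} MHz: predicted ~{predicted_uN:,.0f} uN "
          f"(March measured ~{measured_uN:,.0f} uN)")
```

The extrapolation predicts thrusts about ten times March's measured values at both frequencies, so "fits right in" should be read as order-of-magnitude agreement.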

When Dr. Woodward gets his current M-E proof-of-principle "Demonstrator" finished with accompanying M-E data for all to review, the normal scientific process would require other independent scientists to replicate his results at their leisure. However, that will take years to accomplish, so how can we jump-start this process? IMO, having NASA allocate approximately $1.0-to-2.0 million per year for a 3-to-5 year laboratory R&D effort to see if Woodward's M-E work can be verified, and then expanded to increase its per-thruster output level from micro-Newtons to Newtons and then thousands of Newtons, would be well worth the effort. Remember that if we can make this leap from M-E laboratory curiosity to working M-E thrusters, we will have equivalent specific impulse figures measured not in thousands or even tens of thousands of seconds, but trillions of seconds. We will also have a path to building GRT's traversable wormholes or Alcubierre's warp bubbles needed for interstellar flight measured in weeks to months instead of thousands of years. To me that would be tax dollars well spent no matter what outcome this R&D endeavor yields.

Paul March's Mach-2MHz test article, in a MINWAX Faraday shield, used the same 500 pF at 15 kV Y5U barium titanate caps that Jim was using at the time, but alas no vacuum system; it generated a first-light thrust of ~5,000 micro-Newtons running at 3.8 MHz. (See my STAIF-2006 paper and the attached related slides. I'm also attaching my MLT-2004 test article's typical 8-second data run's thermal evolution as they heated up for your reference.) I literally saw the Mach-2MHz's ~1,000 uN initial thrust level at 2.15 MHz die off into the noise over a ~1.0 minute run time with semi-constant (I was looking at the thrust scope trace most of the time) cap voltage of ~125 V-p and input power. It looks like the cap's barium titanate crystalline structure is rearranging itself while it's under constant load, which in turn probably kills off the piezoelectric-induced radial bulk acceleration in the caps that magnifies the vxB Lorentz force in these MLTs.

BTW, these high-k cap based M-E test articles can be resurrected if one lets them rest for several days, or bakes them in an oven above their Curie temperature for an hour or two, then letting them cool down to room temperature. However, they never seem to last as long as they did originally. They typically demonstrate renewed run times to thrust die off on the order of 1/2 to 3/4 the time originally demonstrated when new. That may be great for telling the physicists that something weirdly physical is going on in these M-E tests, but it really sucks when it comes to making a reliable thruster needed for aerospace uses…

February 19, 2010

Uranium Enrichment Economics

NEI Magazine looks at the history and details of the SWU (enrichment) market

The capacity of all these potential centrifuge and laser projects totals almost 90 million SWU per year, sufficient to meet the needs of WNA’s Upper Scenario for the year 2024, and well in excess of requirements before that year and for the other two scenarios.

Enrichment requirements for the world's growing fleet of nuclear power plants are expected to expand significantly. Current enrichment capacity on a world-wide basis is just sufficient to meet requirements, but the potential pace of enrichment capacity expansion is expected to outstrip the growth in requirements. Thus, it is not likely that all this expansion potential will come to fruition. The continuation of enrichment trade restrictions in the USA and European Union (EU) will have a major bearing on which projects go forward. Perhaps the biggest uncertainties are the status of USEC's American Centrifuge Project (ACP) and the feasibility of GE Hitachi-Global Laser Enrichment LLC's (GLE) laser-based SILEX process.

The potential outlook for primary production, shown in the figure above, points toward a large increase in capacity. Russia's Rosatom plans to increase capacity, between expansion at its existing four facilities and the International Uranium Enrichment Center, by almost 50 percent, up to an eventual level of about 38 million SWU per year. CNNC in China is increasing its capacity of Russian-supplied centrifuges by 50 percent.

The Economics of enrichment of uranium

Los Alamos estimate -
For a laser isotope separation process involving selective excitation of 235UF6 molecules with infrared lasers and their dissociation with an ultraviolet laser, a facility with the standard capacity of 8.75 million SWU per year is estimated to cost about 1 billion dollars. (Laser costs account for approximately half of the direct capital costs.) This is considerably lower than the estimated cost of a new gaseous diffusion plant (about 5 billion dollars) or that of a gas centrifuge plant (about 6 billion dollars).

The annual operating cost for a laser isotope separation facility is estimated to be about 100 million dollars, in contrast to about 500 million for a gaseous diffusion plant and 100 to 200 million for a gas centrifuge plant. Our estimates of capital and operating costs for a laser isotope separation facility indicate a cost per SWU of about $30/kg.
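The ~$30 per SWU figure can be roughly reproduced by annualizing the capital cost and adding operating cost. The discount rate and plant life below are my assumptions, not Los Alamos's, so the result is only indicative:

```python
# Back-of-envelope cost per SWU for the laser enrichment plant figures above.
capital_cost = 1.0e9        # dollars (quoted facility cost)
operating_cost = 100e6      # dollars per year (quoted)
capacity_swu = 8.75e6       # SWU per year (quoted)

rate, years = 0.10, 20      # assumed discount rate and plant life
# Capital recovery factor: converts the up-front cost to an equivalent annuity.
crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

annual_cost = capital_cost * crf + operating_cost
print(f"~${annual_cost / capacity_swu:.0f} per SWU")
```

With these assumptions the estimate comes out in the mid-$20s per SWU, the same ballpark as the quoted ~$30/kg SWU; a higher discount rate or shorter life pushes it toward $30.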

Depleted uranium left over from previous enrichment has about one third the U-235 percentage of natural uranium. There are about 1.5 million tons of depleted uranium at 0.3 percent U-235, and about 100,000 tons of 5% enriched uranium fuel could be produced from it.
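The standard enrichment mass balance gives a feel for that depleted-uranium figure. The tails assay below is my assumption; even with near-zero tails the balance tops out around 90,000 tons, so the 100,000-ton figure above is best read as a round-number approximation:

```python
# Enrichment mass balance: product P from feed F is
#   P = F * (x_f - x_t) / (x_p - x_t)
# where x_f, x_p, x_t are the feed, product and tails U-235 assays.
feed_tons = 1.5e6     # depleted uranium stockpile (from the article)
x_f = 0.003           # 0.3% U-235 feed (from the article)
x_p = 0.05            # 5% enriched product (from the article)
x_t = 0.001           # assumed tails assay of 0.1% (my assumption)

product_tons = feed_tons * (x_f - x_t) / (x_p - x_t)
print(f"~{product_tons:,.0f} tons of 5% enriched fuel")
```

With 0.1% tails the yield comes out near 60,000 tons; the result is very sensitive to how far the tails are stripped, which is where the extra SWU capacity discussed above would be spent.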

mPower Reactor Gets Support from Utilities

Three big utilities (Tennessee Valley Authority, FirstEnergy Corp. and Oglethorpe Power Corp.) on Wednesday signed an agreement with McDermott International Inc.'s Babcock & Wilcox subsidiary, committing to get the new factory-mass-producible 125-140 MWe mPower reactor approved for commercial use in the U.S.

A new type of nuclear reactor—smaller than a rail car and one tenth the cost of a big plant—is emerging as a contender to reshape the nation's resurgent nuclear power industry.

Small reactors are expected to cost about $5,000 per kilowatt of capacity, or $750 million or so for one of Babcock & Wilcox's units. Large reactors cost $5 billion to $10 billion and range from 1,100 to 1,700 megawatts of generating capacity.

The first certification request for a small reactor design is expected to be Babcock & Wilcox's request in 2012. The first units could come on line after 2018.

The small reactors that are being developed in China, Russia and India seem more likely to dominate the nuclear market from 2016 onwards. Of the 53 reactors under construction now, 41 (almost 80%) are being built in China, Russia, India and South Korea.

The Hyperion Power Generation company has customer orders and is planning to have its first hot-tub-sized uranium hydride reactors (27 MWe) built in 2013. The Russians have built reactors like the SVBR (75/100 MWe) lead-bismuth-cooled fast reactor for their submarines. The SVBR reactor project is funded and should have a pilot reactor by 2020.

China's first 210 MWe pebble bed reactor should be completed in 2013 and cost in the range of $1,500-2,000 per kW.

Factory mass produced nuclear reactor designs statistics and analysis from Sandia

B & W mPower

Technical details on the mPower reactor, Hyperion Power Generation 25 MWe fast reactor, Nuscale 45 MWe reactor

* mPower small reactor, a 125 MW LWR design that is still on the drawing board in Lynchburg, VA.

* The reactor will use 5% enriched uranium in fuel rod assemblies which are similar in design to those used in 1,000 MW plants.

* At a hypothetical price of $3,000/kW, a single unit would cost $375 million.

* One of the intended uses of the mPower reactor is to “repower carbon-intensive plants where the transmission and distribution infrastructure is already in place.” (coal-to-nuclear?)

* First units could be received by customers by 2018. The reactor can be shipped by truck and rail to a customer site and installed below grade by skilled trades without complex training.

* three years from signed contracts to operational units

Rod Adams has more info on the mPower

Nuclear Engineering International has more info on mPower

B&W boasts that when the mPower goes on the market in 2012, each 125 MWe reactor would be made in a factory, cost about half a billion dollars at a firm fixed price, and could be built and installed, in multiples of two or four reactors, in only three years.

mPower - main data

Reactor type: Integral PWR
Power: ~125 MWe, ~400 MWt
Reactor coolant: <14 MPa (2,000 psia), ~600 K (620°F) core outlet
Steam conditions: <7 MPa (1,000 psia), superheated
Reactor vessel diameter: ~3.6 m (12 ft)
Reactor vessel height: ~22 m (70 ft)
Fuel assemblies: sixty-nine 17x17 uranium dioxide assemblies
Fuel assembly height: ~half of a standard fuel assembly
Fuel assembly pitch: 21.5 cm
Active core height: ~200 cm
Core diameter (flat to flat): ~200 cm
Fuel inventory: <20 t
Average specific power: ~20 kW/kgU
Core average fuel burnup: <40 GWd/tU
Target fuel cycle length: ~5 years
Maximum enrichment: <5%
Reactivity control: control rods
Other features: no soluble boron; air-cooled condenser; spent fuel stored in containment for the 60-year design life

Iranian Laser Enrichment of Uranium Program

The Washington Post and other sources are reporting the announcement by Iranian President Mahmoud Ahmadinejad that Iran has a program of laser enrichment of uranium. It is believed that the laser enrichment is still at experimental quantities and that the main enrichment is still done with centrifuges.

Here is a 14 page pdf of the history of Laser Enrichment efforts in Iran.

Laser enrichment has been covered here before. GE is building a facility for uranium enrichment using lasers, which should be in operation in 2012 or 2013.

General Electric has licensed and is commercializing a laser uranium enrichment process. The Silex laser uranium enrichment process has been indicated to be an order of magnitude more efficient than existing production techniques, but the exact figure is classified.

Australian scientists Michael Goldsworthy and Horst Struve developed the process, and from 1996 to 2002 received support from the United States Enrichment Corp. (Bethesda, MD); the two scientists have since formed a public corporation, Silex Systems (Lucas Heights, NSW, Australia). Last year they licensed the Silex process to General Electric. The process is based on selective excitation of uranium hexafluoride (UF6) molecules that contain U-235 by laser light at a narrow spectral line near 16 µm, but few details have been released. The Los Alamos National Laboratory (Los Alamos, NM) initially explored the concept three decades ago, but the U.S. Department of Energy later abandoned it in favor of atomic vapor laser isotope separation.

The CO2 lasers can generate 1 J pulses, but only at a limited repetition rate, and only a fraction of the pulse is in the pump band. Unspecified “additional nonlinear optical tricks” are needed to convert the CO2 pump light to the correct wavelength to pump the Raman cell. The lasers are 1% efficient and the Raman conversion 25% efficient, so the overall efficiency is 0.25%.

With many details classified or proprietary, it is hard to quantify the processing. Lyman wrote that if a laser could illuminate a one-liter volume at an ideal repetition rate, it would take about 100 hours to produce one kilogram of U-235, assuming complete separation of the U-235 and U-238 isotopes. However, most processes require multiple stages of separation, and according to Lyman's comments, a 5,000 Hz laser would be needed to process the entire feed stream (a mixture of UF6 and an unidentified diluting gas).

Solid state lasers are able to be continuously tuned across the 0.2 to 10 micron range

Free electron lasers can operate from 3 to 100 microns, including the 6-35 micron range

The US Navy has funded development of megawatt-class solid state and free electron lasers for delivery in 2012.

The new solid state lasers could be more efficient for the desired frequency and wavelengths.

The specific energy consumption is 2300-3000 kWh/SWU for Gaseous Diffusion, versus 100-300 kWh/SWU for gas centrifuge. The number of stages required to produce LEU is about 30 times larger in the diffusion plant than in the centrifuge plant.

A kilogram of LEU requires roughly 11 kilograms of natural uranium as feedstock for the enrichment process and about 7 separative work units (SWU) of enrichment services. To produce one kilogram of uranium enriched to 3.5% U-235 requires 4.3 SWU if the plant is operated at a tails assay of 0.30%, or 4.8 SWU if the tails assay is 0.25% (thereby requiring only 7.0 kg instead of 7.8 kg of natural uranium feed).
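The 4.3/4.8 SWU and 7.8/7.0 kg figures follow from the standard separative-work formula; here is a quick sketch (the value function and mass balance are the textbook definitions, not taken from the article):

```python
import math

# Standard separative work calculation used to check the feed and
# SWU figures quoted above for 3.5%-enriched uranium.
def value(x):
    # Separation potential V(x) = (1 - 2x) ln((1 - x)/x)
    return (1 - 2 * x) * math.log((1 - x) / x)

def enrich(product_kg, xp, xf, xt):
    # Feed and SWU to make product at assay xp from feed at assay xf
    # with tails at assay xt.
    feed = product_kg * (xp - xt) / (xf - xt)
    tails = feed - product_kg
    swu = product_kg * value(xp) + tails * value(xt) - feed * value(xf)
    return feed, swu

for tails_assay in (0.0030, 0.0025):
    feed, swu = enrich(1.0, 0.035, 0.00711, tails_assay)
    print(f"tails {tails_assay:.2%}: {feed:.1f} kg natural U, {swu:.1f} SWU")
```

Running this reproduces the article's numbers: 7.8 kg of feed and 4.3 SWU at 0.30% tails, 7.0 kg and 4.8 SWU at 0.25% tails.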

Areva's recently announced Idaho enrichment plant, estimated to cost $2 billion, is expected to supply 3 million SWU, or half the capacity of the GE plant at full production. The full-scale GE plant, expected to supply 3.5-6.0 million SWU, will require additional investor commitments. The GE laser enrichment plant would start at 1 million SWU/year and then be expanded. At about 7 SWU per kilogram, full-scale output would correspond to roughly half a million to close to one million kilograms per year of enriched uranium.

25 page powerpoint presentation made April 2008 on Silex

Silex is also examining Oxygen-18 (PET medical imaging) and Carbon-13 (medical diagnostic) laser separation.

Laser enrichment at Idaho Samizdat

Silex company site

Worldwide uranium demand and nuclear reactor fuel requirements translate into a requirement for uranium enrichment separative work services in the range of 35-38 million SWU/year over the next 10 years.

About 120,000 kg SWU are required to enrich the annual fuel loading for a typical large (1,000 MWe) nuclear reactor.
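Those two figures together imply roughly how many reactors the projected demand supports (a sketch using only the numbers above):

```python
# Sketch: number of typical large (1,000 MWe) reactors the projected
# world enrichment demand corresponds to.
swu_per_reactor = 120_000  # kg SWU/year for one reactor's annual reload
for demand_mswu in (35, 38):
    reactors = demand_mswu * 1e6 / swu_per_reactor
    print(f"{demand_mswu}M SWU/year supports about {reactors:.0f} reactors")
```

That works out to roughly 290-320 reactors' worth of annual reloads, consistent with the size of the world fleet plus new builds.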

The Silex process is inefficient for producing highly enriched uranium at this time.

The up to ten times greater enrichment efficiency improves the energy efficiency of nuclear power and the cost efficiency of nuclear fuel and operations.
Uranium: 8.9 kg U3O8 x $53    472 
Conversion: 7.5 kg U x $12     90 
Enrichment: 7.3 SWU x $135    985 [Silex could reduce this by 3-10 times]
Fuel fabrication: per kg      240 
Total, approx:           US$ 1787 
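As a sketch of what the bracketed Silex reduction could mean for the table above, assuming the efficiency gain passes straight through to the SWU price (an assumption, since SWU pricing depends on the market):

```python
# Sketch: rerun the approximate fuel-cost table with the enrichment
# line cut by a 3x or 10x factor.
uranium = 8.9 * 53      # US$ per kg of fuel: U3O8
conversion = 7.5 * 12   # conversion
fabrication = 240.0     # fuel fabrication
totals = {}
for factor in (1, 3, 10):
    enrichment = 7.3 * 135 / factor
    totals[factor] = uranium + conversion + enrichment + fabrication
    print(f"enrichment cost / {factor}: total ~US${totals[factor]:,.0f} per kg")
```

Since enrichment is over half the fuel cost, even the 3x case cuts the total fuel cost by about a third, from roughly $1,790 to $1,130 per kg.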

NEI Magazine looks at the history and details of the SWU (enrichment) market

The capacity of all these potential centrifuge and laser projects totals almost 90 million SWU per year, sufficient to meet the needs of WNA’s Upper Scenario for the year 2024, and well in excess of requirements before that year and for the other two scenarios.

Enrichment requirements for the world’s growing fleet of nuclear power plants are expected to expand significantly. Current enrichment capacity on a world-wide basis is just sufficient to meet requirements, but the potential pace of enrichment capacity expansion is expected to out-strip the growth in requirements. Thus, it is not likely that all this expansion potential will come to fruition. The continuation of enrichment trade restrictions in the USA and European Union (EU) will have a major bearing on which projects go forward. Perhaps the biggest uncertainties are the status of USEC’s American Centrifuge Project (ACP) and the feasibility of GE Hitachi-Global Laser Enrichment LLC’s (GLE) laser-based SILEX process.

The potential outlook for primary production, shown in the figure above, points toward a large increase in capacity. Russia's Rosatom plans to increase capacity, between expansion at its existing four facilities and the International Uranium Enrichment Center, by almost 50 percent, up to an eventual level of about 38 million SWU per year. CNNC in China is increasing its capacity of Russian-supplied centrifuges by 50 percent.


Wikipedia on enriched uranium

Laser techniques
Laser processes promise lower energy inputs, lower capital costs and lower tails assays, hence significant economic advantages. Several laser processes have been investigated or are under development.

None of the laser processes below is yet ready for commercial use, though SILEX is well advanced and expected to begin commercial production in 2012.

Atomic vapor laser isotope separation (AVLIS)
Atomic vapor laser isotope separation employs specially tuned lasers to separate isotopes of uranium using selective ionization of hyperfine transitions. The technique uses lasers tuned to frequencies that ionize 235U atoms and no others. The positively charged 235U ions are then attracted to a negatively charged plate and collected.

Molecular laser isotope separation (MLIS)
Molecular laser isotope separation uses an infrared laser directed at UF6, exciting molecules that contain a 235U atom. A second laser frees a fluorine atom, leaving uranium pentafluoride which then precipitates out of the gas.

Separation of Isotopes by Laser Excitation (SILEX)
Separation of isotopes by laser excitation is an Australian development that also uses UF6. After a protracted development process involving U.S. enrichment company USEC acquiring and then relinquishing commercialization rights to the technology, GE Hitachi Nuclear Energy (GEH) signed a commercialization agreement with Silex Systems in 2006. GEH has since begun construction of a demonstration test loop and announced plans to build an initial commercial facility. Details of the process are restricted by intergovernmental agreements between the USA and Australia and by the commercial entities. SILEX has been indicated to be an order of magnitude more efficient than existing production techniques but, again, the exact figure is classified.

February 18, 2010

Caltech Silicon Wire Arrays Surpass Conventional Light Trapping Limits

This is a schematic diagram of the light-trapping elements used to optimize absorption within a polymer-embedded silicon wire array.
[Credit: Caltech/Michael Kelzenberg]

Using arrays of long, thin silicon wires embedded in a polymer substrate, a team of scientists from the California Institute of Technology (Caltech) has created a new type of flexible solar cell that enhances the absorption of sunlight and efficiently converts its photons into electrons. The solar cell does all this using only a fraction of the expensive semiconductor materials required by conventional solar cells.

* these solar cells have, for the first time, surpassed the conventional light-trapping limit for absorbing materials
* The silicon-wire arrays absorb up to 96 percent of incident sunlight at a single wavelength and 85 percent of total collectible sunlight
* the wires have a near-perfect internal quantum efficiency.
* Each wire measures between 30 and 100 microns in length and only 1 micron in diameter. “The entire thickness of the array is the length of the wire,” notes Atwater. “But in terms of area or volume, just 2 percent of it is silicon, and 98 percent is polymer.”

The next steps, Atwater says, are to increase the operating voltage and the overall size of the solar cell. "The structures we've made are square centimeters in size," he explains. "We're now scaling up to make cells that will be hundreds of square centimeters—the size of a normal cell."

Atwater says that the team is already "on its way" to showing that large-area cells work just as well as these smaller versions.

Nature Materials paper, "Enhanced absorption and carrier collection in Si wire arrays for photovoltaic applications"

Accident Limited Healthspans and the Scientific Conquest of Death

Robert Freitas has statistically analyzed the challenge of life extension. Part of his analysis is shown in the table above.

The longest life expectancy is about 86 years, for Japanese women. This is almost as good as a 10% reduction in medically preventable conditions applied to an 80-year life expectancy, which would result in an 88-year life expectancy.

The Scientific Conquest of Death is a 296-page collection of essays from Aubrey de Grey, Robert Freitas and others.

There are many causes of death, so it would take a lot of new technology to eliminate or mitigate each one or each category.

Simple existing technologies that are properly applied can greatly reduce deaths worldwide.

Nextbigfuture has looked at improvements to agriculture

There are cheap ways to prevent or cure disease

Conditions can be improved to reduce deaths. Air pollution can be reduced and water quality can be improved.

Roundup of Economic News on the United States and China

1. Guardian UK and other sources report - China sold $34bn (£21.5bn) worth of US government bonds in December, raising fears that Beijing is using its financial muscle to signal that it has lost confidence in American economic policy.

US treasury figures for the period ending in December 2009 show that, following the sale, China is no longer the largest overseas holder of US treasury bonds. Beijing ended the year sitting on $755.4bn worth of US government debt, compared to Japan's $768.8bn.

2. Wall Street Journal - Why hasn't China revalued? Part of the answer is Japan

The last time the U.S. was faced with a rising Asian export power, the currency also became a big political issue. And in September 1985 the major economies of the time met at the Plaza Hotel in New York to ease those tensions. The accord they reached caused the dollar to fall from roughly 240 Japanese yen to about 160 over two years.

Today, China's critics are demanding a similarly sweeping move. But Japan soon regretted agreeing to a big surge in the yen: Growth slowed abruptly, which pushed the government to boost spending and lower interest rates. A real-estate bubble and a years-long slump followed. And the issue the Plaza Accord was intended to fix—Japan's sizable trade surplus—remains to this day.

From Japan's example, Chinese thinkers learned that a big exchange-rate move could damage their economy, and won't necessarily help the trade balance.

3. WSJ- An alternative route to Yuan appreciation

China has kept the yuan, or renminbi, fixed against the dollar since mid-2008, and a big, rapid move is widely seen as unlikely.

Higher inflation could have the same effect—albeit indirectly—and be less contentious politically within China. If average prices in China rise 5% more than in the U.S., and the currency doesn't move against the U.S. dollar at all, the result is effectively the same as if China revalued the yuan by 5% and the two countries had the same inflation rate. In both cases, Chinese goods have gotten 5% more expensive in U.S. dollar terms, or to put it another way, the real exchange rate has increased 5%.
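The WSJ argument can be sketched numerically; the 6.83 yuan-per-dollar peg level used here is an illustrative assumption, not from the article:

```python
# Sketch: 5% extra Chinese inflation with a fixed peg raises the real
# exchange rate exactly as much as a 5% nominal revaluation would.
def real_rate(usd_per_yuan, price_level_china, price_level_us):
    # Dollar price of Chinese goods relative to US prices
    return usd_per_yuan * price_level_china / price_level_us

base = real_rate(1 / 6.83, 100, 100)
inflation_path = real_rate(1 / 6.83, 105, 100)       # prices rise 5%, peg fixed
revaluation_path = real_rate(1.05 / 6.83, 100, 100)  # 5% stronger yuan
print(f"inflation path:   +{inflation_path / base - 1:.1%}")
print(f"revaluation path: +{revaluation_path / base - 1:.1%}")
```

Both paths raise the real exchange rate by the same 5%, which is the whole point of the inflation-instead-of-revaluation route.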

4. An analyst for Forbes indicates that because the USA is not building new homes, there could be a housing shortage in 2011, i.e. a seller's market.

The US needs 1.6 million new homes each year, and only 560,000 to 780,000 were started and completed in 2009. There are 6-7 months of housing inventory.
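A quick sketch of the arithmetic behind the shortage claim:

```python
# Sketch: annual shortfall implied by the housing figures above.
homes_needed = 1.6e6
for starts in (560e3, 780e3):
    shortfall = homes_needed - starts
    print(f"{starts / 1e3:.0f}k starts -> {shortfall / 1e3:.0f}k home shortfall per year")
```

An annual shortfall of roughly 0.8-1.0 million homes would work through the existing 6-7 months of inventory fairly quickly.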

5. UCLA Anderson Economic forecast for various cities was reported in Forbes

Forbes - In Portland, San Francisco, Minneapolis and Washington, D.C., the premium to buy--the spread between what you'd spend on renting and what you'd pay each month for a mortgage--is far narrower now than its 15-year average. And economists predict a significant home-price hike in five years.

+28% in the Bay Area and +15% in DC, but it will take 5 years. If it were smooth, that is about +5% per year in the Bay Area and +3% per year in DC. Expect little to nothing this year and more of it in 2011 and 2012.

Projecting the UCLA House Price Appreciation for San Francisco and Washington DC

2010: 0% SF, 0% DC
2011: 7% SF, 4% DC
2012: 8% SF, 5% DC
2013: 5% SF, 3% DC
2014: 5% SF, 3% DC
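Compounding the projected annual rates confirms they roughly match the quoted five-year totals (a check of the projection above, nothing more):

```python
# Sketch: compound the projected annual appreciation rates and compare
# against the UCLA five-year totals (+28% SF, +15% DC).
projections = {
    "San Francisco": [0.00, 0.07, 0.08, 0.05, 0.05],  # 2010-2014
    "Washington DC": [0.00, 0.04, 0.05, 0.03, 0.03],
}
totals = {}
for city, rates in projections.items():
    growth = 1.0
    for r in rates:
        growth *= 1 + r
    totals[city] = growth - 1
    print(f"{city}: {totals[city]:+.1%} over five years")
```

The compounded results come out to about +27% for San Francisco and +16% for DC, close to the quoted +28% and +15%.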

Inertial Electrostatic Fusion Rocket Ship Design - Article being redone

This article is being redone as there were significant inaccuracies derived from the source material. The Advanced Vehicle Research Center is in fact *not* pursuing fusion propulsion research anywhere, for anyone. The organization that is pursuing this research is the World Institute for Science and Engineering (WISE). The plan is for one of the founders to be interviewed by Sander Olson next week, and for accurate material to be posted as soon as possible. Inertial electrostatic fusion recently received somewhere between $8M and $10M in funding for the Bussard reactor program at EMC2 Fusion. The now-deceased Robert Bussard had proposed more advanced fusion propulsion if IEC fusion worked the way he envisioned. Last year Nextbigfuture interviewed Dr Richard Nebel, who is running the IEC fusion reactor project, and he indicated that we will know the viability of commercial fusion (and space propulsion is easier than commercial fusion) within two years.

Fusion Ship II is a 750 MW manned space vehicle in the 500 metric ton class, using Inertial Electrostatic Confinement (IEC) fusion, with outer planet trip times of several months.

A proton energy gain (power in 14.7-MeV protons/input electric power) of > 9 is required for acceptable radiator mass and size.

The propulsion system is based on NSTAR-extrapolated argon ion thrusters operating at a specific impulse of 35,000 seconds with a total thrust of 4,370 N. Round-trip travel time for a typical Jupiter mission ΔV of 202,000 m/s is then 363 days.
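As a back-of-envelope check (a standard rocket-equation sketch, not taken from the design study), the quoted specific impulse and mission ΔV imply a modest propellant mass ratio:

```python
import math

# Sketch: propellant mass ratio for the quoted Jupiter mission,
# using the ideal rocket equation with the stated Isp and delta-V.
g0 = 9.80665          # m/s^2, standard gravity
isp = 35_000          # s, argon ion thrusters as quoted
delta_v = 202_000     # m/s, round-trip Jupiter mission
v_e = isp * g0        # effective exhaust velocity, ~343 km/s
mass_ratio = math.exp(delta_v / v_e)
print(f"exhaust velocity: {v_e / 1000:.0f} km/s")
print(f"initial/final mass ratio: {mass_ratio:.2f}")
```

A mass ratio of about 1.8 means well under half the departure mass is propellant, which is why such a high specific impulse makes fast outer-planet round trips plausible at all.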

Intel lab researching dielectric coatings for nanocapacitors for nanoscale power storage

EETimes reports that Intel researchers are exploring nanoscale materials that could be used to create ultracapacitors with a greater energy density than today's lithium ion batteries. If successful, the new materials could be mass produced in volumes to power systems ranging from mobile devices to electric vehicles—even smart grid storage units.

The lab is focused on so-called microgrids, small local electric grids that lab director Tomm Aldridge and others believe could represent the future of the smart electric grid.

"It's way too early to announce any results, but we are taking what we think is a fresh look at building ultracapacitors using our expertise in nanomaterials fabrication and high volume manufacturing," said Aldridge. "The research targets are to exceed energy storage of battery technology in terms of energy density and figure out how to assemble these nano-capacitors into ultracapacitors that have useful voltage ranges," he added.

The energy-storage effort is exploring the use of engineered dielectric coatings to create capacitors that could be scaled into large arrays. MIT, Stanford and other universities are also exploring nanoscale ultracapacitors as an alternative with longer lifetimes and more resilience to harsh conditions than traditional batteries.

MIT Technology review discussed similar research in 2009 at the University of Maryland

Researchers at the University of Maryland have developed a new kind of nanostructured capacitor. The research is in its early stages, and the device will have to be scaled up to be practical, but initial results show that it can store 100 times more energy than previous devices of its kind.

Sang Bok Lee, a chemistry professor, and Gary Rubloff, a professor of engineering and director of the Maryland NanoCenter, created nanostructured arrays of electrostatic capacitors. Electrostatic capacitors are the simplest kind of electronic-energy-storage device, says Rubloff. They store electrical charge on the surface of two metal electrodes separated by an insulating material; their storage capacity is directly proportional to the surface area of these sandwich-like electrodes. The Maryland researchers boosted the storage capacity of their capacitors by using nanofabrication to increase their total surface area. Their electrodes work in the same way as ones found in conventional capacitors, but instead of being flat, they are tubular and tucked deep inside nanopores.

The fabrication process begins with a glass plate coated with aluminum. Pores are etched into the plate by treating it with acid and applying a voltage. It's possible to make very regular arrays of tiny but deep pores, each as small as 50 nanometers in diameter and up to 30 micrometers deep, by carefully controlling the reaction conditions. The process is similar to one used to make memory chips. "Next you deposit a very thin layer of metal, then a thin layer of insulator, then another thin layer of metal into these pores," says Rubloff. These three layers act as the nanocapacitors' electrodes and insulating layer. A layer of aluminum sits on top of the device and serves as one electrical contact; the other contact is made with an underlying aluminum layer.

In a paper published online this week in the journal Nature Nanotechnology, the Maryland group describes making 125-micrometer-wide arrays, each containing one million nanocapacitors. The surface area of each array is 250 times greater than that of a conventional capacitor of comparable size. The arrays' storage capacity is about 100 microfarads per square centimeter.

Digital quantum batteries could have ten times the energy storage of lithium batteries. In capacitors built as nanoscale arrays, with electrodes spaced about 10 nanometers (or 100 atoms) apart, quantum effects ought to suppress arcing between electrodes. The energy density and power density in nano vacuum tubes are large compared to lithium batteries and electrochemical capacitors. The electric field in a nano vacuum tube can be sensed with MOSFETs in the insulating walls.

Software for Predicting the Evolution of Stem Cells

UWM researcher makes breakthrough in stem cell technology

UW-Milwaukee researcher Andrew Cohen has successfully developed a software program that facilitates predicting the evolution of stem cells. The program essentially speeds up what has been a tedious process for researchers in the past.

The work was published last week in the journal Nature Methods. The program applies algorithmic information theory to the growth and movement of stem cells tracked over time to show what type of cells (i.e., brain, skin, etc.) they will eventually develop into.

“People look at images and take measurements by hand,” Cohen explained. “It takes a long time, and using computers makes the process a lot less tedious.”

Stem cells all start out the same before they develop into the different cells of our bodies. Scientists do not know what triggers the stem cell’s future growth pattern into a particular type of cell, but researchers like Cohen are figuring out how to predict the cell’s future based on measurements and math.

Determining the fate of stem cells at the beginning of their growth could help scientists apply stem cells exactly where they’re needed in the body.

According to Cohen, “Neurobiologists have realized that — in all the neurodegenerative diseases, such as Alzheimer’s, Parkinson’s and Huntington’s — cell organelle transport deficiencies are a causative effect in the diseases. But how do you measure that? From an engineering perspective, it’s an exceedingly hard tracking problem.” In other words, how does one track deficiencies in cell organelle transport?

This is one problem that Cohen’s software helps solve.

The program does what earlier researchers were doing manually. The method applied before Cohen’s program was a tedious, time-consuming process; think of it as drawing a flipbook in a notebook vs. using animation software.

Cohen’s software takes measurements of the images of stem cells as they grow.

“We’re starting, in a bunch of different ways, to outperform the human eye,” Cohen said.

Cohen is a computer engineer, but his cross-disciplinary work is bringing progress to the fields of biology and medical research.

Cohen plans to apply the software to research in cancer biology, working with a pediatric cancer researcher looking at the causes of retinoblastoma.

“With this program we hope to look for behavior differences among populations of cells that have different cancer-related cells activated or not activated,” Cohen said. “The cross-disciplinary nature of this is what makes it really interesting.”

Silicon-coated Nanonets Could Build a Better Lithium-ion Battery

Frame (a) shows a schematic of the Nanonet, a lattice structure of Titanium disilicide (TiSi2), coated with silicon (Si) particles to form the active component for Lithium-ion storage. (b) A microscopic view of the silicon coating on the Nanonets. (c) Shows the crystallinity of the Nanonet core and the Si coating. (d) The crystallinity of TiSi2 and Si (highlighted by the dotted red line) is shown in this lattice-resolved image from transmission electron microscopy. (Source: Nano Letters)

A tiny scaffold-like titanium structure of Nanonets coated with silicon particles could pave the way for faster, lighter and longer-lasting Lithium-ion batteries, according to a team of Boston College chemists who developed the new anode material using nanotechnology.

The web-like Nanonets developed in the lab of Boston College Assistant Professor of Chemistry Dunwei Wang offer a unique structural strength, more surface area and greater conductivity, which produced a charge/re-charge rate five to 10 times greater than typical Lithium-ion anode material, a common component in batteries for a range of consumer electronics, according to findings published in the current online edition of the American Chemical Society journal Nano Letters.

The structure and conductivity of the Nanonets improved the ability to insert and extract lithium ions from the particulate silicon coating, the team reported. Running at a charge/discharge rate of 8,400 milliamps per gram (mA/g), which is approximately five to 10 times greater than similar devices, the specific capacity of the material was greater than 1,000 milliamp-hours per gram (mAh/g). Typically, laptop lithium-ion batteries are rated between 4,000 and 12,000 mAh, meaning it would take only between four and 12 grams of the Nanonet anode material to achieve similar capacity.

Wang said the capability to preserve the crystalline Titanium Silicon core during the charge/discharge process was the key to achieving the high performance of the Nanonet anode material. Additional research in his lab will examine the performance of the Nanonet as a cathode material.

Si/TiSi2 Heteronanostructures as High-Capacity Anode Material for Li Ion Batteries

We synthesized a unique heteronanostructure consisting of two-dimensional TiSi2 nanonets and particulate Si coating. The high conductivity and the structural integrity of the TiSi2 nanonet core were proven as great merits to permit reproducible Li+ insertion and extraction into and from the Si coating. This heteronanostructure was tested as the anode material for Li+ storage. At a charge/discharge rate of 8400 mA/g, we measured specific capacities >1000 mAh/g. Only an average of 0.1% capacity fade per cycle was observed between the 20th and the 100th cycles. The combined high capacity, long capacity life, and fast charge/discharge rate represent one of the best anode materials that have been reported. The remarkable performance was enabled by the capability to preserve the crystalline TiSi2 core during the charge/discharge process. This achievement demonstrates the potency of this novel heteronanostructure design as an electrode material for energy storage.

8 pages of supplemental material

February 17, 2010

Pico Computing FPGA Hardware Acceleration Up to 4620 Times Faster

Pico Computing has achieved the highest-known benchmark speeds for 56-bit DES decryption, with reported throughput of over 280 billion keys per second achieved using a single, hardware-accelerated server.

Current-generation CPU cores can process approximately 16 million DES key operations per second. A GPU card such as the GTX-295 can be programmed to process approximately 250 million such operations per second.

When using a Pico FPGA cluster, however, each FPGA is able to perform 1.6 billion DES operations per second. A cluster of 176 FPGAs, installed into a single server using standard PCI Express slots, is capable of processing more than 280 billion DES operations per second. This means that a key recovery that would take years to perform on a PC, even with GPU acceleration, could be accomplished in less than three days on the FPGA cluster.
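Using the quoted throughputs, a worst-case exhaustive search of the 56-bit keyspace works out as follows (a sketch; real key recoveries average half the worst case):

```python
# Sketch: worst-case exhaustive 56-bit DES keysearch times at the
# throughput figures quoted above.
keyspace = 2 ** 56
rates = {                      # keys per second
    "one CPU core": 16e6,
    "one GTX-295 GPU": 250e6,
    "176-FPGA cluster": 280e9,
}
days = {}
for name, rate in rates.items():
    days[name] = keyspace / rate / 86400
    print(f"{name}: {days[name]:,.1f} days")
```

The cluster figure lands at just under three days, matching the article's claim, versus about nine years for a single GPU and well over a century for one CPU core.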

HPCWire reports that the reason FPGAs are so adept at these types of applications, from both a performance and power consumption point of view, is their ability to morph their hardware structures to match the operators and data types of a given algorithm. This is especially true when the underlying algorithms are not based on typical integer or floating point data types.

In genomics applications, for example, a lot of algorithms are based on the four fundamental nucleotide bases (adenine, thymine, guanine, cytosine) that make up RNA and DNA. Thus a nucleotide data type would only be two bits wide. And unlike CPUs and GPUs, you can map FPGA resources to match that data size exactly. "You don't need full 32-bit or 64-bit data paths and operators," explains David Pellerin, Pico's director of strategic marketing. "It's wasteful." That's why some applications that get 100-fold acceleration from a GPU can get 1,000-fold from an FPGA, when compared to a CPU.

Robert Freitas Details how Diamond Trees Would Control the Atmosphere

Diamond Trees (Tropostats): A Molecular Manufacturing Based System for Compositional Atmospheric Homeostasis

The future technology of molecular manufacturing will enable long-term sequestration of atmospheric carbon in solid diamond products, along with sequestration of lesser masses of numerous air pollutants, yielding pristine air worldwide ~30 years after implementation. A global population of 143 x 10^9 20-kg “diamond trees” or tropostats, generating 28.6 TW of thermally non-polluting solar power and covering ~0.1% of the planetary surface, can create and actively maintain compositional atmospheric homeostasis as a key step toward achieving comprehensive human control of Earth’s climate.

Robert Freitas, as usual, provides a lot of specific data describing the scope of both the problem and the solution. His figures can therefore provide scaling information for more conventional approaches aimed at the same result: many systems now in design, research, and development use conventional chemistry and biology to remove CO2 from the atmosphere. The calculations Freitas provides offer a roadmap and a scoping baseline for any proposed atmospheric homeostasis system, including ones built with present-day technology.

Previously J Storrs Hall had described using nanotechnology for a global climate control machine. Nanotechnology for Climate Control and Enabling a Kardashev Type 1 and 2 Civilization - J Storrs Hall has conceived of Utility Fog and a Space Pier.

Weather Machine Mark I - many small aerostats (hydrogen balloons) - at a guess, the optimal size is somewhere between a millimeter and a centimeter in diameter - forming a continuous layer in the stratosphere. Each aerostat contains a mirror and a control unit consisting of a radio receiver, computer, and GPS receiver. The Mark I would require about 100 billion tons of material with regular technology, or 10 million tons with more advanced nanotechnology. However, the Weather Machine does not address rising levels of atmospheric greenhouse gases and could risk rapid global warming if a systemic failure of the Machine occurs while gas concentrations are high, whereas Freitas' diamond trees are inherently "fail safe" and directly regulate the greenhouse effect by placing atmospheric gas concentrations under human control.

Diamond Balloons

In the “tree” configuration (Section 4), tropostats are spheres attached to the end of a short stalk of fixed length. If further investigation reveals that vertical atmospheric mixing is too slow because the tropostat field can extract CO2 faster than fresh air can be imported (e.g., the system is convection-limited, contrary to estimates in Section 4.7), one solution may be to employ a “balloon” configuration in which individual spherical neutral-buoyancy tropostats are tethered to ground anchors via thin retractable or spoolable cables that allow the spheres to freely move continuously between ground level and ~1 km altitude, thus giving the filtration network direct access to a much larger volume of air. In this configuration, each tropostat is a hollow sphere of slightly larger radius R tropostat = 2 m with wall thickness twall = 63 microns and material density ρwall = 3510 kg/m3 (diamond) filled with hydrogen gas.
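A rough buoyancy check on the balloon configuration, using the paper's dimensions together with assumed standard sea-level densities for air and hydrogen and the 20-kg machinery mass quoted in the abstract (the ambient densities are my assumptions, not figures from the paper):

```python
import math

# Tropostat balloon parameters from the paper.
R = 2.0            # sphere radius, m
t_wall = 63e-6     # wall thickness, m
rho_wall = 3510.0  # diamond density, kg/m^3

# Assumed ambient conditions (not from the paper): sea-level air and H2.
rho_air = 1.225    # kg/m^3
rho_h2 = 0.09      # kg/m^3
payload = 20.0     # kg, the "20-kg diamond tree" machinery mass

volume = (4.0 / 3.0) * math.pi * R ** 3            # ~33.5 m^3
shell_mass = 4.0 * math.pi * R ** 2 * t_wall * rho_wall  # thin-shell mass
gas_mass = rho_h2 * volume
buoyant_mass = rho_air * volume                    # mass of displaced air

net_lift = buoyant_mass - (shell_mass + gas_mass + payload)
print(f"shell {shell_mass:.1f} kg, gas {gas_mass:.1f} kg, "
      f"buoyancy {buoyant_mass:.1f} kg, net lift ~{net_lift:.1f} kg")
```

The thin diamond shell comes to only about 11 kg against roughly 41 kg of displaced air, leaving a margin of a few kilograms after the machinery mass — consistent with near-neutral buoyancy once tether and cable mass are included.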

Once global CO2 concentration is reduced to 300 ppm, many of the carbon-processing components of the system can be furloughed and held in reserve to combat future unexpected atmospheric challenges such as those posed by major volcanoes or supervolcanoes, massive forest or peat fires, modest asteroid strikes, regional nuclear wars, and the like. For example, the June 1991 eruption of Mt. Pinatubo caused global average temperature to decrease by 0.4 °C in the first year after the stratospheric injection of ~20 x 10^9 kg of SO2, some of which remained aloft for up to 3 years. Such pollutant clouds might be cleared in ~10^6 sec (~2 weeks) using a population of free-floating diamondoid blimpstats having only ~1% the mass and capacity of the proposed ground-based global tropostat network.
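The reserve figures quoted above are easy to cross-check from numbers already given in the paper (simple arithmetic, not an independent result):

```python
# Cross-check the reserve-fleet numbers quoted from Freitas' paper.
tropostats = 143e9      # global population of units
unit_mass = 20.0        # kg per tropostat
network_mass = tropostats * unit_mass     # total network mass, kg
blimpstat_fleet = 0.01 * network_mass     # "~1% the mass" reserve, kg

clearing_time_s = 1e6   # quoted cleanup time for a Pinatubo-scale cloud
print(f"network mass: {network_mass:.2e} kg")
print(f"blimpstat reserve: {blimpstat_fleet:.2e} kg")
print(f"~10^6 s = {clearing_time_s / 86400:.1f} days (roughly two weeks)")
```

The full network masses ~2.9 x 10^12 kg, so a 1% blimpstat reserve is ~2.9 x 10^10 kg — the same order as the ~2 x 10^10 kg of SO2 Pinatubo injected, which makes the two-week cleanup figure at least dimensionally plausible.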

The network reserve capacity also provides some initial defense against catastrophic carbon releases. One such event may have occurred 550 million years ago during a period of widespread glaciation extending close to the equator, believed to have ended suddenly when a colossal volcanic outgassing raised the CO2 concentration of the air to 12%, ~350 times modern levels, causing extreme greenhouse conditions and carbonate deposition as limestone at the rate of about 1 mm/day. Another similar possible event might involve the future rapid release of gaseous methane from seabed methane clathrates into the atmosphere as the oceans warm, though this hypothesis currently lacks support. Alternatively, if someday the planet is threatened by an impending ice age, the tropostat network can be used to quickly ramp up the level of greenhouse gases present in Earth’s atmosphere under precise human control, providing an offsetting warming effect to oppose the global cooling trend.

Architectures for Extreme Scale Computing

Architectures for Extreme Scale Computing (8 page pdf)

The petaflop supercomputers we have now are too large and consume too much power. Petascale machines have a footprint of about 1/10th of a football field and consume several megawatts (MW). One megawatt costs about one million dollars per year.

What is needed for exaflop-scale computing?
* A single commodity chip should deliver terascale performance—namely, 10^12 operations per second (tera-ops).
* Running one million energy- and space-efficient terascale chips would enable exaflop (10^18 operations per second) systems.
* To attain extreme-scale computing, researchers must address architectural challenges in energy and power efficiency, concurrency and locality, resiliency, and programmability.
* A possible target for extreme-scale computing is an exa-op data center that consumes 20 MW, a peta-op departmental server that consumes 20 kilowatts (KW), and a tera-op chip multiprocessor that consumes 20 watts (W). These numbers imply that the machine must deliver 50 giga operations (or 50 × 10^9 operations) per watt. Because these operations must be performed in a second, each operation can only consume, on average, an energy of 20 pico-Joules (pJ).
* Intel's Core Duo mobile processor (2006) used 10,000 pJ per instruction.
* Large machines spend most of their energy transferring data to and from remote caches, memories, and disks. Minimizing data transport energy, rather than arithmetic logic unit (ALU) energy, is the real challenge.
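The 20 pJ budget follows directly from the stated machine sizes — a quick derivation of the numbers in the list above:

```python
# Derive the per-operation energy budget from the extreme-scale targets.
ops_per_sec = 1e18   # exa-op data center
power_w = 20e6       # 20 MW power budget

ops_per_joule = ops_per_sec / power_w        # ops/s per watt = ops per joule
energy_per_op_pj = 1e12 / ops_per_joule      # picojoules per operation

core_duo_pj = 10_000  # Intel Core Duo (2006), pJ per instruction
print(f"{ops_per_joule:.0e} ops/J -> {energy_per_op_pj:.0f} pJ/op")
print(f"required improvement over Core Duo: "
      f"{core_duo_pj / energy_per_op_pj:.0f}x")
```

The same 20 pJ figure falls out at every scale (20 MW per exa-op, 20 kW per peta-op, 20 W per tera-op), which is why the article can quote a single 50 giga-ops-per-watt target — and it implies a 500x efficiency improvement over the 2006 Core Duo.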

Need several technologies

Near-threshold voltage operation.
One of the most effective approaches for energy-efficient operation is to reduce the supply voltage (Vdd) to a value only slightly higher than the transistor threshold voltage (Vth). This is called near-threshold voltage (NTV) operation. It corresponds to a Vdd value of around 0.4 V, compared to a Vdd of around 1 V for current designs.

Broadly speaking, operation under NTV can reduce the gates' power consumption by about 100× while increasing their delay by 10×. The result is a total energy savings of one order of magnitude. In addition to the 10× increase in circuit delay, the close proximity of Vdd and Vth induces a 5× increase in gate delay variation due to process variation, and a several-orders-of-magnitude increase in logic failures—especially in memory structures, which are less variation tolerant.
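The "one order of magnitude" figure follows from energy = power × time: each operation runs at 1/100th the power but takes 10 times as long. A quick check of that arithmetic:

```python
# Energy per operation under near-threshold voltage (NTV) operation:
# power drops ~100x while each operation takes ~10x longer.
power_reduction = 100.0
delay_increase = 10.0

# Energy = power * time, so the net saving is the ratio of the two factors.
energy_saving = power_reduction / delay_increase
print(f"net energy saving: ~{energy_saving:.0f}x (one order of magnitude)")
```

This is also why NTV favors many-core designs: the 10× delay penalty is recovered by running more cores in parallel within the same power envelope.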

Aggressive use of circuit or architectural techniques that minimize or tolerate process variation can address the higher-variation shortcoming. This includes techniques such as body biasing and variation-aware job scheduling. Finally, novel designs of memory cells and other logic can solve the problem of higher probability of logic failure. Overall, NTV operation is a promising direction that several research groups are pursuing.

Nonsilicon memory.
Nonsilicon memory is another relevant technology. Phase change memory (PCM), which is currently receiving much attention, is one type of nonsilicon memory. PCM uses a storage element composed of two electrodes separated by a resistor and phase-change material such as Ge2Sb2Te5. A current through the resistor heats the phase-change material, which, depending on the temperature conditions, changes between a crystalline (low-resistivity) state and an amorphous (high-resistivity) one—hence recording one of the two values of a bit. PCM's main attraction is its scalability with process technology. Indeed, both the heating contact areas and the required heating current shrink with each technology generation. Therefore, PCM will enable denser, larger, and very energy-efficient main memories. DRAM, on the other hand, is largely a nonscalable technology, which needs sizable capacitors to store charge and, therefore, requires sizable transistors.

Currently, PCM has longer access latencies than DRAM, higher energy per access (especially for writes), and limited lifetime in the number of writes. However, advances in circuits and memory architectures will hopefully deliver advances in all these axes while retaining PCM scalability. Finally, because PCM is nonvolatile, it can potentially support novel, inexpensive checkpointing schemes for extreme-scale architectures. Researchers can also use it to design interesting, hybrid main memory organizations by combining it with plain DRAM modules.
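As a sketch of the hybrid-main-memory idea mentioned above (my own illustration — the migration policy and thresholds are invented, not from the article): keep write-hot pages in a small DRAM partition to work around PCM's write energy and endurance limits, and let read-mostly pages live in the larger PCM partition.

```python
# Toy hybrid DRAM+PCM placement: keep the most write-hot pages in DRAM,
# leave read-mostly pages in PCM. Thresholds are illustrative only.
from collections import Counter

WRITE_HOT_THRESHOLD = 8  # writes per interval before a page counts as hot

class HybridMemory:
    def __init__(self, dram_pages: int):
        self.dram_pages = dram_pages     # DRAM capacity in pages
        self.writes = Counter()          # writes per page this interval
        self.in_dram = set()             # pages currently placed in DRAM

    def record_write(self, page: int):
        self.writes[page] += 1

    def rebalance(self):
        """Keep the most write-hot pages (up to DRAM capacity) in DRAM."""
        hot = [p for p, n in self.writes.most_common(self.dram_pages)
               if n >= WRITE_HOT_THRESHOLD]
        self.in_dram = set(hot)
        self.writes.clear()

mem = HybridMemory(dram_pages=2)
for page, count in [(1, 20), (2, 12), (3, 1), (4, 9)]:
    for _ in range(count):
        mem.record_write(page)
mem.rebalance()
print(f"pages kept in DRAM: {sorted(mem.in_dram)}")
```

Page 4 is write-hot but loses out to the two hotter pages under the capacity limit; a real design would also track endurance (remaining write budget) per PCM cell, which this sketch omits.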

Other system technologies. Several other technologies will likely significantly impact energy and power efficiency. An obvious one is 3D die stacking, which will reduce memory access power. A 3D stack might contain a processor die and memory dies, or it might contain only memory dies. The resulting compact design eliminates energy-expensive data transfers, but introduces manufacturing challenges, such as the interconnection between stacked dies through vias. Interestingly, such designs, by enabling high-bandwidth connections between memories and processors, might also induce a reorganization of the processor’s memory hierarchy. Very high bandwidth caches near the cores are possible.

Efficient on-chip voltage conversion is another enabling system technology. The goal here is for the machine to be able to change the voltage of small groups of cores in tens of nanoseconds, so they can adapt their power to the threads running on them or to environmental conditions. A voltage controller in each group of cores can regulate the group’s voltage. Hopefully, the next few years will see advances in this area.

Photonic interconnects. Optics have several key properties that can be used for interconnects. They include low-loss communication, very large message bandwidths enabled by wavelength parallelism, and low transport latencies, as given by the speed of light.

Substantial advances in architecture and hardware technologies should appear in the next few years. For extreme-scale computing to become a reality, we need to revamp most of the subsystems of current multiprocessors. Many aspects remain wide open, including effective NTV many-core design and operation; highly energy-efficient checkpointing; rearchitecting the memory and disk subsystems for low energy and fewer parts; incorporating high-impact technologies such as nonvolatile memory, optics, and 3D die stacking; and developing cost-effective cooling technologies.

The challenges of achieving exaflop computing are listed in this article, which refers to a 297-page pdf on the issues.