Pages

February 14, 2009

India's Major Fast Breeder Program Kicking Into Higher Gear: Two Breeders Will Start Construction

Scientists and engineers at the Indira Gandhi Centre for Atomic Research (IGCAR) are hoping to save around Rs.5 billion (Rs.500 crore or $104 million) by modifying the design of four fast reactor nuclear power plants. With the experience gained from the prototype that is being completed, the new projects can be completed in five years instead of seven. Two new fast breeder reactors will start construction shortly. The government has sanctioned construction of four more 500 MW fast reactors, of which two will be housed inside the existing nuclear island at Kalpakkam and are expected to be ready by 2020. A decision on locating the remaining two fast reactors is yet to be taken. The proposed reactors will also be powered by mixed oxide fuel - a blend of plutonium and uranium oxides - like the upcoming 500 MW prototype fast breeder reactor (PFBR) in the same complex.

Similarly, construction of the Fast Reactor Fuel Cycle Facility is expected to start soon.

With the Rs.35-billion prototype fast breeder reactor (PFBR) project progressing at a good pace at Kalpakkam, 80 km from Chennai, the Indian government has sanctioned the building of four more 500 MW fast reactors.

A breeder reactor is one that breeds more material for a nuclear fission reaction than it consumes, so that the reaction - which ultimately produces electricity - can continue.

The Indian fast reactors will be fueled by a blend of plutonium and uranium oxide.

While the reactor will fission plutonium for power production, it will also breed more plutonium than it consumes from the natural uranium.

The surplus plutonium from each fast reactor can be used to set up more such reactors and grow the nuclear capacity in tune with India's needs.

These reactors are also called fast spectrum reactors since the neutrons coming from the fission will not be moderated. Two of the proposed reactors will come up in Kalpakkam, the site for which has been approved, while the locations for the remaining two are yet to be finalized.

According to IGCAR director Baldev Raj, the four reactors will be designed to last 60 years - an increase of 20 years over PFBR's current life span.

"The blueprint for the four oxide fuel fast reactors is ready. The roadmap for research and development will be ready next month," reactor engineering group director S.C. Chetal told IANS.






Detailing the cost-cutting steps, Chetal said: "The proposed reactors will be built as twin units. That means many of the facilities will be shared by the two reactors, which in turn saves capital and running costs."

For instance, there will be fewer welding points, making the reactors safer and more economical.

"The savings will be achieved from reduced material consumption through innovative design design," said P. Chellapandi, director, safety group.

Chellapandi said the safety vessel of the proposed reactors will be smaller than the one installed inside the PFBR's reactor vault: its diameter will be reduced to 11.5 metres from 12.9 metres.

"A reduction of one metre will result in an overall saving of Rs.25 crore (Rs.250 million) on material, fabrication and civil construction."

The new-design fast reactors will have six steam generators as against eight in the PFBR, and changes will be made in the grid plate, sodium systems and reactor shutdown systems.

Subatomic Technology: Stanford Writes 35 Bits per Electron


The initials for Stanford University are written in electron waves on a piece of copper and projected into a tiny hologram.

Stanford researchers describe how they have created letters more than four times smaller than the initials IBM researchers famously arranged from individual xenon atoms.

"But in this experiment we've stored some 35 bits per electron to encode each letter. And we write the letters so small that the bits that comprise them are subatomic in size. So one bit per atom is no longer the limit for information density. There's a grand new horizon below that, in the subatomic regime. Indeed, there's even more room at the bottom than we ever imagined."

The letters in the words are assembled from subatomic-sized bits as small as 0.3 nanometers, or roughly one-third of a billionth of a meter.

The researchers encoded the letters "S" and "U" (as in Stanford University) within the interference patterns formed by quantum electron waves on the surface of a sliver of copper. The wave patterns even project a tiny hologram of the data, which can be viewed with a powerful microscope.





Working in a vibration-proof basement lab in the Varian Physics Building, Manoharan and Moon began their writing project with a scanning tunneling microscope, a device that not only sees objects at a very small scale but also can be used to move around individual atoms. The Stanford team used it to drag single carbon monoxide molecules into a desired pattern on a copper chip the size of a fingernail.

On the two-dimensional surface of the copper, electrons zip around, behaving as both particles and waves, bouncing off the carbon monoxide molecules the way ripples in a shallow pond might interact with stones placed in the water. The ever-moving waves interact with the molecules and with each other to form standing "interference patterns" that vary with the placement of the molecules.

By altering the arrangement of the molecules, the researchers can create different waveforms, effectively encoding information for later retrieval. To encode and read out the data at unprecedented density, the scientists have devised a new technology, Electronic Quantum Holography.

In a traditional hologram, laser light is shined on a two-dimensional image and a ghostly 3-D object appears. In the new holography, the two-dimensional "molecular holograms" are illuminated not by laser light but by the electrons that are already in the copper in great abundance. The resulting "electronic object" can be read with the scanning tunneling microscope.

Several images can be stored in the same hologram, each created at a different electron wavelength. The researchers read them separately, like stacked pages of a book. The experience, Moon said, is roughly analogous to an optical hologram that shows one object when illuminated with red light and a different object in green light.
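A toy sketch of that wavelength-multiplexing idea (illustrative only, not the Stanford team's method or code): point scatterers standing in for the CO molecules generate an interference pattern that depends on the electron wavenumber, so the same molecular arrangement reads differently at different wavelengths. The cos(kr)/sqrt(r) amplitude is an assumed stand-in for a 2-D circular wave, and the molecule positions are hypothetical.

import numpy as np

def pattern(scatterers, k, grid=200, size=10.0):
    """Interference intensity on a grid from point scatterers at wavenumber k."""
    xs = np.linspace(-size, size, grid)
    X, Y = np.meshgrid(xs, xs)
    amplitude = np.zeros_like(X)
    for sx, sy in scatterers:
        r = np.hypot(X - sx, Y - sy) + 1e-6      # distance to scatterer (avoid r=0)
        amplitude += np.cos(k * r) / np.sqrt(r)  # crude 2-D circular wave, assumed
    return amplitude ** 2                        # intensity ~ |amplitude|^2

molecules = [(-3, 0), (3, 0), (0, 4), (0, -4)]   # hypothetical CO positions
page_1 = pattern(molecules, k=2.0)               # "page" read at one wavenumber
page_2 = pattern(molecules, k=3.5)               # different page, same molecules
center = page_1.shape[0] // 2
print(f"center intensity at k=2.0: {page_1[center, center]:.2f}")
print(f"center intensity at k=3.5: {page_2[center, center]:.2f}")
# The same arrangement of scatterers yields different readouts at different
# wavelengths -- the sense in which pages stack like a book.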

For Manoharan, the true significance of the work lies in storing more information in less space. "How densely can you encode information on a computer chip? The assumption has been that basically the ultimate limit is when one atom represents one bit, and then there's no more room—in other words, that it's impossible to scale down below the level of atoms.

"But in this experiment we've stored some 35 bits per electron to encode each letter. And we write the letters so small that the bits that comprise them are subatomic in size. So one bit per atom is no longer the limit for information density. There's a grand new horizon below that, in the subatomic regime. Indeed, there's even more room at the bottom than we ever imagined."

In addition to Moon and Manoharan, authors of the Nature Nanotechnology paper, "Quantum Holographic Encoding in a Two-Dimensional Electron Gas," are graduate students Laila Mattos, physics; Brian Foster, electrical engineering; and Gabriel Zeltzer, applied physics.

Lasers, Holograms and Laser Communication Roundup

1. Photonic chip breaking the terabit-per-second barrier.

Researchers from Australia, Denmark, and China have combined efforts to show the feasibility of terabit-per-second Ethernet over fiber-optic cables. The solution involves a photonic chip that uses laser light for switching signals and is built from a member of the exotic material class known as chalcogenides. One of the key breakthroughs the researchers made wasn't so much in speed as in practicality. By using relatively traditional methods to etch a circuit out of a glassy chalcogenide, arsenic trisulfide (As2S3), the researchers were able to shrink the waveguide that demultiplexed an incoming signal from tens of meters down to 5 cm. Project leader Benjamin Eggleton said that silicon-based chips could in principle be used to achieve similar, but slower, results; the ultimate goal is to create fully photonic chips in the same foundries that now make CMOS (complementary metal oxide semiconductor) integrated circuits.

"It's years to complete," Eggleton said, taking these research efforts into a production technology. But these demonstrations "are starting to establish this is a serious proposition."


2. Audience interaction with 'hologram' enabled by spinning-mirror system

To an assembled audience, the system displays a dynamic, three-dimensional volumetric image of the speaker's head in real time and enables two-way communication between the display and observers. Behind the scenes, a high-speed projector projects patterned light onto the face of the remote speaker at 120 frames/s. Meanwhile, two video cameras record the changing pattern of light on the face from slightly different points of view. From this, a computer reconstructs the 3-D shape of the face 30 times per second, and then, by texture mapping that geometry, produces a 3-D model that updates at the frame rate of the video. This image is projected onto a flat brushed-aluminum surface molded into the shape of an upside-down "V," which spins 15 times per second, thus providing 30 passes of the surface every second. Each observer gets a different view of the speaker's face, as does every viewer's left and right eye. What the speaker sees is a flat screen showing video of the viewing audience. Thus, the speaker can interact with specific audience members. At a recent trade show, Paul Debevec, research associate professor and associate director of graphics at ICT, explained how the system operated (see video at www.laserfocusworld.com/articles/348690). The video shows a glass cage surrounding the "hologram" that, according to the researchers, prevented curious observers from getting too close and clipping their fingers on the spinning mirror. Once the system is further refined, they expect to be able to do away with the glass.







3. Ultrafast laser creates photonic crystal in diamond

A metamaterial consisting of periodic regions of high electrical conductivity in a 3 × 3 × 1.5 mm synthetic-diamond crystal has been fabricated by researchers at Kyoto University (Kyoto, Japan); the regions are created by focusing 230 fs pulses at 1 kHz from a modelocked Ti:sapphire laser to a beam-waist diameter of 2 µm and an energy fluence of 28.5 J/cm2 within the crystal. Potential uses of the metallo-dielectric photonic crystal include wire-grid polarizers and terahertz metamaterials.



4. Surface plasmons switched at terahertz rates

Efforts to effect such switching have been ongoing in recent years, making use of a wide range of media including thermoelectric and electro-optic materials, quantum dots, and photochromic molecules. However, switching the packets of electron oscillations has so far been limited to submillisecond timescales, reaching down into the nanosecond regime in 2007. Now, Southampton's Kevin MacDonald and his colleagues have demonstrated a jump some five orders of magnitude over the prior best efforts, modulating plasmon pulses in a metal-dielectric waveguide at the 100-femtosecond level–a modulation bandwidth in the terahertz range.
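A one-line sanity check on that bandwidth claim, as a sketch (taking bandwidth as simply the reciprocal of the switching time, which is an idealization):

switching_time_s = 100e-15                        # the 100 fs switching time quoted above
print(f"~{1 / switching_time_s / 1e12:.0f} THz")  # ~10 THz-class modulation bandwidth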

"Our result represents a significant advance in the achievable modulation bandwidth for active plasmonics," says MacDonald, though he admits that the demonstration is more at the "proof-of-principle" stage than toward immediate practical applications. In fact, the advance may be too dramatic for its own good. "Some would argue that femtosecond switching is actually too fast for the envisioned next-generation applications of plasmonics in data transport and processing," he says.



5. Plasmonic LED approaches 10 GHz modulation

Even though the device with a 40 nm gap between the quantum well and silver layer achieves a modulation speed of only 3.6 GHz, the researchers expect that a device fabricated with higher free-carrier densities on the order of 10**19 cm**-3 could easily achieve a modulation speed of 10 GHz. The next step for the research team is to study electrically driven devices with good external efficiencies. "LEDs capable of 10 Gbit/s modulation speed can be the next-generation low-cost optical source for very short distance links within computer racks," says Michael Tan, a senior scientist at Hewlett Packard Laboratories. "The cost of the optical components is one of the largest mitigating factors for introducing photonics inside the box. These LEDs are much easier to manufacture and would cost 50 to 100 times less than today's data communication VCSELs."

Commercial light-emitting diodes (LEDs) are only capable of 1 GHz maximum modulation speeds because of slow carrier recombination, limiting them to applications in short-haul optical communication links.



6. Digital holography combines optical-fiber beams coherently

An experimental proof-of-concept was demonstrated by the researchers for three single-mode polarization-maintaining passive fibers at 1.06 µm, using a Nd:YAG laser with a 10 kHz bandwidth expanded to a collimated beam. The first beamsplitter creates a reference beam and feeds the fiber array. A second one separates the reference into two arms: one is used to record the hologram on the CCD; the other generates the conjugated segmented wavefront by diffracting through the SLM and is fed back to the fibers to phase lock the output beams for coherent combination.

The research team is currently working to demonstrate phase locking for a larger number of fibers. The team says that this concept has high potential for extending the performance of fiber lasers, in particular for the pulsed regime, and will be useful for applications such as light detection and ranging and free-space communications that require both high-energy and high-spatial/spectral-quality fiber sources.


7. High-power fiber-laser beams are combined incoherently

Combining laser beams can boost power at the target far above that produced by a single laser. Incoherent beam combining achieves propagation efficiencies of greater than 90%, while avoiding the complexities of coherent or spectral beam combining. [Naval Research Laboratory.] They achieved propagation efficiencies greater than 90% at a kilometer in range, with a total power of 2.8 kW on a target with a 10 cm radius.

The high beam quality and efficiency of fiber lasers make them ideal candidates for directed-energy applications. Although a number of companies manufacture high-power fiber lasers, IPG Photonics (Oxford, MA) currently holds the record, producing more than 3 kW per fiber of single-mode (M2 approximately 1) laser radiation. Another company, Nufern (East Granby, CT), expected to have a 1 kW single-mode fiber laser available in 2008. These multikilowatt single-mode fiber lasers are robust, compact, nearly diffraction-limited, and have high wall-plug efficiency, random polarization, and large bandwidth. A 1 kW single-mode IPG fiber-laser module, emitting at 1.07 µm, has dimensions of approximately 60 × 33 × 5 cm (excluding power supply), weighs about 20 lb, has a wall-plug efficiency of about 30%, and has an operating lifetime in excess of 10,000 hours.

To operate in a single mode, the core of the fiber must be sufficiently small. For example, the IPG single-mode 1 kW fiber lasers have a core radius of about 15 microns. Multimode IPG fibers, on the other hand, operating at 10 kW and 20 kW per fiber, have core radii of about 100 and 200 µm and a beam quality M2 of about 13 and 38. These higher-power fiber lasers with larger values of M2 have a more limited propagation range. In 2008, IPG is expected to have a single-mode fiber laser operating at 5 kW.

Incoherent beam combining of fiber lasers is readily scalable to higher total power levels. For multiple incoherently combined fiber lasers, the total transmitted power scales as the number of lasers, while the radius of the beam director scales as the square root of the number of lasers. A 500 kW laser system, for example, could consist of 100 fiber lasers (5 kW/fiber) and have a beam director radius of about 40 cm. Excluding the power supply, the fibers and pump diodes would occupy a volume of about 8 m**3.
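A small Python sketch of that scaling rule. The 4 cm per-beam radius below is back-solved from the 100-laser, 40 cm example above; it is an assumption for illustration, not a measured value.

import math

KW_PER_FIBER = 5.0            # kW per fiber laser, from the example above
SINGLE_BEAM_RADIUS_CM = 4.0   # assumed: 40 cm / sqrt(100) from the example

def combined_system(n_lasers):
    """Total power scales as N; beam-director radius scales as sqrt(N)."""
    total_kw = n_lasers * KW_PER_FIBER
    radius_cm = SINGLE_BEAM_RADIUS_CM * math.sqrt(n_lasers)
    return total_kw, radius_cm

for n in (4, 25, 100):
    kw, r = combined_system(n)
    print(f"{n:>3} fibers: {kw:6.0f} kW total, beam director radius ~{r:3.0f} cm")
# 100 fibers reproduces the text's example: 500 kW with a ~40 cm director.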

We at NRL have recently completed a proof-of-concept field demonstration of long-range incoherent beam combining at the Naval Surface Warfare Center in Dahlgren, VA. These experiments used four IPG single-mode fiber lasers having a total output power of 6.2 kW. In the initial experiments, we transmitted a total of about 3 kW, and delivered about 2.8 kW to a 10-cm-radius target at a range of 1.2 km. The fiber lasers were operated at half power because of thermal issues in the beam director, which can be readily corrected in the next series of experiments.

Propagation experiments using the NRL fiber lasers on a 3.2 km range at full power are presently taking place at the Starfire Optical Range (Albuquerque, NM). These experiments will verify our computer model of incoherent beam combining and will help us devise closed-loop techniques to compensate for wandering of the beam centroid due to air turbulence. We will also investigate the effects of thermal blooming, which can be an important limitation under certain conditions. This latter investigation will use a stagnation tube to eliminate the cooling effects of transverse airflow.


8. Talbot external cavity coherently combines ten laser diodes

A method of phase-locking an array of ten index-guided tapered laser diodes has been devised by researchers at CNRS and the Alcatel-Thales III-V Lab (both in Palaiseau, France) and the University of Nottingham (Nottingham, England). The Talbot effect (a near-field diffraction effect in which a grating is repeatedly imaged at regular distances from the grating) is exploited by using a slightly tipped volume Bragg grating (VBG) as the output mirror; the tip selects an in-phase self-image and places it back on the diode array, sending light back into the diodes for phase-locking. The locked array delivers an output power of 1.7 W in its in-phase single-main-lobe mode.


February 13, 2009

Molten Salt Reactors in Japan


There is an online paper that discusses technical work and a proposed program to get detailed Molten Salt Reactor designs built.

A PowerPoint presentation illustrates the proposals from the Japanese technical report.

There is the Ralph Moir paper on how to launch and scale a thorium Molten Salt Reactor program.



















Battles in Bullet Time: Bionic Bullet Dodging vs Real-Time Guided Sniper Rounds vs Robotic Counter-Sniper Fire


There is a patent for an electronically enabled bullet-dodging capability. If a sniper is firing from 2500 meters, then there are about 4 seconds to move out of the way if the bullet can be detected as it emerges from the muzzle. (H/T to The Firearm Blog)

The knee reflex takes about 20 milliseconds to fire. Bull riders can have head accelerations of 26-46 Gs (258-450 m/s**2). So a 40 m/s**2 head acceleration should be safe, although it could be necessary to move someone faster and accept whiplash instead of a bullet hole. Moving about 15 centimeters would often be enough to get out of harm's way.

Considering a rather short, 200 meter shot, a time of flight of about 200 milliseconds is available from the time of firing until the impact. The typical contraction time of human muscles is between about 40 milliseconds and 80 milliseconds, thus providing sufficient headroom for the electronics to compute the optimal avoidance strategy and initiate evasive muscle stimulation (which will only be limited by the ability of the body to follow the electrical stimulus).
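To make the timing budget concrete, here is a minimal back-of-envelope sketch in Python. It assumes a constant bullet speed of 850 m/s (drag ignored; real long-range shots average slower, which is why the post quotes about 4 seconds at 2500 meters) and reuses the 15 cm dodge distance and 40 m/s**2 acceleration figures from above.

import math

BULLET_SPEED = 850.0    # m/s, assumed constant muzzle-to-target speed (drag ignored)
DODGE_DISTANCE = 0.15   # m, the ~15 cm lateral move cited above
ACCELERATION = 40.0     # m/s**2, the "safe" head acceleration cited above

def time_of_flight(range_m, speed=BULLET_SPEED):
    """Seconds from muzzle to target at constant speed."""
    return range_m / speed

def time_to_dodge(distance=DODGE_DISTANCE, accel=ACCELERATION):
    """Seconds to move `distance` from rest at constant `accel`: d = a*t**2/2."""
    return math.sqrt(2.0 * distance / accel)

for rng in (200, 800, 2500):
    print(f"{rng:>5} m shot: {time_of_flight(rng) * 1000:5.0f} ms of flight")
print(f"dodging {DODGE_DISTANCE * 100:.0f} cm at {ACCELERATION:.0f} m/s**2 takes "
      f"~{time_to_dodge() * 1000:.0f} ms")

At 40 m/s**2 the 15 cm move takes roughly 87 milliseconds, so even a short 200-meter shot (about 235 ms of flight at this assumed speed) leaves headroom for detection and muscle stimulation, consistent with the patent's reasoning.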


Guided Bullets - Could React to Dodging Target

Darpa, the Defense Department's far-out research arm, announced a pair of contracts to start designing a super, .50-caliber sniper rifle that fires guided bullets. Lockheed Martin received $12.3 million for the "Exacto," or Extreme Accuracy Tasked Ordnance, project, while Teledyne Scientific & Imaging got another $9.5 million.

Darpa won't say, publicly, how far, how long and how accurate they want the new bullets to be — all that information is classified. But they will say that Exacto should contain a next-gen scope, a guidance system that provides information to direct the projectile, an "actively controlled .50-caliber projectile that uses this information for real-time directional flight control," and a rifle. "Technologies of interest may include: fin-stabilized projectiles, spin-stabilized projectiles, internal and/or external aero-actuation control methods, projectile guidance technologies, tamper proofing, small stable power supplies, and advanced sighting, optical resolution and clarity technologies."


So if both bullet dodging and real-time guided bullets were successful, then there would be a lot of moves and counter-moves going on in a split second [a bullet-time battle].

It would also be Matrix Bullet Dodging versus Angelina Jolie in Wanted curving bullets.

Counter Sniper Systems - Detecting and Suppressing Snipers
DARPA is working on counter-sniper systems. These would be thrown into the mix of dodging versus guiding, adding detection and suppressing fire at the moment a shot is taken or even before.

The C-Sniper (counter-sniper) program aims to detect and neutralize enemy snipers before they can engage US forces. Given that snipers don't advertise their presence before the bullet crack, the C-Sniper system is expected to operate in always-on mode from a moving vehicle. Presumably, the signatures to be exploited can be inferred from DARPA's own offense programs - e.g. radiation from laser scopes, or reflections off optical scopes.

C-Sniper is actually a follow-on to a larger programme called Crosshairs. Managed by TTO, Crosshairs is aimed at producing systems to detect enemy bullets, RPGs, antitank guided missiles (ATGMs) and mortars fired at US military vehicles and to prevent them from striking the vehicle. Incredibly, threat identification and localization will be accomplished in sufficient time to enable both automatic and man-in-the-loop responses.


Wired talks about c-sniper program

Cops and soldiers now have the ability to pinpoint incoming sniper fire. The military's way-out research arm wants to take that a step further, by finding and "neutralizing" shooters before they ever pull their triggers.

For years, military engineers have been working to build a similar system -- using flashes of laser light to "illuminate potential hiding places... and detect retro-reflections from the sniper's scope."

An infrared camera/illuminator uses backscattered infrared (808 nm) illumination to light up an area of interest at distances up to 1 km. Optical augmentation (glint) from an individual's rifle scope/binoculars or even a person's retinas provides a means of detecting that individual.


Jet Propelled or Mechanically Assisted Bullet Dodging
If you were not relying upon muscle but on something in an exoskeleton to propel the body or body part out of the way, then faster short movements could be possible. A jet of gas could be used to move the head or body to initiate the dodge along with muscle activation.







A method of protecting a target from a projectile propelled from a firearm comprises detecting an approaching projectile, continuously monitoring the projectile and transmitting an actual position of the projectile to a controller, computing an estimated projectile trajectory based upon the actual position of the projectile, determining an actual position of a target with a plurality of position sensors and a plurality of attitude sensors, determining whether the estimated projectile trajectory coincides with the actual position of the target, and triggering a plurality of muscle stimulators operably coupled to the controller and to the target when the estimated projectile trajectory coincides with the actual position of the target, wherein the muscle stimulators stimulate the target to move in a predefined manner, and wherein the target moves by an amount sufficient to avoid any contact with the approaching projectile. The projectile may be detected in the detecting step by emitting an electromagnetic wave from a projectile detector and receiving the electromagnetic wave after the electromagnetic wave has been reflected back toward the projectile detector by the projectile.


Sniper Rifle Ranges and Typical Shot Distances

Effective range for standard-caliber sniper rifles against a single human-sized target may be estimated as 700-800 meters for first-shot kills. To extend effective range beyond 1000 meters, snipers often use rifles designed to fire more powerful ammunition, such as .300 Winchester Magnum (7.62x67mm) or .338 Lapua Magnum (8.6x70mm). The range for sniper fire may vary from 100 meters or even less in police/counter-terror scenarios up to 1 kilometer or more in military or special operations scenarios.

How Stuff Works describes "How Military Snipers Work"

A Canadian sniper in Afghanistan made the longest recorded sniper kill in history with a .50-caliber McMillan TAC-50 rifle. On a March afternoon in 2002, Corporal Rob Furlong of the Princess Patricia's Canadian Light Infantry (PPCLI) killed an enemy combatant from 2,430 meters (12.0772 furlongs/2,657 yd/1.509 miles) using American 750 grain Hornady A-MAX very-low-drag bullets.

Current Sniper Shot Detection Systems

Shotspotter is the audio-monitoring tool used by law-enforcement agencies to pinpoint gunfire. The devices use triangulation, coordinating with other nearby Shotspotter units to determine precisely where gunshots were fired. They have a range of about 2 miles and are guaranteed accurate to within about 75 feet, though typically they are closer to the exact location. This is nowhere near the accuracy and speed needed for the bionic bullet-dodging system. The current systems are for determining an area to cordon off so that the sniper can be hunted within a city block or four, or in a section of jungle. Meanwhile the target of the sniper is assuming room temperature.

Detection of the shot and the precise trajectory of the shot will have to get a lot better for bionic dodging to know the direction to dodge and the point to move away from.


Dodge This


Phil Hellmuth: I can dodge bullets, baby


Wanted - bullet curving assassins

More Durable and High Resolution Nanoimprint Lithography


Nanoimprinting with bulk metallic glass enables direct replication of the smallest features with high aspect ratios. Our current record is 13 nm and 50 for feature diameter and aspect ratio, respectively.

MIT Technology Review reports that researchers at Yale University have demonstrated that these nanoimprint molds can be created from more durable materials. This advance could broaden the commercial viability of nanoimprint lithography. They describe current work at 13 nanometer resolution using metallic glasses, with progress toward 1 to 2 nanometers using carbon nanotubes. Achieving 1 nanometer resolution would give roughly a thousand times the feature density of the 32 nanometer lithography that Intel is commercializing now (about 32 times finer linear resolution). Details from the website of the Yale research group are presented below.

In nanoimprint lithography, a mold made of a hard material such as metal or silicon is pressed into a softer material, often molten silicon itself or a polymer. The molds can then be reused. But both metals and silicon have limitations as mold materials. Silicon is brittle, and molds made of the material fail after about a hundred uses, says Schroers. Metal molds are more resilient but are grainy, and their features can't be any smaller than the grains in the metal itself--about 10 micrometers.

"In many ways, metallic glasses are an ideal material for nanoimprint lithography," says John Rogers, a professor of materials science and engineering at the University of Illinois at Urbana-Champaign, who was not involved in the work. "They're extremely strong, and they can be molded at extremely high resolution." Ordinary metals are crystalline. But bulk metallic glasses, which are created by cooling liquid metals very rapidly to prevent crystallization, lack such structure. Like silicon molds, they can be patterned very finely.

Schroers says that metallic-glass molds can be used millions of times to pattern materials, including polymers like those used to make DVDs. The Yale group has used the molds to create three-dimensional microparts such as gears and tweezers, as well as much finer structures. This week in the journal Nature, Schroers's group describes making molds with features as small as 13 nanometers.

"Theoretically, the size limit is the size of a single atom," says Schroers of the metallic-glass molds. Indeed, the Yale researchers hope to make molds that can form even finer structures by controlling the surface chemistry of the metallic glasses. But the main limitation on the molds is the structure of the metal and silicon templates used to make them. In the hope of further increasing the molds' resolution, Schroers is now developing templates made of nanostructures such as carbon nanotubes only one to two nanometers in diameter.








Jan Schroers lab site describes the nanoimprint work.

In February 2009, Nature will publish an article titled "Nanomoulding using thermoplastic forming with bulk metallic glass." Golden Kumar, Hong Tang, and Jan Schroers developed a method to directly imprint features as small as 13 nm onto bulk metallic glass (BMG). We utilized thermoplastic forming of BMG and a favorable wetting behavior. This technique enables a reliable and economic process for nanoimprint lithography (NIL). Furthermore, as opposed to any other material/technology solution, it provides a solution for both the mold (template) and the imprint material, which makes it particularly interesting for high density data storage.



Wetting dominates the filling characteristics for mold diameters below 100 nm. For best control over the imprinting process, a small forming-pressure requirement (< 100 MPa) is ideal. Utilizing favorable wetting, features as small as 13 nm with aspect ratios of up to 50 can be directly imprinted into BMGs.
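The implied feature depth from that record, since aspect ratio is depth divided by diameter:

diameter_nm, aspect_ratio = 13, 50      # the record figures quoted above
print(f"feature depth ~ {diameter_nm * aspect_ratio} nm")  # ~650 nm deep features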



Schematic and experimental illustration of a processing technique based on the unique softening behaviour of BMG. First, BMG1 is embossed on a mold fabricated by conventional techniques. The mold can be any suitable material such as Ni, Si or alumina. The mold and BMG1 are separated, leaving a negative pattern of the mold imprinted on BMG1. Figure 4b shows an example of pattern transfer on Pt-BMG after embossing on the Ni mold. The patterned BMG1 can be used as a mold to imprint on a lower-Tg metallic glass, BMG2. This step is demonstrated by using patterned Pt-BMG to imprint on Au-BMG. Alternatively, the crystallized BMG1 pattern can even be used as a mold for another amorphous sample of BMG1.

The Nature abstract on nanomoulding:

Nanomoulding with amorphous metals

Nanoimprinting promises low-cost fabrication of micro- and nano-devices by embossing features from a hard mould onto thermoplastic materials, typically polymers with low glass transition temperature. The success and proliferation of such methods critically rely on the manufacturing of robust and durable master moulds. Silicon-based moulds are brittle and have limited longevity. Metal moulds are stronger than semiconductors, but patterning of metals on the nanometre scale is limited by their finite grain size. Amorphous metals (metallic glasses) exhibit superior mechanical properties and are intrinsically free from grain size limitations. Here we demonstrate direct nanopatterning of metallic glasses by hot embossing, generating feature sizes as small as 13 nm. After subsequently crystallizing the as-formed metallic glass mould, we show that another amorphous sample of the same alloy can be formed on the crystallized mould. In addition, metallic glass replicas can also be used as moulds for polymers or other metallic glasses with lower softening temperatures. Using this 'spawning' process, we can massively replicate patterned surfaces through direct moulding without using conventional lithography. We anticipate that our findings will catalyse the development of micro- and nanoscale metallic glass applications that capitalize on the outstanding mechanical properties, microstructural homogeneity and isotropy, and ease of thermoplastic forming exhibited by these materials.


Fruit Flies turned into Lab Model For Human Brain Cancers













Fruit flies and humans share most of their genes, including 70 percent of all known human disease genes. Taking advantage of this remarkable evolutionary conservation, researchers at the Salk Institute for Biological Studies transformed the fruit fly into a laboratory model for an innovative study of gliomas, the most common malignant brain tumors. Being able to use fruit flies as reasonable lab models accelerates research by twenty times because of the fast life cycle of flies versus mice.

Fruit flies are ready to mate within two days and have a life expectancy of a little more than two weeks. Genescient has flies that live 4.5 times longer.







The Salk researchers are hoping that through their combined efforts new discoveries from the fly model can be rapidly translated into mouse and human brain tumor studies and lead to development of new therapies for this deadly cancer.

Genescient Research Model for Longevity


Genescient focuses on gene functional relations to human physiology and age-related diseases. This highlights various networks of pathways and genes described in the literature. We then make maps of networks pointing to the genes appearing in our proprietary list and their relationships. This elucidates possible human therapeutics to treat chronic diseases of aging and improve function.

Once found, we then quickly test these compounds in Drosophila for their effects on median lifespan, background mortality, and the rate of aging at different doses. Later Drosophila tests focus on fertility and mating success to determine the functional vitality of the Drosophila with increased lifespan and to test for any potential side effects of each therapeutic substance.

Because the selected Drosophila genetic pathways are also linked to conserved age-related disease genes in humans, direct therapeutic effects on human health can be expected. In our first attempts at selection using this screening procedure, we have already initially tested 13 compounds on normal flies. We found 12 that extend significantly the normal Drosophila lifespan, reduce background mortality rates, or slow the rate of aging. The thirteenth was a null test: we used a compound that acts in humans but not in Drosophila, and as expected, the flies did not respond. Two of the substances that passed our initial tests have also passed our functional tests for fertility and mating-success; we are now performing additional tests of this type on the other promising substances.

The speed and efficacy of our Drosophila testing allows us to test various combinations of compounds to acquire a highly synergistic set of compounds that act through differing aging pathways. This adds greatly to the therapeutic efficacy of the final treatment. Slowing the aging process requires several compounds, acting on several genetic pathways simultaneously. Our screening procedure allows us to identify quickly the best combinations of compounds that can act simultaneously on multiple genetic networks (cardiovascular, metabolic, neurological, etc).

When we find a multipath set of therapeutic compounds, we then do final nutrigenomic testing in humans. We establish dosage levels by prior literature data and our own testing. Once a group of 3 to 4 nutrigenomic compounds has been identified as synergistic in Drosophila, we can evaluate therapeutic efficacy of the nutrigenomic combination in humans using clinical tests such as: athletic performance, cognitive performance, lung capacity, skin elasticity, blood lipids, serum glucose, as well as inflammatory markers like CRP and IL-6. For any particular age-related disease, our proprietary therapeutic compounds could also be tested in specific disease mouse models or in human clinical trials.

Genescient’s Designer Therapeutics fine-tune the body’s gene expression to mimic genetically selected longevity and reduced all-cause mortality. This approach is in marked contrast to competing pharma and biotech companies that focus on a single age-related disease or disorder. Genescient is the first company to develop genetic Designer Therapeutics to delay aging and age-related diseases.



February 12, 2009

Pleistocene Park and Human Cloning Both Appear to be Possible


There are many technical problems in achieving the rebirth of dinosaurs as seen in Jurassic Park. However, bringing back the Woolly Mammoth, Saber-Toothed Tigers and Neanderthals from the Pleistocene epoch seems possible. The Pleistocene is the epoch from 1.8 million to 10,000 years BP, covering the world's recent period of repeated glaciations.

How to Bring An Extinct Species Back
* Well-preserved DNA
* Several billion DNA building blocks
* A suitable surrogate species
* Some seriously advanced technology
* You can also cheat by taking a similar living species and modifying it to be like the extinct one, though it is technically difficult to modify a very large number of genes.

How are we doing on Mammoth, Saber-tooth and Neanderthal DNA?
Scientists have completed a first draft version of the Neanderthal genome. Neanderthals were the closest relatives of currently living humans. They lived in Europe and parts of Asia until they became extinct about 30,000 years ago.

Woolly Mammoth DNA has been mapped as well.

All sabre-tooth mammals lived between 33.7 million and 9,000 years ago, but the evolutionary lines that led to the various sabre-tooth genera started to diverge much earlier. There are permafrost-preserved specimens of sabre-tooth cats that would be a good source of DNA. If we could obtain a genome, a close living relative of the sabre-tooth, the African lion, should be a good egg donor and surrogate mother.


New Scientist has an analysis of ten extinct species that could be brought back.



We Have Briefly Brought Back an Extinct Goat and Cloned Dead Animals
Scientists were able to use frozen skin in 2003 to clone a bucardo, or Pyrenean ibex, a subspecies of Spanish ibex that went extinct in 2000. The latest attempt involved the creation of 439 ibex-goat hybrid cloned embryos, made by inserting the cell nuclei of the ibex's skin cells into the egg cells of domestic goats that had their own cell nuclei removed. Of these cloned embryos, 57 were transferred into surrogate mothers and seven resulted in pregnancies, but only one goat gave birth, and the newborn clone died after seven minutes as a result of lung deformities.

Human Cloning Appears to be Feasible
Robert Lanza notes that a human nucleus inserted into a different human’s egg cell appears to develop normally. “We show for the first time that the same genes turned on in normal human embryos are the same genes turned on in human clones” [Wired News], he says. If Lanza’s study holds up to further scrutiny, it will mean that there are no technical barriers to therapeutic or reproductive cloning. Which means, some experts say, that it’s just a matter of time before someone tries to make a genetic copy of themselves.

Next Big Future Highlights from the First 6 Weeks of 2009

Quantum Computers, Artificial Intelligence, Robotics and Supercomputers
1. 128 qubit quantum computer chips have been built and cooled to millikelvins. If they prove out, they should be scaled to 2000 qubits this year. Scaling up means filling up the semiconductor die using the same modular design that they already have. Commercial customers are starting to use them now.

Dwave Systems 128 qubit chip

2. IBM is scheduled to build a 20 petaflop supercomputer by 2012.

3. City-scale robotic car systems near Abu Dhabi and in massive warehouses.

4. There is a synapse brain project funded by DARPA and IBM. It aims to make a human cortex simulation with 220 trillion synapse connections, which would be 400 times bigger than an existing rat cortex simulation.

5. Adaptive AI (artificial intelligence) is being commercially rolled out for robotic call centers now.

6. Achieving Matrix-like instant skills; the cheat codes to life's tasks.

7. Single atom room temperature quantum dots

8. Cheap, wireless 15 Gigabit per second communication is coming

Nuclear Fusion, Advanced Nuclear Fission and Other Energy-Related Topics
9. IEC (inertial electrostatic confinement) nuclear fusion is still chugging along with good experimental results. It is funded through 2009 by the US Navy. The current work is to ensure there are no gotchas that would prevent scaling this up to net power generation. It looks like it could lead to a fantastic new power source.


10. Focus fusion got $1.2 million of funding.

11. General fusion (steam punk / magnetized target nuclear fusion) almost has a second round of funding for $10 million.



More General Fusion pictures and videos.

12. Progress on China's modular, factory mass-produced, melt-down-proof nuclear reactors, which they will be building in bunches (eight-packs).

13. Micro-gap thermophotovoltaics can get to 50% efficiency converting heat, and theoretically up to 85%.

14. Interesting simple tech from MIT. Power harvesting shock absorber could increase hybrid car fuel mileage by 10%.

15. DARPA also funding Cyclone engine, which will enable self-fueling robots.





16. Game changing oil recovery processes are being proven at tens of thousands of barrels per day.

17. Jovion Corp, zero point energy, Casimir force manipulation, Blacklight Power: a long-shot possibility for getting zero point energy. The patent and theory could explain cold fusion and Blacklight Power.

18. It is related to advancing capabilities in manipulating the Casimir force.

19. Blacklight Power signed a second commercial deal.

Graphene and Nanotechnology
20. Hydrophobic sand is helping to make greening the deserts easier and cheaper, with 75% more efficient irrigation.

21. Graphene, a great conductor of electricity, is easily turned into an insulator.

22. Graphene for ultracapacitors

23. Stanford is able to write 35 bits onto one electron.

24. Fracture putty is being developed to regenerate broken bones.

25. Carbon nanotube breakthrough - five times as strong as Kevlar

26. Electromagnetic launch of planes and UAVs for the next aircraft carrier (2014) and lasers and railguns for added weapons.

27. 49,000 horsepower Superconducting motors are being field tested by the Navy.

28. Progress to practical invisibility is happening fast. This same technology has also enabled optical microscopes that can see to 10-20 nanometers of resolution, which is 10 to 20 times better than the diffraction limit that was previously believed to be the limit of optical microscopes.

29. A galactic civilization that captured the solar power of all stars in a galaxy and used it for reversible computing would have 10**50 human brain equivalents.

30. The UK is making progress to a single stage to orbit space plane.

Risk Related
31. A ship-load of fertilizer is as powerful as an atomic bomb. The science of nuclear war and detailed analysis of all out nuclear war.

China Taking Small Steps Toward Yuan as a Regional Currency

The International Herald Tribune indicates that China is taking small steps toward establishing the yuan as a regional currency.

The yuan's journey from a controlled, partially convertible currency to a liquid, regional medium of exchange will be a long one, because of the desire of the Beijing government for economic stability.

In addition, Beijing has always been wary of moving too quickly to open up its markets, fearing that such a move might leave its economy vulnerable to sudden shifts in capital.

In December, China said the yuan could be used for the settlement of trade between the industrial areas of the Pearl River Delta and Yangtze River Delta and the Chinese territories of Hong Kong and Macao.

And members of the Association of Southeast Asian Nations will be permitted to use yuan in their trade with the southeast China provinces of Guangxi and Yunnan.







Beijing has said these efforts are geared to promoting the country's international trade and to helping Chinese companies limit their currency exposure, while giving foreign companies a chance to familiarize themselves with the yuan.

"Making yuan a regional currency will serve China well," said Zhang Bin, an analyst at the Chinese Academy of Social Sciences, a government research institute.

"It can reduce the troubles of managing huge foreign exchange reserves," Zhang said.

"And looking forward, it will eventually give China more power in the financial market to match its economic size and foreign exchange reserve levels," he said.

The more the yuan moves offshore, the more that companies using it will want access to hedging markets to manage their currency exposure and trading risks. Until there are active hedging markets, like those for swaps and futures, companies offshore might remain lukewarm about adopting the yuan.

American Superconductor and Department of Energy Will Try to Get 10 Megawatt Wind Turbines into Production


The EU has the Upwind project to develop advanced wind technology. Upwind is separate from the American DOE and American Superconductor effort.

American Superconductor will work with the Department of Energy to enable production of a 10 megawatt superconducting wind turbine design. The 10 MW superconducting wind turbine would weigh 120 tons instead of the 300 tons of a conventional design. The largest current conventional wind turbines are about 6 to 6.5 megawatts, and 7.5 megawatt versions are being developed as well.

American Superconductor Corporation (NASDAQ: AMSC), a leading energy technologies company, today announced that it has entered into a Cooperative Research and Development Agreement (CRADA) with the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) and its National Wind Technology Center (NWTC) to validate the economics of a full 10 megawatt (MW) class superconductor wind turbine. AMSC is separately developing full 10 MW-class wind turbine component and system designs. A CRADA allows the Federal government and industry partners to optimize their resources, share technical expertise in a protected environment and speed the commercialization of technologies.

The superconductor generators that are to be utilized for 10 MW-class superconductor wind turbines are based on proven technology AMSC has developed for superconductor ship propulsion motors and generators under contracts with the U.S. Navy. AMSC recently announced that a 36.5 MW superconductor ship propulsion motor it designed and manufactured for the Navy was successfully operated at full power by the Navy and is ready for deployment.


Under the 12-month program, AMSC Windtec™, a wholly owned subsidiary of AMSC, will analyze the cost of a full 10 MW-class superconductor wind turbine, which will include a direct drive superconductor generator and all other components, including the blades, hub, power electronics, nacelle, tower and controls. The NWTC will then benchmark and evaluate the wind turbine’s economic impact, both in terms of its initial cost and its overall cost of energy.






FURTHER READING
This site covered the initial start of design and development of the 10 Megawatt wind turbine back in October 2007

This site has also closely tracked the details of the 36.5 megawatt superconducting Navy motor.

14 page presentation on the separate EU Upwind project, which is a 5 year 22 million euro project.

Upwind goals:
1. very large turbines
2. more cost-efficient turbines
3. offshore wind farms of several hundred MW
• Today’s wind turbines: up to P = 5 MW with rotor diameters up to 126 meters
• Future wind turbines: P = 10 MW and 20 MW with rotor diameters of more than 200 meters



Denmark has a project researching 12 megawatt wind turbines.

The goal of the project is to experimentally investigate the wind and turbulence characteristics between 70 and 270 m above sea level and thereby establish the scientific basis relevant for the next generation of huge 12 MW wind turbines operating offshore.


A 6 megawatt rated wind turbine was installed in the USA but it will likely generate 7+ megawatts.


Half the World is Middle Class and 4 Billion Have Mobile Connections



Middle-class people do not live from hand to mouth, job to job, season to season, as the poor do. By local standards in a developing country, the middle class begins where people have a third of their income left for discretionary spending after providing for basic food and shelter. This allows them not just to buy things like fridges or cars but to improve their health care or plan for their children’s education. By that local standard, half of the world's population is now middle class.

The world passed its four billionth mobile connection this week as well.

2001: one billion mobile connections
2005: two billion mobile connections
2007: three billion mobile connections
Q2 2008: 50% of all human beings carrying a mobile phone
2009: 100 million mobile broadband connections

Usually, an income of that size requires regular, formal employment, with a salary and some benefits, that is, a steady job—another key middle-class characteristic. The income needed to have a third of it left over after meeting basic needs also varies from place to place. In China, for example, $3,000 a year may be enough in Chongqing or Chengdu, big cities in the west, but not in Beijing or Shanghai. So defining the middle class in absolute terms is hard.




China's local middle class boomed some time between 1990 and 2005, during which period the middle-class share of the population soared from 15% to 62%. That point is just being reached in India now. In 2005, says the reputable National Council for Applied Economic Research, the middle-class share of India's population was only about 5%. By 2015, it forecasts, it will have risen to 20%; by 2025, to over 40%.

Using a somewhat different definition—those earning $10-100 a day, including in rich countries—an Indian economist, Surjit Bhalla, also found that the middle class’s share of the whole world’s population rose from one-third to over half (57%) between 1990 and 2006. He argues that this is the third middle-class surge since 1800. The first occurred in the 19th century with the creation of the first mass middle class in western Europe. The second, mainly in Western countries, occurred during the baby boom (1950-1980). The current, third one is happening almost entirely in emerging countries. According to Mr Bhalla’s calculations, the number of middle-class people in Asia has overtaken the number in the West for the first time since 1700.


FURTHER READING
China had more car sales in January 2009 than the US did

China's overall vehicle sales, including trucks and buses, totaled 735,000 units in January, a 14.4% drop from a year ago and a more modest 0.8% slide from December. U.S. vehicle sales plunged by 37%, to 656,976 units, the lowest level in 26 years.

General Motors has forecast that China's vehicle sales may reach 10.7 million units in 2009, compared with 9.8 million vehicles in the United States.

Before the credit crisis, the USA had been selling about 16 million cars and trucks per year (2007 sales were about 16 million).

Lonsdaleite, Also Called Hexagonal Diamond, 58% Harder Than Regular Diamond


In the case of lonsdaleite [hexagonal diamond], the compression mechanism also caused bond-flipping, yielding an indentation strength of 152 GPa, which is 58 percent higher than the corresponding value of diamond. Via Physorg.com.

Under large compressive pressures, w-BN [wurtzite boron nitride] increases its strength by 78 percent compared with its strength before bond-flipping. The scientists calculated that w-BN reaches an indentation strength of 114 GPa (billions of pascals), well beyond diamond’s 97 GPa under the same indentation conditions.

“Lonsdaleite is even stronger than w-BN because lonsdaleite is made of carbon atoms and w-BN consists of boron and nitrogen atoms,” Chen explained. “The carbon-carbon bonds in lonsdaleite are stronger than boron-nitrogen bonds in w-BN. This is also why diamond (with a cubic structure) is stronger than cubic boron nitride (c-BN).”
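A quick check of how the percentage claims relate (values from the text; the one-point difference from the quoted 58% is rounding, since the paper's diamond baseline under the same indentation conditions is slightly below 97 GPa):

diamond_gpa, lonsdaleite_gpa, wbn_gpa = 97.0, 152.0, 114.0  # indentation strengths
print(f"lonsdaleite vs diamond: +{(lonsdaleite_gpa / diamond_gpa - 1) * 100:.0f}%")
print(f"w-BN vs diamond:        +{(wbn_gpa / diamond_gpa - 1) * 100:.0f}%")
# The 78% figure for w-BN is relative to its own pre-bond-flipping strength,
# implying a pre-flip value of about 114 / 1.78 ~ 64 GPa.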


Wikipedia on Lonsdaleite.





By showing the underlying atomistic mechanism that can strengthen some materials, this work may provide new approaches for designing superhard materials. As Chen explained, superhard materials that exhibit other superior properties are highly desirable for applications in many fields of science and technology.

“High hardness is only one important characteristic of superhard materials,” Chen said. “Thermal stability is another key factor since many superhard materials need to withstand extreme high-temperature environments as cutting and drilling tools and as wear, fatigue and corrosion resistant coatings in applications ranging from micro- and nano-electronics to space technology. For all carbon-based superhard materials, including diamond, their carbon atoms will react with oxygen atoms at high temperatures (at around 600°C) and become unstable. So designing new, thermally more stable superhard materials is crucial for high-temperature applications. Moreover, since most common superhard materials, such as diamond and cubic-BN, are semiconductors, it is highly desirable to design superhard materials that are conductors or superconductors. In addition, superhard magnetic materials are key components in various recording devices.”


More information: Pan, Zicheng; Sun, Hong; Zhang, Yi; and Chen, Changfeng. “Harder than Diamond: Superior Indentation Strength of Wurtzite BN and Lonsdaleite.” Physical Review Letters 102, 055503 (2009).

FURTHER READING: OTHER HARDER-THAN-DIAMOND MATERIALS
Ultrahard fullerite (C60) is a form of carbon found to be harder than diamond, and which can be used to create even harder materials. A Type IIa diamond (111) has a hardness value of 167±6 gigapascals (GPa) when scratched with an ultrahard fullerite tip, but 231±5 GPa when scratched with a diamond tip; the diamond-tip measurement leads to hypothetically inflated values.

Ultrahard fullerite can be made into aggregated diamond nanorods (ADNRs). Aggregated diamond nanorods have a modulus of 491 gigapascals (GPa), while a conventional diamond has a modulus of 442 GPa. ADNRs are also 0.3% denser than regular diamond.

Polyyne, a superhard molecular rod composed of acetylene units, resists 40 times more longitudinal compression than diamond. Polyyne has the strongest bonds in carbon chemistry.

February 11, 2009

California's Energy Policy is Creating and Sustaining Structural Budget Problems


California is going through a state budget crisis and has had chronic and persistent budget problems for over a decade. California chooses not to use its offshore oil or to develop more nuclear power. Some environmentalists will say that the oil and nuclear power would not be enough to solve the energy problems of the United States. However, this article will show that California could get $5-10 billion per year of tax revenue from the development of 10 billion barrels of oil and 16 trillion cubic feet of natural gas. Also, developing nuclear energy could offset electricity purchases from out-of-state sources, which are often made at spot prices. Each nuclear reactor could offset about $1 billion of electricity and natural gas purchases each year. California's budget gap is projected to be $40 billion over two years. The initial issuance of oil leases would provide immediate revenue to the state of $1 billion/year or more. The construction of the oil rigs and nuclear plants would provide construction jobs, taxes and fees, delivering immediate benefits while the projects are being built and before oil is pumped or electricity is generated.
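As a rough illustration of where a figure in the $2-5 billion/year range could come from, here is a back-of-envelope Python sketch. Every input below is an assumption chosen for illustration (production rate, oil price, and the state's combined royalty/severance/tax share), not a number from official projections:

# All inputs are illustrative assumptions, not official projections.
barrels_per_day = 500_000   # assumed offshore production rate once developed
oil_price_usd = 50.0        # assumed average price per barrel
state_take = 0.25           # assumed combined royalty + severance + tax share

annual_revenue = barrels_per_day * 365 * oil_price_usd * state_take
print(f"~${annual_revenue / 1e9:.1f} billion/year to the state")  # ~$2.3B/year

At that assumed production rate, the 10 billion barrel resource would last over 50 years; higher prices or production rates push the annual take toward the upper end of the range quoted above.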

Does anyone believe that California will not need $10-20 billion/year in state tax revenue in ten years or that $2-5 billion/year of tax revenue over the next several years would not help a great deal?


Alaska made about $10 billion in oil revenues in 2008. They made about $5.6 billion in oil revenue in 2007.






Alaska's oil resources are projected to be about 13 billion barrels. California's offshore oil is of comparable scale.
















California's state budget is projected to have a $14 billion shortfall for 2008-2009 and about $40 billion for 2009-2010.


California could choose to stop screwing up its finances: a state energy policy that developed these resources would have avoided many of its past financial problems and could still help fix its future ones.

TVA's Browns Ferry nuclear plant saved the utility $800 million by helping it avoid purchases of power on the spot market.

The current plan is for 33% of California's power to come from renewable energy at a build-out cost of $60 billion. For $28 billion or less, the equivalent energy could be provided by new nuclear power. The nuclear power choice would save $32 billion.
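Here is a rough sketch of how a roughly $28 billion nuclear build-out could match the renewable target; the demand, capacity factor and per-kilowatt cost figures are my illustrative assumptions, not numbers from the plan.

# Rough check of the $28 billion nuclear figure. Inputs are
# illustrative assumptions, not numbers from the article.
CA_DEMAND_TWH = 300     # assumed annual California electricity demand, TWh
TARGET_SHARE = 0.33     # the 33% target
CAPACITY_FACTOR = 0.90  # assumed for new nuclear plants
COST_PER_KW = 2200      # assumed build cost, $/kW
HOURS_PER_YEAR = 8766
target_twh = CA_DEMAND_TWH * TARGET_SHARE
needed_gw = target_twh * 1e3 / (CAPACITY_FACTOR * HOURS_PER_YEAR)
build_cost = needed_gw * 1e6 * COST_PER_KW   # GW -> kW, times $/kW
print(f"Capacity needed: {needed_gw:.1f} GW")
print(f"Build cost: ${build_cost / 1e9:.0f} billion")
# -> about 12.6 GW and ~$28 billion under these assumptions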

IEC Fusion Short Status Summary From Dr. Nebel



The short and less technical version from Classical Values.

1. The machine is working way better than the usual theories predict
2. No one knows why (lots of suspicions floating around)
3. New instruments are being added
4. The current machine is called WB-7. WB-7.1 (no details) is in progress.


The technical version, mainly in the words of Dr. Nebel, project leader of the IEC Fusion project:

Here’s what we know and what we don’t know:

1. We don’t have the spatial resolution of the density to see if the cusps are quasi-neutral [quasi-neutral means the plasma has roughly the same number of positive ions and negative electrons] on the WB-7
2. In one-D simulations the plasma edge (which corresponds to the cusp regions) is not quasi-neutral. Therefore, if the cusps [a “cusp” is where the confining magnetic fields meet, in this case four fields meeting at a point at each corner of the cube that the coils approximately form] are quasi-neutral, it must be a multidimensional effect.
3. Energy confinement on the WB-7 exceeds the classical predictions (wiffleball based on the electron gyro-radius) by a large factor. [The test machine is kicking the butt of earlier versions of machines of this type]


Our conclusion is that both the wiffleball and the cusp recycle are working at a reasonable level.
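For context on point 3, the classical prediction scales with the electron gyro-radius. Here is a minimal sketch of that length scale, assuming an illustrative field strength and electron energy (not actual WB-7 parameters).

import math
# Electron gyro-radius r = m*v / (e*B), the length scale behind the
# classical "wiffleball" confinement estimate. B and the electron
# energy below are illustrative assumptions, not measured WB-7 data.
M_E = 9.109e-31   # electron mass, kg
Q_E = 1.602e-19   # elementary charge, C
E_KEV = 10.0      # assumed electron energy, keV
B_TESLA = 0.1     # assumed magnetic field strength, T
energy_j = E_KEV * 1e3 * Q_E
speed = math.sqrt(2 * energy_j / M_E)          # non-relativistic
gyro_radius = M_E * speed / (Q_E * B_TESLA)
print(f"Electron speed: {speed:.2e} m/s")
print(f"Gyro-radius: {gyro_radius * 1e3:.1f} mm")
# -> a few millimetres; confinement much better than this scale suggests
#    the wiffleball/cusp-recycle effects discussed above are at work.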





Polywells are run slightly electron-rich to confine and focus the ions, so they are not ambipolar and thus the area on the edge of the plasma is expected to have many more electrons than ions. Points 1 and 2 are important because significant numbers of ions in the cusps probably doom the concept in terms of achieving commercializable energy generation from fusion.

However, even without reaching commercial power generation, vastly superior numbers of fusions could still let this fusion device be leveraged as a neutron generation source for uses such as transmutation. There have been proposed fusion/fission hybrids where the performance required of the nuclear fusion part is ten to forty times less than that needed for pure fusion energy generation.



FURTHER READING
Plasma potentials at Wikipedia.

Talk Polywell discussion forum, where Dr. Nebel leaves remarks and occasionally debates with Art Carlson.

Major CO2 Mitigation Methods: Carbon Sequestering, Biochar, Low Carbon Energy Sources, CO2 Absorbing Cement, CO2 into Fuel

Straight up storing CO2 as a gas in the ground or in saline aquifers is happening now at several tens of millions of tons per year worldwide. The amount of CO2 offset by solar power is on the same scale. Civilization is generating about 25 billion tons of CO2 per year; nuclear power worldwide offsets 2 billion tons of CO2 per year. Also, when CO2 is used to recover more oil, it is not a net reduction, depending on how it is accounted. Of course, moving toward CO2 neutrality by storing as you use and recovering more oil is better than burning oil and coal without storing anything, and enhanced oil recovery also helps with Peak Oil, which is a good and necessary thing on the way to better solutions.

1. Biochar sequestering
The fertile black soils in the Amazon basin suggest a cheaper, lower-tech route toward the same destination as carbon storage. Scattered patches of dark, charcoal-rich soil known as terra preta (Portuguese for "black earth") are the inspiration for an international effort to explore how burying biomass-derived charcoal, or "biochar," could boost soil fertility and transfer a sizeable amount of CO2 from the atmosphere into safe storage in topsoil.

Charcoal is traditionally made by burning wood in pits or temporary structures, but modern pyrolysis equipment greatly reduces the air pollution associated with this practice. Gases emitted from pyrolysis can be captured to generate valuable products instead of being released as smoke. Some of the by-products can be condensed into "bio-oil," a liquid that can be upgraded to fuels including biodiesel and synthesis gas. A portion of the noncondensable fraction is burned to heat the pyrolysis chamber, and the rest can provide heat or fuel an electric generator.

Pyrolysis equipment now being developed at several public and private institutions typically operates at 350–700°C. In Golden, Colorado, Biochar Engineering Corporation is building portable $50,000 pyrolyzers that researchers will use to produce 1–2 tons of biochar per week. Company CEO Jim Fournier says the firm is planning larger units that could be trucked into position. Biomass is expensive to transport, he says, so pyrolysis units located near the source of the biomass are preferable to larger, centrally located facilities, even when the units reach commercial scale.

Steiner and coauthors noted in the 2003 book Amazonian Dark Earths that the charcoal-mediated enhancement of soil caused a 280–400% increase in plant uptake of nitrogen.

Preliminary results in a greenhouse study showed that low-volatility [biochar] supplemented with fertilizer outperformed fertilizer alone by 60%.

Because the heat and chemical energy released during pyrolysis could replace energy derived from fossil fuels, the IBI calculates the total benefit would be equivalent to removing about 1.2 billion metric tons of carbon from the atmosphere each year. That would offset 29% of today’s net rise in atmospheric carbon, which is estimated at 4.1 billion metric tons, according to the Energy Information Administration.
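The 29% figure follows directly from the two quoted numbers; a one-line check:

# Check of the IBI offset arithmetic quoted above.
biochar_offset = 1.2  # billion metric tons of carbon removed per year (IBI)
net_rise = 4.1        # billion metric tons per year net rise (EIA)
print(f"Offset share: {biochar_offset / net_rise:.0%}")
# -> 29%, matching the figure in the text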







2. Regular Carbon Sequestering
The MIT Future of Coal 2007 report estimated that capturing all of the roughly 1.5 billion tons per year of CO2 generated by coal-burning power plants in the United States would generate a CO2 flow with just one-third of the volume of the natural gas flowing in the U.S. gas pipeline system.

The technology is expected to use between 10 and 40% of the energy produced by a power station.

In 2007, Jason Burnett, EPA associate deputy administrator, told USINFO: "Currently, about 35 million tons of CO2 are sequestered in the United States, primarily for enhanced oil recovery. We expect that to increase, by some estimates, by 400-fold by 2100." A 400-fold increase would be roughly 14 billion tons per year.

The Japanese government is targeting an annual reduction of 100 million tons in carbon dioxide emissions through CCS technologies by 2020.

Industrial-scale storage projects are in operation.
Sleipner is the oldest project (1996) and is located in the North Sea where Norway's StatoilHydro strips carbon dioxide from natural gas with amine solvents and disposes of this carbon dioxide in a deep saline aquifer. Since 1996, Sleipner has stored about one million tonnes CO2 a year. A second project in the Snøhvit gas field in the Barents Sea stores 700,000 tonnes per year.

The Weyburn project (started 2000) is currently the world's largest carbon capture and storage project. It is used for enhanced oil recovery with an injection rate of about 1.5 million tonnes per year. They are investigating how the technology can be expanded on a larger scale.

At the In Salah natural gas reservoir in Algeria, CO2 is separated from the natural gas and re-injected into the subsurface at a rate of about 1.2 million tonnes per year.

Australia has a project to store 3 million tons per year starting in 2009. The Gorgon project, an add-on to an offshore Western Australian natural gas extraction project, is the largest CO2 storage project in the world. It will attempt to capture and store 3 million tonnes of CO2 per year for 40 years in a saline aquifer, commencing in 2009, and will cost about $840 million.


An EU-wide plan proposes €1.25bn for carbon capture at coal-fired power plants, with €1.75bn earmarked for better international energy links. The European Commission has proposed earmarking €1.25bn to kickstart carbon capture and storage (CCS) at 11 coal-fired plants across Europe, including four in Britain. The four British power stations – the controversial Kingsnorth plant in Kent, Longannet in Fife, Tilbury in Essex and Hatfield in Yorkshire – would share €250m under the two-year scheme.

Japan and China have a project that will cost 20 to 30 billion yen and will involve the participation of the Japanese public and private sectors, including JGC Corp. and Toyota Motor Corp. The two countries plan to bring the project into action in 2009. Under the plan, more than one million tons of CO2 annually from the Harbin Thermal Power Plant in Heilungkiang Province will be transferred to the Daqing Oilfield, about 100 km from the plant, and injected and stored there.

3. CO2 into Cement

Novacem is a company making cement from magnesium silicates that absorbs CO2 as it hardens. Normal cement adds a net 0.4 tons of CO2 per ton of cement, but this new cement would remove 0.6 tons of CO2 from the air. There is an estimated 10 trillion tons of magnesium silicate in the world; 0.6 tons times 10 trillion tons is 6 trillion tons of potential CO2 storage. The amount of CO2 generated by people is 27 billion tons per year worldwide, and this could increase to 45 billion tons. So 6 trillion tons is roughly 200 years' worth of CO2 storage.
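A quick check of that arithmetic, using 30 billion tons per year as a round figure between the 27 and 45 quoted:

# Check of the Novacem storage arithmetic above.
CO2_PER_TON_CEMENT = 0.6  # tons of CO2 absorbed per ton of new cement
SILICATE_TONS = 10e12     # estimated magnesium silicate available, tons
ANNUAL_CO2 = 30e9         # tons/year, between the 27 and 45 billion cited
total_storage = CO2_PER_TON_CEMENT * SILICATE_TONS
years = total_storage / ANNUAL_CO2
print(f"Potential storage: {total_storage / 1e12:.0f} trillion tons")
print(f"Years of emissions covered: {years:.0f}")
# -> 6 trillion tons, about 200 years of emissions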

Calera is a cement startup funded by technology billionaire Vinod Khosla. Calera's process takes the idea of carbon capture and storage a step forward by storing the CO2 in a useful product: for every ton of Calera cement made, half a ton of CO2 is sequestered.

The Calera cement process: flue gas from coal, steel or natural gas plants + seawater (for calcium and magnesium) = cement + clean water + cleaner air.

Calera has an operational pilot plant.

4. Low Carbon Energy Sources

Nuclear power worldwide offsets 2 billion tons of CO2 per year. Scaling up nuclear, wind, solar, geothermal and hydroelectric power can offset a lot more CO2 by displacing coal, oil and natural gas.

5. CO2 Capture from the Air - for Fuel or Storage

Technology for CO2 capture from the air is progressing.

Carbon Sciences and others are trying to scale up CO2 conversion into fuel.

Carbon Sciences estimates that by 2030, using just 25% of the CO2 produced by the coal industry, it could produce enough fuel to satisfy 30% of global fuel demand.
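As a rough plausibility check, CO2 mass can be converted to fuel mass through carbon content; the global coal-CO2 and fuel-demand totals below are my assumptions, not Carbon Sciences figures.

# Rough stoichiometric sketch of CO2-to-fuel mass conversion. The global
# totals are illustrative assumptions, not Carbon Sciences figures.
C_FRACTION_CO2 = 12 / 44   # mass fraction of carbon in CO2
C_FRACTION_FUEL = 0.85     # approximate carbon fraction of hydrocarbon fuel
COAL_CO2 = 12e9            # assumed global coal-plant CO2, tons/year
FUEL_DEMAND = 4.5e9        # assumed global liquid-fuel demand, tons/year
co2_used = 0.25 * COAL_CO2                       # the 25% in the claim
fuel_mass = co2_used * C_FRACTION_CO2 / C_FRACTION_FUEL
print(f"Fuel produced: {fuel_mass / 1e9:.2f} billion tons/year")
print(f"Share of demand: {fuel_mass / FUEL_DEMAND:.0%}")
# -> roughly 1 billion tons/year, ~21% of assumed demand: the same
#    order of magnitude as the company's 30% claim.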

The company's plan for 2009 includes the following:

* Develop a functional prototype of its breakthrough CO2 to fuel technology in Q1 2009. This prototype is expected to transform a stream of CO2 gas into a liquid fuel that is: (i) combustible, and (ii) usable in certain vehicles.
* Enhance the prototype to demonstrate a full range of cost effective process innovations to transform CO2 into fuel.
* Begin development of a complete mini-pilot system to demonstrate the company's CO2 technology on a larger scale.
* Prepare for the development of a full pilot system with strategic partners sometime in late 2010 or 2011.

CO2-to-Carbonate technology combines CO2 with industrial waste minerals and transforms them into a high-value chemical compound, calcium carbonate, used in applications such as paper production, pharmaceuticals and plastics. This borders on the various efforts, described above, to use CO2 in cement.


FURTHER READING
Geoengineering proposals compared.