
October 17, 2009

Rumaila Oilfield Deal Signed in Iraq

The Iraqi government has approved a deal with a consortium led by British giant BP PLC to develop the Rumaila oil field in the south in a major step forward for the country's oil industry.

Daily production from the Rumaila field stands at about 1 million barrels a day, almost half of Iraq's daily output of 2.4 million barrels. BP's targeted production is 2.85 million barrels per day.


Details of the deal between the BP/CNPC (China) consortium and the Iraqi government for the development of the Rumaila oil field:

* BP and CNPC must improve the production rate of crude oil and natural gas liquids from Rumaila by 10 percent over the initial production rate as soon as possible (an increase of about 100,000 barrels per day).

* The initial production rate is to be agreed by the companies and Iraq on or before the day the contract is ratified, and will be calculated as the average production rate over a 30-day period.

* The firms must aim for a sustained output, or 'plateau production target', of 2.85 million barrels of crude and natural gas liquids per day for a period of seven years.




THE MINIMUM BP AND CNPC MUST DO INCLUDES

* Conduct 3D seismic and geological surveys.

* Drill 20 new production and 10 new injection wells.

* Rehabilitate 130 wells.

* Design and build two water re-injection plants.

* Refurbish or construct additional field gathering and processing facilities.

OTHER DETAILS

* The Rumaila oilfield service contract lasts for 20 years, but it can be extended.

* BP and CNPC must submit a rehabilitation plan within six months of ratifying the contract.

* The contractors must establish in Iraq a 'normal presence' -- personnel and equipment necessary to support field operations -- within six months of the rehabilitation plan being approved or the contract could be terminated.

* The companies must submit an enhanced redevelopment plan, based on knowledge gained from the field's initial rehabilitation, within three years of the contract's ratification.


Two other deals could be signed within the coming two weeks.

* Zubair (oilfield) is currently producing about 230,000 barrels per day (target production level of 1.125 million barrels per day within six years).
* West Qurna Stage 1 is producing about 280,000 barrels a day (target production level of 1.5 to 2.1 million barrels per day within six years).


Focus Fusion Dense Plasma Focus Project Has Started Test Firing

After seven years of theoretical work and raising money, five months of design, five months of construction and assembly, and a week of testing, LPP (Lawrenceville Plasma Physics) now has a functioning dense plasma focus, Focus-Fusion-1. The first shot, using helium as the fill gas, was achieved at 5:29 PM today, Oct. 15, and the first pinch was achieved at 6:04 PM on the second shot. The fact that a pinch was achieved so soon was evidence of the soundness of the design.

The shots were produced with a charging potential of 20 kV, a bit less than half the full bank charge of 45 kV. We will not know the exact current achieved until we reduce some instrumental noise in the next day or so. It is probably around 0.9 MA and within 10% of our predictions.

The pinch is clearly evident in the voltage probe curve, below, of the sixth shot. The first peak is when the machine turns on, while the second, higher, peak in voltage is from the energy transfer into the pinch region and the plasmoid.


Previous articles on focus fusion are here. The potential is that this approach could lead to commercial nuclear fusion and reduce energy costs by a factor of 5 to 50.

The LPP experiment is using a newly built DPF (Dense Plasma Focus) device capable of reaching peak currents of more than 2 MA (megaamperes). They hope to reach a current of 2+ MA, which is higher than the 0.95 MA peak achieved by Brzosko (a previous DPF researcher).

Initial results are in line with the expectations described below.



Shot Description and Objectives

What is a “shot”?

A “shot” takes around 10 to 15 minutes and basically involves:

* Pumping the chamber down to a vacuum. There must be no gases in the chamber other than the gases to be tested.
* Pumping in a measured amount of gas (the first series of shots will be taken with deuterium to calibrate the machine; the tests with boron and hydrogen come later).
* Charging the capacitors – press a button and it takes about a minute to charge the capacitors up to 25 kV (the first shots are at 25 kV; the machine will be worked up to 45 kV later).
* Firing – pushing a button once the charge voltage is reached.
* Looking at readouts from the oscilloscopes and a few other diagnostic instruments attached to the machine.
* Pumping down to a vacuum (and repeating for the next shot).


What is the team looking for in the first shot?

* No shorting through the mylar. This could happen if dust or other impurities are on the high voltage plate. Such elements can redirect the electricity to penetrate the mylar insulation sheet.
* No breakage of the pyrex insulators (called “hat insulators” because of their hat shape). These insulators are made of pyrex and fitted to the anode. The fit must not be too tight or too loose, because the anode expands and contracts with each charge; otherwise the pyrex could break.
* The amount of current being close to what is predicted.
* Possibly…evidence of a “pinch” – although this shouldn’t happen until after ~ 20 shots.
Ideal outcome for initial sets of shots: The peak current will be somewhere around 1.2 to 1.3 MA (million amperes)

October 16, 2009

Magnetricity Charge and Current of Monopoles in Spin Ice


Magnetic Wien effect, and the detection of magnetic charge by implanted muons.

‘Magnetricity’ observed and measured for the first time

A magnetic charge can behave and interact just like an electric charge in some materials, according to new research led by the London Centre for Nanotechnology (LCN) which could lead to a reassessment of current magnetism theories, as well as significant technological advances.

The research, published in Nature, proves the existence of atom-sized magnetic charges called ‘magnetic monopoles’ that behave and interact just like more familiar electric charges. It also demonstrates a perfect symmetry between electricity and magnetism – a phenomenon dubbed ‘magnetricity' by the authors from the LCN and STFC’s ISIS Neutron and Muon Source.

In order to prove experimentally the existence of magnetic current for the first time, the team mapped Onsager's 1934 theory of the movement of ions in water onto magnetic currents in a material called spin ice. They then tested the theory by applying a magnetic field to a spin ice sample at a very low temperature and observing the process using muon relaxation at ISIS, a technique which acts as a super microscope allowing researchers to understand the world around us at the atomic level.


BBC News coverage of magnetricity

But by engineering different spin ice materials to modify the ways monopoles move through them, the materials might in future be used in "magnetic memory" storage devices or in spintronics - a field which could boost future computing power.


Popular Science coverage

This new discovery opens up the possibility that magnetic monopoles could be used for computer storage. If magnetic polar identity can flow through crystals of spin ice, then the current of identity could replace positive and negative charges with positive and negative monopoles as the information storage medium. And since controlling the magnetic identity of electrons underlies quantum computing, this ability to alter that identity with a current positions spin ice as the new, leading candidate for quantum computing chips.


New Scientist coverage

The result could lead to the development of "magnetronics", including nano-scale computer memory.

In September, two teams of physicists fired neutrons at spin ices made of titanium-containing compounds chilled close to absolute zero. The behaviour of the neutrons suggested that monopoles were present in the material.

To get more detailed information on the monopoles than had previously been possible, Bramwell's team injected muons – short-lived cousins of electrons – into the spin ice. When the muons decayed, they emitted positrons in directions influenced by the magnetic field inside the spin ice.

This revealed that the monopoles were not only present but were moving, producing a magnetic current.

It also allowed the team to measure the amount of magnetic charge on the monopoles. It turned out to be about 5 in the somewhat obscure units of Bohr magnetons per angstrom, in close agreement with theory, which predicted 4.6. Unlike the electric charge on electrons, which is fixed, the magnetic charge on monopoles varies with the temperature and pressure of the spin ice.
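As a rough check (a standard back-of-the-envelope estimate for spin ice, not taken from the article; the roughly 10 Bohr magneton dysprosium moment and the roughly 4.3 angstrom spacing between tetrahedron centres are assumed values), the predicted charge is twice the ion moment divided by the monopole separation:

$$ Q = \frac{2\mu}{a_d} \approx \frac{2 \times 10\,\mu_B}{4.3\,\text{Å}} \approx 4.6\,\mu_B\,\text{Å}^{-1} $$

which is consistent with the measured value of about 5 Bohr magnetons per angstrom.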




From Nature:

Measurement of the charge and current of magnetic monopoles in spin ice

The transport of electrically charged quasiparticles (based on electrons or ions) plays a pivotal role in modern technology as well as in determining the essential functions of biological organisms. In contrast, the transport of magnetic charges has barely been explored experimentally, mainly because magnetic charges, in contrast to electric ones, are generally considered at best to be convenient macroscopic parameters rather than well-defined quasiparticles. However, it was recently proposed that magnetic charges can exist in certain materials in the form of emergent excitations that manifest like point charges, or magnetic monopoles. Here we address the question of whether such magnetic charges and their associated currents—'magnetricity'—can be measured directly in experiment, without recourse to any material-specific theory. By mapping the problem onto Onsager's theory of electrolytes, we show that this is indeed possible, and devise an appropriate method for the measurement of magnetic charges and their dynamics. Using muon spin rotation as a suitable local probe, we apply the method to a real material, the 'spin ice' Dy2Ti2O7 (refs 5–8). Our experimental measurements prove that magnetic charges exist in this material, interact via a Coulomb potential, and have measurable currents. We further characterize deviations from Ohm's law, and determine the elementary unit of magnetic charge to be 5 μB Å^-1, which is equal to that recently predicted using the microscopic theory of spin ice. Our measurement of magnetic charge and magnetic current establishes an instance of a perfect symmetry between electricity and magnetism.


Condensed-matter physics: Wien route to monopoles

Determining the magnetic charge of monopoles in a crystalline host seemed a mountain too high for physicists to climb. An experiment based on Wien's theory of electrolytes has now measured its value.

The exotic class of crystalline solids known as 'spin ices' has proved, perhaps surprisingly, to be a repository of some elegant physical phenomena. Spin ices are rare, three-dimensional systems in which the magnetic moments (spins) of the ions remain disordered even at the lowest temperatures available.


Magnetic Charge Transport (19 page pdf)

Bramwell papers on arxiv

Zero-point entropy of the spinel spin glasses CuGa_2O_4 and CuAl_2O_4

Experimental Proof of a Magnetic Coulomb Phase

Pinch Points and Kasteleyn Transitions: How Spin Ice Changes its Entropy

Title: Dy2Ti2O7 Spin Ice: a Test Case for Emergent Clusters in a Frustrated Magnet

Website Ranking and Popularity

A brief pause to note some success.

Alexa ranks this site as the 22nd most popular Science News and Media site

Nextbigfuture is Number 1 on Google for the Search Term Future

Note: Google search results are personalized; searches can return different results based on personal usage of Google. At the bottom of this article is a picture of the search that misled me into thinking the site had a higher ranking in Google searches.

This site is fairly highly ranked for various other searches. We are 36th for molecular nanotechnology and 62nd for nuclear fusion.

We have over 7,000 subscribers. The number fluctuates, but as of Oct 22, 2009 it is above 7,000.

Thanks to our readers for making this site successful.






Technorati's ranking is very fluid, but as of October 25, 2009 Nextbigfuture was the 11th ranked science site (blogs and news sites combined), and from Nov 11, 2009 onwards it has been 6th in the Technorati science category and 44th in the Technorati green category.


Photonics in Supercomputing: The Road to Exascale by Jeffrey Kash of IBM Research

At the Frontiers in Optics conference on October 14, 2009, Jeffrey Kash of IBM Research gave a presentation on the road to exascale supercomputing. He projected the timeline needed for the shift from electrical to optical communication in supercomputers in order to support exascale machines.

Photonics describes the Kash exascale talk

VCSEL-based optics have already displaced electrical cables in today's supercomputers, but with power, density and cost requirements increasing exponentially as systems get more powerful, the need to move to on-chip optics grows.

Because a 10x increase in performance means the machine will consume double the power, optics will need to be more widely used to make future supercomputers feasible to build and operate, he said. In 2008, a 1-petaflop computer cost $150 million to build and consumed 2.5 MW of power. Using the same technology, by 2020 a 1-exaflop machine would cost $500 million to build and consume 20 MW of power.
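A minimal sketch of that extrapolation (assuming only the rule of thumb stated above, that every 10x in performance doubles the power draw; the dollar figures come straight from the talk and are not derived here):

```python
import math

# 2008 baseline from the talk: a 1-petaflop machine drawing 2.5 MW.
petaflop_power_mw = 2.5
performance_factor = 1000          # 1 exaflop = 1000x one petaflop

# Rule of thumb: each 10x step in performance doubles the power.
tenfold_steps = math.log10(performance_factor)      # three 10x steps
exaflop_power_mw = petaflop_power_mw * 2 ** tenfold_steps

print(exaflop_power_mw)            # 20.0 MW, matching the 2020 projection above
```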

Kash gave a timeline that would find optics replacing electrical backplanes by 2012 and replacing electrical printed circuit boards by 2016. In 2020, optics could be directly on the chip. In a less aggressive scenario, by 2020 all off-chip communications need to be optical, he said.

But for that to happen, and to get optics up to millions of units in 2010, the price needs to drop to about $1 per Gb/s, he said. Today, it costs about $10 per Gb/s.





Photonics in Supercomputing: The Road to Exascale, Jeffrey Kash; IBM Res., USA. Optical interconnects in present and future supercomputers are reviewed, emphasizing Exaflop performance circa 2020, which is 1000X today’s Petaflop computers. Power, density and cost requirements become increasingly stringent, ultimately driving the need for on-chip optics. (Integrated Photonics and Nanophotonics Research and Applications, 2009)


More than 4735 Deaths so Far from H1N1 Flu

The United Nations health agency said that more than 4,735 deaths attributable to H1N1 had been reported, and that influenza activity in the northern hemisphere was much higher than usual.

WHO spokesman Gregory Hartl said it was too soon to draw any conclusions from the death toll as experts needed to monitor a full year of the disease, which the WHO declared a pandemic in June after the strain was first detected in April. Health experts need to observe the behavior of the virus during the traditional January-February peak of the influenza season in the northern hemisphere.

Most people who catch the H1N1 virus suffer mild symptoms. "There is a small subset of cases that do and can progress quite rapidly to severe disease and this is sometimes in the space of less than 24 hours and it then becomes a big, big challenge to save the people," Hartl said.

"This disease continues to cause concern because it doesn't act exactly like seasonal influenza and because it doesn't affect the same groups who are affected by seasonal influenza."

In the USA, 86 children have died of H1N1 since April. The seasonal flu typically kills between 46 and 88 children in a full year, according to CDC data.

In the USA, 6 percent of all doctor visits are for flu-like illnesses, a level not normally seen until later in the fall. Half of all the child deaths have been in teenagers.




Drug manufacturers have told health officials to expect at least 25 percent less vaccine by the end of the month than anticipated. Instead of the 40 million doses projected by the end of October, only 28 million to 30 million doses may be available, said Dr. Anne Schuchat, director of immunization and respiratory diseases.

Three Deals Could Raise Iraq Oil Production to 7 Million Barrels per Day by 2016

Iraq is close to finalising deals at three of its largest oilfields that could nearly triple output and make it the world's number three producer after Russia and Saudi Arabia.

Two deals have been sent to Iraq's cabinet for ratification. It is unclear when the cabinet will ratify them. There are no guarantees that the contracts would be deemed legal by future administrations, as there are still deep rifts among Iraqi politicians over control of oil wealth.

The potential increase in oil output from the three deals is around 4.5 million barrels per day (bpd), which would nearly triple Iraqi output to around 7 million bpd from around 2.5 million bpd. The increase is around 5 percent of global oil supply. It is enough oil to supply nearly a quarter of daily U.S. consumption, or more than half of China's.

Iraqi oil deals specify that foreign oil firms should reach target production at the fields on offer no later than six years after contracts become effective.

Executives at companies involved say oil firms would want to boost output to target levels more quickly to make the deals more profitable.

Oil companies can begin recovering costs when output has risen 10 percent from the fields. That means they will move to achieve that 10 percent rise as quickly as possible, so Iraq could see a rise of around 150,000 bpd from the three fields in 18 months to 2 years.
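A quick arithmetic check of that figure, using only the current-output numbers quoted in this article (a sketch, not an industry calculation):

```python
# Current output at the three fields, in barrels per day, as quoted in this article.
current_output_bpd = {
    "Rumaila": 1_000_000,
    "Zubair": 230_000,
    "West Qurna Stage 1": 280_000,
}

# Cost recovery begins once output at the fields has risen 10 percent.
rise_needed_bpd = 0.10 * sum(current_output_bpd.values())
print(rise_needed_bpd)  # 151,000 bpd, roughly the 150,000 bpd cited above
```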

Oil industry executives say the expense of hitting those smaller targets would be minimal.




BP (BP.L) and CNPC won the contract to boost output on the Rumaila field, Iraq's largest, in June. Rumaila has reserves of nearly 17 billion barrels. Rumaila is the workhorse of Iraq's oil industry, providing just over 1 million bpd of the country's 2.5 million bpd output. BP has pledged to boost Rumaila's output to 2.85 million bpd.

Eni (ENI.MI), Occidental (OXY.N) and KOGAS (036460.KS) have won a contract to develop the Zubair oilfield. Zubair has oil reserves of 4 billion barrels. Eni has pledged to boost output there to 1.125 million bpd from 195,000 bpd.

West Qurna has reserves of about 8.7 billion barrels, a little less than all the oil held by OPEC member Angola. Exxon and Shell have pledged to boost output there to 2.1 million bpd from around 280,000 bpd. LUKOIL and Conoco have set their sights on 1.5 million bpd.


Muscle Integration With Prosthetics and Brain to Brain Communication

1. MIT Technology Review reports that tiny implants that connect to nerve cells could make it easier to control prosthetic limbs.

A novel implant seeded with muscle cells could better integrate prosthetic limbs with the body, allowing amputees greater control over robotic appendages. The construct, developed at the University of Michigan, consists of tiny cups, made from an electrically conductive polymer, that fit on nerve endings and attract the severed nerves. Electrical signals coming from the nerve can then be translated and used to move the limb.


Living interface: Muscle cells (shown here) are grown on a biological scaffold. Severed nerves remaining from the lost limb connect to the muscle cells in the interface, which transmits electrical signals that can be used to control the artificial arm. Credit: Paul Cederna

"This looks like it could be an elegant way to control a prosthetic with fine movement," says Rutledge Ellis-Behnke, a scientist at MIT who was not involved in the research. "Rather than having a big dumb piece of plastic strapped to the arm, you could actually have an integrated tool that feels like it's part of the body."

The most successful method for controlling a prosthesis to date is a surgical procedure in which nerves that were previously attached to muscles in a lost arm and hand are transplanted into the chest. When the wearer thinks about moving the hand, chest muscles contract, and those signals are used to control the limb. While a vast improvement over existing methods, this approach still provides a limited level of control--only about five nerves can be transplanted to the chest.

The new interface, developed by plastic surgeon Paul Cederna and colleagues, builds on this concept, using transplanted muscle cells as targets rather than intact muscle. After a limb is severed, the nerves that originally attached to it continue to sprout, searching for a new muscle with which to connect. (This biological process can sometimes create painful tangles of nerve tissue, called neuromas, at the tip of the severed limb.) "The nerve is constantly sending signals downstream to tell the hand what to do, even if the hand isn't there," says Cederna. "We can interpret those signals and use them to run a prosthesis."

The interface consists of a small cuplike structure about one-tenth of a millimeter in diameter that is surgically implanted at the end of the nerve, relaying both motor and sensory signals from the nerve to the prosthesis. Inside the cup is a scaffold of biological tissue seeded with muscle cells--because motor and sensory nerves make connections onto muscle in healthy tissue, the muscle cells provide a natural target for wandering nerve endings. The severed nerve grows into the cup and connects to the cells, transmitting electrical signals from the brain. Because it is coated with an electrically active polymer, the cup acts as a wire to pick up electrical signals and transmit them to a robotic limb. Cederna's team doesn't develop prostheses themselves, but he says the signals could be transmitted via existing wireless technology.


2. New research from the University of Southampton has demonstrated that person-to-person communication through the power of thought alone is possible.



Brain-Computer Interfacing (BCI) can be used for capturing brain signals and translating them into commands that allow humans to control (just by thinking) devices such as computers, robots, rehabilitation technology and virtual reality environments.

This experiment goes a step further and was conducted by Dr Christopher James from the University’s Institute of Sound and Vibration Research. The aim was to expand the current limits of this technology and show that brain-to-brain (B2B) communication is possible.

His experiment had one person using BCI to transmit thoughts, translated as a series of binary digits, over the internet to another person whose computer receives the digits and transmits them to the second user’s brain through flashing an LED lamp.

While attached to an EEG amplifier, the first person would generate and transmit a series of binary digits, imagining moving their left arm for zero and their right arm for one. The second person was also attached to an EEG amplifier and their PC would pick up the stream of binary digits and flash an LED lamp at two different frequencies, one for zero and the other for one. The pattern of the flashing LEDs is too subtle to be noticed by the second person, but it is picked up by electrodes over the recipient's visual cortex.

The encoded information is then extracted from the brain activity of the second user and the PC can decipher whether a zero or a one was transmitted. This shows true brain-to-brain activity.
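The steps above can be summarized as a simple pipeline. The sketch below is purely illustrative and is not the Southampton team's software; the function names, the 10 Hz and 12 Hz flicker frequencies, and the noise-free decoding are assumptions made only to show the flow of information:

```python
def encode_as_imagined_movement(bit):
    # Person 1 imagines left-arm movement for 0, right-arm movement for 1;
    # a motor-imagery BCI classifies their EEG into the corresponding bit.
    return "left arm" if bit == 0 else "right arm"

def led_flicker_hz(bit, f_zero=10.0, f_one=12.0):
    # Person 2's PC flashes an LED at one of two frequencies, one per bit value.
    return f_zero if bit == 0 else f_one

def decode_from_visual_cortex(flicker_hz, f_zero=10.0, f_one=12.0):
    # Electrodes over person 2's visual cortex pick up a response at the
    # flicker frequency; the PC decides which bit was transmitted.
    return 0 if abs(flicker_hz - f_zero) < abs(flicker_hz - f_one) else 1

message = [0, 1, 1, 0, 1]
received = []
for bit in message:
    _imagery = encode_as_imagined_movement(bit)   # person 1, via EEG amplifier
    flicker = led_flicker_hz(bit)                 # relayed over the internet
    received.append(decode_from_visual_cortex(flicker))
print(received == message)                        # True in this idealized sketch
```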






All Optical Plasmonic Computers On Track before 2020


European researchers have demonstrated some of the first commercially viable plasmonic devices, paving the way for a new era of high-speed communications and computing in which electronic and optical signals can be handled simultaneously. The Plasmocom technology can create plasmonic devices using existing commercial lithography techniques.

Zayats notes that interest in the team’s work has been extensive within both academia and industry, evidenced by the success of a workshop in June in Amsterdam attended by representatives of several photonics and electronics firms, including NEC and Panasonic.

“I think that we will start to see this technology make its way into commercial applications over the next five to ten years,” Zayats says. “A key breakthrough will be using plasmonics for inter-chip communication, making it possible to transmit data between one or more chips at optical speeds and eliminating a major bottleneck to faster computers.”


Photonics has more information in "Building better optical components using plasmonics"




The pioneering devices, which are expected to lead to commercial applications within the next decade, make use of electron plasma oscillation to transmit optical and electronic signals along the same metal circuitry via waves of surface plasmon polaritons. In contrast, signals in electronic circuits are transmitted by electrons, while photons are used to carry data in optical systems.

Up until this work, an all-optical computer would have been too big (for the last five years the minimum size would have been about as big as an oven) and plasmonic communication only worked over short distances.

Current commercial optical ring resonators have a radius of up to 300 micrometres; the plasmonic demonstrator built by the Plasmocom team measured just five micrometres. The new devices are 60 times smaller.

Plasmonic data transmission functions on the basis of oscillations in the electron density at the boundary of two materials: a dielectric (non-conductive) material, such as a polymer, and a metal surface. By exciting the electrons with light it is possible to propagate high-frequency waves of plasmons along a metal wire or waveguide, thus transmitting a data signal. However, in many cases the signal dissipates after only a few micrometres – far too short to interconnect two computer chips, for example.

The Plasmocom team took a novel approach, developing what they called dielectric-loaded surface plasmon polariton waveguides (DLSPPW). By patterning a layer of polymer (polymethyl methacrylate) dielectric onto a gold film supported by a glass substrate, they were able to achieve waveguides that were only 500 nanometres in size while extending the signal propagation distance.

Using this approach, the researchers built a variety of plasmonic devices, including low-loss S bends, Y-splitters and a waveguide ring resonator, a crucial part of the add-drop multiplexers (ADM) in optical networks that combine and separate several streams of data into a single signal and vice versa.


Earth Mars Communication with Ion Thruster Satellites in B Orbits


Caption: This is an end-on view of an alternative Mars/Earth communication relay architecture option, looking into the Ecliptic plane. Credit: ESA/University of Strathclyde/University of Glasgow

According to the paper, "Non-Keplerian Orbits Using Low Thrust, High ISP Propulsion Systems," an innovative solution to the Mars communication problem may be found by placing a pair of communication relay satellites into a very special type of orbit near Mars: a so-called 'B-orbit' (in contrast to an 'A-orbit', based on natural orbital laws).

Ion thrusters, powered by solar electricity and using tiny amounts of xenon gas as propellant, would hold the satellites in a B-orbit in full view of both Mars and Earth. The satellites could then relay radio signals throughout the Mars–Earth conjunction season, ensuring that astronauts at Mars were never out of touch with Earth.




The research was only the first step in understanding the complex details of such a mission. A lot more work must be done to understand in detail how the satellites have to apply the thrust - for example, taking into account the natural eccentricity of the Martian orbit. Also, failure scenarios must be studied, to have a back-up plan in case one of the ion thrusters failed. In addition, as part of their research, they catalogued other possible mission profiles.

One example would be to use continuous thrust to create a fixed, virtual 'truss' between two spacecraft perpendicular to their flight direction. It would be like having the two spacecraft connected by a fixed bar or rod; this could be useful for certain applications.

Another example would be to hover near one of the Earth-Sun system Lagrange points. NASA studied just such a mission profile, called GeoStorm, back in the 1990s with a view to stationing a satellite closer to the Sun than the L1 Lagrange point so as to provide improved early warning of magnetic storms caused by solar coronal mass ejections. Such a mission would have used a solar wind sail for its thrust, but it could also be done using ion propulsion, which can offer control advantages compared to solar sails; this must be studied further.


There's still lots to be done, but this research will help pave the way for future robotic missions to places we've never been or for a human mission to Mars.



Digital Rosetta Stone for Digital Storage for 1000 years



Tadahiro Kuroda, an electrical engineering professor at Keio University in Japan, has invented what he calls a "Digital Rosetta Stone," a wireless memory chip sealed in silicon that he says can store data for 1,000 years.

Currently, long-term data storage requires migration: data typically has to be moved onto new storage systems every 20 years or less for it to remain accessible, and the digital migration costs time and money. Storing and maintaining a digital master of a very high-resolution movie, for example, costs $12,500 a year; archiving a standard film costs $1,000 a year.


Kuroda's method: Instead of moving data as electrons through wires, as occurs in standard semiconductors, Kuroda’s sealed stack of wafers allows information to be beamed wirelessly on radio waves. This is a variation on radio-frequency identification technology, used in everything from scannable passports to inventory tracking. A single wafer, or "reader," is used to read out data wirelessly, in the same way information in your car’s E-ZPass is extracted when you go through a toll booth. "A hard disk may crash one day," says Kuroda. "If you replace it with a semiconductor device, there are no mechanics to fail."


Kuroda's work is more near term than using reversible mass transport nanotubes to create super high density billion year memory.



Kuroda needs $1 million to build a working model of the 312-gig archive. So far he and his students have developed the small memory chip with support from the Japan Science & Technology Agency. If he can get them built, Kuroda says, one of his memory chips would cost $625.
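A very rough cost comparison implied by the figures above (a sketch; the assumption that one $625 chip could stand in for the yearly archiving cost of one standard film is mine, made only to illustrate the scale of the numbers):

```python
# Figures quoted above.
standard_film_archive_cost_per_year = 1_000   # USD per year, migration approach
rosetta_chip_cost = 625                       # USD, one-time cost per chip

years_to_break_even = rosetta_chip_cost / standard_film_archive_cost_per_year
print(years_to_break_even)   # 0.625 -- under a year, versus a 1,000-year design life
```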


Direct Computer Memory Digital Pictures Accelerates Progress to Cheap Gigapixel Cameras

New Scientist reports that Technical University of Delft researchers have found that if light arriving on an exposed memory chip is carefully focused, the charge stored in every cell corresponds to whether that cell is in a light or dark area. The chip is in effect storing a digital image. This technique can increase the pixel resolution by 100 times over current camera technology.

A memory chip needs none of the current conversion circuitry used in current digital cameras, as it creates digital data directly. As a result, says Vetterli, the memory cell will always be 100 times smaller than CMOS sensor cells; it is bound to be that way because of the sheer number of signal-conditioning transistors the CMOS sensor needs around each pixel. For every pixel on one of today's sensors, the memory-based sensor could have 100 pixels. A chip the size of a 10-megapixel camera sensor will have 100 times as many sensing cells if implemented in memory technology.

A gigapixel camera based on this will still take some work. Unlike the pixels in a conventional sensor, which record a greyscale, the cells in Charbon's memory-chip sensor are simple on-off devices: they can only store a digital 0 or 1, which reads as either light or dark. To build a sensor that can record shades of grey, EPFL engineer Feng Yang, who presented the Kyoto paper, is developing a software algorithm that looks across an array of 100 pixels to estimate their overall greyscale value.
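A minimal sketch of that reconstruction idea (not the EPFL algorithm itself; the 10x10 block size and the simple block averaging are assumptions made for illustration):

```python
import numpy as np

def binary_exposure(scene, rng):
    # Each tiny binary pixel fires (1) with probability equal to the local
    # light intensity in [0, 1]; otherwise it stays dark (0).
    return (rng.random(scene.shape) < scene).astype(np.uint8)

def reconstruct_greyscale(binary_image, block=10):
    # Estimate each grey-level pixel by averaging a block x block tile of
    # binary pixels, trading spatial resolution for bit depth.
    h, w = binary_image.shape
    tiles = binary_image[:h - h % block, :w - w % block]
    tiles = tiles.reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))

rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.0, 1.0, 200), (200, 1))   # simple intensity ramp
grey = reconstruct_greyscale(binary_exposure(scene, rng))
print(grey.shape)   # (20, 20): 100 binary pixels pooled into each grey pixel
```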

They hope to have a gigavision memory chip fabricated late this year and working early next.


Gigapixel cameras arriving sometime in the 2009-2015 time range is a prediction that I made in 2006.

It was the eighth prediction in the computing section of 156 technology predictions made in 2006




The gigavision camera

We propose a new image device called gigavision camera. The main differences between a conventional and a gigavision camera are that the pixels of the gigavision camera are binary and orders of magnitude smaller. A gigavision camera can be built using standard memory chip technology, where each memory bit is designed to be light sensitive. A conventional gray level image can be obtained from the binary gigavision image by low-pass filtering and sampling. The main advantage of the gigavision camera is that its response is non-linear and similar to a logarithmic function, which makes it suitable for acquiring high dynamic range scenes. The larger the number of binary pixels considered, the higher the dynamic range of the gigavision camera will be. In addition, the binary sensor of the gigavision camera can be combined with a lens array in order to realize an extremely thin camera. Due to the small size of the pixels, this design does not require deconvolution techniques typical of similar systems based on conventional sensors.



Image Reconstruction in the Gigavision Camera (8 page pdf)

More Conventional 18 Megapixel Cameraphones

Silicon Image, Inc. introduced an 18-Megapixel image signal processor Intellectual Property (IP) core. The company claims that the new IP core, called "camerIC-18," supports resolutions ranging from 5MP to 18MP. The IP can "effectively place high-performance digital still camera features in mobile phones."

Standalone image signal processor (ISP) vendors like Zoran, applications processor suppliers like Texas Instruments, and SoC companies such as Samsung Electronics and NEC have been working toward that goal.

Their design options range from merging the ISP with CMOS image sensors (Samsung), to creating a discrete ISP chip (Zoran), making the ISP part of an application processor (TI), or integrating the ISP inside a baseband chip for a mobile phone.


Super high resolution Camcorders too

What Silicon Image hopes to offer with its camerIC-18 IP core is imaging bandwidth to support HD, 3D, 4K and higher-resolution video camcorder ISP functions.

A 4K resolution camcorder design incorporating a camerIC-18 IP core and running at 30 frames per second will only require about 700k gates to be implemented in hardware, consuming as little as 125mW of power.

"Only 30 million instructions per second of CPU time are required to support this hardware design," claimed Richter, "making the camerIC-18 IP core one of the industry's highest performing, lowest cost, lowest power consumption camera processors."


Graphene domes, graphene nanostars, Superconductor like behavior in Graphene


(Left) Carbon dome structure on an iridium substrate, indicating the tightly bound atoms at the edge of the island (C), the weakly bound atoms at the center (B) and intermediate atoms (A). (Right) Photoelectron spectrum at 970 K showing the contributions from the edge (C), center (B), and intermediate (A) atoms in the dome structure. (In this simplified schematic diagram, the dome structure is shown to consist of three types of carbon atoms; the actual structure is more complex.)

1. Photoelectron spectroscopy data suggest that, en route to forming continuous sheets, graphene islands grow on an iridium surface in the form of microscopic domes. According to their model, the domes consist of circular islands of graphene that are attached via strong chemical bonds to the close-packed iridium surface at the island’s perimeter, but are not chemically attached in the center. The islands grow by attaching atoms and smaller islands to their edges.

Harnessing the potential of graphene in electronic devices requires a method for producing it in sizes larger than available graphite crystals, and surface scientists have been concentrating on developing methods of producing large-scale and perfect graphene films.

The results suggest that the Ir(111) surface catalyzes the growth at the edges of the graphene islands, and that hydrogen does not play any active role in the formation of the graphene films. The picture presented is one where free carbon atoms and small carbon clusters that result from the hydrocarbon decomposition diffuse on the surface to form the islands. It will be very interesting to explore the release of hydrogen as the hydrocarbon molecules react to form graphene. This process would leave islands that are hydrogen terminated without dangling bonds that bind to the metallic surface, producing flat graphene even for small islands. If controlled experiments can produce films having different amounts of hydrogen, then the ability to fabricate films that include both conducting graphene and insulating “graphane”—a two-dimensional layer of covalently bonded hydrocarbon—cannot be far behind. With this ability, the possibilities of producing graphene electronic devices, controlling the work function and optical properties of these materials are endless. Graphene layers might then replace much thicker layers currently used in electronic microcircuits, making future computers and other devices more compact and energy efficient.





2. Fractional quantum Hall effect and insulating phase of Dirac electrons in graphene

New findings, previously considered possible by physicists but only now being seen in the laboratory, show that electrons in graphene can interact strongly with each other. The behavior is similar to superconductivity observed in some metals and complex materials, marked by the flow of electric current with no resistance and other unusual but potentially useful properties. In graphene, this behavior results in a new liquid-like phase of matter consisting of fractionally charged quasi-particles, in which charge is transported with no dissipation.

In graphene, which is an atomic layer of crystalline carbon, two of the distinguishing properties of the material are the charge carriers' two-dimensional and relativistic character. The first experimental evidence of the two-dimensional nature of graphene came from the observation of a sequence of plateaus in measurements of its transport properties in the presence of an applied magnetic field. These are signatures of the so-called integer quantum Hall effect. However, as a consequence of the relativistic character of the charge carriers, the integer quantum Hall effect observed in graphene is qualitatively different from its semiconductor analogue. As a third distinguishing feature of graphene, it has been conjectured that interactions and correlations should be important in this material, but surprisingly, evidence of collective behaviour in graphene is lacking. In particular, the quintessential collective quantum behaviour in two dimensions, the fractional quantum Hall effect (FQHE), has so far resisted observation in graphene despite intense efforts and theoretical predictions of its existence. Here we report the observation of the FQHE in graphene. Our observations are made possible by using suspended graphene devices probed by two-terminal charge transport measurements. This allows us to isolate the sample from substrate-induced perturbations that usually obscure the effects of interactions in this system and to avoid effects of finite geometry. At low carrier density, we find a field-induced transition to an insulator that competes with the FQHE, allowing its observation only in the highest quality samples. We believe that these results will open the door to the physics of FQHE and other collective behaviour in graphene.


2 pages of Supplemental information

3. Chemical engineers say they have discovered graphene is more useful in electronics applications if a gold ion solution is used as a growth catalyst.

Graphene-gold based DNA sensors will have enhanced sensitivity.

October 15, 2009

Resonant interband tunneling diodes made with Chemical Vapor Deposition

Researchers at Ohio State University have discovered a way to make quantum devices using technology common to the chip-making industry today. The team fabricated a device called a tunneling diode using the most common chip-making technique, called chemical vapor deposition. Manufacturers could potentially fabricate quantum devices directly on a silicon chip, side-by-side with their regular circuits and switches. Resonant interband tunneling diodes (RITDs) could be used for ultra-low-power computer chips operating with small voltages and producing less wasted heat. They could also be used for imaging applications.

The quantum device in question is a resonant interband tunneling diode (RITD) -- a device that enables large amounts of current to be regulated through a circuit, but at very low voltages. That means that such devices run on very little power.

RITDs have been difficult to manufacture because they contain dopants -- chemical elements -- that don’t easily fit within a silicon crystal.

Atoms of the RITD dopants antimony or phosphorus, for example, are large compared to atoms of silicon. Because they don’t fit into the natural openings inside a silicon crystal, the dopants tend to collect on the surface of a chip.

They discovered that RITD dopants could be added during chemical vapor deposition, in which a gas carries the chemical elements to the surface of a wafer many layers at a time. The key was determining the right reactor conditions to deliver the dopants to the silicon.

“One key is hydrogen,” he said. “It binds to the silicon surface and keeps the dopants from clumping. So you don’t have to grow chips at 320 degrees Celsius [approximately 600 degrees Fahrenheit] like you do when using molecular beam epitaxy. You can actually grow them at a higher temperature like 600 degrees Celsius [more than 1100 degrees Fahrenheit] at a lower cost, and with fewer crystal defects.”

Tunneling diodes are so named because they exploit a quantum mechanical effect known as tunneling, which lets electrons pass through thin barriers unhindered.

In theory, interband tunneling diodes could form very dense, very efficient micro-circuits in computer chips. A large amount of data could be stored in a small area on a chip with very little energy required.




Researchers judge the usefulness of tunneling diodes by the abrupt change in the current densities they carry, a characteristic known as “peak-to-valley ratio.” Different ratios are appropriate for different kinds of devices. Logic circuits such as those on a computer chip are best suited by a ratio of about 2.

The RITDs that Berger’s team fabricated had a ratio of 1.85. “We’re close, and I’m sure we can do better,” he said.

He envisions his RITDs being used for ultra-low-power computer chips operating with small voltages and producing less wasted heat.

RITDs could form high-resolution detectors for imaging devices called focal plane arrays. These arrays operate at wavelengths beyond the human eye and can permit detection of concealed weapons and improvised explosive devices. They can also provide vision through rain, snow, fog, and even mild dust storms, for improved airplane and automobile safety, Berger said. Medical imaging of cancerous tumors is another potential application.


Exoskeletons, Power Loaders and Morphing Robots

1. Activelink, a subsidiary of Panasonic, is working on power-enhancing partial exoskeletons. These include dual-arm power loaders (like the loader in the movie Aliens), which will help someone lift 100 kilograms or more. They will not be ready for commercial use before 2015.



Activelink also has powerwalker amplifiers, which are similar to springwalkers, power stilts and powerbocking devices.





2. The chembot, a morphing robot from DARPA, iRobot and the University of Chicago, was introduced at IROS 09 (International Conference on Intelligent Robots and Systems). It changes the shape of its stretchy polymer skin using a technique called "jamming skin enabled locomotion": different sections of the robot inflate or deflate separately, and controlling the inflation and deflation enables the robot to move.



3. The Cyberdyne HAL-5 exoskeleton is now starting to be made in quantities of hundreds.



Sankai, who is Cyberdyne’s CEO, expects to supply 80 to 90 suits in Japan in October. At the end of September, 10 sets of HAL suits will be delivered to Denmark to be used by nurses who care for elderly people.



4. Berkeley Bionics, who make the HULC exoskeleton, have been funded to develop a medical exoskeleton.

Develop a set of technologies that will enable smart, powered, exoskeleton orthotic systems for individuals with limited mobility due to neurological or muscular disorders. ($2,600,000 from Nov 2007 to Oct 2010)

Berkeley Bionics argues that "smart exoskeletons" could replace wheelchairs for many patients for hours at a time, enabling patients who cannot now walk to regain a degree of walking mobility, and to retard the onset of a wide range of secondary disabilities associated with the long-term use of wheelchairs. The proposed system would incorporate several innovations, including a compact, on-board power regeneration system to greatly extend on-board battery life, an advanced control system and user interface to tailor the amount of motive assistance provided to the patient's needs, and a non-constraining, lightweight structural design that is easy for patients to put on and take off with minimal assistance. Solving these problems will open up large new international markets in orthotic exoskeletons, greatly improving the medical situation and quality of life for a large number of wheelchair-bound patients. These technologies also could be adapted for practical, affordable exoskeletons for industrial work, saving thousands of workers from costly back injuries.





5. DARPA has a jumping robot that can jump over obstacles more than 25 feet high.



6. DARPA has a remote controlled cyborg beetle.





Nanoantennas Could Enable Future Terabit Wireless Optical Quantum Communication



For high-frequency electromagnetic light waves in the frequency range of several hundred thousand gigahertz (500,000 GHz corresponds to yellow light with a 600 nm wavelength), one needs very small antennas, no larger than half a light wavelength, thus at most about 350 nanometers (1 nanometer = 1 millionth of a millimeter). The scientists of the DFG Heisenberg Nanoscale Science group used an electron beam to make gold nanoantennas of 70-300 nanometers. The results were published in the journal Nanotechnology (Nanotechnology 20 (2009) 425203).
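As a quick check of the numbers quoted above (a back-of-the-envelope calculation, not taken from the paper):

$$ \lambda = \frac{c}{f} = \frac{3\times 10^{8}\,\text{m/s}}{5\times 10^{14}\,\text{Hz}} = 600\,\text{nm}, \qquad \frac{\lambda}{2} = 300\,\text{nm} $$

so a half-wave antenna for 500 THz yellow light is about 300 nanometers long, within the 70-300 nanometer range of the fabricated antennas.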

The nano-antennas could be used for future terabit information transfer, but also as tool for the optical microscopy.

EETimes has coverage as well

The wavelengths the antennas are designed for correlate with frequencies of 500 THz and more. Of course, no semiconductor elements are available to drive these antennas. For this reason, they are excited with white light; each antenna gets into resonance for its specific frequency (= light color), forming a frequency multiplex broadband array for data transmission with data rates 10 thousand times higher than existing wireless broadband technologies, the institute claims. Modulation of the light beams is achieved by application of the superposition principle, Eisler explained.

Since no semiconductors are available to drive the antennas electrically, the research group focuses on unconventional methods to transmit and receive data. "Actually we should develop nano switching elements that make use of quantum technology", Eisler said. While quantum computers still are in a very early stage of development, a technical use of these optical antennas could be possible within five to ten years, Eisler believes.




Nanoengineering and characterization of gold dipole nanoantennas with enhanced integrated scattering properties

The full paper is available for 30 days with a free registration to iop.org.

In this paper we present our approach for engineering gold dipole nanoantennas. Using electron-beam lithography we have been able to produce arrays of single gold antennas with dimensions from 70 to 300 nm total length with a highly reproducible nanoengineering protocol. Characterizing these gold nanoantenna architectures by optical means via dark-field microscopy and scattering spectroscopy gives the linear optical response function as a figure-of-merit for the antenna resonances, spectral linewidth and integrated scattering intensity. We observe an enhanced integrated scattering probability for two arm gold dipole nanoantennas with an antenna feed gap compared to antennas of the size of one arm without a gap.

We have presented a nanoengineering e-beam lithography fabrication protocol that enables us to fabricate reliably two arm and single arm gold dipole nanoantennas with feature sizes as small as 20 nm. For the smallest gold antennas, we measured the linear optical scattering response function by systematically varying the antenna dimensions. We have characterized those resonance scattering spectra as damped Lorentzian oscillators. From this model, we have been able to extract the resonance energy peak position, the spectral width and the relative scattering intensity. We found that a nanoantenna feed gap with smaller than 30 nm synergistically combines the near-field coupling of two antenna arms of length Larm each, creating localized electromagnetic hot spots at a well-chosen subwavelength volume, with exceptionally enhanced far-field photon scattering probabilities. The physical volume does not explain the enhanced photon scattering probability completely when classical scattering theory is applied. We thus speculate that the nanoantenna feed gap as an electromagnetic hot spot may act as an additional dipole source that highly perturbs the internal field of the gold nanoantenna and effectively contributes to the enhanced far-field scattering intensity of visible photons. Quantitative near-field experiments at subwavelength volume are needed to characterize the antenna feed gap and its contribution to the far-field scattering response as well as to the near-field localization capability that go hand in hand with detailed topography information in those nanogaps. Moreover, the dipole allowed longitudinal eigenmode of nanoantennas with and without antenna feed gap needs to be considered in detail.


China Buys Two 880 MWe Fast Neutron Nuclear Reactors from Russia


A high-level agreement has been signed for Russia to start pre-project and design works for two commercial 880 MWe fast neutron reactors in China. Breeder reactors burn more of the nuclear fuel (uranium). The BN-800 has a fuel burn-up of 70-100 GWd/t (gigawatt-days per ton); the theoretical maximum fuel burn-up is about 950 GWd/t, and current reactors achieve a burn-up of 30-60 GWd/t.

Russia is building the BN-800 fast reactor at Beloyarsk, which is due to start up in 2012, and has operated a 600 MWe fast reactor (the BN-600) since 1980.

The deal with China is the first time commercial fast reactors will have been exported.
Pictures and information on the BN800 from the coal2nuclear page on the BN800 reactor













Construction is under way on Beloyarsk-4 which is the first BN-800 from OKBM, a new, more powerful (880 MWe) FBR, which is actually the same overall size as BN-600. It has improved features including fuel flexibility - U+Pu nitride, MOX, or metal, and with breeding ratio up to 1.3. However, during the plutonium disposition campaign it will be operated with a breeding ratio of less than one. It has much enhanced safety and improved economy - operating cost is expected to be only 15% more than VVER. It is capable of burning up to 2 tonnes of plutonium per year from dismantled weapons and will test the recycling of minor actinides in the fuel.

The Russian BN-600 fast breeder reactor - Beloyarsk unit 3 - has been supplying electricity to the grid since 1980 and is said to have the best operating and production record of all Russia's nuclear power units. It uses chiefly uranium oxide fuel, some enriched to over 20%, with some MOX in recent years. The sodium coolant delivers 550°C at little more than atmospheric pressure. Russia plans to reconfigure the BN-600 by replacing the fertile blanket around the core with steel reflector assemblies to burn the plutonium from its military stockpiles and to extend its life beyond the 30-year design span.







Background information on fast neutron reactors.

India should have a fast breeder reactor operating in 2011.
A 500 MWe prototype fast breeder reactor (PFBR) is under construction at Kalpakkam and is expected to be operating in 2011, fuelled with uranium-plutonium oxide or carbide. It will have a blanket with thorium and uranium to breed fissile U-233 and plutonium respectively. Initial FBRs will have mixed oxide fuel but these will be followed by metallic fuelled ones to enable shorter doubling time.


FURTHER READING
BN800 status from Aug 20, 2009

BN-800 as a New Stage in the Development of Sodium Cooled Fast Reactors

60 Tesla Superconducting Magnets Would Allow Tests of Gravitational Field Propulsion

Superconducting magnets have been built with 33.8 Tesla fields and 45-70 Tesla superconducting magnets appear likely to be developed over the next two years.

Magnets at 60 Tesla field strength will enable testing of gravitational field propulsion.

The so-called "hyperdrive" concept won the 2005 American Institute of Aeronautics & Astronautics award for the best nuclear and future flight paper.

The basic concept is this: according to the paper's authors - Jochem Häuser, a physicist and professor of computer science at the University of Applied Sciences in Salzgitter and Walter Dröscher, a retired Austrian patent officer - if you put a huge rotating ring above a superconducting coil and pump enough current through the coil, the resulting large magnetic field will "reduce the gravitational pull on the ring to the point where it floats free".

The origins of this "repulsive anti-gravity force" and the hyperdrive it might power lie in the work of German scientist Burkhard Heim, who - as part of his attempts to reconcile quantum mechanics and Einstein's general theory of relativity - formulated a theoretical six-dimensioned universe by bolting on two new sub-dimensions to Einstein's generally-accepted four (three space, one time).


Dröscher teamed up with Häuser to produce the award-winning "Guidelines For a Space Propulsion Device Based on Heim's Quantum Theory."

Dröscher and Häuser's proposed practical experiment to prove their theory requires "a magnetic coil several metres in diameter capable of sustaining an enormous current density" - something which the majority of engineers say is "not feasible with existing materials and technology".

So, Mars in three hours? As NS puts it: "Dröscher is hazy about the details", but "suggests that a spacecraft fitted with a coil and ring could be propelled into a multidimensional hyperspace" where "the constants of nature could be different, and even the speed of light could be several times faster than we experience". Then, he says, a quick three-hour jaunt to Mars would indeed be on the cards.


Big 60 Tesla superconducting magnets appear to be becoming feasible, and small 60 Tesla superconducting magnets could come soon, which would be enough to test the theory. The superconducting magnets also need to have higher current densities. Scaling up the magnets would involve making longer lengths of superconducting wire, and more work and research is needed to increase the current density.

The 2005 version of the propulsion papers suggests that 30 Tesla magnets are a starting point for tests.

A transition into parallel space requires a magnetic induction of some 30 T and torus material different from hydrogen. The number of turns of the magnetic coil is denoted by n, the magnetic induction is given in Tesla, and the current through the coil is 100 A, except for the last row where 250 A were used. The mass of the rotating torus is 100 kg, its thickness, d (diameter), is 0.05 m, and its circumferential speed is 10^3 m/s. The wire cross section is 1 mm^2. The meaning of the probability amplitude is given in the text. For instance, if a larger spacecraft of 10^5 kg with a rotating ring of 10^3 kg needs to have a constant acceleration of 1g, a magnetic induction μ0H of some 13 T is needed together with a current density of 100 A/mm^2 and a coil of 4×10^5 turns, for a value N w_gpe = 4.4×10^-5. The resulting force would be 10^6 N. Thus a launch of such a spacecraft from the surface of the earth seems to be technically feasible.




A 2004 paper compares the GME I (Tajmar) and GME II gravitomagnetic propulsion experiments (28 page pdf)



Gravitational Field Propulsion by
Walter Dröscher, Jochem Hauser, 2009 (20 page pdf)


Current space transportation systems are based on the principle of momentum conservation of classical physics. Therefore, all space vehicles need some kind of fuel for operation. The basic physics underlying this propulsion principle severely limits the specific impulse and/or available thrust. Launch capabilities from the surface of the Earth require huge amounts of fuel. Hence, space flight, as envisaged by von Braun in the early 50s of the last century, will not be possible using this concept. Only if novel physical principles are found can these limits be overcome. Gravitational field propulsion is based on the generation of gravitational fields by man made devices. In other words, gravity fields should be experimentally controllable. At present, it is believed that there are four fundamental interactions in physics: strong (nuclei), weak (radioactive decay), electromagnetic and gravitational. As experience has shown for the last six decades, none of these physical interactions is suitable as a basis for novel space propulsion. None of the advanced physical theories, like string theory or quantum gravity, go beyond the four known interactions. On the contrary, recent results from causal dynamical triangulation simulations indicate that wormholes in spacetime do not seem to exist, and thus even this type of exotic space travel may well be impossible. However, recently, novel physical concepts were presented that might lead to advanced space propulsion technology, based on two novel fundamental force fields. These forces are represented by two additional long range gravitational-like force fields that would be both attractive and repulsive, resulting from interaction of gravity with electromagnetism. A propulsion technology, based on these novel long range fields, would be working without propellant. The current theoretical and experimental concepts pertaining to the novel physics of these gravity-like fields are discussed together with recent gravitomagnetic experiments performed at ARC Seibersdorf (2008). The theoretical concepts of Extended Heim Theory, EHT, are employed for the explanation of these experiments.


FURTHER READING
Gravity Modification blog from the University of Minnesota

High Performance Computing and Communication for Space website, which has most of the papers on Extended Heim Theory collected.

October 14, 2009

Climate Change Mitigation by Reducing CO2 - Blog Action Day 2009

There will Definitely Be a Lot of CO2 Generated for Energy Production for Decades

There will be plenty of natural gas and coal for many decades to centuries. Unconventional natural gas and underground coal gasification are likely to provide affordable fossil fuel for a long time. The THAI (Toe to Heel Air Injection) oil recovery process and multi-fracture horizontal drilling will ensure more supplies of conventional oil. Civilization will continue to generate a lot of CO2: about 30 billion tons per year now, and likely more in the future. In the IEO2009 (International Energy Outlook) reference case, world energy-related carbon dioxide emissions grow from 29.0 billion metric tons in 2006 to 33.1 billion metric tons in 2015.

Besides Reducing CO2, Other Mitigation Steps Can be Taken

Reducing abrupt climate change risk using the Montreal Protocol and other regulatory actions to complement cuts in CO2 emissions

BC (Black Carbon or soot) is an aerosol and is among the particle components emitted from the incomplete combustion of fossil fuels and biomass. Particulates from coal and diesel also cause about a million premature deaths each year. BC is estimated to be the second or third largest warming agent, although there is uncertainty in determining its precise radiative forcing. BC can be reduced by approximately 50% with full application of existing technologies by 2030, primarily by reducing diesel emissions and improving cook stoves. Wallack and Ramanathan estimate that it may be possible to offset the warming effect of one to two decades of CO2 emissions by reducing BC by 50% using existing technologies.

In 2000, there were 6,800 container ships in the world. At the Cold War peak, the Soviets had built or nearly completed about 400 nuclear-powered ships, and the USA had over 200. One large container ship can pollute as much as 50 million cars.

Converting all commercial shipping to nuclear power would be a more logistically achievable goal than electrifying all cars. Commercial shipping releases half as much particulate pollution as all of the world's cars.

Carbon Sequestration is Expensive and Would Take Decades to Have a Major Impact
Carbon sequestration runs at a few million tons per year now. Canada is planning a $2 billion/year project to sequester 5 million tons of CO2 per year by 2015, and then a $3 billion/year project to sequester 30 million tons of CO2 per year. That works out to roughly $400 per ton sequestered, falling to about $100 per ton in the larger project. Norway is planning to become carbon neutral by sequestering 50 million tons per year by 2020. An MIT study estimated that sequestering the CO2 generated from coal plants in the USA by 2050 would take 11,000 to 23,000 miles of dedicated pipeline.
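The per-ton figures above are just each project's annual cost divided by its annual tonnage; a minimal check in Python, using only the numbers quoted:

# Implied cost per ton of CO2 sequestered, from the Canadian project figures above.
projects = {
    "Canada phase 1 (5 Mt/yr by 2015)": (2e9, 5e6),    # ($/year, tons CO2/year)
    "Canada phase 2 (30 Mt/yr)":        (3e9, 30e6),
}
for name, (dollars_per_year, tons_per_year) in projects.items():
    print(f"{name}: ${dollars_per_year / tons_per_year:,.0f} per ton of CO2")
# -> roughly $400/ton, falling to $100/ton, as stated above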

The technology for removing CO2 from the atmosphere is improving.
Carbon Sciences and companies like it could recycle a lot of CO2 directly into fuel. Recycled CO2 could displace fresh CO2 from fossil fuels taken out of the ground. CO2-derived fuel will also take a long time to scale up to significant levels.

How Can We Have a Significant Impact on CO2 Over the Next Ten Years and Beyond?
The Gigaton Throwdown is an initiative to encourage investors, entrepreneurs, business leaders, and policy makers to “think big” to massively scale clean energy during the next 10 years.

The USA avoids 700 million tons of CO2 per year from the 800 billion kWh of nuclear power generated by its standard nuclear plants.

1. A program to accelerate the research and development of annular fuel [ultra-uprates] (MIT, Westinghouse) to allow a 50% power increase to existing nuclear reactors, beyond the traditional power uprates of up to 20%. This could be achieved with research budget allocation and policy changes to ensure prompt deployment. Full deployment in the United States would avoid about 300 million tons of CO2/year (boiling water reactors get about a 30% boost). Full deployment worldwide would avoid 1 billion tons of CO2/year.
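That estimate can be roughly reproduced from the figures above. A minimal sketch, assuming (this split is not from the article) that about two thirds of US nuclear output comes from pressurized water reactors eligible for the full 50% uprate and one third from boiling water reactors limited to the 30% boost:

# Rough check of the CO2 avoided by full US deployment of annular-fuel ultra-uprates.
# Assumption (not from the article): ~2/3 of US nuclear output is from PWRs, ~1/3 from BWRs.
us_nuclear_kwh = 800e9                      # kWh/year from existing US plants (figure above)
co2_avoided_now = 700e6                     # tons CO2/year avoided by that generation (figure above)
tons_co2_per_kwh = co2_avoided_now / us_nuclear_kwh    # ~0.875 kg CO2 avoided per kWh displaced

fleet_uprate = (2 / 3) * 0.50 + (1 / 3) * 0.30         # ~43% average power increase
extra_avoided = us_nuclear_kwh * fleet_uprate * tons_co2_per_kwh
print(f"Extra CO2 avoided: about {extra_avoided / 1e6:.0f} million tons/year")
# -> on the order of 300 million tons/year, consistent with the estimate above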



Annular fuel ultra uprate economics are discussed in this nextbigfuture article

The technical specifics of the MIT research on annular fuel are summarized in this nextbigfuture article

2. The USA needs to adopt the Idaho National Laboratory plan for conventional nuclear reactors, which would speed the build-out of nuclear reactors. China is adding 86 GWe of new nuclear power between now and 2020. The US can accelerate its buildout of nuclear power plants (currently on track for 4-8 new reactors by 2020); politically feasible fast-tracking would be about 10 nuclear reactors.

Stretch Goals:
1. Life extension of the current fleet beyond 60 years (e.g., what would it take to extend all lives to ~80 years?); and
2. Strong, sustained expansion of ALWRs throughout this century (e.g., what would it take to proceed uninterrupted from first new plant deployments in ~2015 to sustained build-rates approaching 10+/year?).

Achieving a build rate of 10 plants per year, which on a sustained basis equates to about 50 plants under construction at any point in time, will require substantial investment in workforce training and new or refurbished manufacturing capability.
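The 50-plants figure is just Little's law: plants under construction = completion rate × average construction time. A minimal sketch, assuming roughly a five-year build time per plant (the build time is an assumption, implied by the figures above):

# Little's law applied to a sustained nuclear build program.
completion_rate = 10        # plants finished per year (INL stretch goal above)
build_time_years = 5        # assumed average construction time per plant
plants_in_progress = completion_rate * build_time_years
print(f"Plants under construction at steady state: {plants_in_progress}")   # ~50, as stated above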


3. Develop factory mass-produced deep burn nuclear reactors

The Aim High program to make factory mass produced Liquid fluoride thorium reactors to replace coal power worldwide.

A list of eleven fusion and fission technologies to develop.

In terms of transportation:

4. Deploy electric bikes (free, as in Amsterdam) along with electric buses/vans to ensure that people and the free electric vehicles have optimal logistics.

China makes and adds 20-30 million electric bikes and scooters each year. There are about 100 million pedal bike sales worldwide each year, and China has 450 million pedal bike users.

5. An X Prize program for the retrofitting of existing vehicles for fuel efficiency. Aerodynamic retrofits of existing vehicles can enable a 30% reduction in highway fuel usage. Prizes are needed for figuring out deployment approaches that make economic sense and that people will adopt.

Aeromodding cars for higher mileage

Researchers have achieved 15 to 18 percent reduction in drag by placing the actuators on the back surface of cars and trucks.

6. There is a computer system (developed in the UK by Sentience) that works with cruise control and GPS to allow proper computer-controlled/assisted acceleration and braking for 5-24% better fuel efficiency. Basically, computer-assisted hypermiling.

Policy could force aerodynamic and engine retrofits of high-mileage fleet vehicles like cabs.



7. Biochar sequestering
The fertile black soils in the Amazon basin suggest a cheaper, lower-tech route toward the same destination as carbon storage. Scattered patches of dark, charcoal-rich soil known as terra preta (Portuguese for "black earth") are the inspiration for an international effort to explore how burying biomass-derived charcoal, or "biochar," could boost soil fertility and transfer a sizeable amount of CO2 from the atmosphere into safe storage in topsoil.

Charcoal is traditionally made by burning wood in pits or temporary structures, but modern pyrolysis equipment greatly reduces the air pollution associated with this practice. Gases emitted from pyrolysis can be captured to generate valuable products instead of being released as smoke. Some of the by-products can be condensed into "bio-oil," a liquid that can be upgraded to fuels including biodiesel and synthesis gas. A portion of the noncondensable fraction is burned to heat the pyrolysis chamber, and the rest can provide heat or fuel an electric generator.

Pyrolysis equipment now being developed at several public and private institutions typically operates at 350–700°C. In Golden, Colorado, Biochar Engineering Corporation is building portable $50,000 pyrolyzers that researchers will use to produce 1–2 tons of biochar per week. Company CEO Jim Fournier says the firm is planning larger units that could be trucked into position. Biomass is expensive to transport, he says, so pyrolysis units located near the source of the biomass are preferable to larger, centrally located facilities, even when the units reach commercial scale.

Steiner and coauthors noted in the 2003 book Amazonian Dark Earths that the charcoal-mediated enhancement of soil caused a 280–400% increase in plant uptake of nitrogen.

Preliminary results in a greenhouse study showed that low-volatility [biochar] supplemented with fertilizer outperformed fertilizer alone by 60%.

Because the heat and chemical energy released during pyrolysis could replace energy derived from fossil fuels, the IBI calculates the total benefit would be equivalent to removing about 1.2 billion metric tons of carbon from the atmosphere each year. That would offset 29% of today’s net rise in atmospheric carbon, which is estimated at 4.1 billion metric tons, according to the Energy Information Administration.
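That 29% is simply the ratio of the two carbon figures quoted; a quick check (both numbers are tons of carbon, not CO2):

# Check of the IBI offset fraction quoted above.
biochar_benefit_tons_c = 1.2e9          # tons of carbon per year, IBI estimate
net_atmospheric_rise_tons_c = 4.1e9     # tons of carbon per year, EIA estimate
print(f"Fraction of net rise offset: {biochar_benefit_tons_c / net_atmospheric_rise_tons_c:.0%}")   # ~29%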




8. Regular Carbon Sequestration - How Much Can It Help?
The MIT Future of Coal 2007 report estimated that capturing all of the roughly 1.5 billion tons per year of CO2 generated by coal-burning power plants in the United States would generate a CO2 flow with just one-third of the volume of the natural gas flowing in the U.S. gas pipeline system.

The technology is expected to use between 10 and 40% of the energy produced by a power station.

In 2007, Jason Burnett, EPA associate deputy administrator, told USINFO: "Currently, about 35 million tons of CO2 are sequestered in the United States, primarily for enhanced oil recovery." Burnett added, "We expect that to increase, by some estimates, by 400-fold by 2100."

The Japanese government is targeting an annual reduction of 100 million tons in carbon dioxide emissions through CCS technologies by 2020.

Industrial-scale storage projects are in operation.
Sleipner is the oldest project (1996) and is located in the North Sea where Norway's StatoilHydro strips carbon dioxide from natural gas with amine solvents and disposes of this carbon dioxide in a deep saline aquifer. Since 1996, Sleipner has stored about one million tonnes CO2 a year. A second project in the Snøhvit gas field in the Barents Sea stores 700,000 tonnes per year.

The Weyburn project (started 2000) is currently the world's largest carbon capture and storage project. It is used for enhanced oil recovery with an injection rate of about 1.5 million tonnes per year. They are investigating how the technology can be expanded on a larger scale.

At the In Salah natural gas field in Algeria, CO2 is separated from the natural gas and re-injected into the subsurface at a rate of about 1.2 million tonnes per year.

Australia has a project to store 3 million tons per year starting in 2009. The Gorgon project, an add-on to an offshore Western Australian natural gas extraction project, is the largest planned CO2 storage project in the world. It will attempt to capture and store 3 million tonnes of CO2 per year for 40 years in a saline aquifer, commencing in 2009. It will cost ~$840 million.
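Adding up the industrial-scale projects listed here shows how small current and planned storage is relative to global emissions. A rough tally, using only the per-project rates quoted above and the ~30 billion tons/year of global CO2 cited earlier in this post:

# Annual storage rates of the CCS projects listed above (million tonnes CO2/year).
projects_mt_per_year = {
    "Sleipner (Norway, 1996)": 1.0,
    "Snohvit (Norway)": 0.7,
    "Weyburn (Canada, 2000)": 1.5,
    "In Salah (Algeria)": 1.2,
    "Gorgon (Australia, planned)": 3.0,
}
total_mt = sum(projects_mt_per_year.values())
global_emissions_mt = 30000       # ~30 billion tons CO2/year, figure cited earlier in the post
print(f"Combined storage: {total_mt:.1f} million tonnes/year")
print(f"Share of global CO2 emissions: {total_mt / global_emissions_mt:.3%}")   # ~0.02%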

CO2 capture from the air.

An EU-wide plan proposes €1.25bn for carbon capture at coal-fired power plants, with €1.75bn earmarked for better international energy links. The European Commission has proposed earmarking €1.25bn to kickstart carbon capture and storage (CCS) at 11 coal-fired plants across Europe, including four in Britain. The four British power stations (the controversial Kingsnorth plant in Kent, Longannet in Fife, Tilbury in Essex and Hatfield in Yorkshire) would share €250m under the two-year scheme.

Japan and China have a project that will cost 20 to 30 billion yen and will involve the participation of the Japanese public and private sectors, including JGC Corp. and Toyota Motor Corp. The two countries plan to bring the project into action in 2009. Under the plan, more than one million tons of CO2 annually from the Harbin Thermal Power Plant in Heilungkiang Province will be transferred to the Daqing Oilfield, about 100 km from the plant, where it will be injected and stored.

9. CO2 into Cement

Novacem is a company making cement from magnesium silicates that absorbs CO2 as it hardens. Ordinary cement adds a net 0.4 tons of CO2 per ton of cement, but this new cement would remove 0.6 tons of CO2 from the air per ton of cement. There is an estimated 10 trillion tons of magnesium silicate in the world; 0.6 tons of CO2 times 10 trillion tons of cement is 6 trillion tons of storage capacity. Humanity currently generates about 27 billion tons of CO2 per year worldwide, and this could increase to 45 billion tons. So 6 trillion tons is roughly 200 years' worth of CO2 storage.
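The "roughly 200 years" figure follows from those numbers, assuming (as the paragraph above implicitly does) about one ton of the new cement per ton of magnesium silicate feedstock. A quick check at the emission levels quoted:

# Years of CO2 absorption capacity implied by the Novacem figures above.
# Assumption (implicit above): ~1 ton of cement per ton of magnesium silicate feedstock.
co2_absorbed_per_ton_cement = 0.6        # tons CO2 removed per ton of the new cement
magnesium_silicate_tons = 10e12          # estimated world resource, tons
total_capacity = co2_absorbed_per_ton_cement * magnesium_silicate_tons   # ~6 trillion tons CO2

for annual_emissions in (27e9, 30e9, 45e9):   # tons CO2/year, figures cited in this post
    print(f"At {annual_emissions / 1e9:.0f} Gt/year: {total_capacity / annual_emissions:.0f} years of storage")
# -> roughly 130-220 years, i.e. about 200 years at ~30 Gt/year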

Calera is a cement startup funded by technology billionaire Vinod Khosla. Calera's process takes the idea of carbon capture and storage a step forward by storing the CO2 in a useful product. For every ton of Calera cement that is made, half a ton of CO2 is sequestered.

The Calera cement process: flue gas from coal, steel or natural gas plants + seawater (for calcium and magnesium) = cement + clean water + cleaner air.

Calera has an operational pilot plant.

Carbon sequestering in cities by using carbon absorbing cement.

10. Low Carbon Energy Sources

Nuclear power worldwide offsets 2 billion tons of CO2 per year. Scaling nuclear power, wind energy, solar power, geothermal and hydro-electric power can offset a lot of CO2 by displacing coal power, oil and natural gas.

11. CO2 Capture from the Air - for Fuel or Storage

Technology for CO2 capture from the air is progressing.

Carbon Sciences and others are trying to scale up CO2 conversion into fuel.

Carbon Sciences estimates that by 2030, using just 25% of the CO2 produced by the coal industry, it could produce enough fuel to satisfy 30% of global fuel demand.

The company's plan for 2009 includes the following:

* Develop a functional prototype of its breakthrough CO2 to fuel technology in Q1 2009. This prototype is expected to transform a stream of CO2 gas into a liquid fuel that is: (i) combustible, and (ii) usable in certain vehicles.
* Enhance the prototype to demonstrate a full range of cost effective process innovations to transform CO2 into fuel.
* Begin development of a complete mini-pilot system to demonstrate the company's CO2 technology on a larger scale.
* Prepare for the development of a full pilot system with strategic partners sometime in late 2010 or 2011.

CO2-to-Carbonate technology combines CO2 with industrial waste minerals and transforms them into a high-value chemical compound, calcium carbonate, which is used in applications such as paper production, pharmaceuticals and plastics. This also overlaps with the various efforts to use CO2 as part of cement.


FURTHER READING
Geoengineering proposals compared.

Gigaton Throwdown
The Gigaton Throwdown is a project by Sunil Paul, who started it under the auspices of the Clinton Global Initiative on Stabilizing the Climate. He organized a fairly large group of venture capital companies, some from the renewable energy sector, along with academic and think tank policy analysts, all concerned about climate change and the need for dramatic action to mitigate such change.

The Gigaton Throwdown defined, briefly:

"The Gigaton Throwdown, launched in 2007 at the Clinton Global Initiative by Sunil Paul, is a project to encourage entrepreneurs, investors and policy makers to plan to grow companies to a scale that they change the climate. The project is evaluating a portfolio of cleantech pathways that could lead to 1 gigaton per year of CO2-equivalent reduction by 2020, and the implications for capital, policy, and industry. The pathways currently in analysis are solar PV, solar thermal, wind, biofuels, nuclear, geothermal, plug-in hybrid electric vehicles, and buildings."

The Gigaton Throwdown report was released June 24, 2009 in Washington DC.

For more background data and analyses behind the final report.

For more background on the Clinton Global Initiative at which the Gigaton Throwdown was launched.

You will note that Dr. John Holdren, Science Advisor to President Obama, was a lead participant in this particular Clinton Global Initiative meeting.


McKinsey consulting produced a plan and an analysis of ways to avoid CO2 emissions:

1. Energy efficiency in buildings and appliances (710-870 megatons of carbon)
2. More fuel efficient vehicles (340-660 megatons of carbon)
3. Industrial efficiency (620-770 megatons)
4. Bigger carbon sinks (like more forest) (440-580 megatons)
5. Less carbon intensive power generation (800-1570 megatons)
This last one is more nuclear power and renewables and cleaning up coal.
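Summed, those five categories come to roughly 2.9 to 4.5 gigatons per year of avoided emissions; a quick tally of the ranges listed above:

# Tally of the McKinsey abatement categories listed above (megatons per year, low-high ranges).
wedges = {
    "Building/appliance efficiency": (710, 870),
    "More fuel efficient vehicles":  (340, 660),
    "Industrial efficiency":         (620, 770),
    "Bigger carbon sinks":           (440, 580),
    "Lower-carbon power generation": (800, 1570),
}
low = sum(lo for lo, hi in wedges.values())
high = sum(hi for lo, hi in wedges.values())
print(f"Total abatement potential: {low}-{high} megatons/year ({low / 1000:.1f}-{high / 1000:.1f} gigatons)")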