Pages

August 31, 2007

Possible quantum computer limits

A discussion of possible limitations on the power of quantum and classical computers

It is a useful discussion of NP-hard problems. My reaction is that even if his conjecture that we cannot solve NP-hard problems in general is correct, there is still commercial viability in improving and expanding the cases where we can solve some NP-hard problem instances.

I think the question of commercial viability is how often a system avoids getting stuck in local minima: is there a usable probability of finding the optimal solution on sufficiently complex problems?

If the best alternative systems only come up with an optimal solution for N=4 and special cases of N=5, but the new system can fairly frequently get optimal solutions up to N=100, or less frequently up to N=10000, then that would be a commercial advance.
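To make the "how often" question concrete, here is a hedged toy experiment (entirely illustrative, not tied to any particular system): run a simple local-search heuristic on random instances of the NP-hard number partitioning problem and measure how often it matches the brute-force optimum as N grows.

```python
import random

def best_partition_diff(nums):
    """Brute-force optimum: minimal |sum(A) - sum(B)| over all 2-way splits."""
    total = sum(nums)
    best = total
    for mask in range(1 << len(nums)):
        s = sum(n for i, n in enumerate(nums) if mask >> i & 1)
        best = min(best, abs(total - 2 * s))
    return best

def local_search_diff(nums, iters=200, rng=random):
    """Greedy local search: repeatedly try moving one number across the split."""
    signs = [rng.choice((-1, 1)) for _ in nums]
    diff = abs(sum(s * n for s, n in zip(signs, nums)))
    for _ in range(iters):
        i = rng.randrange(len(nums))
        signs[i] = -signs[i]
        new = abs(sum(s * n for s, n in zip(signs, nums)))
        if new <= diff:
            diff = new            # keep improving (and sideways) moves
        else:
            signs[i] = -signs[i]  # revert worsening moves
    return diff

def success_rate(n, trials=50, seed=0):
    """Fraction of random instances where the heuristic finds the true optimum."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        nums = [rng.randrange(1, 100) for _ in range(n)]
        if local_search_diff(nums, rng=rng) == best_partition_diff(nums):
            hits += 1
    return hits / trials

print({n: success_rate(n) for n in (6, 10, 14)})
```

If the measured success probability decays slowly enough with N, the heuristic is commercially useful even though the worst case stays intractable.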

Nature may not solve the NP-hard protein folding problem perfectly, but it folds proteins successfully enough to allow life to form: a commercially useful level of solution capability.

Intriguingly, Farhi and his collaborators proved that, on some problem instances where classical simulated annealing would take exponential time, the quantum adiabatic algorithm takes only polynomial time.

We also know of problem instances where the adiabatic algorithm takes exponential time just as simulated annealing does. So while this is still an active research area, right now the adiabatic algorithm does not look like a magic bullet for solving NP-complete problems.

If quantum computers can’t solve NP-complete problems in polynomial time, it raises an extremely interesting question: is there any physical means to solve NP-complete problems in polynomial time?


Are those problem instances sufficiently useful and widespread to be commercially valuable? Google is not perfect for internet search, but it is commercially successful for the range of instances where it does provide solutions.

August 30, 2007

Fuel scooping variable Minimag Orion proposal

A recent paper "Use of Mini-Mag Orion and superconducting coils for near-term interstellar transportation" discusses using a massive laser beam to accelerate fuel pellets to catch up to and supply a minimag Orion with fuel. The 1000 ton minimag Orion could then accelerate to 10% of light speed. The laser array would need 2500 times the electrical power of the United States in 2007.

I am making proposals that would help get rid of the need for a laser array using 2500 times the electrical power currently generated in the USA as a prerequisite to interstellar or solar system travel. Needing a monster laser array and the mass production of 250,000 nuclear power plants seemed likely to delay the arrival of interstellar travel for a long time.

My proposal pre-deploys a bread crumb trail of nuclear fuel pellets which would be scooped up by the nuclear rocket. The giant laser would not be needed, or could be greatly reduced in size and power. Existing or very achievable technology and infrastructure would be enough. Ships could slowly lay out multiple trails of nuclear fuel pellets, which the nuclear rockets would gobble up like the dots in the old video game Pacman.


Pacman gobbling up dots





minimag Orion

Good old fashioned Project Orion rockets are 1960s designs that used nuclear pulse propulsion. You fire a series of small (100-1000 tons of TNT equivalent) bombs through a hole in a large metal disk. The bombs explode and push the disk, with your rocket attached to it. You can then accelerate massive spaceships to high speed. An Isp of 6,000 to 100,000 seconds was possible for nuclear fission versions, and 1,000,000 seconds for nuclear fusion.
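Those specific impulse figures map directly to exhaust velocity via v_e = Isp × g0. A quick sketch of that arithmetic (my own calculation, not from the original design studies):

```python
G0 = 9.80665          # standard gravity, m/s^2
C = 299_792_458       # speed of light, m/s

def exhaust_velocity_km_s(isp_seconds):
    """Effective exhaust velocity implied by a specific impulse in seconds."""
    return isp_seconds * G0 / 1000.0

for isp in (6_000, 100_000, 1_000_000):
    v = exhaust_velocity_km_s(isp)
    print(f"Isp {isp:>9,} s -> {v:10,.0f} km/s ({v * 1000 / C:.2%} of c)")
```

The fusion figure corresponds to an exhaust velocity around 3% of lightspeed, which is why fusion Orion designs can reach a meaningful fraction of c with sane mass ratios.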


Project Orion vehicle image

Minimag Orion uses sub-critical explosions (not quite full nuclear bombs) that are initiated using magnetic Z-pinches. The explosions are about 5 to 10 tons of TNT equivalent.

They were talking about the minimag as something that could not be as easily weaponized as the good old fashioned Orions.

However, using ye olde impact calculator for a roughly 1000-ton object:

If that object were turned around, it would hit with 251 megatons of TNT equivalent going at 1% of lightspeed.

Going at 10% of lightspeed it would hit with 25,000 megatons of TNT equivalent.
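As a rough cross-check, plain nonrelativistic kinetic energy for a 1000-ton object lands within an order of magnitude of those figures (the online impact calculator folds in additional impact physics, so its numbers come out lower):

```python
MT_TNT = 4.184e15     # joules per megaton of TNT
C = 299_792_458.0     # speed of light, m/s

def impact_megatons(mass_kg, fraction_of_c):
    """Nonrelativistic kinetic energy of an impactor, in megatons of TNT."""
    v = fraction_of_c * C
    return 0.5 * mass_kg * v * v / MT_TNT

mass = 1_000_000.0    # 1000 metric tons
print(impact_megatons(mass, 0.01))   # ~1,070 megatons at 1% of c
print(impact_megatons(mass, 0.10))   # ~107,000 megatons at 10% of c
```

Either way, the conclusion stands: anything massive moving at a percent of lightspeed is a strategic weapon regardless of how its bombs are triggered.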

Also, the GW to TW to PW laser arrays would be pretty threatening too (especially relative to a few thousand measly 100-1000 ton explosions).

If you are cruising around the solar system at high speed then you are powerful no matter what. In that case, we should crank this up to critical explosion levels and just figure out how to get the maximum performance out of it.

I think we could get a variable minimag Orion/full Orion, where we configure it for rapid subcritical explosions to launch from the ground and then crank it up to critical explosions once we clear the atmosphere.

A fusion powered version of Orion could get to an Isp of 1 million seconds.

The more advanced levels of Gabriel (from a recent NASA re-examination of Project Orion):
1. Mark I: Solid pusher plate and conventional shock absorbers (small size, possible with current technology)
2. Mark II: Electromagnetic coupling incorporated into the plate and shocks (medium size, Mark II and beyond require some research and development)
3. Mark III: Pusher plate extensions such as canopy, segments, cables (large size)
4. Mark IV: External pulse unit driver such as laser, antimatter, etc. (large size)

The Mark II starts getting the electromagnetic coupling that minimag Orion has.

I also had an idea: we could start firing sheath/pellets out in front of the minimag Orion before launching it. The minimag could scoop up the slower moving pellets as it caught up to them along its flight path. We could then use a less powerful laser accelerator and/or let the minimag climb to higher speeds than we can accelerate the pellets to. We would need to research a way to build a pellet scooper that safely collects the slower moving pellets. The pellet bread crumb stream could be laid out years in advance of the launch of the minimag. It would be an artificially created fuel stream, similar in principle to the Bussard ramscoop, which was to scoop up interstellar hydrogen to power a fusion rocket.


This is a representation of the Bussard Interstellar Ramjet engine

Laying out pellet streams for our Pacman minimag Orion or full-blown Orion would boost the performance of any one of the Orion concepts. The rocket would not need to carry most of its fuel.

So
1. Minimag subcritical ground launch. It would waste performance in a couple of ways: in order to switch to full Orion mode later, we would lose the weight savings of the flimsy mesh that catches the subcritical plasma. However, simultaneously firing off 100 ten-ton explosions would be about as good as one 1000-ton explosion. It would mean going back to bigger ships, though detailed scaling and designs would be needed. Advanced materials (carbon nanotubes, nanograin metal) could still allow a fairly light pusher plate relative to the 1960s versions.

2. Variable and tuneable Z-pinches, so that the system converts to higher performance full-criticality explosions once it is past the atmosphere and a certain distance from Earth. This adds some complexity to the design.

3. A system for efficiently laying out a fuel pellet stream years in advance, and a pellet catcher scoop for the variable minimag Orion. The fuel pellets might not need much of a sheath; we could use pellet droppers. We would also need supercomputers to calculate and account for any drift of the pellets between the time they are deployed and the time they are scooped.

Photonic laser propulsion, with mirrors to reflect the light 1,000-100,000 times, could be used to propel the fuel pellet droppers. The mirror reflections would let us use far smaller laser arrays, say only a gigawatt. The multi-stage and planned aspect of fuel deployment for a nuclear rocket means that we can get a far bigger ship going to 10% of light speed than we could with the same size lasers and power sources.
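The multi-bounce trick matters because photon thrust scales linearly with the number of reflections, F ≈ 2NP/c (an idealized relation that ignores mirror losses and Doppler effects):

```python
C = 299_792_458.0  # speed of light, m/s

def photon_thrust_newtons(power_watts, reflections=1):
    """Idealized thrust from a laser beam bounced `reflections` times
    between the source and a mirror on the craft: F = 2*N*P/c."""
    return 2.0 * reflections * power_watts / C

one_gw = 1e9
print(photon_thrust_newtons(one_gw, 1))        # single bounce: ~6.7 N
print(photon_thrust_newtons(one_gw, 1000))     # 1,000 bounces: ~6,700 N
print(photon_thrust_newtons(one_gw, 100_000))  # 100,000 bounces: ~670,000 N
```

So a single gigawatt array with aggressive photon recycling could plausibly deliver useful thrust to light pellet droppers, which is the whole point of trading the monster array for planning.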

The proposals that I am making would involve more planning, but the tolerances and requirements for the system are relaxed. We can use a far smaller laser and power sources that are 1% of our current electrical production instead of 250,000% of it. Costs come way down.

What is needed?
Still need a working minimag Orion.
Need a fuel pellet deployment system.
Need a fuel pellet scooping system, which is probably magnetically based.
Ideally and optionally design and create a variable minimag Orion that can transition to full Orion mode.

UPDATE: Improved fuel pellet deployment

Jim Moore in the comments mentions the inefficiency of scooping up stationary or slower pellets. However, we still would like to avoid laser arrays and power sources that are 2500 times the current electrical production of the United States.

My new solution is to develop Mason Peck's Lorentz force propulsion capability:

Peck wants to use natural forces to propel starships no bigger than the integrated-circuit chips in your computer. Specifically, he would harness the Lorentz forces that drive charged particles in magnetic fields, and which physicists use to whip bits of atoms to hellacious speeds in giant particle accelerators on Earth. Jupiter, with a rapidly rotating magnetic field 20,000 times stronger than Earth’s, packs a powerful Lorentz punch. Spacecraft like Voyager and Cassini routinely use gravity boosts from large planets to gain acceleration; why not use Lorentz forces as a means of propulsion too? Boosts from rotating magnetic fields could theoretically accelerate spacecraft to speeds of 1 to 10 percent of the speed of light, according to Peck’s early calculations. Because this free energy source works best on small objects, he suggests building a really tiny starship. Extrapolating from today’s state-of-the-art, he assumes we can solve the practical problems of nano-fabricating a spacecraft-on-a-chip, a single semiconductor crystal only a centimeter square and weighing less than a gram. One side would consist of solar cells for power. A rudimentary radio antenna and digital camera would be etched or deposited on the other. Attitude could be controlled by spinning the spacecraft and by torquing against the magnetic field, a technique already used for Earth-orbiting satellites.


Improved electrostatic charge density can be achieved using carbon nanotubes. The increased charge could allow us to scale up the object and move the 80 gram pellet at the correct speed.

While micron-thick wire can be used, the limiting configuration is a fiber consisting of carbon nanotubes. With a capacitance per length of perhaps 4×10^-11 F/m, values of q/m approaching 3×10^6 C/kg can be achieved for a single nanotube.



self-capacitance architectures

Mason Peck's paper on millimeter scale spacecraft for interstellar travel

So we combine the ability of tiny spacecraft to use propellantless means to achieve high speeds (1-10% of lightspeed) and use that as the means to produce an accelerated stream of pellets without the monster laser array and power supply. The challenge becomes the nano-fabrication of the Lorentz-propelled pellet-ships.

I think my proposal could be achieved with a few tens of billions of dollars and 15-20 years of dedicated development.


FURTHER READING
My recent article examining the new minimag Orion paper and concept

Earlier article on minimag Orion

Article on updated Project Orion concepts

Examination of Nuclear thermal rockets

Photonic laser propulsion

My early article on using laser array propulsion

Project Orion info online

August 29, 2007

Update: using Mini-Mag Orion to get to 10% of lightspeed

News from Centauri Dreams of progress on the mini-mag Orion concept. Take what would be a rocket design that is 70-80 times more efficient than existing chemical rockets and beam particles at it so that it can go at 10% of the speed of light, at a cost of $3 billion per flight in uranium.

UPDATE: I have made progress on my analysis of enhancing mini-mag Orion. I have a new article about pre-deploying the pellets and using charged carbon nanotubes to provide Lorentz force propulsion to the pellets. This could remove the need for the massive laser array and power sources 2500 times larger than the current electrical power of the United States. The nanoscale components seem to become both achievable and affordable over the next ten years. System integration and the key nuclear rocket will still require a committed effort to achieve.

Use of Mini-Mag Orion and superconducting coils for near-term interstellar transportation

A 1000-ton crewed spacecraft and propulsion system dry mass at 10% of lightspeed contains 9×10^21 J. The author has generated technology requirements elsewhere for use of fission power reactors and conventional Brayton cycle machinery to propel a spacecraft using electric propulsion. Here we replace the electric power conversion, radiators, power generators and electric thrusters with a Mini-Mag Orion fission–fusion hybrid. Only a small fraction of the fission fuel is actually carried with the spacecraft; the remainder of the propellant (macro-particles of fissionable material with a D-T core) is beamed to the spacecraft, and the total beam energy requirement for an interstellar probe mission is roughly 10^20 J, which would require the complete fissioning of 1000 tons of uranium assuming 35% power plant efficiency. This is roughly equivalent to a recurring cost per flight of 3.0 billion dollars in reactor-grade enriched uranium at today's prices. Therefore, interstellar flight is an expensive proposition, but not unaffordable, if the nonrecurring costs of building the power plant can be minimized.


UPDATE: Powering the beam?

In the abstract that I can see, they talk about 10^20 joules for 10% of lightspeed for a 1000-ton ship. Going at 1% of lightspeed would take 100 times less energy: 10^18 joules.

In the paper, they talk about the power required based on 40% efficiency.
They indicate 2.5 TW when they should be saying 2.5 PW: they give 1000 TW (1 PW) as the acceleration power needed for the sheath/pellets, so at 40% efficiency the input power is 2.5 PW.

It seems we can make this more efficient by lightening the sheath.
The pellet is only 80 grams; the sheath is 2 kg of conducting mylar.

If we put some engineering into the sheath (say, using carbon nanotubes for strength, conduction and lower weight), maybe we can get the sheath down to 128 grams: a total sheath-plus-pellet weight of 208 grams instead of 2080 grams.

Then the acceleration power for the sheath/pellets would go down to 100 TW. Efficiency could be increased slightly with higher acceleration tolerance and lower laser losses, but keeping it at 40%, the input power is 250 TW.

Dropping the speed to 1% of lightspeed instead of 10% means 100 times less power. Dropping the weight of the vehicle would save another factor of 10 for a 100-ton ship, or a factor of 100 for a 10-ton ship.

So a 10-ton vehicle at 1% of light speed would need 25 GW of laser array power to accelerate the lighter sheath/pellets.

A 100-ton vehicle at 1% of light speed would need 250 GW of laser array power to accelerate the lighter sheath/pellets.
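These power figures all come from the same scaling: beam energy tracks the ship's kinetic energy, so required power goes with vehicle mass and with velocity squared. A sketch that reproduces the numbers above from the 250 TW lightened-sheath baseline:

```python
def required_beam_power(base_power_w, mass_tons, frac_c,
                        base_mass_tons=1000.0, base_frac_c=0.10):
    """Scale a baseline beam-power figure by vehicle mass and v^2
    (beam energy ~ kinetic energy delivered to the ship)."""
    return (base_power_w * (mass_tons / base_mass_tons)
            * (frac_c / base_frac_c) ** 2)

# Baseline after sheath lightening: 250 TW input for 1000 t at 10% of c
BASE = 250e12

print(required_beam_power(BASE, 100, 0.01) / 1e9)  # 100 t at 1% of c -> 250 GW
print(required_beam_power(BASE, 10, 0.01) / 1e9)   # 10 t at 1% of c -> 25 GW
```

This is only first-order scaling; it ignores how pellet velocity matching changes along the flight, but it shows why modest speed and mass concessions collapse the power requirement by four orders of magnitude.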

A typical 1200 MW nuclear power plant produces 32 PJ (3.2×10^16 joules) per annum.

10 twin-unit plants (20 reactors) would get us up to 6.4×10^17 joules per year, about the energy level needed for 1% of lightspeed (which is plenty fast for all the plans that I can think of for doing whatever we want in the solar system). Plus it is over 100 times faster than what we have been able to achieve: 10.8 million kph (6.7 million mph) versus 50,000 mph, without a lot of gravity slingshots.
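A sketch of that reactor arithmetic (the ~85% capacity factor is my assumption; it reproduces the 3.2×10^16 J per plant figure above):

```python
SECONDS_PER_YEAR = 3.156e7

def annual_output_joules(rated_mw, capacity_factor=0.85):
    """Electrical energy a plant delivers in one year of operation."""
    return rated_mw * 1e6 * capacity_factor * SECONDS_PER_YEAR

one_plant = annual_output_joules(1200)  # ~3.2e16 J, matching the text
fleet = 20 * one_plant                  # 10 twin-unit plants, 20 reactors
years_needed = 1e18 / fleet             # beam energy budget for 1% of c
print(one_plant, fleet, years_needed)
```

The fleet delivers the ~10^18 J beam-energy budget in roughly a year and a half of operation, which is what "about the energy level needed" means in practice.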

FURTHER READING
My past coverage on minimag Orion

August 28, 2007

Two Chinese coal miners lived, 179 did not

The tale of the two Chinese coal miners who lived. Remember, these are the two who lived while 179 others died. We also recently had the 12 coal mining deaths in the United States.

The two who lived had to drink their own urine, eat coal and dig through 66 feet of dirt and coal.





How many nuclear energy deaths have there been this year? Zero.
How about coal mining deaths in the last year? 5,000 to 10,000 worldwide.
How about nuclear energy deaths in the last ten years? Zero.
How about coal mining deaths in the last ten years? About 80,000 worldwide.

Many of those who die in coal mines do not die painlessly or instantly.
Many were likely trapped and suffered for days, too deep to free themselves or for help to reach them in time.

Plus there are the air pollution deaths of about 1 million each year.
Many of those suffered from painful diseases before succumbing.

A lot of deaths and injuries for coal power versus almost none for nuclear power. It also translates into a lot of human suffering.

For those who only care about animal suffering and deaths, as in the Michael Vick and DMX dog fighting cases: coal kills thousands to millions of animals when mountaintops get blown off to reach coal. Almost all the animals and trees in the destroyed forest are killed, and some are injured and live, or injured and die.

So remember the starting point: two Chinese miners suffered and struggled for 11 days and lived. Many others suffered and struggled and died.

Even the chemical "scrubbing" of air pollution concentrates more toxins in the ash.

FURTHER READING:
International mine safety


Coal deaths per million tons

 2005 coal production
PR China 2226 Mt
USA 951 Mt
India 398 Mt (avg about 150 deaths/year)
Australia 301 Mt
South Africa 240 Mt
Russia 222 Mt
Indonesia 140 Mt


China's total death toll from coal mining averages well over 5,000 per year. Official figures give:
5,300 deaths in 2000
5,670 deaths in 2001
7,200 deaths in 2003
6,027 deaths in 2004
5,986 deaths in 2005
4,746 deaths in 2006
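The heading above promises deaths per million tons, so here is that ratio computed from the figures in this post (pairing the 2005 production table with the listed death tolls; India's rate uses the rough 150 deaths/year average noted in the table):

```python
def deaths_per_million_tons(deaths, production_mt):
    """Fatality rate normalized by coal output (deaths per Mt mined)."""
    return deaths / production_mt

# Figures from the tables in this post:
china_2005 = deaths_per_million_tons(5986, 2226)   # 5,986 deaths / 2,226 Mt
india_avg = deaths_per_million_tons(150, 398)      # ~150 deaths / 398 Mt

print(f"China 2005: {china_2005:.2f} deaths per million tons")
print(f"India avg:  {india_avg:.2f} deaths per million tons")
```

China's rate comes out around 2.7 deaths per million tons versus India's ~0.4, an order-of-magnitude spread before even comparing against the safest Western mines.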

Ukraine's coal mine death toll is over two hundred per year:
1999: 274
1998: 360
1995: 339
1992: 459
Coal mining in the Ukraine

Wikipedia on mining accidents

Coal fatalities by state from 1996 to 2007 from U.S. Department of Labor, Mine Safety and Health Administration.

All mining fatalities by state 1996 to 2007

Fatality stats for mining in the USA

Historical mining deaths and injuries by decade for the USA

Discussion of mine safety

[from a surface mining operation]. At mines in the U.S., we find that most fatal accidents on the surface relate to transportation. It could be trucks, front-end loaders, railway cars, belt conveyors, and similar equipment. There are many factors involved in these accidents. Some of the factors are equipment maintenance, roadway design, and training. Another important factor is visibility. In fact, restricted visibility factored into more than a hundred miners' deaths during a recent 10-year period in the U.S.


Successful evacuations in US and Australian mines.
Part of the credit for that achievement almost certainly belongs to the Personal Emergency Device. How, you might ask, could the mine operator send a page to everyone under hundreds of feet of rock?

The key is that the system uses a fluctuating low frequency magnetic field to send messages instead of electromagnetic waves. This pager system seems to be a very promising example of new technology that can help protect miners. Incidentally, it is a fine example of the value of international exchange: this unit was developed and originated in Australia.


So far the examples I have given are of mine safety. The other half of the picture we are concerned with is the occupational health issues. For example, consider a coal miner in Kentucky named Terry Howard. He operated a drill at a surface coal mine. Several years ago, during his prime working years, he developed breathing problems. His illness turned out to be silicosis, contracted from breathing the quartz dust that surrounded him when he drilled into the rock overburden.

Silicosis, as we all know, is an incurable disease and can get worse even when the sick person is removed from further exposure. There is no cure for silicosis. Howard became totally disabled. His condition deteriorated rapidly and he died, leaving a widow and three school age children. He was only 45 years old.

In the U.S. we have had health regulations in place to combat lung disease among miners since 1970. There has been great progress but, as May mentioned, in 1997 there were still more than 300 new cases of occupational lung disease reported to our agency.


Uranium mining versus coal mining

1. Uranium mining is inherently safer because of differences in mining methods

Uranium mining

Mostly in-situ leaching: pipes put into the ground pump an acid mixture down, and other pipes and pumps drain the dissolved mixture with the uranium in it. No miner goes underground. Open pit surface mining is performed at times for both coal and uranium.

Coal mining has a lot more risks for fires and explosions.

Coal mining extracts 6 billion tons of coal each year; uranium mining extracts 60,000 tons of uranium: 100,000 times less target material. Even the amount of rock removed by uranium mining is less.

Ukraine and China have the highest risks in coal mining, and they still send a lot of guys underground (hundreds of thousands).

The US and Australia (the safest coal mines) are highly automated, using longwall mining underground. Guys still die, just fewer of them, because these mines are 100 times more efficient in terms of manpower.

Mountaintop removal is used for about 30% of the coal mines in the USA.
These operations use 1000 tons of explosives per day to blow away dirt and forest to get at the coal. Around 7 billion gallons of sludge gets dammed up. There have been leaks of sludge that killed all aquatic life in rivers, and on one occasion 125 people.

The sludge pond is permitted to hold 2.8 billion gallons of toxic sludge, and is 21 times larger than the pond which killed 125 people in the Buffalo Creek Flood.

Blasting at a mountaintop removal mine expels coal dust and fly-rock into the air, which can then disturb or settle onto private property nearby. This dust contains sulfur compounds, which corrode structures and tombstones and are a health hazard.

Follow up on quantum computers and AGI

Here is an analysis of a case where a 2007 supercomputer running a 1977 algorithm would be outperformed by a 1977 personal computer running a 2007 algorithm.

This is also discussed here

This relates to the question of whether we can get to better and better intelligence just by speeding up the hardware.
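A toy model of that point (all numbers illustrative): give the new machine a million-fold hardware speedup but an exponential-time algorithm, and the old machine a cubic-time algorithm, then find where the old machine pulls ahead:

```python
def crossover_size(speedup=1e6, max_n=200):
    """Smallest problem size n where old hardware + polynomial algorithm
    beats new hardware + exponential algorithm (in operation counts)."""
    for n in range(1, max_n):
        old_machine_time = n ** 3            # 1 op/unit time, O(n^3) algorithm
        new_machine_time = 2 ** n / speedup  # 1e6 ops/unit time, O(2^n)
        if old_machine_time < new_machine_time:
            return n
    return None

print(crossover_size())  # beyond this n, the better algorithm dominates
```

Past that modest crossover size, no realistic hardware speedup rescues the worse algorithm, which is why algorithmic progress, not just faster chips, drives capability.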

Nuclear plant application for Alberta near the oilsands

An application has been filed for nuclear plants near Peace River, Alberta. Projected completion is 2017.

Energy Alberta, which teamed with state-owned Atomic Energy of Canada Ltd, said it would first build one twin-unit reactor that could produce 2,200 megawatts of electricity by early 2017.



I have had several articles discussing the oilsands and using nuclear power with it.

nuclear oilsands and water usage

Nuclear oilsands Shell is probably the customer

A review of the technical paper on nuclear powered extraction of oil from the oilsands

Projected oil production from the oilsands

Steam from nuclear plants can lower the cost and environmental cost of the production of ethanol and biofuels

The Oil Drum reviews Toe-to-Heel Air Injection, a potentially very good technique for recovering heavy oil. It could recover 70-80% of the oil instead of the 20-50% of traditional methods.

Resource demands of all hybrid and electric cars

A discussion at resource investor about the resource demands of scaling up so that all cars are hybrids and electric cars

August 27, 2007

McKinsey Global Institute analyzes the US dollar and yuan

The US current account deficit was a record $857 billion in 2006

A 45% depreciation of the dollar against the yuan would not result in balanced U.S. bilateral trade with China. The cost advantage of most Chinese exports is simply too great to be eliminated by currency movements alone. Moreover, many of the goods that the U.S. imports from China are not manufactured domestically. Nor are they available in sufficient quantities—at least at the moment—from other low-cost markets.

Looking across countries and export categories, our research finds that over the next few years, the U.S. has ample opportunity to boost service and manufacturing exports, by as much as $450 billion by 2012. The U.S. could also over time reduce oil imports by increasing energy efficiency and developing alternative fuel sources. But the analysis shows that these measures would at best reduce the U.S. current deficit only very modestly, leaving it at 6.3% of gross domestic product in 2012.

To reduce substantially or eliminate the U.S. deficit would require a 25% to 30% dollar depreciation from the level that prevailed in January, 2007. The U.S. trade balance with its NAFTA partners—Canada and Mexico—would face a major adjustment. With no further currency interventions, the current deficit of $109 billion would swing to a surplus of $100 billion or more.

Should China and other Asian countries continue to peg their currencies to the dollar, the greenback would need to fall by nearly 40% against the rest of the world's currencies to close the current account deficit. Should Asian countries allow their currencies to strengthen, however, the required dollar depreciation against the rest of the world would be much less dramatic: an estimated 25%.


I have reported in the past on the yuan and that it will likely appreciate at a rate of 5-12% per year

I have predicted that China's overall economy will pass the USA before 2020 on an exchange rate basis

Advance for Embryonic stem cells as post heart attack treatment

Human stem cells could soon improve heart function in people if injected following a heart attack, a new study suggests.

Heart attacks killed 267 out of 100,000 people in 2007.

If everyone who had a heart attack or fatal heart disease was saved then 700,000 people in the USA and 7.1 million people worldwide would be saved each year.

During a heart attack, the heart muscle dies within days and the scar doesn't contract, Okarma said. So, over time, the heart enlarges, increasing the diameter of its chambers.

However, in the study, cardiac cells injected into the ventricular wall of the heart decreased the diameter of the left ventricle, rebuilt the heart muscle wall and improved the strength of the contracting heart muscles, according to Okarma.

"The animals that got our cells, four days after the infarction [heart attack], didn't enlarge — the heart stayed small because the heart muscle cells we put in are functional. They prevented the onset of heart failure."

"Our cardiomyocytes are the first human cardiac cells shown to survive after injection into an infarcted ventricle and to produce significant improvement in heart function," Okarma said.

Four weeks after the transplant, the hearts of the rats that received the cardiac cells were scanned with an MRI to assess the growth of new tissue and to determine whether the injected stem cells had migrated to other organs.

The scientists discovered that the newly introduced cells remained solely in the heart and that no tumours or cysts developed — a common occurrence when foreign cells are transplanted into the heart.

Okarma said studies on sheep are currently underway. He predicts human trials will occur in two to three years.

"This will become the treatment of choice for patients who suffer a heart attack," Okarma predicted, "because no pill fixes a broken heart."


Sifting through a slew of biochemical factors that were known to be involved in heart formation, researchers came up with a formula that greatly increased the yield of heart tissue from the stem cells.

"Typically one tenth of 1 per cent of the (stem) cells would make heart muscle, maybe 1 per cent on a really good day," Murry says. "Now we're getting 38 to 50 per cent of the cells turning into heart muscle."

In total, the combined procedures increased the proportion of successful cell grafts from 15 per cent to 100 per cent in the rodents, which went through lab-induced heart attacks.

The size of those grafts went from "tiny clusters of cells" under the old methods to upwards of 10 per cent of the damaged heart region under the new.

Murry hopes to bring the research into clinical human trials within three years, but will test it on pigs or other large mammals first.

He says it may have applications for stem cell therapies on other organs that have similar production and transplantation hurdles.

"The problem with cell death in cell transplantation (for example) is something that has plagued repair of all solid tissues so far," Murry says. "It's our hope that this will be useful outside of the heart as well."

More on the McCain/Lieberman climate change bill

The EIA is known for making conservative projections on energy. They recently analyzed what the USA energy picture would look like if the McCain-Lieberman Climate Stewardship and Innovation Act of 2007 were adopted. It seems that some kind of climate bill will be passed this year. Nine climate bills are being considered. It seems likely that there will be a House of Representatives bill and a Senate bill; the bills will then get merged and probably one will be passed. A question is whether the passed bill will have a veto-proof majority. If it does not, will George Bush veto it? Business supports getting a bill done this year, since they think any bill in 2009 under a new administration will likely be far more green and less business friendly.

Here is my first article about the climate change bill

Passage of a solid climate change bill would be one of the best things that could be done to increase nuclear and renewable energy and reduce dangerous coal and fossil fuel usage. Contact your senator and congressman to encourage them to pass S. 280 and the climate change bills.


Here are some special topics in the EIA analysis


A No Nuclear case was analyzed to examine the impacts of restricting new nuclear capacity growth (beyond that added in the reference case) under the S. 280 Core assumptions. The allowance price in the No Nuclear case is 6 percent higher than the S. 280 Core case in 2030 and power sector CO2 emissions are about 3 percent higher. The power sector turns to increased investment in renewables (mainly biomass and wind) as well as significant investment in new coal plants with carbon capture and sequestration and natural gas. In the No Nuclear case, 70 gigawatts of new coal plants with carbon capture equipment are built. Total coal production in 2030 in the No Nuclear case is more than 100 million tons higher than in the S. 280 Core case. The higher allowance price and more costly capacity investment in this case lead to average delivered electricity prices in 2030 that are 8 percent higher than the S. 280 Core case. In turn, the higher prices have an impact on electricity sales, which are 2 percent lower in 2030 in the No Nuclear case than in the S. 280 Core case.



Alternative cases were prepared to explore the impacts of additional areas of uncertainty:

- The Unlimited Offsets case examines the impact of removing the 30 percent offset limit in S. 280. This limit is particularly important in the later phases of the proposal when the emissions caps, and the offset limit tied to them, are lowered sharply.

- The Low Discount case assumes that investors will only require a 4-percent return on allowances rather than the higher rate of return investors generally require for large plant investments such as power plants. Recent analysis at the Massachusetts Institute of Technology examined the returns on sulfur dioxide (SO2) emission allowances and found that they were generally not correlated with market returns, suggesting that financial investors would treat them as relatively low risk assets. It is unclear whether a similar relationship might be seen in GHG allowance markets since GHG emissions are so ubiquitously linked to economic activity.

- The High Auction case was prepared in response to a request from Senate staff to examine the impact of assuming that a larger share of the allowances distributed each year are auctioned rather than given out for free.

- The No Nuclear case examines the impacts of limiting the penetration of new nuclear capacity to the level seen in the reference case. Earlier EIA analyses have suggested that nuclear power could be an important option for reducing power sector GHG emissions. However, while interest in new nuclear plants appears to be growing, uncertainty about the costs of new plants and public concerns about safety and long-term waste disposal could limit their penetration.

- The Commercial Covered case examines the impacts of assuming that all entities in the commercial sector were covered. As explained, while detailed data are not available, very few buildings are expected to meet the 10,000-metric-ton facility emission threshold in S. 280, but this case examines the potential impact if a larger than expected number did.

- The S. 280 High Technology case examines the impact of the provisions of S. 280 using more optimistic assumptions about improvements in technology. This case should be seen as a “what if” case, rather than predictive of the impacts of S. 280 from innovation incentives and technology deployment programs.
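The Low Discount case rests on a standard banking argument: when allowances can be banked, arbitrage pushes the allowance price to rise at roughly the holders' required rate of return, so a lower required return implies a flatter price path. A minimal sketch of that arithmetic, with an illustrative starting price and rates that are my assumptions, not EIA's figures:

```python
def price_path(p0, rate, years):
    """Allowance price rising at the required rate of return
    (Hotelling-style banking arbitrage)."""
    return [p0 * (1 + rate) ** t for t in range(years + 1)]

# Illustrative only: $10 starting price, 20-year horizon.
low = price_path(10.0, 0.04, 20)   # Low Discount case style: 4% required return
high = price_path(10.0, 0.07, 20)  # a higher required return for comparison
print(round(low[-1], 2), round(high[-1], 2))
```

The point of the comparison is that the same starting price roughly doubles over 20 years at 4 percent but nearly quadruples at 7 percent, which is why the assumed return materially changes projected allowance prices in the later phases of the cap.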




Are quantum computers needed for AGI?

D-Wave Systems CTO Geordie Rose theorizes about the usefulness, and possible necessity, of quantum computers for human-level and higher artificial intelligence

Summarizing: Are humans somehow getting good approximate solutions to problems that are NP-hard? If so, would quantum computers be useful for developing alternative ways to match or exceed some innate human capability?

What got me thinking about intelligence in the first place was the observation that many of the tasks that seem to be difficult for computers, but relatively easy for biological brains, are most naturally thought of as NP-hard optimization problems. Basically anything that involves complex pattern matching: recognizing speech, inference, relational database search, vision, and learning, for example.

Another thing that seems interesting is this: take any algorithm that scales linearly with input size. For the problem this algorithm solves, can you think of a single example where a human could beat a computer? I can’t think of one.
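Rose's point about linearly scaling algorithms can be made concrete with a deliberately trivial sketch (the function is my own illustration, not his): finding the maximum of a list is a single O(n) scan, and no human competes with silicon at it on any nontrivial input size.

```python
def find_max(values):
    """Linear scan: one comparison per element, O(n) time."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

# A computer scans millions of numbers in milliseconds;
# a human cannot match this for large inputs.
print(find_max([3, 41, 7, 29, 18]))
```

For problems in this class, raw speed settles the contest immediately, which is what makes the NP-hard cases below the interesting ones.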

Finally: biological brains operate with small amounts of input data (five senses). For example, if we look at a photograph, the total data we receive is quite small.

Is it possible that the notion of complexity classes is important for asking the right questions about intelligence? Here’s a rough outline of an idea.

1. Categorize all of the problems that biological brains have to solve as well-posed computational problems.

2. The subset of these problems that can be solved with algorithms that scale polynomially can always be done better with silicon than with bio-brains. Note that this isn't true in general; it requires the observation that the input problem instance size is small for bio-brains (e.g., the number of pixels in a photograph).

3. There are problems that have survival value to solve that are harder than P.

4. Brains evolved excellent heuristics to provide quick approximate solutions to the problems harder than P, and currently it is that subset where bio-brains beat silicon.

5. In a hierarchy of difficulty, some problems will be too hard for bio-brains to evolve heuristics for. This means that the primary differentiator in this picture of things will be the "easiest hard problems". The hardest of the hard problems are too hard for evolution to create hardware heuristics for.

6. The easiest hard problems (the group most likely to have good hardware heuristics and bad software heuristics) are NP-hard optimization problems.

I tend to think now that breakthroughs in machine intelligence are going to come from algorithms, either new classical algorithms or the capability to run quantum algorithms on quantum hardware, rather than from Moore's Law advances.
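To make the "heuristics for NP-hard optimization" idea concrete, here is a sketch of simulated annealing, the classical heuristic mentioned in the Farhi discussion above, finding an approximate traveling-salesman tour. The instance, parameters, and function names are my own illustration (the quantum adiabatic algorithm plays an analogous role in the quantum setting): accept worse moves with a probability that shrinks as a "temperature" drops, so the search can escape local minima early on.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def simulated_annealing(dist, steps=20000, t0=10.0, cooling=0.9995, seed=0):
    """Approximate a shortest tour; worse moves are accepted with
    probability exp(-increase / temperature), which decays over time."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        cand_len = tour_length(cand, dist)
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling  # geometric cooling schedule
    return best, best_len

# Toy instance: five cities on a line; any optimal closed tour
# traverses the span twice, for a minimum length of 8.
pts = [0, 1, 2, 3, 4]
dist = [[abs(a - b) for b in pts] for a in pts]
tour, length = simulated_annealing(dist)
print(length)
```

On a toy instance like this the heuristic essentially always finds the optimum; on large instances it only approximates it, which is exactly the "good enough, often enough" regime discussed above for both brains and commercial solvers.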

Clean vehicles in India and China key to future global environment

Businessweek reports on a shift from scooters to small cars in India

A critical factor for India's and China's environment, and the global environment, is the form of the 1+ billion vehicles that India and China will use from 2007 to 2020: how many will be two-wheelers versus four-wheelers, and how many will be electric versus gas powered. Making electric two-wheel vehicles with suitable performance and low cost is an achievable goal, one already being realized in China. There are 800 million cars around the world now. If China and India become major users of cars, there could be 2 billion cars by 2020. Automobile adoption in India and China is also constrained by a lack of road infrastructure, which will force more efficient public transit and scooter usage.

I had a report on electric bicycles and scooters in China. There will be 60 million electric bicycles and scooters in China by the end of 2007, out of 450 million bicycles in use in China. By 2011, there could be 350 million electric bicycles and scooters. India has 100 million scooters, but almost all of them are gas powered.

India has a 65-million-per-year two-wheeler market. Small cars dominate Indian roads, with 70% of the 1.4-million-car annual market, a figure likely to double by 2008. Spurred by growing demand for compact, low-cost cars, auto sales are expected to soar from $34 billion last year to $145 billion by 2016. According to the Strategic Foresight Group, between 2001 and 2007 about 100 million Indians moved up the ladder from being bullock-cart users to being two-wheeler users. The next move is a car, an affordable one that bridges the gap between the $1,450 average scooter and the $4,800 small car currently on the market.

In January, 2008, Tata is expected to introduce cars in the price range of $3,000 and below, with engine capacities ranging from 660 cc to 1,500 cc in gasoline, diesel, and hybrid versions.


The Hero Group is the largest builder of two-wheel vehicles in India. Hero Honda Motors is the world's largest manufacturer of two-wheelers, with an annual sales volume of over 3.0 million motorcycles.


Some of the Hero Group mopeds

50 cc mopeds can get 70-100 mpg, go about 40 mph, and cost $600-1,000.

Electric bikes in China can go for $150-250 and have top speeds of about 15-30 mph.

The Hero Group announced that it would be launching high-speed battery-powered two-wheelers, as well as three-wheelers, in a few months, but has no plans as of now to introduce electric four-wheelers. The group said it would also be launching battery-powered e-bikes with speeds of 40-50 km per hour in the next six months. Although China, from which the Hero Group imports its batteries, is the biggest market, with 20 million e-bikes sold annually, the company is focusing only on the domestic (Indian) market of 0.1 million units. However, Hero is looking to Russia to begin its exports. The group, which recently introduced battery-powered two-wheelers in seven models in the price range of INR 15,000 to INR 28,000, would by March 2008 have 120 showrooms in the country. It is projecting revenue of INR 1,200-1,500 million this fiscal year from sales of e-bikes, including a 15% share from Gujarat.