
December 05, 2009

Historical Colonization versus Historical Navies and Future Spaceships

In terms of the scale of the effort for colonizing North America, I think it is useful to compare the size of the naval fleets of the time and other historical benchmarks. We know how large the military is today and the share of the total economy that it has. It will be more useful to approximate how large the interplanetary space travel industry will need to be before an interstellar colonization expedition would be a reasonably sustainable activity.

This relates to the discussion of spaceships and whether interstellar spaceships will happen.

Technology will be key in lowering the costs and energy requirements (even for communication). However, we will need to build up the economic scale and interplanetary space capabilities to achieve sustainable results and progress.

Military Comparisons to Colonization
The Spanish Armada of 1588, at Wikipedia

The Spanish fleet was composed of 151 ships, 8,000 sailors and 18,000 soldiers, and bore 1,500 brass guns and 1,000 iron guns. The full body of the fleet took two days to leave port. It contained 28 purpose-built warships: 20 galleons, 4 galleys and 4 (Neapolitan) galleasses. The remainder of the heavy vessels consisted mostly of armed carracks and hulks; there were also 34 light ships present.

In the Spanish Netherlands, 30,000 soldiers awaited the arrival of the armada, the plan being to use the cover of the warships to convey the army on barges to a place near London. All told, 55,000 men were to have been mustered, a huge army for that time.

The English fleet outnumbered the Spanish, 200 ships to 130; however, the Spanish outgunned the English, with about 50% more available firepower. The English fleet consisted of 34 ships of the royal fleet (21 of which were galleons of 200 to 400 tons) and 163 other ships, 30 of which were of 200 to 400 tons and carried up to 42 guns each; 12 of these were privateers owned by Lord Howard of Effingham, Sir John Hawkins and Sir Francis Drake.


In 1600, there were about 500 million people in the world.

Largest cities in Spain in 1600: Sevilla (40,000), Toledo (44,000), Madrid (40,000), Barcelona (40,000), Valencia (35,000), Valladolid (32,000), Córdoba (25,000). Population of Spain in 1600: 9 million.

One of the most successful conquistadors was Hernán Cortés, who with a relatively small Spanish force but also crucially the support of around two hundred thousand Amerindian allies, overran the mighty Aztec empire in the campaigns of 1519–21 to bring what would later become Mexico into the Spanish empire as the basis for the colony of New Spain. Of equal importance was the conquest of the Inca empire by Francisco Pizarro, which would become the Viceroyalty of Peru. After the conquest of Mexico, rumours of golden cities (Quivira and Cíbola in North America, El Dorado in South America) caused several more expeditions to be sent out, but many of those returned without having found their goal, or having found it, finding it much less valuable than was hoped. Indeed, the American colonies only began to yield a substantial part of the crown's revenues with the establishment of mines such as that of Potosí (1546). By the late 16th century American silver accounted for one-fifth of Spain's total budget. In the 16th century "perhaps 240,000 Europeans" entered American ports.

More people and money flowed over the course of a century than were committed on both sides of a very large naval engagement. There were also expeditions and fleets of colonizing ships (1 to 11 ships were common).

In 1600, the world's economies were estimated as follows:


Region / Country                                   GDP (PPP), millions of int'l dollars   Share of world GDP (%)
World                                              329,417                                100
Ming China                                         96,000                                 29.2
Mughal India                                       74,250                                 22.6
Far East (excluding China, India, Japan, Russia)   24,088                                 7.3
Africa                                             22,000                                 6.7
Spanish Empire                                     20,789                                 6.3
France                                             15,559                                 4.7
Italian States                                     14,410                                 4.4
Ottoman Empire                                     12,637                                 3.8
Germany                                            12,432                                 3.8
Russia and Central Asia                            11,447                                 3.5
Japan                                              9,620                                  2.9
Eastern Europe (excluding Russia)                  8,743                                  2.7
Spain                                              7,416                                  2.1
British Isles                                      6,007                                  1.8


The voyages of Christopher Columbus are invoked by Americans more than any other historical analog to capture the ethos of the manned space program. A better analogy would be Leif Ericksson. He and his fellow Norsemen reached North America five centuries before Columbus by travelling in the most remarkable sailing vessels of their time. Not until Columbus, however, did Europeans have at their disposal a robust maritime technology that would allow them to not only reach the Western hemisphere but also to sail back and forth to Europe reliably. Over the last forty-five years, the United States has developed space launch vehicles that can carry astronauts to near-Earth orbit and even to the moon. It has failed, however, to develop the space ship that can do for the United States what the caravel did for Columbus. The current program to build a new suite of launch vehicles simply recycles old technology. It builds longships, not caravels.

Colonial Population Estimates

Estimates of population of Colonial America, from 1610 to 1780.


Year    North American population   Latin American population   European population
1610    350
1620    2,300
1630    4,600
1640    26,600
1650    50,400
1660    75,100
1670    111,900
1680    151,500
1690    210,400
1700    250,900
1710    331,700
1720    466,200
1730    629,400
1740    905,600
1750    1,170,800                   16 million                  163 million
1760    1,593,600
1770    2,148,100
1780    2,780,400
1800    7 million                   24 million                  203 million
1850    26 million                  38 million                  267 million


Historical population figures


Northern America comprises the northern countries and territories of North America: Canada, the United States, Greenland, Bermuda, and St. Pierre and Miquelon. Latin America comprises Middle America (Mexico, the nations of Central America, and the Caribbean) and South America.

The figures for North and Central America only refer to post-European contact settlers, and not native populations from before European settlement.


Future Space Settlements

If (when) there is human settlement of space, and
if there are parallels to the scale of the settlement of the Americas,

then there would be thousands of spaceships capable of carrying hundreds of people at a time for interplanetary and later interstellar travel. The interplanetary capability (out to the Oort comet cloud) would be something like the ships traveling and trading around the Mediterranean.

Stages of ease of movement around space

Ease of getting to orbit, the moon and near-earth asteroids
Ease of getting to Mars (1.5 AU) and the asteroid belt (between 2.3 and 3.3 AU)
Ease of getting out to Saturn (9.5 AU)
Ease of getting out to the Kuiper belt (30 to 50 AU, with over 100,000 Kuiper belt objects with a diameter greater than 50 km and a total mass of 1-10% of Earth's)
Ease of getting out to the Oort comet cloud and the gravitational lensing points

The hypothetical Oort cloud is a spherical cloud of up to a trillion icy objects that is believed to be the source of all long-period comets and to surround the Solar System at roughly 50,000 AU (nearly a light-year), and possibly as far out as 100,000 AU (about 1.6 light-years). It is believed to be composed of comets that were ejected from the inner Solar System by gravitational interactions with the outer planets.


Ease of getting out to other solar systems (brown dwarfs and stars)
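
To put rough numbers on these stages, here is a minimal sketch (Python) that converts the distances above into one-way light delay and into travel time at an assumed cruise speed; the 100 km/s cruise speed and the representative distances are illustrative assumptions, not figures from the post.

    AU_M = 1.495978707e11      # meters per astronomical unit
    C = 2.998e8                # speed of light, m/s
    CRUISE_M_S = 100e3         # assumed cruise speed: 100 km/s (illustrative)

    stages = {                 # representative distances from the list above
        "Mars": 1.5,
        "Asteroid belt": 2.8,
        "Saturn": 9.5,
        "Kuiper belt": 40,
        "Oort cloud": 50_000,
    }

    for name, au in stages.items():
        d = au * AU_M
        light_hours = d / C / 3600
        cruise_years = d / CRUISE_M_S / (3600 * 24 * 365.25)
        print(f"{name:14s} {au:>8,.1f} AU  light delay {light_hours:8.1f} h  "
              f"cruise @ 100 km/s {cruise_years:10.1f} yr")

Even at 100 km/s, far beyond any current propulsion, the Oort cloud is millennia away, which is why the interplanetary economy would need to mature stage by stage.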



Converting Coal Plants for Clear Displacement of Pollution and CO2


The Eddystone coal power station will be shut down and replaced with natural gas and nuclear plant uprates.

A small coal-fired generating plant in northwestern New Mexico will be used to test new hybrid technology that combines solar- and coal-generated steam to produce electricity. Solar thermal concentrating technology will provide heat for the turbine and reduce coal usage.

The 245-megawatt Escalante Generating Station in Prewitt, 27 miles northwest of Grants, is one of two host sites that California’s Electric Power Research Institute (EPRI) chose to test the technology. The other site is a natural gas-powered generating station near Las Vegas, Nev.

Most coal plant conversion projects have replaced coal with biomass or natural gas. Nuclear is playing a smaller role in replacing the coal power that is shutting down, but nuclear does have a role.

Exelon said that it would completely close the Cromby Generating Station along the Schuylkill in Phoenixville and that it would retire two coal-fired generators at the Eddystone Generating Station on the Delaware River.

The targeted units have 933 megawatts of capacity. Exelon contends that there is sufficient generation capacity in the region to meet demand, and with new natural gas supplies coming into the market, it is supplanting coal as the preferred fossil fuel. Exelon also has plans to increase the output of its Limerick and Peach Bottom nuclear reactors in the next eight years.


Georgia Power Company will convert its Plant Mitchell Unit 3 from a coal-fired power plant to a biomass power plant.

The facility will be able to produce 96 megawatts of power once the conversion is completed in June 2012, making it one of the largest biomass power plants in the United States. It will draw on surplus wood fuel from suppliers within a 100-mile radius of the power plant. Georgia Power, the largest subsidiary of Southern Company, requested the conversion last summer and plans to begin the conversion by spring of 2011. The Georgia PSC approved Georgia Power's request on March 17, while also approving the utility's construction of two new nuclear power units at its Vogtle Nuclear Power Plant in southeast Georgia.


Progress Energy Carolinas, a wholly owned subsidiary of Progress Energy, today announced that by the end of 2017, the company intends to permanently shut down all of its remaining N.C. coal-fired power plants that do not have flue-gas desulfurization controls (scrubbers).



The utility outlined its plan to close a total of 11 coal-fired units, totaling nearly 1,500 megawatts (MW) at four sites in the state:

The 600-MW L.V. Sutton Plant near Wilmington.
The 316-MW Cape Fear Plant near Moncure.
The 172-MW W.H. Weatherspoon Plant near Lumberton.
And the 397-MW H.F. Lee Plant near Goldsboro (retirement announced in August).

Progress Energy Carolinas has announced a plan to build new generation fueled by natural gas in Wayne County, N.C., and expects to announce additional gas plans in the near future. The company will continue to operate three coal-fired plants in North Carolina after 2017. The company has invested more than $2 billion in installing state-of-the-art emission controls at the 2,424-MW Roxboro Plant and the 742-MW Mayo Plant, both located in Person County, and the 376-MW Asheville Plant in Buncombe County. Emissions of nitrogen oxides, sulfur dioxide, mercury and other pollutants have been reduced significantly at those sites.


Why Coal Plants are Closing Now

Treehugger discusses the start of large numbers of coal plant closures/mothballing

At the national level, several things are driving the closings or "mothballing" of old coal-fired plants.

1. The closing plants are very old, and are relatively inefficient, with many parts and components at the end of design life. Physical size of the property may not allow for large scale upgrades. Moreover, an upgrade that ups capacity would open up the air emissions permitting process (see point 2. below).

2. Older coal-fired plants may not support cost-effective implementation of the pollution controls that will be needed to meet new standards for mercury and fine particulates.

3. These plants may be in air sheds where air quality standards are not currently being met (true in Pennsylvania for certain). Once such plants are closed, emission credits embodied in their air emissions permits can be in effect 'traded' for a new permit that puts out fewer grams of pollution per kilowatt generated: more power output for the same emission load.

4. Because of deregulation of power markets which occurred in the recent past, utilities are under pressure to keep power prices down, which makes it harder still to justify added capital upgrade costs on these old plants. With loans still hard to come by, the cheaper, faster to build, less-polluting natural gas plant gets the banker's nod every time.

5. You will read and hear plenty of speculation about how the prospect of a 'cap and trade' regime is what is behind these closings. That's a load of horse apples being dumped by people who do not understand capital investment and pollution control standards. The climate bill, as proposed, gives these utilities free credits for carbon emissions above the moving cap. The costs of managing fly ash and mercury, and the relatively high expense of keeping these nags running, is what the game is about.


As USEPA gets down to reviewing more air permits, you will see numerous additional announcements of capacity cutbacks, mothballing, and outright plant closures. Keep in mind that some of the old coal-fired plants are on polluted ground and that outright closure would mean expensive cleanup. Therefore, I am betting that 'mothballing' will be the prevailing modality.


Coal to Nuclear
Coal2nuclear discusses the details for replacing coal burners with nuclear reactors.

● Man dumps 37 billion tons of CO2 into the air every year. Nature manages to remove only 21 of them. The excess 16 is Global Warming's CO2.
● Half of Global Warming comes from a few supersized coal-burning boilers that could be quickly replaced with a modified Russian BN-800 nuclear boiler.

5,000 supersized power plant coal boilers are making 53% of Global Warming's accumulating CO2 (8.6 billion tons of CO2/year). Few in number, these modern energy giants are found in only 2% of the world's power plants but are making 53% of all Global Warming CO2. The world has about 5,000 supersized boilers in 1,200 huge power plants such as Taichung. 5,000 weapons of mass combustion, each one burning a mile-long train of coal every day - that's over 5,000 miles of coal every day. They are truly Global Warming's smoking gun. Suggested nuclear boiler replacement: BN-800
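
Taking the quoted figures at face value, here is a quick sketch of what they imply per boiler; the CO2-per-ton-of-coal factor is an assumption (typical for bituminous coal), not a number from the source.

    total_co2_t = 8.6e9     # tons of CO2 per year from these boilers (quoted above)
    boilers = 5000          # number of supersized boilers (quoted above)
    CO2_PER_COAL_T = 2.5    # assumed tons of CO2 per ton of coal burned

    per_boiler_co2 = total_co2_t / boilers                # ~1.7 Mt CO2/yr each
    coal_per_day = per_boiler_co2 / 365 / CO2_PER_COAL_T  # implied coal burn
    print(f"{per_boiler_co2 / 1e6:.1f} Mt CO2/yr per boiler, "
          f"~{coal_per_day:,.0f} t of coal per boiler-day")

That works out to roughly 1.7 Mt of CO2 per boiler per year, or on the order of 1,900 tons of coal per boiler per day at the assumed emission factor.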


Intel Larrabee Canceled

IEEE Spectrum picked the Intel Larrabee as a technology winner for 2009

Intel has canceled the Intel Larrabee, so the IEEE Spectrum pick is completely, unambiguously wrong.

I had commented, when the IEEE Spectrum prediction was made, that it was vague. What would it mean for Larrabee to be a winner? Clearly it cannot be canceled and win.

Intel could not get the Larrabee to outperform the Nvidia Fermi or AMD GPUs.




December 04, 2009

Compact Proton Beam Accelerators and Handheld Fusion Reactors

Previously this site had reported on the DARPA project to create chip scale high energy atomic beams as a path to commercial nuclear fusion.

There is a DARPA budget document with a bit more description than what was previously referenced.

The Chip-Scale High Energy Atomic Beams program will develop chip-scale high-energy atomic beam technology by developing high efficiency radio frequency (RF) accelerators, either linear or circular, that can achieve energies of protons and other ions up to a few mega electron volts (MeV). Chip-scale integration offers precise, micro actuators and high electric field generation at modest power levels that will enable several order of magnitude decreases in the volume needed to accelerate the ions. Furthermore, thermal isolation techniques will enable high efficiency beam to power converters, perhaps making chipscale self-sustained fusion possible.

Program Plans:
FY 2009 Plans:
- Develop 0.5 MeV proton beams and collide onto microscale B-11 target with a fusion Q (energy ratio) > 20, possibly leading to self sustained fusion.
- Develop neutron-less fusion allowing safe deployment for handheld power sources.
- Develop microscale isotope production by proton beam interaction with specific targets.
- Explore purification of isotope systems.
- Develop hand-held pico-second laser systems to introduce wakefield accelerators for x-ray and fusion sources.


UPDATE: Physicist Art Carlson comments:

For those who might not know: Even if you can make a 0.5 MeV ion beam, and even if you can make it with 100% energy efficiency, when it slams into a solid target it will unavoidably lose more energy by heating the electrons in the solid than it will produce by fusion. This is true for D-T and it is 1000 times more true for p-B11.


It seems likely that the higher voltages suggested in Winterberg's fusion proposals (gigavolts) would be needed.
END UPDATE

Principles and applications of compact laser–plasma accelerators



There are various laser-pumped proton beam systems in the 3 MeV to 58 MeV range. Some of the 3 MeV systems are relatively compact.

Testing the first goal:
Develop 0.5 MeV proton beams and collide onto microscale B-11 target with a fusion Q (energy ratio) > 20, possibly leading to self sustained fusion.

This seems to be testable even if making everything chip-scale takes longer.
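
One simple energy-bookkeeping check on that first goal, a sketch using only the standard ~8.7 MeV release of a single p + B-11 fusion (it ignores secondary reactions and all losses):

    E_BEAM_MEV = 0.5       # proton beam energy in the DARPA goal
    E_FUSION_MEV = 8.68    # energy released per p + B-11 -> 3 alpha fusion

    # Upper bound on the gain: every injected proton fuses, with no electron
    # heating losses and no energy spent running the accelerator.
    q_max = E_FUSION_MEV / E_BEAM_MEV
    print(f"Q upper bound at 100% burn-up: {q_max:.1f}")   # ~17.4

Since even perfect burn-up caps Q near 17 for a 0.5 MeV beam, reaching Q > 20 would require secondary reactions or a different scheme, which reinforces Art Carlson's skepticism quoted above.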



Proton beams

In contrast to electrons, ions are best accelerated by a low-frequency (compared with the electron plasma wave frequency) or even a quasi-static electric field. Indeed, owing to their higher mass, the rapid field oscillations associated with an electron plasma wave average out to zero net acceleration for an ion. In experiments so far, the mechanisms of ion acceleration can be classified into two categories, on the basis of how the electric charge separation that produces the quasi-static field is generated: ponderomotive or thermal explosion acceleration.

Proton beams produced by rear-surface acceleration show good collimation, increasing at higher proton energy, and very low transverse emittance (below 10^-2 mm·mrad for protons above 10 MeV). Several paths for beam optimization are now being actively pursued. The first is to operate with ultrathin targets, in the sub-100 nm range, which requires ultrahigh-contrast laser pulses. Improved acceleration with such targets has been reported recently.

Proton beams with energies up to 58 MeV have been measured at the Lawrence Livermore National Laboratory with the now-dismantled Nova petawatt laser. With smaller facilities, of the 1 J/30 fs class, distributions extending up to 10 MeV have been obtained.

The evolution of short-pulse laser technology, a field in rapid progress, will still improve the properties of laser produced particle sources. For example, the development of diode pumped lasers will enable the laser power efficiency to be increased by up to tens of per cent and will also lead to a significant reduction of the size of the laser systems. The rapid evolution of chirped pulse amplification laser technology, coupled to progress in laser–plasma interaction modelling, will soon result in improved performances, lower cost and still wider applicability of these compact particle sources.

The Extreme Light Infrastructure project in Europe and other projects are advancing the technology of lasers and accelerators and are bringing researchers together to look at uses for these new particle and photon beams.


Neely, D. et al. Enhanced proton beams from ultrathin targets driven by high contrast laser pulses. Appl. Phys. Lett. 89, 021502 (2006).

Antici, P. et al. Energetic protons generated by ultrahigh contrast laser pulses interacting with ultrathin targets. Phys. Plasmas 14, 030701 (2007).

Ceccotti, T. et al. Proton acceleration with high-intensity, ultra-high-contrast laser pulses. Phys. Rev. Lett. 99, 185002 (2007).

Proton Acceleration with High-Intensity Ultrahigh-Contrast Laser Pulses

We report on simultaneous measurements of backward- and forward-accelerated proton spectra when an ultrahigh intensity (~5 × 10^18 W/cm²), ultrahigh contrast (>10^10) laser pulse interacts with foils of thickness ranging from 0.08 to 105 micrometers. Under such conditions, free of preplasma originating from ionization of the laser-irradiated surface, we show that the maximum proton energies are proportional to the p component of the laser electric field only and not to the ponderomotive force, and that the characteristics of the proton beams originating from both target sides are almost identical. All these points have been corroborated by extensive 1D and 2D particle-in-cell simulations showing very good agreement with the experimental data.




Nano to Macro Scale Nuclear Technology Applications


Betavoltaics can store about one million times more energy than other battery-like devices but currently cannot release it very quickly.

Nano-to-Macro Scale Engineering Applications of Nuclear Technology-An Overview by Rusi P. Taleyarkhan, School of Nuclear Engineering, Purdue University (7 page pdf)

Nuclear science and technology offers the capability for radical industrial innovations from the nano-to-macro scales and is a field that already impacts over $600B in annual worldwide activity. Areas impacted are as diverse as medicine, industrial process control, energy, explosives, materials processing, agriculture, food preservation, sterilization, non-destructive interrogation for the molecular structure of compounds to use as tracers for transport and the tracking of fluids. This paper focuses on novel nano-macro scale peaceful applications for the oil-gas industry, for the metals industries, for enabling fundamental advances in boiling heat transfer, for induction of super compression in imploding bubbles to then lead to thermonuclear fusion and energy amplifications of over x one million times compared to chemical sources, to generation of nanopores in materials that may see applications such as for high-efficiency membranes for use in batteries and for dialysis, to the development of a novel class of low cost, multidisciplinary, fundamental particle detection systems.


Nuclear safety studies have spinoffs that improve safety in other industries. They have helped improve the understanding and safe handling of liquefied natural gas and helped prevent aluminum-water explosions.

The science of nuclear safety has also resulted in advances with supercooled powders: nano-to-micron scale supercooled powder production using spontaneous molten metal-water explosions (e.g., 10 g of Sn at ~1100 K dropped into a water bath at ~310 K), with cooling rates estimated to be in the range of 100,000 K/s to one million K/s.





Enhancement of boiling heat transfer and hydrophilicity via irradiation

Some of the most wide-ranging phenomena utilized in every-day life involves hydrophilicity and the boiling of water at hot surfaces. This aspect governs the safety limits and consequently, the power output of water-cooled nuclear reactors; for a 1,000 MWe plant, even 1% enhancement implies power generation availability for an additional 10,000 homes (based on per capita electricity consumption in the USA). Enhancement of boiling heat flux for a given system has enormous significance and implications on economics and safety of operations (including that of nuclear reactors). Radiation treatment of solid surfaces appears to provide such a means as has been noted lately in several nuclear safety-motivated studies (Honjo, 2008) wherein gamma radiation has been shown to improve surface hydrophilicity and enhancement of critical heat flux (CHF) by an impressive ~ 10%, as well as delaying the onset of the well-known Leidenfrost point of the boiling curve – thereby, fundamentally impacting quenching characteristics of hot metals.


Betavoltaics

The promise and potential of betavoltaics is immense. This is primarily due to recent advances in a combination of areas related to:
(1) isotope production and potential availability at reasonable cost;
(2) significant advances in novel radiation-hard semi-conductor chips that can be micro-to-nano-fabricated; and,
(3) leap-ahead advances in intrinsic conversion efficiencies; as also from significant advances in photo-voltaic technology.

Unlike the ~1 eV energy level of visible photons used for photovoltaic cells, beta energies from isotopes are in the 100,000 to one million eV range, and thus can provide unsurpassed higher energy densities for application in confined quarters.

The theoretical conversion efficiency of a betavoltaic increases sub-linearly with increasing semiconductor bandgap. The direct-conversion technology results in a number of advantages over conventional power sources. Long-lived power: continuous current is produced during the entire decay period of the radioisotope source. The use of isotope sources with half-lives that range from years to decades allows continuous power production for similar periods. Examples include 204Tl, 85Kr, 90Sr, 147Pm and 3H, with half-lives of 3.8 y, 10.8 y, 28.8 y, 2.6 y and 12.3 y, respectively. Recent advances in conversion efficiency to >10%, as well as materials degradation with 147Pm-type isotopes and related studies using 3H at Purdue University, will be discussed at the conference.
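
As a feel for the energy densities being discussed, here is a minimal sketch computing the decay heat of a pure beta emitter from its half-life; the ~5.7 keV mean beta energy for tritium is a standard textbook value, and the calculation ignores self-absorption and conversion losses.

    import math

    N_A = 6.022e23          # Avogadro's number
    EV_J = 1.602e-19        # joules per eV

    def specific_power_w_per_g(half_life_years, molar_mass_g, mean_beta_ev):
        """Decay heat of a pure beta emitter, in W per gram of isotope."""
        lam = math.log(2) / (half_life_years * 365.25 * 24 * 3600)  # decay constant, 1/s
        atoms_per_g = N_A / molar_mass_g
        return lam * atoms_per_g * mean_beta_ev * EV_J

    # Tritium (3H): 12.3 y half-life, ~5.7 keV mean beta energy
    print(f"{specific_power_w_per_g(12.3, 3.016, 5.7e3):.2f} W/g")  # ~0.33 W/g

About 0.33 W per gram of tritium; at the ~30% conversion efficiency Widetronix claims below, that is roughly 0.1 W of electricity per gram before packaging overhead.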

Comment from Regular Reader Goatguy

The betavoltaic concept is actually intriguing from a number of perspectives.

Let's go with the obvious gotchas first: it is not a 'public' technology no matter how packaged. It is absolutely impossible to imagine ordinary consumers having megacuries of isotopes in their homes, vehicles or general workplaces. It is also only marginally a commercially-feasible concept: the same security issues exist, and it will only be placed in the most hardened, secure locations. So ... "big business" is about it, with security forces, cards, cameras, fences, reconnaissance, etc. The military is an obvious (but again, strangely, compromised by the 'tight security' issue!) placement, as are utilities, police forces, municipal entities, heavy industry, mining & exploration, shipping and rail, and the like. Aerospace, yes - especially so.

So.

First, the diagram is wrong. The semiconductor would be on both sides of the isotope (duh). The beta electrons aren't particularly inclined to go one way or the other.

Second, beta has an extremely short free path through solids. Therefore only the thinnest films of it (and the thinnest barriers through the semiconductor) allow for efficient tunneling and capture.

Third, the cells would most likely be chemical-vapor deposited so that thousands of layers per centimeter could be built up. There is a COST to that (though it is also readily mass-producible).

Fourth, using shorter half-lives, typically with higher beta energy, specific energies of 100 Wh/kg (360 kJ/kg) aren't out of the realm of the possible. Combined with emerging super-caps and conventional Li-ion battery systems, net energy densities of 200 kJ/kg and specific powers up to 1000+ W/kg are clearly achievable.

At the dozen-kilogram level, such power systems might be perfect for remote data loggers where photovoltaic is out of the question. Satellites wouldn't need the batteries, but could probably use the super-caps, to allow them to work in burst mode over long lifetimes.

At the megagram (ton) level:

100 W peak practical cell
50% weight utilization
50 W/kg usable specific power
180 kJ/kg per hour
0.05 kWh/kg per hour
1,000 kg (proposed)
50 kW continuous output
1,200 kWh/day


producing over 1,000 kWh/day of electricity...

2,500 kWh/day of heat
= 9,000,000,000 joules/day
≈ 9,000,000 BTU/day
≈ 375,000 BTU/hr

So, removing the 375,000 BTU/hr with simple air-cooling would be satisfactory.
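
Goatguy's arithmetic above, restated as a short sketch; the only number not taken from his comment is the implied ~30% electrical conversion efficiency that links the 1,200 kWh/day of electricity to the 2,500 kWh/day of heat.

    W_PER_KG = 50        # usable specific power from the comment, W/kg
    MASS_KG = 1000       # one megagram (ton) power block

    power_kw = W_PER_KG * MASS_KG / 1000      # 50 kW electrical output
    elec_kwh_day = power_kw * 24              # 1,200 kWh/day

    heat_kwh_day = 2500                       # waste heat figure from the comment
    heat_btu_day = heat_kwh_day * 3.6e6 / 1055   # joules -> BTU
    print(f"{elec_kwh_day:,.0f} kWh/day electric; "
          f"{heat_btu_day / 24:,.0f} BTU/hr of heat to reject")

(This prints about 355,000 BTU/hr with an exact joule-to-BTU conversion; the comment's 375,000 BTU/hr uses the rounder 1 BTU ≈ 1,000 J.)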

The biggest problem there would be having triplicate or quadruplicate cooling systems to keep the 'energy block' from melting down in the event of a coolant breakdown. Probably 'different systems': underground aquifer tap + air cooling + 'city water' cooling as a backup.

There are quite a few businesses that could use such power densities, and autonomy from the grid. Computer facilities in particular come to mind, if for nothing but the "core-core" equipment and routers, telecommunications and robotic systems-control apparatus.

Another player might be municipalities - especially rapid-transit 'underground' type systems. Virtually all substations can be quite hardened (and secured, and surveillance watched), and the net availability of the baseline power (with capacitive storage) is ideal for running the nominal system without purchasing much grid power.

I don't think there is much of a need though for systems larger than a few tons, say 10 or so megawatt-hours/day. Reason is, the systems rapidly approach the size where compact natural-gas generation, or pebble-bed nuclear systems have much higher energy density for the weight and footprint.

Police stations could use the devices at the several-ton level to generate baseline electricity (and deliver it quickly by the capacitive discharge route) for fleets of flywheel cars, or fast-charge ionic battery systems. (Busses could use this too). Here, the only real metric is whether the cost per megajoule of the betavoltaic power is no more than a factor of 2 greater than grid power. Why 2? Because at a 2:1 parity, in the event of a long-term electrical failure, the fleet of cars wouldn't diminish by more than 25% or so to keep law and order.

Significantly: if the cost per kilowatt-hour of electricity is even reasonably close to the grid wholesale level, then the technology is an easy "must buy" - since with all likelihood, the plateauing production of oil combined with the necessarily increased demand for it by route of India, China and the whole Far East ... is going to cause baseline grid electricity to markedly increase in price in the not too far future.


FURTHER READING
Previous coverage of liquid nuclear betavoltaic batteries

In 2007, IEEE Spectrum reported that DARPA wanted GTI (one of the betavoltaic companies) to run full speed toward proving that a reactor of the 100- to 1000-kilowatt scale could be built.

A liquid nuclear diode could catch energetic alpha and beta particles, gamma rays, and even the new atoms left over from the fission of larger atoms, Tsang says. Fission fragments could be a particularly good source of energy. In the fission of U-235, for example, the fragments carry 85 percent of the energy released. Because the fragments are heavy, as they plow through the semiconductor they ”make a shower of electron-hole pairs along the path,” he says.

Note: Alpha radiation (positively charged helium nuclei) and beta radiation (electrons).


MIT Technology Review reports that Widetronix's batteries are made up of a metal foil impregnated with tritium isotopes and a thin chip of the semiconductor silicon carbide, which can convert 30 percent of the beta particles that hit it into an electrical current. "Silicon carbide is very robust, and when we thin it down, it becomes flexible," says Widetronix CEO Jonathan Greene. "When we stack up chips and foils into a package a centimeter squared and two-tenths of a centimeter high, we have a one microwatt product." The prototype being tested by Lockheed Martin produces 25 nanowatts of power.

IEC Fusion, National Ignition Facility and Other Nuclear Fusion Projects

Alan Boyle at MSNBC's Cosmic Log reviews the status of nuclear fusion projects.

He does not mention General Fusion or Japan's Muon fusion project.

For IEC Fusion:
In September, EMC2 Fusion was awarded a Navy contract, backed by $7.9 million in stimulus funds, to develop a scaled-up version of a Polywell fusion reactor. Development and testing of the device is expected to take two years, and there's an option to spend another $4.4 million on experiments with hydrogen-boron fuel (known as pB11).


The Navy contract for $7.9 million and the $4.4 million option

Award includes an option for a Wiffleball 8.1 for an additional $4,455,077.




National Ignition Facility

The National Ignition Facility is the $3.5 billion laser research site at California's Lawrence Livermore National Laboratory. NIF is designed to produce fusion power on a small scale by aiming 192 laser beams simultaneously at a hydrogen target the size of a pencil eraser for a burst lasting just a few billionths of a second.

NIF was certified for operation in March, and last month officials reported that the laser beams could generate enough X-ray energy during the initial testing phase to ignite the fuel capsules as required. The research campaign is scheduled to begin in earnest early next year, and there's already talk in the fusion community that the reaction could reach the break-even point by the time 2010 ends.


December 03, 2009

Transistor made with a Single Atom Active Region


Figure caption:
(a) Colored scanning electron microscope image of the measured device. Aluminum top gate is used to induce a two-dimensional electron layer at the silicon-silicon oxide interface below the metallization. The barrier gate is partially below the top gate and depletes the electron layer in the vicinity of the phosphorus donors (the red spheres added to the original image). The barrier gate can also be used to control the conductivity of the device. All the barrier gates in the figure form their own individual transistors.
(b) Measured differential conductance through the device at 4 Tesla magnetic field. The red and the yellow spheres illustrate the spin-down and -up states of a donor electron which induce the lines of high conductivity clearly visible in the figure.




Researchers from Helsinki University of Technology (Finland), the University of New South Wales (Australia), and the University of Melbourne (Australia) have succeeded in building a working transistor whose active region consists of only a single phosphorus atom in silicon. The results have just been published in Nano Letters.

The working principles of the device are based on sequential tunneling of single electrons between the phosphorus atom and the source and drain leads of the transistor. The tunneling can be suppressed or allowed by controlling the voltage on a nearby metal electrode with a width of a few tens of nanometers.



Nanoletters: Transport Spectroscopy of Single Phosphorus Donors in a Silicon Nanoscale Transistor

Abstract
We have developed nanoscale double-gated field-effect transistors for the study of electron states and transport properties of single, deliberately implanted phosphorus donors. The devices provide a high level of control of key parameters required for potential applications in nanoelectronics. For the donors, we resolve transitions corresponding to two charge states successively occupied by spin-down and spin-up electrons. The charging energies and the Landé g-factors are consistent with expectations for donors in gated nanostructures.
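
For scale, the "charging energies" in the abstract are single-electron Coulomb energies, E_C = e²/2C. A sketch with an assumed (hypothetical) effective donor capacitance of about 1 aF:

    E_CHARGE = 1.602e-19    # elementary charge, coulombs
    C_DONOR = 1e-18         # assumed effective capacitance, ~1 aF (illustrative)

    charging_energy_ev = E_CHARGE**2 / (2 * C_DONOR) / E_CHARGE
    print(f"charging energy ~{charging_energy_ev * 1000:.0f} meV")   # ~80 meV

Tens of meV is far above the thermal energy at liquid-helium temperature (kT ≈ 0.36 meV at 4.2 K), which is why single-electron transitions can be cleanly resolved in the conductance data.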

Fujitsu Runs Prototype of Ten Petaflop Supercomputer and Wants Restart of Project

Fujitsu Ltd. (6702) on Wednesday successfully operated a prototype of its next-generation supercomputer, in spite of a government panel's decision to freeze funding for the project.


The Ministry of Science and others are calling for the project to be continued, and Fujitsu says it is ready to start production as soon as it gets the green light.
The electronics manufacturer has already developed a trial CPU especially for the supercomputer. Plans for the actual supercomputer call for some 20,000 circuit boards and approximately 80,000 CPUs. The prototype connected a few dozen boards -- each of the boards used four CPUs.

Fujitsu has been receiving over 10 billion yen in financial aid from the government through the research laboratory Riken, but Fujitsu has shouldered almost twice that amount on its own. If the next-generation supercomputer is brought online in 2012 as scheduled, Fujitsu plans to develop its CPUs for use in smaller supercomputers and corporate servers.
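
A quick consistency check of the 10-petaflop target against the quoted part counts (a sketch; the counts are the approximate figures from the article):

    TARGET_FLOPS = 10e15    # 10 petaflops
    CPUS = 80_000           # approximate CPU count from the article
    BOARDS = 20_000         # circuit boards from the article

    print(f"{CPUS / BOARDS:.0f} CPUs per board")                      # 4, matching the prototype
    print(f"{TARGET_FLOPS / CPUS / 1e9:.0f} GFLOPS needed per CPU")   # 125 GFLOPS

125 GFLOPS per CPU is consistent with an eight-core HPC processor of that era delivering roughly 16 GFLOPS per core.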




China COSCO CEO Seriously Considering Nuclear Powered Container Ships


The boss of the world’s largest shipping conglomerate has advocated the use of nuclear power onboard merchant ships.

Outlining the container alliance CKYH's decision to push ahead with super slow steaming, COSCO CEO and president Capt. Wei Jiafu said that the move was in part a green one. He then went on to say that he was in favour of using nuclear power onboard merchant ships as a further green initiative. 'As they are already onboard submarines, why not cargo ships?' he mused. Later he spoke to Seatrade Asia Online and revealed COSCO is in talks with the national nuclear authorities to develop nuclear powered ships.

Earlier that morning Wei had said as much as 40% of the global total orderbook is under threat. Wei's prediction is far higher than most analysts' at present. He was speaking at the Senior Maritime Forum co-organised by UBM and Seatrade at this year's Marintec China. Citing 'financing and cash flow problems in medium and small sized corporations' since the outbreak of the financial crisis, Wei said that his 'personal feeling' was that 'about 40% of newbuilding orders will be postponed or cancelled this year and next year'. COSCO itself has cancelled 126 bulkers and postponed the delivery of a large swathe of boxships by one to two years.


Technical and Economic Analysis of Nuclear Container Ships
This site had examined a study of the economics of nuclear power for commercial shipping. The study showed that a nuclear ship would be $40 million per year cheaper to operate when bunker oil is at $500/ton.
Those studies had indicated improved economics when bunker fuel is over $300/ton. Bunker oil is currently about $375/ton. Also, changing to nuclear powered container ships would reduce air pollution by the equivalent of about 20,000 cars converted to electric per container ship that is converted.
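
For a feel for where the $40 million per year figure comes from, here is a sketch of the annual bunker fuel bill; the daily consumption and steaming days are illustrative assumptions, not numbers from the study.

    FUEL_T_PER_DAY = 250    # assumed burn for a large, fast container ship, t/day
    STEAMING_DAYS = 300     # assumed days at sea per year

    for price in (300, 375, 500):   # $/ton bunker prices discussed above
        annual = FUEL_T_PER_DAY * STEAMING_DAYS * price
        print(f"${price}/ton -> ${annual / 1e6:.1f}M per year in fuel")

At $500/ton the assumed ship spends roughly $37M a year on fuel, the right order of magnitude for the $40M/yr operating advantage cited above once nuclear fuel and crew differences are netted out.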

A second article had more analysis, pictures and video.

The 2008 International Conference on Container Ship Design & Operation had another presentation on nuclear powered commercial shipping (page 3 of 4). H/T DV82XL at the Energy from Thorium Forum.

Analysis of High-Speed Trans-Pacific Nuclear Containership Service
G. A. Sawyer, J. W. Shirley, J. A. Stroud, E. Bartlett, General Management Partners LLC, USA; C. B. McKesson, CCDoTT, USA.

35-knot ships that could hold more cargo could be built and operated more cheaply than regular oil-powered ships. Initial costs are six times higher ($900 million versus $150 million). Three nuclear ships could do the work of four regular ships, and operational costs would be lower. The higher speed means the fast-cargo niche could be addressed. A reasonable timeline is for nuclear commercial shipping in the 10-15 year timeframe.

More information is in the article "The Ultimate Green Ship: Nuclear Powered?"




More Economic Analysis

A nuclear powered container ship was analyzed by Femenia, C.R. Cushing & Co, Inc. in 2008.




Capacity: 15,000 TEU (a big container ship)
Length: 405 m
Beam: 60 m
Draft: 15.5 m
Speed: 32 knots
Power: 150 MW (200,000 SHP)
Propellers: 2

Economic Issues
Capital Costs (Source: Femenia, C.R. Cushing & Co, Inc)
150,000 kW (200,000 HP)
1. Assumes Nuclear @ $2500 / kW
2. Assumes Diesel @ $800 / kW
3. Assumes Plant Life 40 Years
4. Assumes Interest Rate 10%
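
Under those four stated assumptions, here is a sketch of the annualized capital-cost gap between the nuclear and diesel plants, using the standard capital recovery factor:

    POWER_KW = 150_000      # plant size from the table above
    RATE, YEARS = 0.10, 40  # interest rate and plant life assumptions

    def annualized(capital: float, r: float = RATE, n: int = YEARS) -> float:
        """Annual payment via the capital recovery factor r(1+r)^n / ((1+r)^n - 1)."""
        growth = (1 + r) ** n
        return capital * r * growth / (growth - 1)

    nuclear = annualized(POWER_KW * 2500)   # $2,500/kW nuclear
    diesel = annualized(POWER_KW * 800)     # $800/kW diesel
    print(f"nuclear ${nuclear / 1e6:.1f}M/yr vs diesel ${diesel / 1e6:.1f}M/yr; "
          f"gap ${(nuclear - diesel) / 1e6:.1f}M/yr")

The extra capital works out to roughly $26M a year, which the fuel savings sketched earlier could plausibly cover at high bunker prices.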




COSCO website


What is the size of a VLCC?

With its US$ 15.4135 billion (122.8825 billion RMB) in annual revenue, COSCO was successfully listed as the 488th of Fortune Global 500 in 2006; in 2007, COSCO secured the 405th of the list with its US$ 20.84 billion (158.5135 billion RMB).

COSCO owns or operates a fleet of more than 800 modern merchant vessels with a total capacity of over 50 million DWT and an annual shipping volume of over 400 million tons, covering over 1,500 ports in 160 countries and territories across the globe, ranking first in China and second in the world overall. Specifically, the container fleet ranks No. 1 in China and No. 6 in the world, and the dry bulk fleet ranks top in the world.

VLCC is the abbreviation of "Very Large Crude Carrier". It is one of the largest crude carriers currently in operation throughout the world. Its deck is as big as 3 soccer fields. Full load of a VLCC on oil is equivalent to the quantity consumed by 8 million private automobiles throughout the country in 10 days. COSCO currently has got 3 VLCCs in operation and will take delivery of another two in 2004.

Why is the fifth-generation container ship reputed as sea mega-carrier?
The fifth-generation containership is regarded as one of the world's most advanced containerships in service so far. Currently COSCO operates 13 of them. This kind of vessel can carry as many as 5,446 TEU (Twenty-foot Equivalent Units) in terms of slot capacity. If these containers were connected one by one, the line would be as long as 33 km, and if a train has 60 carriages, 50 such trains would be needed to carry all these boxes away. By 2005, COSCO will have taken delivery of another 5 jumbos, each with a slot capacity of up to 8,000 TEU. These vessels are probably included in a new generation in the global container liner industry. By then, the container fleet of COSCO shall be more powerful.


RELATED INFO
Britain and other European governments have been accused of underestimating the health risks from shipping pollution following research which shows that one giant container ship can emit almost the same amount of cancer and asthma-causing chemicals as 50m cars.

Confidential data from maritime industry insiders based on engine size and the quality of fuel typically used by ships and cars shows that just 15 of the world's biggest ships may now emit as much pollution as all the world's 760m cars. Low-grade ship bunker fuel (or fuel oil) has up to 2,000 times the sulphur content of diesel fuel used in US and European automobiles.

Pressure is mounting on the UN's International Maritime Organisation and the EU to tighten laws governing ship emissions following the decision by the US government last week to impose a strict 230-mile buffer zone along the entire US coast, a move that is expected to be followed by Canada.

The setting up of a low emission shipping zone follows US academic research which showed that pollution from the world's 90,000 cargo ships leads to 60,000 deaths a year in the US alone and costs up to $330bn per year in health costs from lung and heart diseases. The US Environmental Protection Agency estimates the buffer zone, which could be in place by next year, will save more than 8,000 lives a year with new air quality standards cutting sulphur in fuel by 98%, particulate matter by 85% and nitrogen oxide emissions by 80%.



Japan Starts MOX Burning Reactor and Small Dose Radiation Risks Are Lower

1. Japan's first ever nuclear power reactor to use mixed oxide (MOX) fuel assemblies is now operating at full capacity, fuel supplier Areva has announced. Kyushu's Genkai 3 was loaded in October with the fuel, fabricated from uranium and plutonium recovered from previously used nuclear fuel.

Recycling of plutonium in MOX is to play a key role in Japan's future nuclear fuel cycle, and two other utilities - Shikoku Electric Power Co and Chubu Electric Power Co – plan to introduce MOX fuel into their reactors in or after 2010.


2. The risks of small radiation doses could have been exaggerated, the Electric Power Research Institute (EPRI) has found. This matters because we should not waste money trying to protect against things that are not actually dangerous. We do not spend money trying to protect against collisions at 2 miles per hour or lower because those kinds of collisions are not dangerous. Trying to protect against those kinds of collisions would mean spending trillions of dollars without any increase in safety.

The risks from small doses over longer periods have been largely assumed under the 'linear-no threshold' model, which implies that any level of radiation exposure - no matter how small - would cause a corresponding level of biological damage.

After collating more than 200 peer-reviewed publications on the topic, EPRI was able to conclude that this methodology may have been over-estimating the risks. Different mechanisms are at work at each end of the scale, EPRI found from recent studies, and "when radiation is delivered at a low dose rate (i.e. over a longer time period), it is much less effective in producing biological changes than when the same dose is delivered in a short time period."




The data covers more than 100,000 workers per year at US nuclear power plants. Nobody has been exposed to more than the US regulatory annual limit of 5 rem (0.05 Sv) since 1989. "Doses of less than 10 rem (0.1 Sv) in a single exposure are too small to allow detection of any statistically significant excess cancers," said EPRI.

A letter from World Nuclear Association director general John Ritch focused on the currently ongoing revision of the IAEA Basic Safety Standards and a proposal to reduce dose limits for the public from 1 mSv per year to 0.3 mSv. Ritch warned that a "wholly theoretical gain in radiation safety could take precedence over, and act to the detriment of, the real gains in public health and environmental protection that can be achieved through a worldwide expansion of nuclear power."

The tendency to strive for ever-lower radiation doses in the absence of evidence of real health gains "undercuts the fundamental, well-established principle of optimisation of doses, which entails that a judicious balance be struck among real risks and benefits," said Ritch.

In practical terms, a constant drive for lower doses would lead to increasing cost and complication for nuclear operators and regulators, with no measurable benefit for workers or the public.


MIT Proposes Solid Oxide Fuel Cells for Natural Gas Power

MIT researchers propose a system that uses solid-oxide fuel cells, which can produce power from fuel without burning it.

The system would not require any new technology, but would rather combine existing components, or ones that are already well under development, in a novel configuration (for which they have applied for a patent). The system would also have the advantage of running on natural gas, a relatively plentiful fuel source — proven global reserves of natural gas are expected to last about 60 years at current consumption rates — that is considered more environmentally friendly than coal or oil. (Present natural-gas power plants produce an average of 1,135 pounds of carbon dioxide for every megawatt-hour of electricity produced — half to one-third the emissions from coal plants, depending on the type of coal.)

The system proposed by Adams and Barton would not emit into the air any carbon dioxide or other gases believed responsible for global warming, but would instead produce a stream of mostly pure carbon dioxide. This stream could be harnessed and stored underground relatively easily, a process known as carbon capture and sequestration (CCS). One additional advantage of the proposed system is that, unlike a conventional natural gas plant with CCS that would consume significant amounts of water, the fuel-cell based system actually produces clean water that could easily be treated to provide potable water as a side benefit.

The basic principles have been demonstrated in a number of smaller units, including a 250-kilowatt plant, and prototype megawatt-scale plants are planned for completion around 2012. Actual utility-scale power plants would likely be on the order of 500 megawatts, Adams says. And because fuel cells, unlike conventional turbine-based generators, are inherently modular, once the system has been proved at small size it can easily be scaled up. “You don’t need one large unit,” Adams explains. “You can do hundreds or thousands of small ones, run in parallel.”



Combined-cycle natural gas plants — the most efficient type of fossil-fuel power plants in use today — could be retrofitted with a carbon-capture system to reduce the output of greenhouse gases by 90 percent. But the MIT researchers’ study found that their proposed system could eliminate virtually 100 percent of these emissions, at a comparable cost for the electricity produced, and with even higher efficiency (in terms of the amount of electricity produced from a given amount of fuel).

The study shows that a very low level of carbon tax, on the order of $5 to $10 per ton, would make this technology cheaper than coal plants, which are currently the lowest cost option for electricity generation.
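
A sketch of why so small a carbon tax tips the comparison; the coal emissions intensity here is an assumption (about twice the gas figure quoted above, consistent with the "half to one-third" statement), not a number from the study.

    LB_PER_TON = 2204.6     # pounds per metric ton

    GAS_LB_MWH = 1135       # lb CO2/MWh for gas plants, quoted above
    COAL_LB_MWH = 2270      # assumed: ~2x the gas figure

    for tax in (5, 10):     # $/ton CO2, the range cited above
        adder = COAL_LB_MWH / LB_PER_TON * tax
        print(f"${tax}/ton CO2 adds ~${adder:.0f}/MWh to coal, "
              f"and ~$0 to the ~100%-capture fuel-cell plant")

Coal emits roughly one ton of CO2 per MWh, so the tax in $/ton translates almost one-for-one into a $/MWh penalty that the near-zero-emission fuel-cell plant avoids.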

High-efficiency power production from natural gas with carbon capture

A unique electricity generation process uses natural gas and solid oxide fuel cells at high electrical efficiency (74% HHV) and zero atmospheric emissions. The process contains a steam reformer heat-integrated with the fuel cells to provide the heat necessary for reforming. The fuel cells are powered with H2 and avoid carbon deposition issues. 100% CO2 capture is achieved downstream of the fuel cells with very little energy penalty using a multi-stage flash cascade process, where high-purity water is produced as a side product. Alternative reforming techniques such as CO2 reforming, autothermal reforming, and partial oxidation are considered. The capital and energy costs of the proposed process are considered to determine the levelized cost of electricity, which is low when compared to other similar carbon capture-enabled processes.




Prospects and Undeveloped Uranium Mines

1.
Summary of uranium resources available in major deposits and prospective mines. Ranger, Olympic Dam and Beverley are the top three mines currently in production in Australia and are described here.

Bannerman Resources Ltd released further evidence of major uranium plays at the Oshiveli and Onkelo prospects.

Bannerman's chief executive Len Jubber said these results were directed towards completion of a further mineral resource estimate as part of the ongoing feasibility study work for the Etango Project.

Latest drill hits included assays of 78 metres downhole grading 230 ppm U3O8; 36 m @ 416 ppm, including 6 m @ 2,037 ppm; 15 m @ 531 ppm; 51 m @ 255 ppm; 21 m @ 365 ppm; and 20 m @ 540 ppm U3O8.

Oshiveli and Onkelo are contiguous with the Anomaly A deposit and form the northern 3 km strike length of the 6 km long Etango Project uranium mineralisation.

2.

Canadian uranium mines

3. Kazakhstan uranium mines

4.

13 page pdf on Namibia uranium production prospects and Challenges

5. Active Botswana explorer Impact Minerals (ASX: IPT) said today that its Lekobolo prospect has a geological setting similar to the Letlhakane uranium deposit, owned by another Australian explorer, A-Cap Resources Ltd (ASX: ACB).

Lekobolo is 20 km along strike to the south west of the large Letlhakane uranium deposit that covers an area of about 30 sq km. A-Cap Resources has reported its project has an inferred resource of 98 million lb of uranium oxide grading 158 ppm at a cut-off grade of 100 ppm. That deposit is hosted by near-surface calcrete and by Karoo Group sediments.
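
As a unit-conversion sketch of what that resource statement implies in ore terms (no new data, just arithmetic on the figures above):

    RESOURCE_LB = 98e6     # inferred resource, lb of U3O8 (from the article)
    GRADE_PPM = 158        # average grade, ppm U3O8

    u3o8_tonnes = RESOURCE_LB * 0.4536 / 1000       # ~44,500 t of U3O8
    ore_tonnes = u3o8_tonnes / (GRADE_PPM * 1e-6)   # implied mineralized tonnage
    print(f"{u3o8_tonnes:,.0f} t U3O8 hosted in ~{ore_tonnes / 1e6:.0f} Mt of rock")

So the 98 million pound figure corresponds to roughly 44,500 tonnes of U3O8 spread through on the order of 280 Mt of low-grade mineralization.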


6. A 23 page pdf from Areva (2008) giving their view of the uranium market and their own projects

7. 20 page pdf on Niger's Uranium mines (from 2009)

Takardeit exploration target is 35-80M lbs.
It has outcropping uranium mineralisation over a 3 km by 2 km area.

Best hits:
IND001 - 1.8 m @ 1,376 ppm eU3O8 from 2.2 m depth
IND012 - 1.4 m @ 1,691 ppm eU3O8 from 13.8 m depth
IND014 - 0.4 m @ 1,098 ppm eU3O8 from 7.4 m depth

Location of IND001: on top of a mesa,
approximately 150 m by 100 m, with a possible 12 m ore grade zone through the entire outcrop.
Several other mesas in the region all exhibit high grade mineralisation,
2 km north of the cross-section drill holes.

Another mesa to the east of IND001:
the whole face is ore grade as measured with an XRF spectrometer; the central high grade core zone is where 13,400 ppm was recorded.

IDEKEL Prospect (Tagait 4)
Radiometric anomaly over 6 km; outcrop to 0.1% (1,000 ppm) U3O8; zone mapped in the field over 5 km.
The nearest Cogema hole reported anomalous uranium over 26 m from 8 m depth, to a maximum of 0.11% (1,100 ppm) U3O8.

SOMAIR - Arlit. Mill: 2,000 t/day
Areva 63.4%, ONAREM (Niger) 36.6%
Open pit mines: Ariege, Artois, Arlette, Tamou
Reserves: 47,000 t U. Grade: 0.3% (3,000 ppm)
U production 2007: ~1,750 t U

COMINAK - Akouta. Mill: 1,800 t/day
Areva 34%, ONAREM (Niger) 31%, OURD (Japan) 25%, ENUSA (Spain) 10%
Underground mines: Akola, Akouta, Afasto
Reserves: 36,300 t U. Grade: 0.45% (4,500 ppm)
U production 2007: ~1,400 t U

The open cut Imouraren project, which neighbours NGM concessions (est. 150,000 tonnes contained uranium @ 0.11%/1,100 ppm head grade, 400 ppm cut-off), is under development by Areva.
It is expected to produce 5,000 t/year.
The US$1.5bn development will make Niger the world's No. 2 producer by 2011/12.

December 02, 2009

New Bomb Resistant Tactical Trucks and All Terrain Vehicles



The next generation of bomb-resistant vehicles will be in Afghanistan along with the 30,000 extra troops. They are equipped with everything from composite armor to "electronic keels".

The M-ATV is made by Oshkosh Defense. It has a curb weight of just under 25,000 lb and is powered by a 370-hp Caterpillar C7. It seats four passengers plus one gunner, and has a central tire inflation system with four terrain settings to improve traction on unimproved roads. Max speed is 65 miles per hour; max range is 320 miles. Each one costs about $1.4 million, fully loaded.





MRAPs were the bomb-resistant vehicles used in Iraq; they weighed over 14 tons.

The MRAPs were too bulky for Afghanistan’s rough terrain and primitive roadways. The suspensions took a beating, and the top-heavy MRAPs were prone to rollover.

Stuck Spirit Rover Analyzes Mars Water Cycle and Improved Evidence of Fossilized Mars Bacteria




1. In 1996, NASA researchers reported that a meteorite contained evidence that life once existed on Mars. But others argued that the evidence was most likely caused by inorganic processes that could be recreated artificially. A second group of NASA researchers (containing some scientists from the first study) has reexamined the 1996 findings using a new analysis technique called ion beam milling, and they again claim that living organisms are most likely responsible for the materials found in the meteorite.

The new study not only reexamined the contents of the meteorite itself, named ALH84001, but tested the alternative, non-biological hypothesis. "In this study, we interpret our results to suggest that the in situ inorganic hypotheses are inconsistent with the data, and thus infer that the biogenic hypothesis is still a viable explanation," says Kathie Thomas-Keprta, a senior scientist for Barrios Technology at Johnson Space Center in Houston.

47 page pdf of the new analysis of the Mars "fossilized life" rock.



“The evidence supporting the possibility of past life on Mars has been slowly building up during the past decade,” said McKay, NASA chief scientist for exploration and astrobiology, JSC. “This evidence includes signs of past surface water including remains of rivers, lakes and possibly oceans, signs of current water near or at the surface, water-derived deposits of clay minerals and carbonates in old terrain, and the recent release of methane into the Martian atmosphere, a finding that may have several explanations, including the presence of microbial life, the main source of methane on Earth."

2. "By being stuck at Troy [crater on Mars], Spirit [Mars Robotic Rover] has been able to teach us about the modern water cycle on Mars." Indeed, Spirit's saga at Troy has given scientists material evidence of past water on Mars on two time scales: ancient volcanic times, and cycles ongoing to the present day.

Italy, Belgium, Germany and the Netherlands Also Have Nuclear Weapons


Time Magazine reports that nuclear bombs are stored on air-force bases in Italy, Belgium, Germany and the Netherlands — and planes from each of those countries are capable of delivering them.

The Federation of American Scientists believes that there are some 200 B61 thermonuclear gravity bombs scattered across these four countries. Under a NATO agreement struck during the Cold War, the bombs, which are technically owned by the U.S., can be transferred to the control of a host nation's air force in times of conflict.




B61 nuclear bomb at Wikipedia

Total production of all versions was approximately 3,155, of which approximately 1,925 remain in service as of 2002, and some 1,265 are considered to be operational.

The B61 has been deployed by a very wide variety of U.S. military aircraft. Aircraft cleared for its use have included the B-58 Hustler, B-1, B-2, B-52, and FB-111 strategic bombers; the F-100 Super Sabre, F-104 Starfighter, F-105 Thunderchief, F-111 and F-4 Phantom II fighter bombers; the A-4 Skyhawk, A-6 Intruder, and A-7 Corsair II attack aircraft; and the F-15 Eagle, F-15E Strike Eagle and F-22 Raptor. British, German and Italian Panavia Tornado IDS aircraft, plus Belgian and Dutch F-16 Fighting Falcons, can also carry the B61.

Though exact numbers are hard to establish, research done by the Natural Resources Defense Council suggests approximately 480 are deployed with United States Air Force units in various European countries.

The B61 is a variable yield bomb designed for carriage by high-speed aircraft. It has a streamlined casing capable of withstanding supersonic flight speeds. The weapon is 11 ft 8 in (3.58 m) long, with a diameter of about 13 in (33 cm). Basic weight is about 700 lb (320 kg), although the weights of individual weapons may vary depending on version and fuze/retardation configuration.

The newest variant is the B61 Mod 11, a hardened penetration bomb with a reinforced casing (according to some sources, containing depleted uranium) and a delayed-action fuze, allowing it to penetrate several metres into the ground before detonating, damaging fortified structures further underground. The Mod 11 weighs about 1,200 lb (540 kg). Developed from 1994, the Mod 11 went into service in 1997, replacing the older megaton-yield B53 bomb, a limited number of which had been retained for anti-fortification use. About 50 Mod 11 bombs have been produced, their warheads converted from Mod 7 bombs. At present, the primary carrier for the B61 Mod 11 is the B-2 Spirit.

Most versions of the B61 are equipped with a parachute retarder (currently a 24-ft (7.3 m) diameter nylon/Kevlar chute) to slow the weapon in its descent, giving the aircraft a chance to escape the blast (or to allow the weapon to survive impact with the ground in laydown mode). The B61 can be set for airburst, ground burst, or laydown detonation, and can be released at speeds up to Mach 2 and altitudes as low as 50 feet (15 m). Fusing for most versions is by radar.

The B61 is a variable yield, kiloton-range weapon, called "Full Fuzing Option" (FUFO) or "Dial-a-yield" by many service personnel. Tactical versions (Mods 3, 4, and 10) can be set to 0.3, 1.5, 5, 10, 60, 80, or 170 kilotons of explosive yield (depending on version). The strategic version (B61 Mod 7) has four yield options, with a maximum of 340 kilotons. Sources conflict on the yield of the earth-penetrating Mod 11; the physics package (bomb core components) of the Mod 11 is apparently unchanged from the earlier strategic Mod 7, but the declassified 2001 Nuclear Posture Review states that the B61-11 has only a single yield; some sources indicate 10 kilotons, others suggest the same 340-kiloton maximum as the Mod 7.




Fighting Fat with Brown Fat Cells Can Shrink the Spare Tire Around Your Stomach

Bonn scientists discover a promising new approach to combat obesity

Fighting fat with fat

The researchers suspect that a disorder of the brown fatty tissue can lead to obesity in adults. If it were possible to turn the 'natural heating system' back on, the problem of unwanted fat would be quickly solved: according to estimates, 50 grams of active brown fatty tissue is sufficient to increase the basal metabolic rate by 20 per cent.

"With the same nutrition and activity the fat reserves would melt at a rate of five kilos per year," Professor Pfeifer explains. "This makes our results interesting from a therapeutic perspective. By blocking the PKG signaling path in the brown fat we basically want to fight fat with fat."


Separate New England Journal of Medicine Forecast - Rising Obesity Is Offsetting Gains from Smoking Reduction

In an effort to forecast the effect of the rise in obesity and decline in smoking on health at the population level over the next decade, researchers from Harvard University and the University of Michigan examined data from national health surveys conducted from the early 1970s through 2006.

Over the next decade the health benefits achieved because fewer Americans are smoking will be more than overshadowed by the negative health effects of the unchecked rise in obesity, new research (Harvard/Michigan) suggests.

If all adults in the United States stopped smoking and achieved a normal weight by 2020, the life expectancy of an 18-year-old would increase by nearly four years, according to the forecast.





Brown adipose tissue is different from white fat pads. It contains loads of mitochondria, miniature power stations which among other things can 'burn' fat. In doing this, they normally generate a voltage similar to that of a battery, which then provides energy for cellular processes. However, the mitochondria of brown fat cells have a short circuit. They go full steam ahead all the time. The energy released when the fat is broken down is released as heat.

'This is actually what is intended,' Professor Alexander Pfeifer from the Bonn PharmaCentre explains. 'Brown fat acts like a natural heating system.' For example, babies would get cold very quickly without this mechanism. Up to now, it was thought that brown fat only occurred in newborn babies and was lost with age. However, this year different groups were able to show that this is not true: even adults have a deposit of brown fat in the neck area. But with very overweight people this deposit is only moderately active or is completely absent.

PKG turns on the heating

The scientists from Bonn, Heidelberg, Cologne, Martinsried and the Federal Institute of Drugs and Medical Products (BfArM) were now able to show which signals prompt the body to produce brown fat cells. A signalling pathway controlled by the PKG enzyme takes on a key role in this process. This signalling pathway results in the stem cells of the fatty tissue becoming brown fat cells. For this it switches on the mass production of mitochondria and ensures that UCP is formed, the substance that creates the short circuit. 'Furthermore, we were able to show that PKG makes brown fat cells sensitive to insulin,' Alexander Pfeifer explains. 'Therefore PKG also controls how much fat is burnt in general.'

Mice without PKG have a lower body temperature, as the researchers were able to show with a thermographic camera. Above all, animals in the thermal image lack an 'energy spot' between the shoulder blades, i.e. the place where normally the brown fat is active.


Babies get cold quickly. That is why nature has equipped them with a special heating system, brown fat cells. Their only purpose is to burn fat, thereby generating heat. It has only recently become known that such cells also occur in adults. Scientists at the University of Bonn have now found a new signalling pathway which stimulates the production and function of brown fat cells. They propose using the natural heating system to simply 'burn' unwanted excess fat. Scientists from Heidelberg, Cologne, Martinsried and the Federal Institute of Drugs and Medical Products (Bundesinstitut für Arzneimittel und Medizinprodukte, BfArM) were also involved in the study. The results will be published in the journal Science Signaling on 1st December (doi: 10.1126/scisignal.2000511).




US Energy Mix


The latest US energy source mix

From January 2009 to August 2009, US primary energy consumption fell by 5.7 percent compared to the same period in 2008. For the first eight months of 2009 (shares tallied in the quick check after the list):
* petroleum provided 31.7% of US energy consumption
* natural gas 24.6%
* coal 21.0%
* nuclear 9.0%
* biomass 4.1%
* hydro 2.9%
* wind 0.7%
* geothermal 0.4%
* solar 0.1%
(EIA’s Monthly Energy Review)
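
The listed shares do not quite total 100 percent; a quick tally (my addition, not part of the EIA release) shows what the named sources account for:

    # Quick sanity check on the EIA shares listed above.
    shares = {
        "petroleum": 31.7, "natural gas": 24.6, "coal": 21.0, "nuclear": 9.0,
        "biomass": 4.1, "hydro": 2.9, "wind": 0.7, "geothermal": 0.4, "solar": 0.1,
    }
    print(f"Named sources: {sum(shares.values()):.1f}%")  # 94.5%
    # The remaining ~5.5% reflects rounding and/or categories
    # not broken out in the article's list.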



Intel Makes Single Chip Cloud Computer with 48-Cores


Intel's 48-core Single-chip Cloud Computer (SCC) processor (Credit: Intel)

Intel on Wednesday demonstrated a fully programmable 48-core processor it thinks will pave the way for massive data computers powerful enough to do more of what humans can do. (H/T Sander Olson)

The 1.3-billion-transistor processor, called the Single-chip Cloud Computer (SCC), is the successor to the 80-core "Polaris" processor that Intel's Tera-scale research project produced in 2007. Unlike that precursor, though, the second-generation model can run the standard software of Intel's x86 chips such as its Pentium and Core models.

The name “Single-chip Cloud Computer” reflects the fact that the architecture resembles a scalable cluster of computers such as you would find in a cloud, integrated into silicon.

The research chip features:
* 24 “tiles” with two IA cores per tile
* A 24-router mesh network with 256 GB/s bisection bandwidth
* 4 integrated DDR3 memory controllers
* Hardware support for message-passing (see the sketch below)
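
The message-passing item is the key architectural departure: instead of hardware cache coherence, cores exchange explicit messages. Here is a toy sketch of that style (my illustration in Python, not Intel's actual API), with processes standing in for cores:

    # Toy model of the SCC's message-passing style (not Intel's actual API):
    # "cores" keep private data and exchange results as explicit messages,
    # instead of reading each other's memory through cache coherence.
    from multiprocessing import Process, Pipe

    def core(core_id, conn):
        # Each core works on its own slice of data, then ships the result out.
        partial = sum(range(core_id * 1000, (core_id + 1) * 1000))
        conn.send((core_id, partial))
        conn.close()

    if __name__ == "__main__":
        links = [Pipe() for _ in range(4)]  # 4 cores for brevity, not 48
        procs = [Process(target=core, args=(i, child))
                 for i, (_, child) in enumerate(links)]
        for p in procs:
            p.start()
        total = sum(parent.recv()[1] for parent, _ in links)
        for p in procs:
            p.join()
        print("combined result:", total)

Each pipe stands in for the SCC's on-die mesh; the point is that no two workers ever touch the same memory.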




EETimes discusses the new direction that Intel is taking on its terascale chip efforts

In the future, Intel's so-called "single-chip cloud computer" processor could enable PCs to use "vision" to interact with people.

Intel is taking a more mainstream approach to this multicore effort by going the x86-based route. Intel has nicknamed this test chip a "single-chip cloud computer" because it resembles the organization of datacenters used to create a "cloud" of computing. Cloud datacenters are composed of tens to thousands of computers connected by a physically cabled network, distributing large tasks and massive datasets in parallel.

The long-term research goal for Intel's new device is to add scaling features that spur new software applications and human-machine interfaces. Intel plans to build 100 or more experimental chips for use by dozens of industrial and academic research collaborators. The goal is to develop new software applications and programming models for future multicore processors.

This prototype device contains 48 programmable processing cores. It also includes a high-speed, on-chip network for sharing information along with new power management techniques. The on-chip network technology was also present on Polaris, Rattner said.

"With a chip like this, you could imagine a cloud datacenter of the future which will be an order of magnitude more energy efficient than what exists today, saving significant resources on space and power costs," said Rattner. "Over time, I expect these advanced concepts to find their way into mainstream devices, just as advanced automotive technology such as electronic engine control, air bags and anti-lock braking eventually found their way into all cars."



Intel Labs has created an experimental “Single-chip Cloud Computer,” (SCC) a research microprocessor containing the most Intel Architecture cores ever integrated on a silicon CPU chip – 48 cores. It incorporates technologies intended to scale multi-core processors to 100 cores and beyond, such as an on-chip network, advanced power management technologies and support for “message-passing.”

Architecturally, the chip resembles a cloud of computers integrated into silicon. The novel many-core architecture includes innovations for scalability in terms of energy-efficiency including improved core-core communication and techniques that enable software to dynamically configure voltage and frequency to attain power consumptions from 125W to as low as 25W.
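
The 125 W to 25 W range is plausible from first principles, since dynamic CPU power scales roughly with voltage squared times frequency. A quick illustration (the operating points are my examples, not Intel's published figures):

    # Dynamic power scales roughly as P ~ C * V^2 * f, so modest cuts in
    # voltage and frequency compound into large power savings.
    # Illustrative numbers only, not Intel's published operating points.
    def scaled_power(p_base, v_ratio, f_ratio):
        return p_base * v_ratio ** 2 * f_ratio

    p_full = 125.0  # watts at full voltage and frequency (from the article)
    print(f"{scaled_power(p_full, 0.63, 0.5):.1f} W")  # ~24.8 W at 63% V, 50% f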

This represents the latest achievement from Intel’s Tera-scale Computing Research Program. The research was co-led by Intel Labs Bangalore, India, Intel Labs Braunschweig, Germany and Intel Labs researchers in the United States.

Other Supercomputing News

EETimes reports that Cray Inc. has announced three European partners for a new program aimed at delivering by the end of the decade a supercomputer capable of performing an exaflop, a quintillion calculations per second.

Science Unable to Find Men in Their 20s Who Had Not Seen Porn, and the Daily Show on Climategate

1. Telegraph UK: researchers were conducting a study comparing the views of men in their 20s who had never been exposed to pornography with those of regular users. (H/T Instapundit)

But their project stumbled at the first hurdle when they failed to find a single man who had not seen it.

“We started our research seeking men in their 20s who had never consumed pornography,” said Professor Simon Louis Lajeunesse. “We couldn't find any.”


2. Jon Stewart of the Daily Show Talks Climategate


JON STEWART: “Poor Al Gore: Global Warming Completely Debunked By the Internet You Invented.”

Plus, “Why would you throw out raw data from the eighties? — I still have Penthouse magazines from the seventies! Laminated.”







Nuclear Plant Uprates


Construction crews last week prepared a concrete slab to serve as a staging area for the replacement of Limerick's six huge transformers, a $90 million job that will take about two years to complete.

The improvements to the transformers, which convert electricity for transmission on big power lines, are only one component of a complicated effort to "uprate" the plant's output, adding 170 megawatts of generating capacity to each unit. Along with earlier upgrades, the improvements will expand Limerick's total capacity to 2,600 megawatts - 23 percent more power than it produced when the two units were completed in 1989.

The upgrades at Limerick, and a similar project at the twin Peach Bottom reactors in York County, are part of a larger Exelon Corp. plan to add up to 1,500 megawatts of capacity to its fleet of 17 reactors, which are concentrated in Pennsylvania and Illinois.

Uprates cost about $2,400 per kilowatt, which would put the Limerick upgrades in the neighborhood of $800 million. Uprates are competitive with building a new fossil-fuel or renewable plant because they add little to operating costs - no additional staffing or maintenance, just more fuel.

The initial uprate planned for Limerick is a "measurement uncertainty upgrade," which involves installing equipment to more precisely measure the water flowing into the reactor, and therefore the quantity of steam produced, Exelon says. With more accurate metering, the plant can operate closer to its legal capacity, yielding an improvement of up to 2 percent. Limerick plans to apply to the NRC for that uprate in February.

State utility regulators have approved a plan by Entergy Mississippi to increase the output of its Grand Gulf Nuclear Power Plant in Port Gibson. Grand Gulf's capacity would rise to 1,443 megawatts from 1,265, a 14 percent increase. Fisackerly, in a statement released Wednesday, says the total cost, estimated at $510 million, will be shared among the joint owners of Grand Gulf. The project's completion is scheduled for 2012. Most of the work will be accomplished during regularly scheduled maintenance outages.
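
The cost figures in both items check out arithmetically (a quick verification; the derived numbers are mine, computed from the figures quoted above):

    # Checking the article's uprate cost arithmetic.
    cost_per_kw = 2400                  # dollars per kW, quoted above
    limerick_added_kw = 170 * 1000 * 2  # 170 MW added to each of two units
    print(f"Limerick: ${cost_per_kw * limerick_added_kw / 1e6:.0f} million")
    # ~$816 million, consistent with the "$800 million" neighborhood.

    grand_gulf_added_kw = (1443 - 1265) * 1000  # 178 MW
    print(f"Grand Gulf: ${510e6 / grand_gulf_added_kw:,.0f} per kW")
    # ~$2,865/kW, in the same range as the $2,400/kW rule of thumb.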



The industry's capacity factor - a measure of how efficiently the nuclear assets produce power - has increased from 62 percent to almost 92 percent in two decades. According to the Nuclear Energy Institute, the industry's policy arm, no other electrical generators come close. Coal-fired plants operate at 71 percent capacity; wind turbines operate at 31 percent capacity, and solar panels generate power only 21 percent of the time.
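
Capacity factor translates directly into annual energy output. A short illustration using the percentages above (the 1,000 MW plant size is my example, not from the article):

    # Annual output implied by the capacity factors quoted above,
    # for a hypothetical 1000 MW of nameplate capacity.
    HOURS_PER_YEAR = 8760

    def annual_gwh(capacity_mw, capacity_factor):
        return capacity_mw * capacity_factor * HOURS_PER_YEAR / 1000

    for source, cf in [("nuclear", 0.92), ("coal", 0.71),
                       ("wind", 0.31), ("solar", 0.21)]:
        print(f"{source:8s}: {annual_gwh(1000, cf):,.0f} GWh/yr")
    # nuclear: 8,059 GWh/yr -- roughly three times the energy that the
    # same nameplate capacity of wind (2,716 GWh/yr) would deliver.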

Owners also are extracting more life from their reactors by extending their 40-year operating licenses. The Nuclear Regulatory Commission has granted 20-year extensions to more than half the nation's commercial reactors - on Monday, PPL Corp.'s two Susquehanna units in Luzerne County became the latest to get extensions. Exelon plans to apply for Limerick extensions in 2011.

OTHER NUCLEAR NEWS
Japan's nuclear utilisation rate climbed year-on-year in November, rising for the third month in a row, as utilities increased power output ahead of winter.

The nuclear power plant utilisation rate at Japan's 10 nuclear power companies averaged 66.6 percent in November, 8.5 percentage points higher than a year ago, the trade ministry said on Tuesday. Last month's nuclear run rate was also 3.1 percentage points higher than in October.


December 01, 2009

Quantum Cascade Laser at 120 Watts at Room Temperature

The quantum cascade laser (QCL) is based on inter-subband electron transitions inside a quantum well structure, which can be tailored to emit different wavelengths simply by changing the thickness of the constituent layers. In other words, we are no longer limited by inherent band gaps, and can demonstrate a very versatile source using one material system. Northwestern University researchers have obtained 120 watts from a room temperature quantum cascade laser; last year, the highest reported power level for a quantum cascade laser was 34 watts.

Applied Physics Letters: High power broad area quantum cascade lasers

Broad area quantum cascade lasers (QCLs) are studied with ridge widths up to 400 µm, in room temperature pulsed mode operation at an emission wavelength around 4.45 µm. The peak output power scales linearly with the ridge width. A maximum total peak output power of 120 W is obtained from a single 400-µm-wide device with a cavity length of 3 mm. A stable far field emission characteristic is observed with dual lobes at ±38° for all tested devices, which suggests that these broad area QCLs are highly resistant to filamentation.
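
Since the abstract reports linear scaling of peak power with ridge width, the implied figure of merit is easy to extract (my reading of the numbers, with the assumption that the linear fit passes near the origin):

    # Linear power-vs-ridge-width scaling reported in the abstract.
    # Assumption (mine): the linear fit passes near the origin.
    watts_per_micron = 120.0 / 400.0  # ~0.3 W of peak power per um of ridge
    for width_um in (100, 200, 400):
        print(f"{width_um} um ridge -> ~{watts_per_micron * width_um:.0f} W peak")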




Center for Quantum Devices at Northwestern University

Monash University of Australia Innovation for Tripling Energy Conversion in Solar Cells


Scientists at Monash University, in collaboration with colleagues from the University of Wollongong in Australia and the University of Ulm in Germany, have produced tandem dye-sensitised solar cells with a three-fold increase in energy conversion efficiency compared with previously reported tandem dye-sensitised solar cells.

Lead researcher Dr Udo Bach, from Monash University, said the breakthrough had the potential to increase the energy generation performance of the cells and make them a viable and competitive alternative to traditional silicon solar cells.

Dr Bach said the key was the discovery of a new, more efficient type of dye that made the operation of inverse dye-sensitised solar cells much more efficient.

When the research team combined two types of dye-sensitised solar cell -- one inverse and the other classic -- into a simple stack, they were able to produce for the first time a tandem solar cell that exceeded the efficiency of its individual components.

"The tandem approach -- stacking many solar cells together -- has been successfully used in conventional photovoltaic devices to maximise energy generation, but there have been obstacles in doing this with dye-sensitised cells because there has not been a method for creating an inverse system that would allow dye molecules to efficiently pass on positive charges to a semiconductor when illuminated with light," Dr Bach said.


Nature Materials: Highly efficient photocathodes for dye-sensitized tandem solar cells

Thin-film dye-sensitized solar cells (DSCs) based on mesoporous semiconductor electrodes are low-cost alternatives to conventional silicon devices. High-efficiency DSCs typically operate as photoanodes (n-DSCs), where photocurrents result from dye-sensitized electron injection into n-type semiconductors. Dye-sensitized photocathodes (p-DSCs) operate in an inverse mode, where dye-excitation is followed by rapid electron transfer from a p-type semiconductor to the dye (dye-sensitized hole injection). Such p-DSCs and n-DSCs can be combined to construct tandem solar cells (pn-DSCs) with a theoretical efficiency limitation well beyond that of single-junction DSCs. Nevertheless, the efficiencies of such tandem pn-DSCs have so far been hampered by the poor performance of the available p-DSCs. Here we show for the first time that p-DSCs can convert absorbed photons to electrons with yields of up to 96%, resulting in a sevenfold increase in energy conversion efficiency compared with previously reported photocathodes. The donor–acceptor dyes, studied as photocathodic sensitizers, comprise a variable-length oligothiophene bridge, which provides control over the spatial separation of the photogenerated charge carriers. As a result, charge recombination is decelerated by several orders of magnitude and tandem pn-DSCs can be constructed that exceed the efficiency of their individual components.
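
The reason a tandem can beat either of its parts is simple circuit arithmetic: in a series-connected pn-tandem the photovoltages add while the current is pinned to the weaker subcell. A schematic sketch (the operating points below are invented for illustration, not taken from the paper):

    # Schematic series-tandem arithmetic: voltages add, current is limited
    # by the weaker subcell. Operating points are illustrative only.
    def tandem_power(v_n, j_n, v_p, j_p):
        # v in volts, j in mA/cm^2 -> power density in mW/cm^2
        return (v_n + v_p) * min(j_n, j_p)

    n_cell = (0.70, 10.0)  # hypothetical n-DSC photoanode operating point
    p_cell = (0.35, 9.0)   # hypothetical p-DSC photocathode operating point
    print(f"n alone: {n_cell[0] * n_cell[1]:.2f} mW/cm^2")           # 7.00
    print(f"p alone: {p_cell[0] * p_cell[1]:.2f} mW/cm^2")           # 3.15
    print(f"tandem : {tandem_power(*n_cell, *p_cell):.2f} mW/cm^2")  # 9.45

With well-matched subcell currents the tandem's power exceeds either component's, which is the behavior the paper reports once the p-DSC was made efficient enough.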




"Inverse dye-sensitised solar cells are the key to producing dye-sensitised tandem solar cells, but the challenge has been to find a way to make them perform more effectively. By creating a way of making inverse dye-sensitised solar cells operate very efficiently we have opened the way for dye-sensitised tandem solar cells to become a commercial reality."

Although dye-sensitised solar cells have been the focus of research for a number of years because they can be fabricated with relative simplicity and cost-efficiency, their effectiveness has not been on par with high-performance silicon solar cells.

Dr Bach said the breakthrough, which is detailed in a paper published in Nature Materials, was an important milestone in the ongoing development of viable and efficient solar cell technology.

"While this new tandem technology is still in its early infancy, it represents an important first step towards the development of the next generation of solar cells that can be produced at low cost and with energy efficient production methods," he said.


9 pages of supplemental information