Pages

December 12, 2009

Carnival of Space 133 with North Pole Mysteries, astronomy and Future space colonization


1. Above is a piece of the 370 megapixel image of 500,000 galaxies.

Phil Plait, the Bad Astronomer, discusses the huge image just released by the Canada-France-Hawaii Telescope Legacy Survey Deep Field #1, a ginormous mosaic of the night sky.

It covers a solid square degree of sky — 5 times the area of the full Moon — and tips the scale at a whopping 370 megapixels! It took 5 years and several hundred hours of observing time with the 3.6 meter telescope on top of Mauna Kea to get this massive mosaic. The image itself may look cool and all, but the true power comes when you give in to the dark side and use the interactive zoom feature.

2. Cat Dynamics covers the Giant Green Spiral over the north pole and also covers the Case for Pluto as a planet.

3. Universe Today also talks about "What was the Norway Spiral?"

4. "A Solid Look at Sail Technologies" for Centauri Dreams:

This reviews an article that ran in a magazine produced by CUNY -- the article covers Greg Matloff's work on solar sails and also discusses Roman Kezerashvili's interesting studies of how a sail moving close to the Sun would be affected by General Relativity.




5. Steve's astrocorner discusses NASA's WISE infrared space telescope.

If all goes like the simulations performed at Caltech, then dozens of brown dwarfs should be found within 25 light years of Earth. Other items to scan for will be dark asteroids, which present a hazard to Earth in the form of a run-in. These asteroids do not reflect light well and, because of this, cannot be picked up by an ordinary telescope. There are an estimated 100,000 of these asteroids orbiting undetected. WISE will be able to see these astro-boulders because they absorb the Sun's heat and re-emit it in the infrared.


6. Firstly, from Alan Boyle's Cosmic Log at msnbc:

Spaceship debut causes chills

Hundreds gather at a Mojave Desert airport for the unveiling of SpaceShipTwo, which is likely to be the world's first commercial spaceship. Those hundreds flee soon after the unveiling, due to a windstorm that sweeps over the airport.


7. Secondly, from Alan Boyle's Cosmic Log:
From the desert to space

SpaceShipTwo isn't the only game in town: Mojave is also home to other ventures that are targeting the final frontier, and making money as they do it.


8. Lastly, from Alan Boyle's Cosmic Log:
Light show sparks UFO buzz

A spectacular light show visible from northern Norway energizes the UFO crowd, but experts determine that the display was actually caused by a failed Russian ballistic missile.


9. A Really Cool Exoplanet from the astroblogger

It discusses the direct visual discovery, by the Subaru telescope, of a gas giant in an almost solar-system-like orbit around a near duplicate of our Sun.


10. Planetaria discusses the new evidence for past life on Mars

11. This week at One-Minute Astronomer... an exclusive interview with world-renowned astrophotographer Jerry Lodriguss. In this interview, you'll learn the basics of taking a simple but quite lovely photo of the night sky with a digital camera. No telescope required.

12. NASA Chandra X-ray Observatory blog has "Galaxy Collision Switches on Black Hole"


13. Collectspace has design submissions from astronauts and space workers for NASA's end-of-shuttle patch contest.

14. From "A Babe in the Universe" Direct from Johnson Space Center Building 31, where it all happened, we hear the latest on: "Life From Mars"

In 1996 researchers discovered signs of life in Martian meteorite ALH84001. The latest paper demolishes competing explanations, making the case for life ever stronger.


15. Weirdwarp looks at various planets and moons and their potential for being terraformed.

16. Simostronomy gives some advice on operating an amateur telescope in cold weather

17. Alice's Astroinfo has a gift guide

Looking for something for that little (or big) astronomer in your life? Well, if you need a little help, or feel like the possibilities are too infinite, I’ve gathered together some of my suggestions here.


18. Cheap Astronomy delivers a podcast on the local stellar neighbourhood of our little corner of the Milky Way.

19. TheSpacewriter visits an online multiwavelength explorer.

I mention this a lot in my public talks in various venues: that the sky we see with our eyes isn’t the sum total of the universe that can be detected. I’ll say it again, another way. When you look at the night sky with your eyes, you’re only seeing the universe through a very small window of emissions.


20. Cumbrian Sky looks at Eddington Crater

I’m the Secretary of Kendal’s astronomical society, the “Eddington Astronomical Society,” which is named after the famous astrophysicist Sir Arthur Eddington, who was born here in Kendal in 1882.


21. Cumbrian Sky discusses Virgin Galactic's SpaceShipTwo

22. Here at nextbigfuture:
Suborbital tourism is a stepping stone to two-hour flights around the world, which would then lead to large-scale orbital and space travel.

23. Still here at nextbigfuture:

Historical colonization, and details on the Age of Sail, help us understand how to make appropriately scaled plans to colonize space.

* During the Age of Sail's cross-ocean colonization, navies faced off with 150 ships on a side and sent fleets of 1-11 ships, carrying hundreds of people per ship, to explore, colonize and trade with the newly discovered Americas.

* By the late 16th century American silver accounted for one-fifth of Spain's total budget.

* In the 16th century "perhaps 240,000 Europeans" entered American ports.

* If (when) there is human settlement of space, and if there were parallels to the scale of the settlement of the Americas, then there would be thousands of spaceships capable of carrying hundreds of people at a time for interplanetary and later interstellar travel. The interplanetary capability (out to the Oort comet cloud) would be something like the ships traveling and trading around the Mediterranean.

24. And finally, at nextbigfuture this week

Current aircraft can be made comfortable while keeping passenger density up. This relates to future spaceships' passenger seating configurations.


Iraq Oil Capacity could Reach 12 million barrels per day in 6 years


BBC reports: Iraq's oil capacity could reach 12 million barrels per day (bpd) in six years, the country's oil minister says.

Hussein al-Shahristani told reporters in Baghdad that oil producers would not necessarily operate at full capacity, but would take into account demand.

Saudi Arabia, the world's largest oil exporter, has a capacity of 12.5m bpd.

Earlier, a joint bid by Russian and Norwegian oil firms won the contract for the "supergiant" West Qurna field, said to have reserves of 13bn barrels.

Lukoil and Statoil will get $1.15 a barrel and will work to raise output from West Qurna Phase 2, in the Basra region, to 1.8m bpd. In June, a winning bid to develop another Iraqi field received $2 a barrel.

On Friday, the contract to develop the 12.6bn-barrel Majnoon field in southern Iraq was won by a consortium led by Shell. It also pledged to increase daily production to 1.8m barrels, up from only 46,000.

Rights for the eastern Halfaya field, with 4.1bn barrels of reserves, went to a consortium led by the Chinese state oil company, CNPC.

But the East Baghdad field, part of which lies under the city's Sadr City area, and another in the Diyala province attracted no bids.




If a daily total of 12m barrels were achieved, Iraq would overtake Russia and challenge Saudi Arabia for the position of the world's largest oil producer. However, Riyadh says it could produce 15m bpd.

Iraq's proven reserves now stand at 115bn barrels, below Iran's 137bn and Saudi Arabia's 264bn. But Iraq's data dates from the 1970s, before improvements in technology transformed the industry.

Mr Shahristani declared that Iraq had "scores" of oilfields, including "supergiants" - fields of 5bn barrels or more - to offer international companies in the future.



Magneto Inertial Fusion

Status of the U. S. program in magneto-inertial fusion (7 page pdf from the Dept of Energy)

A status report on the current U.S. program in magneto-inertial fusion (MIF) conducted by the Office of Fusion Energy Sciences (OFES) of the U.S. Department of Energy is given. Magneto-inertial fusion is an emerging concept for inertial fusion and a pathway to the study of dense plasmas in ultrahigh magnetic fields (magnetic fields in excess of 500 T). The presence of a magnetic field in an inertial fusion target suppresses cross-field thermal transport and potentially could enable more attractive inertial fusion energy systems. The program is part of the OFES program in high energy density laboratory plasmas (HED-LP).


From Talk Polywell

Magneto-Inertial Fusion Introduction
The essential ideas behind magneto-inertial fusion (MIF) have existed for a long time. The concept involves freezing magnetic flux in the hot spot of an inertial fusion target or embedding magnetic flux in a target plasma bounded by a conducting shell serving as a magnetic flux conserver.

In a manner similar to conventional inertial fusion, the hot spot or the conducting shell is imploded. As the shell or the hot spot implodes, the magnetic flux is compressed with it, thus the intensity of the magnetic field is increased. The intense magnetic field suppresses cross-field thermal diffusivity in the plasma during the compression, and thus facilitates the heating of the plasma to thermonuclear fusion temperatures. The extremely high magnetic field created in the hot spot or the target plasma also enhances alpha-particle energy deposition in the plasma when fusion reactions occur.

There are two main classes of MIF, the class of high-gain MIF and the class of low-to-intermediate gain MIF. Both attempt to make use of a strong magnetic field in the target to suppress electron thermal transport in the target and thus rely upon the same scientific knowledge base of the underlying plasma physics. However, their strategies for addressing the two challenges of IFE, suitable targets and drivers, are different.

In the U.S., magneto-inertial fusion is currently being pursued as a science-oriented research program in high energy density laboratory plasma (HED-LP) by the Office of Fusion Energy Sciences (OFES) of the U.S. Department of Energy (DOE). Dense plasma in ultrahigh magnetic field (> 500 T), or magnetized HEDLP, is one of the thrust areas of HEDLP. Magneto-inertial fusion (MIF) is a pathway to create and study dense plasmas in ultrahigh magnetic fields.



High-gain MIF

It was shown by Kirkpatrick et al. that ignition is possible with lower implosion velocity with magnetized targets. Magnetic fields from 1,000 T to 10,000 T are required for typical ICF scenarios, due to the high burn-time density in typical ICF targets. Research is required to develop the scientific knowledge base on the physics of dense plasmas in ultrahigh magnetic fields and the capabilities in creating and applying ultrahigh magnetic fields to facilitate ignition in conventional ICF with lower implosion velocity and driver energy.

Experiments are in progress at the University of Rochester to compress a seed magnetic field in surrogate ICF targets using the OMEGA laser facility. The apparatus for generating a seed magnetic field of the order of 10 – 15 T has been developed and tested successfully. The magnetic field is generated by a large current flowing in small external coils surrounding the target. Compression of the magnetic flux using a high-temperature conductive plasma will be attempted next.

If successful, the flux will be compressed to produce a plasma with an embedded magnetic field of several thousand tesla. Another method to create a seed magnetic field in dense plasma is laser-driven current drive. Theoretical and computational research to explore the concept is underway at Princeton University.
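The jump from a 10-15 T seed field to "several thousand tesla" follows from flux conservation. Here is a minimal sketch (mine, not from the report), assuming ideal cylindrical flux compression and borrowing the radial convergence of ~16 reported for the Shiva Star liner work below:

```python
# Cylindrical flux compression: the flux phi = B * pi * r^2 through the shell
# is conserved, so the field grows as the square of the convergence C = r0/rf.
# Illustrative sketch only; assumes no flux losses during the implosion.

def compressed_field(b_seed_tesla, convergence):
    return b_seed_tesla * convergence**2

for b0 in (10.0, 15.0):
    print(f"{b0:.0f} T seed -> {compressed_field(b0, 16):,.0f} T")
# 10 T -> 2,560 T and 15 T -> 3,840 T: "several thousand tesla".
```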


Low and intermediate gain MIF

Low-gain MIF trades fusion gain in favor of non-cryogenic gaseous targets and high-efficiency, low-cost drivers, so that the very high gains and high costs traditionally associated with ICF may not be needed. Electromagnetic pulsed power has lower power density than lasers or particle beams, but it has much higher wall-plug efficiency and much lower cost per unit energy delivered. By using both a magnetic field in the target and a lower-density target plasma, the required compression and heating power density is reduced to such an extent as to allow direct compression of the target by electromagnetic pulsed power. With considerably higher wall-plug efficiency, the target fusion gain needed for economic power generation can be much lower than for conventional laser-driven ICF. For example, if the wall-plug efficiency of the driver is higher than 30%, a fusion gain as low as 30 may be acceptable for IFE purposes.

Solid and liquid shells (called liners) have been proposed for compressing various types of magnetized target plasma for low-gain MIF, in which fusion gain in the range of 10-30 is sought. Plasma liners might eventually prove more attractive for energy applications, and are being explored for their potential to achieve intermediate fusion gain up to about 50.

At the Shiva Star pulsed power facility at AFRL-Kirtland, we have successfully demonstrated the implosion of an aluminum liner of the required geometry (30 cm long, nominally 10 cm in diameter and 1.1 mm thick) for compressing an FRC in 24 μs, achieving a velocity of 0.5 cm/μs, a kinetic energy of 1.5 MJ from stored capacitor energy of 4.4 MJ, and a radial convergence of 16 without observable Rayleigh-Taylor instability.

A dedicated experimental facility (FRX-L) for developing high-density, compact FRCs as targets for MIF, including the translation and capture of the FRC by a metallic liner, has been developed at the Los Alamos National Laboratory. The FRC is formed by a field-reversed theta pinch in a quartz tube about 0.5 m long and 10 cm in diameter. Experiments at FRX-L have produced FRCs with densities of about 3 x 10^16 cm^-3 and a temperature (Te ≈ Ti) of about 300 eV, corresponding to a pressure of about 30 bar, with a lifetime of about 10 μs. The facility has also developed a considerable database of FRC behavior for various combinations of bank voltages, trigger timing and pre-fill gas pressure.
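As a plausibility check of my own (not from the report), the quoted ~30 bar follows from the ideal-gas plasma pressure p = n·kB·(Te + Ti), assuming electrons and ions each near 300 eV:

```python
# Ideal-gas plasma pressure check for the FRX-L numbers above. Assumes equal
# electron and ion densities and Te = Ti = 300 eV (my assumption).

E_CHARGE = 1.602e-19  # joules per eV

def plasma_pressure_bar(n_per_cm3, te_ev, ti_ev):
    n_per_m3 = n_per_cm3 * 1e6
    pressure_pa = n_per_m3 * (te_ev + ti_ev) * E_CHARGE
    return pressure_pa / 1e5  # pascals -> bar

print(plasma_pressure_bar(3e16, 300, 300))  # ~28.8 bar, i.e. "about 30 bar"
```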

The experiment is now ready to combine the FRC generation technique developed at Los Alamos with the Shiva Star facility to perform an integrated liner-on-plasma implosion experiment. The integrated experiment will advance our predictive understanding of the compression heating of the FRC to multi-keV temperatures and 10^19 cm^-3 plasma densities. This experiment will be performed over the next few years.

Helion and Related

A plasma liner formed by plasma jets provides an avenue to address three major issues of low-to-intermediate gain magneto-inertial fusion: (1) standoff delivery of the imploding momentum, (2) repetitive operation, and (3) liner fabrication and cost. If the plasma liner is used to compress the magnetized plasma directly, a very high Mach number (> 15) is required of the plasma liner in order to reach fusion conditions.

At UC-Davis, the acceleration of compact toroids is being studied in the CTIX facility. CTIX is a switch-less accelerator with the repetitive rate currently limited only by the gas injector. Magnetized plasma with a density of 10^16 per cm3 has been accelerated to 150 km/sec in the 1.5m long accelerator at a repetition rate of 1 Hz. Up to a thousand plasmas per day may be formed without the need to refurbish machine parts. At Caltech, an experimental facility is available for addressing the fundamental science issues governing magnetic reconnections, MHD-driven jets and spheromak formation. The inter-shot time is 2 minutes, and a large number of shots can be taken without hardware damage.

The plan for the next 3 years is to demonstrate acceleration of plasma to form jets with velocity exceeding 200 km/s and Mach number greater than 10 and to conduct experiments to explore the physics of merging jets. Concurrently standoff methods to produce seed magnetic fields will be explored conceptually.

At MSNW Inc. and the University of Washington in Seattle, WA, an experimental facility is being established to generate a database on plasma-liner compression of a magnetized plasma. Two inductive plasma accelerators (IPA) have been constructed and tested, forming a stable, hot (400 eV - 800 eV) target FRC with density 5 x 10^14 cm^-3 for compression. A 2D cylindrical imploding plasma shell will be created by a theta pinch and will be available in the near future for experimental campaigns to compress the FRC. If successful, research will continue in the next five years to create high-density (> 10^17 per cm3) and keV magnetized plasmas.

More Details on magnetized high energy density laboratory plasmas

Magneto-inertial fusion: An emerging concept for inertial fusion and dense plasmas in ultrahigh magnetic fields

The Caltech research (described above) emphasizes experimental reproducibility, diagnostics, and achieving agreement between observations and first-principles theoretical models. If successful, the research will continue in the next 5 years to increase the Mach number to 20 and to develop a user experimental facility with an array of plasma jets to form plasma liners for a variety of research, including creating high energy density matter and compressing magnetized plasmas to reach keV temperatures and high magnetic fields.


December 11, 2009

Development Path for Helion Energy for 2010 and 2012


Helion Energy researchers are proposing two intermediate systems; the last one could be useful as a fusion/fission hybrid on the way to full fusion energy.


"Development of a High Fluence Fusion Neutron Source and Component Test Facility Based on the Magneto-kinetic Compression of FRCs" is by John Slough, who is working on developing a fusion reactor.

This was discussed on the Energy From Thorium forum. It was noted that the fusion/fission hybrid would not be as affordable or as simple as the liquid fluoride thorium reactor (LFTR). However, the fusion transmuter hybrid might be available as early as 2012 for $30-40 million in development, while a LFTR would take longer. It was noted that it would take 16.5 years for one fusion transmuter module to produce the uranium-235 needed. However, if each module only costs $20-30 million, then $300 million for ten modules would cut the production time down to 1.65 years. Nuclear reprocessing plants currently are very expensive: $20 billion for the Japanese facility (Rokkasho). Five hundred fusion transmuter modules would cost about $15 billion (and the price could go down with factory mass production).
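The time and cost figures above are simple ratios; here is a back-of-envelope sketch (mine, using the forum's numbers, with the per-module cost an assumption at the upper end of the quoted $20-30 million range):

```python
# Module-count tradeoff: production time scales as 1/modules, capital cost
# scales linearly with module count.

YEARS_ONE_MODULE = 16.5    # time for one transmuter module (from the forum)
COST_PER_MODULE = 30e6     # assumed ~$30 million each

def campaign(modules):
    return YEARS_ONE_MODULE / modules, COST_PER_MODULE * modules

for m in (1, 10, 500):
    years, cost = campaign(m)
    print(f"{m:>3} modules: {years:5.2f} years, ${cost/1e9:.2f} billion")
# 10 modules: 1.65 years for $0.30 billion; 500 modules: ~$15 billion,
# the figure compared against the $20 billion Rokkasho plant.
```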

In the latest Helion Energy presentation they discuss 50 fusion engines being able to burn or transmute the entire stockpile of nuclear waste in 20 years. The fusion engine is an upgrade of the 2012 system.

The chart says that a full commercial fusion engine would cost less than $100 million, with a ballpark estimate of about $30-70 million.

50 times $100 million ($5 billion) is far cheaper than waste repositories or reprocessing.

If the 2012 unit is not fully ready, then based on the timelines it could take 2-4 more years to get to a useful commercial system.

Say 2016.

The $2.5 million proposal looks fundable at ARPA-E or government-stimulus scale. Plus, the University of Washington gets enough in its regular physics budget to pay for it.

Still faster than development of a LFTR.
Potentially faster and cheaper than building a repository (especially with regulatory and political issues).

Just the transmutation part, even without the dedicated fission reactors, would be worthwhile. I think they could transmute fuel for some of the regular reactors to use now.

The Lawrence Livermore (LIFE hybrid) and other fusion/fission hybrid systems were all talking about one-billion-dollar-plus development budgets, with 2020-2030 as the timeframe for when the main development would be occurring at the earliest.

Motivation for a CTF (Component Test Facility) based on the FRC (Field Reversed Configuration)
Criteria for a Component Test Facility:
(1) Provide an environment close to the fusion reactor
(a) On the smallest physical scale (cost and timeliness)
(b) With the simplest configuration (cost and ease of use)
(2) It should be capable of evaluating the full tritium fuel cycle.
(3) It should allow for easy divertor access for evaluation of a range of materials
Magneto-kinetic Compression of FRC plasmoids

Fusion power density scales as β²B⁴ (a quick scaling sketch follows this list)
(1) The FRC has the highest <β> of all fusion plasmas (<β> ~ 0.8-0.9)
(2) Compression and burn occur in a simple linear geometry at the highest possible B consistent with a pulsed solenoidal coil (Bz ~ 15-20 T)
(3) D-T fusion neutron generation provides a more realistic test for materials and tritium breeding
(4) Divertor outside the blanket and remote from the burn chamber
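To make the β²B⁴ scaling concrete, here is a quick comparison sketch (mine, not from the presentation; the tokamak numbers are assumed typical values, not from the source):

```python
# Fusion power density ~ beta^2 * B^4, so a high-beta FRC at modest field can
# rival a low-beta device at higher field. Relative numbers only.

def relative_power_density(beta, b_tesla):
    return beta**2 * b_tesla**4

frc = relative_power_density(0.85, 15.0)      # <beta> ~ 0.8-0.9, Bz ~ 15-20 T
tokamak = relative_power_density(0.05, 5.0)   # assumed typical tokamak values
print(f"FRC / tokamak power density: {frc / tokamak:,.0f}x")
```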

The pulsed FRC based CTF:
(1) Reduces by orders of magnitude the scale and complexity involved in a CTF based on a spallation source, ST or tokamak
(2) Provides for a vastly lower cost, risk and a much shorter timescale for implementation
(3) Can address both material exposure issues in the blanket as well as in divertors.
(4) Can address crucial Tritium fueling concerns –
(I) Inventory, (II) Production, (III) Processing, and (IV) Recovery
All are without resolution
All represent potential show-stoppers for DT fusion.
(5) Can be further developed to contribute to energy production in the near term
(I) fissile/fusile breeder
(II) Waste transmutation/burner

UPDATE

The Energy from Thorium forum has two follow-up comments that are relevant to this article.

From Lars: The claim that 50 fusion engines could consume the entire US stockpile in 20 years is based on a Sandia report.

In this you will find that, in addition to the fusion neutron source supplying a small percentage of makeup neutrons, one must also:
separate the actinides from the spent fuel
develop a lead cooled, fluid reactor
develop a first wall material that can survive being in the center of a fast reactor
develop on-line fission product removal.

All these things are not included in the Helion fusion engine development program, and in fact will require more development work than LFTR. LFTR has the advantage of being similar to the MSRE and hence has much development work already completed. I don't believe anyone has yet built a lead-cooled, fluid-fuel reactor at any size.

One of the biggest challenges with LFTR is the lifetime of the first wall. In our case, that first wall is on the outer perimeter of the reactor chamber, where it sees around 5% of the neutrons. In the reactor proposed in the paper above, the first wall is between the fusion and fission reactors. It sees the full neutron flux from the fusion machine (and those neutrons are very fast, so they are more damaging to the wall). In addition, it is near the heart of the fission reactor, where it will see a large fission neutron flux.

This is not to say he should stop work. Solving the energy problem is a very high value proposition worthy of several parallel efforts. But you will not have a power-producing reactor for $40M using this approach. He hopes to build a fusion engine with break-even power for this money, scaled to supply 1/20th of the neutrons used in a 1GWe reactor.

You still have the expense of developing the fission reactor - that is not included in any of his costs. The fission reactor is the one that supplies 95% of the neutrons and all of the output power available to sell.

From Axil (on how the first wall problem can be avoided with intermittent operation and easy, frequent swaps of the first wall):

If aneutronic B11-H fusion is not practical from either a technical or economic standpoint anytime in the near future, then D-T fusion is best served by a fusion/fission hybrid concept, and the Helion reactor topology is well positioned for this approach.

Any big project should be developed in well thought out, mutually supportive and orchestrated phases. The thorium fusion hybrid should conform to this type of development strategy. The first phase should be the development of a U233 fuel factory. The first market would be existing Light Water Reactors and the new AHTR pebble reactors.

The price for the U233 would be well below the current U235 equivalent price. I think that such a fusion/fission fuel factory is very price competitive and is capable of producing U233 well below the current $70 per pound yellowcake equivalent. The price of yellowcake has varied from $15 to $137 per pound recently, and currently it is about $70 per pound.

A 32 page pdf discusses making lower-priced uranium fuel.

The best type of fusion/fission hybrid has a very small zone of fusion, preferably a point source. The Helion reactor has this very important feature, and because of the small size of the fusion zone it facilitates an all-enclosing blanket with almost perfect closure. Because of the ideal efficiency of its almost perfect liquid blanket envelope, I can see this type of subcritical reactor producing about 5500 kg of pure U232/U233 per year. Very few neutrons would be wasted. Beryllium in the blanket would almost double the fusion neutron flux. To maximize U233 production, no lithium should be included in the blanket. Tritium would come from the waste flow of its dependent parasitic fission reactors: its customers.


The reactor does not need heat exchangers or turboelectric generators; it can dump the heat produced by fusion (typically 10 megawatts) to the air, so a thermal power circuit would not need to be developed or deployed. Because it is subcritical, it would not need a containment structure either.

If the protactinium is removed from the liquid fluoride beryllium/thorium blanket through on-line blanket salt reprocessing immediately after its creation, no fission heat would be produced by U233 fission.

Since this hybrid does not need to produce electric power or connect to the grid, it can operate intermittently to allow frequent change-out of its first wall. Such a diamond pipe can be replaced in a matter of hours. A coating of lithium hydride on the inside of this first-wall diamond pipe might greatly reduce alpha particle damage.


I believe that this is the development strategy currently envisioned for the Helion fusion engine development program.

If the U233 can be produced with a 1% or greater U232 content, then no U238 denaturing would be required under IAEA rules. This highly enriched and proliferation-proof U232/U233 nuclear fuel would make light water reactors and AHTRs very clean and eliminate the waste problem associated with the uranium fuel cycle. This alone would be a big selling point for the thorium/fusion hybrid and would get the thorium fuel cycle off at a run.

Next Generation of Retinal Implants Has 17 Times Higher Resolution


A flexible retinal implant, combined with sophisticated electronics, may provide a higher degree of vision to blind patients with retinal degeneration. The resolution is higher, and improved software highlights the edges of objects, making them more recognizable.

The Stanford implant has approximately 1,000 electrodes, compared to 60 electrodes commonly found in fully implantable systems.
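The headline's "17 times" is just the ratio of those electrode counts (a trivial check of my own):

```python
print(round(1000 / 60))  # 17: ~1,000 Stanford electrodes vs ~60 today
```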

What's more, patients would not have to move their heads to see, as they do with older implants. Although we don't notice it, images fade when we do not move our eyes, and we make several tiny eye movements each second to prevent fading. With older retinal implants, the camera moves when the head moves, but not when the eyes move.

The Stanford implant, on the other hand, retains the natural link between eye movements and vision, Palanker said. A patient would wear a video camera that transmits images to a processor, which displays the images on an LCD screen on the inside of patient's goggles. The LCD display transmits infrared light pulses that project the image to photovoltaic cells implanted underneath the retina. The photovoltaic cells convert light signals into electrical impulses that in turn stimulate retinal neurons above them.


A close-up view of the flexible retinal implant made of silicon. It has tiny bridges that allow it to fold over the shape of the eye and provide a high-resolution image.

This is also the first flexible implant, and it makes use of a material commonly used in computer chips and solar cells. Peumans and his team at the Stanford Nanofabrication Facility engineered a silicon implant with tiny bridges that allow it to fold over the shape of the eye. "The advantage of having it flexible is that relatively large implants can be placed under the retina without being deformed, and the whole image would stay in focus," Palanker said. A set of flexible implants can cover an even larger portion of the retina, allowing patients to see the entire visual field presented on the display.

The Stanford device is implanted under the retina, at the earliest possible stage in the visual pathway. "In many degenerative diseases where the photoreceptors are lost, you lose the first and second cells in the pathway," Baccus said. "Ideally you want to talk to the next cell that's still there." The goal is to preserve the complex circuitry of the retina so that images appear more natural.

Google Using D-Wave Systems Quantum Computer as a Binary Classifier of Images

Google is researching quantum computer algorithms, using D-Wave Systems quantum computers to train a binary classifier of images.

At the Neural Information Processing Systems conference (NIPS 2009), we show the progress we have made. We demonstrate a detector that has learned to spot cars by looking at example pictures. It was trained with adiabatic quantum optimization using a D-Wave C4 Chimera chip. There are still many open questions but in our experiments we observed that this detector performs better than those we had trained using classical solvers running on the computers we have in our data centers today. Besides progress in engineering synthetic intelligence we hope that improved mastery of quantum computing will also increase our appreciation for the structure of reality as described by the laws of quantum physics.


A new type of machine, a so-called quantum computer, can help here. Quantum computers take advantage of the laws of quantum physics to provide new computational capabilities. While quantum mechanics has been foundational to the theories of physics for about a hundred years, the picture of reality it paints remains enigmatic. This is largely because at the scale of our everyday experience quantum effects are vanishingly small and usually cannot be observed directly. Consequently, quantum computers astonish us with their abilities. Let’s take unstructured search as an example. Assume I hide a ball in a cabinet with a million drawers. How many drawers do you have to open to find the ball? Sometimes you may get lucky and find the ball in the first few drawers, but at other times you have to inspect almost all of them. So on average it will take you 500,000 peeks to find the ball. Now a quantum computer can perform such a search looking only into 1000 drawers. This mind-boggling feat is known as Grover’s algorithm.
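To put numbers on the drawer example (a toy sketch of mine, not Google's code): classical search needs about N/2 looks on average, while Grover's algorithm needs on the order of √N quantum queries.

```python
import math

N = 1_000_000                                    # drawers
classical_avg = N / 2                            # expected classical peeks
grover = math.ceil(math.pi / 4 * math.sqrt(N))   # optimal Grover iterations

print(classical_avg)  # 500000.0
print(grover)         # 786 -- "about 1000" in round numbers
```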

Over the past three years a team at Google has studied how problems such as recognizing an object in an image or learning to make an optimal decision based on example data can be made amenable to solution by quantum algorithms. The algorithms we employ are the quantum adiabatic algorithms discovered by Edward Farhi and collaborators at MIT. These algorithms promise to find higher quality solutions for optimization problems than obtainable with classical solvers.

Training a Large Scale Classifier with the Quantum Adiabatic Algorithm (14 page pdf)

In a previous publication we proposed discrete global optimization as a method to train a strong binary classifier constructed as a thresholded sum over weak classifiers. Our motivation was to cast the training of a classifier into a format amenable to solution by the quantum adiabatic algorithm. Applying adiabatic quantum computing (AQC) promises to yield solutions that are superior to those which can be achieved with classical heuristic solvers. Interestingly we found that by using heuristic solvers to obtain approximate solutions we could already gain an advantage over the standard method AdaBoost. In this communication we generalize the baseline method to large scale classifier training. By large scale we mean that either the cardinality of the dictionary of candidate weak classifiers or the number of weak learners used in the strong classifier exceed the number of variables that can be handled effectively in a single global optimization. For such situations we propose an iterative and piecewise approach in which a subset of weak classifiers is selected in each iteration via global optimization. The strong classifier is then constructed by concatenating the subsets of weak classifiers. We show in numerical studies that the generalized method again successfully competes with AdaBoost. We also provide theoretical arguments as to why the proposed optimization method, which does not only minimize the empirical loss but also adds L0-norm regularization, is superior to versions of boosting that only minimize the empirical loss. By conducting a Quantum Monte Carlo simulation we gather evidence that the quantum adiabatic algorithm is able to handle a generic training problem efficiently.
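As a rough illustration of the formulation the abstract describes (my own sketch, not the paper's code): the strong classifier is a thresholded sum of weak classifiers with binary weights, and training becomes a QUBO, minimizing squared empirical loss plus an L0 penalty over the weight bitstring. A brute-force minimizer stands in for the quantum annealer here.

```python
import numpy as np

def qubo_matrix(H, y, lam):
    """H[s, i] = output (+/-1) of weak classifier i on sample s; y in {-1,+1}.
    Encodes ||H w - y||^2 + lam * ||w||_0 as w^T Q w over w in {0,1}^n
    (w_i^2 = w_i lets the linear terms sit on the diagonal)."""
    Q = (H.T @ H).astype(float)               # pairwise classifier couplings
    np.fill_diagonal(Q, np.diag(Q) - 2 * (H.T @ y) + lam)
    return Q

def brute_force_solve(Q):
    """Stand-in for the annealer: exhaustive search over all bitstrings."""
    n = Q.shape[0]
    best_w, best_e = None, np.inf
    for bits in range(2**n):
        w = np.array([(bits >> i) & 1 for i in range(n)])
        e = w @ Q @ w
        if e < best_e:
            best_w, best_e = w, e
    return best_w

# Tiny example: select among 4 candidate weak classifiers on 6 samples.
rng = np.random.default_rng(0)
H = rng.choice([-1, 1], size=(6, 4))
y = rng.choice([-1, 1], size=6)
print(brute_force_solve(qubo_matrix(H, y, lam=1.0)))
```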


NIPS 2009 Demonstration: Binary Classification using Hardware Implementation of Quantum Annealing (19 page pdf)

Previous work [NDRM08, NDRM09] has sought the development of binary classifiers that exploit the ability to better solve certain discrete optimization problems with quantum annealing. The resultant training algorithm was shown to offer benefits over competing binary classifiers even when the discrete optimization problems were solved with software heuristics. In this progress update we provide first results on training using a physical implementation of quantum annealing for black-box optimization of Ising objectives. We successfully build a classifier for the detection of cars in digital images using quantum annealing in hardware. We describe the learning algorithm and motivate the particular regularization we employ. We provide results on the efficacy of hardware-realized quantum annealing, compare the final classifier to software-trained variants, and compare against a highly tuned version of AdaBoost.


We test QBoost to develop a detector for cars in digital images. The training and test sets consist of 20 000 images with roughly half the images in each set containing cars and the other half containing city streets and buildings without cars. The images containing cars are human-labeled ground truth data with tight bounding boxes drawn around each car and grouped by positions (e.g. front, back, side, etc.) For demonstration purposes, we trained a single detector channel only using the side-view ground truth images. The actual data seen by the training system is obtained by randomly sampling subregions of the input training and test data to obtain 100 000 patches for each data set. Before presenting results on the performance of the strong classifier we characterize the optimization performance of the hardware.

Previous experiments on other data sets using software heuristics for QUBO solving have shown that performance beyond AdaBoost can typically be obtained. Presumably the improvements are due to the explicit regularization that QBoost employs. We would hope that QBoost can be made to outperform even the highly tuned boosting benchmark we have employed here.

We mention that the experiments presented here were not designed to test the quantumness of the hardware. Results of such tests will be reported elsewhere.

Improved Cooling for 3D Microchips, With Significant 3D Chip Deployment Expected 2015-2020


CMOSAIC (a European project) could boost the computing performance of central processors by a factor of 10 while consuming less energy.

3D microprocessors cooled from the inside through channels as thin as a human hair filled with a liquid coolant. Such is the solution currently being developed by researchers from the EPFL (Ecole polytechnique fédérale de Lausanne, Switzerland) and its sister organisation ETH Zurich to boost the performance of future computers. The CMOSAIC project, under the leadership of John R. Thome in Lausanne, aims to develop processors 10 times more powerful with as many transistors per cubic centimetre as there are neurons in the same volume of a human brain – a functional density greater than ever before. IBM has just signed a partnership to join the adventure.

It will take a few years until 3D microchips equip consumer electronics. The initial 3D microprocessors should be fitted on supercomputers by 2015, while the version with an integrated cooling system should go to market around 2020.

3D processors build on the idea of multicores. However, the cores are stacked vertically rather than placed side-by-side as in current processors. The advantage is that the entire surface of the core can be connected to the next layer, through 100 to 10,000 connections per mm². Shorter and more numerous, these minute interconnects should ensure that data transfer is 10 times faster, while reducing energy consumption and heat.



Although 3D microprocessors will use up less energy and generate less heat, they will still warm up. This is why John R. Thome’s team is in charge of developing a revolutionary cooling system. Channels with a 50-micron diameter are inserted between each core layer. These microchannels contain a cooling liquid, which exits the circuit in the form of vapour, is brought back to the liquid state by a condenser, and finally is pumped back into the processor. Next year, a prototype of this cooling system will be implemented and tested under actual operating conditions, but without a processor.


December 10, 2009

Argonne Labs Working to Control Casimir Force


MEMS used to detect the presence of the Casimir Force

Scientists at the U.S. Department of Energy's Argonne National Laboratory are developing a way to control the Casimir force, a quantum mechanical force which attracts objects when they are only a hundred nanometers apart.

Recently, Ames Lab calculated that metamaterials could be used to make a repulsive Casimir effect.



“As characteristic device dimensions shrink to the nanoscale, the effects of the attractive Casimir force become more pronounced, making it very difficult to control nano-devices. This is a technological challenge that needs to be addressed before the full potential of NEMS devices can be demonstrated,” scientist Daniel Lopez said. “The goal is to not only limit its attractive properties, but also to make it repulsive. A repulsive force acting at the nano-scale would allow engineers to design novel NEMS devices capable of frictionless motion through nanolevitation.”

The approach to controlling this force involves nanostructuring the interacting surfaces to tune the effects of the Casimir force.

Argonne National Laboratory was recently selected by the Defense Advanced Research Projects Agency (DARPA) to develop mechanisms to control and manipulate the Casimir force. This program will be developed in close partnership with Indiana University - Purdue University Indianapolis, National Institute of Standards & Technology (NIST) and Los Alamos National Laboratory.




Brain Structure and Reading Ability and IQ

Intensive reading programs can produce measurable changes in the structure of a child's brain, according to a study in the journal Neuron. The study found that several different programs improved the integrity of fibers that carry information from one part of the brain to another.

They used a special type of MRI to look at the brains of several dozen children from 8 to 12 years old, including poor readers and those with typical reading skills. The MRI scans allowed the scientists to study the network of fibers that carries information around the brain, a network that makes up the brain's so-called white matter.

Children with poor reading skills had white matter with "lower structural quality" than typical children, Just says.

So during the next school year, Just and Keller enrolled some of the poor readers in programs that provided a total of 100 hours of intensive remedial instruction. The programs had the kids practice reading words and sentences over and over again.

When they were done, a second set of MRI scans showed that the training changed "not just their reading ability, but the tissues in their brain," Just says. The integrity of their white matter improved, while it was unchanged for children in standard classes.

Equally striking, Just says: "The amount of improvement in the white matter in an individual was correlated with that individual's improvement in his reading ability."


Having a good understanding of how brain structure affects learning and IQ could lead to improved training and possible pathways to transhuman cognitive enhancement, or at least cognitive optimization.

Prior Brain Structure and IQ Studies


Research suggests that the layer of insulation coating neural wiring in the brain plays a critical role in determining intelligence. In addition, the quality of this insulation appears to be largely genetically determined, providing further support for the idea that IQ is partly inherited.


The neural wires that transmit electrical messages from cell to cell in the brain are coated with a fatty layer called myelin. Much like the insulation on an electrical wire, myelin stops current from leaking out of the wire and boosts the speed with which messages travel through the brain--the higher quality the myelin, the faster the messages travel. These myelin-coated tracts make up the brain's white matter, while the bodies of neural cells are called grey matter.




Neuroanatomy and IQ at Wikipedia

In 2004, Richard Haier, professor of psychology in the Department of Pediatrics and colleagues at University of California, Irvine and the University of New Mexico used MRI to obtain structural images of the brain in 47 normal adults who also took standard IQ tests. The study demonstrated that general human intelligence appears to be correlated with the volume and location of gray matter tissue in the brain. Although the regional distribution of gray matter in humans may have a genetic basis, structural changes can also occur in response to environmental stimulation. The study also demonstrated that, of the brain's gray matter, only about 6 percent appeared to be related to IQ.

A study involving 307 children (ages six to nineteen), measuring the size of brain structures using magnetic resonance imaging (MRI) along with verbal and non-verbal abilities, has been conducted (Shaw et al. 2006). The study indicated that there is a relationship between IQ and the structure of the cortex, the characteristic change being that the group with superior IQ scores starts with a thinner cortex at an early age and then becomes thicker than average by the late teens.



Swine Flu Has Killed Nearly 10,000 Americans

Swine flu has killed nearly 10,000 Americans, including 1,100 children and 7,500 younger adults, and has infected one in six people in the United States.

* More than 200,000 Americans have been hospitalized -- about the same number as are affected by seasonal flu in an entire year.

* Some 85 million doses of the vaccine have been made available for distribution so far, with 12 million more doses added this week.

The World Health Organization is defending itself against the charge that it exaggerated the risks of H1N1.

The US numbers are not in sync with the world numbers: at least 8,768 people worldwide have been killed by A/H1N1 influenza, an increase of 942 in the past week (reported Dec 4, 2009).

The Centers for Disease Control and Prevention provides an H1N1 (swine flu) update.

Flu season statistics



Super Soldier Updates

DARPA has a program that is spending about $3 billion to create super soldiers. Here is an update on technology that is ready, or is becoming deployable or usable, for the purpose of creating super soldiers. Much of it is not from DARPA.

Exoskeletons
1. The Human Universal Load Carrier (HULC™) exoskeleton runs on Li-ion batteries, driving lightweight hydraulic legs with a titanium structure. A wearer can hang a 200 lb backpack from the back frame and heavy chest armour and kit from shoulder extensions.

According to Lockheed reps, the HULC isn't ready for prime time yet, being still "in ruggedisation". However, the company envisages giving it to actual soldiers, so as to get their input, from the summer of 2010.

2. The Raytheon (formerly Sarcos) XOS lightweight aluminum exoskeleton

Users wear the exoskeleton, dubbed XOS, like a lightweight aluminum suit. Equipped with sensors, actuators, and controllers, the machine’s advanced software senses and instantly follows movement in smooth, continuous coordination. At full power, one may not only carry or lift 200 pounds more than 100 times without stopping, but also bend to kick, punch, or climb stairs and ramps. The Exoskeletons for Human Performance Augmentation program began in 2000, and development centers at the Raytheon Sarcos research facility in Salt Lake City, Utah, with funding from the U.S. Army. Early prototypes are expected in 2010, with fully deployed versions by 2017. Smaller and more powerful mobile power supplies are key, as demonstrations up to now have used a power cord.


3. Exoskeletons and powerloaders could be coming from Japan in 2015. Japan has exoskeletons available for senior citizens now.

Strength Enhancement
4. Myostatin inhibition has been successfully demonstrated in monkeys.

The muscles were 15% bigger and 78% stronger, and the effect lasted for the 15-month study with no negative health effects. The treatment produced no obvious negative side-effects, and human clinical trials are expected to start next year. In other trials, myostatin inhibition has shown four times the effect of high doses of steroids.

5. Real SARM Steroids Are Available for Online Purchase

MIT Technology Review reports that a group from the German Sport University Cologne detected a real SARM (selective androgen receptor modulator) in a product called Andarine, available online for $100 and labeled as green tea extract and face moisturizer.

Selective androgen receptor modulators have steroid effects but are believed to be safer, without many of the harmful side effects of steroids.

6. FRS Energy has had trials which show improved performance in endurance events, and it is commercially available (Lance Armstrong promotes it).

7. Wearable enhancement is available for running faster and jumping farther

Powerbocking (on jumping stilts, or springwalkers) is the act of jumping and running with elastic-like spring-loaded stilts. For some it is an extreme sport, for others it is a form of exercise or even a means of artistic expression. The use of the stilts to perform extreme jumping, running and acrobatics is known as 'bocking' or 'powerbocking' after the inventor.

Each boot consists of a foot-plate with snowboard-type bindings, a rubber foot pad (commonly called a hoof), and a fibreglass leaf spring. Using only their weight and a few movements, the user is generally able to jump 3-5 ft (1-1.5 meters) off the ground and run up to 20 mph (32 km/h). The stilts also give the ability to take strides of up to 9 feet (2.7 meters).

Guns and Weapons
8. The AA-12 recoilless auto assault shotgun can rapidly fire grenades.

The Auto Assault-12 (AA-12) shotgun was originally designed and known as the Atchisson Assault Shotgun. The AA-12 can fire in semi-automatic or fully automatic mode at 300 rounds per minute (5 every second) and has a magazine of 32 rounds. It can fire 120 grenade rounds per minute with a 9-foot blast radius. Having one AA-12 in each hand doubles the rate of fire.

New electromagnetic pulse (EMP) grenades could be adapted to the AA-12 as well; these would emit hundreds of megawatts of EMP for microseconds. A small e-bomb will be qualitatively different from larger versions. Radiated power falls off with the square of distance, so a target 3 meters (10 ft) away receives 100 times the effect of one 30 meters away. An EMP grenade would probably only be effective within a 10-30 foot radius.
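The inverse-square claim is easy to check (an illustrative one-liner of mine):

```python
# Received intensity ~ 1/r^2, so moving from 30 m to 3 m away multiplies the
# effect by (30/3)^2.
print((30 / 3) ** 2)  # 100.0
```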

9. Instant wound healing progress

10. DARPA is developing injections to put injured soldiers into hibernation so that they can survive until they can be treated.

The institute’s research will be based on previous DARPA-funded efforts. One project, at Stanford University, hypothesized that humans could one day mimic the hibernation abilities of squirrels — who emerge from the winter months no worse for wear — using a pancreatic enzyme we have in common with the critters. The other, led by Dr. Mark Roth at the Fred Hutchinson Cancer Research Center, used nematode worms and rats to test how hydrogen sulfide could block the body’s ability to use oxygen — creating a kind of “suspended animation” where hearts stop beating and wounds don’t bleed. After removing 60 percent of the rats’ blood, Dr. Roth managed to keep the critters alive for 10 hours using his hydrogen sulfide cocktail.

Redesign Electronics for Printed Electronics

Printed Electronics World: The first cars looked like horse-drawn carriages - suboptimal and using the design rules of the past. So it is with most printed electronics today.

The irony of the integrated circuit - the silicon chip - is that it integrates so little. It cannot incorporate a loudspeaker, microphone, push button or a reasonable battery or solar cell for example, because these are too big and silicon chips have to be small for viability. Large silicon chips are prohibitively expensive.

Printed electronics is very different. It can integrate all these things.

Printed inductance is very feeble - no ferrite cores or multiple turns on top of each other yet - but many companies already print even high-power resistors on the desired flexible, low-cost substrates, some acting as heaters for e.g. thermochromic displays.


Printed Supercapacitors and Solar Cells
ACREO prints supercapacitors as gate dielectrics in its transistors; indeed, flexible supercabatteries less than one millimeter thick were launched this year by Nanotecture. Dyesol dye-sensitized solar cells have achieved record efficiencies of 12.3%. They have led to Dyesol's CEGS technology, Combining Electricity Generation and Storage. This is a promising way of integrating a dye solar cell with a supercapacitor, and could be a potential spin-out from Dyesol. Plastic Electronics GmbH is creating a variety of printed devices relying on capacitive effects, from smart shelves to thumb controls.


Printed Metamaterials and Memcapacitors

Printed metamaterial components are coming along. What will we be able to do with the planned memcapacitors derived from memristors? Memory that takes no power is a possibility.




Integration and Large toolkits

Bluespark has a printed manganese dioxide zinc battery supporting an integral antenna and interconnects.

Infinite Power Solutions sells its laminar batteries with energy harvesting interfaces that will increasingly be made in one process, providing near loss-less energy storage, highly efficient power management electronics, and regulated output voltage—all in a miniaturized footprint.

Three Dimensional
New printed electronics increasingly consists of components printed on top of and alongside each other, the discrete component becoming a thing of the past. This can lead to capacitive coupling.

Large Area not a Problem
Stretchability, edible electronics, transparent and tightly rollable electronics and other totally new paradigms completely change the design rules.

An unrolled printed photovoltaic or piezoelectric power source can be huge without being a problem as can the unrollable displays, keyboards etc printed at the same time. Printed electronics on a poster, billboard or even point of sale display has large area available so such things as transistor feature size or photovoltaic efficiency are not necessarily a primary issue if the materials are affordable.

Some other Fermi Paradox Speculations

The Fermi paradox is that the assumption that the universe contains many technologically advanced civilizations, combined with our lack of observational evidence to support that view, seems to be inconsistent. Either this assumption is incorrect (and technologically advanced intelligent life is much rarer than we believe), our current observations are incomplete (and we simply have not detected them yet), or our search methodologies are flawed (we are not searching for the correct indicators).

This site has discussed the Fermi Paradox before.

A variation on transcendence: aliens advance to some other level.

Something really sucks about our universe or our interstellar area, and this is somehow apparent to all advanced aliens.

It is likely that traveling around interstellar and intergalactic space is very expensive energy-wise.

If it turns out that physics allows certain things to be far more inexpensive energy-wise, or to provide far better returns for the effort, then it may be obvious to any aliens that it is a waste of effort to travel around this universe or galaxy.

Customizing Universes or Selecting Better Places with Wormhole Travel
Customized Pocket Universes (tough to meet up with other people in the TARDIS) -
If along the way to working around conventional physics to make FTL you have develop ways to manipulate spacetime then it may be by default you have to develop the ability to make customized universes. The cost benefit of traveling around this universe may be low or negative. I can spend the same amount of initial energy to open up for FTL and make a whole other universe and extract energy/build from the exoverse/multiverse. Super advanced aliens then do not live in wild universes but move to customized universes or universes are thus far more sparsely populated It is not even necessary to create custom universes, if multiverse exists and multiverse travel is possible then there could be better universes to find and move to.



It could also mean that most universes are likely customized, but ours could be an earlier one. We could be living in the DOS 2.0 version of universes while everyone else moves on to better versions. We are in the multiverse equivalent of Armpit, USA, and people move to the better universes when they get the chance.

Dark Matter Rockets

Black hole starships - this discusses ways to make small black holes and the energy required.

Mach effect could be used for fast travel (not FTL in that mode) and could enable wormhole creation and travel.

Wormhole creation and travel could allow multi-dimensional movement.

Before aliens leave, they make some super telescopes, look around in detail, confirm that this universe is inferior, and swap out.

Sitting beside a timebomb or Love Canal
Another possibility: aliens can detect the conditions of stars, etc. They see that some stars in our area are going to supernova in the next few millennia. The stars will blow up and make things inhospitable. The aliens decide it is a bad idea to hang around or to invest in building and doing stuff in our area.

Again they all go elsewhere.

Some other Options
There is also a variation on "The Gods Must Be Crazy" situation. The movie has a tribe that finds a Coke bottle and uses it as a tool. If we are clueless, then we could be using alien artifacts and evidence without recognizing them as such.

There is the situation where, once you get advanced technology, you do not hang around stars because you have energy sources better than fusion.
Gathering around natural "campfires" just makes you a target for galactic predators.
So advanced aliens are hanging around the Oort comet clouds or in intergalactic "voids".

Advanced aliens may have to go fully ninja: meeting up with other aliens is not worth the risk. There are many science fiction story examples of this.

Videos of the Bear Protection Suit Body Armor


Back in January 2007, nextbigfuture covered the bear protection body armor suit made by Troy Hurtubise.

Troy Hurtubise has made a suit that stops bullets (from 12 gauge to elephant gun) and shrapnel, and is lightweight.

He spent two years and $150,000 in the lab out back of his house in North Bay, Ontario, Canada designing and building a practical, lightweight and affordable shell to stave off bullets, explosives, knives and clubs. He calls it the Trojan and describes it as the "first ballistic, full exoskeleton body suit of armour." The whole suit comes in at 18 kilograms. It covers everything but the fingertips and the major joints, and could be mass-produced for about $2,000, Hurtubise says.

Trojan ballistic suit of armour at Wikipedia

In early 2007, Hurtubise made public his new protective suit, which was designed to be worn by soldiers. Calling it the "Trojan", Hurtubise describes it as the "first ballistic, full exoskeleton body suit of armour." Weighing in at 40 lbs (about 18 kilograms), the suit, he claims, can withstand bullets from high-powered weapons (including an elephant gun). Hurtubise says he has been unable to test the suit against live ammunition because no one is willing to shoot him in it. It also features a knife holster and an air-conditioned helmet.

The suit has many features including a solar powered air system, recording device, compartments for emergency morphine and salt, and a knife and gun holster. He estimates that the cost of each suit to be roughly $2,000 if mass produced. It has been called the Halo suit, after the fictional MJOLNIR battle armor the Master Chief character wears in the Xbox game.

In early February 2007, after failing to receive any offers to buy the Trojan, Hurtubise - now bankrupt from the expense of creating the suit - was forced to put the prototype up for auction on eBay in the hope that it would bring in enough money to sustain his family. Unfortunately for Hurtubise, the auction's reserve bid was not met. A raffle for the suit was then held on the Mission Trojan website, whose goal is to raise money for further prototypes and testing of the Trojan suit to demonstrate its abilities for military applications. The suit was won by Sara Markis of West Palm Beach, Florida.

The money raised from the raffle of the Trojan T model armour was used to finance the Trojan S type armour. The new model is superior to the T model in many ways, as detailed on his website and YouTube channel.

The new type S armour purports to be lighter, tougher, more flexible, cheaper to produce, and to provide more complete body coverage than any other type of armour anywhere.


The suits look nice and are full body armor, but they are not exoskeletons. DARPA funds a lot of stuff and has not picked up on this, so it seems likely that the body armor DARPA is spending far more money on is superior to what Troy has cooked up.

December 09, 2009

More Stem Cell and Gene Therapy Progress Roundup

1. Researchers from the UCLA AIDS Institute and colleagues have for the first time demonstrated that human blood stem cells can be engineered into cells that can target and kill HIV-infected cells -- a process that could potentially be used against a range of chronic viral diseases. In effect, they have made the equivalent of a genetic vaccine.

"These studies lay the foundation for further therapeutic development that involves restoring damaged or defective immune responses toward a variety of viruses that cause chronic disease, or even different types of tumors."

Taking CD8 cytotoxic T lymphocytes -- the "killer" T cells that help fight infection -- from an HIV-infected individual, the researchers identified the molecule known as the T-cell receptor, which guides the T cell in recognizing and killing HIV-infected cells. These cells, while able to destroy HIV-infected cells, do not exist in enough quantities to clear the virus from the body. So the researchers cloned the receptor and genetically engineered human blood stem cells, then placed the stem cells into human thymus tissue that had been implanted in mice, allowing them to study the reaction in a living organism.

The engineered stem cells developed into a large population of mature, multifunctional HIV-specific CD8 cells that could specifically target cells containing HIV proteins. The researchers also found that HIV-specific T-cell receptors have to be matched to an individual in much the same way that an organ is matched to a transplant patient.

The next step is to test this strategy in a more advanced model to determine if it would work in the human body.


2. Researchers from Yale University and Mirna Therapeutics, Inc., reversed the growth of lung tumors in mice using a naturally occurring tumor suppressor microRNA.
The study reveals that a tiny bit of RNA may one day play a big role in cancer treatment, and provides hope for future patients battling one of the most prevalent and difficult to treat cancers.


3. A gentler form of blood stem cell transplant can reverse severe sickle cell disease in adults lucky enough to find a matched donor.

Patients with sickle cell disease have a genetic mutation that results in defective crescent-shaped red blood cells. Severe disease causes stroke, severe pain, and often fatal damage to major organs.

Blood stem cell transplants have reversed sickle cell disease in some 200 children. But the procedure -- which requires destruction of the patients' defective cells by radiation and chemotherapy to make room for the transplanted cells -- is too intense for adults weakened by sickle cell disease.

Moreover, adult patients are more prone to deadly graft-versus-host disease (GVHD), in which the transplanted cells attack the recipient.

But recent studies show that in some stem cell transplant recipients, some host cells survive the toxic "conditioning regimen" of radiation and drugs -- and their progeny happily coexist with those of the transplanted stem cells.


New Scientist reports on the stem cell treatment working in 9 out of 10 people with sickle cell disease.



4. There are problems associated with direct stem cell injections to try to restore heart muscle after a heart attack. UC San Diego bioengineers are proposing to place cells in a supportive material that changes stiffness over time via time-dependent crosslinking. This could help repair heart attack damage.

5. Cells from heart attack survivors' own bone marrow reduced the risk of death or another heart attack when they were infused into the affected artery after successful stent placement.

* At two years, no patients from the bone marrow cell group had suffered a heart attack, while seven patients from the placebo group had -- a statistically significant difference (see the quick check after this list).

* Compared with placebo patients, cell-infused patients were less likely to die (three vs. eight in the placebo group), need new revascularizations (25 vs. 38), or be rehospitalized for heart failure (one vs. five).
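
To see why 0 versus 7 heart attacks counts as statistically significant, here is a quick sketch using Fisher's exact test; the per-arm group sizes are hypothetical placeholders, since the summary above reports only the event counts.

```python
# Fisher's exact test on a 2x2 table of (events, non-events) per arm.
# Group sizes of 100 per arm are HYPOTHETICAL assumptions; the summary
# above reports only the event counts (0 vs. 7 heart attacks at two years).
from scipy.stats import fisher_exact

n_per_arm = 100                      # assumed, for illustration
table = [[0, n_per_arm - 0],         # bone marrow cell group: 0 events
         [7, n_per_arm - 7]]         # placebo group: 7 events

odds_ratio, p_value = fisher_exact(table)
print(f"two-sided p-value: {p_value:.4f}")  # ~0.014 with these assumed sizes
```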



6. Stem cell derived neurons may allow scientists to determine whether breakdowns in the transport of proteins, lipids and other materials within cells trigger the neuronal death and neurodegeneration that characterize Alzheimer's disease (AD) and the rarer but always fatal neurological disorder, Niemann-Pick Type C (NPC).

Using human embryonic stem cells (hESCs), Goldstein and his team have produced human neurons in which the NPC gene is switched off, providing the first close look at cellular transport in a human neuron lacking normal function of the gene.


7. Gene Therapy and Stem Cells Save Limb

Blood vessel blockage, a common condition in old age or diabetes, leads to low blood flow and low oxygen levels, which can kill cells and tissues. Such blockages can require amputation and loss of limbs. Now, using mice as their model, researchers at Johns Hopkins have developed therapies that increase blood flow, improve movement, and decrease tissue death and the need for amputation.

Activating the HIF-1 gene in the cells appeared to turn on a number of genes that help these cells not only home to the ischemic limb, but to stay there once they arrive. To figure out how the cells stay where they're needed, the research team built a tiny microfluidic chamber and tested the cells' ability to stay stuck with fluid flowing around them at rates mimicking the flow of blood through vessels in the body. They found that cells under low oxygen conditions were better able to stay stuck only if those same cells had HIF-1 turned on.

"Our results are promising because they show that a combination of gene and cell therapy can improve the outcome in the case of critical limb ischemia associated with aging or diabetes," says Semenza. "And that's critical for bringing such treatment to the clinic."



Days 2 and 3 of the International Electron Devices Meeting blogged by Sander Olson

This is a follow-up to Sander Olson's report on day one of the International Electron Devices Meeting. Sander Olson reports on days 2 and 3.

Days 2 and 3 of the International Electron Devices Meeting focused on both conventional and radical switching paradigms to continue Moore's law as long as possible. Conventional techniques, such as strained silicon and metal gates, will form the basis of the next several technology nodes for reducing leakage and maximizing the performance of silicon transistors. Intel engineers described the technology for their upcoming 32 nm node, which features 2nd-generation high-k and metal gates, and 4th-generation strained silicon. However, such techniques will not by themselves be sufficient for more than a few process generations, so other, more radical techniques will be needed.

As integrated circuit transistor budgets grow, multi-billion transistor ICs will become increasingly common, and designing such chips is causing difficulties within the industry. Carl Anderson of IBM reported that routing complexity, wiring variability, power density, and electromigration issues are all making IC design more difficult than ever. Moreover, there are limits to the number of chip designers that even a large company like Intel or IBM can hire. In 1990, IBM's chip designers were expected to design 50 circuits per year per designer. IBM currently employs thousands of chip designers - many more than 20 years ago - but each designer is now expected to design up to 5 million circuits per year, a hundred-thousand-fold increase in per-designer output. This massive increase is due to extensive use of increasingly sophisticated design and layout software, such as Simulation Program with Integrated Circuit Emphasis (SPICE). IBM is close to the limit of the number of chip designers it can hire, and is therefore continuing to leverage increasingly sophisticated design tools in an effort to keep up with burgeoning transistor budgets.



Maintaining high levels of reliability is also becoming problematic with modern IC designs. Intel's S. Borkar noted that chip design complexity is increasing exponentially and that creating dependable chips is becoming progressively harder. Intel sees the need to find ways to design reliable chips out of unreliable components. In particular, chip designers are simply unable to prevent variability from creeping into the latest chips, which makes it exceedingly difficult to ensure reliable operation and uniform performance. K. Michaels of PDF Solutions noted that 32 nm chip designs require designers to model 1500 different types of unique transistors, and argued that "Logic Templates" are a potential solution to this problem.
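
As a toy illustration of that variability problem (my own sketch with assumed device numbers, not data from the talks), here is a Monte Carlo showing how a small random threshold-voltage spread becomes a noticeable drive-current spread across nominally identical transistors:

```python
# Toy Monte Carlo of process variability: sample random threshold-voltage
# (Vth) shifts across nominally identical transistors and see how the
# spread propagates into drive current. All device numbers are assumptions.
import random
import statistics

random.seed(0)
VDD = 1.0          # supply voltage (V), assumed
VTH_NOM = 0.30     # nominal threshold voltage (V), assumed
SIGMA_VTH = 0.03   # Vth sigma from random dopant fluctuation, assumed

def drive_current(vth, k=1.0):
    """Very crude square-law saturation current, normalized units."""
    return k * max(VDD - vth, 0.0) ** 2

samples = [drive_current(random.gauss(VTH_NOM, SIGMA_VTH))
           for _ in range(100_000)]
mean = statistics.mean(samples)
spread = statistics.pstdev(samples) / mean
print(f"mean drive current: {mean:.3f} (normalized)")
print(f"relative spread:    {spread:.1%}")  # ~9% spread from 30 mV of Vth sigma
```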

One concept attracting increasing attention is 3-D chip technology, which garnered substantial discussion at the conference. Silicon scaling will probably end at either the 22 nm or 15 nm node, and after that the industry will need to transition to some form of 3-D technology to compensate. 3-D methods include through-silicon vias and 3-D die-to-wafer integration. To make this work, engineers need ways to thin wafers to as little as 10 microns, mostly by chemical polishing; 300 mm wafers have already been successfully thinned to 7 microns, with no electrical degradation found on the thinned wafers.

Many papers were given on multi-gate, high-electron-mobility, spintronic, and carbon transistors. Multi-gate transistors generally consist of a raised gate surrounding the transistor channel and have the potential to reduce off-state leakage. Chip designers were originally hoping to insert multi-gate devices at the 32 nm node, but due to manufacturing and other technical difficulties they won't insert multi-gate transistors into mainstream ICs until the 12 nm node, if that node is ever reached. The first indium gallium arsenide FinFETs have been fabricated and are exceptionally fast, but these transistors are expensive to fabricate. "Spintronics" takes advantage of the "spin" of an electron rather than the electron's charge. Spintronic devices could potentially dissipate a fraction of the power that charge-based devices consume, and the first spin-based MOSFET transistors have been fabricated.

The most promising transistor technology, however, appears to be carbon transistors. The IEDM labeled graphene one of the research materials of greatest importance. Graphene is a single sheet of carbon with unrivaled mechanical, electronic, and thermal properties. The number of papers on graphene has expanded exponentially, and IBM has demonstrated a graphene field effect transistor (FET) operating at 50 GHz. Moreover, IBM is confident that 100 GHz transistors should soon be feasible. Graphene can be used to make either conventional charge-based devices or spintronic devices; one speaker at the conference revealed that "spin" has been demonstrated in both single and multiple layers of graphene. Researchers have made lithographically patterned 1-2 nanometer "nanoribbons" of graphene with high mobility, high on-off ratios, and high critical current density. The prospects for graphene are hampered by the fact that graphene currently has a zero energy gap, so designers need to engineer a sufficient gap for it to be suitable for digital electronics. There are also issues with rough edges on the ribbons, as well as controlled-growth difficulties. But none of these issues are considered showstoppers, and graphene could end up being used in commercial semiconductors as early as 2020.
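
As a rough sanity check on such frequencies, the intrinsic transit-time limit of a FET is approximately f_T = v / (2*pi*L). The sketch below uses assumed values for carrier velocity and gate length (not IBM's device data); real devices fall well short of this ideal bound because of parasitic capacitance and contact resistance.

```python
# Back-of-envelope intrinsic cutoff frequency f_T ~ v / (2*pi*L) for a FET.
# Both numbers are assumptions for illustration, not IBM's device data.
import math

v_carrier = 5e5    # effective carrier velocity in the channel (m/s), assumed
L_gate = 350e-9    # gate length (m), assumed

f_t = v_carrier / (2 * math.pi * L_gate)
print(f"ideal transit-limited f_T ~ {f_t / 1e9:.0f} GHz")
# Parasitics and contact resistance push real devices well below this bound.
```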

J Storrs Hall of Foresight Explains the Medieval Warm Period and Global Warming

[Charts: central Greenland and Vostok ice-core temperature reconstructions]
There was a Medieval Warm Period (900-1100 AD), in central Greenland at any rate. But we knew that — that’s when the Vikings were naming it Greenland, after all.

* The vertical axis is degrees C. The Greenland values are actual temperatures (yep, it’s cold there); the Vostok values are deltas from the current temperature.

* CO2 can migrate in ice, but all that does is smooth out the CO2 record. In any case, CO2 is not the temperature proxy — the proxy is the isotopic fractions of oxygen-18 and deuterium in the ice itself.
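
For readers who want the proxy made concrete, here is a minimal sketch of how the delta-18-O value is defined and turned into a temperature estimate; the measured ratio and the Dansgaard-style calibration constants below are illustrative assumptions, not values from the ice cores shown.

```python
# delta-18-O: the per-mil deviation of a sample's 18O/16O ratio from the
# VSMOW ocean-water standard. Colder condensation temperatures give more
# negative delta values in ice. The linear calibration below is a rough
# Dansgaard-style relation and is an illustrative assumption; real
# ice-core calibrations are site-specific.

R_VSMOW = 2005.2e-6  # 18O/16O isotope ratio of the VSMOW standard

def delta_18O(r_sample):
    """delta-18-O in per mil for a measured 18O/16O ratio."""
    return (r_sample / R_VSMOW - 1.0) * 1000.0

def temp_from_delta(d18o):
    """Invert an assumed Dansgaard-style relation: d18O = 0.69*T - 13.6."""
    return (d18o + 13.6) / 0.69

d = delta_18O(1981.0e-6)  # hypothetical measured ratio for an ice sample
print(f"d18O = {d:.1f} per mil -> T estimate ~ {temp_from_delta(d):.1f} C")
```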


United Kingdom’s Met (Meteorological) Office announced that the 2000-2009 decade “has been, by far, the warmest decade on the instrumental record”, and that 2009 is on track to become the fifth warmest year in the past 160 years, continuing the warming trend that has accelerated since the 1970s.

We’re pretty lucky to be here during this rare, warm period in climate history. But the broader lesson is that climate doesn’t stand still. It doesn’t even stay in the relatively constrained range of the last 10,000 years for more than about 10,000 years at a time.

Does this mean that CO2 isn’t a greenhouse gas? No.

Does it mean that it isn’t warming? No.

Does it mean that we shouldn’t develop clean, efficient technology that gets its energy elsewhere than burning fossil fuels? Of course not. We should do all those things for many reasons — but there’s plenty of time to do them the right way, by developing nanotech. (There’s plenty of money, too, but it’s all going to climate science at the moment.) And that will be a very good thing to have done if we do fall back into an ice age, believe me.

For climate science it means that the Hockey Team climatologists’ insistence that human-emitted CO2 is the only thing that could account for the recent warming trend is probably poppycock.


If we want climate stability then we will need the geoengineering and climate control technology to achieve that result.

Nanotechnology for global climate control

City scale climate engineering

FURTHER READING
There is a rundown from a climate skeptic explaining why the historical numbers matter, why the slope of warming matters, why the size of the CO2 effect matters, why accurate models matter, and why the current models do not appear to be accurate.

1. The slope of recent temperature increases is used as evidence for the anthropogenic theory.

The more the warming falls into a documented natural range of temperature variation, the harder it is to portray it as requiring man-made forcings to explain. This is also the exact same reason alarmist scientists work so hard to eliminate the Medieval Warm Period and little ice age from the temperature record. Again, the goal is to show that natural variation is in a very narrow range, and deviations from this narrow range must therefore be man-made.


2. It is already really hard to justify the huge sensitivities in alarmist forecasts based on past warming — if past warming is lower, forecasts look even more absurd.

When projected back to pre-industrial CO2 levels, these future forecasts imply that we should have seen 2, 3, 4 or more degrees of warming over the last century, while even the flawed surface temperature records under discussion, with their upward biases and questionable adjustments, show only about 0.6C.

Sure, there are some time delay issues, probably 10-15 years, as well as some potential anthropogenic cooling from aerosols, but none of this closes these tremendous gaps. Even with an exaggerated temperature history, only the no feedback 1C per century case is really validated by history. And, if one assumes the actual warming is less than 0.6C, and only a part of that is from anthropogenic CO2, then the actual warming forecast justified is one of negative feedback, showing less than 1C per century warming from manmade CO2 — which is EXACTLY the case that most skeptics make.
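
A back-of-envelope version of that argument, using the standard logarithmic CO2 forcing relation (round-number concentrations assumed; ocean lag and aerosols ignored, as the text above notes):

```python
# Expected warming from the CO2 rise so far under different climate
# sensitivities S (warming per doubling of CO2), using the standard
# logarithmic relation dT = S * log2(C / C0).
# Concentrations are round numbers; no ocean lag or aerosols included.
import math

C0 = 280.0   # approximate pre-industrial CO2, ppm
C = 387.0    # approximate 2009 CO2, ppm

for S in (1.0, 2.0, 3.0, 4.5):
    dT = S * math.log(C / C0, 2)
    print(f"sensitivity {S:.1f} C/doubling -> expected warming so far: {dT:.2f} C")
# Compare with the ~0.6 C observed in the surface record cited above:
# high sensitivities overpredict the warming to date, modulo time lags.
```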



Magneto Electric Quantum Wheel

Here we show that self-propulsion in quantum vacuum may be achieved by rotating or aggregating magneto-electric nano-particles. The back-action follows from changes in momentum of electro-magnetic zero-point fluctuations, generated in magneto-electric materials. This effect may provide new tools for investigation of the quantum nature of our world. It might also serve in the future as a “quantum wheel” to correct satellite orientation in space.

Mechanical action of the quantum vacuum on magneto-electric objects may be observable and significant in magnitude. Rotation or self-assembly of the nano-particles is enough to generate a back-action from zero electromagnetic fluctuations. The amount of momentum that can be extracted from the quantum vacuum by this effect may have practical implications in the future, depending on advances in magneto-electric materials.



Despite some initial claims of negligible or even zero momentum transfer, recent theoretical studies concur that material objects may acquire momentum from the quantum vacuum. This can be explained qualitatively by considering the quantum vacuum as a random fluctuating electromagnetic field (so-called zero fluctuations) composed of propagating modes. Each mode possesses both energy and momentum, causing Lamb splitting of spectral levels, Casimir attraction of objects in empty space, and other mechanical interactions from nano to astrophysical scales. The total momentum vanishes if the counter-propagating modes mutually cancel. This occurs in all materials except magneto-electrics, which lack both space and time symmetries.
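
To put a number on the "Casimir attraction" mentioned above, here is a minimal sketch of the textbook ideal-plate result (my own illustrative calculation, not one from the paper):

```python
# Casimir pressure between two perfectly conducting parallel plates:
# P = pi^2 * hbar * c / (240 * d^4), for plate separation d.
# An illustrative worked number, not a calculation from the paper.
import math

hbar = 1.0545718e-34   # reduced Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal plates at separation d (m)."""
    return math.pi**2 * hbar * c / (240.0 * d**4)

for d_nm in (10, 100, 1000):
    p = casimir_pressure(d_nm * 1e-9)
    print(f"d = {d_nm:5d} nm -> Casimir pressure ~ {p:.3g} Pa")
# ~1e5 Pa (about 1 atm) at 10 nm separation, falling off as 1/d^4.
```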

Self-propulsion requires mechanical back-action from an external medium such as ground, water, air or even a quantum liquid. This can be provided by wheels or propeller-like devices. Mechanical interaction with electromagnetic radiation can also serve as a means of moving matter on macro- and nano-scales. It seems that self-propulsion in vacuum, however, can be achieved only by rocket-like disposal of mass, at least in the foreseeable future.

In this article we demonstrate that aggregating or rotating magneto-electric particles change the momentum of the quantum vacuum and, as a consequence, acquire the resulting difference. This follows from momentum conservation: any change in the momentum of zero fluctuations is compensated by a corresponding change in the momentum of a material object or electromagnetic field. These new occurrences of vacuum momentum transfer do not require external means, such as the previously proposed modification of the magneto-electric constant by applying external electric and magnetic fields, or suppression of quantum vacuum modes by cavity-imposed boundary conditions.