
May 02, 2009

Wolfram Alpha in Action Video



The new computation engine will interpret natural language questions like:
What is the GDP of France? It provides a historical graph of France's GDP, gives definitions of GDP, and has dropdowns for drilling deeper into the presented information.

Lexington? Based on your location, it will assume which Lexington is meant and provide information on it. A selection list lets you look at other possible Lexingtons.






It looks like a very useful new way to analyze and find information.

Any numerical information that is presented can easily be turned into charts and graphs, since the system is built with Mathematica functionality. The Mathematica capabilities enable very powerful use of functions and high-level math manipulation.

Carnival of Space 101

Carnival of Space 101 is up at Robot Explorers.

This site provided one of the articles on Helion Energy's nuclear fusion effort.

Crowlspace takes another look at the Fermi Paradox (the argument that aliens should already be here, so long as even one alien race in the galaxy is able to successfully colonize space).

There are two likely equilibrium states that answer the Fermi Paradox. Either we’re in the pre-colonization era, before anyone ventures forth and colonizes the lot, because life started late in the universe’s history due to some recently changed astrophysical process. Or the galaxy has been colonized, in which case two sub-divisions seem reasonable:

(1) We’re an undeveloped patch missed by the last few waves of colonization as per Geoff Landis’s Percolation Theory.
(2) Or we haven’t been missed and we’re colonized, but just not how we usually imagine.


Cumbrian Sky looks at NASA's plans or non-plans for the moon.

Centauri Dreams has another discussion of solar sails, the idea that the concept of exploration is under attack, and a second Planetary Society attempt at a solar sail.






Check out the Carnival of Space 101 at Robot Explorers for a lot more.

May 01, 2009

Metalized Spider Silk Up to Ten Times Stronger


Spider silk becomes up to ten times tougher when metals are added using atomic layer deposition: the energy needed to break a fiber rises tenfold with titanium, ninefold with aluminium and fivefold with zinc.

New Scientist and the article in the journal Science report that the team fired beams of ionised metal compounds at lengths of silk from the orb-weaving spider Araneus diadematus using a technology called atomic layer deposition (ALD). As well as coating each silk fibre in a fine metal oxide, some metal ions penetrated the fibre. They tried zinc, aluminium and titanium compounds, all of which improved the mechanical properties of the silk. "With all three metals, the fibres can hold three to four times as much weight," says Knez. The fibres also become stretchier, so that their toughness - the energy needed to break a strand - rises even more. "The work needed to break the fibre rises tenfold with titanium, ninefold with aluminium and fivefold with zinc," he says.
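
Toughness here means the energy needed to break the fiber, i.e. the area under its stress-strain curve, which is why a fiber that is only 3-4 times stronger but also stretchier can end up roughly ten times tougher. A minimal numerical sketch of that relationship (the curve values are illustrative stand-ins, not the paper's measured data):

```python
import numpy as np

# Hypothetical stress-strain curves for a silk fiber before and after
# metal infiltration (illustrative numbers only, not measured data).
strain_native = np.linspace(0, 0.25, 50)        # breaks at 25% strain
stress_native = 1.2e9 * strain_native / 0.25    # peaks at ~1.2 GPa

strain_treated = np.linspace(0, 0.60, 50)       # stretchier fiber
stress_treated = 4.0e9 * strain_treated / 0.60  # ~3-4x the breaking stress

# Toughness = energy per unit volume to break = area under the curve (J/m^3)
toughness_native = np.trapz(stress_native, strain_native)
toughness_treated = np.trapz(stress_treated, strain_treated)

print(f"native:  {toughness_native:.2e} J/m^3")
print(f"treated: {toughness_treated:.2e} J/m^3")
print(f"ratio:   {toughness_treated / toughness_native:.1f}x")
# ~8x tougher from ~3x stronger plus stretchier, the same order as the
# tenfold titanium result quoted above.
```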

The same technique might beef up other biomaterials for a host of applications such as making artificial tendons from collagen.

So if adding metal proves not to be harmful, then it would be possible for people to undergo a treatment that would strengthen tendons by up to ten times. There is other research to enable people to regenerate (possibly regrowing a limb over a period of months) and separate work on non-harmful steroids or myostatin inhibitors (up to 4 times more effective than steroids) to increase strength. There was concern that myostatin inhibitors would weaken tendons. There is also the possibility of gene therapy to vastly increase strength. Success in all three areas (strength enhancement, tendon toughness, and regeneration) would enable super-soldiers and disruptive levels of physical enhancement.

The team believe that the metals are reacting with the spider silk's protein structure, forming strong covalently bonded cross-links between the amino acid polymers within the silk. Normally, these polymers are only linked by weaker hydrogen bonds.

Spider silk is not a practical engineering material, but materials scientists are trying to produce artificial fibres that mimic its properties. If they succeed, the result could be super-tough textiles.

Knez thinks the technique has more immediate potential for toughening other biomaterials such as collagen. "Mechanically improving collagen using our technique might open several new possible applications, like artificial tendons."


A 26-page PDF of supporting material describes the procedure and the results. The pictures, charts and graphs in this article are from this source.








The abstract: Greatly Increased Toughness of Infiltrated Spider Silk

In nature, tiny amounts of inorganic impurities, such as metals, are incorporated in the protein structures of some biomaterials and lead to unusual mechanical properties of those materials. A desire to produce these biomimicking new materials has stimulated materials scientists, and diverse approaches have been attempted. In contrast, research to improve the mechanical properties of biomaterials themselves by direct metal incorporation into inner protein structures has rarely been tried because of the difficulty of developing a method that can infiltrate metals into biomaterials, resulting in a metal-incorporated protein matrix. We demonstrated that metals can be intentionally infiltrated into inner protein structures of biomaterials through multiple pulsed vapor-phase infiltration performed with equipment conventionally used for atomic layer deposition (ALD). We infiltrated zinc (Zn), titanium (Ti), or aluminum (Al), combined with water from corresponding ALD precursors, into spider dragline silks and observed greatly improved toughness of the resulting silks. The presence of the infiltrated metals such as Al or Ti was verified by energy-dispersive x-ray (EDX) and nuclear magnetic resonance spectra measured inside the treated silks. This result of enhanced toughness of spider silk could potentially serve as a model for a more general approach to enhance the strength and toughness of other biomaterials.






[1] SS/N: Native dragline silk of Araneus spider.
[2] ESM/N: Native eggshell membrane.
[3] SS/TIP and [4] SS/W*: Native silks dipped into TIP (Ti[OCH(CH3)2]4) or water at ambient conditions (T = 15 °C, P = Patm) for 10 hours, followed by drying at the same conditions, respectively.
[5] SS/TP/100 and [6] SS/WP/100: Single-precursor (TMA or water) exposure for 100 cycles at the same processing conditions as SS/Al2O3/100, respectively.
[7] PF/Al2O3/300 (for NMR): Parafilm on which an Al2O3 layer is deposited at the same processing conditions as SS/Al2O3/300.
* For easy preparation and handling of these samples, when we dipped the silk into TIP/water and subsequently dried it, we wound the silk fibers directly on a paper clip, which served as the sample carrier for dipping and drying. During this process the sample, in particular SS/W, was unintentionally subjected to an axial restraint while drying at room temperature and ambient atmosphere.



Hypersonic Plane Update on the X-51 Waverider and Falcon Scramjet

X-51 Waverider


The Boeing X-51 WaveRider will undergo test flights at the end of 2009 and into 2010. The tests aim for five minutes of hypersonic flight, instead of the few seconds achieved in previous tests.

The Waverider, a.k.a. the X-51, is designed to fly more than six times faster than the speed of sound on ordinary jet fuel.

The WaveRider stays airborne, in part, with lift generated by the shock waves of its own flight. The design stems from the goal of the program — to demonstrate an air-breathing, hypersonic, combustion ramjet engine, known as a scramjet.

"We built a vehicle around an engine," said Joseph Vogel, the X-51 project manager with Boeing, which is building a series of four test planes under a $246.5-million program managed by the Air Force Research Laboratory in Dayton, Ohio.

NASA tested the concept in 2004, breaking the record for a jet-powered aircraft with a speed of Mach 9.6, or nearly 7,000 mph. But the vehicle, known as X-43, only flew for a few seconds and its copper-based engine was not designed to survive the flight.

The X-51 engine, made by Pratt & Whitney, is made from a standard nickel alloy and is cooled during flight by its own fuel. The program's goal is to fly for about five minutes. The military has its eye on high-speed cruise missiles as well as space vehicles that wouldn't need to carry their own oxidizer. The space shuttle, for example, carries both liquid oxygen and liquid hydrogen to power its main engines.

The WaveRider's first flight is scheduled for October over the Pacific Ocean. It will be carried into the air by a B-52 bomber, then released at an altitude of about 50,000 feet. A solid-rocket booster will ignite and speed it up to about Mach 4.8 and if all goes well, the aircraft's engine will take over from there, boosting the speed to more than Mach 6.
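
As a rough check on the speeds quoted above, here is a sketch converting Mach numbers to mph. It assumes the sea-level speed of sound; at the 50,000+ ft altitudes involved the true figure is closer to 660 mph per Mach, so these numbers are upper-end estimates:

```python
# Rough Mach-to-mph conversion at sea-level standard conditions (~340 m/s).
MPH_PER_MS = 2.23694

def mach_to_mph(mach, speed_of_sound_ms=340.3):
    return mach * speed_of_sound_ms * MPH_PER_MS

for label, mach in [("booster release", 4.8), ("X-51 target", 6.0),
                    ("X-43 record", 9.6)]:
    print(f"{label}: Mach {mach} ~ {mach_to_mph(mach):,.0f} mph")
# X-43 record: Mach 9.6 ~ 7,300 mph, matching the "nearly 7,000 mph" above.
```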





Falcon Hypersonic Flow Tests Done


There were ground tests of hypersonic flow up to Mach 3, and future tests will go up to Mach 4. This is to show that a combined ramjet/scramjet engine can work, with ramjet power taking the vehicle up to a speed where the scramjet can take over.

The Falcon hypersonic program page

Still 10-15 Years to Commercialize 10+ Megawatt Superconducting Wind Turbines


AMSC (American Superconductor) and Texas-based TECO-Westinghouse Motor Co have been working on an estimated $6.8 million project to design components for a 10-MW HTS generator. Another HTS device manufacturer, Germany’s Zenergy Power Group, is working with Converteam Ltd in the UK to commercialize an 8-MW HTS wind-turbine generator. Because of the practical limitations to erecting large turbines, a generator’s size and weight do matter, says Larry Masur, a Zenergy vice president. Several groups expect to have generator prototypes ready for testing within two years, but commercialization will take 10-15 years to reach competitive costs. Kite-generated wind power and other alternatives to turbines seem like the better approach.

Superconducting Wire Manufacturing Volume Needs to Increase and Wire Costs Need to be 3-6 Times Cheaper

“Half of the Superwind project is making the wires cheaper,” says Abrahamsen, whose colleagues are working on a more efficient process to deposit the layers of YBCO (YBa2Cu3O7) superconducting cuprates that form coated conductors. “The cost of offshore wind power is about €1 million [$1.3 million] for 1 MW, and depending on the design, a 10-MW generator will require several hundred kilometers of HTS wire.” To compete with the cost of copper wire, which is around $50/kA·m, Zenergy’s Masur says that HTS wire manufacturing needs to ramp up, and the price of HTS wire needs to fall to $15–$30/kA·m—from values estimated by other sources to be as high as $100/kA·m at low-production volumes. That does not include the cost to maintain and operate the cryogenic equipment needed to cool the wire below its critical temperature.
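
The $/kA·m unit prices wire by current rating times length, so a generator's wire bill scales with both. A back-of-the-envelope sketch using the figures above (the 300 km length and the 0.1 kA per-strand rating are illustrative assumptions, not vendor specifications):

```python
# Back-of-the-envelope HTS wire cost for a 10 MW generator.
# Assumed: 300 km of wire ("several hundred kilometers") rated at
# 100 A (0.1 kA) per strand -- both are illustrative guesses.
wire_length_m = 300_000
critical_current_kA = 0.1
kA_m = wire_length_m * critical_current_kA   # total conductor rating

for price_per_kAm in (100, 50, 30, 15):      # $/kA*m, from the range quoted
    cost = kA_m * price_per_kAm
    print(f"${price_per_kAm}/kA*m -> wire cost ${cost / 1e6:.2f}M")

# At ~$100/kA*m the wire alone runs ~$3M, a large slice of the ~$13M
# (10 MW x ~$1.3M/MW) offshore budget quoted above; at $15-30/kA*m it
# approaches copper economics.
```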







The HTS generator project teams are also testing designs that eliminate the gearbox, which converts the low angular speed of a turbine’s blades to a higher rotor speed to match the electrical grid’s AC frequency. Gearboxes often break down, especially in the humid offshore environment, and that adds to the cost of maintenance. AMSC’s Gamble says that his team has already yielded a gearless design that increases the torque on the rotor, which makes it easier to control the speed of the blades and maintain constant power flow to the grid.

The promise of HTS wind-turbine generators has the support of sectors from environmental groups to governments. Musial says it may take 10–15 years for commercial 10-MW or greater HTS generators to take off. “This is not science fiction,” he adds, “but it is not a garage project either.”


April 30, 2009

DNA Origami Self-Assembled Growth Details

This is a more detailed follow-up to the original Nextbigfuture article announcing the development of a method to grow DNA origami. This article discusses more details of the methods and the error rates of what was done. This is a pathway to assembling larger, complex, molecularly precise structures.


Recently there was success with the algorithmic self-assembly of DNA tiles to grow DNA Origami.
Programmable DNA origami seeds were developed that can display up to 32 distinct binding sites and researchers demonstrated the use of seeds to nucleate three types of algorithmic crystals. In the simplest case, the starting materials are a set of tiles that can form crystalline ribbons of any width; the seed directs assembly of a chosen width with >90% yield. Increased structural diversity is obtained by using tiles that copy a binary string from layer to layer; the seed specifies the initial string and triggers growth under near-optimal conditions where the bit copying error rate is <0.2%. Increased structural complexity is achieved by using tiles that generate a binary counting pattern; the seed specifies the initial value for the counter. Self-assembly proceeds in a one-pot annealing reaction involving up to 300 DNA strands containing >17 kb of sequence information. In sum, this work demonstrates how DNA origami seeds enable the easy, high-yield, low-error-rate growth of algorithmic crystals as a route toward programmable bottom-up fabrication.
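
A quick feel for what a <0.2% per-bit copying error rate means for a seeded ribbon, sketched below. The 32-bit width comes from the seed's 32 binding sites; the layer counts are assumptions for illustration, not the paper's exact crystal lengths:

```python
# Probability that a w-bit string is copied through n layers with no
# bit errors, assuming independent errors at rate p per bit per layer.
p = 0.002            # < 0.2% per bit (from the abstract)
width_bits = 32      # seeds display up to 32 distinct binding sites

for n_layers in (10, 100, 500):
    p_perfect = (1 - p) ** (width_bits * n_layers)
    print(f"{n_layers:4d} layers: P(error-free ribbon) = {p_perfect:.2e}")
# ~0.53 at 10 layers but ~0.002 at 100 layers: long crystals still
# accumulate errors, which is why the paper discusses proofreading.
```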


The 6-page paper is here.



Figure: Programming ribbon width. (A–C) AFM images of ligated ribbons grown from seeds specifying 8-, 10-, and 12-tile-wide ribbons, respectively. (Scale bars: 1 µm.) (D, F, and H) Corresponding high-resolution AFM images of individual ribbons. (Scale bars: 50 nm.) (E, G, and I) Distribution of ribbon widths in corresponding samples of unligated ribbons (SI Text), given as the fraction of tiles found in ribbons of a given width. Solid bars indicate samples annealed with seeds (n = 41,718; 69,200; and 125,876 tiles, respectively). Dots indicate samples annealed without seeds (n = 23,524; 26,404; and 145,376 tiles, respectively). In each experiment, boundary and nucleation barrier tiles were at 100 nM each, and the repeatable block tile concentrations were proportional to their use in ribbons of the target width, i.e., 200, 300, and 400 nM, respectively. For samples with seeds, each staple strand was at 50 nM, each adapter strand was at 100 nM, and the origami scaffold strand was at 10 nM.








The effective nucleation of Variable-Width, Copy, and Binary Counter ribbons using information-bearing origami seeds points the way to reliable and programmable bottom-up fabrication of complex molecular structures. This success was based on several principles. (i) Each tile set was capable of generating an infinite variety of distinct structures, a precondition for programmability. (ii) A designed nucleation barrier prevented the spontaneous assembly of tiles in slightly supersaturated conditions, clearing the way for high-yield seeded growth with low error rates. (iii) Information contained in the seed was propagated, and sometimes processed, during crystal growth, enacting a simple developmental program. The system developed here has already been useful for growing DNA crystals with other algorithmic patterns.

At the same time, this work uncovers several problems that must be solved to improve the technique. (i) The rate of copying errors that changed 1 to 0 was 5–10 times higher than errors changing 0 to 1, suggesting that modifying tiles by adding DNA hairpins as an AFM contrast agent significantly alters the crystal growth energetics. Alternative labeling methods could reduce the 1-to-0 copying error rate dramatically. (ii) Premature reversal errors and spurious nucleation errors could be reduced by adding an independent nucleation barrier on the other edge of the ribbon. (iii) Nucleation on the seed could occur more readily if the origami seed were redesigned to match the tile lattice spacing. (iv) Here, we used unequal tile concentrations to prevent excess tiles from forming undesirable side products. In contrast, theory predicts that error rates are lowest when the concentrations of all tiles are equal throughout the reaction. Low error rates and elimination of side products could be simultaneously achieved by purification of crystals before growth creates an imbalance in concentration, use of a chemostat, or design of a chemical buffer for tile concentrations (R. Hariadi, personal communication). (v) Implementing improved proofreading techniques should further reduce logical error rates, and larger block sizes may reduce internal lattice defects. Finally (vi), aggregation of crystals must be reduced.

Combined with the wealth of available chemistries for attaching biomolecules and nanoparticles to DNA, an improved system for seeded growth of algorithmic crystals could be a powerful platform for programming the growth of molecularly defined structures and information-based materials.

The artificial systems developed here fill the gap between the simple seeded growth of natural crystals and the sophisticated seeded growth of biological organisms. Some natural systems also occupy this gap: similar phenomena - seeded growth of crystals with fixed thickness, variable thickness, combinatorial layering patterns, and even complex patterns derived from local interactions - have all been observed or inferred in minerals such as rectorite, illite, kaolinite, barium ferrite, and mica. Within biology, centrioles that nucleate microtubules in the "9 + 2" arrangement to form cilia or flagella can be seen as information-bearing seeds for molecular self-assembly. Thus, in addition to their technical relevance, the ability to study seeded growth processes using programmable DNA systems may open up new approaches for studying fundamental natural phenomena.


15 pages of supporting information describe how errors were reduced and the various methods that were used.





IMEC integrates plasmon-based nanophotonic circuitry with state-of-the-art ICs


IMEC, Europe’s leading independent nanoelectronics research institute, reports a method to integrate high-speed CMOS electronics and nanophotonic circuitry based on plasmonic effects. Metal-based nanophotonics (plasmonics) can squeeze light into nanoscale structures that are much smaller than conventional optic components. Plasmonic technology, today still in an experimental stage, has the potential to be used in future applications such as nanoscale optical interconnects for high performance computer chips, extremely sensitive (bio)molecular sensors, and highly efficient thin-film solar cells. IMEC’s results are published in the May issue of Nature Photonics.

Fast, low-power on-chip photonic communication is needed to enable zettaflop supercomputers. A redesign of current computer architecture with on-chip photonics would enable supercomputers that are 100,000 to 1 million times faster than what we have today.
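
The "100,000 to 1 million times" figure is just the ratio of a zettaflop to the fastest machines of 2009. A sketch of the arithmetic (the ~1.1 petaflop Roadrunner figure is an assumption brought in for context, not stated in the article):

```python
zettaflop = 1e21          # 10^21 floating point operations per second
roadrunner_2009 = 1.1e15  # ~1.1 petaflops, fastest machine of the era

print(f"speedup needed: {zettaflop / roadrunner_2009:,.0f}x")
# ~909,091x -- about a million times the fastest 2009 supercomputer.
```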










Nanowerk has coverage of the IMEC work

The optical properties of nanostructured (noble) metals show great promise for use in nanophotonic applications. When such nanostructures are illuminated with visible to near-infrared light, the excitation of collective oscillations of conduction electrons – called surface plasmons – generates strong optical resonances. Moreover, surface plasmons are capable of capturing, guiding, and focusing electromagnetic energy in deep-subwavelength length-scales, i.e. smaller than the diffraction limit of the light. This is unlike conventional dielectric optical waveguides, which are limited by the wavelength of the light, and which therefore cannot be scaled down to tens of nanometers, which is the dimension of the components on today’s nanoelectronic ICs.

Nanoscale plasmonic circuits would allow massive parallel routing of optical information on ICs. But eventually that high-bandwidth optical information has to be converted to electrical signals. To make such ICs that combine high-speed CMOS electronics and plasmonic circuitry, efficient and fast interfacing components are needed that couple the signals from plasmon waveguides to electrical devices.
As an important stepping stone to such components, IMEC has now demonstrated integrated electrical detection of highly confined short-wavelength surface plasmon polaritons in metal-dielectric-metal plasmon waveguides. The detection was done by embedding a photodetector in a metal plasmon waveguide. Because the waveguide and the photodetector have the same nanoscale dimensions, there is an efficient coupling of the surface plasmons into the photodetector and an ultrafast response.

IMEC has set up a number of experiments that unambiguously demonstrate this electrical detection. The strong measured polarization dependence, the experimentally obtained influence of the waveguide length and the measured spectral response are all in line with theoretical expectations, obtained from finite element and finite-difference-time-domain calculations. These results pave the way for the integration of nanoscale plasmonic circuitry and high-speed electronics.



Nature Photonics Article.

Electrical detection of confined gap plasmons in metal–insulator–metal waveguides

Abstract: Plasmonic waveguides offer promise in providing a solution to the bandwidth limitations of classical electrical interconnections. Fast, low-loss and error-free signal transmission has been achieved in long-range surface plasmon polariton waveguides. Deep subwavelength plasmonic waveguides with short propagation lengths have also been demonstrated showing the possibility of matching the sizes of optics and today's electronic components. However, in order to combine surface plasmon waveguides with electronic circuits, new high-bandwidth electro-optical transducers need to be developed. Here, we experimentally demonstrate the electrical detection of surface plasmon polaritons in metallic slot waveguides. By means of an integrated metal–semiconductor–metal photodetector, highly confined surface plasmon polaritons in a metal–insulator–metal waveguide are detected and characterized. This approach of integrating electro-optical components in metallic waveguides could lead to the development of advanced active plasmonic devices and high-bandwidth on-chip plasmonic circuits.


Molecular Level Computer Simulation Provides Insights for Successful Gene Therapy

Molecular-level computer simulations of dendrimer/DNA complexes in the presence of a model cell membrane provide insights that directly pertain to critical issues arising in emerging gene delivery therapeutic applications. This work "Dendrimers as synthetic gene vectors: Cell membrane attachment" appeared in the Journal of Chemical Physics.

Key Finding: Dendrimers should be able to work effectively for gene therapy and cancer drug delivery. Dendrimers are known to be safer than virus delivery of genes and drugs but to this point have been less effective. This work shows how to increase effectiveness.

Science Daily has coverage. A group of researchers at the University of California, Berkeley and Los Alamos National Laboratory have completed the first comprehensive, molecular-level numerical study of gene therapy. Their work should help scientists design new experimental gene therapies and possibly solve some of the problems associated with this promising technique.

"There are several barriers to gene delivery," says Nikolaos Voulgarakis of Berkeley, the lead author on the new paper. "The genetic material must be protected during transit to a cell, it must pass into a cell, it must survive the cell's defense mechanisms, and it must enter into the cell's guarded nucleus."

If all of these barriers can be overcome, gene therapy would be a valuable technique with profound clinical implications. It has the potential to correct a number of human diseases that result from specific genes in a person's DNA makeup not functioning properly -- or at all. Gene therapy would provide a mechanism to replace these specific genes, swapping out the bad for the good. If doctors could safely do this, they could treat or even cure diseases like cystic fibrosis, certain types of cancer, sickle cell anemia, and a number of rare genetic disorders.









Dendrimers seem to offer many advantages over viruses. They may be much less toxic, and they may offer other advantages in terms of cost, ease of production, and the ability to transport very long genes. If they can be designed to efficiently -- and safely -- shuttle genes into human cells, then they may be a more practical solution to gene therapy than viruses.

So far, laboratory experiments with different types of dendrimers have shown that they can insert genes into cells, but only with very low efficiency. Hoping to discover the key to improving this efficiency, Voulgarakis and his colleagues simulated the detailed, atomic-level physical process of dendrimers entering cells. They varied parameters like the dendrimer size and the length of the DNA they carry. Modeling these parameters on a computer is a fast, inexpensive approach for testing different ideas and optimizing the delivery vehicle.

What they uncovered were the key factors that determine the success of dendrimers as gene delivery vehicles -- things like the charges of the dendrimers and their target cell membranes, the length of DNA, and the concentration of surrounding salt. Their work has illuminated some of the molecular-level details that should help clinicians design the most appropriate gene vectors.

"Our study indicates that, over a broad range of biological conditions, the dendrimer/nucleic acid package will be stable enough to remain on the surface of the cell until translocation," says Voulgarakis.

Dendrimers are also used clinically for delivering cancer drugs to tumors, and for helping to image the human body. In the future, Voulgarakis and his colleagues plan to study the possibility of using dendrimers as drug delivery vehicles.


Helion Energy Fusion Based On John Slough Work


A 2005 presentation on using nuclear fusion for propulsion in space within 20 years of the start of a development program mentions John Slough's work, and its graphics match what Helion Energy is using.








Star Thrust experiment page (1,000 to 1 million seconds of specific impulse). This work was done in the 1990s. Helion Energy's development of a Field Reversed Configuration would also enable Star Thrust propulsion development.

Most fusion confinement concepts are unsuited for space power production due to their large size and complexity, and are non-ideal for propulsion due to the use of D-T fuel, which releases most of its energy in the form of high energy neutrons. A notable exception to these constraints is provided by the Field Reversed Configuration (FRC), which is a simple elongated current ring confined in a modest-field solenoidal magnet, as sketched above. FRCs lack any significant toroidal field, which results in a compact, high-beta plasma that is suitable for burning advanced aneutronic fuels. (Synchrotron radiation would limit ignition of high temperature aneutronic fuels in the low-beta environment of most confinement geometries.) The linear geometry and magnetic separatrix are a natural attribute for propulsive applications.

FRTP startup technology is well developed, but is too bulky and heavy for space applications. In recognition of this, NASA is supporting a very high power, but short duration, RMF startup experiment called STX (Star Thrust Experiment). STX will study the RMF formation and sustainment of hot (100s of eV) mid-sized FRCs, where the classical skin depth of the RMF is much less than the radius of the FRC. Penetration of the RMF has been demonstrated under such circumstances due to the collisionless Hall effect. In the laboratory frame of reference, the RMF works by pulling the electrons around azimuthally while leaving the ions behind, whereas in the electron frame, the RMF appears at rest due to synchronous rotation. This is accomplished by selecting the RMF frequency and amplitude such that the electrons are magnetized with respect to the rotating field and the ions are not. It is also necessary that the electrons be highly collisionless in order to experience synchronous rotation, and thus allow RMF penetration.

The STX vacuum chamber consists of a 40 cm diameter by 3 m long quartz tube. Two high-Q (400) tank circuits will produce the required 0.01 T, 0.5 MHz RMF at the tens-of-megawatts level for 0.2 msec. In addition to an ion Alfven heater and axial discharge, these high-power tank circuits will be responsible for ionizing and heating the plasma past the radiation barrier, and thus will have the capability of briefly attaining power outputs in the hundreds of megawatts, levels common to FRTPs. Additional IGBT (solid state) circuits will sustain the RMF for a longer duration at lower power levels. STX is presently under construction. The experimental design, power supply performance and preliminary ionization and heating data, along with FRC RMF theory, will be presented.
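
The RMF frequency has to sit between the ion and electron cyclotron frequencies so the electrons co-rotate while the ions stay put. A quick sanity check with the quoted 0.01 T, 0.5 MHz figures (a sketch assuming deuterium ions; not a calculation from the presentation itself):

```python
import math

e = 1.602e-19    # elementary charge, C
m_e = 9.109e-31  # electron mass, kg
m_D = 3.344e-27  # deuteron mass, kg
B = 0.01         # RMF amplitude in tesla, from the STX description

f_ce = e * B / (2 * math.pi * m_e)  # electron cyclotron frequency
f_ci = e * B / (2 * math.pi * m_D)  # deuteron cyclotron frequency

print(f"f_ci = {f_ci / 1e3:.0f} kHz < f_RMF = 500 kHz < f_ce = {f_ce / 1e6:.0f} MHz")
# f_ci ~ 76 kHz and f_ce ~ 280 MHz, so a 0.5 MHz RMF leaves the ions
# effectively unmagnetized while the electrons rotate synchronously.
```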


April 29, 2009

World Health Organization Raises Pandemic Threat Level to 5 out of 6


WHO Director-General Margaret Chan raised the official alert level to phase 5, the last step before a pandemic, at a news conference in Geneva.

"The biggest question is this: how severe will the pandemic be, especially now at the start," Chan said. But she added that the world "is better prepared for an influenza pandemic than at any time in history."

Phase 5 is characterized by human-to-human spread of the virus into at least two countries in one WHO region. While most countries will not be affected at this stage, the declaration of Phase 5 is a strong signal that a pandemic is imminent and that the time to finalize the organization, communication, and implementation of the planned mitigation measures is short.

Phase 6, the pandemic phase, is characterized by community level outbreaks in at least one other country in a different WHO region in addition to the criteria defined in Phase 5. Designation of this phase will indicate that a global pandemic is under way.

During the post-peak period, pandemic disease levels in most countries with adequate surveillance will have dropped below peak observed levels. The post-peak period signifies that pandemic activity appears to be decreasing; however, it is uncertain if additional waves will occur and countries will need to be prepared for a second wave.

Previous pandemics have been characterized by waves of activity spread over months. Once the level of disease activity drops, a critical communications task will be to balance this information with the possibility of another wave. Pandemic waves can be separated by months and an immediate “at-ease” signal may be premature.

In the post-pandemic period, influenza disease activity will have returned to levels normally seen for seasonal influenza. It is expected that the pandemic virus will behave as a seasonal influenza A virus. At this stage, it is important to maintain surveillance and update pandemic preparedness and response plans accordingly. An intensive phase of recovery and evaluation may be required.


Pandemic phases are described at Wikipedia.



The pandemic alert phase scale is different from the Pandemic Severity Index.







The Pandemic Severity Index (PSI) is a proposed classification scale for reporting the severity of influenza pandemics in the United States.



RELATED READING
Deaths from regular flu

Helion Built One Third Scale Nuclear Fusion Engine. $20 Million For Full Scale Reactor in 2010-2011



Helion Energy has built a one third scale fusion generator and wants $20 million to build a full scale prototype by 2011. They would need another $100 million or so to build a first commercial version by 2019. Successful development of commercial nuclear fusion should revolutionize space travel, energy production and provide massive economic benefits when fully deployed.

UPDATE In 2013, Helion Energy reported having received a total of about $7 million in funds from DOE, the Department of Defense and NASA. The company hopes to raise another $2 million by next year, $35 million in 2015-17, and $200 million for its pilot plant stage. Helion Energy's new plan is to build a 50-MWe pilot of its “Fusion Engine” by 2019 after which licensees will begin building commercial models by 2022.

Helion Energy has the exclusive license to a novel energy technology, the Fusion Engine. A prototype at 1/3 commercial scale is operational and generating energy from fusion. The Fusion Engine works by forming hot, ionized deuterium and tritium gas known as a Field Reversed Configuration plasma. Two of these plasmas are then electromagnetically accelerated to greater than 1 million mph and then collided in a burn chamber. In this isolated region, temperatures reach 50 million degrees and release enormous amounts of energy.


Note: Field reversed configuration plasma using colliding beams is the technology being developed by the secretive Tri-Alpha Energy which has raised over $40 million.

The $20 million would enable:
* Building a full-scale prototype - energy generation is volumetric, so a scale increase is required for breakeven
* Development of repetitive pulse technology - continuous operation requires repeated fusion events to meet commercial targets
* Continued/additional computer modeling of the commercial reactor - optimized for high-efficiency power generation



UPDATE:
Follow up article on this site about using this type of fusion for space propulsion.

(H/T to Kurt9) Talk Polywell discussion

Quotes from Art Carlson:
The idea itself is old, so the question is how they deal with the known difficulties. One of these is that FRCs, when scaled to reactor sizes, are supposed to be unstable. (Actually they are supposed to be unstable at laboratory size, which they are not, but it will probably get worse when scaling up.)

The topology of an FRC machine is linear, so it is much simpler than a (toroidal) tokamak, and it operates naturally at high beta, so the power density is 2 orders of magnitude higher than for a tokamak.

The Seattle area code of the CEO matches the UW, so there is probably more than a coincidental relationship here. That's good. John Slough is an old buddy of mine. He was running the experiment I did my Ph.D. work on. Definitely serious. Keep your eye on this one (but watch your pocketbook).

There are a lot of interesting things to say about this concept. For example, direct conversion: You can magnetically compress the configuration till it starts burning, then let it expand against the field. If you are lucky, you get more electrical energy out than you put in to begin with. This is the most attractive direct conversion scheme I know of. Another thing, wall loading: Since this thing is zipping down a tube, if you decide the heat and/or neutron flux to the wall is too large at your design point, you can simply make your tube a bit longer and send the plasma down it a bit faster.


But, from http://www.physicsessays.com/doc/s2005/John_Slough-Final.pdf:

Most importantly, the FRC remains in a stable regime with regard to MHD modes such as the tilt from formation through burn.

Finally, in the reactor scenario outlined for PHD, the FRC at no time exceeds the empirical regime where stability and good confinement has been observed throughout the entire formation process through burn.


From http://hifweb.lbl.gov/ICC2000/MagnetizedTargetFusion/Slough1_Oral.pdf

Given the observed scaling with size and density, the required radius at a density of 10^24 m^-3 for a DT fusion burn with a gain > 1 is found to be ~1 cm.

The PHD fusion envisioned also provides for a simple direct conversion of the plasma into directed thrust. Of all fusion reactor embodiments, only the magnetically confined plasma in the Field Reversed Configuration (FRC) has the linear geometry, low confining field, and high plasma pressure required for the direct conversion of fusion energy into high specific impulse and thrust, and would thus have direct applicability to deep space flight.





























Video Players with 25 Times the Resolution of High Def Demonstrated



Red Camera has developed a wavelet compression that compresses video by 700 times or more.

The Red 400 original files were mastered as 16-bit TIFFs (4096x2304), roughly 51 MB per frame. That's an uncompressed data rate of 1.3 GB per second. Blu-ray is 1440x1080, and has 1920x1080 as well. So the Red 400 is about 5-6 times the resolution of Blu-ray.

The Red 300 was mastered as 10-bit DPX (4096x2048) at roughly 32 MB per frame, or about 750 MB per second.

So the compression brought this down to 10 Mbps or less, low enough to be transmitted over wifi.
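
Those data rates can be sanity-checked from the frame geometry. A sketch (3 color channels and 24 frames per second are assumptions, which land close to the quoted 51 MB/frame and 1.3 GB/s):

```python
# Uncompressed data rate for the Red 4K masters described above.
width, height = 4096, 2304
bits_per_channel = 16
channels = 3   # assumed RGB
fps = 24       # assumed cinema frame rate

bytes_per_frame = width * height * channels * bits_per_channel // 8
rate_Bps = bytes_per_frame * fps

print(f"frame: {bytes_per_frame / 2**20:.0f} MiB")  # ~54 MiB (~51 MB quoted)
print(f"rate:  {rate_Bps / 2**30:.2f} GiB/s")       # ~1.27 GiB/s

# Compression ratio down to a ~10 Mbps wavelet stream:
ratio = (rate_Bps * 8) / 10e6
print(f"compression: ~{ratio:,.0f}x")
# On the order of 1,000x, consistent with the "700 times or more" claim.
```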

Red Camera is revolutionizing the resolution of still cameras and video cameras. They are increasing the resolution and lowering the high cost of high end systems.

The Red One video camera has been used by movie directors like:

* Steven Soderbergh, the Oscar-winning director, borrowed two prototypes to shoot his Che Guevara biopics, which premiered at the Cannes Film Festival in May, and later purchased three for his film The Informant.
* Peter Jackson, the Lord of the Rings himself, bought four.
* Director Doug Liman used a Red on Jumper.
* Peter Hyams used one on his upcoming Beyond a Reasonable Doubt.


New high-end modular video cameras start at $3,750 for a complete camera with 3K resolution. This is over ten times the resolution of HDTV cameras at similar prices, and over two times the resolution of 2K cameras with higher prices.






2K projectors can display 2.2 megapixels and 4K projectors can show 8.8 megapixels.

FURTHER READING
The Red Ray page at Red Camera Company

Red Scarlet and Epic video cameras page. There are also 3D cameras coming.

Optical Wavelength Cloaking Carpets



Michal Lipson and pals at Cornell University and Xiang Zhang and buddies at UC Berkeley say they have both built cloaks that are essentially mirrors with a tiny bump in which an object can hide. The cloaking occurs because the mirrors look entirely flat. The bump is hidden by a pattern of tiny silicon nanopillars on the mirror surface that steers reflected light in a way that makes any bump look flat. So anything can be hidden beneath the bump without an observer realising it is there, like hiding a small object under a thick carpet.

Cloaking at Optical Frequencies : Cornell University



Cloaking principle of the fabricated device. A planar mirror forms an image equal to the object reflected (a), but when the mirror is deformed, the image is distorted (b), allowing an external observer to identify the deformation. Our cloaking device – shaded area in front of the mirror – corrects the distortion in the image, so that the observer no longer identifies the deformation in the mirror, nor an object hidden behind the deformation (c).

The cloak operates at a wide bandwidth and conceals a deformation on a flat reflecting surface, under which an object can be hidden. The device is composed of nanometer size silicon structures with spatially varying densities across the cloak. The density variation is defined using transformation optics to define the effective index distribution of the cloak.

These results represent the first experimental demonstration of an invisibility cloaking device at optical frequencies. The bandwidth and wavelength of operation of the device is limited by the bandwidth of operation of the distributed Bragg reflector. This bandwidth is large, 950 nm around a wavelength of 1500 nm, due to the large index contrast between silicon and SiO2. Such a cloak could in principle be reproduced over much larger domains, using techniques such as nanoimprinting, enabling a wide variety of applications in defense, communications, and other industries. Note that in this paper we show how the trajectory of light can be manipulated around a region to render it invisible. Using transformation optics in a similar fashion to the one used in this paper, one could do the opposite and concentrate light in an area. This could be used, for example, for efficiently collecting sunlight in solar energy applications.








Dielectric Optical Cloak: UC Berkeley

This is the first experimental realization of a dielectric optical cloak. The cloak is designed using quasi-conformal mapping to conceal an object placed under a curved reflecting surface which imitates the reflection of a flat surface. The cloak consists only of isotropic dielectric materials, which enables broadband and low-loss invisibility at a wavelength range of 1400-1800 nm.

The experimental demonstration of cloaking at optical frequencies suggests invisibility devices are indeed within reach. The all-dielectric design is isotropic and non-resonance based, therefore promising a new class of broadband and low-loss optical cloaks. It should be noted that this methodology can also be extended into an air background by incorporating non-resonant metallic elements to achieve indices smaller than one. Furthermore, the quasi-conformal mapping design and fabrication methodology presented here may open new realms of transformation optics beyond cloaking.

Figure caption: Wavelength dependence of the carpet cloak. Plotted is the intensity along the output grating for a curved reflecting surface (A) with the cloak and (B) without the cloak. The cloak demonstrates broadband performance at 1400-1800 nm wavelengths. Distinct splitting of the incident beam is observed from the uncloaked curved surface due to the strong scattering of the original beam.




April 28, 2009

Ultrasound Imaging with Smartphones


The image of Zar's carotid artery appears on this small, portable smartphone connected to the probe by a USB driver.

William D. Richard, Ph.D., WUSTL (Washington University of St Louis) associate professor of computer science and engineering, and David Zar, research associate in computer science and engineering, have made commercial USB ultrasound probes compatible with Microsoft Windows mobile-based smartphones.

In order to make commercial USB ultrasound probes work with smartphones, the researchers had to optimize every aspect of probe design and operation, from power consumption and data transfer rate to image formation algorithms. As a result, it is now possible to build smartphone-compatible USB ultrasound probes for imaging the kidney, liver, bladder and eyes, endocavity probes for prostate and uterine screenings and biopsies, and vascular probes for imaging veins and arteries for starting IVs and central lines. Both medicine and global computer use will never be the same.

"Twenty-first century medicine is defined by medical imaging," said Zar. "Yet 70 percent of the world's population has no access to medical imaging. It's hard to take an MRI or CT scanner to a rural community without power."

A typical, portable ultrasound device may cost as much as $30,000. Some of these USB-based probes sell for less than $2,000 with the goal of a price tag as low as $500.







The electronics for the ultrasound probe have shrunk over 25 years from cabinet-sized to a tiny circuit board one inch by three inches (left). WUSTL's William D. Richard and Dave Zar have wedded a small, portable ultrasound imaging device with a smartphone (right).

Richard and Zar have discussed a potential collaboration with researchers at the Massachusetts Institute of Technology about integrating their probe-smartphone concept into a suite of field trials for medical applications in developing countries.

"We're at the point of wanting to leverage what we've done with this technology and find as many applications as possible," Richard said.

One such application could find its way to the military. Medics could quickly diagnose wounded soldiers with the small, portable probe and phone, quickly locating shrapnel wounds in order to decide whether to transport the soldier or treat him elsewhere on the field.

RELATED READING
There are cellphone microscopes that could be available for less than $100.

Researchers at the University of Illinois Develop a Nanoneedle to Inject Molecules into Living Cells

Researchers at the University of Illinois have developed a membrane-penetrating nanoneedle for the targeted delivery of one or more molecules into the cytoplasm or the nucleus of living cells. In addition to ferrying tiny amounts of cargo, the nanoneedle can also be used as an electrochemical probe and as an optical biosensor. Supporting information for the Nano Letters research.
In the paper, Min-Feng Yu, a professor of mechanical science and engineering, and collaborators describe how they deliver, detect and track individual fluorescent quantum dots in a cell’s cytoplasm and nucleus. The quantum dots can be used for studying molecular mechanics and physical properties inside cells.

To create a nanoneedle, the researchers begin with a rigid but resilient boron-nitride nanotube. The nanotube is then attached to one end of a glass pipette for easy handling, and coated with a thin layer of gold. Molecular cargo is then attached to the gold surface via “linker” molecules. When placed in a cell’s cytoplasm or nucleus, the bonds with the linker molecules break, freeing the cargo.

With a diameter of approximately 50 nanometers, the nanoneedle introduces minimal intrusiveness in penetrating cell membranes and accessing the interiors of live cells. The delivery process can be precisely controlled, monitored and recorded – goals that have not been achieved in prior studies.

The ability to deliver a small number of molecules or nanoparticles into living cells with spatial and temporal precision may make feasible numerous new strategies for biological studies at the single-molecule level, which would otherwise be technically challenging or even impossible, the researchers report. “Combined with molecular targeting strategies using quantum dots and magnetic nanoparticles as molecular probes, the nanoneedle delivery method can potentially enable the simultaneous observation and manipulation of individual molecules,” said Ning Wang, a professor of mechanical science and engineering and a co-author of the paper. Beyond delivery, the nanoneedle-based approach can also be extended in many ways for single-cell studies, said Yu, who also is a researcher at the Center for Nanoscale Chemical-Electrical-Mechanical Manufacturing Systems. “Nanoneedles can be used as electrochemical probes and as optical biosensors to study cellular environments, stimulate certain types of biological sequences, and examine the effect of nanoparticles on cellular physiology.”

Regular Flu Deaths in the USA in 2009




Regular flu in the United States kills about 30,000 people in an average year; 90% of those are people 65 and older who are already not in the best of health. There have been 820-987 deaths each week from the regular flu in the 122 cities that are in the Centers for Disease Control tracking system. In the USA there are about 2.5 million deaths from all causes in a year.

In a regular year, 250,000 to 500,000 people worldwide die from the flu, out of about 55 million deaths from all causes. Past flu pandemics have each killed roughly 0.75-2 million people.
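
For scale, a sketch of the arithmetic behind those figures:

```python
# Regular flu deaths as a share of all deaths (figures from the text).
us_flu, us_all = 30_000, 2_500_000
world_flu_low, world_flu_high, world_all = 250_000, 500_000, 55_000_000

print(f"US: flu is {us_flu / us_all:.1%} of all deaths in an average year")
print(f"World: {world_flu_low / world_all:.1%} to "
      f"{world_flu_high / world_all:.1%} of all deaths")
# US: 1.2%; world: roughly 0.5% to 0.9% of the ~55 million yearly deaths.
```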

The last major global flu pandemic was in 1968-1969 (the Hong Kong Flu). Since that time, more effective flu treatments have been developed, and there is progress toward universal flu vaccines and toward a system of broad flu vaccination with faster-to-develop booster shots once specific flu types are identified.

All of the public health protocols to minimize swine flu risks should also be followed for regular flu:
* Frequently wash hands with hot water and soap for 20-30 seconds.
* Avoid contact with people who are sick with flu like symptoms.
* Get a flu vaccination shot

Flu activity reports are available from the Centers for Disease Control.








Advanced Research Projects Agency – Energy Starting Up

Transformational energy technology projects can be submitted to the Advanced Research Projects Agency – Energy (ARPA-E or ARPA Energy) from May 12 to June 2, 2009 for the first round of funding.

The agency defined as "transformational" a technology that "so outperforms current approaches that it causes an industry to shift its technology base to the new technology."

The program will seek to move immature technologies with potentially high payoff beyond the "valley of death" that frequently prevents new technologies from reaching the commercial market, the solicitation stated. "We are not looking for incremental progress on current technologies," the agency added.

The new program will cover ARPA-E's key mission areas: energy security through the development of new energy technologies and maintaining the U.S. lead in developing and deploying advanced energy technologies.

The agency said it will accept concept papers between May 12 and June 2.

The announcement coincided with a speech on Monday by President Barack Obama at the National Academy of Sciences in which he formally announced funding for ARPA-E. The agency will initially receive $400 million in economic stimulus funding.

ARPA-E anticipates that most awards will be for total project costs in the range of $2 million to $5 million. Some may be as low as $500,000 or as high as $10 million. In extremely exceptional cases, ARPA-E may choose to accept efforts up to $20 million.

The period of performance for efforts selected under this announcement is limited to no more than 36 months performance; however, ARPA-E has a strong preference for a period of performance of no more than 24 months.

No more than 25% of the ARPA-E funds may be expended by the combination of all foreign entities on the project (excluding equipment that is not available in the United States).

The recipient must provide cost share of at least 20% of the total allowable costs for R&D projects of an applied nature.
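
To make the funding rules concrete, here is a sketch of how the 20% cost share and the 25% foreign-spending cap work out for a hypothetical award. The $5M project size is simply the midpoint of the typical range above, not an actual award:

```python
# Hypothetical ARPA-E project at the midpoint of the typical award range.
total_allowable_cost = 5_000_000  # assumed project size

recipient_share = 0.20 * total_allowable_cost   # >= 20% cost share
arpa_e_funds = total_allowable_cost - recipient_share
foreign_cap = 0.25 * arpa_e_funds  # max spend by all foreign entities

print(f"recipient cost share: ${recipient_share:,.0f}")
print(f"ARPA-E funds:         ${arpa_e_funds:,.0f}")
print(f"foreign spending cap: ${foreign_cap:,.0f}")
# $1,000,000 from the recipient, $4,000,000 federal, of which at most
# $1,000,000 may be expended by foreign entities on the project.
```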

Along with the industry research solicitation, the Energy Department also announced government grants to establish 46 Energy Frontier Research Centers. Funding for that effort totals $777 million.


Potential ARPA-E Projects
A suitable project could be initiating development of factory mass-produced liquid fluoride thorium reactors (LFTR) to replace coal power worldwide.

Inertial Electrostatic fusion could use more funding.

Focus Fusion (Dense plasma focus) could also use more funding.

Development of large scale uranium mining from seawater also seems ARPA-E worthy.

Submissions of Concept Papers
Opening time for submission of concept papers begins May 12, 2009. The Concept Paper closing date and time is on 2 June 2009 at 8:00 p.m. (EST). Concept papers must be submitted to www.FedConnect.net at any time between the opening time and the closing time for concept paper submission. Early submission is strongly encouraged.
This FOA will appear on the FedConnect website, www.FedConnect.net, and FedBizOpps website, www.fedbizopps.gov/. The directions for completing the concept paper submission are in this FOA (Section IV). To submit the concept paper, a cover sheet (Appendix 1) is also required.

Concept papers will be reviewed as received. In cases of an extremely meritorious submission, an applicant may be contacted by fax and email (if contact information is supplied by the applicant) for early submission for a full application. Concept paper notification will indicate whether a full application based on the idea presented in the concept paper is likely to be competitive. ARPA-E anticipates informing all other applicants no later than 13 July 2009.







ARPA-E is part of a broader national energy strategy. The elements of the Administration’s Energy and Environment Agenda (www.whitehouse.gov/agenda/energy_and_environment) relevant to this FOA include:
• Reduce GHG emissions: Drive emissions to 80% below 1990 levels by 2050, and ensure 25 percent of our electricity comes from renewable sources by 2025.
• Enhance Energy Security: Save more oil than the U.S. currently imports from the Middle East and Venezuela combined (more than 3.5 million barrels per day) within 10 years.
• Restore Science Leadership: Strengthen America’s role as the world leader in science and technology.
• Quickly Implement the Economic Recovery Package: Create millions of new green jobs and lay the foundation for the future.

Under this FOA, ARPA-E is seeking R&D applications for technologies that, when in wide-spread use, will make substantial, significant, quantitative contributions to these national goals and ARPA-E Mission Areas. In addition, the proposed technology when in use may not have a negative impact on any of the ARPA-E Mission Areas.

FURTHER READING
ARPA-E website

Electro Thermal Dynamic Stripping Oil Recovery Could Unlock 400 Billion More Barrels of Oil in Alberta at $26/Barrel



A field test was performed from Sept 2006 to August 2007 and the recovery and performance exceeded expectations. The recovery factor was over 75%, energy used per barrel was 23% less than anticipated and peak production rates were better than expected.

ET Energy's Electro Thermal technology could be used to pump out 600 billion barrels of Alberta's oil sands bitumen. That's more than triple the Alberta government's best guess at what's currently recoverable from the oil sands, and enough to satisfy total global demand for twenty years.

Saudi Arabia has 260 billion barrels of oil reserves, so the additional 421 billion barrels would be about 1.6 times the oil in Saudi Arabia.
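
The "twenty years of global demand" claim checks out against 2009 consumption. A sketch (the ~85 million barrels/day world demand figure is an assumption brought in for the check, not from the article):

```python
recoverable_barrels = 600e9  # ET Energy's claim for Alberta bitumen
world_demand_bpd = 85e6      # assumed ~2009 global oil demand

years = recoverable_barrels / (world_demand_bpd * 365)
print(f"{years:.0f} years of total global demand")  # ~19 years

saudi_reserves = 260e9
print(f"additional 421B barrels = {421e9 / saudi_reserves:.1f}x Saudi reserves")
```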

In coming weeks, the company will hit the road to raise $150-million to commercialize its technology.

That technology isn't much to look at — just a few well heads and large tanks sitting on a windswept field south of Fort McMurray. A series of electrodes dangle in each well. When they are turned on, they pass a current through the earth — like electricity through a stove element — and heat it up. The result: The bitumen, which is normally locked in sand as hard as rock, begins to flow — like molasses in a microwave. No huge mines needed, no greenhouse gas-spewing steam projects required.

In a place accustomed to prying bitumen from the earth using monstrous shovels and vast quantities of steam, this pilot project is a bold attempt to reshape the environmental and financial costs of the oil sands.

In other parts of Alberta, companies are using radically different techniques: Petrobank Energy and Resources Ltd. is studying how to free bitumen using underground combustion, while Laricina Energy Ltd. is mixing steam with solvents, which dramatically cuts the amount of natural gas used to extract bitumen from deeper oil sands. At universities and provincial research bodies, scientists are studying how microbes could be used in bitumen upgrading, and examining the effectiveness of new techniques inside specially modified medical CT scanners.

E-T has stumbled in its attempts to apply the technology to the oil sands (it has worked dozens of times in environmental remediation applications). In its second major test, it managed to produce oil from only one of four wells. Its problems ranged from electrical cables that were accidentally severed by surface equipment, to the design of its electrodes. In total, E-T has produced less than 3,000 barrels of oil.

Yet the potential prize for success is huge. E-T's technology, for example, could help open up carbonate oil, a huge hydrocarbon resource that is so tricky to produce that virtually no one has tried. And Petrobank believes its process, which uses a controlled underground burn to intensely heat oil sands and make them flow, can be used in a huge variety of heavy oil fields around the world. Like E-T's process, it requires virtually no water and uses dramatically less energy.









The Electro-Thermal Dynamic Stripping Process (ET-DSP™) is a patented electro-thermal technology which combines the majority of the dominant heat transfer mechanisms (electro-thermal, conduction, and convection) into an effective and environmentally benign method for heating the Athabasca oil sands. The fundamental physics for electro-thermal processes in oil sands were developed in the Applied Electromagnetic Laboratory at the University of Alberta as part of the AOSTRA University Access Program. The ET-DSP™ invention results from the use of electro-thermal methods for heating soils in the environmental industry in combination with years of thermal reservoir engineering experience in the energy industry. ET-DSP™ has achieved commercial status in the environmental industry as a technology that restores contaminated sites to useable real estate in less than six months as opposed to decades.

The electrodes are placed in a grid configuration, and an extraction well is located at the center of each series of electrode wells. The spacing of the electrode wells is optimized to provide the most efficient heating of the formation. The process is currently configured in a 1:1 E-Well (Electrode Well) to X-Well (Extraction Well) ratio, although further field testing will establish whether higher ratios (e.g., 2:1 or 4:1) are more economic. A minimal sketch of this layout follows.
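Here is a small sketch of the grid just described, assuming a square electrode lattice with one extraction well at the centre of each cell; spacing and grid size are made-up illustrative values.

```python
# Electrode wells (E-Wells) on a square lattice, one extraction well
# (X-Well) at the centre of each cell. Values are illustrative only.

spacing_m = 10.0   # assumed electrode spacing
n = 5              # assumed electrodes per side of the pilot grid

e_wells = [(i * spacing_m, j * spacing_m)
           for i in range(n) for j in range(n)]
x_wells = [((i + 0.5) * spacing_m, (j + 0.5) * spacing_m)
           for i in range(n - 1) for j in range(n - 1)]

# 25 E-Wells vs 16 X-Wells here; as the field grows the counts converge
# toward the 1:1 ratio quoted above. A 2:1 or 4:1 ratio would simply
# thin out the electrode lattice relative to the extraction wells.
print(len(e_wells), "electrode wells,", len(x_wells), "extraction wells")
```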


A five-page research article from 2000 covers electrically stimulated oil recovery.

Considering Possibilities for a Room Temperature Quantum Computer


Fig 1: Optically controlled spintronic patches might be linked by flying qubits to form a larger processor, as shown schematically here. Even 20 qubits linked within a patch would provide only a very modest quantum computer; linking 10 or 12 patches would be much more impressive. If each patch is to be accessed by separate optical inputs, the spacings must be more than optical wavelengths, so of the order of 1–2 microns.

Marshall Stoneham of the London Centre for Nanotechnology and Department of Physics and Astronomy has written a paper that considers whether a room-temperature quantum computer is possible.


Stoneham concentrates on two proposals as examples, with apologies to those whose suggestions he omits. Both proposals use optical methods to control spins, but do so in wholly different ways. The first is a scheme for optically controlled spintronics that Marshall Stoneham, Andrew Fisher, and Thornton Greenland proposed. The second route exploits entanglement of the states of distant atoms by interference, in the context of measurement-based quantum computing.

What would you do with a quantum computer if you had one? Proposals that do not demand room temperature range from the probable, like decryption or directory searching, to the possible, like modeling quantum systems, and even to the difficult yet perhaps conceivable, like modeling turbulence. More frivolous applications, like the computer games that drive many of today’s developments, make much more sense if they work at ambient temperatures. And available quantum processing at room temperature would surely stimulate inventive new ideas, just as solid-state lasers led to compact disc technology.

Summing up, where do we stand? At liquid nitrogen temperatures, say 77 K, quantum computing is surely possible, if quantum computing is possible at all. At dry ice temperatures, say 195 K, quantum computing seems reasonably possible. At temperatures that can be reached by thermoelectric or thermomagnetic cooling, say 260 K, things are harder, but there is hope. Yet we know that small (say 2–3 qubit) quantum devices operate at room temperature. It seems likely, to me at least, that a quantum computer of say 20 qubits will operate at room temperature. I do not say it will be easy. Will such a QIP device be as portable as a laptop? I won’t rule that out, but the answer is not obvious on present designs.

Why do we need a quantum computer? The major reasons stem from challenges to mainstream silicon technology. Markets demand enhanced power efficiency, miniaturization, and speed. These enhancements have their limits. Future technology scenarios developed for the semiconductor industry’s own roadmap imply that the number of electrons needed to switch a transistor should fall to just one (a single electron) before 2020. Should we follow this innovative yet incremental roadmap and trust to new tricks, or should we seek a radical technology, with wholly novel quantum components operating alongside existing silicon and photonic technologies? Any device with nanoscale features inevitably displays some types of quantum behavior, so why not make a virtue of necessity and exploit quantum ideas? Quantum-based ideas may offer a major opportunity, just as the atom did for the chemical industry in the 19th century and the electron did for microelectronics in the 20th. Quantum sciences could transform 21st century technologies.

Why choose the solid state for quantum computing? Quantum devices nearly always mean nanoscale devices, ultimately because useful electronic wave functions are fairly compact. Complex devices with controlled features at this scale need the incredible know-how we have acquired with silicon technology. Moreover, quantum computers will be operated by familiar silicon technology. Operation will be easier if classical controls can be integrated with the quantum device, and easiest if the quantum device is silicon compatible. And scaling up, the linking of many basic and extremely small units is a routine demand for silicon devices. With silicon technologies, there are also good ways to link electronics and photonics. So an ideal quantum device would not just meet quantum performance criteria, but would be based on silicon; it would use off-the-shelf techniques (even sophisticated ones) suitable for a near-future generation fabrication plant. A cloud on the horizon concerns decoherence: can entanglement be sustained long enough in a large enough system for a useful quantum calculation?



Fig 2: Larger arrays of diamond center qubits could be linked together for scale-up to a quantum computer. The many pairwise entanglements can be linked via a fast-switched optical multiplexer, in readiness for the final measurement step.

It can’t be done: serious quantum computing simply isn’t possible anyway. Could any quantum computer work at all? Is it credible that we can build a system big enough to be useful, yet one that isn’t defeated by loss of entanglement or degraded quantum coherence? Certainly there are doubters, who note how friction defeated 19th century mechanical computers. Others have given believable arguments that computing based on entanglement is possible. Of course, it may prove that some hybrid, a sort of quantum-assisted classical computing, will prove the crucial step.

It can’t be done: quantum behavior disappears at higher temperatures. Confusion can arise because quantum phenomena show up in two ways. In quantum statistics, the quantal ħ appears as ħω/kT. When statistics matter most, near equilibrium, high temperatures T oppose the quantum effects of ħ. However, in quantum dynamics, ħ can appear unassociated with T, opening new channels of behavior. Quantum information processing relies on staying away from equilibrium, so the rates of many individual processes compete in complex ways: dynamics dominate. Whatever the practical problems, there is no intrinsic problem with quantum computing at high temperatures.
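A short worked comparison makes this point concrete. In equilibrium statistics, ħ enters only through the ratio ħω/kT, so raising T washes quantum effects out; in dynamics, ħ can appear with no temperature attached at all. The Rabi-frequency example below is a standard illustration, not drawn from the paper.

```latex
% Equilibrium statistics: thermal occupation of a mode of frequency \omega.
% When kT \gg \hbar\omega the classical result emerges and \hbar drops out.
\bar{n} = \frac{1}{e^{\hbar\omega/kT} - 1}
  \;\longrightarrow\; \frac{kT}{\hbar\omega}
  \quad \text{for } kT \gg \hbar\omega .

% Dynamics: a driven (Rabi) oscillation frequency for a dipole moment \mu
% in an optical field E contains \hbar but no temperature at all:
\Omega = \frac{\mu E}{\hbar} .
```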

It can’t be done: the right qubits don’t exist. True, some qubits are not available at room temperature. These include superconducting qubits and those based on Bose-Einstein condensates. In Kane’s seminal approach, the high polarizability needed for phosphorus-doped silicon (Si:P) corresponds to a low donor ionization energy, so the qubits disappear (or decohere) at room temperature. In what follows, I shall look at methods without such problems.



Optically controlled spintronics. Think of a thin film of silicon, perhaps 10 nm thick, isotopically pure to avoid nuclear spins, on top of an oxide substrate (Fig. 1). The simple architecture described is essentially two dimensional. Now imagine the film randomly doped with two species of deep donor—one species as qubits, the other to control the qubits. In their ground states, these species should have negligible interactions. When a control donor is excited, the electron’s wave function spreads out more, and its overlap with two of the qubit donors will create an entangling interaction between those two qubits. Shaped pulses of optical excitation of chosen control donors guide the quantum dance (entanglement) of chosen qubit donors.
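A toy model shows why exciting the control donor acts as a switch: donor-donor coupling falls off roughly exponentially with separation on the scale of the electron's effective Bohr radius, so swelling the control's wave function turns the qubit-qubit interaction on. The exponential form and all radii and spacings below are illustrative assumptions, not values from the proposal.

```python
import math

def overlap_coupling(separation_nm, radius_nm):
    """Crude exponential wave-function-overlap model (arbitrary units)."""
    return math.exp(-2.0 * separation_nm / radius_nm)

separation_nm = 20.0      # assumed control-to-qubit spacing
ground_radius_nm = 2.0    # assumed compact ground-state radius
excited_radius_nm = 10.0  # assumed swollen excited-state radius

# Coupling is negligible with the control in its ground state and many
# orders of magnitude larger once the control is optically excited.
print("control in ground state:", overlap_coupling(separation_nm, ground_radius_nm))
print("control excited:        ", overlap_coupling(separation_nm, excited_radius_nm))
```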

For controlling entanglement in this way, typical donor spacings in silicon must be of the order of tens of nanometers. Optically, one can only address regions of the order of a wavelength across, say 1000 nm. The limit of optical spatial resolution is a factor 100 larger than donor spacings needed for entanglement. How can one address chosen pairs of qubits? The smallest area on which we can focus light contains many spins. The answer is to exploit the randomness inevitable in standard fabrication and doping. Within a given patch of the film a wavelength across, the optical absorptions will be inhomogeneously broadened from dopant randomness. Even the steps at the silicon interfaces are helpful because the film thickness variations shift transition energies from one dopant site to another. Light of different wavelengths will excite different control donors in this patch, and so manipulate the entanglements of different qubits. Reasonable assumptions suggest one might make use of perhaps 20 gates or so per patch. Controlled links among 20 qubits would be very good by present standards, though further scale up—the linking of patches—would be needed for a serious computer. The optically controlled spintronics strategy separates the two roles: qubit spins store quantum information, and controls manipulate quantum information. These roles require different figures of merit.

To operate at room temperature, qubits must stay in their ground states, and their decoherence—loss of quantum information—must be slow enough. Shallow donors like Si:P or Si:Bi thermally ionize too readily for room-temperature operations, though one could demonstrate principles at low temperatures with these materials. Double donors like Si:Mg+ or Si:Se+ have ionization energies of about half the silicon band gap and might be deep enough. Most defects in diamond are stable at room temperature, including substitutional N in diamond and the NV- center on which so many experiments have been done.

What about decoherence? Several mechanisms matter:

(1) Whatever enables entanglement also causes decoherence. This is why fast switching means fast decoherence, and slow decoherence implies slow switching. Optical control involves manipulation of the qubits by stimulated absorption and emission in controlled optical excitation sequences, so spontaneous emission will cause decoherence. For shallow donors, like Si:P, the excitation energy is less than the maximum silicon phonon energy; even at low temperatures, one-phonon emission causes rapid decoherence.

(2) Spin-lattice relaxation in qubit ground states destroys quantum information. Large spin-orbit coupling is bad news, so avoiding high atomic number species helps. Spin-lattice relaxation data at room temperature are not yet available for those Si donors (like Si:Se+) where one-phonon processes are eliminated because their first excited state lies more than the maximum phonon energy above the ground state. In diamond at room temperature, the spin-lattice relaxation time for substitutional nitrogen is very good (~1 ms), and a number of other centers have times of ~0.1 ms.

(3) Excited-state processes can be problems, and two-photon ionization puts constraints on wavelengths and optical intensities.

(4) The qubits could lose quantum information to the control atoms. This can be sorted out by choosing the right form of excitation pulses.

(5) Interactions with other spins, including nuclear spins, set limits, but there are helpful strategies, like using isotopically pure silicon.

The control dopants require different criteria. The wave functions of electronically excited controls overlap and interact with two or more qubits to manipulate entanglements between these qubits. The transiently excited state wave function of the control must have the right spatial extent and lifetime. While centers like Si:As could be used to show the ideas, for room-temperature operation one would choose perhaps a double donor in silicon, or substitutional phosphorus in diamond. The control dopant must have sharp optical absorption lines, since what determines the number of independent gates available in a patch is the ratio of the spread of excitation energies, inhomogeneously broadened, to the (homogeneous) linewidth. The spread of excitation energies—inhomogeneous broadening is beneficial in this optical spintronics approach—has several causes, some controllable. Randomness of relative control-qubit positions and orientations is important, and it seems possible to improve the distribution by using self-organization to eliminate unusable close encounters. Steps on the silicon interfaces are also helpful, provided there are no unpaired spins. Overall, various experimental data and theoretical analyses indicate likely inhomogeneous widths are a few percent of the excitation energy.
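A minimal numerical sketch of that ratio follows; every number is an illustrative assumption, chosen only to land near the "perhaps 20 gates" scale mentioned above.

```python
# Gates-per-patch estimate: the number of control donors that can be
# addressed independently is roughly the inhomogeneous spread of
# excitation energies divided by the homogeneous linewidth.
# All values are illustrative assumptions.

excitation_energy_mev = 300.0    # assumed deep-donor excitation energy
inhomogeneous_fraction = 0.02    # "a few percent" spread, assumed 2%
homogeneous_linewidth_mev = 0.3  # assumed zero-phonon linewidth

spread_mev = excitation_energy_mev * inhomogeneous_fraction
gates = spread_mev / homogeneous_linewidth_mev
print(f"roughly {gates:.0f} addressable gates per patch")

# As temperature rises the homogeneous linewidth broadens, the ratio
# shrinks, and scale-up gets harder -- the limit discussed below.
```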

A checklist of interesting systems as qubits or controls shows some significant gaps in knowledge of defects in solids. Surprisingly little is known about electronic excited states in diamond or silicon, apart from energies and (sometimes) symmetries. Little is known about spin lattice relaxation and excited state kinetics at temperatures above liquid nitrogen, except for the shallow donors that are unlikely to be good choices for a serious quantum computer. There are few studies of stabilities of several species present at one time. Can we be sure to have isolated P in diamond? Would it lose an electron to substitutional N to yield the useless species P+ and N- ? Will most P be found as the irrelevant (spin S=0) PV- center?

What limits the number of gates in a patch is the number of control atoms that can be resolved spectroscopically one from another. As the temperature rises, the lines get broader, so this number falls and scaling becomes harder. Note the zero phonon linewidth need not be simply related to the fraction of the intensity in the sidebands. Above liquid nitrogen temperatures, these homogeneous optical widths increase fast. Thus we have two clear limits to room-temperature operation. The first is qubit decoherence, especially from spin lattice relaxation. The second is control linewidths becoming too large, reducing scalability, which may prove a more powerful limit.

Entangled states of distant atoms or solid-state defects created by interference. See Fig 2 above. A wholly different approach generates quantum entanglement between remote systems by performing measurements on them in a certain way. The systems might be two diamonds, each containing a single NV- center prepared in specific electron spin states, the two centers tuned to have exactly the same optical energies. The measurement involves “single shot” optical excitation. Both systems are exposed to a weak laser pulse that, on average, will achieve one excitation. The single system excited will emit a photon that, after passing through beam splitters and an interferometer, is detected without giving information as to which system was excited. “Remote entanglement” is achieved, subject to some strong conditions. The electronic quantum information can be swapped to more robust nuclear states (a so-called brokering process). This brokered information can then be recovered when needed to implement a strategy of measurement-based quantum information processing.
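To see why this is a repeat-until-success protocol, here is a toy estimate of how many weak pulses a heralded entanglement attempt needs. The efficiency figures are illustrative assumptions, not measured NV- center values.

```python
# Each weak pulse heralds remote entanglement only with a small
# probability p, so on average 1/p pulses are needed before the herald
# fires. All efficiencies below are illustrative assumptions.

collection_efficiency = 0.05  # assumed photon collected per excitation
detector_efficiency = 0.6     # assumed detector quantum efficiency
path_transmission = 0.5       # assumed loss in beam splitters/interferometer

p_success = collection_efficiency * detector_efficiency * path_transmission
expected_pulses = 1.0 / p_success
print(f"p = {p_success:.3f} per pulse; about {expected_pulses:.0f} pulses on average")
```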

The materials and equipment needs, while different from those of optically controlled spintronics, have features in common. For remote entanglement, a random distribution of centers is used, with one center from each zone chosen because the two match each other. The excitation energies of the two distant centers must stay equal very accurately, and this equality must be stable over time, but it can be monitored. There are some challenges here, since there will be energy shifts when other defect species in either system change charge or spin state (the difficulty is present but less severe for the optical control approach). As for optically controlled spintronics, scale-up requires narrow lines and becomes harder at higher temperatures, though there are ways to reduce the problem. Remote entanglement needs interferometric stability, avoiding problems when the paths from the separate systems see different temperature fluctuations. Again, there are credible strategies to reduce the effects.