
March 02, 2007

Stanford technology predictions partial review

Reviewing the Delta Scan predictions of the Stanford Humanities Lab.
I have already reviewed one of the predictions.

Here are several of the computer-related predictions.

They predict that working prototypes of quantum computers may be demonstrated by 2040.

A working prototype was demonstrated on February 13 of this year. By next year, quantum computers will be well scaled up and commercial.

Nanoscale physical materials that can be automatically assembled into useful configurations by computer instructions could usher in a new era in manufacturing (11-20 years). The prediction is imprecise, because certain DNA nanotechnology already qualifies as assembly via computer instructions. If they are talking about diamondoid molecular manufacturing, then the timeframe is not unreasonable.

Bigelow Aerospace plans L1 and lunar facilities

Bigelow Aerospace is gearing up to launch its second prototype space station into orbit. The company has set its sights on something much, much bigger: a project to assemble full-blown space villages at L1 and then drop them to the lunar surface, ready for immediate move-in.

The next test module, Genesis 2, is due for launch in April – with a larger prototype, known as Galaxy, tentatively scheduled for liftoff next year. Bigelow's plan calls for launching the company's first space "hotel" capable of accommodating guests (or researchers, for that matter) in 2010.


Getting all that right is "Job One," Bigelow told me. But by 2012, the focus could start shifting from low Earth orbit, or LEO, farther out into space. One of the key places in Bigelow's plan is a point about 200,000 miles (323,000 kilometers) out from Earth in the moon's direction, where the pulls of terrestrial and lunar gravity balance each other.


Bigelow would turn that region of space, called L1, into a construction zone. Inflatable modules would be linked up with propulsion/power systems and support structures, and then the completed base would be lowered down to the moon's surface, all in one piece.

Once the moon base has been set down, dirt would be piled on top, using a technique that Bigelow plans to start testing later this year at his Las Vegas headquarters. The moon dirt, more technically known as regolith, would serve to shield the base's occupants from the harsh radiation hitting the lunar surface.

How Numenta will work



Wired has an interview with Jeff Hawkins about how his Numenta artificial intelligence system will work.



Scan and match
1) The system is shown a poor-quality image of a helicopter moving across a screen. It’s read by low-level nodes that each see a 4 x 4-pixel section of the image.
2) The low-level nodes pass the pattern they see up to the next level.
3) Intermediate nodes aggregate input from the low-level nodes to form shapes.
4) The top-level node compares the shapes against a library of objects and selects the best match.

Predict and refine
5) That info is passed back down to the intermediate-level nodes so they can better predict what shape they’ll see next.
6) Data from higher-up nodes allows the bottom nodes to clean up the image by ignoring pixels that don’t match the expected pattern (indicated above by an X). This entire process repeats until the image is crisp.

Dileep George built an original demonstration program, a basic representation of the process used in the human visual cortex. Most modeling programs are linear; they process data and make calculations in one direction. But George designed multiple, parallel layers of nodes — each representing thousands of neurons in cortical columns and each a small program with its own ability to process information, remember patterns, and make predictions.

George and Hawkins called the new technology hierarchical temporal memory, or HTM. An HTM consists of a pyramid of nodes, each encoded with a set of statistical formulas. The whole HTM is pointed at a data set, and the nodes create representations of the world the data describes — whether a series of pictures or the temperature fluctuations of a river. The temporal label reflects the fact that in order to learn, an HTM has to be fed information with a time component — say, pictures moving across a screen or temperatures rising and falling over a week. Just as with the brain, the easiest way for an HTM to learn to identify an object is by recognizing that its elements — the four legs of a dog, the lines of a letter in the alphabet — are consistently found in similar arrangements. Other than that, an HTM is agnostic; it can form a model of just about any set of data it’s exposed to. And, just as your cortex can combine sound with vision to confirm that you are seeing a dog instead of a fox, HTMs can also be hooked together. Most important, Hawkins says, an HTM can do what humans start doing from birth but that computers never have: not just learn, but generalize.
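To make the data flow concrete, here is a minimal, illustrative Python sketch of the hierarchy described above. It is not Numenta's code: the 16 x 16 image size, the exact-match pattern memory, and the best-match vote at the top node are assumptions made only to show how labels flow up the pyramid.

import numpy as np

class Node:
    """Low-level node: memorizes the 4x4 patterns it has seen, reports a compact label upward."""
    def __init__(self):
        self.patterns = []
    def process(self, patch):
        key = tuple(patch.ravel())
        if key not in self.patterns:
            self.patterns.append(key)        # learn a new pattern
        return self.patterns.index(key)      # pass the pattern label up the hierarchy

class TopNode:
    """Top node: compares the vector of labels against a library of known objects."""
    def __init__(self):
        self.library = {}
    def learn(self, name, labels):
        self.library[name] = labels
    def classify(self, labels):
        # pick the library entry that agrees with the incoming labels in the most positions
        return max(self.library,
                   key=lambda n: sum(a == b for a, b in zip(self.library[n], labels)))

def scan(image, nodes):
    """Each of 16 low-level nodes sees one 4x4 section of a 16x16 image (steps 1-4 above)."""
    return [nodes[(i // 4) * 4 + j // 4].process(image[i:i + 4, j:j + 4])
            for i in range(0, 16, 4) for j in range(0, 16, 4)]

nodes, top = [Node() for _ in range(16)], TopNode()
helicopter = np.zeros((16, 16), dtype=int)
helicopter[7:9, 2:14] = 1                     # crude fuselage silhouette
top.learn("helicopter", scan(helicopter, nodes))
print(top.classify(scan(helicopter, nodes)))  # -> helicopter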

An HTM trained to identify helicopters from pictures can be fed images it has never seen before, images of highly distorted helicopters oriented in various directions. To human eyes, each was still easily recognizable. Computers, however, haven’t traditionally been able to handle such deviations from what they’ve been programmed to detect, which is why spambots are foiled by strings of fuzzy letters that humans easily type in. George clicked on a picture, and after a few seconds the program spit out the correct identification: helicopter. It also cleaned up the image, just as our visual cortex does when it turns the messy data arriving from our retinas into clear images in our mind. The HTM even seems to handle optical illusions much like the human cortex. When George showed his HTM a capital A without its central horizontal line, the software filled in the missing information, just as our brains would.

Numenta is being applied to help monitor the sensors for air traffic control and oil platforms. They are seeing good results, with speed improvements over traditional approaches and correct identification of high-risk situations. There is no degradation with more information, as is the case with some AI systems.

Comparing modern skepticism about molecular nanotechnology with an FPGA in 1957

Chris Phoenix has a thought experiment of taking an FPGA chip back in time to 1957. He describes the problems that would be encountered in trying to get the engineers of the day to accept that the FPGA could work, and to believe in it enough to invest the time, money and effort into making the connections and supporting systems to program the FPGA and make it function.

The opinions of skeptics are important in determining the schedule by which new ideas are incorporated into the grand system of technology. It may be the case that molecular manufacturing proposals in the mid-1980's simply could not have hoped to attract serious investment, regardless of how carefully the technical case was presented. An extension of this argument would suggest that molecular manufacturing will only be developed once it is no longer revolutionary. But even if that is the case, technologies that are evolutionary within their field can have revolutionary impacts in other areas.

The IBM PC was only an evolutionary step forward from earlier hobby computers, but it revolutionized the relationship between office workers and computers. Without a forward-looking development program, molecular manufacturing may not be developed until other nanotechnologies are capable of building engineered molecular machines --say, around 2020 or perhaps even 2025. But even at that late date, the simplicity, flexibility, and affordability of molecular manufacturing could be expected to open up revolutionary opportunities in fields from medicine to aerospace. And we expect that, as the possibilities inherent in molecular manufacturing become widely accepted, a targeted development program probably will be started within the next few years, leading to development of basic (but revolutionary) molecular manufacturing not long after.


Photolithography was applied to making the first circuits in 1957


In October 1957, staff at DOFL used photoengraving and photomechanical techniques to construct the first electronic circuit that incorporated nonprepackaged transistors and diodes as integral parts.
The integrated circuit was patented in 1959.
Vacuum tube computers were still the most powerful machines up to 1958

The first integrated circuits contained only a few transistors. Called "Small-Scale Integration" (SSI), they used circuits containing transistors numbering in the tens.

SSI circuits were crucial to early aerospace projects, and vice-versa. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertially-guided flight computers; the Apollo guidance computer led and motivated the integrated-circuit technology, while the Minuteman missile forced it into mass-production.

These programs purchased almost all of the available integrated circuits from 1960 through 1963, and almost alone provided the demand that funded the production improvements to get the production costs from $1000/circuit (in 1960 dollars) to merely $25/circuit (in 1963 dollars). They began to appear in consumer products at the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.

The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "Medium-Scale Integration" (MSI).

They were attractive economically because while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.

Further development, driven by the same economic factors, led to "Large-Scale Integration" (LSI) in the mid 1970s, with tens of thousands of transistors per chip.

As I track the progress of technology on this site, the size of the leap to molecular manufacturing is rapidly becoming a smaller one. The expanding space of DNA nanotechnology, directed self-assembly, synthetic biology, advanced lithography/nanoimprinting and other techniques for fine control at the 10 nanometer level is setting the stage for a fully enabled molecular nanotechnology capability. These are early capabilities in molecular control. I think they are somewhat akin to transistors in the development of computers: they are useful and have some advantages over traditional approaches, but they do not scale up as well as the IC systems did. They are part of the basis for what will become the "ICs" of molecular systems. Molecular systems will reach their promise when we have a few more key processes and methods to enable super-high-volume scaling.

Early molecular nanotechnology will have labs-on-a-chip, microbubble circuitry, graphene membranes, and other nanoscale membranes for supporting system components.

Also, there will be system architectures and societal experience with precursor systems such as advanced fabber and prototyping systems and modular macroscale robots. Nanoparticles and nanomaterials are already being widely adopted. Carbon nanotubes have already made impressive gains and will be moving from niche applications and research into the mainstream over this year and the next three years.

New methods for DNA nanotechnology structures

I think the main benefit of these new methods is to extend the capabilities of DNA nanotechnology systems, alongside DNA origami, Ned Seeman's DNA systems, rotaxane molecular chemistry, other molecular chemistry, laser/electric manipulation, synthetic biology and advanced self-assembly. They are more tools and methods in our rapidly growing molecular manipulation toolbox.

Scientists from Duke University have recently demonstrated a new method for assembling large, low-cost DNA nanostructures, in part by reusing the “sticky-ends,” the broken DNA strands used to connect the nanostructures. In their hierarchical self-assembly method, the scientists have demonstrated one of the largest programmable synthetic nanostructures ever synthesized.


The hierarchical approach to building nanostructures from DNA, beginning with nine oligonucleotides and resulting in an 8 x 8 grid. Credit: Constantin Pistol, et al.

At a molecular weight of 8960 kD, the 64-motif structure is one of the largest programmable synthetic nanostructures ever synthesized. The scientists also predict that this method can be scaled even further before reaching a limit imposed when the generic interactions begin to dominate the process. However, studies in periodic DNA crystal formation suggest that the scale limit is nearly macroscopic.


The first type, the “generic” sticky-end, binds with only one helix instead of the normal two. This binding provides a relatively unstable interaction, which makes it easier to individually program two adjacent grids later on. The second type, the “specific” sticky-end, provides a stronger interaction and can control the weak interactions between the generic sticky-ends.

In one fabrication method, the scientists bound together two 4 x 4 grids, using two generic sticky-ends and one specific sticky-end (the fourth grid arm was left open for identification purposes). The scientists found that the single specific sticky-end could dominate the entire connection, providing a scalable assembly method.

In the second method, Pistol and Dwyer used all specific sticky-ends to connect two 4 x 4 grids. They found that, after the grids were connected, the sticky-ends could be reused in other connections. In this method, the scientists assembled four 4 x 4 grids to produce a 64-motif structure.

In analyzing their structures for defects, Pistol and Dwyer found that missing motifs were common, requiring defect-tolerant designs in future large-scale assemblies. “One of the benefits of DNA nanostructures for computers is device density,” Dwyer said. “The grid has a pitch of 20 nm, and this is about half of the smallest device feature in Intel's latest lithography process. The other benefit is manufacturing scale. Each experiment created a vast number of structures (~10^12 or more) and this holds the promise of more complex and higher performance computers in the future.”
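A quick back-of-the-envelope check of the density point in that quote; the arithmetic is mine and assumes a simple square lattice at the quoted 20 nm pitch.

# Quick arithmetic check of the density figure quoted above (illustrative only).
pitch_nm = 20                                    # DNA grid pitch quoted by Dwyer
sites_per_cm2 = (1e7 / pitch_nm) ** 2            # 1 cm = 1e7 nm; simple square lattice assumed
print(f"~{sites_per_cm2:.1e} motif sites per square centimeter")   # ~2.5e+11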

March 01, 2007

Foolproof Quantum Cryptography

Adding decoy photons to quantum-cryptographic signals should finally make them "unconditionally secure."

Researchers at Toshiba, in Cambridge, U.K., have found a way to plug a security hole that currently limits how far and how fast encryption keys can be distributed using existing quantum-cryptographic systems. The developments could broaden the commercial appeal of "unconditionally secure" quantum key distribution, says Andrew Shields, head of Quantum Information Group at Toshiba Research Europe, who led the research.

Quantum cryptography is currently only used for sending encryption keys between buildings by some banks and government departments. But systems can only guarantee security over relatively short distances. The challenge is to extend the range and increase the speed at which the keys can be sent so that they can be used more widely, says Shields.


Making quantum encryption totally secure will require the use of single-photon pulses. Pictured is a new light-emitting diode capable of generating such pulses.
Credit: Toshiba Research Europe Ltd.


In practice, however, this sort of unconditional security can only really be guaranteed if one's light source emits nothing but single photons. Since this is not the case in current quantum encryption, eavesdropping attacks are possible. In one strategy, an eavesdropper siphons off individual photons; this attack relies on the fact that some pulses will consist of more than one photon, meaning they won't be missed.

To get around this, existing commercial quantum-encryption systems use tricks to reduce the probability that pulses will contain multiple photons. For example, the systems might limit the intensity of each pulse and reduce the bit rate at which they are sent. However, the trade-off is that the weaker a pulse is, the less distance it can travel, while a slower bit rate will limit the speed at which keys can be distributed, says Shields.

Toshiba's solution is to include within the signal what Shields calls "decoy pulses." These pulses are randomly interspersed within the signal and are weaker than the rest of the signal. This means they rarely consist of more than one photon. If an eavesdropper tries blocking single photons while siphoning off multiple photons from the rest of the pulses, more of these decoy pulses will be blocked on average than will the rest of the signal. So by monitoring the proportion of signals to decoy pulses that make it through, it is possible to detect an attack.
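Here is a toy Monte Carlo sketch of that monitoring idea. The pulse intensities and the photon-number-splitting attack model below are my illustrative assumptions, not Toshiba's actual parameters; the point is just that blocking single-photon pulses hits the weak decoy pulses much harder than the signal pulses.

import math, random

def poisson(lam):
    """Simple Poisson sampler (Knuth's method), adequate for small means."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def transmitted_fraction(mean_photons, n_pulses, attack):
    """Fraction of weak laser pulses that reach the receiver."""
    received = 0
    for _ in range(n_pulses):
        photons = poisson(mean_photons)          # photon number in this pulse
        if photons == 0:
            continue                             # empty pulse, nothing arrives
        if attack and photons == 1:
            continue                             # splitting attack: single-photon pulses are blocked
        received += 1                            # multi-photon (or unattacked) pulses get through
    return received / n_pulses

for attack in (False, True):
    signal = transmitted_fraction(0.5, 200_000, attack)   # signal pulses, mean 0.5 photons (assumed)
    decoy = transmitted_fraction(0.1, 200_000, attack)    # weaker decoy pulses, mean 0.1 (assumed)
    print(f"attack={attack}:  signal yield {signal:.3f}  decoy yield {decoy:.3f}  "
          f"decoy/signal ratio {decoy / signal:.2f}")
# Without the attack the two yields sit in the expected ratio; under the attack the decoy
# yield collapses much faster than the signal yield, exposing the eavesdropper.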

New nanocoating breakthrough in non-reflective material

A team of researchers from Rensselaer Polytechnic Institute has created the world’s first material that reflects virtually no light. Reporting in the March issue of Nature Photonics, they describe an optical coating made from the material that enables vastly improved control over the basic properties of light. The research could open the door to much brighter LEDs, more efficient solar cells, and a new class of "smart" light sources that adjust to specific environments, among many other potential applications.


To achieve a very low refractive index, silica nanorods are deposited at an angle of precisely 45 degrees on top of a thin film of aluminum nitride. Credit: Rensselaer/Fred Schubert

Schubert and his coworkers have created a material with a refractive index of 1.05, which is extremely close to the refractive index of air and the lowest ever reported. Window glass, for comparison, has a refractive index of about 1.45.
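For a sense of what that means for reflection, here is the textbook normal-incidence Fresnel formula, R = ((n1 - n2)/(n1 + n2))^2, applied to the quoted indices (air taken as n = 1.0):

# Normal-incidence Fresnel reflectance, a textbook formula, applied to the quoted indices.
def reflectance(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

print(f"air -> window glass (n=1.45):  {reflectance(1.0, 1.45):.2%} reflected")   # ~3.4%
print(f"air -> new coating (n=1.05):   {reflectance(1.0, 1.05):.2%} reflected")   # ~0.06%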

The new optical coating could find use in just about any application where light travels into or out of a material, such as:

-- More efficient solar cells. The new coating could increase the amount of light reaching the active region of a solar cell by several percent, which could have a major impact on its performance. "Conventional coatings are not appropriate for a broad spectral source like the sun," Schubert said. "The sun emits light in the ultraviolet, infrared, and visible spectral range. To use all the energy provided by the sun, we don’t want any energy reflected by the solar cell surface."

-- Brighter LEDs. LEDs are increasingly being used in traffic signals, automotive lighting, and exit signs, because they draw far less electricity and last much longer than conventional fluorescent and incandescent bulbs. But current LEDs are not yet bright enough to replace the standard light bulb. Eliminating reflection could improve the luminance of LEDs, which could accelerate the replacement of conventional light sources by solid-state sources.

-- "Smart" lighting. Not only could improved LEDs provide significant energy savings, they also offer the potential for totally new functionalities. Schubert’s new technique allows for vastly improved control of the basic properties of light, which could allow "smart" light sources to adjust to specific environments. Smart light sources offer the potential to alter human circadian rhythms to match changing work schedules, or to allow an automobile to imperceptibly communicate with the car behind it, according to Schubert.

-- Optical interconnects. For many computing applications, it would be ideal to communicate using photons, as opposed to the electrons that are found in electrical circuits. This is the basis of the burgeoning field of photonics. The new materials could help achieve greater control over light, helping to sustain the photonics revolution, Schubert said.

-- High-reflectance mirrors. The idea of anti-reflection coatings also could be turned on its head, according to Schubert. The ability to precisely control a material’s refractive index could be used to make extremely high-reflectance mirrors, which are used in many optical components including telescopes, optoelectronic devices, and sensors.

-- Black body radiation. The development could also advance fundamental science. A material that reflects no light is known as an ideal "black body." No such material has been available to scientists, until now. Researchers could use an ideal black body to shed light on quantum mechanics, the much-touted theory from physics that explains the inherent "weirdness" of the atomic realm.

More accurate breakdown of energy equivalents

This is a more accurate breakdown of what equals one cubic mile of oil. The IEEE Spectrum comparison compares oil's energy inputs to the other sources' outputs.

The world's annual consumption, one cubic mile of oil, can be replaced by [this includes converting all cars and trucks to electric or PHEV/biofuels, a likely 30-40 year effort with massive global pushes]:
* 700 1.1 GW nuclear plants,
* 1,550 500 MW coal plants,
* 720,000 3 MW wind turbines, or
* maybe 2 billion 2.1 kW solar panels

If we were going to supply a cubic-mile-of-oil equivalent of heat and work from nuclear plants at 33% thermal efficiency (3.3 GW thermal input, 2.2 GW thermal + 1.1 GW electric output) it would take a lot less. If you cranked them for 50 years, a mere 14 1.1 GW plants could supply 771 GW-years of electricity and another 1540 GW-years of low-grade heat, more than satisfying the requirement of 1370 GW-years of heat from oil. Coal would do about the same, but it would take 31 500 MW plants to equal the 14 nukes. Wind has no waste heat stream and couldn't do as well (the energy would have to be all electric), but the possibilities for solar are amazing. Solar heat (for space heat) can be collected for very little, sometimes for free with careful design. Supplying 770 GW-years of electricity from solar PV at 25% capacity factor would require only about 40 million 2.1 kW installations; doing a year's worth per year would require about 2 billion 2.1 kW systems, or about 700 watts per capita.
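A quick reproduction of that arithmetic, using only the figures in the paragraph plus an assumed world population of about 6.5 billion for the per-capita check:

# Reproducing the arithmetic above; all inputs are the paragraph's own figures except
# the ~6.5 billion world population, an assumption used for the per-capita check.
plants, years = 14, 50
gw_electric, gw_heat = 1.1, 2.2                      # per plant: ~3.3 GW thermal at 33% efficiency
print(round(plants * gw_electric * years), "GW-years of electricity")   # 770
print(round(plants * gw_heat * years), "GW-years of low-grade heat")    # 1540
# versus the ~1370 GW-years of heat in one cubic mile of oil

solar_systems, kw_each, population = 2e9, 2.1, 6.5e9
print(round(solar_systems * kw_each / population * 1000), "watts of PV per capita")  # ~650, i.e. "about 700"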

700 watts is about 10 of today's PV panels. The industrial nations could almost afford to give 10 panels to every child at birth, and cost improvements in the pipeline could extend this to much of the world in the next decade or two.


Other analysis

Putting the brakes on laser mirror systems

I have described a bouncing laser mirror system for accelerating space vehicles to very fast speeds. How would braking work?

The braking is done with a receiving laser bounce system at your destination (like Mars)

OR

you would need to carry a good superconducting magnetic sail to slow you down

OR
you have to bring along a drive system to slow you down (like an ion drive, VASIMR or Mini-Mag Orion, where you start slowing down halfway)

OR
you do not go faster than you can brake (aerobraking and airbags, whatever drive you have for braking, etc.)

I would say that you first send over the robotic parts and gear on slower trips: bigger, slower payloads with aerobraking and whatever else you have for braking. Maybe an early package would be the nuclear gas reactors (still to be made, but on the drawing board) and the laser and mirror systems. Send those multi-ton packages over on 96-day or 6-month trips, at whatever speed you can brake from safely. Then the receiving lasers have power, you can do a better job of slowing inbound shipments, and you can start sending things over faster. Go twice as slow and send 4 times as much stuff. Go ten times slower and send 100 times as much stuff. The laser/mirror system is still very efficient in terms of the cost of consumables (mainly just electricity).
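The "twice as slow, four times the stuff" rule is just the scaling you get if you assume the laser/mirror system delivers a roughly fixed amount of kinetic energy (thrust times push distance) regardless of payload, so that (1/2)mv^2 is constant and deliverable mass scales as 1/v^2. A tiny sketch of that assumption:

# The scaling behind "go twice as slow, send 4 times as much stuff".
# Assumption: the laser/mirror system delivers a roughly fixed kinetic energy
# (thrust x acceleration distance) whatever the payload, so (1/2) m v^2 = constant
# and deliverable mass scales as 1 / v^2.
def payload_multiplier(speed_fraction):
    """Extra mass you can send if you accept this fraction of the original speed."""
    return 1.0 / speed_fraction ** 2

for frac in (1.0, 0.5, 0.1):
    print(f"{frac:.0%} of the speed -> {payload_multiplier(frac):.0f}x the payload")
# 100% -> 1x, 50% -> 4x, 10% -> 100x, matching the rule of thumb above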

Ultimately a network of laser systems for accelerating and slowing vehicles would be needed.

Path to detailed wiring diagram of the brain

Researchers at the Salk Institute for Biological Studies have jumped what many believe to be a major hurdle to preparing a detailed wiring diagram of the brain: identifying all of the connections to a single neuron. The researchers describe how they modified the deadly rabies virus, turning it into a tool that can cross the synaptic space of a targeted nerve cell just once to identify all the neurons to which it is directly connected.

Viruses that naturally spread between neurons have previously been used to outline the flow of nerve cell communication, but they have two drawbacks. First, once inside the brain, they keep spreading from cell to cell without stopping. Second, they cross different synapses – the specialized junctions between nerve cells - at different rates, crossing bigger, stronger synapses faster than smaller, weaker ones. Together these attributes make these viruses unable to determine exactly which cells are connected to which. The team of Salk researchers sought to create a modified virus whose spread could be limited to a single synaptic connection.

“The core idea is to use a virus that is missing a gene required for spreading across synapses but to provide the missing gene by some other means within the initially infected cells,” says Ian Wickersham, Ph.D., postdoctoral researcher and lead author on the project.

With the critical gene deleted from its genome, the virus is marooned inside a cell, unable to spread beyond it. However, supplying the missing gene in that same cell allows the virus to spread to cells that are directly connected to it. Since these neighboring cells lack the gene supplied in the first cell, the virus is stuck. Only the cells connected directly to the original cell are labeled.

“You need two genes expressed in the cell or cell type of interest: TVA, to get the rabies virus in, and the missing viral gene so the virus can spread to connected cells,” says Wickersham.

They experimented on a neonatal rat brain, and the result was spectacular: as expected, the red-labeled target cells were selectively infected by the virus, which then spread to hundreds of surrounding cells, turning them brilliantly fluorescent green. Once scientists can identify a neural circuit, they can then deactivate it and test for changes in brain function.

February 28, 2007

One-atom-thick graphene membranes

Researchers have used the world's thinnest material to create a new type of technology, which could be used to make super-fast electronic components and speed up the development of drugs. It's believed this super-small graphene structure can be used to sieve gases, make ultra-fast electronic switches and image individual molecules with unprecedented accuracy.

Now an international research team, led by Dr Jannik Meyer of The Max-Planck Institute in Germany and Professor Andre Geim of The University of Manchester has managed to make free-hanging graphene.

The team used a combination of microfabrication techniques used, for example, in the manufacturing of microprocessors.

A metallic scaffold was placed on top of a sheet of graphene, which was placed on a silicon chip. The chip was then dissolved in acids, leaving the graphene hanging freely in air or a vacuum from the scaffold.

The resulting membranes are the thinnest material possible and maintain a remarkably high quality.

Professor Geim – who works in the School of Physics and Astronomy at The University of Manchester – and his fellow researchers have also found the reason for the stability of such atomically-thin materials, which were previously presumed to be impossible.

They report that graphene is not perfectly flat but instead gently crumpled out of plane, which helps stabilise otherwise intrinsically unstable ultra-thin matter.

Professor Geim and his colleagues believe that the membranes they have created can be used like sieves, to filter light gases through the atomic mesh of the chicken wire structure, or to make miniature electro-mechanical switches.

It's also thought it may be possible to use them as a non-obscuring support for electron microscopy to study individual molecules.

This has significant implications for the development of medical drugs, as it will potentially allow the rapid analysis of the atomic structures of bio-active complex molecules.


"We have made proof-of-concept devices and believe the technology transfer to other areas should be straightforward. However, the real challenge is to make such membranes cheap and readily available for large-scale applications.

Smallest transistor made from graphene

Professor Andre Geim and Dr Kostya Novoselov from The School of Physics and Astronomy at The University of Manchester, reveal details of transistors that are only one atom thick and less than 50 atoms wide, in the March issue of Nature Materials.

Professor Geim and colleagues have shown for the first time that graphene remains highly stable and conductive even when it is cut into strips of only a few nanometres wide.

All other known materials – including silicon – oxidise, decompose and become unstable at sizes ten times larger.

The research team suggests that future electronic circuits can be carved out of a single graphene sheet. Such circuits would include the central element or 'quantum dot', semitransparent barriers to control movements of individual electrons, interconnects and logic gates – all made entirely of graphene.

Geim's team have proved this idea by making a number of single-electron-transistor devices that work under ambient conditions and show a high-quality transistor action.

"At the present time no technology can cut individual elements with nanometre precision. We have to rely on chance by narrowing our ribbons to a few nanometres in width," says Dr Leonid Ponomarenko, who is leading this research at The University of Manchester. "Some of them were too wide and did not work properly whereas others were over-cut and broken."

But Dr Ponomarenko is optimistic that this proof-of-concept technique can be scaled up. "The next logical step is true nanometre-sized circuits and this is where graphene can come into play because it remains stable – unlike silicon or other materials – even at these dimensions."

Professor Geim does not expect that graphene-based circuits will come of age before 2025. Until then, silicon technology should remain dominant.

Pioneers of environmentalism now support nuclear energy as part of the solution

Patrick Moore, co-founder of Greenpeace, is chairman and chief scientist of Greenspirit Strategies in Vancouver, British Columbia, and is co-chair of an industry-funded initiative, the Clean and Safe Energy Coalition, which supports increased use of nuclear energy. He indicates that nuclear energy should be part of the environmental solution and that coal and fossil fuel are the real enemies.

Stewart Brand is another early environmentalist who has embraced environmental heresies

Worldchanging.com is also part of the more pragmatic, realistic position based on science and business realities

Nanoparticle light interface to nerve cells

Nanoparticle integration with nerve cells and neurons

Although light signals have previously been transmitted to nerve cells using silicon (whose ability to turn light into electricity is employed in solar cells and in the imaging sensors of video cameras), nanoengineered materials promise far greater efficiency and versatility.

"It should be possible for us to tune the electrical characteristics of these nanoparticle films to get properties like color sensitivity and differential stimulation, the sort of things you want if you're trying to make an artificial retina, which is one of the ultimate goals of this project," Pappas said. "You can't do that with silicon. Plus, silicon is a bulk material — silicon devices are much less size-compatible with cells."

The researchers caution that despite the great potential of a light-sensitive nanoparticle-neuron interface, creating an actual implantable artificial retina is a long-range project. But they're equally hopeful about a variety of other, less complex applications made possible by a tiny, versatile light-activated interface with nerve cells — such things as new ways to connect with artificial limbs and other prostheses, and revolutionary new tools for imaging, diagnosis and therapy.

"The beauty of this achievement is that these materials can be remotely activated without having to use wires to connect them. All you have to do is deliver light to the material," said Professor Massoud Motamedi, director of UTMB's Center for Biomedical Engineering and a co-author of the paper. "This type of technology has the ability to provide non-invasive connections between the human nervous system and prostheses and instruments that are unprecedented in their flexibility, compactness and reliability," Motamedi continued. "I feel that such nanotools are going to give the fields of medicine and biology brand-new capabilities that it's hard to even imagine now."

February 27, 2007

New coal power plants in British Columbia to emit no carbon dioxide

I think this is great. Coal kills a lot of people. Forcing coal plants to be cleaner is good. They will still be worse than anything else, but at least not insanely deadly. See my other coal and nuclear energy articles for the details.

British Columbia has made a law that carbon must be captured from coal plants. Some energy experts say that meeting the policy, which states that coal plants must capture and sequester their carbon dioxide, effectively mandates the use of a cleaner but more costly coal gasification technology called Integrated Gasification Combined Cycle (IGCC).

Major utilities and technology providers in the United States say that IGCC technology is ready for commercial use. According to the National Energy Technology Laboratory, in Pittsburgh, IGCC is the technology selected for one-fifth of 159 new coal plants proposed since 2000. But so far, systems for capturing carbon dioxide from such power plants have not been engineered. And of the 32 proposed IGCC plants, only a handful are moving forward.

What is slowing the transition away from conventional pulverized-coal technology is IGCC's higher up-front cost. General Electric, which is providing the designs for the IGCC project that is now the farthest along, estimates that the first 10 will cost at least 10 to 15 percent more to build than a pulverized-coal plant. Other experts estimate that the cost premium could be much higher. That has made IGCC a tough sell, even though it is cleaner, emitting levels of smog-producing NOx and sulfur dioxide closer to those of a natural gas-fired power plant.

Better understanding of how to modify high-temperature superconductors

New insights into modifying high-temperature superconductors show how adjusting the pressure and substituting isotopes affect the critical temperature.

The results also suggest that vibrations (called phonons), within the lattice structure of these materials, are essential to their superconductivity by binding electrons in pairs.

A proposed theoretical approach for achieving room temperature superconductors is based on getting the phonon (vibration) structure right.

Three near-term paths to increasing intelligence

From Stephen Kosslyn, Harvard Psychology, three relatively near-term paths to increasing intelligence:

Cognitive neuroscience and related fields have identified a host of distinct neural systems in the human brain. Different combinations of these systems are used in the service of accomplishing different tasks, and each system can be made more efficient by "targeted training." Such training involves having people perform tasks that are designed to exercise very specific abilities, which grow out of distinct neural networks.

Understanding the nature of group problem solving will increase human intelligence.

Increasingly powerful and accessible machines and knowledge will help us extend our intelligence: richer, faster and better Wikipedia and Google, as well as better interfaces.

Superbot cube robots are proof of nano building block concepts

Molecular manufacturing has a building-block architecture concept.
Recent SuperBot work shows that modular building blocks can be highly functional.

Wei-Min Shen of the University of Southern California's Information Sciences Institute recently reported to NASA significant progress in developing "SuperBot," identical modular units that plug into each other to create robots that can stand, crawl, wiggle and even roll. He illustrated his comments with striking video of the system in action, video now posted online.



"Superbot consists of Lego-like but autonomous robotic modules that can reconfigure into different systems for different tasks. Examples of configurable systems include rolling tracks or wheels (for efficient travel), spiders or centipedes (for climbing), snakes (for burrowing in ground), long arms (for inspection and repair in space), and devices that can fly in micro-gravity environment.

"Each module is a complete robotic system and has a power supply,
micro- controllers, sensors, communication, three degrees of freedom,
and six connecting faces (front, back, left, right, up and down) to
dynamically connect to other modules.
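As a rough illustration of that modular architecture (not USC's actual SuperBot software; the class and connection logic below are my own simplification), each module can be thought of as a small object with three joint angles and six dockable faces:

# Illustrative data-structure sketch only; this is not USC's SuperBot software.
FACES = ("front", "back", "left", "right", "up", "down")

class Module:
    """One module: its own controller state, three joint angles, six dockable faces."""
    def __init__(self, module_id):
        self.id = module_id
        self.joint_angles = [0.0, 0.0, 0.0]        # three degrees of freedom
        self.neighbors = dict.fromkeys(FACES)      # six connecting faces, initially empty

    def connect(self, face, other, other_face):
        """Dynamically dock another module onto one of the six faces."""
        self.neighbors[face] = other
        other.neighbors[other_face] = self

# Usage: chain three modules front-to-back into a crude "snake" configuration.
snake = [Module(i) for i in range(3)]
for a, b in zip(snake, snake[1:]):
    a.connect("back", b, "front")
print([m.id for m in snake[1].neighbors.values() if m is not None])   # -> [0, 2], its two neighbors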

Nanotech to help the poor?

How will NNI-style nanotech or later nanotech be used to help the poor?

"Nanotechnology has the potential to generate enormous health benefits for the more than five billion people living in the developing world," according to Dr. Peter A. Singer, senior scientist at the McLaughlin-Rotman Centre for Global Health and Professor of Medicine at University of Toronto. "Nanotechnology might provide less-industrialized countries with powerful new tools for diagnosing and treating disease, and might increase the availability of clean water."

February 26, 2007

Tabletop X-ray lasers near

Tabletop X-ray lasers could improve imaging resolution by 1000 times.

The X-rays we get in the hospital are limited by spatial resolution. They can't detect really small cancers because the X-ray source in your doctor's office is like a light bulb, not like a laser. If you had a bright, laser-like X-ray beam, you could image with far higher resolution. This system will impact medicine, biology and nanotechnology.

To generate laser-like X-ray beams, the team used a powerful laser to pluck an electron from an atom of argon, a highly stable chemical element, and then slam it back into the same atom. The boomerang action generates a weak, but directed beam of X-rays.

The obstacle they needed to hurdle was combining different X-ray waves emitted from a large number of atoms to generate an X-ray beam bright enough to be useful, according to Kapteyn. In other words, they needed to generate big enough waves flowing together to make a strong X-ray.

The researchers sent some weak pulses of visible laser light into the gas in the opposite direction of the laser beam generating the X-rays. The weak laser beam manipulates the electrons plucked from the argon atoms, whose emissions are out of sync with the main beam, and then slams them back into the atoms to generate X-rays at just the right time, intensifying the strength of the beam by over a hundred times.

New glider sub can go three times deeper than military submarines

Deepglider, a 71-inch-long, 138-pound device made of carbon fiber, can dive to 9,000 feet (2,700 meters). The energy-efficient, battery-powered glider carries sensors to measure oceanic conditions including salinity and temperature -- information that is key to understanding climate change. It can also stay out at sea for up to a year.

Boeing assembled the 4-foot hull on the same carbon-fiber machine used to mock up the fuselage barrels for the 787.

Using 67 kilowatt solid state lasers for Mars in 10 days

A proof-of-concept demonstration has been made of a photonic laser propulsion system that uses mirrors to bounce laser light and multiply the effectiveness of lasers.



UPDATE: New article on how to stop the vehicle at the destination

A 67 kilowatt solid state laser has been achieved



The Solid State Heat Capacity Laser (SSHCL) has achieved 67 kilowatts (kW) of average power in the laboratory.

It could take only a further six to eight months to break the "magic" 100kW mark required for the battlefield, the project's chief scientist told the BBC. Hitting 67kW, said SSHCL programme manager Bob Yamamoto, meant 100kW was now within reach.

SSHCL uses an array of many diodes - not dissimilar to the LEDs used in bicycle lights and remote controls - to generate a beam. SSHCL generates a pulsed beam which fires 200 times a second at a wavelength of one micron. However, other experts place more stock in a continuous wave (CW), or "always-on", beam format.

One of the biggest hurdles to surmount for solid state lasers is achieving a sufficient beam quality. This is a measure of how tightly a laser beam can be focused under certain conditions.

Dr Yamamoto said improving the beam quality was one of the current goals for his team. The Livermore group is one of a number working on solid state lasers and is looking for further funding.

In 2005, Massachusetts-based Textron Systems won a $40m grant from the US Department of Defense to build a 100kW laser by 2009.

I am less interested in the weapon aspect than in the potential for laser launching systems for space applications.

The proof-of-concept photonic laser propulsion system, using mirrors to bounce laser light and multiply the effectiveness of lasers, generated 35 micronewtons of thrust using low-wattage lasers and 3000 bounces.

It has been proposed that extremely small payloads (10 kg) could be delivered to Mars in only 10 days of travel time using laser-based lightsail craft (Meyer, 1984), but doing so would require a 47 GW laser system.

One thousand 100 kilowatt laser modules and 2000 bounces would be equal to a 200 gigawatt laser. This would be about 4 times the 10 kg system and could deliver 40 kg payloads to Mars in ten days. Ten thousand modules would allow for 400 kg payloads to Mars in ten days.

A twenty tonne vehicle could be sent to Mars in 96 days using two 1 GW laser sources and 1000 reflections. The sail would have an areal density of 10 g/m^2. For a sail with a 1000 m diameter, the resulting sail weight would be 7850 kg, within the weight budget of the 20 tonne lightsail craft. The example has a lightsail with a mass of 10 tonnes carrying 10 tonnes of cargo.

An equivalent modular system would need ten thousand 100 kilowatt laser modules and 2000 reflections.
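Checking that module arithmetic: the only relation used here is effective power = modules x power per module x bounces, with payload scaled linearly from the 47 GW / 10 kg Meyer baseline (that linear scaling is my simplifying assumption).

# Checking the module arithmetic.  The only relation used is
#   effective power = modules x power-per-module x bounces,
# with payload scaled linearly from the 47 GW / 10 kg Meyer baseline
# (that linear scaling is a simplifying assumption of this sketch).
def effective_gw(modules, kw_per_module, bounces):
    return modules * kw_per_module * 1e3 * bounces / 1e9

baseline_gw, baseline_kg = 47.0, 10.0            # Meyer (1984): 10 kg to Mars in 10 days

for modules in (1_000, 10_000):
    gw = effective_gw(modules, 100, 2000)
    print(f"{modules:>6} modules -> {gw:>6.0f} GW effective "
          f"(~{gw / baseline_gw:.0f}x the 47 GW baseline, so ~{baseline_kg * gw / baseline_gw:.0f} kg in 10 days)")

# The 96-day, 20-tonne case: 10,000 modules x 100 kW x 2000 bounces matches
# two 1 GW sources x 1000 reflections (both ~2000 GW effective).
print(effective_gw(10_000, 100, 2000), "GW  vs ", 2 * 1.0 * 1000, "GW")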

The deployment of large sails can be done using magnetically inflated cables

Modular laser launch systems are described here

The infrastructure for many thousands of laser modules is substantial but not impossible, in the range of several billion dollars. The cost of the electricity for each launch to Mars is E = P*t/h, where P is the laser power of 1x10^9 W, the total time the lasers run, t, is 20 hours, and h is the wall-plug efficiency of the laser, which for purposes of this example will be assumed to be 25%. Under these conditions the total energy requirement becomes 8x10^7 kW-hr. The cost of producing electricity in the US is currently on the order of $0.03/kW-hr, which would result in a total power cost of $2.4 million, or only $240/kg for the delivered cargo.
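That works out as a straightforward plug-in of E = P*t/h:

# Plugging the paragraph's own numbers into E = P * t / h.
P_watts = 1e9           # laser power, W
t_hours = 20            # total time the lasers run
h = 0.25                # wall-plug efficiency
dollars_per_kwh = 0.03  # US electricity cost assumed in the example
cargo_kg = 10_000       # 10 tonnes of delivered cargo on the 20-tonne craft

energy_kwh = P_watts * t_hours / h / 1e3        # 8e7 kW-hr
cost = energy_kwh * dollars_per_kwh             # $2.4 million
print(f"{energy_kwh:.1e} kW-hr, ${cost:,.0f} total, ${cost / cargo_kg:.0f}/kg of cargo")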

Advanced civilizations immune to natural asteroids

I have written about using space rocks as weapons.

Robert A. Metzger has performed a calculation showing that a civilization with access to 100 to 1000 times the energy that we have now (possible with advanced nanotechnology) could build 1000 big particle accelerators and, over a year, slow planet Earth down enough to dodge the incoming asteroid.

Of course, an advanced adversary using a space rock as a weapon could make some adjustments, but being able to dodge does raise the bar. Plus, it shows that a really advanced civilization with a lot of power can handle any size of natural asteroid if it spots it two years in advance.

February 25, 2007

The most important quantum computer algorithm

This paper describes the Abrams-Lloyd algorithm, which is at the heart of most quantum simulation algorithms. Being able to implement it would massively speed up molecular modeling for the development of molecular manufacturing.

Geordie Rose, CTO of Dwave, indicates: “In contrast to factoring, quantum simulation is both extremely commercially valuable (a large fraction of the world’s supercomputer cycles are currently spent solving the Schrodinger equation) and offers the possibility for huge social good (new clean energy sources, better medicines, cleaner chemical plants, etc.).”

An explanation of the Shor quantum computer algorithm with minimal math

Scott Aaronson, quantum computer expert, explains the most famous quantum computer algorithm using minimal math.

1. Find a property that is shared by all possible answers and that can be compared.
For Shor's algorithm, it is the period of the sequence
x mod N, x^2 mod N, x^3 mod N, x^4 mod N, … (a periodicity discovered by Euler in the 1760s)

2. We could create an enormous quantum superposition over all the numbers in our sequence: x mod N, x^2 mod N, x^3 mod N, etc. Then maybe there’s some quantum operation we could perform on that superposition that would reveal the period.

With this approach, we’re no longer trying to find a needle in an exponentially-large haystack, something we know is hard even for a quantum computer. Instead, we’re now trying to find the period of a sequence, which is a global property of all the numbers in the sequence taken together.


3. If we want to get this period-finding idea to work, we’ll have to answer two questions:

A. Using a quantum computer, can we quickly create a superposition over x mod N, x^2 mod N, x^3 mod N, and so on?

We can certainly create a superposition over all integers r, from 1 up to N or so. The trouble is, given an r, how do we quickly compute x^r mod N? The answer is to use repeated squaring.
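Repeated squaring is a standard classical trick: it computes x^r mod N in about log2(r) multiplications instead of r. A short sketch (Python's built-in pow(x, r, N) does the same thing):

# Repeated squaring: computes x^r mod N in about log2(r) multiplications instead of r.
def pow_mod(x, r, N):
    result, base = 1, x % N
    while r > 0:
        if r & 1:                          # if the current bit of r is set,
            result = result * base % N     # fold this power of x into the result
        base = base * base % N             # square the base for the next bit
        r >>= 1
    return result

print(pow_mod(7, 2**20 + 3, 15), pow(7, 2**20 + 3, 15))   # both print 13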

B. Supposing we did create such a superposition, how would we figure out the period?
To get the period out, Shor uses something called the quantum Fourier transform, or QFT.
The QFT converts the sequence into amplitudes (vectors, lines with length and direction) that can be compared. In a quantum computer, the amplitudes corresponding to wrong answers interfere destructively and cancel each other out, leaving the right answer, provided the chosen property really does single out the right answer.
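To see why the period is worth having, here is the classical end of Shor's algorithm on a toy example. The brute-force period search below is the step the QFT does exponentially faster; once r is known, the factors fall out of a couple of gcd computations.

from math import gcd

def find_period(x, N):
    """Brute-force the period r of x^a mod N (the step the QFT does exponentially faster)."""
    a, value = 1, x % N
    while value != 1:
        a, value = a + 1, value * x % N
    return a

N, x = 15, 7
r = find_period(x, N)                 # r = 4: the sequence 7, 4, 13, 1 repeats mod 15
p = gcd(pow(x, r // 2) - 1, N)        # gcd(7^2 - 1, 15) = gcd(48, 15) = 3
q = gcd(pow(x, r // 2) + 1, N)        # gcd(7^2 + 1, 15) = gcd(50, 15) = 5
print(r, p, q)                        # 4 3 5  ->  15 = 3 x 5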

Geordie Rose talks about his take on Shor's algorithm and why he thinks it is not the most important quantum algorithm.

A tutorial on Fourier Transforms

A guide to online tutorials on Fourier transforms with star ratings

Dwave Demo recap

The official recap of the Dwave demo is at the Dwave blog. Dwave has had over 100 respondents with ideas for applications to date, about one third of which are from Fortune 1000 companies. So if they achieve the speedups and targets that they plan to achieve, they should have very solid business success.