
February 20, 2008

Nanodynamics cancels another IPO

Nanodynamics pulls another stock listing. This time Nanodynamics pulled its stock from listing on the Dubai exchange.

Graphene is one of the best conductors of heat

Carbon nanotubes have a typical thermal conductivity in the range of 3000 to 3500 W/m·K (watts per meter per kelvin). Diamond, another form of carbon, comes in between 1000 and 2200 W/m·K. The single-layer graphene studied by the UCR researchers displayed a thermal conductivity as high as 5300 W/m·K near room temperature. The thermal conductivity of silicon, the most important electronic material, is around 145 W/m·K at room temperature, which makes graphene roughly 36.5 times better than silicon at conducting heat.
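
As a quick sanity check on that ratio, here is the arithmetic using the figures quoted above (treat them as approximate, since reported thermal conductivities vary by sample and measurement method):

```python
# Approximate room-temperature thermal conductivities quoted above, in W/(m*K).
conductivity = {
    "single-layer graphene": 5300,
    "carbon nanotube": 3000,   # low end of the 3000-3500 range
    "diamond": 2200,           # high end of the 1000-2200 range
    "silicon": 145,
}

silicon = conductivity["silicon"]
for material, k in conductivity.items():
    print(f"{material}: {k} W/(m*K), about {k / silicon:.1f}x silicon")
```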

New Space update

SpaceX Dragon in cargo configuration.

Space Exploration Technologies Corp. (SpaceX) has completed the Preliminary Design Review (PDR) for the second Falcon 9 / Dragon demonstration under NASA’s Commercial Orbital Transportation Services (COTS) project.

During this second and much longer demonstration, the uncrewed Dragon spacecraft will approach within 10 kilometers of the ISS and hold its position. The primary objective of the four-day mission is to demonstrate Dragon's communication and control links with the ISS. The three missions planned for the next 18 months are:


Demo 1 (Q3 2008, 5 hours): Launch and separate from Falcon 9, orbit Earth, transmit telemetry, receive commands, demonstrate orbital maneuvering and thermal control, re-enter the atmosphere, and recover the Dragon capsule.

Demo 2 (Q2 2009, 5 days): Full, long-duration system check-out, beginning with an ISS rendezvous simulation using the Falcon 9 upper stage. Dragon will perform approach, rendezvous, and breakaway operations with the stage.

Demo 3 (Q3 2009, 3 days): Full cargo mission profile, including mating to the ISS, with an empty capsule.




Dragon in crew configuration

Although these demonstrations are for cargo re-supply, SpaceX designed the Dragon spacecraft to transport up to seven astronauts to Earth orbit and back. “We have made substantial progress and are confident we can address the gap between Shuttle retirement and Orion operations,” said Gwynne Shotwell, SpaceX VP of Business Development. “We look forward to advancing with the crew-carrying Dragon configuration for NASA should they give the go-ahead.”



Dragon docked to the ISS

Bigelow Aerospace and Lockheed Martin Commercial Launch Services are engaged in discussions and converging on terms to supply Atlas V launch vehicles to provide crew and cargo transportation services to a Bigelow-built space complex.


Bigelow Aerospace's Genesis II

Genesis II was successfully launched from the Kosmotras Space and Missile Complex near the town of Yasny on June 28, 2007.

During the operational phase, which is currently planned to begin in 2012, up to 12 missions per year are envisioned, increasing as demand dictates.


FURTHER READING
Hobbyspace timeline for new space developments from 2008-2020. The Hobbyspace predictions look pretty reasonable to me. Hopefully some breakthroughs in fusion and nanomaterials will help make the predictions seem tame.

There will be winners in both the Lunar Lander and Beamed Power Centennial Challenges in 2008.

In the first year of operation, starting late 2009 or early 2010, Virgin Galactic and other suborbital space tourist companies will take in ~$30M to $50M in revenue by flying a few hundred space tourists. There will be steady growth in revenue and the number of passengers in subsequent years.

In late 2010, the Falcon 9/Dragon makes its first cargo flight to the ISS. Crew operations begin by late 2011.

In 2010 Bigelow Aerospace launches the Sundancer space habitat, which can hold a crew of three.

The V-Prize competition for a point-to-point spaceflight demonstration between Virginia and Europe opens in 2009 with a four-year time limit and is won by 2013.

By 2015, Bigelow has 3 complexes in orbit, each consisting of at least two of the big BA-330 modules. Long-term contracts with one, possibly two, launch companies provide for a flight with crew, passengers, and cargo to each station at least once a month.

The Bigelow module complexes begin to form the nuclei of genuine long term space settlements.

2015-2020: Orbital tourism expands significantly when trips to the Bigelow Aerospace space hotel become available via commercial services that offer transport ticket prices in the $2M-$4M range. Several thousand people per year are flying on suborbital spaceflights, with prices down to a few tens of thousands of dollars.


Lasers could be used to scan for a broad range of disease and health biomarkers


Laser light can be used to detect molecules in breath that may be markers for diseases like asthma or cancer.

Although it has yet to be tested in clinical trials, a new apparatus may allow doctors to screen people for certain diseases simply by sampling their breath, according to JILA, a joint institute of the National Institute of Standards and Technology (NIST) and the University of Colorado (CU-Boulder).

Known as optical frequency comb spectroscopy, the method is powerful enough to sort through all the molecules in human breath and sensitive enough to distinguish rare molecules that may be biomarkers for specific diseases, said JILA physicist Jun Ye.


When many breath molecules are detected simultaneously, highly reliable, disease-specific information can be collected. Asthma, for example, can be detected much more reliably when carbonyl sulfide, carbon monoxide and hydrogen peroxide are all detected simultaneously with nitric oxide.

While current breath analysis using biomarkers is a noninvasive and low-cost procedure, existing approaches are limited because the equipment is either not selective enough to detect a diverse set of rare biomarkers or not sensitive enough to detect the trace amounts of particular molecules exhaled in human breath.

"The new technique has the potential to be low-cost, rapid and reliable, and is sensitive enough to detect a much wider array of biomarkers all at once for a diverse set of diseases," Ye said.

To test the technology, Ye's team had several CU-Boulder student volunteers breathe into an optical cavity -- a space between two curved mirrors -- and then directed sets of ultrafast laser pulses into the cavity. As the light pulses ricocheted around the cavity tens of thousands of times (covering a distance of several kilometers before exiting the cavity), the researchers determined which frequencies of light were absorbed and by how much, indicating which molecules were present and in what quantities.

The remarkable combination of the broad spectral coverage of the entire comb and the sharp spectral resolution of individual comb lines allows them to sensitively identify many different molecules, Ye said. They detected trace signatures of gases like ammonia, carbon monoxide and methane in the volunteers' samples. In one measurement, they detected carbon monoxide levels in a student smoker that were five times higher than in a nonsmoking student.
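
The sensitivity comes from the effective path length: if the pulses make tens of thousands of passes through a cavity a few tens of centimeters long, the light travels kilometers through the breath sample, and by the Beer-Lambert law the fraction of light absorbed by a trace gas grows with that path length. A minimal sketch of the scaling, with made-up cavity and molecular parameters (these are illustrative assumptions, not the actual JILA numbers):

```python
import math

# Illustrative assumptions -- not the actual JILA cavity or molecular parameters.
cavity_length_m = 0.5        # mirror-to-mirror distance
passes = 20_000              # "tens of thousands" of traversals
effective_path_m = cavity_length_m * passes   # ~10 km of effective path

sigma_cm2 = 1e-21            # assumed cross-section of a weak near-infrared absorption line
trace_density_cm3 = 2.5e19 * 1e-6   # ~1 ppm of a biomarker at atmospheric density

def fraction_absorbed(path_m):
    """Beer-Lambert law: I/I0 = exp(-sigma * n * L), with L converted to cm."""
    return 1 - math.exp(-sigma_cm2 * trace_density_cm3 * path_m * 100)

print(f"single pass ({cavity_length_m} m): {fraction_absorbed(cavity_length_m):.2e} absorbed")
print(f"with cavity ({effective_path_m / 1000:.0f} km): {fraction_absorbed(effective_path_m):.2e} absorbed")
```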

There is a podcast on this research.

The University of Colorado podcast list is here.

Possibly over two billion Earths in the Milky Way


(Hat tip: Bad Astronomy.) About 1% of the stars in the Milky Way could have Earth-like planets (smaller, rocky planets). Another study shows the Milky Way is twice as big as previously thought. So instead of 100 billion stars there could be 200 billion, and 1% of those, or two billion, could have Earth-like worlds around Sun-like stars. Centauri Dreams also has coverage of the Earth-like worlds study.

A Spitzer Space Telescope study found that 10-20% of young stars had these disks of dusty debris around them. As it happens, about 10% of the stars in the Milky Way can be categorized as sun-like, which is about 10 billion stars. If 10% of them have rocky planets, as this study indicates, then there may be a billion Earths orbiting stars in our galaxy alone! And that’s only for stars like the Sun; lower mass stars also can form planetary systems, and there are far more of them than stars like the Sun. It is entirely possible that there are many billions of terrestrial planets in the galaxy… and there are hundreds of billions of galaxies in the Universe.
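
The arithmetic behind both headline numbers is straightforward; the star counts and percentages below are just the assumptions quoted above, not new data:

```python
# Order-of-magnitude estimates quoted above.
stars_previous = 100e9     # older estimate of stars in the Milky Way
stars_doubled = 200e9      # if the galaxy holds twice as many stars

# Estimate 1: ~1% of all stars host Earth-like (smaller, rocky) planets.
print(f"1% of {stars_doubled:.0e} stars -> {0.01 * stars_doubled:.0e} Earth-like worlds")

# Estimate 2 (Spitzer study): ~10% of stars are Sun-like, and ~10% of those have rocky planets.
sun_like = 0.10 * stars_previous
print(f"10% of {sun_like:.0e} Sun-like stars -> {0.10 * sun_like:.0e} Earths")
```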

Up to 62 percent of the surveyed stars have formed, or may be forming, planets. The correct answer probably lies somewhere between the pessimistic case of less than 20 percent and the optimistic case of more than 60 percent.


In separate but related news, the Milky Way galaxy is twice as big as previously thought.

Astrophysicist Professor Bryan Gaensler led a team that has found that our galaxy - a flattened spiral about 100,000 light years across - is 12,000 light years thick, not the 6,000 light years that had been previously thought.



The University of Sydney team's analysis differs from previous calculations because they were more discerning with their data selection. "We used data from pulsars: stars that flash with a regular pulse," Professor Gaensler explains. "As light from these pulsars travels to us, it interacts with electrons scattered between the stars (the Warm Ionised Medium, or WIM), which slows the light down.

"If you know the distance to the pulsar accurately, then you can work out how dense the WIM is and where it stops - in other words where the Galaxy's edge is.

"Of the thousands of pulsars known in and around our Galaxy, only about 60 have really well known distances. But to measure the thickness of the Milky Way we need to focus only on those that are sitting above or below the main part of the Galaxy; it turns out that pulsars embedded in the main disk of the Milky Way don't give us useful information."

Choosing only the pulsars well above or below us cuts the number of measurements by a factor of three, but it is precisely this rejection of data points that makes The University of Sydney's analysis different from previous work.



McKinsey Global Institute's energy efficiency plan


The McKinsey Global Institute has an energy productivity plan (36 pages). An additional $170 billion per year invested in energy efficiency could provide a 17% average internal rate of return and cut projected energy demand growth in half by 2020, using existing technologies that pay for themselves. It would provide up to half of the global greenhouse gas (GHG) avoidance needed to reach a long-term level of 550 ppm of GHG in the atmosphere. This would reduce energy demand in 2020 by 135 quadrillion BTU, the equivalent of 64 million barrels of oil per day. Instead of needing 613 quadrillion BTU in 2020, the world would need 478 quadrillion BTU. (In 2003, the world used 422 quadrillion BTU.)
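
The quoted oil-equivalent figure is consistent with the BTU numbers: 135 quadrillion BTU of avoided annual demand works out to roughly 64 million barrels of oil per day, assuming the usual conversion of about 5.8 million BTU per barrel. A rough cross-check (the conversion factor is a standard assumption, not a figure from the McKinsey report):

```python
# Cross-check: avoided energy demand in 2020 vs. the oil-equivalent figure quoted above.
demand_baseline_qbtu = 613      # projected 2020 demand, quadrillion BTU
demand_with_plan_qbtu = 478     # demand if the efficiency measures are taken
btu_per_barrel = 5.8e6          # standard approximate conversion, ~5.8 million BTU per barrel of oil

avoided_btu_per_year = (demand_baseline_qbtu - demand_with_plan_qbtu) * 1e15
barrels_per_day = avoided_btu_per_year / btu_per_barrel / 365

print(f"avoided demand: {demand_baseline_qbtu - demand_with_plan_qbtu} quadrillion BTU/year")
print(f"oil equivalent: {barrels_per_day / 1e6:.0f} million barrels per day")
```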



They recommend four steps:
1. Set energy efficiency standards for appliances and equipment
This follows on the US success with energy efficient appliances like refrigerators.
2. Finance energy efficiency upgrades in new buildings and remodels
The best returns come when new buildings are constructed, but the time to capture the benefits is over 15 years.
3. Raise corporate standards of energy efficiency
Many companies are government owned or are shielded from high energy costs. Companies should be required to report on energy efficiency, GHG and pollution information. [I added the pollution part] Investment guarantees and incentives can be used to encourage corporate efficiency.
4. Invest in energy intermediaries
Many people (home owners, landlords and business owners) do not invest in energy savings even though those investments would pay for themselves, often because they do not believe they will keep the building or equipment long enough to see the returns. Policy and investment need to be adjusted so that businesses can step in to pay for the energy efficiency and capture the returns (while paying something to the owner for access to the energy saving opportunity).










FURTHER READING
Optimal cost efficiency for home energy

IBM using DNA to make carbon nanotube grid computer chips

Left is previous work where DNA was wrapped around carbon nanotubes.

Scientists at IBM are conducting research into arranging carbon nanotubes--strands of carbon atoms that can conduct electricity--into arrays with DNA molecules.
Once the nanotube array is meticulously constructed, the laboratory-generated DNA molecules could be removed, leaving an orderly grid of nanotubes. The nanotube grid, conceivably, could function as a data storage device or perform calculations.



Previous work to create grids using DNA

Potentially, DNA could address, or recognize, features as small as two nanometers. Cutting-edge chips today have features that average 45 nanometers.

"These are DNA nanostructures that are self-assembled into discrete shapes. Our goal is to use these structures as bread boards on which to assemble carbon nanotubes, silicon nanowires, quantum dots," said Greg Wallraff, an IBM scientist and a lithography and materials expert working on the project. "What we are really making are tiny DNA circuit boards that will be used to assemble other components."


UPDATE: The attachment sites on DNA, which is where the nanowires and transistors would attach on the template, can be made much closer together than with traditional pattern manufacturing techniques.
With DNA, the attachment sites are 4nm to 6nm apart. Normally, they're about 45nm apart.
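
Because attachment sites tile a two-dimensional surface, shrinking the pitch from about 45 nm to 4-6 nm multiplies the number of sites per unit area by the square of the ratio. A rough illustration (the spacings are the figures quoted above; the squaring of the ratio is the only arithmetic added here):

```python
# How much denser the DNA attachment sites are than a conventional ~45 nm pitch.
conventional_pitch_nm = 45
for dna_pitch_nm in (4, 6):
    linear_gain = conventional_pitch_nm / dna_pitch_nm
    areal_gain = linear_gain ** 2    # sites tile an area, so density scales with the square
    print(f"{dna_pitch_nm} nm pitch: {linear_gain:.1f}x denser per axis, ~{areal_gain:.0f}x more sites per area")
```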

"Think of it as tiling a floor. These DNA pieces are like tiles," explained Gordon. "Each tile has some array of electronic components. Those tiles are placed on a chip in a larger array so there are thousands or millions on a chip. The second step, which we don't know how to do yet, would be to wire them all together. We've got sizes well below conventional lithography."

Wallraff said the next steps will be to connect all the tiles together and to check defect levels during assembly.


Actually using this patterning technique is probably 10 to 20 years away, he noted.

Other work to enable self assembly of electronics:
A simple surface treatment technique demonstrated by a collaboration between researchers at the National Institute of Standards and Technology (NIST), Penn State and the University of Kentucky potentially offers a low-cost way to mass produce large arrays of organic electronic transistors on polymer sheets for a wide range of applications including flexible displays, “intelligent paper” and flexible sheets of biosensor arrays for field diagnostics.

The researchers found that by applying a specially tailored pretreatment compound to the contacts before applying the organic semiconductor solution, they could induce the molecules in solution to self-assemble into well-ordered crystals at the contact sites. These structures grow outwards to join across the FET channel in a way that provides good electrical properties at the FET site, but further away from the treated contacts the molecules dry in a more random, helter-skelter arrangement that has dramatically poorer properties—effectively providing the needed electrical isolation for each device without any additional processing steps. The work is an example of the merging of device structure and function that may enable low cost manufacturing, and an area where organic materials have important advantages.





In creating chip arrays, DNA assembly might work as follows: scientists would first create scaffolds of designer DNA manipulated into specific shapes. Rothemund has made DNA structures in the shapes of circles, stars, and happy faces.

A pattern would then be etched into a photoresist surface with e-beam lithography and a combination of several interacting thin films. A solution of the designer DNA would then be poured onto the patterned surface, and the DNA structures would space themselves out according to the patterns on the substrate and the chemical and physical forces between the molecules.

The nanotubes would then be poured in. Interactions between the nanotubes and the DNA would occur until they formed the desired pattern. Single strand DNA, along with origami, could be used in concert.

Another key part of the system revolves around peptides that can bind both to DNA and to a non-biological molecule such as a nanotube.

With DNA, chipmakers could phase out multibillion-dollar fabrication facilities stocked with lithography systems, which cost tens of millions of dollars, and the other "top-down" style equipment.

Potentially, DNA techniques could allow manufacturers to produce features that are smaller than patterns that could be achieved even with the most advanced lithography systems, predicted Wallraff. E-beam lithography, which is extremely difficult to use in mass manufacturing, goes down to 10 nanometers.

"Of course, the devil is in the details," said Wallraff. "These are self-assembly procedures and error rates--missing features could be the downfall."



FURTHER READING
Carbon nanotube transistor work at IBM

UPDATE: Combine this work to get to 2 nanometer feature sizes with more configurable, customized processors, and we could go from affordable exaflop computers to zettaflop supercomputers. An exaflop is 1000 petaflops, or 1 million teraflops. A zettaflop is 1 million petaflops, or 1 billion teraflops.

UPDATE: John E. Kelly, IBM's Director of Research, is focusing on four top research priorities.
Each of the projects will get $100 million over the next two to three years, in hopes of generating at least $1 billion, each, in new revenue. The projects: inventing a successor to today's semiconductor, designing computers that process data much more efficiently, using math to solve complex business problems, and building massive clusters of computers that operate like a single machine—an approach called "cloud" computing.

Kelly foresees creating dozens of new joint ventures for research, which he calls "collaboratories," with countries, companies, and independent research outfits. One venture with Saudi Arabia, focusing on nanotechnology, was unveiled on Feb. 26. The two sides plan to develop technologies for solar energy and water desalinization.


In 2003, Kelly took a gamble and set up research alliances with a handful of partners, including Sony Electronics (SNE) and Advanced Micro Devices (AMD), to share expenses and brainpower. The approach eventually paid off, as IBM's chip business returned to profitability and remained on the cutting edge of technology.

February 19, 2008

Two tech lists: MIT Tech Review 2008 and Grand Challenges for Engineering

The National Academy of Engineering has selected the Grand Challenges of Engineering. Throughout history, engineering has driven the advance of civilization, and completing any of these challenges would be a game changer for the course of civilization. The challenges fall into four broad categories: sustainability, health, vulnerability, and joy of living.

MIT Technology Review magazine has selected the top ten emerging technologies for 2008. For some reason, many websites are confusing the 2007 list with the 2008 list.

I will be writing several follow-up articles to look at the current state of progress toward addressing the Grand Challenges of Engineering (both the NAE list and my own) and to discuss both MIT's and my own list of emerging technologies for 2008.

A crossover between the two lists is the grand challenge of reverse engineering the human brain, which corresponds to connectomics (mapping all connections between neurons in the mammalian brain) on the MIT Tech Review list. This is key to Ray Kurzweil's vision of the technological singularity and to one path toward the development of Artificial General Intelligence (AGI).
Connectomics aims to map all synaptic connections between neurons in the mammalian brain.

BrainBows: Neurons in the hippocampus, a brain area involved in memory, are labeled in different colors, with their neural projections pointing downward.
Credit: Tamily A. Weissman

In experiments so far, Lichtman's group has used the technology to trace all the connections in a small slice of the cerebellum, the part of the brain that controls balance and movement. Other scientists have already expressed interest in using the technology to study neural connections in the retina, the cortex, and the olfactory bulb, as well as in non-neural cell types.




MIT List of emerging tech for 2008
Modeling Surprise combines data mining and machine learning to help people do a better job of anticipating and coping with unusual events. To monitor surprises effectively, the machine has to have both knowledge--a good cognitive model of what humans find surprising--and foresight: some way to predict a surprising event in time for the user to do something about it. By analyzing historical data and backtracking from "surprising events" to a point, say, thirty minutes before the surprise, the system can learn to spot when a surprise appears to be developing.

The resulting model works remarkably well. When its parameters are set so that its false-positive rate shrinks to 5 percent, it still predicts about half of the surprises in Seattle's traffic system. If that doesn't sound impressive, consider that it tips drivers off to 50 percent more surprises than they would otherwise know about. Today, more than 5,000 Microsoft employees have this "surprise machine" loaded on their smart phones, and many have customized it to reflect their own preferences.
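
The 5 percent false-positive figure is a tuning choice, not a property of the model: the alert threshold is picked on held-out historical data so that only about 5 percent of ordinary periods trigger an alert, and then one measures what fraction of the labeled surprises the model still catches. A minimal sketch of that trade-off on synthetic data (the features, labels, and model below are stand-ins, not Microsoft's actual traffic system):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for historical traffic data: each row is a time window
# about 30 minutes before a possible surprise, labeled 1 if a surprise followed.
n = 5000
X = rng.normal(size=(n, 4))                       # e.g. flow, speed, weather, event flags
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n) > 2.0).astype(int)

train, test = slice(0, 4000), slice(4000, n)
model = LogisticRegression().fit(X[train], y[train])
scores = model.predict_proba(X[test])[:, 1]

# Choose the alert threshold so that ~5% of non-surprise windows raise an alert.
negatives = scores[y[test] == 0]
threshold = np.quantile(negatives, 0.95)

alerts = scores >= threshold
false_positive_rate = alerts[y[test] == 0].mean()
surprises_caught = alerts[y[test] == 1].mean()
print(f"threshold={threshold:.2f}  FPR={false_positive_rate:.2%}  surprises caught={surprises_caught:.2%}")
```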


Probabilistic chips: The idea is to lower the operating voltage of parts of a chip--specifically, the logic circuits that calculate the least significant bits, such as the 3 in the number 21,693. The resulting decrease in signal-to-noise ratio means those circuits would occasionally arrive at the wrong answer, but engineers can calculate the probability of getting the right answer for any specific voltage.
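
The trade-off can be illustrated in software: let each of the low-order bits of a value flip with some probability (standing in for a noisy, low-voltage circuit) and look at how large the resulting numerical error actually is. The bit-flip probabilities below are made-up illustrative values, not measurements of any real chip:

```python
import random

def noisy_value(x, flip_probs):
    """Flip selected low-order bits of x, each with its own probability,
    mimicking logic run at a voltage too low to be fully reliable."""
    for bit, p in flip_probs.items():
        if random.random() < p:
            x ^= (1 << bit)
    return x

random.seed(1)
# Assume only bits 0-2 are run at reduced voltage; the higher bits stay reliable.
flip_probs = {0: 0.10, 1: 0.05, 2: 0.02}

exact = 21_693
samples = [noisy_value(exact, flip_probs) for _ in range(100_000)]
wrong = sum(s != exact for s in samples) / len(samples)
avg_abs_error = sum(abs(s - exact) for s in samples) / len(samples)

print(f"answers differing from the exact value: {wrong:.1%}")
print(f"average absolute error: {avg_abs_error:.3f} out of {exact}")
```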

Nanoradio: a single-nanotube radio.
The next step for Zettl and his colleagues is to make their nanoradios send out information in addition to receiving it. But Zettl says that won't be hard, since a transmitter is essentially a receiver run in reverse. Nano transmitters could open the door to other applications as well. For instance, Zettl suggests that nanoradios attached to tiny chemical sensors could be implanted in the blood vessels of patients with diabetes or other diseases. If the sensors detect an abnormal level of insulin or some other target compound, the transmitter could then relay the information to a detector, or perhaps even to an implanted drug reservoir that could release insulin or another therapeutic on cue. In fact, Zettl says that since his paper on the nanotube radio came out in the journal Nano Letters, he's received several calls from researchers working on radio-based drug delivery vehicles.



Tiny tunes: A nanoradio is a carbon nanotube anchored to an electrode, with a second electrode just beyond its free end. Credit: John Hersey

Wireless light uses resonant coupling, in which two objects tuned to the same frequency exchange energy strongly but interact only weakly with other objects, for wireless power transmission and charging. Wireless power technology transmits electricity to devices without the use of cables.


Wireless Light
Marin Soljačić and colleagues used magnetic resonance coupling to power a 60-watt light bulb. Tuned to the same frequency, two 60-centimeter copper coils can transmit electricity over a distance of two meters, through the air and around an obstacle.

1. Resonant copper coil attached to frequency converter and plugged into outlet
2. Wall outlet
3. Obstacle
4. Resonant copper coil attached to light bulb
Credit: Bryan Christie Design


Miniaturized atomic magnetometers the size of a grain of rice require little power and are sensitive to very weak magnetic fields. Tiny, inexpensive magnetometers could lead to portable MRI machines, tools for detecting buried explosive devices, and ways to evaluate mineral deposits remotely. This is part of the larger trend toward radically smaller and more precise sensors.

Shrinking sensors: A completed magnetometer built by NIST physicists is shown above. It consists of a small infrared laser (glued to a gold-coated plate), the cesium-filled cell, and a light detector.
Credit: Jim Yost; Courtesy of John Kitching

Offline web applications: Adobe will release AIR early this year; companies such as eBay, AOL, and Anthropologie have built applications using early versions of the software. Google is working on a competing platform called Gears.

Transistors based on graphene, a carbon material one atom thick, could have extraordinary electronic properties.

Interest in graphene was sparked by research into carbon nanotubes as potential successors to silicon. Carbon nanotubes, which are essentially sheets of graphene rolled up into cylinders, also have excellent electronic properties that could lead to ultrahigh-performance electronics. But nanotubes have to be carefully sorted and positioned in order to produce complex circuits, and good ways to do this haven't been developed. Graphene is far easier to work with. Graphene transistor manufacturing uses techniques very much like those used to manufacture silicon chips today.


Personal reality mining infers human relationships and behavior by applying data-mining algorithms to information collected by cell-phone sensors that can measure location, physical activity, and more.

Cellulolytic enzymes break down the cellulose found in biomass so it can be used as a feedstock for cheaper biofuels that deliver a larger gain in energy produced relative to the energy used to produce them.

Researchers have reduced the cost of industrial cellulolytic enzymes to 20 to 50 cents per gallon of ethanol produced. But the cost will have to fall to three or four cents per gallon for cellulosic ethanol to compete with corn ethanol.


The Grand Challenges of Engineering List
The difficulty of the challenges on the list varies widely. Also, success on certain challenges, such as providing energy from fusion, would help solve others, such as providing access to clean water, because one of the main hurdles to providing cheap clean water is a lack of sufficient energy. With enough cheap clean energy, clean water automatically follows.

Make solar energy economical
Provide energy from fusion
Develop carbon sequestration methods
Manage the nitrogen cycle
Provide access to clean water
Restore and improve urban infrastructure
Advance health informatics
Engineer better medicines
Reverse-engineer the brain
Prevent nuclear terror
Secure cyberspace
Enhance virtual reality
Advance personalized learning
Engineer the tools of scientific discovery

February 17, 2008

Quantum effects for better computer memory, clocks and microscopes

Over the last decade, Seth Lloyd and his colleagues and postdocs at MIT have been looking at how quantum mechanics can make things better. What Lloyd refers to as the “funky effects” of quantum theory, such as squeezing and entanglement, could ultimately be harnessed to make measurements of time and distance more precise and computers more efficient.

Among the ways that these quantum effects are beginning to be harnessed in the lab, he said, is in prototypes of new imaging systems that can precisely track the time of arrival of individual photons, the basic particles of light. “There’s significantly greater accuracy in the time-of-arrival measurement than what one would expect,” he said. And this could ultimately lead to systems that can detect finer detail, for example in a microscope’s view of a minuscule object, than what were thought to be the ultimate physical limitations of optical systems set by the dimensions of wavelengths of light.


In addition, quantum effects could be used to make much more efficient memory chips for computers by drastically reducing the number of transistors that need to be used each time data is stored or retrieved from a random-access memory location. Lloyd and his collaborators devised an entirely new way of addressing memory locations, using quantum principles, which they call a "bucket brigade" system. A similar, enhanced scheme could also be used in future quantum computers, which are expected to be feasible at some point and could be especially adept at complex operations such as pattern recognition.
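
In the published bucket brigade scheme (by Giovannetti, Lloyd and Maccone), the address is routed bit by bit down a binary tree of switches, so only about one switch per tree level is engaged on each memory call, rather than a number of switches that grows with the size of the memory, as in a conventional fanout decoder. A toy classical sketch of that counting argument (this illustrates only the scaling, not the quantum protocol itself):

```python
def bucket_brigade_activations(address, n_bits):
    """Route an address down a binary tree of switches, activating one per level."""
    activated = 0
    node = 0                        # root of the routing tree
    for level in range(n_bits):
        bit = (address >> (n_bits - 1 - level)) & 1
        node = 2 * node + 1 + bit   # descend to the left or right child
        activated += 1              # only this one switch is engaged at this level
    return activated

for n_bits in (10, 20, 30):
    cells = 2 ** n_bits
    activated = bucket_brigade_activations(cells // 3, n_bits)
    # A conventional fanout decoder involves on the order of N switches per access;
    # the bucket brigade touches only about log2(N) of them.
    print(f"{cells:,} cells: ~{cells:,} switches (fanout) vs {activated} (bucket brigade)")
```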

Another example of the potential power of quantum effects is in making more accurate clocks, using the property of entanglement, in which two separate particles can instantaneously affect each other’s characteristics.

While some of these potential applications have been theorized for many years, Lloyd said, experiments are “slowly catching up” to the theory. “We can do a lot already,” he said, “and we’re hoping to demonstrate a lot more” in coming years.



V. Giovannetti and L. Maccone wrote the main papers on using quantum entanglement to reach a quantum speed limit. The quantum speed limit referred to here is the Margolus-Levitin limit.

Quantum-enhanced measurements that get beyond some classical limits and closer to the ultimate limits

FURTHER READING
Fundamental limits to sensing and control, by Seth Lloyd, from 2006.

A presentation on the quantum speed limit

The latest thinking on optical quantum computing Previous optical quantum computing schemes had too much overhead, and new approaches are simplified. Key challenges will be the realization of high-efficiency sources of indistinguishable single photons, low-loss, scalable optical circuits, high-efficiency single-photon detectors, and low-loss interfacing of these components.

A presentation on quantum optics and other quantum effect applications

Quantum optics is a field of research in physics, dealing with the application of quantum mechanics to phenomena involving light and its interactions with matter.