September 26, 2009

NASA Also Finds Mars Has More Water Than Previously Believed


The High Resolution Imaging Science Experiment camera on NASA's Mars Reconnaissance Orbiter took these images of a fresh, 6-meter-wide (20-foot-wide) crater on Mars on Oct. 18, 2008, (left) and on Jan. 14, 2009. Each image is 35 meters (115 feet) across. Image Credit: NASA/JPL-Caltech/University of Arizona


NASA's Mars Reconnaissance Orbiter has revealed frozen water hiding just below the surface of mid-latitude Mars. The spacecraft's observations were obtained from orbit after meteorites excavated fresh craters on the Red Planet.

Scientists controlling instruments on the orbiter found bright ice exposed at five Martian sites with new craters that range in depth from approximately half a meter to 2.5 meters (1.5 feet to 8 feet). The craters did not exist in earlier images of the same sites. Some of the craters show a thin layer of bright ice atop darker underlying material. The bright patches darkened in the weeks following initial observations, as the freshly exposed ice vaporized into the thin Martian atmosphere. One of the new craters had a bright patch of material large enough for one of the orbiter's instruments to confirm it is water-ice.

The finds indicate water-ice occurs beneath Mars' surface halfway between the north pole and the equator, a lower latitude than buried ice was expected to persist in the Martian climate.


The Mars ice discovery was announced in the same week as the announcement of water being found on the moon.



The ice exposed by fresh impacts suggests that NASA's Viking Lander 2, digging into mid-latitude Mars in 1976, might have struck ice if it had dug 10 centimeters (4 inches) deeper. The Viking 2 mission, which consisted of an orbiter and a lander, launched in September 1975 and became one of the first two space probes to land successfully on the Martian surface. The Viking 1 and 2 landers characterized the structure and composition of the atmosphere and surface. They also conducted on-the-spot biological tests for life on another planet.




Are EmDrives Really Asymmetric Capacitor Thrusters?

This site has covered the EMdrive before.



The EmDrive is a highly controversial line of propulsion research: a drive that would be reactionless and that uses superconducting microwave cavities.

Rocketeer comments: Here's my take on it. It "works" after a fashion (actually generates thrust), but not in the way that Shawyer thinks it does. It's actually a form of Asymmetric Capacitor Thruster (ACT), which generates thrust by the Biefeld-Brown effect -- charged metal surfaces ionise the surrounding air by corona discharge, and create an "ion wind" which pushes the apparatus along. Good for the continued integrity of the laws of physics, but bad for space applications of the EmDrive, because in a vacuum it would do precisely nothing.


Shawyer claims that this and other effects have been taken into account.

"Stray electromagnetic effects were eliminated by using different test rigs, by testing two thrusters with very different mounting structures, and by changing the orientation by 90 degrees to eliminate the Earth’s magnetic field," he writes. "Electrostatic charges were eliminated by the comprehensive earthing required for safety reasons, and to provide the return path for the magnetron anode current."

When I asked him about the possibility of "ion wind" being the real cause of thrust, Dr Shawyer patiently explained that it had been addressed at a very early stage:

"Air currents from whatever source were eliminated in the first Proof of Concept project by testing the experimental thruster mounted in a hermetically sealed box. The experiment was reviewed and accepted by professional government scientists." [The research was being supported by the British government at the time.]

He also points out that real ion drives need much higher voltage and that "Anyone who thinks they can create grammes of thrust from ion wind at the voltages we work at clearly doesn’t understand physics." He does not believe a vacuum chamber test would show anything, as ion drives function in a vacuum and there would still be the question of whether some ionised material was somehow being ejected. However, the hermetically sealed box test should have negated that possibility.


Understanding Asymmetric Capacitor Thrusters
Biefeld-Brown effect at wikipedia

The Biefeld–Brown effect is an effect that was discovered by Paul Alfred Biefeld (CH) and Thomas Townsend Brown (USA). The effect is more widely referred to as electrohydrodynamics (EHD) or sometimes electro-fluid-dynamics, a counterpart to the well-known magneto-hydrodynamics. Extensive research was performed during the 1950s and 1960s on the use of this electric propulsion effect during the publicized era of the United States gravity control propulsion research (1955 - 1974). During 1964, Major Alexander Procofieff de Seversky published much of his related work in U.S. Patent 3,130,945, and, with the aim of forestalling any possible misunderstanding about these devices, termed these flying machines ionocrafts. In the following years, many promising concepts had to be abandoned due to technological limitations and were forgotten. The effect has only recently become of interest again and such flying devices are now known as EHD thrusters. Simple single-stage versions lifted by this effect are sometimes also called lifters.




23 page pdf NASA study of Asymmetrical Capacitors for Propulsion

Asymmetrical Capacitor Thrusters have been proposed as a source of propulsion. For over eighty years it has been known that a thrust results when a high voltage is placed across an asymmetrical capacitor, when that voltage causes a leakage current to flow. However, there is surprisingly little experimental or theoretical data explaining this effect. This paper reports on the results of tests of several Asymmetrical Capacitor Thrusters (ACTs). The thrust they produce has been measured for various voltages, polarities, and ground configurations and their radiation in the VHF range has been recorded. These tests were performed at atmospheric pressure and at various reduced pressures. A simple model for the thrust was developed. The model assumed the thrust was due to electrostatic forces on the leakage current flowing across the capacitor. It was further assumed that this current involves charged ions which undergo multiple collisions with air. These collisions transfer momentum. All of the measured data was consistent with this model. Many configurations were tested, and the results suggest general design principles for ACTs to be used for a variety of purposes.

A series of careful tests have been performed on Asymmetrical Capacitor Thrusters (ACTs). In the past, several mechanisms have been proposed for the thrust that they produce. These mechanisms were considered, both on theoretical grounds and by comparison with test results. All of the mechanisms considered were eliminated except one. A simple model was developed of ions drifting from one electrode to the other under electrostatic forces, and imparting momentum to air as they underwent multiple collisions. This model was found to be consistent with all of our observations. It predicted the magnitude of the force (thrust) that was measured. It also predicted how the direction of the thrust changed when the location of the ground wire changed. Furthermore, it also predicted that the direction of the thrust was independent of the polarity of the applied voltage. Finally, it qualitatively predicted how the magnitude of the thrust varied as the design of the ACT (its shape, etc.) varied, over many such design changes. It may be concluded that the ion drift model explains how a thrust is developed by ions pushing on air. Tests were also performed in nitrogen and argon, and were performed at reduced pressures. A thrust was also produced at moderately reduced pressures, when the ACT produced a current flow without causing a breakdown of the air or other gas. In spite of decades of speculation about possible new physical principles being responsible for the thrust produced by ACTs and lifters, we find no evidence to support such a conclusion. On the contrary, we find that their operation is fully explained by a very simple theory that uses only electrostatic forces and the transfer of momentum by multiple collisions.
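The ion-drift picture described in the NASA study reduces to a well-known first-order formula: in steady state, the thrust approximately equals the electrostatic force on the drifting ions, F ≈ I·d/μ, where I is the leakage current, d the electrode gap, and μ the ion mobility in air. A rough sketch of that model follows; the mobility figure and the example current and gap are illustrative assumptions, not values from the paper.

```python
# First-order ion-drift thrust model for an EHD/ACT device (a sketch,
# not the NASA paper's exact analysis). In steady state the momentum
# imparted to the air equals the electrostatic force on the drifting
# ions, giving F ~ I * d / mu.

MU_AIR = 2.0e-4  # m^2/(V*s), typical positive-ion mobility in air (assumed)

def ehd_thrust(current_a: float, gap_m: float, mobility: float = MU_AIR) -> float:
    """Thrust in newtons from the one-dimensional ion-drift model."""
    return current_a * gap_m / mobility

# Illustrative numbers for a small "lifter": 1 mA of leakage current
# flowing across a 3 cm electrode gap.
thrust_n = ehd_thrust(1.0e-3, 0.03)
print(f"{thrust_n * 1000:.0f} mN")  # on the order of 15 grams-force
```

Note that the thrust depends on the current and the gap, not directly on the voltage, which is why moving the ground wire or changing polarity behaves the way the NASA tests observed.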


Microwave Regolith to -50 °C to Get Lunar Water for Fuel

A lot of water has been found on the moon, but it is in concentrations of 1 part in 1,000 to 1 part in 10,000.

How do you extract water that is likely locked up as small concentrations of ice in the lunar soil? Microwaves could provide the key, according to work by Edwin Ethridge of NASA's Marshall Space Flight Center and William Kaukler of the University of Alabama, both in Huntsville, who first demonstrated the technique in 2006.

They used an ordinary microwave oven to zap simulated lunar soil that had been cooled to moon-like temperatures of -150 °C.

Keeping the soil in a vacuum to simulate lunar conditions, they found that heating it to just -50 °C with microwaves made the water ice sublimate, or transform directly from solid to vapour. The vapour then diffused out from higher-pressure pores in the soil to the low-pressure vacuum above.

Extracting lunar water this way would use about 100 times less energy than would be needed to extract hydrogen and oxygen from the lunar soil.
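The microwave approach above lends itself to rough energy bookkeeping: warm the regolith by 100 K, then pay the sublimation enthalpy for only the tiny ice fraction. The heat capacity and ice fraction below are illustrative assumptions, not Ethridge and Kaukler's figures.

```python
# Rough energy bookkeeping for microwave water extraction (a sketch
# with assumed material properties).
CP_REGOLITH = 0.8e3    # J/(kg*K), assumed lunar regolith heat capacity
H_SUBLIMATION = 2.8e6  # J/kg, enthalpy of sublimation of water ice
WATER_FRACTION = 1e-3  # 1 part in 1,000 ice by mass (upper end cited above)

def energy_per_kg_soil(delta_t: float = 100.0) -> float:
    """Energy (J) to warm 1 kg of icy regolith from -150 C to -50 C
    and sublimate its water content."""
    heating = CP_REGOLITH * delta_t
    sublimating = H_SUBLIMATION * WATER_FRACTION
    return heating + sublimating

e = energy_per_kg_soil()
print(f"{e / 1e3:.0f} kJ per kg of soil, "
      f"~{e / WATER_FRACTION / 1e6:.0f} MJ per kg of water recovered")
```

Almost all of the energy goes into warming the dry soil rather than sublimating the ice, which is why the concentration of the deposit matters so much.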




September 25, 2009

Aubrey de Grey Explains SENS Antiaging in a Three-and-a-Half-Minute Interview on MSNBC Today



* SENS is not one breakthrough and will not be a pill
* one month of a series of procedures, stem cell therapies, gene therapy, small molecule drugs etc... and then repeated every 10-30 years
* thorough repair and maintenance of tissues
* Could have significant progress in 25-30 years or it could take 100 years. We do not know.

Sign up to comment at 3banana so that SENS can win the share-to-win contest. Just three fields, including your email address, and then a comment. It will help SENS win $3,000 more than second place, plus more publicity. Vote by Sept 27, 2009.

The non-profit causes that collect the most comments on their note from unique users will win. Voting will end on Sunday, September 27, 2009 at midnight.

Extensive Synthetic Biology Coverage at the New Yorker

The New Yorker has a lengthy eight-page web article on synthetic biology.

Artemisinin (anti-malaria drug that is now produced via synthetic biology) is the first step in what Keasling hopes will become a much larger program. “We ought to be able to make any compound produced by a plant inside a microbe,” he said. “We ought to have all these metabolic pathways. You need this drug: O.K., we pull this piece, this part, and this one off the shelf. You put them into a microbe, and two weeks later out comes your product.”

That’s what Amyris has done in its efforts to develop new fuels. “Artemisinin is a hydrocarbon, and we built a microbial platform to produce it,” Keasling said. “We can remove a few of the genes to take out artemisinin and put in a different gene, to make biofuels.” Amyris, led by John Melo, who spent years as a senior executive at British Petroleum, has already engineered three microbes that can convert sugar to fuel. “We still have lots to learn and lots of problems to solve,” Keasling said. “I am well aware that makes some people anxious, and I understand why. Anything so powerful and new is troubling. But I don’t think the answer to the future is to race into the past.”


On those in the West who want to slow certain applications of synthetic biology:
Keasling, too, believes that the nation needs to consider the potential impact of this technology, but he is baffled by opposition to what should soon become the world’s most reliable source of cheap artemisinin. “Just for a moment, imagine that we replaced artemisinin with a cancer drug,” he said. “And let’s have the entire Western world rely on some farmers in China and Africa who may or may not plant their crop. And let’s have a lot of American children die because of that. Look at the world and tell me we shouldn’t be doing this. It’s not people in Africa who see malaria who say, Whoa, let’s put the brakes on.”




Freeman Dyson on the Impact of Widespread Synthetic Biology Capability

Freeman Dyson wrote: “every orchid or rose or lizard or snake is the work of a dedicated and skilled breeder. There are thousands of people, amateurs and professionals, who devote their lives to this business.” This, of course, we have been doing in one way or another for millennia. “Now imagine what will happen when the tools of genetic engineering become accessible to these people.”

The New Yorker writer speculates: it is only a matter of time before domesticated biotechnology presents us with what Dyson described as an “explosion of diversity of new living creatures. . . . Designing genomes will be a personal thing, a new art form as creative as painting or sculpture.”


Getting to Inherently Safe Designs
The debate over genetically engineered food has often focussed on theoretical harm rather than on tangible benefits. “If you build a bridge and it falls down, you are not going to be permitted to design bridges ever again,” Endy said. “But that doesn’t mean we should never build a new bridge. There we have accepted the fact that risks are inevitable.” He believes the same should be true of engineering biology.

We also have to think about our society’s basic goals and how this science might help us achieve them. “We have seen an example with artemisinin and malaria,” Endy said. “Maybe we could avoid diseases completely. That might require us to go through a transition in medicine akin to what happened in environmental science and engineering after the end of the Second World War. We had industrial problems, and people said, Hey, the river’s on fire—let’s put it out. And, after the nth time of doing that, people started to say, Maybe we shouldn’t make factories that put shit into the river. So let’s collect all the waste. That turns out to be really expensive, because then we have to dispose of it. Finally, people said, Let’s redesign the factories so that they don’t make that crap.”


Ultimate Solution to Healthcare ?

Endy pointed out that we are spending trillions of dollars on health care and that preventing disease is obviously more desirable than treating it. “My guess is that our ultimate solution to the crisis of health-care costs will be to redesign ourselves so that we don’t have so many problems to deal with. But note,” he stressed, “you can’t possibly begin to do something like this if you don’t have a value system in place that allows you to map concepts of ethics, beauty, and aesthetics onto our own existence.”


How much of this was science fiction?

Endy stood up. “Can I show you something?” he asked, as he walked over to a bookshelf and grabbed four gray bottles. Each one contained about half a cup of sugar, and each had a letter on it: A, T, C, or G, for the four nucleotides in our DNA. “You can buy jars of these chemicals that are derived from sugarcane,” he said. “And they end up being the four bases of DNA in a form that can be readily assembled. You hook the bottles up to a machine, and into the machine comes information from a computer, a sequence of DNA—like T-A-A-T-A-G-C-A-A. You program in whatever you want to build, and that machine will stitch the genetic material together from scratch. This is the recipe: you take information and the raw chemicals and compile genetic material. Just sit down at your laptop and type the letters and out comes your organism.”

We don’t have machines that can turn those sugars into entire genomes yet. Endy shrugged. “But I don’t see any physical reason why we won’t,” he said. “It’s a question of money. If somebody wants to pay for it, then it will get done.” He looked at his watch, apologized, and said, “I’m sorry, we will have to continue this discussion another day, because I have an appointment with some people from the Department of Homeland Security.”






Sandia computer scientists successfully boot one million Linux kernels as virtual machines

Computer scientists at Sandia National Laboratories in Livermore, Calif., have for the first time successfully demonstrated the ability to run more than a million Linux kernels as virtual machines.

The Sandia work will be helped in the future by being able to use bug-free code for operating system kernels developed at the University of New South Wales.

Combined, the two developments (mega simultaneous virtual kernels and bug-free code) mean unprecedented levels of scalability.

The achievement will allow cyber security researchers to more effectively observe behavior found in malicious botnets, or networks of infected machines that can operate on the scale of a million nodes. Botnets, said Sandia’s Ron Minnich, are often difficult to analyze since they are geographically spread all over the world.

Sandia scientists used virtual machine (VM) technology and the power of its Thunderbird supercomputing cluster for the demonstration.

Previously, researchers had only been able to run up to 20,000 kernels concurrently (a “kernel” is the central component of most computer operating systems). The more kernels that can be run at once, he said, the more effective cyber security professionals can be in combating the global botnet problem. “Eventually, we would like to be able to emulate the computer network of a small nation, or even one as large as the United States, in order to ‘virtualize’ and monitor a cyber attack,” he said.

A related use for millions to tens of millions of operating systems, Sandia’s researchers suggest, is to construct high-fidelity models of parts of the Internet.


This capability would also be relevant to running complex simulations involving a million people.



To arrive at the one million Linux kernel figure, Sandia’s researchers ran one kernel in each of 250 VMs and coupled those with the 4,480 physical machines on Thunderbird. Dell and IBM both made key technical contributions to the experiments, as did a team at Sandia’s Albuquerque site that maintains Thunderbird and prepared it for the project.
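The arithmetic behind the headline figure is straightforward to check: 250 VM kernels per host across Thunderbird's 4,480 physical machines.

```python
# Sanity check on the figures quoted above: 250 virtual-machine kernels
# per physical host, times 4,480 hosts on the Thunderbird cluster.
vms_per_host = 250
hosts = 4480
total_kernels = vms_per_host * hosts
assert total_kernels >= 1_000_000  # clears the one-million mark
print(total_kernels)  # prints 1120000
```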

The capability to run a high number of operating system instances inside of virtual machines on a high performance computing (HPC) cluster can also be used to model even larger HPC machines with millions to tens of millions of nodes that will be developed in the future, said Minnich. The successful Sandia demonstration, he asserts, means that development of operating systems, configuration and management tools, and even software for scientific computation can begin now before the hardware technology to build such machines is mature.


Next Goal 100 million Virtual Machine Kernels
Sandia’s researchers plan to take their newfound capability to the next level.

“It has been estimated that we will need 100 million CPUs (central processing units) by 2018 in order to build a computer that will run at the speeds we want,” said Minnich. “This approach we’ve demonstrated is a good way to get us started on finding ways to program a machine with that many CPUs.” Continued research, he said, will help computer scientists to come up with ways to manage and control such vast quantities, “so that when we have a computer with 100 million CPUs we can actually use it.”


Before 2020, they could reach 10 billion virtual machines, more than one for every person on Earth.


7,500 Lines of Bug-Free Code: A Mathematically Verified, Crash-Proof Operating System Kernel

Computer researchers at UNSW (University of New South Wales) and NICTA (NICTA is Australia’s Information and Communications Technology (ICT) Centre of Excellence) have achieved a breakthrough in software which will deliver significant increases in security and reliability and has the potential to be a major commercialisation success.

A team has been able to prove with mathematical rigour that an operating-system kernel – the code at the heart of any computer or microprocessor – is 100 per cent bug-free and therefore immune to crashes and failures.

The breakthrough has major implications for improving the reliability of critical systems such as medical machinery, military systems and aircraft, where failure due to a software error could have disastrous results.

“A rule of thumb is that reasonably engineered software has about 10 bugs per thousand lines of code, with really high quality software you can get that down to maybe one or three bugs per thousand lines of code,” Professor Heiser said.


This will also be relevant to Artificial General Intelligence. Being able to prove that the code you write will do what you want 100% of the time is huge.



Bug Free Code is Possible

Verifying the kernel – known as the seL4 microkernel – involved mathematically proving the correctness of about 7,500 lines of computer code, in a project taking an average of six people more than five years.

“The NICTA team has achieved a landmark result which will be a game changer for security-and-safety-critical software,” Professor Heiser said.

“The verification provides conclusive evidence that bug-free software is possible, and in the future, nothing less should be considered acceptable where critical assets are at stake.”
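For readers unfamiliar with what "mathematically proving correctness" means in practice: a theorem prover only accepts a claim if it holds for every possible input, which is what separates proof from testing. A toy sketch in the Lean 4 prover is shown below; it is purely illustrative (the seL4 verification was carried out in the Isabelle/HOL prover and is vastly larger), and the function and theorem names are our own.

```lean
-- A toy machine-checked specification (illustrative only; not seL4's proof).
-- The prover rejects these theorems unless they hold for ALL inputs,
-- unlike testing, which samples only some inputs.
def myMax (a b : Nat) : Nat := if a ≤ b then b else a

theorem myMax_ge_left  (a b : Nat) : a ≤ myMax a b := by
  unfold myMax; split <;> omega

theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
  unfold myMax; split <;> omega
```

Scaling this style of guarantee from a three-line function to 7,500 lines of kernel code is what made the seL4 result a multi-person, multi-year effort.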


NASA and Science Journal Information about Water on the Moon


These images show a very young lunar crater on the side of the moon that faces away from Earth, as viewed by NASA's Moon Mineralogy Mapper on the Indian Space Research Organization's Chandrayaan-1 spacecraft. On the left is an image showing brightness at shorter infrared wavelengths. On the right, the distribution of water-rich minerals (light blue) is shown around a small crater. Both water- and hydroxyl-rich materials were found to be associated with material ejected from the crater.
Credits: ISRO/NASA/JPL-Caltech/USGS/Brown Univ.


NASA scientists have discovered water molecules in the polar regions of the moon. Instruments aboard three separate spacecraft revealed water molecules in amounts that are greater than predicted, but still relatively small. Hydroxyl, a molecule consisting of one oxygen atom and one hydrogen atom, also was found in the lunar soil. The findings were published in Thursday's edition of the journal Science.

This site had a previous article based on the leaked information on moon water and the latest work on fuel depots in space. The water should be used to supply fuel depots to lower the cost of working and living in space.

The journal Science has an article "A Whiff of Water Found on the Moon"

Three independent groups today announced the detection of water on the lunar surface; their find is at most one part per 1,000 water in the outermost millimeter or two of still very dry lunar rock.

The discovery has potential, though. Future astronauts might conceivably wring enough water from not-completely-desiccated lunar "soil" to drink or even to fuel their rockets. Equally enticing, the water seems to be on its way to the poles, where it could be pumping up subsurface ice deposits that would be a real water bonanza.

The Moon Mineralogy Mapper (M3) has been orbiting the moon onboard India's now-defunct Chandrayaan-1 spacecraft. A spectrometer, M3 detected an infrared absorption at a wavelength of 3.0 micrometers that only water or hydroxyl--a hydrogen and an oxygen bound together--could have created.

But spectroscopists had long distrusted any sign of water in lunar data because Apollo moon rocks were so bone-dry. So M3 team members asked the researchers operating the spectrometer on NASA's EPOXI spacecraft to take a look as it passed the moon last June on its way to comet Hartley 2. EPOXI observations confirmed the M3 detection, as did a reanalysis of Cassini spectrometer data taken in 1999 on its way to Saturn. The three analyses are reported in separate papers in Science.

The best estimate coming out of the reported observations for water's abundance is 0.2 to 1 part per 1,000, and that's in the upper millimeter or two that spectroscopy can penetrate. At those levels, an astronaut would have to process the soil from a baseball-diamond-size plot to get a decent drink of water.
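The baseball-diamond claim can be checked with a back-of-envelope calculation. The infield area, sampling depth, and regolith density below are our own assumed values, not the article's.

```python
# Back-of-envelope check of the "baseball-diamond" claim (assumed values).
AREA_M2 = 750.0        # ~ infield of a baseball diamond (27.4 m square)
DEPTH_M = 0.002        # upper 2 mm that spectroscopy can penetrate
DENSITY = 1500.0       # kg/m^3, assumed regolith bulk density
WATER_PER_KG = 0.5e-3  # kg water per kg soil (mid-range of 0.2-1 per 1,000)

soil_kg = AREA_M2 * DEPTH_M * DENSITY
water_kg = soil_kg * WATER_PER_KG
print(f"{water_kg:.2f} kg of water")  # roughly a liter: a decent drink
```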

More tantalizing, the water becomes more abundant closer to the poles. That and water's abundance varying with time suggests to Pieters that water is being produced on the moon--perhaps through solar wind hydrogen interacting with surface rock--and then hopscotching from place to place through the moon's vanishingly thin atmosphere. Because a water molecule would stick more securely to colder rock, water would tend to migrate toward the colder polar regions. There, it might become trapped for eons as subsurface ice in permanently shadowed craters, which are currently thought to be among the coldest places in the solar system.


Character and Spatial Distribution of OH/H2O on the Surface of the Moon Seen by M3 on Chandrayaan-1



The search for water on the surface of the anhydrous Moon remained an unfulfilled quest for 40 years. The Moon Mineralogy Mapper (M3) on Chandrayaan-1 has now detected absorption features near 2.8-3.0 µm on the surface of the Moon. For silicate bodies, such features are typically attributed to OH- and/or H2O-bearing materials. On the Moon, the feature is seen as a widely distributed absorption that appears strongest at cooler high latitudes and at several fresh feldspathic craters. The general lack of correlation of this feature in sunlit M3 data with neutron spectrometer H abundance data suggests that the formation and retention of OH and H2O is an ongoing surficial process. OH/H2O production processes may feed polar cold traps and make the lunar regolith a candidate source of volatiles for human exploration.


9 page pdf with supplemental information for Character and Spatial Distribution of OH/H2O on the Surface of the Moon

Detection of Adsorbed Water and Hydroxyl on the Moon, by Roger N. Clark


Data from the Visual and Infrared Mapping Spectrometer (VIMS) on Cassini during its fly-by of the Moon in 1999 show a broad absorption at 3 µm due to adsorbed water and near 2.8 µm attributed to hydroxyl in the sunlit surface on the Moon. The amounts of water indicated in the spectra depend on the type of mixing, and the grain sizes in the rocks and soils, but could be 10 to 1,000 parts per million and locally higher. Water in the polar regions may be water that has migrated to the colder environments there. Trace hydroxyl is observed in the anorthositic highlands at lower latitudes.


10 page pdf with supplemental information about Detection of Adsorbed Water and Hydroxyl on the Moon

Temporal and Spatial Variability of Lunar Hydration as Observed by the Deep Impact Spacecraft

The Moon is generally anhydrous, yet the Deep Impact spacecraft found the entire surface to be hydrated during some portions of the day. OH and H2O absorptions in the near infrared were strongest near the North Pole and are consistent with <0.5 wt% H2O. Hydration varied with temperature, rather than cumulative solar radiation, but no inherent absorptivity differences with composition were observed. However, comparisons between data collected one week (a quarter lunar day) apart show a dynamic process with diurnal changes in hydration that were greater for mare basalts (~70%) than for highlands (~50%). This hydration loss and return to steady state occurred entirely between local morning and evening, requiring a ready daytime source of water group ions, which is consistent with a solar wind origin.


3 page pdf with supplemental information on Temporal and Spatial Variability of Lunar Hydration as Observed by the Deep Impact Spacecraft

The Deep Impact HRI‐IR spectrometer was used to look at the moon three times and saw signs of water as well.

A Lunar Waterworld, by Paul G. Lucey, Science, DOI: 10.1126/science.1181471
Space-based spectroscopic measurements provide strong evidence for water on the surface of the Moon.

How to Find Water on the Moon

These graphs show detailed measurements of light as a function of color or wavelength. The data, called spectra, are used to identify minerals and molecules. On the left are spectra of lunar rocks, minerals and soil returned to Earth by NASA's Apollo missions, taken in the visible to shorter-wavelength infrared range. The blue bar shows where a dip in the light is expected due to the presence of water and hydroxyl molecules. To the right are model spectra for pure water (H2O) and hydroxyl (OH-).

Image credit: ISRO/NASA/JPL-Caltech/Brown Univ.










Dispersing Light through the Moon Mineralogy Mapper


The Moon Mineralogy Mapper is a state-of-the-art NASA imaging spectrometer. Sunlight reflected off the moon enters the telescope and then is passed by mirrors to the spectrometer. In the spectrometer, white light is dispersed into different wavelengths (from 0.43 to 3 micrometers) for every point in an image. Once in orbit around the moon, the instrument generates three-dimensional cubes of data that allow scientists to map the composition of the surface.
Image credit: NASA/JPL-Caltech











Water Detected at High Latitudes


This image of the moon is from NASA's Moon Mineralogy Mapper on the Indian Space Research Organization's Chandrayaan-1 mission. It is a three-color composite of reflected near-infrared radiation from the sun, and illustrates the extent to which different materials are mapped across the side of the moon that faces Earth.

Small amounts of water and hydroxyl (blue) were detected on the surface of the moon at various locations. This image illustrates their distribution at high latitudes toward the poles.

Blue shows the signature of water and hydroxyl molecules as seen by a highly diagnostic absorption of infrared light with a wavelength of three micrometers. Green shows the brightness of the surface as measured by reflected infrared radiation from the sun with a wavelength of 2.4 micrometers, and red shows an iron-bearing mineral called pyroxene, detected by absorption of 2.0-micrometer infrared light.
Image credit: ISRO/NASA/JPL-Caltech/Brown Univ./USGS









Daytime Water Cycle on the Moon



This schematic shows the daytime cycle of hydration, loss and rehydration on the lunar surface. In the morning, when the moon is cold, it contains water and hydroxyl molecules. One theory holds that the water and hydroxyl are, in part, formed from hydrogen ions in the solar wind. By local noon, when the moon is at its warmest, some water and hydroxyl are lost. By evening, the surface cools again, returning to a state equal to that seen in the morning. Thus, regardless of location or terrain type, the entire surface of the moon is hydrated during some part of the lunar day.
Credit: University of Maryland/McREL

Quantum Computers Powered By Photon Machine Guns

New Scientist reports on a proposed solution to increasing the number of entangled qubits for quantum computers. If realized, it would be a revolutionary advance for photonic quantum computing.

Raising the number of qubits has proven tricky because of the difficulty of reliably producing entangled particles. Now a team has designed a system that should fire out barrages of entangled photons with machine-gun regularity.

Rudolph and Netanel Lindner at the Technion-Israel Institute of Technology in Haifa have designed the blueprint for a system that fires out large numbers of entangled photons on demand. They call it a "photonic machine gun".

Rudolph and Lindner initially estimated that their device would be able to fire out 12 qubits on demand. "Talking to various experimentalists I think we were a bit conservative," says Rudolph. "The current collection efficiencies might make detection of 20 to 30 entangled photons feasible, which would take us beyond what we can fit into the memory of a classical computer."
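Rudolph's remark about classical memory can be made concrete: a full state-vector simulation of n entangled qubits stores 2^n complex amplitudes. A quick sketch, assuming 16 bytes per double-precision complex amplitude:

```python
# Memory needed to hold the full quantum state vector of n qubits,
# assuming 16 bytes per complex amplitude (double-precision complex).
def state_vector_bytes(n_qubits: int) -> int:
    return 16 * 2 ** n_qubits

for n in (20, 30, 40):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**30:g} GiB")
# 30 qubits already needs 16 GiB; every extra qubit doubles the cost,
# so 20-30 entangled photons sit right at the edge of classical memory.
```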

They say that a practical version could be built within a few years. "It's only within the last year or so that the [nanofabrication] technology has made this feasible," Rudolph says.


Abstract: Proposal for Pulsed On-Demand Sources of Photonic Cluster State Strings from Phys. Rev. Lett. 103, 113602 (2009) [4 pages]

We present a method to convert certain single photon sources into devices capable of emitting large strings of photonic cluster state in a controlled and pulsed “on-demand” manner. Such sources would greatly reduce the resources required to achieve linear optical quantum computation. Standard spin errors, such as dephasing, are shown to affect only 1 or 2 of the emitted photons at a time. This allows for the use of standard fault tolerance techniques, and shows that the photonic machine gun can be fired for arbitrarily long times. Using realistic parameters for current quantum dot sources, we conclude high entangled-photon emission rates are achievable, with Pauli-error rates per photon of less than 0.2%. For quantum dot sources, the method has the added advantage of alleviating the problematic issues of obtaining identical photons from independent, nonidentical quantum dots, and of exciton dephasing.
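As a toy illustration of why a pulsed emitter can spit out a string of cluster-state photons, here is a small state-vector simulation. This is my own sketch, not the authors' code: each pump cycle is idealized as an exact CNOT from the emitter spin to a fresh photon followed by a Hadamard on the spin, which is the textbook construction of a linear cluster state.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def emit_photon(state):
    # One idealized pump cycle: the emitter spin (qubit 0) controls the
    # polarization of a freshly emitted photon (a CNOT), after which the
    # spin precesses (modeled as a Hadamard).  Iterating this entangles
    # the photon string into a linear cluster state.
    state = np.kron(state, np.array([1.0, 0.0]))   # append photon in |0>
    psi = state.reshape(2, -1, 2).copy()
    psi[1] = psi[1, :, ::-1].copy()                # CNOT: spin -> new photon
    state = psi.reshape(2, -1)
    return (H @ state).reshape(-1)                 # Hadamard on the spin

state = np.array([1.0, 1.0]) / np.sqrt(2)          # spin prepared in |+>
for _ in range(4):                                 # fire four photons
    state = emit_photon(state)

# Entanglement across the spin/photon cut: exactly 1 ebit for a cluster state.
schmidt = np.linalg.svd(state.reshape(2, -1), compute_uv=False)
probs = schmidt ** 2
probs = probs[probs > 1e-12]
entropy_ebits = -np.sum(probs * np.log2(probs))
```

The point of the scheme is that the emitter only ever interacts with one photon at a time, yet the whole string ends up entangled.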




8 page pdf with supplemental information.


Exciton Dephasing

In the paper we only briefly mentioned the spectral dephasing which will occur while the system is excited. Our intuition contrasted with that of others, namely we felt that this process would not affect the entanglement of the state - in particular with respect to the polarization degrees of freedom we are interested in - but would only lead to the emitted photon wavepackets being in a mixture of different frequencies. This in turn would only affect the (small fraction of) photons which have to go through fusion gates, and such photons can be filtered before entering the gates in a way which will only lead to a change in the success probability of the (non-deterministic) gate. That is, such filtering need not even lead to a loss error (as explained below). As such the only effect will be that we need to use more photons - but the overhead is some constant factor.

September 24, 2009

Mach Effect 4: More Mach Effect Answers

The first Nextbigfuture article on Mach effect propulsion, which includes an interview with Paul March.

A second article answers various questions raised in comments on Mach effect propulsion.

Mach effect investigation could be a path to the unification of general relativity and quantum mechanics. Here are links to about 20 hours of Stanford lectures on general relativity and another 20 on quantum mechanics, plus a short video from one of the investigators of the Mach effect as a potentially revolutionary propulsion mechanism. It is useful to understand the details of general relativity and quantum mechanics in order to judge the quality of an attempt at unification and the possible impact of unifying those two areas.

If the Mach effect can be used for propulsion as envisioned, then space travel with the capabilities seen in Star Trek, and possibly even wormholes for faster-than-light travel and communication, becomes possible. The work is grounded in general relativity, quantum mechanics and the understanding of inertia, and the experiments are being carefully conducted. Successful development would be a candidate for one of the greatest accomplishments of humanity.

Paul March Has More Mach Effect Answers at Talk Polywell
"Where does the kinetic energy of a Mach-drive vehicle come from?"

Simple, it's the cosmological gravity/inertia or gravinertial field created by the rest of the mass/energy in the universe. This idea is at the heart of Mach's principle as stated by Ernst Mach in the late 1800s. In other words, when an M-E drive accelerates itself and anything attached to it, the momentum and energy books for this acceleration step are balanced by subtracting the equivalent energy from this cosmological gravinertial field, which IMO simultaneously lowers the overall temperature of the causally connected universe. So the Mach drive is just an electric motor that has replaced the driving electric and magnetic fields with the gravinertial field as the intermediating agent.




“The magic seems to be in how the mass of the material changes?”

The magic you refer to is wrapped around the question of what are the origins of inertia and inertial mass, and can it be dynamically modified by applied E&M fields? In the GRT/Machian view, (QM also takes a different position on this question), the property we call inertial mass comes about from the interaction of the cosmological gravinertial (G/I) field with the atoms & ions of the local mass. If you can transiently shield this G/I field interaction between the G/I field and the local mass, the local mass’ inertial mass will decrease during the initial shielding process and then increase when being unshielded. This G/I field shielding effect can be induced by bulk accelerating the local mass relative to the distant stars while a local power supply is pumping power thru the mass as would be the case in a bulk accelerated capacitor being charged and discharged. The actual change in the E= m*c^2 energy in the cap during its charge and discharging process is FAR TOO SMALL to account for the M-E’s predicted delta mass ratios or those already measured. You could consider the bulk acceleration and cap power flux simply as the catalytic elements needed to shield the local mass from its G/I field, which is the source of inertial mass in the Machian viewpoint.

As to whether the ac signal used to excite the capacitor is referenced to ground in a +/- signal around zero volts, or is a varying dc signal matters not. It’s the change in cap energy state and how fast it changes when being multiplied by the bulk acceleration that drives the magnitude of the M-E’s mass/energy fluctuations.


"So it's an inertial dampening field? Unlike the sci-fi use of the term, however, it shields the mass from the universal gravity/inertia?"

If you are referring to the M-E impulse term as a transient inertia damping field or spherical kink in the G/I field around the local mass that propagates away at the speed of c both forwards and backwards in time, my answer is yes it is. In other words, the G/I field IS the source of inertia. If you shield this field from the local mass, the magnitude of the local mass has to be reduced or cancelled totally if the shielding is large enough.

As to your second question, I don't have an answer for you other than it might be possible for the M-E to explain the observed satellite data if the proposed solar system E- and B-fields were aligned appropriately and if the satellites in question met the requirements of the M-E, which of course is TBD.


Does this mean that whatever space craft is being propelled by the mach-thruster is not gaining mass the closer it approaches c, unlike conventional thrusters?

Only if the entire vehicle was undergoing mass fluctuations. As currently built, these M-E drives only affect the mass density of the cap dielectrics in the drives themselves, while the rest of the vehicle would undergo the usual relativistic effects as the vehicle's velocity approached c. The way you get around that problem is to develop the M-E wormhole term into a working FTL drive.


The MLT (Mach Lorentz Thruster) thrust varies with the sine function of the angle between the E-field and the B-field in the cap dielectric, so one can vary the MLT's thrust smoothly between zero thrust to say max +X axis thrust at 90 deg, back to zero thrust at 180 degrees, then on to a peak -X thrust at 270 degrees, and then back down to zero at 360/0 degrees. How smooth this thrust control is depends on the granularity of your phase control system.
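The phase-controlled thrust reversal described above is just a sine law; a minimal helper (my own illustration, not code from March's hardware) makes the zero crossings and sign reversals explicit:

```python
import math

def mlt_thrust(peak_thrust_n, phase_deg):
    # Thrust scales with sin(angle between the E-field and B-field in the
    # cap dielectric): zero at 0 deg, +max at 90, zero at 180, -max at
    # 270, and back to zero at 360/0 deg.
    return peak_thrust_n * math.sin(math.radians(phase_deg))
```

Sweeping `phase_deg` smoothly from 0 to 360 reproduces the zero / +max / zero / -max / zero cycle described in the text, with the thrust resolution set by the granularity of the phase control.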


Any cites/links/papers/authors on QVF [Quantum Vacuum Fluctations] besides the one you already provided? Any feedback appreciated.

You might start with the "Hydrodynamics of the Vacuum" by P. M. Stevenson from Rice University. Also of interest is a STAIF-2006 paper by Harold White and Eric Davis entitled "The Alcubierre Warp Drive in Higher Dimensional Spacetime" by H. G. White-1 and E. W. Davis-2, along with Harold (Sonny) White's STAIF-2007 Paper on QVF/MHD Thrusters.


Hydrodynamics of the Vacuum [32 page pdf]

Hydrodynamics is the appropriate “effective theory” for describing any fluid medium at sufficiently long length scales. This paper treats the vacuum as such a medium and derives the corresponding hydrodynamic equations. Unlike a normal medium the vacuum has no linear sound-wave regime; disturbances always “propagate” nonlinearly. For an “empty vacuum” the hydrodynamic equations are familiar ones (shallow water-wave equations) and they describe an experimentally observed phenomenon — the spreading of a clump of zero-temperature atoms into empty space. The “Higgs vacuum” case is much stranger; pressure and energy density, and hence time and space, exchange roles. The speed of sound is formally infinite, rather than zero as in the empty vacuum. Higher-derivative corrections to the vacuum hydrodynamic equations are also considered. In the empty-vacuum case the corrections are of quantum origin and the post-hydrodynamic description corresponds to the Gross-Pitaevskii equation. I conjecture the form of the post-hydrodynamic corrections in the Higgs case. In the 1+1-dimensional case the equations possess remarkable ‘soliton’ solutions and appear to constitute a new exactly integrable system.

There are two main points that I wish to emphasize: (i) Hydrodynamics in the empty vacuum case makes perfect sense and describes an experimentally observed phenomenon, the free expansion of an atomic Bose-Einstein condensate when the atom-trap potential is turned off. (ii) Hydrodynamics in the Higgs-vacuum case gives very strange and exciting behaviour as a consequence of the fact that the speed of sound in the Higgs vacuum is formally infinite. The Higgs vacuum is a medium that is both ultrarelativistic (pressure ≫ energy density) and ultra-quantum, being a Bose-Einstein condensate with almost all its particles in the same quantum state. Not surprisingly, perhaps, its properties are very different from those of familiar media.


The Alcubierre Warp Drive in Higher Dimensional Spacetime, 8 page pdf

The canonical form of the Alcubierre warp drive metric is considered to gain insight into the mathematical mechanism triggering the effect. A parallel with the Chung-Freese spacetime metric is drawn to demonstrate that the spacetime expansion boost can be considered a 3 + 1 on-brane simplification for higher dimensional geometric effects. The implications for baryonic matter of higher dimensional spacetime, in conjunction with the Alcubierre metric, are used to illustrate an equation of state for dark energy. Finally, this combined model will then be used to outline a theoretical framework for negative pressure (an alternative to negative energy) and a conceptual lab experiment is described.


More from Paul March:
Let’s look at thruster energy to thrust efficiencies. The best chemical rocket thrusters, as exemplified by the Space Shuttle Main Engine (SSME), have an Isp of ~453 seconds and a thruster efficiency of ~2.5x10^-4 Newtons per Watt. The current VX-200 VASIMR engine by the Ad Astra company has an Isp of ~5,000 seconds and a net energy to thrust efficiency of ~5.0 Newtons / 200 kW = 2.5x10^-5 Newtons per Watt due to its high Isp figure driven by its limited power supply. For reference, the highest performing turbofan engine used on the wide body jets has an Isp of ~5,000 seconds and a thruster energy efficiency of ~2.0x10^-3 Newtons per Watt, but it is only operational below 40k feet altitude here on Earth, where it has access to its external propellant supply.

Now, my proof of principle Mach-2MHz Mach Lorentz Thruster (MLT), on the other hand, had a thruster energy efficiency of 2.9x10^-4 Newtons/Watt, which is already equivalent to the best operational chemical rocket (SSME). And since there are no currently known theoretical restrictions on an MLT’s maximum thrust efficiency other than those placed on it by its engineering details, what is obtainable for the best MLT or M-E drive performance is only limited by the available dielectric material science and power electronics of the day in question. It may take up to 100 years to reach this 1.0 N/W efficiency level through a process of continuous improvements, much like how the internal combustion engine was improved over the 20th century, but theoretically the road is open for this kind of incremental development process.
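The efficiency figures quoted above are easy to sanity-check in a few lines (values copied straight from the text):

```python
# Thruster energy-to-thrust efficiencies quoted in the text, in N per W.
ssme = 2.5e-4           # Space Shuttle Main Engine (chemical rocket)
vasimr = 5.0 / 200e3    # VX-200 VASIMR: ~5 N from 200 kW of input power
turbofan = 2.0e-3       # high-bypass turbofan (air-breathing, below 40k ft)
mlt = 2.9e-4            # Paul March's proof-of-principle Mach Lorentz Thruster
```

The 5 N / 200 kW figure does work out to 2.5x10^-5 N/W, and the claimed MLT figure does sit slightly above the SSME while remaining well below the air-breathing turbofan.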

Given that a capacitor dielectric can vary its total mass cyclically over a period of time around an average value, and you can apply an external force to the dielectric so it pushes the dielectric when it is heavy and then pulls on it when it's light in the same direction, you create an unbalanced force in the direction of the pull light force. This is force rectification of a time varying mass. If you want to reverse this net unbalanced force due to the time varying mass, you simply reverse the push/pull order, so you push light and then pull heavy.
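The push-heavy/pull-light rectification argument can be checked numerically. The sketch below is my own toy model, with an arbitrary sinusoidal mass fluctuation rather than any actual M-E physics: with no mass fluctuation the cycle averages to zero, while a synchronized fluctuation leaves a net velocity change per cycle in the pull-light direction.

```python
import numpy as np

def delta_v_per_cycle(m0, dm, f0, n=200_000):
    # Apply a push/pull force F(t) = f0*sin(wt) to a mass whose value
    # fluctuates in sync, m(t) = m0 + dm*sin(wt): push while heavy,
    # pull while light.  Integrate a(t) = F(t)/m(t) over one cycle.
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    dt = t[1] - t[0]
    a = f0 * np.sin(t) / (m0 + dm * np.sin(t))   # instantaneous acceleration
    return a.sum() * dt                          # net delta-v over one cycle
```

With `dm = 0` the integral vanishes by symmetry; with `dm > 0` the net delta-v is negative, i.e. in the direction of the pull-light stroke, exactly as the text describes, and reversing the push/pull order flips the sign.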

The M-E drives push/pull off the mostly distant mass/energy in the universe via the cosmological gravity/inertia or gravinertia (G/I) field that gives rise to Newtonian inertia per Mach's principle. As to the origins of the momentum and energy acquired by the M-E Drive, it comes from the kinetic energy of the various parts of the universe that create this G/I field, which IMO reduces the average temperature of the universe by a very, very small percentage required to balance the energy books. However, since the 5% of the mass/energy that is standard mass in the universe is composed of over 1x10^80 atoms and ions, wiggling a block of dielectric mass that only contains at most ~1x10^26 ions is no big deal...



GE Working on Pulse Detonation For 65% Efficient Natural Gas Power Generation Turbines

MIT Technology Review interview with Michael Idelchik, vice president of advanced technologies at GE Research.

Pulse detonation technology, or supersonic combustion. With this one, rather than burning fuel at constant pressure, you let the pressure rise, so basically you generate a shock wave; you're releasing heat in a detonation. An existing turbine burns at constant pressure. With detonation, pressure is rising, and the total energy available for the turbine increases. We see the potential of 30 percent fuel-efficiency improvement. Of course realization, including all the hardware around this process, would reduce this.

I think it will be anywhere from 5 percent to 10 percent. That's percentage points--say from 59 to 60 percent efficient to 65 percent efficient. We have other technology that will get us close [to that] but no other technology that can get so much at once. It's very revolutionary technology.

The first application will definitely be land-based--it will be power generation at a natural-gas power plant.

You detonate anywhere from 50 to 80 hertz. Then you have unsteady flow going into the turbine. So you need to rethink how your turbine works. You don't have a steady flow anymore.

You have to look at the mechanical stability, vibrational analysis. You have to protect the compressor; detonation happens in both directions, so you have to close one end. So controls and synchronization of the detonation chambers become a really big challenge as well. You have to absorb the energy from detonation and convert it to shaft horsepower. That has to be done very well, otherwise you can lose everything in the turbine. What blade design and nozzle design will allow you to extract the most horsepower?




Multiscale models and simulations--from nanoseconds all the way up to 20 to 30 milliseconds--are needed to create this kind of system. Evolution of valve technology and materials needs to go with that. Understanding how to design a robust detonation tube, how to produce detonation consistently and operate within the load range of the turbine, from idle to max power.


Google Chrome Frame Plugin Makes IE8 9.6 Times Faster for JavaScript

Microsoft's Internet Explorer zips through JavaScript nearly 10 times faster than usual when Google's new Chrome Frame plug-in is partnered with the browser, benchmark tests show.

According to tests run by Computerworld, Internet Explorer 8 (IE8) with the plug-in was 9.6 times faster than IE8 on its own. Computerworld ran the SunSpider JavaScript benchmark suite three times each for IE8 with Chrome Frame, and IE8 without the plug-in, then averaged the scores.

Chrome Frame must be installed by the browser user, but it can be triggered automatically by Web site and application developers using a single HTML tag on their sites or in their applications' code. Until those sites and applications are modified to call on Chrome Frame, users can manually force IE to use the plug-in by prefacing the URL of a site with the characters "cf:" (sans the quotation marks).

That was how Computerworld obtained the impressive SunSpider results for IE8.
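Computerworld's averaging method amounts to the following (the run times shown in the test are made-up placeholders, not Computerworld's published scores):

```python
def sunspider_speedup(baseline_ms, contender_ms):
    # SunSpider reports total runtime in milliseconds (lower is better).
    # Average the runs of each configuration, then take the ratio of the
    # baseline average to the contender average to get the speedup factor.
    avg = lambda runs: sum(runs) / len(runs)
    return avg(baseline_ms) / avg(contender_ms)
```

Three runs is a small sample for a JavaScript benchmark; averaging merely smooths out run-to-run jitter rather than eliminating it.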

The Chrome Frame plug-in works with IE6, IE7 or IE8 on Windows XP or Windows Vista. It's available for downloading from Google's site.

Microsoft claims in an interview on Ars Technica that Chrome Frame creates security holes.



"With Internet Explorer 8, we made significant advancements and updates to make the browser safer for our customers," a Microsoft spokesperson told Ars. "Given the security issues with plugins in general and Google Chrome in particular, Google Chrome Frame running as a plugin has doubled the attack area for malware and malicious scripts. This is not a risk we would recommend our friends and families take." The spokesperson also referred us to the latest phishing and malware data from NSS Labs, the same security company that found IE8 was the most secure browser in August 2009 via two Microsoft-sponsored reports.


Ars Technica:
Plugins and add-ons are definitely a huge security issue; they usually remain unpatched longer than most and often end up doing more damage than vulnerabilities in the actual browser. As for IE + Google Chrome Frame potentially allowing for double the damage because the browser mutant would be open to a wider range of attacks, we're going to have to call foul. Somehow we doubt there is a significant amount of malware specifically targeting Chrome, and for whatever exists, we're pretty sure most would fail when encountering IE + Google Chrome Frame. These Web attacks would be written to be able to circumvent Chrome's security measures and would simply not expect Internet Explorer's security layers.


What about the part about Chrome having security issues in particular? Soon after Chrome was first released in September 2008, vulnerabilities were discovered and loudly trumpeted. The new browser was quickly labeled insecure days after it was made available, and remained so until a patched version was released.

After that though, Google made sure to stay on top of things, and it has paid off. In March 2009, for example, Chrome was the only browser left standing after day one of the famous Pwn2Own contest, where security researchers competed to exploit vulnerabilities in web browsers, while Firefox, Safari, and Internet Explorer were all successfully compromised. Microsoft argues that Chrome only remained unscathed because nobody attempted to exploit it, but the fact remains that none of the researchers had vulnerabilities for Chrome in mind before going into the contest.



Help the Antiaging SENS project by Commenting on 3Banana

Here is the link to the 3banana page where a comment can be added in support of the antiaging project SENS.

Winning the contest via the most comments will provide more valuable publicity.



Graphene Improves Cheap titanium dioxide-based batteries and a Flash of Light Turns Graphene into A Biosensor

1. Graphene enhances titanium dioxide-based batteries.

Researchers would like to develop lithium-ion batteries using titanium dioxide, an inexpensive material (instead of rare earth metals; China controls most of the current rare earth metal reserves).

Department of Energy's Pacific Northwest National Laboratory's Gary Yang and colleagues added graphene, sheets made up of single carbon atoms, to titanium dioxide. When they compared how well the new combination of electrode materials charged and discharged electric current, the electrodes containing graphene outperformed the standard titanium dioxide by up to three times. Graphene also performed better as an additive than carbon nanotubes.

2. Department of Energy's Pacific Northwest National Laboratory's researchers also discovered that a flash of light turns graphene into a biosensor



Disease diagnosis, toxin detection and more are possible with DNA-graphene nanostructure

Biomedical researchers suspect graphene, a novel nanomaterial made of sheets of single carbon atoms, would be useful in a variety of applications. But no one had studied the interaction between graphene and DNA, the building block of all living things. To learn more, PNNL's Zhiwen Tang, Yuehe Lin and colleagues from both PNNL and Princeton University built nanostructures of graphene and DNA. They attached a fluorescent molecule to the DNA to track the interaction. Tests showed that the fluorescence dimmed significantly when single-stranded DNA rested on graphene, but that double-stranded DNA only darkened slightly – an indication that single-stranded DNA had a stronger interaction with graphene than its double-stranded cousin. The researchers then examined whether they could take advantage of the difference in fluorescence and binding. When they added complementary DNA to single-stranded DNA-graphene structures, they found the fluorescence glowed anew. This suggested the two DNAs intertwined and left the graphene surface as a new molecule.

DNA's ability to turn its fluorescent light switch on and off when near graphene could be used to create a biosensor, the researchers propose. Possible applications for a DNA-graphene biosensor include diagnosing diseases like cancer, detecting toxins in tainted food and detecting pathogens from biological weapons. Other tests also revealed that single-stranded DNA attached to graphene was less prone to being broken down by enzymes, which makes graphene-DNA structures especially stable. This could lead to drug delivery for gene therapy. Tang will discuss this research and some of its possible applications in medicine, food safety and biodefense.




Zenn Plans Only EEStor Drive Train

Zenn will now focus on acting as a supplier to the auto industry. Working with secretive EEStor, Zenn plans to make an electric drive train, the ZENNergy Drive system, which can deliver those oh-so-controversial performance claims from EEStor: 10 times the energy of lead-acid batteries at one-tenth the weight and half the price, with the ability to move a car 400 kilometers after a 5-minute charge.

Increased competition in the electric vehicle market played a role in Zenn’s decision to not make the cityZENN car.

A press release from Zenn Motor Company.

Greencarcongress indicates that Zenn Motor and EEStor still say first production units of the EESU are expected by the end of the year (2009).

The EESU is a high-power-density multi-layered barium titanate ceramic ultracapacitor that the companies say is expected to provide energy densities of more than 450 Wh/kg and more than 700 Wh/L; charge in minutes; and have extremely long life.
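A quick back-of-envelope check shows what those claims imply. The 150 Wh/km vehicle consumption figure below is my own assumption (typical of a small EV), not a number from the companies:

```python
# Energy figures from the text; consumption is an assumed placeholder.
range_km = 400                       # claimed range after a 5-minute charge
consumption_wh_per_km = 150          # ASSUMED small-EV energy use per km
energy_density_wh_per_kg = 450       # claimed EESU gravimetric density

pack_wh = range_km * consumption_wh_per_km       # energy the pack must hold
pack_kg = pack_wh / energy_density_wh_per_kg     # implied pack mass
charge_power_kw = pack_wh / (5 / 60) / 1000      # power for a 5-minute charge
```

Under these assumptions the pack holds about 60 kWh and weighs roughly 133 kg, but refilling it in 5 minutes requires around 720 kW of charging power, far beyond ordinary grid connections; that, as much as the energy density, is what makes the claims controversial.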


Maxwell Technologies and Energ2 Ultracapacitors

Maxwell Technologies (another ultracapacitor company) signed a second deal for thousands-to-tens-of-thousands of units the first year [2010], with significant expansion of the contract in the second year.

Maxwell ultracapacitors are to be used to power stop-start systems, which turn off the internal combustion engine when the vehicle slows or coasts. When the driver accelerates, the ultracapacitor provides bursts of power to re-start the engine, minimizing fuel use and relieving the vehicle's battery of high currents and repeated cycling that can shorten battery life. "It can improve fuel efficiency on a basic level in the 5 to 10 percent range," Sund said. "In heavy stop-start urban driving, it could get up to 20 percent or more savings."




Seattle startup EnerG2 plans to begin shipping ultracapacitors this year for electric rail systems and heavy-duty vehicles. Energ2 article from Venturebeat in late 2008

How to Guide for Controlling the Structure of Nanoparticles and Another Guide for Nanotubes

1. North Carolina State University engineers have produced a ‘How-To’ Guide for Controlling the Structure of Nanoparticles.

Researchers from North Carolina State University have learned how to consistently create hollow, solid and amorphous nanoparticles of nickel phosphide, which has potential uses in the development of solar cells and as a catalyst for removing sulfur from fuel. Their work can now serve as a “how-to” guide for other researchers to controllably create hollow, solid and amorphous nanoparticles, in order to determine what special properties they may have.

The study provides a step-by-step analysis of how to create solid or hollow nanoparticles that are all made of the same material. “It’s been known that these structures could be made,” says Dr. Joe Tracy, an assistant professor of materials science and engineering at NC State and co-author of the paper, “but this research provides us with a comprehensive understanding of nanostructural control during nanoparticle formation, showing how to consistently obtain different structures in the lab.” The study also shows how to create solid nanoparticles that are amorphous, meaning they do not have a crystalline structure.


Abstract “Nickel Phosphide Nanoparticles with Hollow, Solid, and Amorphous Structures”
Published: Online, September 16, 2009, Chemistry of Materials

Abstract: Conversion of unary metal nanoparticles (NPs) upon exposure to O, S, Se, and P precursors usually produces hollow metal oxide, sulfide, selenide, or phosphide NPs through the Kirkendall effect. Here, nanostructural control of mixed-phase Ni2P/Ni12P5 (represented as NixPy) NPs prepared through thermolysis of nickel acetylacetonate using trioctylphosphine (TOP) as a ligand and phosphorus precursor is reported. The P:Ni mole ratio controls the NP size and is the key factor in determining the nanostructure. For P:Ni mole ratios of 1-3, Ni NPs form below 240 °C and subsequently convert to crystalline-hollow NixPy NPs at 300 °C. For higher P:Ni ratios, a Ni-TOP complex forms that requires higher temperatures for NP growth, thus favoring direct formation of NixPy rather than Ni. Consequently, for P:Ni mole ratios greater than 9, amorphous-solid NixPy NPs form at 240 °C and become crystalline-solid NixPy NPs at 300 °C. For intermediate P:Ni mole ratios of ~6, both growth mechanisms give rise to a mixture of hollow and solid NixPy NPs. Similar results have been obtained using tributylphosphine or triphenylphosphine as the phosphorus source, but trioctylphosphine oxide cannot serve as a phosphorus source.
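The recipe in the abstract boils down to a lookup on P:Ni mole ratio and temperature; the function below is an illustrative paraphrase of the published conditions, not lab guidance:

```python
def nixpy_structure(p_ni_mole_ratio, temp_c):
    # Conditions paraphrased from the Chemistry of Materials abstract:
    # the P:Ni mole ratio, together with temperature, selects the
    # nanostructure of the resulting NixPy nanoparticles.
    if p_ni_mole_ratio <= 3:
        # Ni nanoparticles form below 240 C, then convert at ~300 C.
        return "crystalline-hollow NixPy" if temp_c >= 300 else "Ni nanoparticles"
    if p_ni_mole_ratio >= 9:
        # A Ni-TOP complex delays growth, so NixPy forms directly.
        if temp_c >= 300:
            return "crystalline-solid NixPy"
        if temp_c >= 240:
            return "amorphous-solid NixPy"
        return "no nanoparticle growth yet"
    # Intermediate ratios (~6): both growth mechanisms operate.
    return "mixture of hollow and solid NixPy"
```

The hollow-versus-solid switch is the striking part: the same three ingredients give qualitatively different nanostructures purely as a function of the mole ratio.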


2. Case Western Reserve University researchers mixed metals commonly used to grow nanotubes and found that the composition of the catalyst can control the chirality. [a recipe for controlling carbon nanotube growth]



Linking catalyst composition to chirality distributions of as-grown single-walled carbon nanotubes by tuning NixFe1-x nanoparticles



Chirally pure single-walled carbon nanotubes (SWCNTs) are required for various applications ranging from nanoelectronics to nanomedicine. Although significant efforts have been directed towards separation of SWCNT mixtures, including density-gradient ultracentrifugation, chromatography and electrophoresis, the initial chirality distribution is determined during growth and must be controlled for non-destructive, scalable and economical production. Here, we show that the chirality distribution of as-grown SWCNTs can be altered by varying the composition of NixFe1-x nanocatalysts. Precise tuning of the nanocatalyst composition at constant size is achieved by a new gas-phase synthesis route based on an atmospheric-pressure microplasma. The link between the composition-dependent crystal structure of the nanocatalysts and the resulting nanotube chirality supports epitaxial models and is a step towards chiral-selective growth of SWCNTs.


20 pages of supplemental information

AIDS Vaccine Progress: Prevention of 30% of HIV Infections

From the BBC News and many other sources, scientists say they have developed a vaccine that cuts the risk of HIV infection by more than 30%.

It is the first time a vaccine has been shown to give even this limited protection against the virus that causes Aids.

The vaccine was tried out on 16,000 volunteers in Thailand. The vaccine trial, which was funded by the US army, involved a combination of two vaccines that individually had proved ineffective.

The World Health Organisation says it offers the promise of a safe AIDS/HIV vaccine eventually becoming available for people around the world.



Mobile Money Could Benefit World Poor as Much as Mobile Phones Already Have

The Economist magazine has a special feature on how mobile money could benefit the lives of the world's poor as much as mobile phones already have.

Mobile phones have become tools of economic empowerment for the world’s poorest people. These phones compensate for inadequate infrastructure, such as bad roads and slow postal services, allowing information to move more freely, making markets more efficient and unleashing entrepreneurship. All this has a direct impact on economic growth: an extra ten phones per 100 people in a typical developing country boosts GDP growth by 0.8 percentage points, according to the World Bank. More than 4 billion handsets are now in use worldwide, three-quarters of them in the developing world. Even in Africa, four in ten people now have a mobile phone.


An extra ten phones per 100 people adding 0.8 percentage points implies, by linear extrapolation, that everyone having a mobile phone would be roughly an 8-percentage-point boost to GDP growth over no one having a mobile phone.
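The World Bank figure quoted above can be turned into a one-line calculator (the linear extrapolation is a simplification; the real relationship surely flattens at the extremes):

```python
def gdp_growth_boost_pp(extra_phones_per_100):
    # World Bank figure quoted in the text: an extra 10 phones per 100
    # people adds ~0.8 percentage points of GDP growth in a typical
    # developing country.  Extrapolated linearly here.
    return 0.8 * (extra_phones_per_100 / 10)
```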

Mobile Money

Mobile money allows cash to travel as quickly as a text message. Across the developing world, corner shops are where people buy vouchers to top up their calling credit. Mobile-money services allow these small retailers to act rather like bank branches. They can take your cash, and (by sending a special kind of text message) credit it to your mobile-money account. You can then transfer money (again, via text message) to other registered users, who can withdraw it by visiting their own local corner shops. You can even send money to people who are not registered users; they receive a text message with a code that can be redeemed for cash.
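The deposit/transfer/redeem flow described above can be sketched as a toy ledger. This is purely illustrative; real systems such as M-PESA add authentication, fees, agent float management and regulatory controls:

```python
import secrets

class MobileMoney:
    # Toy model of the text-message money flow: agents (corner shops)
    # take cash in, registered users transfer by "text message", and
    # unregistered recipients redeem a one-time code for cash.
    def __init__(self):
        self.balances = {}   # phone number -> account balance
        self.codes = {}      # one-time pickup code -> amount waiting

    def deposit(self, phone, amount):
        # Cash handed to an agent is credited to the mobile account.
        self.balances[phone] = self.balances.get(phone, 0) + amount

    def transfer(self, sender, recipient, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        if recipient in self.balances:       # registered user
            self.balances[recipient] += amount
            return None
        code = secrets.token_hex(4)          # unregistered: issue a code
        self.codes[code] = amount
        return code

    def redeem(self, code):
        # An unregistered recipient cashes out the code at any agent.
        return self.codes.pop(code)
```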

By far the most successful example of mobile money is M-PESA, launched in 2007 by Safaricom of Kenya. It now has nearly 7m users—not bad for a country of 38m people, 18.3m of whom have mobile phones. M-PESA first became popular as a way for young, male urban migrants to send money back to their families in the countryside. It is now used to pay for everything from school fees (no need to queue up at the bank every month to hand over a wad of bills) to taxis (drivers like it because they are carrying around less cash). Similar schemes are popular in the Philippines and South Africa.




Extending mobile money to other poor countries, particularly in Africa and Asia, would have a huge impact. It is a faster, cheaper and safer way to transfer money than the alternatives, such as slow, costly transfers via banks and post offices, or handing an envelope of cash to a bus driver. Rather than spend a day travelling by bus to the nearest bank, recipients in rural areas can spend their time doing more productive things. The incomes of Kenyan households using M-PESA have increased by 5-30% since they started mobile banking, according to a recent study.

Mobile money also provides a stepping stone to formal financial services for the billions of people who lack access to savings accounts, credit and insurance. Although for regulatory reasons M-PESA accounts do not pay interest, the service is used by some people as a savings account. Having even a small cushion of savings to fall back on allows people to deal with unexpected expenses, such as medical treatment, without having to sell a cow or take a child out of school. Mobile banking is safer than storing wealth in the form of cattle (which can become diseased and die), gold (which can be stolen), in neighbourhood savings schemes (which may be fraudulent) or by stuffing banknotes into a mattress. In the Maldives many people lost their savings in the tsunami of 2004; it hopes to introduce universal mobile banking next year.


The Economist special report on telecoms in emerging markets.

In the year to March 2009 an additional 128m people signed up for mobile phones in India, 89m in China and 96m across Africa, according to TeleGeography, a telecoms consultancy. The total number of mobile-phone users will reach 6 billion by 2013, up from about 4 billion in 2009, according to the GSMA, an industry group, with half of the new users in China and India alone.
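A quick back-of-the-envelope check on those figures (the implied growth rate below is derived from the 4-billion-to-6-billion projection and is not stated in the source):

```python
# New sign-ups in the year to March 2009 (millions), per TeleGeography.
india, china, africa = 128, 89, 96
print(india + china + africa)  # 313 million new users in these three markets alone

# Compound annual growth implied by going from ~4bn (2009) to 6bn (2013).
cagr = (6 / 4) ** (1 / 4) - 1
print(f"{cagr:.1%}")           # 10.7% per year
```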

The first trend is that operators have developed new business models and industry structures that enable them to make a profit serving low-spending customers whom Western firms would not bother with. Indian operators have led the way, and some aspects of the “Indian model” are now being adopted by operators in other countries, both rich and poor. The spread of the Indian model could help bring mobile phones within reach of an even larger number of the world’s poor.

The second trend is the emergence of China’s two leading telecoms-equipment-makers, Huawei and ZTE, which have entered the global stage in the past five years. Initially dismissed as low-cost, low-quality producers, they now have a growing reputation for quality and innovation, prompting a shake-out among the incumbent Western equipment-makers. The most recent victim was Nortel, once Canada’s most valuable company, which went bust in January. Having long concentrated on emerging markets, Huawei and ZTE are well placed to expand their market share as subscriber numbers continue to grow and networks are upgraded from second-generation (2G) to third-generation (3G) technology, notably in China and India.

The third trend is the development of new phone-based services, beyond voice calls and basic text messages, which are now becoming feasible because mobile phones are relatively widely available. In rich countries most such services have revolved around trivial things like music downloads and mobile gaming. In poor countries data services such as mobile-phone-based agricultural advice, health care and money transfer could provide enormous economic and developmental benefits.


RELATED READING
Huawei is launching a mobile-phone HSPA+ system with download speeds of 56 megabits per second. HSPA+ is expected to be used in most cellphones by 2015, taking over from regular HSPA, so this could be the speed of internet downloads in Africa in 2015. What will the broadband speeds and standards be in the United States by then? The official US definition of broadband is a very slow speed.
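To put 56 Mbit/s in perspective, a rough transfer-time comparison (the 700 MB file size is illustrative, and the 0.768 Mbit/s figure is assumed here as roughly the US regulatory broadband floor of the period):

```python
def download_seconds(file_megabytes, link_megabits_per_s):
    """Idealized transfer time, ignoring protocol overhead."""
    return file_megabytes * 8 / link_megabits_per_s

# A 700 MB file over HSPA+ at 56 Mbit/s...
print(round(download_seconds(700, 56)))     # 100 seconds
# ...versus a 0.768 Mbit/s link (assumed US broadband floor).
print(round(download_seconds(700, 0.768)))  # 7292 seconds, about two hours
```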

The green revolution continues.

Technology adoption and economic growth is happening across Africa.

Technology has helped and is helping the poor.


September 23, 2009

Robust and Scalable Flux Qubit


D-Wave Systems researchers have written a paper describing a novel rf-SQUID flux qubit that is robust against fabrication variations in Josephson junction critical currents and device inductance. Experimental results were shown to be in agreement with predictions of a quantum mechanical Hamiltonian whose parameters were independently calibrated, thus justifying the identification of this device as a flux qubit.

22-page PDF: Experimental Demonstration of a Robust and Scalable Flux Qubit

Three key conclusions:
1) The CCJJ rf-SQUID is a robust and scalable device in that it allows for in-situ correction of parametric variations in Josephson junction critical currents and device inductance, both within and between flux qubits, using only static flux biases.
2) The measured flux qubit properties, namely the persistent current and the tunneling energy, agree with the predictions of a quantum mechanical Hamiltonian whose parameters have been independently calibrated, thus justifying the identification of this device as a flux qubit.
3) It has been experimentally demonstrated that the low-frequency flux noise in this all-Nb-wiring flux qubit is comparable to that of the best all-Al-wiring devices reported in the literature.
Taken together, these three conclusions represent a significant step forward in the development of useful large-scale superconducting quantum information processors.
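In the two-level limit, the "quantum mechanical Hamiltonian" referred to above takes the standard effective flux-qubit form. The notation below is the conventional one for flux qubits and is an assumption, not a quotation from the paper:

```latex
H = -\tfrac{1}{2}\bigl(\epsilon\,\sigma_z + \Delta_q\,\sigma_x\bigr),
\qquad \epsilon = 2\,\lvert I_p^{q}\rvert\,\Phi_x^{q}
```

where $\lvert I_p^{q}\rvert$ is the persistent current, $\Delta_q$ the tunneling energy, and $\Phi_x^{q}$ the external flux bias measured from the qubit degeneracy point. Independently calibrating the persistent current and tunneling energy and matching the measured spectrum to this form is what justifies calling the device a flux qubit.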


Intel Switching to Optical Connections for Computer Components in 2010, Starting at 10 Gigabits per Second, and Demonstrated the Larrabee Chip

MIT Technology Review reports that starting in 2010 Intel plans to sell inexpensive cables with fiber-optic-caliber speed to connect, for instance, a laptop and an external hard drive, or a phone and a desktop computer, speeding up communication both within computers and between them.

By 2010, says Dadi Perlmutter, vice president of Intel's mobility group, the company hopes to ship an optical cable called Light Peak that will be able to zip 10 gigabits of data per second from one gadget to another, a rate equivalent to transferring a Blu-ray movie from a computer to a mobile video player in 30 seconds. A single Light Peak cable will also be capable of transporting different types of data simultaneously, meaning it will be possible to back up a hard drive, transfer high-definition video, and connect to a network with just one line.
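The Blu-ray comparison checks out arithmetically: 30 seconds at 10 Gbit/s moves about 37.5 GB, in the range of a dual-layer Blu-ray disc (up to 50 GB).

```python
link_gbps = 10                        # first-generation Light Peak line rate
seconds = 30
gigabytes = link_gbps * seconds / 8   # 8 bits per byte
print(gigabytes)                      # 37.5
```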

At both ends of a Light Peak cable are chips that contain devices that produce light, encode data in it, and send it on its way. The chips can also amplify incoming signals and convert the light to an electrical signal that can be interpreted by gadgets. The first generation of Light Peak will use chips made with standard optical materials such as gallium arsenide. However, to truly make optical cables cheap enough to replace copper, future versions of Light Peak, which will handle 40-gigabits-per-second and 100-gigabits-per-second transfer rates, will most likely need to rely on silicon-based optical chips, a product of the maturing field of silicon photonics. Silicon photonics researchers hope to transform computing by making high-bandwidth connectors cheaper than ever before, not just in cables, but also eventually within electronic motherboards and microprocessors.

"This will be a long-term transition," says Perlmutter, referring to the fact that it takes years to develop and adopt standards for new connecting technologies.




The first generation of Light Peak cables will use the same sort of $75 optical chips found in telecommunications devices. But Intel has employed some tricks to drive down cost by more than a factor of 10, says Victor Krutul, director of Intel's optical I/O team. For one, the chips don't need to transmit data over the distances of telecom devices. For another, they don't need to last as long or withstand harsh conditions. Because telecom chips in consumer cables won't need to last for decades or withstand heat and humidity, manufacturing standards can be relaxed and allow the chips to be made more inexpensively.


EETimes reports that Intel Corp. demonstrated a working version of its first discrete graphics processor, Larrabee, at the Intel Developer Forum. Intel would not say when it plans to release Larrabee, which is expected to compete with graphics chips from AMD and Nvidia.

Semiaccurate.com has pictures of the demonstration system and other pictures from the Intel Developer Forum.

Larrabee is the one everyone cares about, and it was shown off publicly for the first time today. The machine it was running on is a six-core Gulftown computer, the Westmere Extreme chip and ultimately next-gen server CPU. It was running the game Quake Wars, ported to do ray tracing.

Waves moved, geometry was not static, and in general it worked. Instead of multiple four-core chips, the new demo was running on the 'GPU', although Intel would not call it that. The only thing on the CPU was the game engine itself, exactly what you would expect from a CPU/GPU machine. As we said earlier, B0 silicon, the bug-fixed Larrabee, taped out a month ago and would possibly be shown at IDF.

Sadly, it has not come back from the fabs yet, so the demo was running on Ax silicon, most likely A6. It worked, but didn't seem to be a huge step forward from four quad-core Xeons. Oh wait, one GPU running at sub-10% of hoped-for performance beating 16 Xeon cores is a huge step forward.




Richard Walker, Shadow Robotics, Interview by Sander Olson



Here is the Richard Walker interview. Richard Walker is the technical director of Shadow Robotics, a British robotics company. Shadow Robotics has developed a dexterous robotic hand with capabilities and a range of motion similar to those of a human hand. The company is also developing other robotic limbs such as arms and legs, and is now researching embedding intelligence into the hand as well as putting skin on the limbs. As the cost of robots decreases and the cost of human labor increases, robots will start to become ubiquitous in society.

The Shadow Robot website

Question 1: Tell us about the C6M smart motor hand. What range of motion does it have, and how reliably does it operate?



Answer: The C6M Hand is our newest development in Dextrous Manipulation. We took the C6 Air Muscle hand, which with 24 movements has the same movement capability as the human hand, and replaced the Air Muscle actuation with our own intelligent motor control system. That produces a Hand with the same range of movement as the C6 Air Muscle hand, but with a shorter forearm. The Smart Motor units we use have integrated force sensing, so we can precisely control applied forces.

Question 2 : How sophisticated are the sensors embedded in these robotic hands? How many different forms of stimuli can they sense?

Answer: We measure position of each joint using a Hall Effect sensor and rotary magnet. This is accurate to about 1/3 of a degree. We also measure either air pressure in the muscles, or force at the motors, which lets us know how hard a grip is being exerted. We've produced various designs of tactile sensing in the fingertip, with the best ones so far providing 34 sensing regions on each finger, good down to a few grams and up to about 1kg of load.
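The joint-position sensing Walker describes can be illustrated with a small sketch. Everything here is an assumption for illustration: the ADC bit width, the 90-degree joint range, and the function names are not Shadow's actual design, which the interview does not specify.

```python
# Hypothetical joint-angle readout: a Hall-effect sensor and rotary magnet
# sampled through an ADC. Bit width and joint travel are assumed values.
ADC_BITS = 10
JOINT_RANGE_DEG = 90.0  # assumed travel of a finger joint

def counts_to_degrees(counts):
    """Map a raw ADC reading linearly onto the joint's angular range."""
    return counts * JOINT_RANGE_DEG / (2 ** ADC_BITS - 1)

resolution = JOINT_RANGE_DEG / (2 ** ADC_BITS - 1)
print(f"{resolution:.3f} degrees per count")  # 0.088 -- inside the 1/3-degree figure
print(round(counts_to_degrees(512), 2))       # 45.04 -- roughly mid-travel
```

Even a modest 10-bit reading comfortably beats the quoted 1/3-degree accuracy, so in practice the accuracy limit is the sensor and magnet, not the quantization.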


Shadow robot hand video from about one year ago.



Question 3: Is Shadow Robotics capable of mass producing these hands, or are they custom made?

Answer: We typically build Hand systems to order for research customers. We have a small in-house manufacturing capability, and good relationships with external subcontractors who manufacture many of the components for us, which would allow us to ramp up relatively quickly.

Question 4 : Has your company integrated your robotic hands with any mobile robots?

Answer: No, not so far. We're talking to a couple of organisations who produce mobile platforms about detailed integration work, but in essence the Hand system has a standard mounting fixture so you can just bolt it onto any robot arm available.

Question 5: Your company offers a leg building kit. How sophisticated are these legs? How strong are they?

Answer: The leg building kit is a slight misnomer. We provide muscle, sensor and control components to someone who is building a mechanism. The muscles are the 30mm muscles, so they are capable of a fair size force; the position sensors are accurate to less than 1 degree, and the SPCU gives plug-and-play control.

Question 6: How quickly can this robotic hand perform movements? What level of dexterity does it possess?

Answer: We looked at a video and tracked open-to-closed in 0.2 seconds at the fastest. The movement capability is that of the human hand; when you operate it from a dataglove the limitation is the calibration of the dataglove to your hand.

Question 7: Can the C6M hand throw objects such as balls, with any velocity?

Answer: That's more a function of the arm that it's attached to, rather than the wrist of the Hand.


Question 8: What do you anticipate will be the first mainstream or industrial use for the C5 and C6M robotic hands?

Answer: We're looking at a couple of areas quite actively now – one is bomb disposal work, where the possibility of using the dexterity of the Hand on a remote platform is obviously a big win, and the other is hazardous handling in laboratories – nuclear, chemical, biological – where you need skilled workers but protecting them restricts their ability to perform the task...

Question 9: Tell us about the new robotic arm. What are its specifications?

Answer: We were asked to produce a 4 degree of freedom biologically-inspired arm, using antagonistic muscles. It's capable of supporting a C6 hand with 2.5kg payload in the Hand. (We think it could be stronger, but to date haven't really powered it up). The movements are a rotation about the length of the forearm, a bend of the elbow, an underarm bowling movement and a sweep of the arm across the torso. (We don't implement the movement that raises the elbow sideways and upwards). Each of the movements has about the range of the corresponding human.

Question 10 : Have you collaborated with any other robotic companies?

Answer: We tend to work more with research organisations, but are now starting to reach out to companies offering capabilities that complement ours.

Question 11: What improvements and enhancements do you have planned for your next-generation robotic hand?

Answer: More thumbs. No, seriously, we're improving the control capability, making the system more rugged and maintainable, looking at tolerating more extreme environments, and seeing what can be done in terms of embedding intelligence into the Hand itself. As well as exploring skin.

Question 12: How much progress do you anticipate being made in the robotics field by 2020?

Answer: I suspect we'll have capable robots available, but they will only be found in areas where it's really hard – economically - to place humans. Low labor costs are the great barrier to robotics.

CalStar Products Makes Bricks Using 5 to 10 Times Less Energy



CalStar Products, Inc. uses advanced technology to make architectural facing bricks and durable pavers for the green building market. Both products contain 40% post-industrial recycled material, use 90% less energy to make, and generate 90% less CO2 than traditional fired clay brick and pavers.

Building operations consume around 39 percent of the total energy in the U.S. and construction materials use another 12 percent, according to the Department of Energy.

CalStar’s commercial Fly Ash Brick (FAB) is fully compliant with ASTM C-216, is available in modular and utility sizes, and comes in seven colors.

CalStar’s products are designed to be price competitive with traditional products of equivalent quality. This way, we make it easy for our customers to do the right thing for the environment without having to sacrifice project budgets.


Ordinary bricks are fired for 24 hours at 2,000 degrees F (1,093 C) as part of a process that can last a week, while Calstar bricks are baked at temperatures below 212 F (100 C) and take only 10 hours from start to finish.

At first, the company will make only "facing brick," used on the outside of buildings, a $2 billion annual U.S. market. It plans to branch out into paving stones, roofing tile and other brick markets.

The company has signed 16 distributors to sell 12 million or more bricks the first year, and plans to make 100 million bricks for sale throughout the Midwest and South, CEO Kane said. After that, fast-growing markets like China beckon.




A three-page debate about the efficiency of clay bricks versus fly-ash bricks:

Although new clay brick factories, using the most efficient technology and controls, might create bricks with an embodied energy of 5800 Btu/brick, we believe the BEES figure of 8800 Btu may be more indicative of the industry average.
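Combining these embodied-energy figures with CalStar's claimed 90% reduction gives the "5 to 10 times" range in the headline. The 880 Btu figure below is derived from that claim, not stated directly in the source.

```python
clay_new_plant = 5800  # Btu/brick, most efficient new clay-brick plants
clay_average = 8800    # Btu/brick, BEES industry-average estimate
fly_ash = clay_average * 0.10  # CalStar claims ~90% less energy than fired clay

print(fly_ash)                            # 880.0 Btu/brick (derived)
print(clay_average / fly_ash)             # 10.0x vs. the industry average
print(round(clay_new_plant / fly_ash, 1)) # 6.6x vs. the best new clay plants
```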

With respect to Mr. Clark’s concern that we do not account for energy required to produce fly ash, fly ash is a by-product of burning coal for power generation. The coal will be burned regardless of whether fly ash is beneficially reused or tossed into a landfill. The recycling of fly ash into construction materials not only prevents the need for more landfills (thereby preserving land and open space), but also improves some material properties and dramatically reduces the environmental impact of those products.


RELATED READING
Serious Materials is a company that is trying to bring a more energy efficient drywall, ecorock, to market

Serious Materials website

The production of standard gypsum drywall – a process invented in 1917 – results in up to 20 billion pounds of CO2 per year. Drywall is the third-worst greenhouse gas producer in building materials.

EcoRock™, a non-gypsum, green alternative to standard drywall that performs and is used just like drywall, uses 80% less energy to produce, with a corresponding drop in emissions.


Serious Materials Energy-Saving Windows
Windows represent the single largest opportunity for improvement in the built environment. 39% of all emissions are tied to building operations, with 38% of that for heating and cooling. Up to 40% of that energy – and cost – literally goes out the window. The wasted energy causes over 250 million tons of emissions per year.

Today’s good quality, dual-pane commercial or residential windows are typically R-1 to R-3. This number is dramatically lower than a typical R-13 wall.

Our Serious Windows deliver true, full-frame window performance from R-5 to R-11. This performance is up to four times higher than major brands and Energy Star requirements.
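R-value is thermal resistance (in US units, ft²·°F·h/Btu), and steady-state heat loss through a surface is U·A·ΔT with U = 1/R. A rough comparison, with an assumed window area and temperature difference chosen purely for illustration:

```python
def heat_loss_btu_per_hr(r_value, area_sqft, delta_t_f):
    """Steady-state conductive loss: U * A * dT, with U = 1/R (US units)."""
    return (1.0 / r_value) * area_sqft * delta_t_f

area, dt = 15.0, 40.0  # one 15 sq ft window, 40 F indoor-outdoor difference (assumed)
for r in (1, 3, 11, 13):
    print(f"R-{r}: {heat_loss_btu_per_hr(r, area, dt):.0f} Btu/hr")
```

An R-11 window loses less than a tenth the heat of an R-1 single pane and approaches the R-13 wall mentioned above, which is where the "up to 40% of heating energy goes out the window" framing comes from.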




