
March 22, 2008

Ma of the KMT wins the Taiwan Presidency - better relations with China ahead


As I had been predicting for nearly a year, Ma Ying-jeou of the KMT won the presidential election, beating Frank Hsieh of the DPP by 58% to 41%.

Ma's vote total topped the 7 million mark, a point at which it would be mathematically impossible for him to lose, the commission said.

The commission estimated that 75 percent of Taiwan's eligible voters cast ballots in the presidential race.



"People want a clean government instead of a corrupt one," Ma, also a former justice minister, told The Associated Press.

"They want a good economy, not a sluggish one. They don't want political feuding. They want peace across the Taiwan Strait. No war."

Hsieh, a former premier, conceded defeat in front of unhappy supporters, AP reported.


Ma is looking to make peace with China and to open a common market with the mainland.

Early moves would open many direct flights and other transportation links, and expand tourism between China and Taiwan from 270,000 visitors to Taiwan in 2007 to as many as 3.6 million by 2009. Those extra 3.3 million tourists could bring in a few billion dollars in economic activity, causing a boom in hotel building and bumping Taiwan's GDP by about 1%.
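
A quick back-of-envelope check of that claim, in Python (a sketch; the per-visitor spend and the GDP figure are my own rough assumptions, not from the source):

# Back-of-envelope check. The per-visitor spend and the Taiwan GDP figure
# are my own rough assumptions, not from the source.
extra_visitors = 3_600_000 - 270_000   # ~3.3 million extra tourists
spend_per_visitor = 1_200              # assumed US$ spent per visitor
taiwan_gdp = 390e9                     # assumed 2007 Taiwan GDP, ~US$390 billion
extra_spending = extra_visitors * spend_per_visitor
print(f"${extra_spending / 1e9:.1f} billion")        # ~$4.0 billion
print(f"{extra_spending / taiwan_gdp:.1%} of GDP")   # ~1.0%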

UPDATE: Some, such as the Businessweek blog Eye on Asia, have theorized that Cathay Pacific and Hong Kong would be losers from the Taiwan election, because direct flights would mean the 270,000 visitors (more soon with the changes) would no longer need to fly through Hong Kong or use Cathay Pacific. I disagree. Cathay Pacific and Dragonair may not be big winners, but if the number of visitors between China and Taiwan shoots up by ten or twenty times, the total number passing through Hong Kong need not drop. 100% may have had to pass through Hong Kong before, but if 10% of a far larger number still choose to make a Hong Kong stopover, then Cathay Pacific and Hong Kong do not lose.
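
In numbers (a minimal sketch; the 10% stopover share is the illustrative figure from the paragraph above, not data):

# The stopover argument in numbers.
base = 270_000          # travelers, essentially all routed through Hong Kong today
stopover_share = 0.10   # assumed share still choosing a Hong Kong stopover
for multiplier in (10, 20):
    via_hk = base * multiplier * stopover_share
    print(f"{multiplier}x traffic -> {via_hk:,.0f} via Hong Kong")
# 10x traffic keeps Hong Kong at 270,000; 20x doubles it to 540,000.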

China invading Taiwan over independence is pretty much off the table. [Although I never really believed it was seriously on the table. It was an occasional disruptive threat though: missile launches into the ocean, posturing and unproductive threats. Plus it was used as an excuse by the US military for more buildup.] Would Germany invade Austria now that they are both part of the European Union?

FURTHER READING
Another source with a more negative view of the result but a lot of details about Taiwan: Taiwan Matters. Michael Turton's other blog, The View from Taiwan, has lots of interesting pictures.

Taiwan and China

China's future economy and a prediction from May 2007 of the Ma victory

All of my Taiwan related articles

March 21, 2008

Next Generation Bionic Arms and human-machine interfaces


Johns Hopkins University Applied Physics Laboratory (APL) researchers lead a nationwide effort to make a bionic arm that wires directly into the brain to let amputees regain motor control—and feeling.

By 2009, DARPA hopes to have a mechanical arm whose functionality is on par with a flesh-and-blood limb.

DARPA Gives APL-Led Revolutionizing Prosthetics 2009 Team Green Light for Phase 2 bionic arm.

The second prototype, demonstrated at DARPA Tech 2007 last August, has 25 individual joints that approach the natural speed and range of motion of the human limb. These mechanical limb systems are complemented by a range of emerging neural integration strategies that promise to restore near-natural control and important sensory feedback capabilities.



The DARPA project is gunning for much more than that: researchers want an arm that transmits sensation to the user—pressure, texture, even temperature. The Proto-1 arm already has integrated force sensors in the artificial hand that give the wearer a sensation of feeling. Harshbarger says Proto-2 builds on that breakthrough with 100 sensors that connect the body's natural neural signals to the mechanical prosthetic arm to create a sensory feedback loop: the wearer interacts with an object and the arm feeds back, in real time, where the arm is in space, what object it is touching, whether that object is smooth or rough, how hard the hand is holding it, and what temperature the object is. With that information, the user can react in split-second real time.

As it turns out, the degree of control is directly proportional to the invasiveness of the method. Harshbarger's team is working with four tiers of neural interface. Each tier adds a level of magnitude to the control and sensory capability of the prosthesis—but also a level of magnitude in required surgery.

For simple activities, like grasping a ball, you don't need surgery. The most basic interface (for low-level amputation) uses electrodes taped to the surface of the residual limb's skin.

To move individual fingers, which is necessary, for example, to statically hold a key or a pen, you need to access the muscle firings directly. The next level (of invasiveness and control) bypasses these interfering layers of flesh and skin by using small wireless devices called injectable myoelectric sensors (IMES). These tiny, rice grain-like devices are injected into the muscle tissue of the residual arm and work just like the surface electrodes to tap the muscle signals right at the source.


The next level of interface bypasses the residual muscle to tap into the peripheral nerves, either with surgery or implanted electrodes. So far the team has had great success with the former, a technique called targeted muscle reinnervation. This surgery reroutes nerves that once led to the muscles controlling the native arm and opens a direct line between those nerves and the mechanical arm. In an individual with both limbs, those nerves travel from the spinal cord down the shoulder over the clavicle and then into the armpit, where they connect to about 80,000 nerve fibers that allow the brain to communicate with the arm. The surgeons reroute those nerves to the chest muscles.

But what if for whatever reason these unused areas of muscle are unavailable or damaged? Another way to access the peripheral nerves is with penetrating electrodes that intersect the nerves with what are essentially needles. Researchers at the University of Utah developed an implantable device called the Utah Slant Electrode Array (USEA), a 5-millimeter-square grid of 100 needlelike electrodes. These electrodes hold hundreds of different mechanisms, among them signal amplifiers, storage registers, and a multiplexing scheme to transmit to a receiver on the skin.

Finally, the most extreme solution is meant for people whose bodies no longer offer any means for interfacing to the artificial limb, for whom even nerve-rerouting surgery may not be an option. In such cases, the Utah electrode arrays are relocated to the source of all neural signals—the brain's motor cortex, which is right at the top of the head, toward the back of the frontal lobe. The electrode arrays are either placed on the inside surface of the top of the skull near the motor cortex or penetrate directly into the motor cortex. A device very much like the skull-mounted USEA has already been proven to pick up the brain's electrical signals and is currently used to warn epileptic patients of impending seizures.


Roll-to-roll (R2R) production of electronics

Up until now inkjet has been the favoured manufacturing technique of the burgeoning organic light-emitting diode (OLED) industry, but a growing number of companies and organisations are looking to roll-to-roll (R2R) manufacturing for making OLED displays.

GE Global Research has reached a milestone in next-generation lighting, demonstrating the world's first roll-to-roll manufactured organic light-emitting diodes (OLEDs). GE is aiming to introduce OLED lighting products to market by the year 2010. In the 2010 timescale, they're looking at niche, high-end applications, like architectural designs. The demo rolled out about 20 feet of OLEDs.

GE OLEDs produced roll to roll

The Holy Grail of the flexible display industry is to be able to manufacture the entire display, on a flexible substrate, at fast speeds, with minimal handling and at low cost. R2R manufacturing, traditionally a low-tech method for making disposable items such as packaging and newspapers, is seen as the solution to this challenge.

The US Display Consortium (USDC) has awarded funds to two projects to help boost R2R manufacturing technology for display components and other electronic devices. The USDC recently awarded $10 million (€7.8 million) to Binghamton University in New York State to develop an R&D centre that will evaluate the potential of R2R technology for the microelectronics industry.



Printing presses can get up to speeds of 2500-3000 feet per minute.

Depending on the printing technology chosen, speeds of 150 feet per minute (fpm) to 300 fpm can readily be achieved for printing R2R-produced electronics.

If devices compatible with R2R production could achieve the quality and performance of CMOS lithography devices, then the multi-billion-dollar semiconductor wafer plants that make 50,000 wafers per month, with 100-300 chips on each wafer, could be replaced with production equivalent to roughly 2,000 wafers per minute: about 1,500 times faster, at the 2,500 feet per minute speed range.
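
A rough throughput comparison in Python (a sketch; the one-wafer-per-foot equivalence is my own simplification, based on a 300 mm wafer being roughly a foot across):

# Rough throughput comparison. Assumes one 300 mm wafer equivalent per
# linear foot of web, which is my own simplification.
fab_rate = 50_000 / (30 * 24 * 60)   # wafers per minute from a fab, ~1.2
r2r_rate = 2_500                     # wafer equivalents per minute at 2,500 fpm
print(f"~{r2r_rate / fab_rate:,.0f}x")   # prints ~2,160x; the text's 2,000
                                         # wafers/min works out to ~1,700x,
                                         # the same order as the 1,500x above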

FURTHER READING
Rapid progress in the field of organic semiconductors makes the vision of plastic integrated circuits reachable. Polymer electronics is expected to be a promising technology for low-cost and large-area electronic systems, which cannot be addressed by traditional chip technology. Solutions of polymers or polymer-based pastes offer advantages due to their easy processing, which gives the opportunity to coat and pattern them like a printing ink. The cost requirements of polymer electronics imply that very cost-effective manufacturing processes must also be used. Hence reel-to-reel manufacturing with endless substrates of foil is a key technology to fulfill the challenging cost demands of cheap flexible systems.

Quantum dot–based memory structures potentially one thousand times faster than current memory

The concept of a memory device based on self-organized quantum dots (QDs) is presented, enabling extremely fast write times, limited only by the charge carrier relaxation time, which is in the picosecond range (from Applied Physics Letters) [potentially one thousand times faster than current computer memory]. For a first device structure with embedded InAs/GaAs QDs, a write time of 6 ns is demonstrated. A similar structure containing GaSb/GaAs QDs shows a write time of 14 ns.

Other interesting news from Applied Physics Letters:
Re-examination of Casimir limit for phonon traveling in semiconductor nanostructures.

The effective mean free path (MFP) of nanofilms is found to be larger than that of nanowires: the Casimir limit for nanofilms equals twice the film thickness, or two times the limit for nanowires. The theoretical formula agrees approximately with available experimental and computer simulation results for heat conduction along semiconducting nanowires, nanofilms, and superlattices.


Nanomechanical device powered by the lateral Casimir force

The coupling between corrugated surfaces due to the lateral Casimir force is employed to propose a nanoscale mechanical device composed of two racks and a pinion. The noncontact nature of the interaction allows for the system to be made frustrated by choosing the two racks to move in the same direction and forcing the pinion to choose between two opposite directions. This leads to a rich and sensitive phase behavior, which makes the device potentially useful as a mechanical sensor or amplifier. The device could also be used to make a mechanical clock signal of tunable frequency.


Excimer-based red/near-infrared organic light-emitting diodes with very high quantum efficiency

Various light output measures of red/near-infrared (NIR) excimer-based organic light-emitting diodes (LEDs) are reported for different cathodes (Al, Al/LiF, Ca, and Ca/PbO2). By using a selected phosphor (PtL2Cl) from a series of terdentate cyclometallated efficient phosphorescent Pt(II) complexes, PtLnCl, as the neat film emitting layer and a Ca/Pb(IV)O2 cathode, the authors achieve unusually high forward viewing external quantum efficiencies of up to 14.5% photons/electron and a power conversion efficiency of up to 6% at a high emission forward output of 25 mW/cm2. These are the highest efficiencies reported for a NIR organic LED.


China and Taiwan

Angus Maddison provides an updated view of China's economic future in the "Chinese Economic Performance in the Long Run" tome written for the OECD.

Angus Maddison now predicts: China should overtake the United States as the world’s biggest economy before 2015 [PPP] and by 2030 account for about a quarter of world GDP. [This is faster than his 1998 predictions] It would have a per capita income like that of western Europe in 1990. Its per capita income would be only one third of that in the United States, but its role in the world economy and its geopolitical leverage would certainly be much greater.

I had projected China to pass the US economy on an exchange rate basis by 2020 but updated that to 2018 plus or minus three years.

Taiwan's election - likely good for Taiwan and China's future economies
Taiwan's presidential election starts tonight. For many months I have predicted that Ma Ying-jeou will win.

The Wall Street Journal discusses the developments up to this point and the prospects for Taiwan-China relations.

Taiwanese investment in China has been falling for the last six years; that trend should reverse when a new president is installed, more strongly with Ma but even with Hsieh.

Frank Hsieh says Tibet shows the need for a strong ruler who can defend Taiwan's interests. But he advocates ending the travel ban and lifting the 40% rule. Though the party still officially seeks independence, Mr. Hsieh says his priority is better relations.

The more likely victor, according to polls, is Ma Ying-jeou of the Nationalist Party, who wants to go further. Despite his party's historical hostility toward the Communists, who defeated the Nationalists in China's civil war nearly 60 years ago, Mr. Ma now is floating the idea of a common market.

In final pre-election polls taken March 8 -- before the violence in Tibet and Mr. Hsieh's warnings -- Mr. Ma led by about 10 percentage points.


Bloomberg news talks about the thaw in Chinese and Taiwanese relations that will unfold over the next few years.

Reuters talks about how the Taiwanese election will be good for Taiwan's economy

The International Herald Tribune talks about the likely better relations that will result.

Marketwatch talks about the expected improved economic climate.

Peter Sutton, analyst with CLSA Asia-Pacific Market, said the country's [Taiwan's] "attractiveness increases in proportion to the expansion of its links with China."


Asia Times online has more personal stories on this issue.

Back to details of China's future growth
The policy problems of China's rapid growth are changing. Ten years ago the problems were reducing the role of inefficient state enterprises, the weakness of the financial system, and the weak fiscal position of the central government. However, those problems have been greatly reduced in the last ten years.

The problem of energy supply and the environment has emerged as a significant new challenge to China's future development. Electricity supply rose ten-fold between 1978 and 2005, and its availability at rather low prices transformed living conditions in many urban households. Car ownership has also risen and is likely to become the most dynamic element in private consumption. In 2006 there were about 19 million passenger cars in circulation (one for every 70 persons). This compares with 140 million, one for every 2 persons, in the United States. Judging by the average west European relationship of car ownership to per capita income, it seems likely there will be 300 million passenger cars in China (one for every 5 persons) in 2030.

There has been a surprisingly large improvement in the efficiency with which energy is used. In 1973, 0.64 tons of oil equivalent were used per thousand dollars of GDP, by 2003, this had fallen to 0.22 tons. The International Energy Agency (IEA) projects a further fall to 0.11 tons in 2030 in a scenario which takes account of energy efficiency policies the government can reasonably be expected to adopt. Energy efficiency was better in China than in the United States in 2003 and the IEA expects this to be true in 2030.
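
Expressed as compound annual rates of decline in energy intensity (a minimal sketch in Python; the year counts come from the dates quoted above):

def annual_decline(start, end, years):
    # Compound annual rate of decline between two intensity figures,
    # in tons of oil equivalent per $1,000 of GDP.
    return 1 - (end / start) ** (1 / years)

print(f"{annual_decline(0.64, 0.22, 30):.1%} per year, 1973-2003")   # ~3.5%
print(f"{annual_decline(0.22, 0.11, 27):.1%} per year, 2003-2030")   # ~2.5%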


Other problems:
- Legal System and Private Property Rights
- Regional and Urban Rural Inequality

Angus Maddison's updated detailed view of the China catch-up process:
China is still a relatively poor country. In 2003 its per capita income was only 17 per cent of that in the United States, 23 per cent of that in Japan, 28 per cent of that in Taiwan and 31 per cent of that in Korea. Countries in this situation of relative backwardness and distance from the technological frontier have a capacity for fast growth if they mobilise and allocate physical and human capital effectively, adapt foreign technology to their factor proportions and utilise the opportunities for specialisation which come from integration into the world economy. China demonstrated a capacity to do this in the reform period and there is no good reason to suppose that this capacity will evaporate.

It is likely that the catch-up process will continue in the next quarter century, but it would be unrealistic to assume that the future growth trajectory will be as fast as in 1978-2003. In that period there were large, once-for-all, gains in efficiency of resource allocation in agriculture, an explosive expansion of foreign trade and accelerated absorption of foreign technology through large-scale foreign direct investment. The pace of Chinese progress will slacken as it gets nearer to the technological frontier. I have assumed that per capita income will grow at an average rate of 4.5 per cent a year between 2003 and 2030, but that the rate of advance will taper off over the period. Specifically, I assume a rate of 5.6 per cent a year to 2010, 4.6 per cent between 2010 and 2020 and a little more than 3.6 per cent a year from 2020 to 2030. By then, in our scenario, it will have reached the same per capita level as western Europe and Japan around 1990, when their catch-up process had ceased.
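
Compounding those segment rates (a minimal sketch; it just multiplies out Maddison's stated assumptions):

# Compounding Maddison's assumed per capita growth segments, 2003 to 2030.
segments = [(0.056, 7),    # 2003-2010: 5.6% per year
            (0.046, 10),   # 2010-2020: 4.6% per year
            (0.036, 10)]   # 2020-2030: 3.6% per year
multiplier = 1.0
for rate, years in segments:
    multiplier *= (1 + rate) ** years
print(f"{multiplier:.1f}x")   # ~3.3x the 2003 per capita level by 2030
# The geometric average over 27 years is ~4.5% per year, matching the
# average rate quoted above.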


FURTHER READING
Other OECD papers and studies related to the rise of China

Ma plans to allow more Chinese tourists into Taiwan, which may boost visitor arrivals to 3.6 million in 2009, up from last year's 270,000. [13 times more]

Direct air links ``will start another hotel building boom,'' said Douglas Hsu, chairman of Taiwan's Far Eastern Group, which has textile, shipping, banking and retail businesses. ``All of these visitors have to stay somewhere, and all of them will have to shop.''

March 20, 2008

Thermoelectric 40% improvement in a cheap material

Researchers at Boston College and MIT have used nanotechnology to achieve a major increase in thermoelectric efficiency, a milestone that paves the way for a new generation of products - from semiconductors and air conditioners to car exhaust systems and solar power technology - that run cleaner.

UPDATE: Info from Technology Review on how this process is easily and cheaply scaled to tons of material.

"Power-generation applications [for thermoelectrics] are not big now because the materials aren't good enough," says Chen. He believes that his group's more efficient version of the material will finally make such applications commercially viable.

Ren says that it's easy to make large amounts of the nanocomposite material: "We're not talking grams; we're not talking kilograms. We can make metric tons." Because bismuth antimony telluride is already used in commercial products, Ren and Chen predict that their technique will be integrated into commercial manufacturing in several months.


UPDATE: VC firm Kleiner Perkins has funded these researchers in a company called GMZ Energy.

GMZ’s breakthrough, actually a technology licensed from the Massachusetts Institute of Technology, is the discovery of materials that can be cheaply manufactured and easily integrated into existing designs, and are also more efficient than other thermoelectric materials. That could both expand the existing thermoelectrics market, and put GMZ in a leading position within it. It’s in the late stages of testing by manufacturers, and the company is gearing up to manufacture about 7 tons of it per year, enough for a few million devices.

One of the many factors holding down the fuel efficiency of today’s vehicles is the amount of energy they dump off as waste heat. A few clever placements of thermoelectric material can capture back up to about a fifth of that heat as electricity and cycle it back into the power system — a perfect application for hybrid vehicles.


High-Thermoelectric Performance of Nanostructured Bismuth Antimony Telluride Bulk Alloys

The dimensionless thermoelectric figure-of-merit (ZT) in bulk bismuth antimony telluride alloys has remained around 1 for more than 50 years. Here we show that a peak ZT of 1.4 at 100°C can be achieved in a p-type nanocrystalline bismuth antimony telluride bulk alloy. ZT is about 1.2 at room temperature and 0.8 at 250°C, which makes these materials useful for cooling and power generation. Cooling devices that use these materials have produced high temperature differences of 86°C, 106°C, and 119°C with hot-side temperatures set at 50°C, 100°C, and 150°C, respectively. This discovery sets the stage for use of a new nanocomposite approach in developing high-performance low-cost bulk thermoelectric materials.



A ZT figure of 0.8 to 1.4 means roughly 8-15% of waste heat can be recaptured as electricity, depending on the operating temperatures. Get ZT up to 10 and things get really interesting, but capturing 8-15% cheaply can be very good too.
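
For reference, the standard formula for the maximum efficiency of a thermoelectric generator converts ZT into a fraction of the Carnot limit. This sketch is mine, not from the paper, and the 500 K hot side / 300 K cold side temperatures are illustrative assumptions:

from math import sqrt

def te_efficiency(zt, t_hot, t_cold):
    # Standard textbook formula: Carnot limit times a ZT-dependent factor.
    carnot = (t_hot - t_cold) / t_hot
    m = sqrt(1 + zt)
    return carnot * (m - 1) / (m + t_cold / t_hot)

for zt in (0.8, 1.0, 1.4, 10):
    print(zt, f"{te_efficiency(zt, 500, 300):.1%}")
# ZT 0.8-1.4 gives roughly 7-10% at these temperatures; larger temperature
# differences push toward 15%. ZT 10 gives ~24%, much closer to the 40%
# Carnot limit.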



A cross-section of nano-crystalline bismuth antimony telluride grains, as viewed through a transmission electron microscope. Colors highlight the features of each grain of the semiconductor alloy in bulk form. A team of researchers from Boston College and MIT produced a major increase in thermoelectric efficiency after using nanotechnology to structure the material, which is commonly used in industry and research. Image / Boston College, MIT, and GMZ Inc.


The team’s low-cost approach, details of which are published today in the online version of the journal Science, involves building tiny alloy nanostructures that can serve as micro-coolers and power generators. The researchers said that in addition to being inexpensive, their method will likely result in practical, near-term enhancements to make products consume less energy or capture energy that would otherwise be wasted. Using nanotechnology, the researchers at BC and MIT produced a big increase in the thermoelectric efficiency of bismuth antimony telluride — a semiconductor alloy that has been commonly used in commercial devices since the 1950s — in bulk form. Specifically, the team realized a 40 percent increase in the alloy’s figure of merit, a term scientists use to measure a material’s relative performance.


FURTHER READING
I have provided extensive and in-depth coverage of the impact that better thermoelectrics will have.

ThermoCeramix makes materials with a much higher “emissivity”, meaning they’re more efficient at heating up. These materials have some obvious uses in everyday household appliances like those I just mentioned, as well as commercial processes.

So imagine, instead of having a centralized water heater in your house, having a bit of ThermoCeramix material wrapped around the pipe in your faucet. When you turn on the hot water tap, it heats up instantaneously.


Being more efficient at converting heat into usable work is hugely important, especially if it can be done for large power plants and vehicles.

In 2005, world primary energy consumption was 462.798 quadrillion Btus. Thus, daily world energy consumption was 1.27 x 10^15 Btus. One barrel of oil contains 5.8 x 10^6 Btus. Therefore, in 2005, the daily energy consumption was 219 million barrels of oil equivalent.

If we divide the world daily primary energy consumption of 1.27 x 10^15 Btus by the daily Btu production from one average nuclear plant (240 x 10^9), we find the world consumed the equivalent amount of energy from about 5,300 nuclear plants each day in 2005.
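
Those conversions check out (a minimal sketch in Python reproducing the arithmetic above):

# Reproducing the arithmetic above.
world_2005_btu = 462.798e15        # quadrillion Btu converted to Btu
daily_btu = world_2005_btu / 365   # ~1.27e15 Btu per day
print(f"{daily_btu / 5.8e6 / 1e6:.0f} million barrels of oil equivalent per day")  # ~219
nuke_daily_btu = 240e9             # thermal Btu per day from one average nuclear plant
print(f"{daily_btu / nuke_daily_btu:,.0f} nuclear plant equivalents")              # ~5,283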

A typical car engine is only 20% efficient and a typical steam power plant (used primarily by coal and nuclear plants) is only 33% efficient. This means that only a fraction of the heat created is turned into usable energy.


If thermoelectrics can increase those efficiencies by 10-50% that would clearly be huge.

Star Trek is capitalistic, not fascist


Michael Anissimov at Accelerating Future has an article where he promotes the World Transhumanist Association discussion topic: Is Star Trek a Fascist Society?

I have seen the 726 episodes across 6 TV series and 10 (and soon 11) movies and many of the books, two Vegas rides, etc. Their continuing mission (not always successful) is to have high ratings, sell movie tickets, DVDs, books and merchandise, and to boldly make a buck wherever they can. So Star Trek is capitalistic and not fascist.

Most writers do not know how to deal with the consequences of future tech in a logical way while still enabling the story to continue engagingly across four decades of shows.

“Hard science fiction generally isn’t fascist, but one can make a plausible case that Star Trek is fascist.”

Star Trek is not hard science fiction. They occasionally make an attempt to sprinkle in some science, but if it interferes with a story then out it goes. Technobabble provides a veneer of science to help with the suspension of disbelief.

G) Everyone apparently has some kind of mind-block against realizing that the transporter beam could make copies of all the crew and keep them young and immortal.


I blame the Q Continuum, the aliens of Talos IV, the Gary Seven aliens, and the Organians for the mindblocks. They must have their own version of the Prime Directive to keep the Federation, Klingons, etc. down. There should be a movement to “lift the mindblocks, lift the mindblocks”. There actually has been widespread use of mindblocks as a plot device. [726 shows, what are you going to do?]

Q keeping Picard down


Talos IV aliens


Sure, sure, Kirk and Spock and McCoy did dress up as Nazis, but it was part of a disguise and a plan to free the Ekosians from the huge error made by a civilian historian (history professors - gotta watch out for them) named John Gill.

All of the technology is underutilized, both the technology they have and the technology they should have. The phasers are weak relative to the kinetic energy weapons they could have. The writers are not willing to go outside a comfort zone or create more realistic costs.

H) The Prime Directive maintains human (and allied) supremacy over the hapless lesser peoples who are denied political and technological progress in order supposedly to respect their cultural “difference.”

Nazi fascist aliens are keeping our fictional Star Trek humans down. There were already episodes of ST Voyager and ST Enterprise along these lines (Nazi holodeck aliens and Nazi aliens).

Hirogen Nazi

So it may be more of a caste system where more advanced races hinder, or do not share with, less advanced races. Or it could be the writers choosing to keep things episodic and hitting the reset button before the end of each hour. One could ask why Jerry Seinfeld (in his show, a successful comedian who had a TV show picked up for a while) did not have the money to move to a better apartment over ten years. Perhaps mindblocks?

A) They have no politics. It’s a military dictatorship.

A lot of the “power” is concentrated in a few exceptional individuals: each show has a set of seven or so primary characters, with a focus on three or four of them. Similar observations could be made of Jack Bauer, Rambo, Chuck Norris, Spy Kids, Bugs Bunny, etc. and their “universes”.

I think that the focus of the Trek universe shows and movies is on the “military part”, much the way the TV show MASH was military focused, Law and Order has a police and criminal justice focus, or ER has a doctor focus. TNG (The Next Generation) had shows depicting the civilian side. The billions, and I think trillions, of people in the Federation are actually mostly civilian; it is just that they are mostly useless to the stories. DS9 and TNG had shows in which many civilians held a somewhat negative view of Starfleet. Sarek did not want Spock to be part of Starfleet; Sisko's dad resisted during the military emergency, etc. Picard, after “The Best of Both Worlds”, considered civilian life.

B) They have no money. It’s a command economy.

This objection comes from people who have only watched some of the shows. >-|
Parts of the economy have some money (gold-pressed latinum is used by some humans and races beyond just the Ferengi). The Federation happens to have a big universal guaranteed income and medical care. It does show a lack of motivation and ambition in the civilians in this situation.

C) All conflict is racial. Humans v. Klingons v. Romulans etc.

Again, people who only watch or remember some of the shows. >-|

There was the Klingon civil war, and there were internal Romulan conflicts. They have shown many more human-on-human conflicts. For humans, there was a third world war in their history, the Eugenics Wars, another nuclear war around 2050, etc.

D) The races have intrinsic cultural personalities which make them less attractive than the humans. Attractive members of alternate races try to become more human: Spock and Worf trying to get a sense of humor. Data trying to get emotions.

Green Orion women.

F) Cognitive enhancement and life extension technologies are outlawed, or at least all R&D towards those goals have been stopped.

This was explained in several important shows and movies: the anti-transhumanism stems from Khan Noonien Singh and the Eugenics Wars. Julian Bashir, a main character on DS9, is the enhanced individual who turned out good.

Plus many humans who get enhanced seem to have trouble dealing with the power in the Star Trek universe. Charlie, Gary Mitchell etc...

E) Something terrible happened to Asians, Africans and Latins, because 90% of all humans are English-speaking whites.

Casting choices, biases, and the real-life distribution of actors where they film. Plenty of black actors have been cast (many end up playing under makeup). Asians are noticeably missing on screen, but some are behind the cameras.

FURTHER READING
Article on Nazis in Star Trek from startrek.com

UPDATE: Godwin's Law holds that as an online discussion grows longer, the probability of a comparison involving Hitler or the Nazis approaches one.

Godwin's Law is often cited in online discussions as a caution against the use of inflammatory rhetoric or exaggerated comparisons, and is often conflated with fallacious arguments of the reductio ad Hitlerum form.


Reductio ad Hitlerum, also argumentum ad Hitlerum, or reductio (or argumentum) ad Nazium – dog Latin for "reduction (or argument) to Hitler (or the Nazis)" – is a modern fallacy in logic. It is a variety of both questionable cause and association fallacy. The phrase reductio ad Hitlerum was coined by an academic ethicist, Leo Strauss, in 1950. Engaging in this fallacy is sometimes known as playing the Nazi card.

Carnival of Space Week 46

Carnival of Space Week 46 is up at Riding with Robots.

My contribution was the premature report of room temperature superconductors. There is a new class of superconductors made under high pressure which could lead to room temperature superconductors.

Centauri Dreams talks about David Brin's speculation on eleven reasons for no contact with aliens (the Fermi paradox).


Bad Astronomy has ten things you don't know about the Milky Way galaxy

Out of the Cradle has a list of moon-related websites.

New Frontiers talks about the space elevator

Colony Worlds proposes living inside aquariums and glass instead of underground on Mars.

Buckyballs can contain superpressured hydrogen


Buckyballs (carbon-60) can be loaded with hydrogen to pressures [up to 140 GPa] only a few times smaller than the pressure at which hydrogen turns metallic (400 GPa). This is a detailed computational analysis; the researchers did not perform a physical experiment, and there is as yet no clear strategy for actually loading the hydrogen into the C60. The highest pressures are achieved only with significant and nearly unstable strain. Could multi-walled carbon nanotubes be loaded with silane up to a critical pressure to make a superconducting wire? What is the peak critical superconducting temperature for the silane class of materials? How can we load gases into fullerenes up to those pressures?

Internal pressure in fullerene nanocage producing various average relative C-C bond elongations (1%, 2%, 3%, 4%, 6%, 8%, and 10%) vs radius of undeformed cage. Vertical dashed lines mark radii of some nearly spherical fullerenes.

This relates to the new class of superconductors made from supercompressing silane (silicon and hydrogen). Note: I had to amend that report on superconductors. The investigators did not get good readings in the critical range of 96-120 GPa, though the readings they did get showed an indication of a spike in critical temperature. So highly pressured silanes could be a route to room temperature superconductors, but we do not know yet. Also, if silane were superconducting under high pressure, then those conditions might be achieved with buckyballs loaded with silane.

The research appears on the March 2008 cover of the American Chemical Society's journal Nano Letters. "Based on our calculations, it appears that some buckyballs are capable of holding volumes of hydrogen so dense as to be almost metallic," said lead researcher Boris Yakobson, professor of mechanical engineering and materials science at Rice. "It appears they can hold about 8 percent of their weight in hydrogen at room temperature, which is considerably better than the federal target of 6 percent."

Buckyballs, which were discovered at Rice more than 20 years ago, are part of a family of carbon molecules called fullerenes. The family includes carbon nanotubes, the typical 60-atom buckyball and larger buckyballs composed of 2,000 or more atoms.



If a feasible way to produce hydrogen-filled buckyballs is developed, Yakobson said, it might be possible to store them as a powder.

"They will likely assemble into weak molecular crystals or form a thin powder," he said. "They might find use in their whole form or be punctured under certain conditions to release pure hydrogen for fuel cells or other types of engines."


FURTHER READING
The full paper is here


Estimated hydrogen density in Hn@C60 structures vs hydrogen pressure. Horizontal dashed line marks the density of liquid hydrogen at boiling point (20 K) under ambient pressure.


Fullerene Nanocage Capacity for Hydrogen Storage by Olga V. Pupysheva, Amir A. Farajian, and Boris I. Yakobson

We model fullerene nanocages filled with hydrogen, of the general formula Hn@Ck, and study the capacity of such endohedral fullerenes to store hydrogen. It is shown using density functional theory that for large numbers of encapsulated hydrogen atoms, some of them become chemisorbed on the inner surface of the cage. A maximum of 58 hydrogen atoms inside a C60 cage is found to still remain a metastable structure, and the mechanism of its breaking is studied by ab initio molecular dynamics simulations. Hydrogen pressure inside the fullerene nanocage is estimated for the first time and is shown to reach the values only a few times smaller than the pressure of hydrogen metallization. We provide a general relation between the hydrogen pressure and resulting C−C bond elongation for fullerene nanocages of arbitrary radii. This opens a way to assess possible hydrogen content inside larger carbon nanocages, such as giant fullerenes, where significant capacity can be reached at reasonable conditions.
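
A quick check of the "about 8 percent" figure against the 58-atom structure from the abstract (a sketch; it assumes all 58 hydrogens count toward storable mass):

# Checking the ~8 wt% claim against the H58@C60 structure from the abstract.
m_h, m_c = 1.008, 12.011   # atomic masses in g/mol
frac = 58 * m_h / (58 * m_h + 60 * m_c)
print(f"{frac:.1%}")       # ~7.5%, consistent with "about 8 percent"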


NEC claims 10-Petaflop supercomputing breakthrough

NEC and the Tokyo Institute of Technology have developed the technology for a ten-petaflop supercomputer. The foundation of this beast is a network of optical interconnections between nests of chips. The Japanese government says it could be ready by 2010.

The optically connected chips can talk to each other at 25 gigabits per second, so between them they can calculate at warp factor speeds (there are no figures available; it depends how many chips are aggregated). That is a 250 per cent increase on the fastest speed that data can limp along cables.

The prototype converts electrical signals into optical signals using laser diodes, says our man at the Nikkei, and its connector bundles 1,000 fibres together to bring together the world's most powerful aggregation of neighbouring chips.


IBM has also been making progress with on-chip optical interconnections.

I had previously reported on this Japanese supercomputer project.

March 19, 2008

Stanford researchers developing 3-D camera with 12,616 lenses


Stanford electronics researchers, led by electrical engineering Professor Abbas El Gamal, are developing a camera that makes a 2-D photo with an electronic "depth map" containing the distance from the camera to every object in the picture, a kind of super 3-D.
They built it around their "multi-aperture image sensor." They've shrunk the pixels on the sensor to 0.7 microns, several times smaller than pixels in standard digital cameras. They've grouped the pixels in arrays of 256 pixels each, and they're preparing to place a tiny lens atop each array.

Current cameras have expensive lenses; the new system could de-emphasize the lens and use on-chip processing of more information for better image quality. The result: better 3-D, better high resolution, cheaper cameras, and possibly a better way to provide robots with 3-D vision and awareness.

If their prototype 3-megapixel chip had all its micro lenses in place, they would add up to 12,616 "cameras."
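
The lens count is straightforward arithmetic (a minimal check; the exact pixel total is implied by the numbers above):

# The micro-lens count times the array size recovers the chip's pixel count.
print(f"{12_616 * 256:,} pixels")   # 3,229,696, i.e. the "3-megapixel" chip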

Point such a camera at someone's face, and it would, in addition to taking a photo, precisely record the distances to the subject's eyes, nose, ears, chin, etc. One obvious potential use of the technology: facial recognition for security purposes.

But there are a number of other possibilities for a depth-information camera: biological imaging, 3-D printing, creation of 3-D objects or people to inhabit virtual worlds, or 3-D modeling of buildings.

The technology is expected to produce a photo in which almost everything, near or far, is in focus. But it would be possible to selectively defocus parts of the photo after the fact, using editing software on a computer.

Knowing the exact distance to an object might give robots better spatial vision than humans and allow them to perform delicate tasks now beyond their abilities.

Other researchers are headed toward similar depth-map goals from different approaches. Some use intelligent software to inspect ordinary 2-D photos for the edges, shadows or focus differences that might infer the distances of objects. Others have tried cameras with multiple lenses, or prisms mounted in front of a single camera lens. One approach employs lasers; another attempts to stitch together photos taken from different angles, while yet another involves video shot from a moving camera.

But El Gamal, Fife and Wong believe their multi-aperture sensor has some key advantages. It's small and doesn't require lasers, bulky camera gear, multiple photos or complex calibration. And it has excellent color quality. Each of the 256 pixels in a specific array detects the same color.

The technology also may aid the quest for the huge photos possible with a gigapixel camera—that's 140 times as many pixels as today's typical 7-megapixel cameras. The first benefit of the Stanford technology is straightforward: Smaller pixels mean more pixels can be crowded onto the chip.

The second benefit involves chip architecture. With a billion pixels on one chip, some of them are sure to go bad, leaving dead spots, El Gamal said. But the overlapping views provided by the multi-aperture sensor provide backups when pixels fail.

The finished product may cost less than existing digital cameras, the researchers say, because the quality of a camera's main lens will no longer be of paramount importance. "We believe that you can reduce the complexity of the main lens by shifting the complexity to the semiconductor," Fife said.


FURTHER READING
I had discussed the laser-based 3D lidar freezeframe technology for more autonomous robots.

I had discussed other gigapixel camera systems

Stitching for terapixel images

An update on lens arrays for gigapixel and super-resolution aerial photographs: besides the bug's-eye lens, there is a mohawk version for longer and thinner shot coverage.


More precise and safe gene therapy is highly promising

A way to carry out genetic surgery [more precise gene therapy] has been devised by a British Nobel prizewinner; it is already under test on diabetic patients and being readied for use to treat AIDS, blocked blood vessels and chronic pain. Safety and precision problems and concerns have been holding back wider use of gene therapy. If gene therapy were completely safe and precise, then many, like gene therapy pioneer H. Lee Sweeney, would switch to recommending treatments such as myostatin inhibitors, which can increase muscle mass by up to 4 times, because they would make people healthier.

Sir Aaron Klug, a Nobel laureate working at the Medical Research Council's Laboratory of Molecular Biology in Cambridge, has developed a more efficient way to target genes, so gene therapy can be done with surgical precision. They have modified a piece of natural cellular machinery called "zinc fingers".

They have devised synthetic versions, called zinc-finger nucleases, which can recognise specific sequences of DNA. This makes them extremely good at latching on to a specific spot and targeting particular genes without affecting others, so they can carry out genetic surgery to knock out genes or introduce new ones.

The new method is already being tested by the Californian company Sangamo BioSciences on more than 100 young diabetic patients who have lost sensation, a common complication, after encouraging results in preliminary tests of using the method to introduce a gene encoding a growth factor that can help restore sensation.



Sir Aaron explains: "The beauty of zinc-finger nucleases lies in their simplicity. Where other methods are long, arduous and often messy, it is relatively easy to switch off genes using this method.

"The zinc-finger design allows us to target a single gene, while the nuclease disrupts the gene. The single step process is extremely quick and reliable and opens up exciting possibilities for research and gene therapy."


FURTHER READING
BBC news also has coverage

Animal trials are already under way using the technique to knock out the receptor HIV uses on the immune system T-cells of patients with AIDS.

If successful, this will render the T-cells immune to HIV infection and enable them to fight the disease.

Clinical trials to aid patients with blocked blood vessels are also under way.



DARPA is on track for railgun firing of modified mortar rounds in 2008

A full-scale, fully cantilevered electromagnetic railgun developed by the Defense Advanced Research Projects Agency (DARPA) has successfully launched a full-sized projectile, similar in size and weight to a 120mm mortar round, at speeds of 430 meters per second. That is a little faster than the 101-318 meters per second of regular mortar firings.
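
For a sense of scale, a muzzle-energy comparison (my own sketch; it assumes the conventional round has the same 16.6 kg mass as the railgun projectile, an illustrative simplification):

# Muzzle energy comparison. Assumes the conventional mortar round has the
# same 16.6 kg mass as the railgun projectile (an illustrative simplification).
mass = 16.6   # kg
for label, v in (("railgun, 430 m/s", 430), ("fast conventional mortar, 318 m/s", 318)):
    print(f"{label}: {0.5 * mass * v**2 / 1e6:.2f} MJ")
# ~1.53 MJ vs ~0.84 MJ, roughly 1.8x the muzzle energy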

The railgun is the largest-caliber supersonic railgun in the world and the first-ever successful full-scale cantilevered railgun to shoot mortar-size projectiles.
The railgun is 2.4 meters long and weighs 950 kilograms. It is fully cantilevered from the breech end without visible droop. A cantilevered design is important because fieldable gun designs will need the ability to change aiming on a shot-to-shot basis. Built-in muzzle shunts quickly extinguish muzzle arc and reduce muzzle flash by providing an alternate current path.


The system has been demonstrated with reduced-mass projectiles to velocities around 550 meters per second and with full-mass projectiles weighing 16.6 kilograms to 430 meters per second. More than 30 projectile launches have been conducted during this program, which began in 2005. Testing of the full-scale railgun began in mid-2007.

This DARPA-sponsored project has been conducted by researchers from the Institute for Advanced Technology at the University of Texas at Austin. The ultimate goal is to be able to launch a slightly modified M934 mortar projectile jointly developed with the U.S. Army’s Armament, Research and Development and Engineering Center. Test launches of the M934 mortar projectile are scheduled for April to June 2008.


A regular M934 mortar

FURTHER READING
Using railguns for space launch.

Previous coverage of other railgun tests

Magnetic catapult better for space launches.

Part of the military's vision of railguns and other future military systems [15 page pdf]

ElectroMagnetic (EM) Gun Technology Maturation & Demonstration
The ElectroMagnetic (EM) Gun Technology Maturation & Demonstration ATO focuses on developing and demonstrating key EM gun subsystems at or near full-scale to support future armament system developments. Future armored combat systems require more lethal yet compact main armament systems capable of defeating threat armor providing protection levels greatly in excess of current systems. The goal is to reduce technical risk associated with EM Gun technology by demonstrating meaningful technical progress at subsystem level; gain an understanding of EM technology issues; identify technology trends; conduct return on investment analyses; and craft a technology development strategy.

By FY08, this effort will build a lightweight cantilevered high-fidelity railgun with integrated breech and muzzle shunt and demonstrate performance at hypervelocity and multi-round launch capability. It will integrate compact, twin counter-rotating pulsed alternator power supplies, conduct subsystem functional tests, and accomplish high fidelity PPS demonstrations that will establish requisite performance criteria to transition into the follow-on ATD.

EM armaments offer the potential to field a leap-ahead capability by providing adjustable velocities, including hypervelocity, greatly above the ability of the conventional cannon. EM armaments could greatly reduce the sustainment requirements and vulnerabilities of conventional cannon systems and potentially can be fully integrated with electric propulsion and electromagnetic armor systems to provide an efficient, highly mobile, and deployable armored force. If successful, the payoff of EM gun technology will be increased lethality and lethality growth potential and enhanced platform survivability by reducing launch signature and carrying less explosive energy on board.

Fuji Molten Salt reactor, Ralph Moir Interviews and other nuclear news

Charles Barton has an informative interview with Ralph Moir posted at Nuclear Green and Thorium Energy.
Dr. Ralph Moir was a distinguished scientist at Lawrence Livermore National Laboratory and a personal associate of Dr. Edward Teller. He first discusses fusion/fission hybrid reactors and then molten salt fission reactors.

Fusion holds the promise--yet to be fulfilled--of providing a supply of neutrons that can be used to produce fissile fuel for fission reactors. Even if fusion cost twice that of fission per unit of thermal power produced, its fuel would be competitive with mined uranium at $200/kg. Fusion will be even more competitive as its costs come down. This produced fuel can be used in fission reactors to completely burn up the fertile fuel supply, that is, depleted uranium or thorium. The weakness is that fusion is not here yet, and past slow progress suggests future progress might also be slow. Furthermore, we are not assured that fusion's costs will be less than twice that of fission.


This seems to suggest that even a partial success with inertial electrostatic fusion, where for some reason a full-scale commercial fusion reactor is not achieved or is slow in coming, could matter: if it becomes a thousand or ten thousand times better as a neutron source, then it could be part of making completely burning fission reactors. [Completely burning fission means no unburned fuel and almost no nuclear waste.]

A conventional molten salt reactor can produce almost all of its own fuel but needs initial fuel for start-up, some makeup fuel, and also some fuel to burn out certain wastes. So the fusion/fission hybrid can be this fuel supplier. In this way the combination of a hybrid fuel supplier and molten-salt burners can supply the planet's power for many hundreds or even thousands of years, at a nuclear power level increased enough to make a big impact in decreasing carbon usage. Such a combination might have one hybrid fusion/fission reactor for every fifteen fission reactors.


Dr. Moir's favorite fission reactor is the molten-salt reactor, whose program was terminated in the 1970s.

It holds the promise of being more economical than our present reactors while using less fuel. I published a paper on this topic that the ORNL people did not feel they could publish. It can come in small sizes without as much of a penalty as is usually the case and can be in large sizes. It can burn thorium thereby getting away from so much buildup of plutonium and higher actinides.

The next step in molten salt reactor development should be the construction and operation of a small <10 MWe reactor based largely on the MSRE that operated at ORNL at about 7 MWth but without electricity production. The FUJI [MSR] project [which I covered in detail] has not gotten funding and is making no progress other than a paper here and there on some particular aspect.

A crash program for molten salt reactor development would only cost about $1 billion.


FURTHER READING
Dr. Moir has a cost comparison of molten salt reactors to PWR and coal; molten salt would be a bit cheaper than the other two.

He published detailed recommendations for a restart of a molten salt reactor program.

Dr Moir's papers and links to molten salt reactor resources.

Hoglund has a page discussing the benefits of molten salt reactors.


OTHER NUCLEAR NEWS
New energy and fuel has a good article that digs deeper into the work to get higher burn rates from nuclear fuel.

The research and development of coating technology offers more than just an increase in burnup percentage. The total fuel used is reduced, the amount needed to produce a given output is reduced and, most importantly, the operating temperatures can be raised, which brings a dramatic increase in efficiency: much more electricity is generated for a given amount of fuel. Oak Ridge's review suggests that the increase in operating temperature would allow an increase of thermal efficiency from 31% in current plants to beyond 43%, which equates to more than 38% more power should current plants be retrofitted. It is probable that as plants are re-licensed, new reactor designs will be installed.


Florida state regulators Tuesday morning approved Florida Power & Light's request to build two nuclear plants at its Turkey Point facility.

Combined license applications have been filed for 11 reactors so far, not yet including the two new reactors for Turkey Point.


A total of 33 reactors from 22 applications are expected, most filed by the end of this year. The anticipated timeline is for licenses to be issued in 2011 and 2012. Actual construction could start then.

IBM makes progress toward optical on-chip communication which could speed data transfer 100 times and reduce power by 10 times

IBM researchers have developed the world's tiniest nanophotonic switch to route optical data between cores in future computer chips. If light can be used instead of wires, as much as 100 times more information can be sent between cores, while using 10 times less power and consequently generating less heat.

The report on this work, entitled “High-throughput silicon nanophotonic wavelength-insensitive switch for on-chip optical networks” by Yurii Vlasov, William M. J. Green, and Fengnian Xia of IBM's T.J. Watson Research Center in Yorktown Heights, N.Y., is published in the April 2008 issue of the journal Nature Photonics. This work was partially supported by the Defense Advanced Research Projects Agency (DARPA) through the Defense Sciences Office program “Slowing, Storing and Processing Light”.


UPDATE:
Japan's NEC is using something similar (high-speed optical interconnects) as the basis of its 2010 ten-petaflop supercomputer.

March 18, 2008

NIST Team paves the way for hybrid devices of standard CMOS and molecular electronics from organic molecules


Side and top views of the NIST molecular resistor. Above are schematics showing a cross-section of the full device and a close-up view of the molecular monolayer attached to the CMOS-compatible silicon substrate. Below is a photomicrograph looking down on an assembled resistor indicating the location of the well.

The NIST team demonstrated that a single layer of organic molecules can be assembled on the same sort of substrate used in conventional microchips.

The ability to use a silicon crystal substrate that is compatible with the industry-standard CMOS (complementary metal oxide semiconductor) manufacturing technology paves the way for hybrid CMOS-molecular device circuitry—the necessary precursor to a “beyond CMOS” totally molecular technology—to be fabricated in the near future.



For their electronic device, the NIST team first demonstrated that a good quality monolayer of organic molecules could be assembled on the silicon orientation common to industrial CMOS fabrication, verifying this with extensive spectroscopic analysis.

They then went on to build a simple but working molecular electronic device, a resistor, using the same techniques. A single layer of simple chains of carbon atoms, tethered on their ends with sulfur atoms, was deposited in tiny 100-nanometer-deep wells on the silicon substrate and capped with a layer of silver to form the top electrical contact. The use of silver is a departure from other molecular electronic studies, where gold or aluminum has been used. Unlike the latter two elements, silver does not displace the monolayer or impede its ability to function.

The NIST team fabricated two molecular electronic devices, each with a different length of carbon chain populating the monolayer. Both devices successfully resisted electrical flow with the one possessing longer chains having the greater resistance as expected. A control device lacking the monolayer showed less resistance, proving that the other two units did function as nonlinear resistors.

The next step, the team reports, is to fabricate a CMOS-molecular hybrid circuit to show that molecular electronic components can work in harmony with current microelectronics technologies.

This work was funded in part by the NIST Office of Microelectronics Programs and the Defense Advanced Research Projects Agency (DARPA) MoleApps Program.


Metamaterials can shrink the size of cellphones, radios and radar equipment



NIST researchers have made metafilms of both yttrium iron spheres (right, each about 50 millimeters in diameter) embedded in a matrix, and tiny copper squares etched on a wafer (above). Credit: (Top) C. Holloway/NIST, (Right) © Geoffrey Wheeler

The National Institute of Standards and Technology (NIST) has demonstrated that thin films made of “metamaterials”—manmade composites engineered to offer strange combinations of electromagnetic properties—can reduce the size of resonating circuits that generate microwaves.

The work is a step forward in the worldwide quest to further shrink electronic devices such as cell phones, radios, and radar equipment.


The research team deduced the effects of placing a metafilm across the inside center of a common type of resonator, a cavity in which microwaves continuously ricochet back and forth. Resonant cavities are used to tune microwave systems to radiate or detect specific frequencies. To resonate, the cavity's main dimension must be at least half the wavelength of the desired frequency, so for a mobile phone operating at a frequency of 1 gigahertz, the resonator would be about 15 centimeters long. Other research groups have shown that filling part of the cavity with bulk metamaterials allows resonators to be shrunk beyond the usual size limit. The NIST team showed the same effect can be achieved with a single metafilm, which consumes less space, thus allowing for the possibility of smaller resonators, as well as less energy loss. More sophisticated metafilm designs would enhance the effect further so that, in principle, resonators could be made as small as desired, according to the paper.
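
The 15-centimeter figure follows directly from the half-wavelength rule (a minimal check):

# Half-wavelength rule for a resonant cavity: length >= c / (2 f).
c = 3.0e8   # speed of light, m/s
f = 1.0e9   # 1 GHz mobile phone frequency
print(f"{c / (2 * f) * 100:.0f} cm")   # 15 cm, matching the figure above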

The metafilm creates an illusion that the resonator is longer than its small physical size by shifting the phase of the electromagnetic energy as it passes through the metafilm, lead author Chris Holloway explains, as if space were expanded in the middle of the cavity. This occurs because the metafilm’s scattering structures, like atoms or molecules in conventional dielectric or magnetic materials, trap electric and magnetic energy locally. The microwaves respond to this uneven energy landscape by adjusting their phases to achieve stable resonance conditions inside the cavity.
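The size arithmetic is simple enough to sketch. In an idealized one-dimensional cavity, a conventional resonator needs a length of c/(2f); if a film at the center adds an extra phase shift phi on each pass, the resonance condition 2kL + 2·phi = 2π gives L = (λ/2)(1 − phi/π), so the cavity shrinks as phi grows. The phase value used below is an assumed illustration, not a NIST figure.

```python
# Half-wave cavity length versus a metafilm-loaded cavity, using the
# simplified 1-D resonance condition 2*k*L + 2*phi = 2*pi.
# The metafilm phase shift phi below is an assumed value for illustration.
import math

C = 299_792_458.0  # speed of light, m/s

def half_wave_length(freq_hz):
    """Minimum conventional resonator length at a given frequency."""
    return C / (2.0 * freq_hz)

def metafilm_length(freq_hz, phi):
    """Resonator length when the metafilm adds phase phi (radians) per pass."""
    return half_wave_length(freq_hz) * (1.0 - phi / math.pi)

f = 1e9  # the 1 GHz mobile-phone example from the text
print(f"Conventional cavity:        {half_wave_length(f) * 100:.1f} cm")              # ~15 cm
print(f"With metafilm (phi = pi/2): {metafilm_length(f, math.pi / 2) * 100:.1f} cm")  # ~7.5 cm
```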

On the downside, the researchers also found that, due to losses in the metafilm, smaller resonators have a lower quality factor, or ability to store energy. Accordingly, trade-offs need to be made in device design with respect to operating frequency, resonator size and quality factor, according to the paper.

Supercompressed silicon-hydrogen compound superconducts [room temperature claim corrected below]

A Canadian-German team has fabricated a new superconducting material out of a silicon-hydrogen compound [after supercompression to 96-120 GPa]. Early reports said it did not require cooling; see the correction below. They had to keep the material under pressure (about 100 GPa) in order to get it to superconduct.

CORRECTION: The press release talked about not using refrigerant, and EETimes said room temperature superconductor. The researchers believe the new silane/hydrogen compounds could eventually reach room temperature superconducting levels. The temperature at which superconductivity occurs exhibits some interesting behavior: it hangs around 5-10K for most of the pressure range (50-200GPa), but in a small range between 100-125GPa it increases quite sharply. Although the researchers have only five data points in that range and never observed a critical temperature higher than 20K, the shape of the curve suggests that, for some small range of pressures, a very high critical temperature might be achieved. So they still have to investigate the critical pressure range and possibly other compounds, and still get them to work after pressure is removed. The other, unpressurized material that could be superconducting at 185K is closer to a possible improved application, but it needs more independent confirmation.

So there is still work to do to make this more practical: figure out a way to quench the metal such that it stays metallic and superconducting when pressure is removed, or figure out a better but similar material. There is also the early word on the non-pressurized advances to -87C. This is at the cusp of a decent lab freezer, which can easily go down to -86C. A lot of improvement and activity in the area of superconducting materials seems to be happening now.
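A quick unit sanity check on the numbers above (my own arithmetic, not from the papers): 100 GPa is roughly a million atmospheres, and 185.6 K is about -87.6 C, a degree and a half colder than the -86 C floor of a common lab freezer, which is why it sits right at the cusp.

```python
# Unit sanity checks for the pressures and temperatures quoted above.
ATM_PA = 101_325.0  # one standard atmosphere in pascals

def gpa_to_atm(gpa):
    return gpa * 1e9 / ATM_PA

def kelvin_to_celsius(kelvin):
    return kelvin - 273.15

print(f"100 GPa = {gpa_to_atm(100):,.0f} atm")        # ~987,000 atmospheres
print(f"185.6 K = {kelvin_to_celsius(185.6):.1f} C")  # ~ -87.6 C
gap = -86.0 - kelvin_to_celsius(185.6)
print(f"A -86 C freezer falls {gap:.1f} C short of this Tc")  # ~1.5 C short
```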

ANOTHER UPDATE
Fullerenes could theoretically be loaded with hydrogen (or other gases like silane) up to the correct pressure range. The problems are still being able to do it and what the actual peak critical temperature is at the optimal pressure. The best route is probably to learn more about superconductors from these materials and then figure out a better compound that does not require these extreme efforts.

UPDATE: A 3-page PDF on the methods used for the experiment.

We have used a diamond anvil cell equipped with beveled diamonds and a gasket made of cubic BN powder mixed with epoxy. Commercial silane of 99.99% purity (Air Liquide) was loaded through capillaries into a small cavity surrounding the diamonds, where it was condensed at ≈112-150 K. The whole system was carefully checked with a helium leak detector to ensure absolute tightness − a necessary precaution because silane is a pyrophoric substance.

Decomposition can indeed occur when silane is loaded at P<50 GPa and warmed to room temperature. In this case we clearly observed Si in the X-ray diffraction patterns, and the H2 vibron in Raman spectra, even from the metallic (not just the transparent) sample at higher pressures. Thus, we avoided decomposition by loading silane and performing further measurements at low temperatures, below 120-150 K. We warmed the sample up to 300 K only at pressures above 100 GPa. X-ray diffraction measurements proved that no Si phase appeared in this case. It is important that with our sensitive Raman setup we observed no hydrogen vibrons either in the sample or in the surrounding transparent cBN gasket.


This follows the recent news of higher critical temperature superconductors made under normal pressure that work at up to 185K (about -87C).

"If you put hydrogen compounds under enough pressure, you can get superconductivity," said professor John Tse of the University of Saskatchewan. "These new superconductors can be operated at higher temperatures, perhaps without a refrigerant."

He performed the theoretical work with doctoral candidate Yansun Yao. The experimental confirmation was performed by researcher Mikhail Eremets at the Max Planck Institute in Germany.

The new family of superconductors is based on a hydrogen compound called "silane," which is the silicon analog of methane—combining a single silicon atom with four hydrogen atoms to form a molecular hydride. (Methane is a single carbon atom with four hydrogens.)

Researchers have speculated for years that hydrogen under enough pressure would superconduct at room temperature, but have been unable to achieve the necessary conditions (hydrogen is the most difficult element to compress). The Canadian and German researchers attributed their success to adding hydrogen to a compound with silicon that reduced the amount of compression needed to achieve superconductivity.


In an article published today in the prestigious journal Science, the team has produced the first experimental proof that superconductivity can occur in hydrogen compounds known as molecular hydrides.

In related research, Tse’s team is using the Canadian Light Source synchrotron to study the high pressure structures of other hydride systems for potential superconductivity and for use in storing hydrogen for fuel cells.


WHAT WOULD COMMERCIALLY USABLE ROOM TEMPERATURE SUPERCONDUCTORS MEAN?
BBC News talked about the anticipated but delayed vision that followed the hoped-for results of the 1987 "warmer" superconducting breakthroughs.

Levitating high-speed trains, super-efficient power generators and ultra-powerful supercomputers would become commonplace thanks to a new breed of materials known as high temperature superconductors (HTSC).


Those difficult-to-manipulate superconductors have been on track to make smaller and more efficient motors, with commercial impact expected around 2010. South Korea was making significant advances with 1300 hp superconducting generators.

They were also being tested in 36.5 MW motors for navy ships.

Here is a more recent list of predictions of what the "warm" superconductors we had before the two most recent announcements could provide: 100 Tbps routers, faster communications, faster computers, better sensors and more. Room temperature versions would make all of these things cheaper, more widespread and more powerful.

If the new room temperature superconductors have, or can be made to have, a very high current density relative to their weight, then there is the possibility of a ground-launched magnetic sail or high-performance magnetic sails for space propulsion.

A 31-page PDF of the 1999 Zubrin study for NASA on magnetic sails.






Current densities of roughly 100 billion to 1 trillion or more amperes per square meter are needed for high performing magnetic sails (see the sketch after the reading list below).

D.G. Andrews and R.M. Zubrin, "Magnetic Sails and Interstellar Travel." Journal of the British Interplanetary Society, 1990. The first paper published, concerned primarily with the cost savings to other propulsion systems from the use of the magsail as an interstellar brake.

R.M. Zubrin and D.G. Andrews, "Magnetic Sails and Interplanetary Travel." Journal of Spacecraft and Rockets, April 1991. The technical description and very thorough analysis of the magsail for interplanetary travel. Excellent.

R.M. Zubrin, "The Magnetic Sail." Analog Science Fiction & Fact, May 1992. A version of the above paper edited for a non-technical audience. Useful for general concepts, inadequate for a full understanding.


FURTHER READING
Superconductivity in Hydrogen Dominant Materials: Silane [journal Science abstract]

M. I. Eremets,1* I. A. Trojan,1 S. A. Medvedev,1 J. S. Tse,2 Y. Yao2

The metallization of hydrogen directly would require pressure in excess of 400 gigapascals (GPa), out of the reach of present experimental techniques. The dense group IVa hydrides attract considerable attention because hydrogen in these compounds is chemically precompressed and a metallic state is expected to be achievable at experimentally accessible pressures. We report the transformation of insulating molecular silane to a metal at 50 GPa, becoming superconducting at a transition temperature of Tc = 17 kelvin at 96 and 120 GPa. The metallic phase has a hexagonal close-packed structure with a high density of atomic hydrogen, creating a three-dimensional conducting network. These experimental findings support the idea of modeling metallic hydrogen with hydrogen-rich alloys.

1 Max Planck Institute für Chemie, Postfach 3060, 55020 Mainz, Germany.
2 Department of Physics and Engineering Physics, University of Saskatchewan, Saskatoon, S7N 5E2, Canada.

On leave from A. V. Shubnikov Institute of Crystallography, Russian Academy of Sciences, 117333, Leninskii Avenue 59, Moscow, Russia.

March 17, 2008

Superconductivity seen at up to 185K


On rare cold days in Antarctica this material would be superconducting outdoors without added cooling.

UPDATE: The announcement of room temperature superconductors from highly compressed silicon and hydrogen, published in the journal Science by Saskatchewan, Canada and German researchers, was premature. The transition temperature was low for the data they had, but they believe there is a pressure zone that performs better.

There are hints of superconductivity at 200K for aluminum nanoclusters. Not all three requirements for superconductivity confirmation have been met, and the nanoclusters are of limited practical use. If confirmed, maybe some kind of suspension or nanoparticle structure could be used: millions of nanoparticles have already been assembled into three-dimensional crystal structures using DNA.

ANOTHER UPDATE: A new family of high temperature superconductors has been found; they are iron and arsenic compounds.


Common laboratory freezers can reach -85C or -86C. The superconductor is working at -87C.
Low temperature freezers cost about $6,000-15,000.

Superconductors that would work at room temperature or with cheaper refrigeration, and that can be produced in large volumes, would revolutionize energy distribution and could improve all kinds of technology.

Like the 181K superconductor reported in January of 2008, the 185K superconductor appeared as a minority phase in a 1223/1212 host that was doped with extra Tm and Cu (see the structure types at the bottom of the superconductors.org page).
Through trial and error, Tc was found to peak with slightly more lead and slightly less indium than the 181K formulation. Eight separate tests of the compound (Sn1.0Pb0.5In0.5)Ba4Tm5Cu7O20+ produced an average Tc of 185.6K. Interestingly, the 3-to-1 ratio of 4A to 3A metals in the insulating layer is also the ratio that produces the highest transition temperatures among binary alloy superconductors.
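That 3-to-1 ratio follows directly from the stoichiometry, using the standard group assignments for the metals (a trivial check, my own arithmetic):

```python
# Group 4A vs 3A metal ratio in the insulating layer of
# (Sn1.0 Pb0.5 In0.5) Ba4 Tm5 Cu7 O20+.
composition = {"Sn": 1.0, "Pb": 0.5, "In": 0.5}
GROUP_4A = {"Sn", "Pb"}  # group 14 (old 4A)
GROUP_3A = {"In"}        # group 13 (old 3A)

n4 = sum(v for k, v in composition.items() if k in GROUP_4A)  # 1.5
n3 = sum(v for k, v in composition.items() if k in GROUP_3A)  # 0.5
print(f"4A : 3A = {n4} : {n3} = {n4 / n3:.0f} : 1")           # 3 : 1
```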


The discoverer of the proposed new record-high-Tc material is Joe Eck, the author of superconductors.org.




In October of 2007, superconductivity near 175K was detected at ambient pressure in an Sn-In-Tm intergrowth. By doping roughly 28% of the Sn atomic sites of that compound with Pb, Tc is increased further to 181K (183K magnetic). The revised chemical formula thus becomes (Sn1.0Pb0.4In0.6)Ba4Tm5Cu7O20+ with a 1245/1212 (non-stoichiometric) structure.

The report on the 175K superconductor discovery is here.

FURTHER READING
Other superconductor news

Thermo Scientific makes and sells advanced freezer systems able to handle -196C.

A 4-page PDF discussing advances in cooling technology.

IEC fusion visitor report

The latest word from the IEC fusion effort. Basically progress is being made by a dedicated team, but results will be rigorously confirmed before they are published.

A visitor to the lab reports:
The effort is very professional, the crew is made of experienced people who get their hands dirty, and progress is occurring at a rapid rate. The WB-7 vacuum chamber is a dream: stainless steel with hinged doors, large enough for a person to sit in (it would be cramped, but still). It was welded by the same guy who welded my "Carl's Jr.," a real craftsman, but EMC wanted it so fast that it couldn't even be leak-checked before coming to the lab--that was done on site. They definitely have the "balls to the wall." I brought them a BTI PND dosimeter as a little token of my appreciation for the tour and in the hopes that they get neutrons soon. I watched a video of their first plasma in helium. They've got guys working on a magnet current switching scheme and RF plasma diagnostics, among many other aspects. Parts (MaGrids) from the previous WB systems were sitting out on the office floor and were informative to look at. If I tried to describe their program in any specificity here, I'd be abusing my privilege of being a tourist in their lab. But basically, it looked to me like a vigorous and enthusiastic effort.

More estimates of Bakken oil size and timing

The Kiplinger Letter talks about the potential and timing of the Bakken oil play

An official government survey of the Bakken region's oil treasure trove is due out next month. The report is expected to play it very conservatively, because it will confine estimates to the amount of oil that likely can be produced profitably based on last year’s oil prices. It will also not take into account any further technological advances that might make it even easier to extract more oil.

"The Bakken is much like the enormous natural gas field that sat for many years under and around Dallas until people figured out the geology and how to drill it out economically," says Lucian Pugliaresi, president of the Energy Policy Research Foundation.

Figure on at least five years before the oil starts flowing in large volumes. A lot of work will need to be done first. In addition to installing drilling gear, firms must build supporting infrastructure, including roads and pipelines, as well as new water, sewage and sanitation systems to meet the needs of workers and other area residents.

Note that the Bakken play region is not an environmentally sensitive area like the Alaskan tundra, where concerns about damage to the fragile environment have stymied much oil field development. Still, some environmental protests are sure to emerge and may gum up development for a while, but they're unlikely to stop oil production from the Bakken fields.


BISMARCK, N.D. — Oil companies are now drilling beneath North Dakota's big lake.

Oil companies are using advanced horizontal drill techniques to tap crude oil and gas underneath Lake Sakakawea.

Controlled and computationally modelled gene deletion: a new approach to gene therapy

Gene therapy up until now has meant the insertion of a new working gene into cells to add or activate a capability, but the controlled deletion of multiple genes is now emerging as a new way to affect cellular activity.

The targeted removal of genes -- the exact opposite of what a gene therapist would do -- can restore cellular function in cells with genetic defects, such as mutations.

After the 2003 Northeast electrical power grid blackout, where a sequence of failures in the power grid led to the largest outage in U.S. history, experts determined that the event could have been reduced or avoided by instigating small intentional blackouts in the system during the initial hours of instability.

“And the same could be valid in biology, where a defective gene may trigger a cascade of ‘failures’ along the cellular network,” said Motter, whose interest and expertise lie in complex systems and understanding how the structure and dynamics of a high-dimensional system, such as an intracellular network or a power grid, relate to its function.



Schematic illustration of the consequences of gene deletion on the organism's growth rate. (A) The growth rate following the deletion of an enzyme-encoding gene often drops, but after many generations may recover to a new optimal value not very different from the original one (red line). The optimal growth rate before and after the deletion is predicted by flux balance analysis (FBA) (black and green dotted lines). The blue line indicates the predicted buffering effect of additional gene deletions: by deleting appropriately selected additional genes, the suboptimal growth rate shortly after gene deletions is higher than without the rescue deletions. (B–E) The effect of rescue deletions on the fluxes of a metabolic network, where M1 ... M4 represent metabolites and the width of the arrows represents the strength of individual fluxes.

The team’s use of predictive models is similar to how physicists use models, for example, to determine the position of the moon tomorrow at a specific time. Thanks to the recent wealth of available biological information, computational scientists now are beginning to develop quantitative models of biological systems that allow them to predict cellular behavior.

In one in silico experiment (via computer simulation) with E. coli, the researchers found that the deletion of one gene is lethal to the cell, but when that same gene is removed along with other genes, it is not lethal. The gene, it turns out, is only essential in the presence of other genes. This touches on the issue, says Motter, of whether organisms have an unconditional set of essential genes.
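These in silico experiments rest on flux balance analysis (FBA), which treats the metabolic network as a linear program: maximize the biomass flux subject to the steady-state mass balance S·v = 0 and capacity bounds on each reaction, with a gene deletion modeled by pinning the corresponding flux to zero. Below is a minimal toy sketch of plain FBA on a made-up two-metabolite network, not the paper's E. coli model; the rescue effect itself additionally requires the suboptimal MOMA states, which this sketch does not implement.

```python
# Minimal toy flux balance analysis (FBA): maximize biomass production
# subject to steady-state mass balance S v = 0 and flux bounds.
# Illustrative network only, not the E. coli model or MOMA method
# used in the paper.
import numpy as np
from scipy.optimize import linprog

# Rows = metabolites (A, B); columns = reactions:
# R0 uptake -> A, R1 A -> B (main), R2 A -> B (alternate), R3 B -> biomass
S = np.array([
    [1, -1, -1,  0],   # metabolite A
    [0,  1,  1, -1],   # metabolite B
])

def fba_growth(knockouts=()):
    bounds = [(0, 10), (0, 10), (0, 3), (0, None)]
    for k in knockouts:                   # gene deletion = flux forced to zero
        bounds[k] = (0, 0)
    c = [0, 0, 0, -1]                     # linprog minimizes, so maximize v3
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return -res.fun

print(fba_growth())                # wild type: biomass flux 10.0
print(fba_growth(knockouts=[1]))   # main pathway deleted: drops to 3.0
```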



Distribution of metabolic fluxes in the E. coli TCA cycle in arabinose minimal medium for (A) the wild-type organism predicted by FBA, (B) the fbaA mutant predicted by minimization of metabolic adjustment (MOMA), (C) the optimal state of the fbaA mutant predicted by FBA, and (D) the fbaA mutant with the rescue deletions of genes aceA and sucAB, predicted by MOMA. Key flux changes are highlighted in orange. Note that the metabolic flux pattern predicted by MOMA after the fbaA deletion (B) is similar to the wild-type fluxes (A). With the rescue deletions, however, the MOMA-predicted fluxes (D) are brought closer to the FBA-predicted fluxes (C), restoring the organism's ability to produce biomass.

While Motter’s team has not done actual laboratory experiments, they have used their computational results to re-interpret and explain specific recent experimental results. They have applied physics methods to solve a biological problem. Their method, for example, can identify the genes whose removal restores growth in gene-deficient mutants of E. coli and S. cerevisiae, a type of yeast.



The impact of rescue deletions. (A) Predicted biomass production for the fbaA mutant of E. coli in arabinose minimal medium as a function of the number of rescue deletions when starting with aceA and sucAB. Deleted rescue genes are indicated in the figure. (B) Biomass production of tpiA- and nuoA-deficient mutants in glucose minimal medium as a function of the number of individual rescue deletions. Deleted genes are indicated in the figure. The optimal biomass flux remains unchanged with the addition of rescue deletions. The biomass fluxes are normalized by the wild-type FBA flux G_wt^FBA = 0.745 mmol/(g DW·h) in (A) and 0.908 mmol/(g DW·h) in (B).

FURTHER READING
The research is here: Predicting synthetic rescues in metabolic networks.

An important goal of medical research is to develop methods to recover the loss of cellular function due to mutations and other defects. Many approaches based on gene therapy aim to repair the defective gene or to insert genes with compensatory function. Here, we propose an alternative, network-based strategy that aims to restore biological function by forcing the cell to either bypass the functions affected by the defective gene, or to compensate for the lost function. Focusing on the metabolism of single-cell organisms, we computationally study mutants that lack an essential enzyme, and thus are unable to grow or have a significantly reduced growth rate. We show that several of these mutants can be turned into viable organisms through additional gene deletions that restore their growth rate. In a rather counterintuitive fashion, this is achieved via additional damage to the metabolic network. Using flux balance-based approaches, we identify a number of synthetically viable gene pairs, in which the removal of one enzyme-encoding gene results in a non-viable phenotype, while the deletion of a second enzyme-encoding gene rescues the organism. The systematic network-based identification of compensatory rescue effects may open new avenues for genetic interventions.