Pages

July 11, 2008

Wasteful government spending, unfunded projects and prom dates

Many people who discuss energy and environmental issues complain that if only project X or funding for technology AB were cut, then the billions not wasted on it could go to the energy technology they like or the energy project they think would work.

This is massively flawed reasoning for several reasons:
1. The assumption that cutting the massive waste and inefficiency in government spending would re-assign the freed-up tax dollars to "the good project" does not make sense given the track record.

Think of dates for the High School Prom.
If the girls are funders of projects and the guys are projects to be funded, then if all the football players are somehow banned from the prom it does not follow that all of the chess club members would now get dates.

2. Lack of available funds is not the reason that certain projects are not getting funded.

There could be a highly worthy (based on science and engineering) energy project, like the Fuji molten salt reactor, whose design is languishing for lack of a few hundred million or a billion dollars of funding. But the world economy is about $60 trillion/year, and $1-2 trillion per year goes to energy infrastructure, either building it or researching it. So it is as if there were a population of 60,000 women. 20,000-40,000 of them have to work on the farm or in the offices to keep the city running. Perhaps 20,000 could go to the prom if it were important enough; roughly that many went to the World War 2 mobilization prom. Many will choose to stay home or do something else and are simply not interested in the prom. Meanwhile, 1,000-2,000 are already going to the energy infrastructure and research prom. The fact that one guy could not convince one of those 1,000-2,000 to go with him instead of one of the other guys is not the only reason he did not get a date: he could have asked and tried to convince one of the 18,000-19,000 other available and eligible women to go with him. And only 15-25% of the women work for a government [tax money]; the rest are private money.
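The proportions in the prom analogy map directly onto the post's dollar figures. A minimal Python sketch of the arithmetic (the dollar amounts and percentages are the post's rough estimates, not precise data):

```python
# Rough scale of global energy funding, per the figures in the post.
world_economy = 60e12          # ~$60 trillion/year world economy
energy_spend = (1e12, 2e12)    # ~$1-2 trillion/year on energy infrastructure and research
gov_share = (0.15, 0.25)       # ~15-25% of funders are government (tax) money

for spend in energy_spend:
    print(f"energy spending is {spend / world_economy:.1%} of the world economy")
for share in gov_share:
    print(f"government share of a $2T energy spend: ${2e12 * share / 1e9:.0f} billion")
```

Even at the low end, the private pool of energy money dwarfs any single government program, which is the point of the "other available women" part of the analogy.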

So the suggestion is that the dateless guy/unfunded project needs to look at dressing better, working out and re-inventing himself so that he succeeds in getting a date/getting funded. In the case of a project, that means creating better plans, finding ways to do more preliminary research that justifies the project, and considering whether the project really makes sense given the overall situation in different regions and countries.

Also, certain kinds of funders/women will never fund certain projects. The chess club guy/Fuji molten salt reactor may need to reconsider asking only large-breasted blond women/a specific department of a specific nation for money, and try other sources.


Government waste, corruption of the system and inefficiency add up to a lot more than a few billion here or there.

Here is a Heritage Foundation list from 2005 of the top ten items of US government waste.

Some of the items:
Overpaying for medicine in Medicare ($25-30 billion could be saved with reform)
Defaulted student loans: $25 billion (some of the students did not exist)
Other fraud

A 2007 report indicates that government waste is at an all-time high.

Big science projects that are unlikely to achieve their stated goals (the ITER international tokamak fusion project, supposedly trying to achieve abundant, cheap and clean energy) may get cut, but the funds will not go into a better fusion project or a good advanced fission project; instead they will go to funding wars and defence spending.

Microbial energy solution prospects for biofuels and solar power

In their Nature Reviews Microbiology perspective article, the Biodesign team outlines the prospects for bioenergy. They believe the future of microbial bioenergy is brightened by recent advancements in genome technologies and other molecular-biology techniques. One species of bacteria, the human gut bacterium E. coli, has already become the workhorse of the multi-trillion dollar global biotech industry.

Microorganisms can produce renewable energy in large quantities and without damaging the environment or disrupting food supply. The microbial communities must be robust and self-stabilizing, and their essential syntrophies must be managed. Pre-genomic, genomic and post-genomic tools can provide crucial information about the structure and function of these microbial communities. Applying these tools will help accelerate the rate at which microbial bioenergy processes move from intriguing science to real-world practice.


A recent International Herald Tribune article reviewed the status of having algae produce biofuel. Large-scale commercial production is at least five years away, according to most estimates, and it is still too early to say which methods, if any, will be economically viable, how much energy they may produce and what their effects on the environment might be. The U.S. National Renewable Energy Laboratory is focusing on the development of commercial co-products for algae, like ethanol or animal feed, which could help to improve profitability.

LiveFuels uses open ponds to grow algae that are indigenous to the local environment, hoping that this will avoid the invasion problem. Since algae need nutrients to grow, including nitrogen and phosphorous, the company plans to feed agricultural runoff water - polluted with nitrogen and phosphorous fertilizers - into its ponds, combining energy production with water treatment.

Another company, Bionavitas, of Redmond, Washington, also grows native algae, but in deep, narrow canals, with a special optical system to bring light to the algae beneath the surface. It too hopes to harness nutrients from polluted wastewater; and because intense carbon dioxide inputs can speed growth, it envisages setting up sites next to a factory that could funnel smokestack emissions directly into its canals. Michael Weaver, the chief executive, said that Bionavitas aimed to use "the whole algae" to produce biodiesel, ethanol, nutriceuticals and products currently derived from petroleum.

Vertigro, a U.S. company based in Vancouver, Canada, is testing single varieties of algae, grown in bioreactors that resemble hanging plastic bags, to see which grows best in a closed environment and produces the most oil. Its business plan is to sell its system to companies that would use it for commercial biofuel production, said Glen Kertz, chief executive of Valcent Products, a partner in Vertigro with Global Green Solutions, a sustainable energy development business.

In Seattle, Blue Marble Energy is putting algal biomass in anaerobic digesters to produce industrial chemicals and methane. The latter is combusted in a turbine to generate electricity and could also be used in fuel cells, said the chief executive, Kelly Ogilvie. Saleable byproducts include ammonia, anhydrous ammonia, and other industrial chemicals currently made with petroleum.


LiveFuels is also on this Earth2tech list of 15 algae-to-fuel startups.

From Sciencedaily, to date, approximately 75 genomes are available from microorganisms that have a role in bioenergy production. These include 21 genomes from methane producing archaea, 24 genomes from bacteria that can produce hydrogen or electricity, and 30 genomes from cyanobacteria that are potential biodiesel producers. At least half of the completed microbial genomes that are relevant to bioenergy were released in the past 2 years, and more than 80 bioenergy-related genomes are currently being sequenced.
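The genome tally quoted from ScienceDaily can be checked in a couple of lines; the category counts are the ones given in the paragraph above:

```python
# Bioenergy-relevant microbial genomes, by category, as cited in the post.
genomes = {
    "methane-producing archaea": 21,
    "hydrogen/electricity-producing bacteria": 24,
    "cyanobacteria (potential biodiesel producers)": 30,
}
total = sum(genomes.values())
print(total)  # 75, matching the "approximately 75 genomes" figure
```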


Biodesign researchers outline paths where bacteria are the best hope in producing renewable energy in large quantities without damaging the environment or competing with our food supply.

Two distinct, but complementary approaches will be needed. The first is to use microbes to convert biomass to useful energy. Different microorganisms can grow without oxygen to take this abundant organic matter and convert it to useful forms of energy such as methane, hydrogen, or even electricity. The second uses bacteria or algae that can capture sunlight to produce new biomass that can be turned into liquid fuels, like biodiesel, or converted by other microorganisms to useful energy. Both approaches currently are intensive areas of biofuel research at the Biodesign Institute, which has a joint project with petroleum giant BP to harvest photosynthetic bacteria to produce renewable liquid fuels, such as biodiesel.


July 10, 2008

Continental Resources Bakken oil 600 to 1000 barrel a day per well

Continental Resources' first well flowed at an average rate of 693 barrels of crude oil equivalent per day in its initial week of production in May.

The second well, Mathistad 1-35H, began production on July 4 and flowed at an average rate of 1,095 barrels of crude oil equivalent per day, with 90 percent of production being crude oil and 10 percent natural gas.

Natixis Bleichroeder analyst Curtis Trimble said the latest results from the Three Forks/Sanish formation increased the productive profile of the Bakken Shale area.

"Future wells will be closer to the 600 to 1000 barrel a day level versus previous wells that were averaging about 450 barrels a day," Trimble said.

Continental is the largest leaseholder in the Bakken Shale play with about 500,000 acres in North Dakota and Montana.

Nuclear Plant Builder Shaw Group's conference call highlights

Shaw Group is a partner in the AP1000 nuclear reactor consortium (with Westinghouse/Toshiba) and is the architect-engineer for the AP1000 reactor group. Revenue per AP1000 reactor is between $2 billion and $2.5 billion on an EPC basis, and Shaw Group gets 20% of the profits on the sale of the reactor units. The reactors are currently somewhere around $3,500 per kW, plus or minus 20%.
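The $3,500/kW figure can be turned into a rough per-reactor cost. This is a sketch only: the ~1,100 MWe output assumed for an AP1000 is my assumption, not a number from the call.

```python
# Rough overnight-cost arithmetic from the $3,500/kW (+/- 20%) figure.
# The ~1,100 MWe AP1000 output is an assumption for illustration.
kw = 1_100_000                 # ~1,100 MWe expressed in kW
for cost_per_kw in (3500 * 0.8, 3500, 3500 * 1.2):
    total = cost_per_kw * kw
    print(f"${cost_per_kw:,.0f}/kW -> ${total / 1e9:.2f} billion per reactor")
```

The midpoint comes out near $3.9 billion, which is consistent with Shaw's $2-2.5 billion EPC scope being a large share of each unit.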

As of the May 31, 2008 conference call:
we've gotten four reactors so far. Last year about this time we told you we'd have two to eight, so we've got four reactors. We have a Letter of Intent for two more reactors in the United States, and that's with Progress, and that should hopefully go to an EPC contract in the next two, three, four months. In addition to that, in the United States we feel comfortable that we will again be awarded two to eight reactors over the next calendar year, in calendar 2009. South Africa: it's pretty publicly known that the project is between three and 18 reactors, with our competition being Areva, and that's about the only comment I have on that.

The UK market continues to develop quicker than we thought, and that market should follow very closely with the US market. In India, we are still hopeful that during the calendar year the treaty with the current Bush administration will be signed, and I think there's some movement in India because the government has come to some type of understanding with the minority party, so that may have promise for the rest of the year as well. Those are the major markets with the exception of China; the other markets throughout the world are mostly smaller, a couple of reactors in different countries. But the global market continues to develop faster and faster with the search for long-term economical sources of energy.



A steel increase may be $100 million on a nuclear power plant, but if you look at its effect on the cost of electricity over 40 years from a $7 billion to $10 billion plant, it's inconsequential.
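The "inconsequential" claim is easy to check by spreading $100 million over the plant's lifetime output. A minimal sketch, assuming ~1,100 MWe output and a 90% capacity factor (both are my illustrative assumptions, not figures from the call):

```python
# Spread a $100M steel-cost increase over 40 years of generation.
steel_increase = 100e6          # $100 million
power_kw = 1_100_000            # assumed ~1,100 MWe plant
capacity_factor = 0.9           # assumed 90% capacity factor
hours_per_year = 8766           # average hours in a year
lifetime_kwh = power_kw * capacity_factor * hours_per_year * 40
print(f"lifetime output: {lifetime_kwh:.2e} kWh")
print(f"added cost: {steel_increase / lifetime_kwh * 100:.3f} cents/kWh")
```

Under these assumptions the increase works out to a few hundredths of a cent per kWh, which is indeed negligible next to typical generation costs.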

David Yuschak of SMH Capital asked: Let's talk about the nuclear build. In the last nuclear build, the biggest problem companies got into was the cash flows going to those projects; things got out of whack with a lot of excess expectations, particularly at some of the smaller companies, given the size and scale of the projects. As you look at this next cycle, what do you think the nuclear owner has learned to make sure the issues that put them under last time, and the cash flow expectations needed to support the build, have changed compared to the last nuclear build?

I think the overriding difference is that a combined operating license is issued before there's construction work done on the site. This means that once the plant is complete in accordance with the permit, it can go into operation. In the past you had a construction license where you could begin construction of a nuclear power plant, and then a new crew from the NRC came in when it was substantially finished and said, you know, I need you to change this, that and the other thing before you get an operating permit, which tended to delay two to three years. Excessive changes during the construction process meant longer delivery of the operating unit. And if you recall back then, it's been such a long time, with the construction techniques from CAD systems to Intergraph, interfaces are automatically cleared today; back then you didn't know until you actually erected piping systems or structural components, so a lot of things have changed. But I think that far and away the biggest change is that once you have a [inaudible] construction operating license you're able to go ahead and build the plant and get it to work, so we're encouraged by that, and I think that we have a good handle on what we're required to do.

Q: How much time do you think it will take then to do a conventional nuclear project today given these technologies and the licensing from start to finish?

A: I think that from a first concrete poured in the field I think that 48 months would be a reasonable expectation.

FURTHER READING
McDermott International is also involved in building nuclear plants and equipment

General Electric makes nuclear plants and wind turbines

Exelon operates 17 nuclear reactors in the United States

As gas, coal and capacity prices increase and reserve margins decline, the value of our nuclear fleet increases. We expect that value to increase even further in a carbon-constrained world. While we don't know just when, all three presidential candidates have pledged support for carbon legislation, and even President Bush now advocates action. We believe that legislation will be enacted in 2009 or 2010 with a likely effective date sometime in 2012 or 2013.

Dominion Resources is a utility that operates nuclear power plants

Our third unit at our North Anna nuclear station remains on track. We mentioned to you on our last call that the Nuclear Regulatory Commission had received and reviewed our combined operating license application. The application was deemed complete and a scheduling order was issued. We've already received approval for an early site permit and we stand first in line to receive our combined operating license from the NRC in the latter part of 2011. This approval would make us the first company in the nation to begin construction of a new nuclear unit in nearly three decades. Additionally, we plan to apply this summer for federal loan guarantees at the Department of Energy.

The approval process to construct a nuclear plant rests with the NRC. Because we are a regulated utility in Virginia, our earned return on the plant during construction and its service life is determined by state law. Under Virginia law, we will be eligible to file for a premium of 200 basis points on the allowed base return on equity for this nuclear unit, which we plan to do later this year.

There appears to be a continuing misunderstanding among some in the financial community about the procedural requirements necessary to construct a nuclear facility in the United States. This misconception probably arises from the flurry of announcements by companies, some regulated, some merchant, some who have early site permits, and some who seek the site permits as part of a combined operating license application. Every company, whether regulated or unregulated, must have an NRC issued COL to begin safety related construction of a new nuclear unit regardless of all other approval requirements, including State Commission approvals. To help understand the process, at least from our perspective, please refer to Dominion's Investor Relations website under our supplemental schedules for an outline of the application process to begin such construction.


Entergy Corp is another utility operating nuclear reactors in the USA

The Shaw Group expanded operations and opened new offices in China to support its increased nuclear business there.

YBCO superconductor at 105K and upcoming 195K superconductor

Superconductors.org reports that the critical transition temperature (Tc) of the industrial superconductor YBCO (YBa2Cu3O7) has been successfully increased from 92K to near 105K by reformulating it to Y3Ba5Cu8Ox. No new elements were added.

Y-358 - dubbed "Ultra YBCO" - is a novel intergrowth incorporating two different types of planar weight disparity (PWD) and a larger unit cell (around 31.2 Å). The target structure is collinear and does not incorporate branching of the CuO2 chains, as occurs in the Y-124 and Y-247 structure types.

Other YBCO variants have also been discovered by Superconductors.ORG, but had limitations vis-a-vis standard YBCO. "Super YBCO" (Tc up to 107K) required an expensive heavy rare earth oxide to synthesize. And "Enhanced YBCO" (Tc 97K) was not homogeneous; its volume fraction appeared to be around 30%. Ultra YBCO should cost no more than standard YBCO to manufacture.

Synthesis of the material was by the solid state reaction method. Stoichiometric amounts of the precursors were mixed, pelletized at 70,000 psi and sintered for 11 hours at 890°C. The pellet was then annealed for 10 hours at 500°C in flowing O2.

Superconductors.org is also teasing that it has a superconductor that works at 195K, the dry ice sublimation temperature [-78 Celsius is 195 Kelvin].
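The temperature claims are easy to sanity-check with a Celsius-to-Kelvin conversion:

```python
# Dry ice sublimes at about -78 °C; "room temperature" is about 27 °C.
def c_to_k(celsius):
    return celsius + 273.15

print(c_to_k(-78))   # ~195 K, the teased transition temperature
print(c_to_k(27))    # ~300 K, the room-temperature target
```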

So this year has seen excellent experimental progress toward room temperature (300K) superconductors, as well as theoretical progress. There has also been a whole new class of higher temperature iron-based superconductors.

Myostatin inhibitors funded for muscle regeneration of war injuries

A $1.2 million grant from the Office of Naval Research to Dr. Mark Hamrick, bone biologist in the Medical College of Georgia Schools of Graduate Studies and Medicine, is enabling laboratory studies of two experimental myostatin inhibitors: a decoy receptor and a binding protein, both developed by MetaMorphix, Inc. of Beltsville, Md. Both inhibitors have been shown to be effective in muscle regeneration, but this is the first trial to look at their impact on bone.

They are studying the two myostatin inhibitors in mice with limb injuries, first to see which works best and then to identify the best delivery mechanism, Dr. Hamrick says.

"Fifty to 60 percent of the injuries occurring in Iraq are to the limbs, and the average injury requires five surgeries," Dr. Hamrick says. "Myostatin inhibitors are known to improve muscle regeneration and we have evidence that they also increase bone formation. We believe these inhibitors will result in a stronger, more rapid recovery for these soldiers and other victims of traumatic limb injuries."



Bone and muscle healing typically go hand in hand. Muscle provides blood, growth factors and potentially stem cells for a healing callus. It's not yet known how well bones reciprocate. "If you can improve muscle healing, you can improve bone healing," Dr. Hamrick says. "Young people have a tremendous potential to heal that can be improved with better approaches to preventing infection and to healing soft tissue and bone in an integrated manner."

Researchers hope to move to clinical trials in two to three years, Dr. Hamrick says. "If we find the primary role of myostatin is very early in the healing process and see a big jump in expression early in a fracture callus, it may be that a single injection bolus immediately after injury is the best time for treatment rather than continued treatment over a period of time."


FURTHER READING
Roughly the same article, but the Medical College site has some link issues.
Augusta Chronicle coverage

Science Daily coverage

DNA sewing machine



A DNA sewing machine has been created by Kyohei Terao of Kyoto University and colleagues. They designed laser-directed microdevices to pick up and manoeuvre giant individual molecules of DNA. The technology will also be useful for a number of other applications, including DNA sequencing and molecular electronics. It should help enable the goals of $10-1000 whole genome sequencing and help with DNA manufacturing (using DNA as a structural material). There is a lot of further potential for more specially designed microtools and structures to improve the manipulation of DNA and other molecules.

The full article is here

In conclusion, we have demonstrated the method and device for on-site single-molecule manipulation of giant DNA molecules, using optically driven microstructures for picking up and separating DNA fibers from a bundle. We used a microfabricated hook together with winding/unwinding of the DNA fiber onto microfabricated bobbins. This method enables the manipulation of DNA molecules on the order of megabase pairs under a microscope without fragmentation. The method is purely mechanical and requires no chemical modifications; moreover, it can manipulate any desired part of the targeted DNA in the microscope view. This method will create avenues for space-resolved single molecule assays of large chromosomal DNA, along with its applications in gene location and epigenetic studies.


Single molecule analysis of DNA is limited by the difficulty of stretching out and handling these long molecules - eukaryotic DNA can range from millimetres to centimetres. A giant DNA molecule is very fragile, explains Terao, so to catch it and manipulate it without breaking it is a challenge.

Thinking of a strand of DNA as a piece of sewing thread, Terao developed microhooks to pick up the DNA, just like we would use our fingers to pick up thread. "When thread is very long it becomes tiresome to manipulate it just with our fingers and instead we wind it around bobbins to make it compact. This is what inspired us to use microbobbins," says Terao.



Optical tweezers - where tightly focused laser beams trap and hold tiny objects - are used to catch and move these microdevices. The z-shaped microhook is directed by the tweezers to pick up a single strand of DNA, and barbs in the openings of the hook prevent the caught DNA unhooking. In the case of bobbins, two focused laser beams are used to revolve one bobbin around the other. The revolving motion winds the DNA molecule between the two bobbins.

This DNA manipulation technique should prove useful in applications such as fluorescence in situ hybridisation (FISH), says Terao.



Now that is a long comment thread

The big news out of Iraq is that Lara Logan, the chief foreign affairs correspondent for CBS News, tells The Washington Post she is pregnant, and the father is a married federal contractor whom she met while stationed in Iraq. H/T instapundit

The one thing that I would note is that the comment thread is already over 2700 comments, and one would have to page through them ten at a time.

Carnival of Space Week 62

Dave Mosher at discovery.com has an excellent Carnival of Space week 62. The slideshow presentation and article layout are top notch, and the content of the twenty articles is excellent as well. Space Disco is one of the seven new blogs you'll find at Discovery Space. Bad Astronomy is one of the blogs that has moved to discovery.com.

This site contributed the latest discussion of a 100MW version as the next step for the IEC fusion reactor project

Centauri dreams has an article by Marc Millis laying out the current status of the Tau Zero Foundation, a non-profit looking at ways to achieve breakthroughs for interstellar travel.

A Babe In The Universe reviews the space angles of the International Conference on Environmental Systems in San Francisco.

Check out the Space Disco for a lot more in the Carnival of Space week 62

Terabit per second internet coming soon

Researchers at the University of Sydney have developed technology that could boost the throughput of existing networks by sixty- to 100-fold without costing the consumer any more, and it's all thanks to a scratch on a piece of glass.

After four years of development, University of Sydney scientists say the Internet is set to become on average 60 times faster than existing networks.

"The scratched glass we've developed is actually a photonic integrated circuit," Eggleton said.

"This circuit uses the 'scratch' as a guide or a switching path for information - like when trains are switched from one track to another - except this switch takes only one picosecond to change tracks. This means that in one second the switch is turning on and off about one million times. We are talking about photonic technology that has terabit per second capacity."

An initial demonstration proved it possible to achieve speeds 60 times faster than existing local networks.


Applications of Highly-Nonlinear Chalcogenide Glass Devices Tailored for High-Speed All-Optical Signal Processing

Ultrahigh nonlinear tapered fiber and planar rib chalcogenide waveguides have been developed to enable high-speed all-optical signal processing in compact, low-loss optical devices through the use of four-wave mixing (FWM) and cross-phase modulation (XPM) via the ultrafast Kerr effect. Tapering a commercial As2Se3 fiber is shown to reduce its effective core area and enhance the Kerr nonlinearity, thereby enabling XPM wavelength conversion of a 40 Gb/s signal in a shorter 16-cm device that allows a broader wavelength tuning range due to its smaller net chromatic dispersion. Progress toward photonic chip-scale devices is shown by fabricating As2S3 planar rib waveguides exhibiting nonlinearity up to 2080 W⁻¹·km⁻¹ and losses as low as 0.05 dB/cm. The material's high refractive index, ensuring more robust confinement of the optical mode, permits a more compact serpentine-shaped rib waveguide of 22.5 cm length on a 7-cm chip, which is successfully applied to broadband wavelength conversion of 40-80 Gb/s signals by XPM. A shorter 5-cm planar waveguide proves most effective for all-optical time-division demultiplexing of a 160 Gb/s signal by FWM, and analysis shows its length is near optimum for maximizing FWM in consideration of its dispersion and loss.
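The figures in the abstract (nonlinearity of 2080 per W per km, 0.05 dB/cm loss, 22.5 cm length) are enough to sketch the nonlinear phase shift such a waveguide can impose, which is what drives XPM wavelength conversion. The 1 W pump power below is my assumption for illustration, not a number from the paper.

```python
import math

# Nonlinear phase shift for the As2S3 rib waveguide figures quoted above.
gamma = 2080 / 1000             # 2080 /W/km converted to /W/m
length_m = 0.225                # 22.5 cm waveguide
loss_db_per_m = 0.05 * 100      # 0.05 dB/cm converted to 5 dB/m
alpha = loss_db_per_m * math.log(10) / 10   # dB/m -> 1/m power attenuation
# Effective length accounts for the pump decaying along the waveguide.
l_eff = (1 - math.exp(-alpha * length_m)) / alpha
pump_w = 1.0                    # assumed 1 W peak pump power
phi_nl = gamma * pump_w * l_eff # nonlinear (Kerr) phase in radians
print(f"effective length: {l_eff * 100:.1f} cm, nonlinear phase: {phi_nl:.2f} rad")
```

At this assumed power the phase shift is a sizeable fraction of a radian over a chip-scale device, which is why the high gamma and low loss matter.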


Speeding up the Internet 100 times is just a stepping stone to a Photonic Chip
The Centre for Ultrahigh bandwidth Devices for Optical Systems (CUDOS) vision is the Photonic Chip.






FURTHER READING
Centre for Ultrahigh bandwidth Devices for Optical Systems (CUDOS) Research

All-optical and nonlinear signal processing

Microstructured and tapered fibre devices

Optical waveguide gratings and slow light

Photonic crystals

Optofluidics

So why use microfluidics in conjunction with microphotonics? The combination of these fields potentially allows one to impart adjustable photonic control in new ways that are highly compact and tuneable. We may also turn the technology around and use photonics to sense fluid properties, which is of increasing importance to medical diagnostics.


July 09, 2008

High magnetism reveals the inner electronic structure of high temperature superconductors

University of Cambridge researchers have discovered where the charge 'hole' carriers that play a significant role in superconductivity originate within the electronic structure of copper-oxide superconductors. A correct and detailed understanding of what is going on in high temperature superconductors will help lead to a correct theory of superconductivity, and could enable the development of room temperature superconductors. This work reveals how magnetism and superconductivity interact. It is part of a series of major discoveries in the field of superconductors this year.

These findings are particularly important for the next step of deciphering the glue that binds the holes together and determining what enables them to superconduct.

Dr Suchitra E. Sebastian, lead author of the study, commented, "An experimental difficulty in the past has been accessing the underlying microscopics of the system once it begins to superconduct. Superconductivity throws a manner of 'veil' over the system, hiding its inner workings from experimental probes. A major advance has been our use of high magnetic fields, which punch holes through the superconducting shroud, known as vortices - regions where superconductivity is destroyed, through which the underlying electronic structure can be probed.

"We have successfully unearthed for the first time in a high temperature superconductor the location in the electronic structure where 'pockets' of doped hole carriers aggregate. Our experiments have thus made an important advance toward understanding how superconducting pairs form out of these hole pockets."


The paper 'A multi-component Fermi surface in the vortex state of an underdoped high-Tc superconductor' will be published in the 09 July edition of Nature.


By determining exactly where the doped holes aggregate in the electronic structure of these superconductors, the researchers have been able to advance understanding in two vital areas:

(1) A direct probe revealing the location and size of pockets of holes is an essential step to determining how these particles stick together to superconduct.

(2) Their experiments have successfully accessed the region betwixt magnetism and superconductivity: when the superconducting veil is partially lifted, their experiments suggest the existence of underlying magnetism which shapes the hole pockets. Interplay between magnetism and superconductivity is therefore indicated - leading to the next question to be addressed.

Do these forms of order compete, with magnetism appearing in the vortex regions where superconductivity is killed, as they suggest? Or do they complement each other by some more intricate mechanism? One possibility they suggest for the coexistence of two very different physical phenomena is that the non-superconducting vortex cores may behave in concert, exhibiting collective magnetism while the rest of the material superconducts.

Myostatin blocking still under hot pursuit

Acceleron, a Cambridge, Mass.-based biotech firm, and other companies are still pursuing myostatin blocking, which can be four times more effective at building muscle than high doses of steroids.

Se-Jin Lee, the molecular biologist at Johns Hopkins University who discovered myostatin in mice in 1992, says it's "disappointing" that MYO-029 is dead, but he still believes blocking myostatin holds promise. What really disappoints Lee is that discussion of a promising treatment for a devastating disease becomes entangled in discussions of doping. The benefits go far beyond Duchenne muscular dystrophy, a disease diagnosed in only 600 American boys a year, to diseases like cancer and AIDS. Such drugs could even have a big effect on the muscle weakening that comes with aging.

"Everybody gets old; everybody is going to lose muscle mass," Lee says. "If you look at the benefit of buying people five more years of independent living, it seems a little out of whack to be worrying about sports records."



Acceleron and some other companies are working on several different drugs that hit myostatin. And Affymax (AFFY), a Palo Alto biotech firm, is working on what may be a cheaper, easier to use version of EPO. These are baby steps, but also reminders that someday, performance-enhancing drugs will be able to really push the limits of what the human body can do--like it or not.

Other drugs and enhancements
PPAR delta drugs Status: Experimental.
Legitimate use: Would fight obesity, heart disease.
Athletic advantage: Mice with the PPAR delta receptor modified can run twice as long as their unmodified brothers and sisters.
Side effects: Unknown, but PPAR drugs to treat diabetes have had unpredictable side effects.

Gene therapy
Status: Experimental.
Legitimate use: Treating genetically inherited diseases.
Athletic advantage: Extra EPO, myostatin or other hormones created by DNA implanted within the body. Would be undetected by drug tests.
Side effects: Unknown. Gene therapy treatments use viruses or other biotechnology to alter DNA; in most attempts, risks have outweighed the benefits.

Robotic Limbs and prosthetics
Status: Early versions in development now.
Legitimate use: Allowing amputees to walk and run.
Athletic advantage: Prosthetics are now good enough that amputee athlete Oscar Pistorius will run in Beijing games.
Side effects: For amputees, an easy decision. But it will be a long time before able-bodied athletes are replacing perfectly good limbs.

Exoskeletons
Status: In development.
Legitimate use: Allowing workers to carry very heavy loads or walk great distances.
Side effects: None, so far.

Second look shows Apollo Moon rocks had water inside them

In a study published today in Nature, researchers led by Brown University geologist Alberto Saal found evidence of water molecules in pebbles retrieved by NASA's Apollo missions.

Mars magma contained as much as 2 percent dissolved water.

"For the past four decades, the limit for detecting water in lunar samples was about 50 parts per million (ppm) at best," explained Hauri. "We developed a way to detect as little as 5 ppm of water. We were really surprised to find a great deal more in these tiny glass beads, up to 46 ppm."

46 tons of water for every million tons would be huge for lunar colonization, but most regolith does not have that concentration; it is found only in the volatile-rich pebbles. The researchers estimated that there was originally about 750 ppm of water in the magma at the time of eruption. That suggests the intriguing possibility that the Moon's interior might once have held as much water as the Earth's upper mantle. But even more intriguing: if the Moon's volcanoes released 95% of their water, where did all that water go? Since the Moon's gravity is too feeble to retain an atmosphere, the researchers speculate that some of the water vapor from the eruptions was probably forced into space, but some may have migrated to the lunar poles. Unless it is very deep, lunar groundwater is unlikely to exist, since the Sun heats most of the Moon's surface to over 200°F (93°C).
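The figures above reduce to simple unit arithmetic; a quick check (all inputs are the article's own numbers):

```python
# ppm by mass converts directly: 46 ppm = 46 g of water per tonne,
# i.e. 46 tons per million tons of the volatile-rich glass beads.
measured_ppm = 46      # water detected in the glass beads
original_ppm = 750     # estimated water in the magma at eruption

grams_per_tonne = measured_ppm
lost_fraction = 1 - measured_ppm / original_ppm   # water lost at eruption
print(f"{grams_per_tonne} g water per tonne; ~{lost_fraction:.0%} lost")
```

The ~94% loss fraction computed here is consistent with the researchers' roughly 95% estimate.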

The findings point to the existence of water deep beneath the moon's surface, transforming scientific understanding of our nearest neighbor's formation and, perhaps, our own. There may also be a more immediately practical application.

"Is there water there? That's important for lunar missions. People could get the water. They could use the hydrogen for energy," said Saal.


A high-powered imaging technique known as secondary ion mass spectrometry revealed a wealth of so-called volatile compounds, among them fluorine, chlorine, sulfur, carbon dioxide -- and water.

Critically, telltale hydrogen molecules were concentrated at the center of samples rather than their surfaces, assuring Saal's team that water was present in an infant moon rather than added by recent bombardment.

If that water in fact came from the Earth, then planetary geologists can be certain that our planet contained water 4.5 billion years ago. That would change the dynamics of models of Earth's formation.

FURTHER READING
Volatile compounds at Wikipedia

In planetary science, volatiles, commonly called ices in the extraterrestrial context, are that group of compounds with low boiling points (see volatile) that are associated with a planet's or moon's crust and/or atmosphere. Examples include nitrogen, water, carbon dioxide, ammonia, hydrogen and methane, all compounds of C, H, O and/or N, as well as sulphur dioxide. In astrogeology, these compounds, in their solid state, often comprise large proportions of the crusts of moons and dwarf planets. In terrestrial geology, the term more specifically refers to components of magma (mostly water vapor and carbon dioxide) that affect the appearance and strength of volcanoes. Volatiles affect the viscosity of the magma, and the tendency to explosive eruptions.


Current information on Wind Power material usage


Per Peterson, a professor at Berkeley, provides information on construction materials for energy; 95% of construction inputs are steel and concrete. This article looks at the most recent wind turbines and finds that wind power's need for large amounts of steel and concrete has not substantially changed from the 1990 figures: 700-1000 tons (not including support structures beyond the tower and base) per MW (nuclear-equivalent power, adjusted for capacity factor) for offshore wind with 5MW turbines. At 840-1250 tons (after 20-25% support structure adjustments), the amount of material needed is at the level of the 1990 wind machines. There is another article on this site that updates the concrete and steel inputs for nuclear reactors. Some high temperature nuclear reactor designs would cut material usage by a lot. Wind material usage can be cut using Kitegen designs, "whale bumps" on the blades for more efficiency, and other design improvements.


Concrete monopole foundation for wind turbines


Enercon's 4.5MW offshore turbine weighs 440 tons (apparently mostly steel). This does not appear to include any support structures or the tower.

The REpower 5M turbine features a rotor diameter of 126 metres and a Top Head Mass (THM; nacelle + rotor) of 430 tonnes [not including tower, foundation and support structures.]

Four or five offshore wind farms, with a total capacity of around 1500 MW, were discussed for Germany over 2007-2011.
It would require investments in the range of around €3.6 billion throughout Germany [assuming on budget], which translates into a job creation volume of between 25,000 and 40,000 ‘man years’. [So US$5B and about 30,000 man years, or 60 million man hours, for 1.5GW; reduce by capacity factor for projects running 2007-2011.]


Mathis argued that future 5-7 MW offshore wind turbines erected in 25-40 metre deep water will require new foundation solutions. If such huge foundations were constructed as steel monopiles, the required diameter would be in the range of 8-10 metres and the total length about 50-60 metres. Utilization of jacket type or tripod type foundations with similar capacity and water depth range will, in his view, result in even higher demands with regard to fabrication, welding complexity and corrosion protection. This points to concrete foundations as the solution. However, the construction of gravity-based concrete foundations requires sophisticated formwork systems and new transport logistics methods to deal with component masses between 3000 and 7000 metric tonnes.

Three substructures were considered for the final selection process:

centre column tripod (CCT);
flat faced tripod (FFT);
OWEC jacket quatropod (OJQ), a four-legged jacket solution.

According to the study a CCT design requires cast nodes to improve fatigue performance, bringing the total mass up to 1080 tonnes. The FFT needs three large 96-inch (243 cm) diameter piles but no cast components, while the substructure mass is 1140 tonnes. Finally the OJQ is based on a design from OWEC Tower A/S, a ‘traditional’ jacket structure adapted for REpower 5M wind turbine use.


The mass of the lightweight structure, including three 72-inch piles for fixing the substructure to the seabed, is approximately 600 tonnes. (For more general information on the Beatrice project see Renewable Energy World November-December 2006.)

So 600-1140 tons for the substructure plus 450 tons for the nacelle and rotor of a 5MW wind turbine (1.5 MW of equivalent nuclear power): 700-1000 tons per MW (nuclear-equivalent power, adjusted for capacity factor) for offshore wind. Land-based turbines could use less material, but there are size limitations on land and the tower must be built higher to get the same wind quality.
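The tons-per-MW arithmetic can be checked in a few lines. The 27% wind and 90% nuclear capacity factors below are assumptions inferred from the 5 MW to 1.5 MW conversion used here; the article does not state them directly.

```python
# Tons of steel and concrete per MW of nuclear-equivalent capacity
# for the 5 MW offshore turbine discussed above.
substructure_tons = (600, 1140)    # range of foundation designs
top_mass_tons = 450                # nacelle + rotor
nameplate_mw = 5.0

# Assumed capacity factors: ~27% offshore wind vs ~90% nuclear
equivalent_mw = nameplate_mw * 0.27 / 0.90      # ~1.5 MW nuclear-equivalent
low = (substructure_tons[0] + top_mass_tons) / equivalent_mw
high = (substructure_tons[1] + top_mass_tons) / equivalent_mw
print(f"{low:.0f}-{high:.0f} tons per nuclear-equivalent MW")
```

This reproduces the roughly 700-1000 tons per nuclear-equivalent MW range quoted in the article.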





Enercon 6MW model has 36 concrete sections

Previously, in-situ concrete (125 m hub height) or steel towers (97 m hub height) were used for the E-112/6 MW. The towers for the E-126/6 MW will be 131 meters tall and made up of 36 concrete segments manufactured at WEC Turmbau Emden GmbH. Once completed, the hub height will reach 135 metres and the overall height an impressive 198 metres.


A diagram of the major component assemblies (8000 parts) is shown, and the 2007 eight page article discusses wind power supply chain issues.

Support structures account for 20-25% of offshore wind material.






Better wind mapping shows at 100 meter elevations 40-45% capacity factors can be found for some sites.


Integrated wind can deal with many issues that have been laid out as problems.

FURTHER READING
2006: Thirty seven Nordex N62 wind turbines (6340 tons total), each with 69 m hub height and 1.3 MW rated power. So 6340 tons for 50 MW of nameplate power, or 16MW equivalent of nuclear power: about 400 tons per MW (nuclear equivalent).

Calculations could be produced using wind turbine design principles

This 2001 8 pager has a table with percentage of materials for different components of wind turbines

2007 article on 3MW turbines

Though wind turbines don't consume fuel, it takes at least 150,000 lb of steel, concrete, and fiberglass to build a single 3-MW turbine. Thus, turbines have a carbon footprint that is laid down before they ever generate a single kilowatt. And detractors point out that steel and concrete are both energy intensive, carbon-emitting industries. There are also networks of roads needed to service wind farms. And wind turbines take land, somewhere between 60 and 300 acres/MW. (For comparison, a 1,000 MW nuclear or coal plant typically occupies only a few hundred acres.)

Blacklight Power covered on CNN Money



The working models in his lab generate 50 kilowatts of electricity - enough to power six or seven houses. But these, Mills says, can be scaled to drive a large, electric power plant. The inventor claims this electricity will cost less than 2 cents per kilowatt-hour, which compares to a national average of 8.9 cents.

This site has recently covered Blacklight Power's announcement of a 50kW prototype generator. The new information is that they have over 20 of those units undergoing testing.

The Controversy and theory is NOT "If Blacklight Power is right then Quantum Mechanics is wrong"
The wikipedia coverage of the Hydrino theory indicates that it may be compatible with the standard theory of relativistic quantum mechanics. H/T to reddit commenter Anodes for pointing out the Hydrino Theory wikipedia entry

One of the main critical papers is the work by A Rathke, A critical analysis of the hydrino model

Jan Naudts of the University of Antwerp supports standard quantum theory rather than Mills' theory, yet his paper states:

A. Rathke has questioned the existence of [the hydrino], claiming that it is incompatible with standard quantum mechanics. All Rathke’s arguments relate to nonrelativistic quantum mechanics. The present paper discusses the problem in the context of relativistic quantum mechanics... The present paper shows that one can find arguments in favour of the hydrino state also in the standard theory of relativistic quantum mechanics.

Another scientist disputing Rathke's analysis, Ronald C. Bourgoin of Edgecombe Community College, published a peer-reviewed paper in the journal Advanced Studies in Theoretical Physics, not only supporting the theoretical possibility of hydrino states but further stating that the general wave equation of quantum mechanics predicts the very same reciprocal energy states as does Mills' theory.


Mills reports that limitations on confinement and terrestrial conditions have prevented the achievement of hydrino states below 1/30, which would correspond to an energy release of approximately 15 keV per hydrogen atom.







Schematic of the Blacklight generator with calorimeter test setup (link to 102 page paper)

While his business has been working on the "BlackLight Process" since its inception almost two decades ago, Mills developed the patented cocktail that enables the reaction - a solid fuel made of hydrogen and a sodium hydride catalyst - only a year ago. Now that the device is ready for commercialization, he says, BlackLight is negotiating with several utilities and architecture and engineering firms, but he won't disclose any partners' names until the deals are finalized.


About 20 of the generators, which look like small copper water heaters turned on their sides, rest on lab benches inside the company's 55,000 square foot headquarters, once a Lockheed Martin facility. BlackLight's 11 scientists barely make a sound as they slip among the cavernous rooms, blue lab coats flapping behind them. The near-emptiness is eerie, but it's also portentous, says Mills: "Within the next two years, we're going to grow to 500, maybe 1,000 employees. This could satisfy a majority of the world's power needs, and the demand is going to be huge."

"He's wrong in so many ways, it's beyond counting," says Robert Park, a professor of physics at the University of Maryland and former spokesman for the American Physics Society. Parks, 77, uses BlackLight as an example of phony physics in his 2002 book, Voodoo Science: The Road from Foolishness to Fraud. He says of Mills, "I don't know of a single scientist of any reputation who takes his claims seriously."

Critics such as Park say the high-profile CEOs on BlackLight's board are following each other over a cliff. He could be right: Both Jordan and Jim Lenehan - a BlackLight investor, senior consultant at hedge fund Cerberus, and former president of Johnson and Johnson (JNJ, Fortune 500) - say they were led to the business by friends. But Lenehan, who does not sit on BlackLight's board, says, "It's no longer a high-risk part of my portfolio. It now has the ability to make a huge difference in the world of power."

Jordan, who earned science degrees from Yale and Princeton, expresses a similar sentiment.

"In the beginning, I thought it was worth putting money into because it was going to be a huge flop or a huge success." he says. "But when they made the breakthrough last fall, I saw the results."

That logic could explain BlackLight's success in garnering investors, despite its lack of scientific approval: While the academic community stresses theoretical backing for a new discovery, the business world is more concerned with practical applications.

Lenehan says, "My point of view is, just do it - generate power. In terms of influencing investors, it's about results."

Jordan agrees: "Theoretically, the bumble bee can't fly - but no one told the bumble bee. Now they're saying this can't be done, but it's happening."

The rest of the world will have to wait for evidence until the fall of 2009, when the business promises to install its cells in power plants.


FURTHER READING
Blacklight generator 102 page paper.

In this study we made specific theoretical predictions and tested them with standard, easily interpretable experiments. The results of spectroscopic, chemical, and thermal data show that new energy states of hydrogen are formed by the reaction of H with catalysts such as Li and NaH. Furthermore, the power and energy balance data demonstrate that this novel reaction of atomic hydrogen, which proceeds with high kinetics and yields by using reagents to generate catalysts such as Li and NaH and to form significantly more stable hydrides and hydrogen molecules, is a new energy source ready for commercialization. The energy scaled linearly and the power increased nonlinearly to easily achieve over 50 kW. Based on the volume of the catalyst and hydrogen fuel, the power density is among the highest known (comparable to or higher than that of internal combustion), and the energy balance is greater than that of any known material on a weight or molar basis. Consequently, the mass balance and cost per unit energy is much lower than that of burning fossil fuels. Furthermore, the process is nonpolluting. Since the identified H2(1/p) byproduct is stable and lighter than air, it cannot accumulate in the Earth's atmosphere.


39 page spectroscopic observation paper

Hydrino study group

Hydrino theory at wikipedia

UPDATE:
New Energy Times, which focuses on cold fusion, indicates that Blacklight Power has not been very successful at getting patents

HP could be producing Memristors in 2009

HP scientists have now successfully engineered control over how the device functions. This means it is now possible to design memristors into integrated circuits that remember information, consume far less power than existing devices and may someday learn from past behavior. EETimes reports that the advance promises to speed development of commercial prototype chips for its RRAM (resistive random-access memory) by next year.

Meanwhile Samsung is pressing ahead with rapid flash SSD drives

Samsung Electronics Co. Ltd. has started volume production of its 1.8- and 2.5-inch multi-level cell (MLC)-based solid state drives (SSD) with a 128 Gigabyte (GB) storage capacity. Mass production of the Samsung MLC-based 64GB SSD also began this month. Samsung will begin producing a 256GB version at the end of this year, and expects sales of SSD units to increase 800 percent between now and 2010. You can expect the new Samsung SSDs to be cheap by comparison to SLC-based SSDs and faster by comparison to traditional laptop hard disk drives, while lasting about 20 times longer than the expected 4-5 year life span of a mechanical spinner.
OCZ is now shipping 128GB SSD drives for $479; this site would expect Samsung pricing to be similar. Until recently, 128GB SSD drives cost a few thousand dollars. Samsung is the largest producer of flash memory. High volume production at Samsung and an 800% increase in units over two years probably means roughly 4 times lower cost per unit alongside 8 times more units, for a doubling of the overall market in dollar terms.

"With engineering control, we can build a device that delivers a specific electrical performance," says Duncan Stewart, principal investigator. "Only then do you get to a point where you can build large integrated circuits."

HP Labs scientists who in April proved the existence of the memristor have made another significant advance toward developing a new type of computer memory that's many times faster than Flash and could lead to analog computers that process information in a manner similar to the human brain.

The researchers, members of the Information and Quantum Systems Lab led by HP Senior Fellow R. Stanley Williams, published their experimental findings in the advance online edition of the July issue of the journal Nature Nanotechnology.

The team conducted its experiments by building a nanoscale memristor switch – at 50 nanometers by 50 nanometers, it is the world's smallest – that contained a layer of titanium dioxide (a chemical commonly used in both sunscreen and white paint) between two nanowires. As its name implies, titanium dioxide typically comprises one titanium atom for every two oxygen atoms.

Scientist Jianhua Yang found that by subtly manipulating the distribution of the oxygen atoms in this layer, he could control how the device functioned. Although other labs have demonstrated switching using similar materials, none have achieved this level of control over the switches.
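The behavior being controlled here can be illustrated with the idealized linear dopant-drift model from the HP team's April Nature paper. The sketch below uses assumed parameter values, not HP's measured device numbers:

```python
import math

# Idealized linear dopant-drift memristor (Strukov et al., Nature 2008).
# All parameter values are illustrative assumptions.
R_ON, R_OFF = 100.0, 16e3   # resistance when fully doped / undoped, ohms
D = 10e-9                   # titanium dioxide film thickness, m
MU = 1e-14                  # dopant (oxygen vacancy) mobility, m^2/(V*s)

w = 0.5 * D                 # state variable: width of the doped region
dt, freq = 1e-4, 1.0        # timestep (s) and drive frequency (Hz)
history = []
for step in range(10000):   # one full 1 Hz drive cycle
    t = step * dt
    v = math.sin(2 * math.pi * freq * t)        # 1 V sinusoidal drive
    m = R_ON * (w / D) + R_OFF * (1 - w / D)    # memristance
    i = v / m
    w += MU * (R_ON / D) * i * dt               # dopant boundary drifts
    w = min(max(w, 0.0), D)                     # clamp to the film
    history.append((v, i, m))
```

Plotting i against v from `history` gives the pinched hysteresis loop that is the memristor's signature, and the resistance at the end of the cycle differs from where it started: that retained state is the "memory."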

Faster, cheaper nonvolatile RAM

A memristive device can operate in both digital and analog modes, each of which has different applications.

In digital mode, it could replace today's solid-state memories (Flash) with much faster and less expensive nonvolatile random access memory (NVRAM). That would enable digital cameras without a delay between photos, for example, or computers that save power by turning off when not needed and then turning back on instantly when needed.

Because it is built at nanoscale, the NVRAM chip would also be denser, giving chipmakers the ability to pack more information into a smaller space.


Computers that learn

Longer term, in its analog mode, the memristor could possibly enable computers that "learn" what you want.

"Any learning a computer displays today is the result of software," says Yang. "What we're talking about is the computer itself – the hardware – being able to learn."

That's not to say the computer would function like a human brain. But it could gain pattern-matching abilities that would let it adapt its user interface based on how you use it. These same abilities would make it ideal for artificial intelligence applications such as recognizing faces or understanding speech.

"When John Von Neumann first proposed computing machines 60 years ago, he proposed they function the way the brain does," says Stewart. "That would have meant analog parallel computing, but it was impossible to build at that time. Instead, we got digital serial computers."

Now it may be possible to build large-scale analog parallel computing machines, he says.


FURTHER READING
Memristor questions answered

The memristor is well suited for FPGA designs.

July 08, 2008

Carnegie Endowment makes conservative prediction of China overtaking the US Economy in 2035


Here is a 16 page briefing from the Carnegie Endowment by Albert Keidel on the economic rise of China

UPDATE: The British Telegraph has an interesting series of articles on life in China now

The very conservative projection described in the table above already underestimates China's economy. It has China at $4 trillion on an exchange rate basis in 2010.
This site estimates that China reaches that level in late 2008 (and is already there if Hong Kong and Macau are included, which they should be as parts of China since 1997 and 1998).

As of July, 2008 :
China's currency is now 6.85 yuan to 1 USD. China's GDP is now $3.85 trillion. Including Hong Kong and Macau China has $4.2 trillion GDP.


Year  GDP (tril yuan)  GDP growth  Yuan per USD  China GDP ($T)  China+HK/Macau ($T)  US GDP ($T)
2007 24.66 11.9% 7.3 3.38 3.7 13.8
Jul08 26.3 6.85 3.83 4.2 Past Germany
Oct08 26.7 6.65 4.0 4.45
2008 27.3 10.2% 6.35 4.3 4.8 14.0
2009 30.1 9.8% 5.62 5.4 5.9 14.2 Pass Japan
2010 33.7 9.5% 5.11 6.6 7.1 14.6
2011 37.0 9.5% 4.64 8.0 8.5 15.0
2012 40.6 9.5% 4.26 9.5 10.0 15.4
2013 44.2 9.0% 3.91 11.3 11.8 15.9
2014 48.2 9.0% 3.72 13.0 13.5 16.4
2015 52.0 8.0% 3.54 14.7 15.2 16.9
2016 56.2 8.0% 3.53 16.7 17.2 17.4 Passing USA
2017 60.4 7.5% 3.38 18.8 19.4 17.9 Past USA
2018 64.2 7.0% 3.20 20.9 21.5 18.4
2019 69.2 7.0% 3.09 23.0 23.6 19.0
2020 74.0 7.0% 3.0 25.2 25.8 19.6
2021 78.4 6.0% 2.9 27.2 27.8 20.2
2022 83.1 6.0% 2.9 29.4 30.0 20.8
2023 87.3 5.0% 2.8 31.5 32.2 21.4
2024 91.7 5.0% 2.8 33.7 34.4 22.0
2025 96.3 5.0% 2.7 36.1 36.8 22.7
2026 101.1 5.0% 2.6 38.7 39.4 23.4
2027 106.1 5.0% 2.6 41.4 42.1 24.1
2028 111.4 5.0% 2.5 44.4 45.1 24.8
2029 117.0 5.0% 2.5 47.5 48.2 25.5
2030 122.8 5.0% 2.4 50.9 51.6 26.3 Close to double USA
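The dollar columns in the table are just the nominal yuan GDP divided by the projected exchange rate. A minimal check against a few of the table's own rows:

```python
# (year, GDP in trillions of yuan, projected yuan per USD) -- values
# taken from the table above
rows = [
    (2007, 24.66, 7.30),
    (2010, 33.7, 5.11),
    (2015, 52.0, 3.54),
]
for year, gdp_yuan, rate in rows:
    gdp_usd = gdp_yuan / rate    # trillions of USD, exchange-rate basis
    print(year, round(gdp_usd, 1))
```

Note that the yuan-denominated GDP column appears to grow slightly faster than the stated real growth rate because it also folds in inflation; the projected yuan appreciation then does the rest of the work in the dollar projection.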



China's inflation and internal pressures are causing Chinese leaders to increase the value of the yuan over the mid and long term

China is engineering a pause in the yuan appreciation to curb speculators who are profiting on the rise of the yuan

Since last October interest rate differentials between dollars and yuan have reversed. The U.S. Federal Reserve aggressively lowered rates just as the Chinese central bank, the People's Bank of China, was pushing up domestic rates to fight inflation. Currently, rates on the Chinese central bank's one-year bills are about 170 basis points higher than comparable U.S. Treasuries.

This has created an arbitrage opportunity that local firms are exploiting on a massive scale, borrowing cheap dollars to substitute for more expensive borrowings in yuan and for local investments. A second factor driving this arbitrage is the wide-spread expectation that the government will either speed up the rise in the yuan's crawling peg or implement a one-off revaluation.


Li Jin (Harvard) and Shan Li (former CEO of Bank of China International) suggested in the Wall Street Journal that China slow currency appreciation and invest funds in the US to help deal with China's inflation and stabilize the US economy

FURTHER READING
Previous economic update on China

Highlights of the Carnegie Endowment economic rise of China

Other Keidel analysis of China

Startups looking to make building green

Calera is a company funded by Vinod Khosla which is trying to make concrete that pulls carbon dioxide from the air instead of emitting it. This would mean a huge reduction in greenhouse gases if fully deployed, since concrete manufacturing is a major source of carbon dioxide.

For each ton of Calera concrete one ton of carbon dioxide is removed from the air. Calera is completing a pilot plant by the end of 2008. They plan to complete a commercial plant by 2010 and have 100 sites within 5 years.

Constructing and operating buildings requires 48% of the energy used in the United States (21% for residential buildings alone); transportation accounts for 27% and industry for 25%.

Cement is a huge culprit of greenhouse gas emissions: the world uses about 2.5 billion tons of cement a year, and its production emits about that many tons of carbon dioxide.
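The potential swing is roughly twice the current emissions figure, since each ton switched both avoids an emission and absorbs CO2. A quick sketch; the one-ton-emitted figure for conventional cement is my assumption (roughly right for Portland cement), not a number from the article:

```python
# Scale of the swing if Calera-style cement displaced ordinary cement.
world_cement_tons = 2.5e9   # annual production, from the article
emitted_per_ton = 1.0       # conventional cement, approximate assumption
absorbed_per_ton = 1.0      # Calera's claim

swing = world_cement_tons * (emitted_per_ton + absorbed_per_ton)
print(f"~{swing / 1e9:.0f} billion tons of CO2 per year at full displacement")
```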


There are many green building companies being funded


New Jersey-based Hycrete, which produces an admixture (a liquid solution) used to waterproof concrete, completed its second funding round in 2006. When mixed into concrete (sand, aggregate, cement and water), the admixture links up with metallic ions and behaves like a hydrophobic solution (like oil), repelling water; it acts as a replacement for the external membranes typically used to keep water from seeping into concrete. Because it doesn't require volatile organic compounds (VOCs) or other harmful chemicals, the corrosion-resistant concrete can safely be recycled and reused in other projects.

Per Peterson information on steel and concrete needed for different energy sources


Per Peterson, a professor at Berkeley, provides information on construction materials for energy; 95% of construction inputs are steel and concrete.


China is building 1250MW AP1000s now, with 1400MW units in the next batch and 1700MW for the ones after that.



Information is mostly from this Per Peterson powerpoint presentation on nuclear energy



Energy from coal: 7.3 million kg of fuel per day for a 1GW plant.


Energy from nuclear fission: 3.2 kg of fuel per day for a 1GW plant.


Energy from nuclear fusion: 0.6 kg of fuel per day for a 1GW plant.
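The coal and fission figures can be roughly reproduced from first principles. The plant efficiencies and coal heating value below are assumptions chosen to be plausible, not numbers from the presentation:

```python
# Reproducing the order of magnitude of the fuel figures above.
# Assumed: 38% efficiency and 30 MJ/kg coal for the coal plant;
# 33% efficiency and ~82 TJ per kg of fissioned U-235 for the reactor.
electric_joules_per_day = 1e9 * 86400        # 1 GWe for one day

coal_kg = (electric_joules_per_day / 0.38) / 30e6
fission_kg = (electric_joules_per_day / 0.33) / 82e12

print(f"coal: {coal_kg / 1e6:.1f} million kg/day")   # ~7.6 million kg
print(f"fission: {fission_kg:.1f} kg/day")           # ~3.2 kg
```

The six-orders-of-magnitude gap between coal and fission fuel mass is insensitive to the exact efficiency assumptions.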


Nuclear safety study from 2004


Nuclear workers compared to other industries


CO2 comparison for different energy sources


FURTHER READING

Nuclear Energy: 1996, 2006, 2016 by Per Peterson

Nuclear Research by Per Peterson

Possible genetic cause of SIDS

The Economist reports on a science paper suggesting that sudden infant death syndrome (SIDS) may have a genetic cause.

Dr Audero’s results suggest that some upset of the serotonin system may be a necessary, but not always sufficient, part of the pattern that leads to SIDS. It will be enough to kill some children, but needs an environmental “boost” in other cases. If research can establish that is true, then it may be possible to screen infants and single out those at risk, so that parents can take suitable precautions. That would be a very good thing indeed.

Sudden infant death syndrome (SIDS) is the biggest killer of babies over one month old in the rich world.

Anton, special purpose supercomputer for molecular simulations

A special purpose supercomputer, Anton, is being made to accelerate the modelling of protein folding and provide a thousandfold increase in performance for complex molecular simulations.

The effort is being led by David E. Shaw, a billionaire computer scientist. In the 1990s, Mr. Shaw was one of the most successful of an elite group of technologists pursuing computer-based trading strategies on Wall Street. Several years ago Mr. Shaw, who is also a major investor in Schrödinger, a chemical simulation software firm, stepped away from day-to-day management of his investment firm, D. E. Shaw & Company. He is now chief scientist of D. E. Shaw Research. The machine could be used to investigate problems of great scientific interest, like the folding of protein molecules, and in the design of drugs based on the simulated biological activity of different molecules.


Note: This is what billionaires and near-billionaires should be doing: funding grand technological and scientific research projects that could create huge advances in civilization's capabilities. Fund high-leverage, high-risk, high-potential, world-changing projects. SENS still needs another billionaire or two, and Robert Freitas and Ralph Merkle need one to enable rapid development of their diamond mechanosynthesis work.

Anton is described in an ACM paper

The ability to perform long, accurate molecular dynamics (MD) simulations involving proteins and other biological macro-molecules could in principle provide answers to some of the most important currently outstanding questions in the fields of biology, chemistry, and medicine. A wide range of biologically interesting phenomena, however, occur over timescales on the order of a millisecond---several orders of magnitude beyond the duration of the longest current MD simulations.

We describe a massively parallel machine called Anton, which should be capable of executing millisecond-scale classical MD simulations of such biomolecular systems. The machine, which is scheduled for completion by the end of 2008, is based on 512 identical MD-specific ASICs that interact in a tightly coupled manner using a specialized highspeed communication network. Anton has been designed to use both novel parallel algorithms and special-purpose logic to dramatically accelerate those calculations that dominate the time required for a typical MD simulation. The remainder of the simulation algorithm is executed by a programmable portion of each chip that achieves a substantial degree of parallelism while preserving the flexibility necessary to accommodate anticipated advances in physical models and simulation methods.




Simulations of processes like the folding of proteins into three-dimensional structures or the interactions between proteins or between a protein and a drug molecule hold out the promise of advancing science and drug development. However, each simulation must be validated by experimental scientists in a laboratory setting. Thus one of the principal advantages of increasing the speed of simulations that now take thousands of hours on the fastest supercomputers is to shorten the time to the laboratory.

Scientists said the real value of Anton might not be known until they find out what the machine can do. “Only after Anton van Leeuwenhoek used his microscope did he see protozoa in the pond water,” said Roger Brent, director of the Molecular Sciences Institute, an independent research laboratory in Berkeley, Calif.

In molecular dynamics (MD), you must divide time into discrete 1-femtosecond time steps.

If the time steps are too long, individual atoms run into each other, get higher energy configurations, and everything becomes unstable. For each individual step, you must compute the interaction between all pairs of particles, determined by molecular force fields. Then you must move each atom a tiny bit and repeat the process a huge number of times.
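The loop described above can be sketched with a toy one-dimensional system; the spring constant and mass below are illustrative assumptions, not a real molecular force field:

```python
# One-dimensional toy MD: a single harmonic "bond" integrated with
# velocity Verlet at 1 fs per step.
DT = 1e-15            # 1 femtosecond timestep, as described above
K = 500.0             # bond spring constant, N/m (assumed)
M = 1.66e-26          # ~10 atomic mass units, kg

def force(x):
    return -K * x     # harmonic restoring force

x, v = 1e-11, 0.0     # start 0.1 angstrom from equilibrium, at rest
e0 = 0.5 * M * v**2 + 0.5 * K * x**2   # initial total energy

for _ in range(10000):                  # 10 picoseconds of simulated time
    a = force(x) / M
    v += 0.5 * DT * a                   # half kick
    x += DT * v                         # drift
    v += 0.5 * DT * force(x) / M        # half kick

e1 = 0.5 * M * v**2 + 0.5 * K * x**2   # energy after 10,000 steps
```

With a 1 fs step the bond's ~36 fs oscillation period is well resolved and total energy drifts by well under a percent; with too large a step the integration becomes unstable, which is why the femtosecond step, and hence roughly 10^12 steps per simulated millisecond, is unavoidable.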

Two approaches to protein-folding simulation are simulating many short trajectories and simulating one very long MD trajectory. While the two approaches are complementary, Shaw's group practices the second, which requires an enormous amount of parallelism. Reaching their goal of simulating a full millisecond requires an enormous increase in speed: 10,000 times more than single processors, and 1,000 times the speed of the best existing parallel implementations. "We are several orders of magnitude from where we need to be," said Shaw.

Shaw's lab has also created specialized MD software, dubbed Desmond. It was developed to run on Anton, but the algorithm can be adapted to run on computational clusters.

Shaw cautioned that we don’t know enough about the accuracy of molecular force fields, and that maybe after 100 or so microseconds, a small inaccuracy in the force field calculation “would lead to a very fast way of getting the wrong answer.”


FURTHER READING
From HPCwire review of the Newport conference

July 07, 2008

The next Bussard IEC fusion reactor could be 100MW size producing net energy

Dr Nebel is talking about a 1.5-meter, 100 MW net power fusion reactor. Dr Nebel has said he is getting good data from the WB-7 test device. He is under a publishing embargo and cannot discuss the data (neutron counts), but he has said the next device might as well be a 100 MW version. This 100 MW version may cost only $20 million to make. The implication is that Dr Nebel and his team are getting very good results. Hopefully this speculation is confirmed in August or September of this year, with results published and next stages funded.

Dr Nebel said: The one you have to worry about is the input power scaling, because that one is related to the plasma losses (or transport). This one answers the question of "How much power do I need to supply to the device to maintain constant Beta?" Theoretical modeling of transport has a much poorer track record than plasma equilibrium has. These scaling laws are where the major risks for the larger device reside.

The major saving grace for the Polywell is that the projected average densities are ~2 orders of magnitude higher than they are in Tokamaks, so the energy confinement times don't have to be all that good. (It's the product of the density and the confinement time that's important.) Our contention is that since our projections for a power-producing device only require a machine [1.5 meters in diameter would in theory be able to produce something around 100 MW of net power], we might as well build the next one in that size range and accept the risk. The machines just aren't all that expensive.

Also, there are a multitude of things that can be done to improve confinement (such as pulse discharge cleaning, pellet injection, etc.) that have been successful in the magnetic confinement program and that can be instituted if our projections fall short. This approach will minimize the development time and lead to lower costs for the overall program.
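Dr Nebel's point that it is the density-times-confinement-time product (the Lawson product) that matters can be put into numbers. Every value below is an illustrative assumption, not a measurement; only the "~2 orders of magnitude" density ratio comes from the quote.

```python
# Back-of-envelope illustration of the density * confinement-time tradeoff.
# All values are illustrative assumptions, not measurements.

tokamak_density = 1e20   # particles/m^3, a typical tokamak core figure
tokamak_tau_e   = 1.0    # seconds of energy confinement (assumed)

# Per the quote, projected Polywell densities are ~2 orders of magnitude higher:
polywell_density = 1e22

# Matching the same Lawson product n * tau_E therefore needs a
# confinement time 100x shorter:
required_tau_e = (tokamak_density * tokamak_tau_e) / polywell_density  # ~0.01 s
```

This is why the quote says the Polywell's "energy confinement times don't have to be all that good."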


The peak fields for the reactor designs (at least for our reactor designs) are in the 5-10 T range. However, these are works in progress.

We have run Gauss meters all over the face of the cubes and through the corners and we don't see any low field regions. The fields peak near the conductors and fall off near the coil centers, as you would expect.




Other Dr Nebel comments of interest:
1. The theory says that you can beat Bremsstrahlung, but it's a challenge. The key is to keep the Boron concentration low compared to the proton concentration so Z isn't too bad. You pay for it in power density, but there is an optimum which works. You also gain because the electron energies are low in the high density regions.

2. The size arguments apply for machines where confinement is limited by cross-field diffusion like Tokamaks. They don't apply for electrostatic machines.

3. The Polywell doesn't have any lines of zero field. Take a look at the original papers on the configuration. See:
Bussard, R.W., Fusion Technology, Vol. 19, 273 (1991)
or
Krall, N.A., Fusion Technology, Vol. 22, 42 (1992).

Furthermore, one expects adiabatic behavior along the field lines external to the device. Thus, what goes out comes back in. Phase space scattering is small because the density is small external to the device.

4. The machine does not use a bi-modal velocity distribution. We have looked at two-stream in detail, and it is not an issue for this machine. The most definitive treatise on the ions is : L. Chacon, G. H. Miley, D. C. Barnes, D. A. Knoll, Phys. Plasmas 7, 4547 (2000) which concluded partially relaxed ion distributions work just fine. Furthermore, the Polywell doesn’t even require ion convergence to work (unlike most other electrostatic devices). It helps, but it isn’t a requirement.

5. The system doesn’t have grids. It has magnetically insulated coil cases to provide the electrostatic acceleration. That’s what keeps the losses tolerable.

6. The electrostatic potential well is an issue. Maintaining it depends on the detailed particle balance. The “knobs” that affect it are the electron confinement time, the ion confinement time, and the electron injection current. There are methods of controlling all of these knobs.


Bussard thought the truncated dodecahedron might be better than the truncated cube of WB-6. The reason: the cusps, the triangular corners of the "cube," are smaller, so the electrons would have a tougher time escaping.


FURTHER READING
Where Dr Nebel originally posted his comments about making a reactor of "that size"

Polywell forum discussion speculating on the cost to make just one 100MW prototype system

$20-50 million depending upon the magnets and power supplies. If you run D-D and don't care about coil longevity (estimated at 1 hour), we can make do with some specially constructed MRI magnets (with water jackets for alpha/neutron cooling). That might be acceptable for initial experimental purposes (360 ten-second runs). Thus $20 million for a big test machine and $50 million for a better big test machine.

More questions from Art Carlson, the critic who was having a productive exchange with Dr Nebel

M Simon notes some problems and challenges for a 100MW version of an IEC fusion reactor.

The "first wall" problem is the hardest of the "we have very little idea" problems. A B11 coating has been tried for ITER; that would be ideal if it works. However, ITER is now looking into diamond coatings, with no mention of boron these days.

Cooling the coils from alpha impingement is hard. But we do know where to start and we do have some tricks.

Some other lesser problems: design for compactness and energy extraction. Power converter designs. Control of reactant flows. Superconducting magnets. Integrated reactor controls. POPS enhancement.



Roger Fox has written a diary on Dr Nebel's work and Dr Nebel's comments, and adds his own speculation

Currently the fuel is "puffed" in gaseous form; there is no carburetor. The fuel ions are puffed in, the plasma lights up, some fusion occurs, and the magnets get very hot, all in under a second. It then takes hours for the magnets to cool down for the next run. Superconducting magnets would solve this problem, but at a much higher cost.

Theory says if you scale up the 35 cm magnets to 2 meters, you will have a 500 MW net power reactor. This scaling theory is unproven. A carburetor also needs to be built, and there is a possibility that slightly different designs could be more efficient.


An introduction to inertial electrostatic confinement (IEC) fusion

IEC fusion uses magnets to contain an electron cloud in the center. It is a variation on the electron gun and vacuum tube used in television technology. The fuel (deuterium, lithium, or boron) is injected as positive ions, which are attracted toward the highly negative central charge and accelerated to speeds sufficient for fusion. Speed and electron-volt energy can be converted into an equivalent temperature; the energies involved correspond to roughly 200 million degrees. The old problem was that a physical grid in the center capped efficiency at about 98%, because ions would collide with the grid. Bussard's innovation was a magnet configuration that the electrons and ions essentially never hit, keeping losses 100,000 times lower: 99.999+% efficiency.
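The electron-volt-to-temperature conversion mentioned above is a one-line formula: a particle energy E in eV corresponds to a temperature T via E = k_B * T, with one electron volt equal to about 11,605 kelvin. A minimal sketch:

```python
# Converting particle energies (eV) to equivalent temperatures (kelvin)
# via E = k_B * T. One electron volt corresponds to ~11,605 K.

K_PER_EV = 11604.5  # kelvin per electron volt

def ev_to_kelvin(ev):
    """Convert a particle energy in electron volts to kelvin."""
    return ev * K_PER_EV

def kelvin_to_ev(kelvin):
    """Convert a temperature in kelvin to electron volts."""
    return kelvin / K_PER_EV
```

For example, the "200 million degrees" figure corresponds to a particle energy of roughly 17 keV, which is in the energy range relevant for fusion.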

Previous update on the inertial electrostatic confinement fusion demonstration project

A review of new funded approaches to nuclear fusion

If IEC nuclear fusion works as well as hoped, then it not only solves energy issues but also provides super space capabilities, with launch costs reduced 1000-fold

Even expensive net power generation would matter: one fusion reactor could burn the fuel of ten regular fission reactors, making all nuclear power cleaner.

Artificial intelligence milestone: Polaris computer begins beating human poker champions

The Second Man-Machine Poker Competition has computers from the University of Alberta playing some of the biggest names in the online poker world: Nick "stoxtrader" Grudzien, Matt "Hoss_TBF" Hawrilenko, and IJay "doughnutz" Palansky.

On July 6, 2008 Polaris completed a come-from-behind victory by posting a decisive win against Matt Hawrilenko and IJay Palansky. Polaris won both sides of the duplicate match to win convincingly.

The results are here


Match   Player           Amount won   Player          Amount won   Difference   Result
Live 1  Nick Grudzien    -$42,000     Kyle Hendon     +$37,000     -$5,000      Draw
Live 2  Rich McRoberts   +$89,500     Victor Acosta   -$39,500     +$50,000     Humans win
Live 3  Mark Newhouse    +$251,500    IJay Palansky   -$307,500    -$56,000     Polaris wins
Live 4  Matt Hawrilenko  -$60,500     IJay Palansky   -$29,000     -$89,500     Polaris wins



It was just in April 2008 that a computer started to become competitive with humans in 9x9 Go.

Checkers (8x8) was weakly solved in April 2007 by the University of Alberta. They are the same group that built the Polaris poker bot.

EE Times reports that the University of Alberta group said it expects to be asked for rematches by the vanquished pros as well as by other poker experts who will claim the win by Polaris was a fluke. "Even after Deep Blue beat Kasparov, there were still some skeptics, and I think the same is true here," said Bowling. "Over the next year or so there are going to have to be several rematches before everyone is convinced that humans have been surpassed by machines in poker."

Bowling's group plans to expand Polaris beyond its current limitations, enabling it to play more complicated poker games than its current heads-up, hold-em version. They also plan to expand efforts to apply the poker-playing algorithms to useful applications.

"The techniques we are devising have broad applications outside of poker," said Bowling. "For instance, wireless sensor networks are exploring one of our poker-like algorithms to lay out sensors in buildings in a way that yields better understanding of how heat flow patterns affect efficiency."

One algorithm, called counterfactual regret minimization, monitored the outcome of hands lost by Polaris and what could have been done to change the outcome. Polaris could then watch for similar circumstances and adjust more effectively.
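The "what could have been done to change the outcome" idea can be sketched with regret matching, the core update rule inside counterfactual regret minimization, applied to a toy game. This is an illustrative simplification, not Polaris's actual implementation; the function names and the rock-paper-scissors setting are my own.

```python
# Regret matching on a toy game: accumulate, for each alternative action,
# how much better it would have done, then play actions in proportion to
# their positive cumulative regret. Not Polaris's actual algorithm.

# payoff[a][b] = payoff to the learner for action a vs opponent action b
# actions: 0 = Rock, 1 = Paper, 2 = Scissors
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def regret_matching(regrets):
    """Mixed strategy proportional to positive cumulative regrets."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total <= 0:
        return [1.0 / len(regrets)] * len(regrets)
    return [p / total for p in pos]

def learn_vs_rock(iterations=1000):
    """Train against an opponent who always plays Rock; the average
    strategy shifts toward Paper, the best response."""
    regrets = [0.0, 0.0, 0.0]
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(iterations):
        strat = regret_matching(regrets)
        for i in range(3):
            strategy_sum[i] += strat[i]
        opp = 0  # opponent's fixed action: Rock
        # regret = how much better each alternative action would have done
        expected = sum(strat[a] * PAYOFF[a][opp] for a in range(3))
        for alt in range(3):
            regrets[alt] += PAYOFF[alt][opp] - expected
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]
```

In full counterfactual regret minimization these regrets are tracked per information set (per poker situation), which matches the description of Polaris watching for "similar circumstances" and adjusting.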


FURTHER READING
Other press coverage of the human computer poker matches

Game complexity is described here

China wants 100 Westinghouse AP1000 operating or under construction by 2020

China wants to have 100 of Westinghouse Electric Co.'s nuclear reactors in operation or under construction by 2020, more than double what was anticipated. The Westinghouse AP1000 is being scaled up to 1700 MW: some of the units already being built for China are 1250 MW designs, which will be followed by 1400 MW designs and then the 1700 MW versions. If China follows through on these and other nuclear plans, it should have 200 GW of nuclear power completed by 2025, double what the USA has now.

H/T Idaho Samizdat

Chinese officials shared those plans with Westinghouse during a mid-May meeting.
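The rough arithmetic behind these capacity figures, using only the unit count and ratings quoted above (the 200 GW target also counts China's other, non-Westinghouse, nuclear plans):

```python
# 100 AP1000-family units at the quoted ratings give 125-170 GW;
# the 200 GW / 2025 figure also includes China's other nuclear plans.

units = 100
low_estimate_gw = units * 1250 / 1000   # if every unit were the 1250 MW design
high_estimate_gw = units * 1700 / 1000  # if every unit were the 1700 MW design
```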

FURTHER READING
Uranium reserves have increased 17% since the last Red Book report. The new worldwide estimate of economical reserves is now around 5.5 million tonnes of uranium. The category of uranium that could be expected to be found based on the geologic characteristics of known resources has grown by 500,000 tonnes to 10.5 million tonnes. The data comes from Uranium 2007: Resources, Production and Demand (often known as the Red Book), published every two years by the OECD Nuclear Energy Agency (NEA) and the International Atomic Energy Agency (IAEA).

IAEA projections for the future of nuclear power see it expanding from 372 GWe today to 509-663 GWe by 2030. Such growth would increase uranium demand from 66,500 tonnes per year to between 94,000 and 122,000 tonnes. The NEA concluded that "currently identified resources are adequate to meet this expansion," noting that advanced reactors and the reprocessing and recycling of uranium "could increase the long-term availability of nuclear energy from a century to thousands of years."
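A back-of-envelope check of the Red Book numbers quoted above, dividing identified reserves by the demand figures:

```python
# Years of supply implied by the quoted Red Book figures.
identified_tonnes = 5.5e6   # identified economical reserves (tonnes)
demand_now = 66_500         # tonnes per year today
demand_2030_high = 122_000  # high end of the 2030 demand projection

years_at_current_demand = identified_tonnes / demand_now       # ~83 years
years_at_2030_high = identified_tonnes / demand_2030_high      # ~45 years
```

That ~83-year figure is consistent with the NEA's "a century" characterization, before counting the additional 10.5 million tonnes of expected resources or any reprocessing and recycling.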

Russia is accelerating its nuclear reactor plans and construction as well

France will be ordering a second new nuclear reactor

Support for more nuclear power in Europe now almost equals opposition, and over one third of opponents would drop their opposition given an understanding of waste solutions.

China is also planning to factory mass produce smaller high temperature reactors

This continues to show that Amory Lovins is full of crap about any nuclear illusion.

The nuclear build rate is accelerating around the world

The supply chain issues are being addressed, such as the limitations on large forgings

Constructing a lot of nuclear power plants is not resource constrained

Staffing and training issues can be resolved with better management and more automation

Nuclear power is an important part of any sensible energy plan