Pages

November 15, 2008

California Pollution and the need for Nuclear Power and Renewables

1. Air pollution from fossil fuel use is costing California $28 billion per year and 3800 lives per year.

It increases healthcare costs and kills more people each year (3,812 premature deaths) than the 9/11 attacks did.

2. The EPA's (Environmental Protection Agency) recent decision, its likely future decisions, and President-elect Obama's likely new federal energy plan will kill coal over time. California will need to replace 16.6% of its electricity (about 50 billion kWh) plus handle increased power needs from a growing population.

California's population is growing and energy demands will be increasing even with efforts at conservation and efficiency. Increased electricity demand will also come from more electric and hybrid cars. California will need more renewable power and more nuclear power.

3. Getting to 33% renewable energy sources by 2020 would cost California $60 billion.

4. $28 billion would buy two nuclear power plants even at the highest cost estimates, and that amount would be saved by PG&E not having to buy expensive spot-market power. Uprating or expanding the current plants could be faster and more economical for initial expansion.

California's energy almanac shows the current energy situation.


California's total energy breakdown shows that renewable energy is still tiny.







Details on California Air Pollution
The US Environmental Protection Agency (EPA) has classified the South Coast Air Basin (SoCAB), which includes Los Angeles, Orange, Riverside and San Bernardino counties, as an extreme nonattainment area for ozone. The San Joaquin Valley Air Basin (SJVAB) also is designated an extreme nonattainment area for ozone. Both air basins are classified as serious nonattainment areas for PM2.5.

Between 2005 and 2007, ambient ozone levels in the San Joaquin Valley exceeded the health-based 8-hour National Ambient Air Quality Standard (NAAQS) on 112 to 139 days a year, while in the South Coast Air Basin exceedances occurred on 115 to 120 days. Fine particulate levels would need to fall by more than 50% to meet the maximum 24-hour standard, and annual average concentrations would need to drop by nearly 30%. "These health-based standards will be very difficult to achieve," the authors note.

Pollution sources in the two areas vary, but fuel combustion, including heavy-duty diesel truck exhaust, dominates both regions. Exposure to air pollution causes premature death, hospitalizations and respiratory symptoms, limiting a person's normal daily activity and increasing school absences and loss of workdays, said the researchers. The cost reflects the impact these health problems have on the
economy.

Each year, the life- and health-threatening levels of pollution cause the following adverse health effects for the two basins:

Premature deaths among those age 30 and older: 3,812
Premature deaths in infants: 13
New cases of adult onset chronic bronchitis: 1,950
Days of reduced activity in adults: 3,517,720
Hospital admissions: 2,760
Asthma attacks: 141,370
Days of school absence: 1,259,840
Cases of acute bronchitis in children: 16,110
Lost days of work: 466,880
Days of respiratory symptoms in children: 2,078,300
Emergency room visits: 2,800

November 14, 2008

World and China Economies in 2009 and Energy and Infrastructure Going Forward


The Economist estimates China's past and current growth. It also examines China's $600 billion stimulus effort.

Any analysis of China’s growth prospects is clouded by the widely held belief that the government smoothes its GDP numbers and always overstates growth during economic downturns. The chart plots China’s official growth rate against an alternative estimate calculated by Dragonomics from expenditure data (ie, investment, household spending and exports). This estimate shows much bigger swings than the politically smoothed official numbers.

Although China’s planned fiscal expansion is still vague, it promises, if it is implemented and it works, to save the economy from a hard landing. And if stronger domestic demand sucks in more imports of raw materials and infrastructure-building machinery, that is the best way China can help the rest of the world.

Most economists think the stimulus package will be enough to keep growth at 7.5-8% for 2009 as a whole.


Nouriel Roubini, a professor at New York University's Stern School of Business who has so far been accurate in predicting the scale of the current downturn, indicates that the worst is not over. He believes the US could see -4% GDP growth in 2009 and China 6% GDP growth before the effect of China's stimulus.

Robert Zubrin blames the current economic crisis on oil and OPEC and suggests the solution is enabling oil alternatives with flex fuels.

China's leaders are talking up alternative energy cars.




China and India Out to 2020
The proportion of intra-regional trade in East Asia grew from 40% in 1980, to 50% in 1995, to 60% today. The strong dependency on the US market (single market dependency) of the 1980s and 1990s is rapidly diminishing.

After Japan and the newly industrialised countries (NIC) of Northeast and Southeast Asia, which managed to break out of the "third world" in two generations, China and India have been in a phase of remarkable expansion since the 1980s and 90s. With 33% of the world's population, their share of global gross domestic product (GDP), calculated in terms of purchasing power parity (PPP), has risen from 3.2% and 3.3%
respectively in 1980, to 13.9% and 6.17% in 2006 (while global GDP has tripled). At the same time the GDP (PPP) per inhabitant, a more subtle measure, has increased by a factor of 16 in China ($419 to $6,800) and by a factor of five in India ($643 to $3,490). The share of global GDP produced by Asia as a whole, currently 34%, should reach nearly 45% in 2020: 20% for China, 9% for India and 6.2% for Japan (the total share for "emerging" regions is estimated between 55% and 60%) [World Bank and IMF data]
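The "factor of 16" and "factor of five" per-capita multiples follow directly from the dollar figures quoted above; a quick sanity check:

```python
# Check of the per-capita GDP (PPP) growth factors quoted above,
# using the 1980 and 2006 figures from the text.
china_1980, china_2006 = 419, 6_800
india_1980, india_2006 = 643, 3_490

china_factor = china_2006 / china_1980   # roughly 16x
india_factor = india_2006 / india_1980   # roughly 5x

print(f"China: {china_factor:.1f}x, India: {india_factor:.1f}x")
```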

Yangtze River Delta Could Be US$3-5 Trillion by 2020

Wang Fanghua, a scholar from Shanghai Jiao Tong University, forecast that the delta's GDP would hit 15.95 trillion yuan by 2020, based on an annual growth rate of 11 percent. If the yuan appreciates to 5 yuan per US dollar, that forecast would be US$3.2 trillion; if it appreciates to 4 yuan per US dollar, about US$4 trillion [roughly the current size of China's economy, which is about the same as Germany's]. As the region moves toward modernization, a plan to build a high-speed railway network is brewing.
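The exchange-rate scenarios are simple division over the 15.95-trillion-yuan forecast; a quick check of the conversion:

```python
# Converting the 15.95-trillion-yuan 2020 forecast to US dollars
# under the two appreciation scenarios mentioned in the text.
gdp_yuan_trillions = 15.95

for rate in (5.0, 4.0):                      # yuan per US dollar
    gdp_usd = gdp_yuan_trillions / rate
    print(f"At {rate:.0f} yuan/USD: US${gdp_usd:.2f} trillion")
```

This reproduces the US$3.2 trillion and roughly US$4 trillion figures in the text.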

According to the plan by the Chinese government, the Yangtze River Delta will build five major railways, including three inter-city railways, namely, Shanghai-Nanjing railway, Nanjing-Hangzhou railway and Shanghai-Hangzhou railway.

China Demographics and Current Cash
China demographics are discussed.

China is also expected to see its largest population mobilization when 300 million people enter urban centers in the next two to three decades.


World Bank thinks China is in a good position

China's cash reserves are estimated at more than $3,000bn – 70% of the world's total reserves – compared with $800bn in 2000.


China's Cars and Trains
Ou Guoli, a professor at Beijing Jiaotong University, said it was necessary to develop subways in Beijing as it was one of the best ways to ease traffic congestion.

As car-ownership rapidly increases, Liu hinted that the city was considering containing growth. He did not elaborate.

He said currently Beijing has 3.4 million cars and if there is no restriction, the number will exceed 4 million in three years.

Guangzhou is also expanding its subways.

In other infrastructure building, work started in January 2008 on a 1,300 km line between Beijing and Shanghai that, when completed in five years' time [2013], will reduce rail time between the two cities from ten hours to five, and thus be a competitive alternative to flying.

China had 78,000km of track at the end of 2007. The original plan, published in 2004, was to increase this to 100,000km by 2020. Last October this was revised to 120,000 km (and officials now say the target will be met by 2015). Even sticking to the 2020 target, this will mean laying 60% more track in the next dozen years than was built since the start of the economic reform programme 30 years ago. Huang Min, the Ministry of Railways' chief economist, says that by 2020 the railway system's freight-handling capacity should be greater than demand. At present, he says, it can handle only 40%.
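A back-of-envelope reading of the track figures (the 60% comparison is the ministry's claim; the implied historical figure here is an inference from it):

```python
# Back-of-envelope reading of the rail-network figures above.
track_2007 = 78_000      # km of track at the end of 2007
target_2020 = 120_000    # km, the revised target

new_track = target_2020 - track_2007     # 42,000 km to be laid
# The text says this is 60% more than was built in the previous
# 30 years, which implies roughly this much track laid since the
# start of the reform programme:
implied_prior_build = new_track / 1.6    # about 26,000 km

print(new_track, round(implied_prior_build))
```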

Mr Huang reckons that railway expansion will bring down logistics costs, which he says amount to 18% of GDP in China compared with 10% in America. It will also help reduce pollution, he says, since fewer polluting lorries will be needed.

China's stimulus plan will boost and accelerate existing infrastructure projects.

Spending on priority areas such as railway and power grid expansion can now be front-loaded to support economic growth. To cover an expected fiscal deficit in 2009, the government is likely to increase its government bond issuance by a wide margin.

November 13, 2008

The Third Tsunami of Computing – Web 2.0 is irrelevant

By Alvin Wang

People have been asking what Web 2.0 is and what Web 3.0 will be. Is Web 2.0 Ajax or social networking? I would propose that the question is completely irrelevant. The current Internet is an application of the next generation of computer architectures. That is where the lasting advantages will appear. To explain this, it is necessary to look at historical trends.

20 years ago, at the dawn of the microcomputer age, there was talk about three computer architectures: mainframes, minis, and microcomputers. Each stage had its winners:

Mainframes – IBM, Oracle

Minicomputers – DEC, Prime

Microcomputers – Microsoft, Intel

We can look back at this and see that the minicomputer companies are all dead. The Internet came and everyone assumed that this was the next wave. From that, I created this chart:



There is a temptation to say that the Internet is cool but let's jump to the next big thing after the Internet. That would be a mistake; the analysis is too simplistic.

Looking back, we can see many waves. Waves come and go. The question is: where are the Tsunamis? A Tsunami will alter the landscape and devastate unprepared industries. There is no fighting a Tsunami; you must go with it. The question is how to define a Tsunami and what its characteristics are.

Rule 1 – it is almost impossible to predict the outcome even if you are the person starting it.

The first computing Tsunami was the mainframe age. IBM ruled the mainframe age. In 1943, IBM's CEO Thomas Watson said, "I think there is a world market for maybe five computers." Obviously he was wrong.



Rule 2 – One great feature does not make a Tsunami.

Minicomputers were smaller and cheaper than mainframes. They needed fewer system administrators, but they still needed some. There was no way for ordinary people to load their own software. There was no quantum leap.



Rule 3 – Tsunami’s are hard to create. It rarely works on the first try.

It has been said that the VisiCalc spreadsheet started the PC age. VisiCalc was replaced by Lotus 1-2-3, and Lotus was wiped out by Excel.




Rule 4 – Only true tsunamis provide lasting advantages.

Mainframes have been called dead for decades. Minis are long gone, but mainframes are alive and well. Microsoft Windows and Intel processors are alive and well into the Internet age.



Corollary 4 – You can make money at an intermediate stage. Just sell out.

Lotus Development was sold for $3.5 billion. Bebo recently sold for $850 million. You can ride a minor wave to major dollars. Just take them.

20 years ago, at the dawn of the microcomputer age, there was talk of three computer architectures: mainframe, mini, and microcomputer. That was irrelevant. The second Tsunami was GUI-based home computing.

There is a temptation to anoint every new innovation as a game changer, a Tsunami. The Internet was not a game changer for IT. It was a precursor to true distributed computing, much as VisiCalc was a precursor to the PC age. The dotcom era is littered with the carcasses of companies that thought the old rules did not apply. Ajax, social computing, software as a service, etc. are all false waves.


The 3rd Tsunami of Computer Architecture

In violation of Rule #1 of Tsunamis, I think that I have seen the first product of the 3rd Tsunami of IT. It is less than 6 months old, so there are still a lot of bugs and it needs a lot more features; Windows Vista has a lot more features than Windows 1.0 did. It is Google App Engine.

Although Google has been using it internally for years, App Engine was made available to the public in April 2008. There are problems with it, but it makes it possible for a single programmer to write an application for the entire Internet. There is no provisioning, load balancing, or any of the other thousand and one tasks needed to set up a popular Internet site today.
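App Engine's original Python runtime served applications through ordinary web-request handlers. As an illustration of how little server code a hosted application needs when the platform handles the provisioning, here is a minimal WSGI "hello" app in standard-library Python (a generic sketch, not App Engine's actual webapp API):

```python
from wsgiref.util import setup_testing_defaults

# The entire application a developer writes; the hosting platform
# supplies provisioning, load balancing, and routing around it.
def application(environ, start_response):
    body = b"Hello, entire Internet"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Local smoke test: invoke the app directly with a synthetic request.
environ = {}
setup_testing_defaults(environ)
statuses = []
response = b"".join(application(environ, lambda s, h: statuses.append(s)))
print(statuses[0], response)
```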

The Internet, Ajax, wikis, and social computing are all pieces of the puzzle. The final piece that changes everything is super virtualization. The ability to make many small computers behave like one large one, in a scalable and transparent way, will make creating software-as-a-service systems simple.


Creating a Tsunami is complex and costly. Google has several billion dollars invested in its cloud and will probably invest several billion more before it is complete. There is a market for a super virtualization operating system.

Of course, I could be wrong.  It is rule #1 for a reason.


Identifying a Tsunami Informs Decisions

We have described the true paradigm shifts, the Tsunami changes. There can also be powerful but less enduring change that is still highly profitable. Even popular fads or sites with fleeting levels of great popularity, such as Friendster, can be highly profitable.

Having a clearer understanding and assessment of the true scope and nature of a new technology will help with the formulation of proper business strategies.

The PC industry had many profitable areas such as processors, operating systems and applications. Each area became segmented and had multiple areas where a company could be successful. Knowing that something was PC Industry big would provide context for what to expect and how to respond.

Red Cameras: Modular, Upgradable, Breakthrough Resolution, the Future of Cameras



The Red Camera Company has already made digital video cameras that are better than 35 mm film cameras at one-tenth the cost.

The Red Camera Company has also announced the Scarlet and Epic [Epic is the higher-end] lines of video cameras. You are able to upgrade EVERYTHING as technology advances: every component, on an individual level, can be upgraded and improved. The RED ONE is a modular system that can be upgraded; Scarlet and Epic are completely modular and upgradeable in every way.



RED CONFIGURATIONS

Sensors will range from 4.9 megapixels up to 261 megapixels, and video resolution will range from 3,000 lines (almost three times the lines of high definition, and 7 times the overall high-def resolution) to 28,000 lines (which goes beyond Hi-Vision, also called ultra high definition, at 7,680 × 4,320 pixels).
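The jump from "almost three times the lines" to "7 times the resolution" is because pixel count scales with the square of the line count (assuming the aspect ratio is held constant):

```python
# Line count vs. total pixel count for the low-end Red sensor
# against 1080-line high definition.
hd_lines = 1080
red_lines = 3000

line_factor = red_lines / hd_lines   # ~2.8x the lines
pixel_factor = line_factor ** 2      # ~7.7x the pixels

print(f"{line_factor:.1f}x lines, {pixel_factor:.1f}x pixels")
```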



Prices range from $2,500 to $55,000. The high-end cameras will have no competitors in terms of resolution, and even the $55,000 price is a lot less than the 35 mm film cameras that are currently used. Even the "low-end" video cameras push what has been available in digital video camera resolution.





The Red Sensors
The Epic and Scarlet cameras at the Red Camera site








Here is the Scarlet camera



Telescoping Carbon Nanotubes Can Make a Flash Memory Replacement


Researchers at the University of Nottingham have used carbon nanotubes to make fast non-volatile memory. (H/T Sander Olsen)

If one nanotube sits inside another — slightly larger — one, the inner tube will ‘float’ within the outer, responding to electrostatic, van der Waals and capillary forces. Passing power through the nanotubes allows the inner tube to be pushed in and out of the outer tube. This telescoping action can either connect or disconnect the inner tube to an electrode, creating the ‘zero’ or ‘one’ states required to store information using binary code. When the power source is switched off, van der Waals force — which governs attraction between molecules — keeps the inner tube in contact with the electrode. This makes the memory storage non-volatile, like Flash memory.
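The write/read cycle described above can be caricatured as a toy state machine (purely illustrative; the class and method names are invented and no physics is modeled):

```python
# Toy model of the telescoping-nanotube memory cell described above.
# The inner tube's position (extended = touching the electrode = 1,
# retracted = 0) is the stored bit; it survives power loss because
# van der Waals attraction holds the tube in place.
class NanotubeCell:
    def __init__(self):
        self.extended = False  # inner tube retracted: stores 0

    def write(self, bit):
        # Applying power telescopes the inner tube in or out.
        self.extended = bool(bit)

    def power_off(self):
        # Nothing changes: van der Waals force pins the current state,
        # which is what makes the memory non-volatile.
        pass

    def read(self):
        # 1 if the inner tube contacts the electrode, else 0.
        return 1 if self.extended else 0

cell = NanotubeCell()
cell.write(1)
cell.power_off()
print(cell.read())  # still 1 after power loss
```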





Researchers from across the scientific disciplines will be working on the ‘nanodevices for data storage’ project, which is funded by the Engineering and Physical Sciences Research Council. Colleagues from the Schools of Chemistry, Physics and Astronomy, Pharmacy and the Nottingham Nanotechnology and Nanoscience Centre will examine the methods and materials required to develop this new technology, as well as exploring other potential applications for the telescoping properties of carbon nanotubes. These include drug delivery to individual cells and nanothermometers which could differentiate between healthy and cancerous cells.

Dr Elena Bichoutskaia in the School of Chemistry at the University is leading the study.


A June 2008 paper on carbon nanotube based storage devices.



FURTHER READING
Research by Elena Bichoutskaia's group.

November 12, 2008

Ten Times Improvement in Affordable Jet Engine Capability by 2017


The GE90 engine is an example of improving engine affordability along with better performance.

Phase 2 and 3 of the Versatile Affordable Advanced Turbine Engines (VAATE) program are being funded. The VAATE program is structured in three phases to achieve 4X performance/cost by 2009, 6X performance/cost by 2013 and 10X performance/cost by 2017.

The Adaptive Versatile Engine Technology (ADVENT) program is a 5-year project that aims to produce a revolution in jet engine design. Project ADVENT is actually the flagship effort under the Versatile Affordable Advanced Turbine Engines program, or VAATE.

A single contractor will be selected in 2009 to carry out the Phase 2 effort. This phase covers work from engine-detailed design through Technology Readiness Level 6, which signifies it is ready for a full-up, operational test in a relevant engine environment. Engine demonstrator testing would occur in 2012.


By 2017 the military user will realize a factor of ten (“10X”) improvement in turbine engine-based propulsion system affordable capability. “Affordable capability” is defined as the ratio of propulsion system capability to cost. “Capability” in this context measures technical performance parameters including thrust, weight, and fuel consumption. “Cost” quantifies the total cost of ownership, and includes development, procurement, and life cycle maintenance cost. These improvements are to be realized relative to a baseline representative of year-2000 state-of-the-art systems.


Examples of goal factors for the large turbofan/turbojet class include:
• 200% increase in engine thrust-to-weight ratio (a key jet engine design parameter)
• 25% reduction in engine fuel consumption (and thus fuel cost)
• 60% reduction in engine development, procurement, and life cycle maintenance cost
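Since "affordable capability" is capability divided by cost, the goal factors above compound. One naive way to combine them (an illustrative composite, not the program's official figure of merit):

```python
# Naive composite of the large-turbofan goal factors listed above.
thrust_to_weight = 3.0   # 200% increase -> 3x the baseline
fuel_burn = 0.75         # 25% reduction in fuel consumption
cost = 0.40              # 60% reduction in total ownership cost

# Treat capability as thrust-to-weight improved further by the
# fuel-burn reduction, then divide by the cost index.
capability = thrust_to_weight * (1 / fuel_burn)   # ~4x
affordable_capability = capability / cost         # ~10x

print(f"{affordable_capability:.0f}X")
```

Under this rough combination, the three goal factors land almost exactly on the program's 10X target.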




Followup to the Integrated High Performance Turbine Engine Technology (IHPTET) programme

The IHPTET programme was completed in 2005; it demonstrated a 100 per cent increase in the thrust-to-weight ratio of turbofan engines relative to their counterparts of a generation ago. It showed that a 35 per cent decrease in production and maintenance costs is achievable in today's manufactured products. The results have been fed into many contemporary engine projects. The completion of IHPTET Phase I in 1991 -- after it had successfully demonstrated a 31 per cent improvement in combat engine thrust-to-weight ratios -- led, for example, to the supercruise capability of the F-22.


VAATE Focus Areas

The four VAATE Focus Areas are: (1) Versatile Core, (2) Intelligent Engine, (3) Durability and (4) Engine-Airframe Integration.

The Versatile Core Focus Area concerns the most fundamental part of a gas turbine propulsion system, the engine “core.” Within the core, engine pressure, temperature, and rotational speed reach maximum value, as do the resultant thermodynamic and structural design requirements. The core is the heaviest, most complex, and highest cost component of the propulsion system and thus a component where technology advancement has great payoff. In addition to fuel consumption, thrust, and emissions improvements, Versatile Core technologies will reduce engine cost by allowing engines optimized for different applications to be built around identical core hardware. In one scenario, the engines for a large subsonic transport aircraft would have significant parts commonality with that for a high-performance supersonic fighter, thereby spreading nonrecurring costs over a larger customer base. Such explicit emphasis on dual-use capability will increase competitiveness of U.S. products in the demanding civil market.

The Intelligent Engine Focus Area concerns achieving the maximum utility from the engine through improved engine control systems, advanced prognostics and health maintenance, and integration of the engine, airframe, and power management subsystems. Advanced engine architectures utilizing pulse detonation combustion or hybrid gas turbine/fuel cell concepts will also be examined. By focusing on the Intelligent Engine area, VAATE allows capture of benefits that can be realized only through air vehicle system-level optimization of propulsion and power architectures and hardware.

The Durability Focus Area concerns reducing engine maintenance and part replacement costs by doubling component life while providing a significant increase in hot-time capability. Durability improvements are pervasive and not only benefit future systems, but also can often be retrofitted into legacy aircraft engines. An additional aim of this Focus Area is to prevent component failures, increase engine life and reliability, enhance reparability, and improve system readiness.

Engine-airframe integration technologies are key in attaining the significant cost and weight reductions required in order to achieve the VAATE tenfold goal.

Whereas IHPTET focused on bottom-up component technologies for engine performance improvement, VAATE will prioritize investments by emphasizing a top-down, air vehicle system-level approach.


Specific power and specific fuel consumption are, in general, improved by increasing pressure ratios and turbine inlet temperatures. As stronger, lighter-weight materials become available, and as more precise temperature measurement and control become possible (through developments in pyrometry, electrical controls and turbine cooling technology), increased pressures and temperatures are forecast.

Desired Results

Future scenarios envision a responsive, lethal, survivable force involving diverse platform requirements such as global strike, uninhabited air vehicles, advanced stealth combat, high Mach cruise, low-cost access to space, and Vertical/Short Take Off and Landing (V/STOL). VAATE will provide these systems with multiple benefits, including increased range, decreased logistics footprint, increased readiness, improved noise, emissions, and observability (stealth), and high speed endurance. For a future U.S. Air Force structure, specific benefits of VAATE-class technology (over that of a year-2000 state-of-the-art baseline engine) include: a 100% improvement in range for a manned fighter; a 200% improvement in range-payload per unit cost for a global reach transport; and a staggering $200+B reduction in life cycle cost for the entire future force structure.

Many mission requirements of these future weapon systems simply cannot be achieved without propulsion advancements. For example, a responsive strike aircraft should have twice the range at half the aircraft unit cost of current systems. Certain Unmanned Combat Air Vehicle (UCAV) concepts require 2.5 times the mission radius or 3 times the mission persistence (loiter time) of today’s manned vehicles. For access to space, a fuel-efficient, on-demand turbine engine accelerator up to a speed of Mach 4+ is required. Such capability does not exist today. For multi-role mobility, a future aircraft must be capable of Short Take-Off Vertical Landing (STOVL) with a 2-to-4 times mission radius increase over today’s conventional take-off aircraft.


FURTHER READING
Improving the Efficiency of Engines for Large Nonfighter Aircraft (2007)

Oak Ridge National Labs Cray XT5 is the Fastest Computer for Open Science and NEC's SX-9 Supercomputer


The latest upgrade to the Cray XT Jaguar supercomputer at the Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) has increased the system's computing power to a peak 1.64 “petaflops,” or quadrillion mathematical calculations per second, making Jaguar the world’s first petaflop system dedicated to open research. Scientists have already used the newly upgraded Jaguar to complete an unprecedented superconductivity calculation that achieved a sustained performance of more than 1.3 petaflops.

In the Wikipedia timeline of supercomputers, a Cray supercomputer has not been the fastest computer in the world since 1989, when Florida State University's ETA10-G/8 bumped off the DOE's Cray-2/8. Cray supercomputers fairly consistently dominated the supercomputer scene from 1976 to 1988.

Separately, Japan's NEC has an SX-9 supercomputer at Tohoku University which is tops in several benchmarks.

During the third quarter of 2008 Cray achieved a major milestone by successfully deploying all of the cabinets for the petaflops system, ahead of schedule. Starting at 26 TF (26 trillion calculations per second) in 2006, the XT system grew 60-fold in capability through a series of upgrades to what is today the world’s most capable system dedicated to open scientific research. Jaguar uses over 45,000 of the latest quad-core Opteron processors from AMD and features 362 terabytes of memory and a 10-petabyte file system. The machine has 578 terabytes per second of memory bandwidth and unprecedented input/output (I/O) bandwidth of 284 gigabytes per second to tackle the biggest bottleneck in leading-edge systems—moving data into and out of processors. The upgraded Jaguar will undergo rigorous acceptance testing in late December before transitioning to production in early 2009.
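The quoted specifications imply some useful balance ratios (a rough calculation from the figures in the text):

```python
# Balance ratios implied by the Jaguar figures quoted above.
peak_flops = 1.64e15        # 1.64 petaflops peak
mem_bandwidth = 578e12      # 578 terabytes per second
memory_bytes = 362e12       # 362 terabytes of memory
cores = 45_000 * 4          # over 45,000 quad-core Opterons

bytes_per_flop = mem_bandwidth / peak_flops     # ~0.35 bytes/flop
gb_per_core = memory_bytes / cores / 1e9        # ~2 GB per core

print(f"{bytes_per_flop:.2f} bytes/flop, {gb_per_core:.1f} GB/core")
```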


Gallery of photos of the Cray XT5





NEC SX-9 Supercomputer
NEC Corporation's SX-9 supercomputer, which began operation at Tohoku University's Cyber Science Center (Sendai City, Miyagi prefecture, Japan; Hiroaki Kobayashi, Director) in March 2008, has achieved the world's fastest standing in the High Performance Computing (HPC) field through scoring top marks on 19 of 28 areas in the HPC Challenge Benchmark test.



FURTHER READING
20 Page PDF on petascale computing.

IEA World Energy Outlook


The International Energy Agency has made a new comprehensive forecast of world energy from now until 2030.

The conclusions are:
Current energy trends are patently unsustainable —socially, environmentally, economically
- Oil will remain the leading energy source but...
> The era of cheap oil is over, although price volatility will remain
> Oilfield decline is the key determinant of investment needs
> The oil market is undergoing major and lasting structural change, with national companies in the ascendancy
- To avoid "abrupt and irreversible" climate change we need a major decarbonisation of the world’s energy system
> Copenhagen must deliver a credible post‐2012 climate regime
> Limiting temperature rise to 2°C will require significant emission reductions in all regions & technological breakthroughs
> Mitigating climate change will substantially improve energy security
- The present economic worries do not excuse back‐tracking or delays in taking action to address energy challenges


IEA report press release.

Current trends call for energy-supply investment of $26.3 trillion to 2030, or over $1 trillion per year. Yet the credit squeeze could delay spending, potentially setting up a supply crunch that could choke economic recovery.

The findings of an unprecedented field-by-field analysis of the historical production trends of 800 oilfields indicate that decline rates are likely to rise significantly in the long term, from an average of 6.7% today to 8.6% in 2030. “Despite all the attention that is given to demand growth, decline rates are actually a far more important determinant of investment needs. Even if oil demand was to remain flat to 2030, 45 mb/d of gross capacity – roughly four times the current capacity of Saudi Arabia – would need to be built by 2030 just to offset the effect of oilfield decline”, Mr. Tanaka added.










FURTHER READING
6 page fact sheet

Key graphs 1.6 Megabytes

13 page executive summary pdf

November 11, 2008

Taiwan Makes Fastest 60GHz Wireless System on a Chip, 100 times Wifi Speed for less than $1


The coin (a Taiwan dollar) shown is about the size of a US quarter.

National Taiwan University announced its latest invention, a System on a Chip (SOC), yesterday: the smallest such product at the lowest cost, consuming the least electricity. The NTU research team claims that the transmission speed of the chip is 100 times that of WiFi and 350 times that of a 3.5G cell phone. Jri Lee indicated that the chip size has been reduced to 0.5 millimeter, one-tenth that of existing chips, and that the cost is less than one-tenth that of a traditional communication module and could be further lowered to only US$1. The SOC successfully combines RF front-end circuits and an antenna array to reach the highest transmission speed.
The range of this version is about ten meters. Higher power and bigger antennas can get far more range, but would be for a different class of device.

NTU coverage in Chinese

Babelfish translation of the Chinese-language coverage

The transmission speed can reach 100 times that of WiFi and 350 times that of a 3.5G handset; total power consumption will be one-fortieth that of similar chips, and the chip area is one-tenth the size of similar chips.

The chip uses the 60GHz frequency band and has reached 5Gigabit per second transmission speed. Total power usage is below 300 milliwatts.
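Those two figures imply an energy cost per transmitted bit, a common way to compare radios (a quick calculation):

```python
# Energy per transmitted bit implied by the figures above.
power_watts = 0.300        # total power below 300 milliwatts
bit_rate = 5e9             # 5 gigabits per second

joules_per_bit = power_watts / bit_rate      # 6e-11 J
print(f"{joules_per_bit * 1e12:.0f} pJ/bit")
```

At full rate the chip spends at most about 60 picojoules per bit.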

The demonstration was a prototype chip. Future versions will use better antennas.


National Taiwan University Chinese Site






In 2008, Intel researchers agreed that 60 GHz wireless is the way to go.

With 60 GHz spectrum there is the potential for moving data at over one gigabit per second (1 Gbit/s) and up to 8 Gbit/s (over ten-meter distances for personal area networks). This will allow things like fast video transfer at a kiosk where you will buy a movie for your Mobile Internet Device (MID) or mobile smartphone. 60 GHz has an advantage with seven GHz of unlicensed spectrum bandwidth available from the FCC.


Taiwan's chip has fully realized the potential of 60GHz communication.

Other researchers have had similar 60GHz chips but were looking at the $10 price range for the component.

The 60 GHz signal doesn’t penetrate easily through walls and other obstacles, so the applications are likely to span very short distances, such as one or two rooms in a home or office.

FURTHER READING
Jri Lee Profile at the Department of Electrical Engineering, National Taiwan University

Publications by Jri Lee up to 2006

60 GHz Of Promise Land . . . Or Is It?

Several companies (e.g. Proxim's GigaLink) already make commercial 60 GHz back haul systems that can transmit at rates to 1.25 Gb/s at a distance of a mile or so with a parabolic dish in good weather.

As for data rate, you need to look at available bandwidth. With 7 GHz of bandwidth available at 60 GHz (57 to 64 GHz in the U.S.), you can really get some great data rates (4 to 5 Gb/s to be conservative), and that is using the simpler BPSK and QPSK modulation methods. Obviously even higher rates are possible with multilevel modulation schemes. With QAM in OFDM you can probably get to the 20 Gb/s range.
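The rate arithmetic here is roughly bandwidth times bits per symbol, derated for coding and guard overhead; the 0.6 efficiency below is an assumed derating, not a figure from the passage:

```python
import math

# Rough raw-rate estimate: bit rate ~ bandwidth x bits/symbol x
# efficiency (coding and guard-interval overhead; 0.6 is assumed).
bandwidth_hz = 7e9        # 57-64 GHz unlicensed band in the U.S.
efficiency = 0.6

for name, levels in [("BPSK", 2), ("QPSK", 4), ("16-QAM", 16)]:
    bits_per_symbol = math.log2(levels)
    rate_gbps = bandwidth_hz * bits_per_symbol * efficiency / 1e9
    print(f"{name}: ~{rate_gbps:.1f} Gb/s")
```

Under this assumption, BPSK lands at about 4.2 Gb/s, consistent with the passage's conservative 4 to 5 Gb/s.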

Let's take a look at the basic RF range equation (called the Friis equation) that factors in antenna gains, wavelength, and transmit power and an assumed receive power. Using one mW transmit power, antenna gains of one at both receiver and transmitter, and a desired receive power of 10 nW, I get a range of about 5 inches. Not too useful. Bumping up the transmit power, adding gain antennas and making the receiver sensitivity greater should yield a range of multiple meters. Still pretty poor as it severely limits the application. No doubt manufacturers will work to get that range well up—otherwise why bother? The target is probably 10 meters, which is useful. The problem is the range prediction above assumes an unblocked line-of-sight (LOS) signal path. Any practical usage will encounter walls to penetrate and many obstacles that will create horrible multipath conditions. Under such conditions, assume a maximum range of only a few meters. Is that good enough? Maybe.
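The "about 5 inches" figure can be reproduced directly from the Friis equation using the numbers stated above (1 mW transmit power, unity antenna gains, 10 nW required receive power, 60 GHz carrier):

```python
import math

# Friis free-space equation: Pr = Pt*Gt*Gr*(lambda/(4*pi*d))**2, solved for d.
c = 3e8            # speed of light, m/s
f = 60e9           # carrier frequency, Hz
wavelength = c / f # 5 mm at 60 GHz

pt, gt, gr, pr = 1e-3, 1.0, 1.0, 10e-9  # transmit W, gains, receive W

d = (wavelength / (4 * math.pi)) * math.sqrt(pt * gt * gr / pr)
print(round(d / 0.0254, 1))  # ~5.0 inches, matching the text
```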

One final note about the propagation. As it turns out, 60 GHz is at the part of the spectrum where absorption of the signal by water molecules is at a peak. That's probably why they made 60 GHz the unlicensed band; the frequencies directly above and below are more useful. What that means is that when it rains or snows, signal amplitude will be severely decreased or even blocked. That attenuation is in the 10 to 15 dB/km range, or about 1.5 dB per 100 meters [oxygen absorption also contributes]. Luckily, most applications will probably be of the indoor variety, so we won't have to worry about that.


A draft standard for 60GHz WPAN (Wireless Personal Area Network)

The three device types are defined as follows:

1. A type A device offers video streaming and WPAN applications in 10-meter range LOS/NLOS multipath environments. It uses high-gain trainable antennas. This device type is considered the 'high end', high-performance device.
2. A type B device offers video and data applications over shorter-range (1-3 meter) point-to-point LOS links with non-trainable antennas. It is considered the 'economy' device and trades off range and NLOS performance in favour of low-cost implementation and low power consumption.
3. A type C device supports data-only applications over point-to-point LOS links at less than 1-meter range, with non-trainable antennas and no QoS guarantees. This type is considered the 'bottom end' device, providing the simplest implementation, lowest cost and lowest power consumption.
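For reference, the three device profiles can be sketched as a small data structure (the field names are my own; the values come from the draft text above):

```python
from dataclasses import dataclass

# Hypothetical encoding of the three WPAN device types from the draft standard.
@dataclass
class DeviceType:
    name: str
    max_range_m: float
    trainable_antenna: bool
    description: str

TYPES = [
    DeviceType("A", 10.0, True, "high-end video streaming, LOS/NLOS multipath"),
    DeviceType("B", 3.0, False, "economy video/data, point-to-point LOS"),
    DeviceType("C", 1.0, False, "data only, no QoS guarantees"),
]
```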


Graphene Production Advance: Route to Large-Scale Graphene Sheets


UCLA researchers developed a method of placing graphite oxide paper in a solution of pure hydrazine (a chemical compound of nitrogen and hydrogen), which reduces the graphite oxide paper into single-layer graphene.

This is the first reported instance of using hydrazine as the solvent. The graphene produced from the hydrazine solution is also a more efficient electrical conductor. Field-effect devices display output currents three orders of magnitude higher than previously reported using chemically produced graphene.

Kaner and Kang's co-authors on the research were doctoral students Vincent Tung, from Yang's lab, and Matthew Allen, from Kaner's lab.

"We have discovered a route toward solution processing of large-scale graphene sheets," Tung said. "These breakthroughs represent the future of graphene nanoelectronic research."

The coverage of the graphene sheets can be controlled by altering the concentration and composition of the hydrazine solution. This hydrazine method also preserves the integrity of the sheets, producing the largest-area graphene sheet yet reported, 20 micrometers by 40 micrometers. A micrometer is one-millionth of a meter, while a nanometer is one billionth of a meter.

"These graphene sheets are by far the largest produced, and the method allows great control over deposition," Allen said. "Chemically converted graphene can now be studied in depth through a variety of electronic tests and microscopic techniques not previously possible."





The abstract for the Nature Nanotechnology paper "High-throughput solution processing of large-scale graphene" is here

The electronic properties of graphene, such as high charge carrier concentrations and mobilities, make it a promising candidate for next-generation nanoelectronic devices. In particular, electrons and holes can undergo ballistic transport on the sub-micrometre scale in graphene and do not suffer from the scale limitations of current MOSFET technologies. However, it is still difficult to produce single-layer samples of graphene and bulk processing has not yet been achieved, despite strenuous efforts to develop a scalable production method. Here, we report a versatile solution-based process for the large-scale production of single-layer chemically converted graphene over the entire area of a silicon/SiO2 wafer. By dispersing graphite oxide paper in pure hydrazine we were able to remove oxygen functionalities and restore the planar geometry of the single sheets. The chemically converted graphene sheets that were produced have the largest area reported to date (up to 20 μm × 40 μm), making them far easier to process. Field-effect devices have been fabricated by conventional photolithography, displaying currents that are three orders of magnitude higher than previously reported for chemically produced graphene. The size of these sheets enables a wide range of characterization techniques, including optical microscopy, scanning electron microscopy and atomic force microscopy, to be performed on the same specimen.


A 9 page supplement for the research is here

Supercomputer Conference: Possible Exascale Disruption and the Best Technical Papers

The Supercomputer conference is Nov 15-20, 2008.

Technological developments in several areas have the potential to impact exascale supercomputer systems in a very disruptive way. These technologies could lead to viable exascale systems in the 2015-2020 timeframe. Four technologies are:

* Quantum computing [Dwave Systems]
* Flash storage: Sun Micro has introduced high-performance flash from terabytes up to half a petabyte
* Cheap and low-power optical communications: Keren Bergman talks about nanophotonics for on-chip and inter-chip communication
* IBM 3D chip stacking

IBM announced its chip-stacking technology for a manufacturing environment one year ago. The technique drastically shortens the distance that information needs to travel on a chip, to just 1/1000th of that on 2-D chips, and allows the addition of up to 100 times more channels, or pathways, for that information to flow.

IBM researchers are exploring concepts for stacking memory on top of processors and, ultimately, for stacking many layers of processor cores.

IBM scientists were able to demonstrate cooling performance of up to 180 W/cm² per layer for a stack with a typical footprint of 4 cm².
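A quick multiplication shows what that cooling figure means per layer of the stack:

```python
# Per-layer heat removal implied by the demonstrated figures above.
cooling_w_per_cm2 = 180
footprint_cm2 = 4
per_layer_watts = cooling_w_per_cm2 * footprint_cm2  # 720 W removed per layer
```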






Some of the best Technical Papers

Links to all technical paper abstracts are here.

1. High-Radix Crossbar Switches Enabled by Proximity Communication

Parallel applications are usually able to achieve high computational performance but suffer from large latency in I/O accesses. I/O prefetching is an effective solution for masking the latency. Most of existing I/O prefetching techniques, however, are conservative and their effectiveness is limited by low accuracy and coverage. As the processor-I/O performance gap has been increasing rapidly, data-access delay has become a dominant performance bottleneck. We argue that it is time to revisit the “I/O wall” problem and trade the excessive computing power with data-access speed. We propose a novel pre-execution approach for masking I/O latency. We describe the pre-execution I/O prefetching framework, the pre-execution thread construction methodology, the underlying library support, and the prototype implementation in the ROMIO MPI-IO implementation in MPICH2. Preliminary experiments show that the pre-execution approach is promising in reducing I/O access latency and has real potential.



2. Benchmarking GPUs to Tune Dense Linear Algebra

We present performance results for dense linear algebra using the 8-series NVIDIA GPUs. Our GEMM routine runs 60% faster than the vendor implementation and approaches the peak of hardware capabilities. Our LU, QR and Cholesky factorizations achieve up to 80-90% of the peak GEMM rate. Our parallel LU running on two GPUs achieves up to ~300 Gflop/s. These results are accomplished by challenging the accepted view of the GPU architecture and programming guidelines. We argue that modern GPUs should be viewed as multithreaded multicore vector units. We exploit register blocking to optimize GEMM and heterogeneity of the system (compute both on GPU and CPU). This study includes detailed benchmarking of the GPU memory system that reveals sizes and latencies of caches and TLB. We present a couple of algorithmic optimizations aimed at increasing parallelism and regularity in the problem that provide us with slightly higher performance.


3. A Scalable Parallel Framework for Analyzing Terascale Molecular Dynamics Trajectories

As parallel algorithms and architectures drive the longest molecular dynamics (MD) simulations towards the millisecond scale, traditional sequential post-simulation data analysis methods are becoming increasingly untenable. Inspired by the programming interface of Google's MapReduce, we have built a new parallel analysis framework called HiMach, which allows users to write trajectory analysis programs sequentially, and carries out the parallel execution of the programs automatically. We introduce (1) a new MD trajectory data analysis model that is amenable to parallel processing, (2) a new interface for defining trajectories to be analyzed, (3) a novel method to make use of an existing sequential analysis tool called VMD, and (4) an extension to the original MapReduce model to support multiple rounds of analysis. Performance evaluations on up to 512 processor cores demonstrate the efficiency and scalability of the HiMach framework on a Linux cluster.



FURTHER READING
The Conference schedule is here.

November 10, 2008

Drug Increases Muscle in the Elderly

The Annals of Internal Medicine has a report on the drug MK-677, which has been found to boost the lean muscle of elderly people and reduce their frailty.

Over 12 months, the ghrelin mimetic MK-677 enhanced pulsatile growth hormone secretion, significantly increased fat-free mass, and was generally well tolerated. Long-term functional and, ultimately, pharmacoeconomic, studies in elderly persons are indicated. In this randomized trial, 65 healthy older adults were assigned to receive placebo or MK-677, an oral ghrelin mimetic that increased pulsatile growth hormone secretion to young-adult levels. Over 1 year, lean fat-free mass increased 1.1 kg with MK-677 and decreased 0.5 kg with placebo. MK-677 did not affect strength and function, but insulin sensitivity declined and mean serum glucose levels increased 0.28 mmol/L (5 mg/dL).






Livescience reported the following on MK-677

A daily dose of an investigational medication has been found to restore muscle mass in the arms and legs of older adults and improve some of their biochemistry to levels found in healthy young adults, suggesting that an anti-frailty drug has been found.

The drug, called MK-677, was evaluated for its safety and effectiveness in a study that showed the drug restored 20 percent of muscle mass loss associated with normal aging. In fact, levels of growth hormone (GH) and of insulin-like growth factor I (IGF-I) in healthy seniors who took the drug increased to the levels found in healthy young adults, said Michael O. Thorner, a professor of internal medicine and neurosurgery at the University of Virginia Health System.


Heart Attack, Stroke and Cardiovascular Risk Can Be Halved even if you have Low Cholesterol


Statins can help lower heart attack risk by 50% even if you have low cholesterol but do have a high level of C-reactive protein. (Six million people in the United States fall into this category, and 250,000 cardiovascular events - strokes and heart attacks - could be avoided over five years if those 6 million were given this treatment and the results of the study were duplicated.)

The New England Journal of Medicine study is here: "Rosuvastatin to Prevent Vascular Events in Men and Women with Elevated C-Reactive Protein"

The study included about 18,000 apparently healthy men and women with normal cholesterol but higher than normal levels of high sensitivity C-reactive protein, a marker of inflammation that has been linked to heart disease.

All of the study participants had LDL cholesterol levels of less than 130 milligrams per deciliter when they entered the trial, and none had known diabetes or heart disease. But they did have high-sensitivity CRP levels of 2.0 milligrams per liter or higher.

Blood hsCRP levels of less than 1 milligram per liter are indicative of low cardiovascular risk, while 1 to 3 milligrams per liter indicates moderate risk, and greater than 3 indicates high risk, Ridker says.

About 9,000 study participants were treated with 20 milligrams per day of Crestor and an equal number of participants took a placebo.

When the trial was stopped after a median follow-up of 1.9 years, statin users had lowered their LDL cholesterol by an average of 50% and their hsCRP by 37%.

There were also half as many heart attacks, strokes, and deaths from cardiovascular causes among the participants taking the statin. In all, 0.9% of statin users had one of these events, compared to 1.8% of placebo users.


The Wall Street Journal reports that an analysis by study statistician Robert Glynn of Brigham estimated that applying the Jupiter findings in medical practice to six million Americans for five years would prevent 250,000 major cardiovascular events. The study suggests 25 patients would need to be treated for five years to prevent one major event, a number Dr. Ridker says appears at least as cost-effective as strategies screening for high LDL.
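The 250,000 figure follows directly from the treat-25-to-prevent-one estimate:

```python
# Population arithmetic behind the quoted projection: six million at-risk
# Americans divided by a number-needed-to-treat of 25 over five years.
population = 6_000_000
nnt_five_years = 25
events_prevented = population // nnt_five_years  # 240,000, roughly the 250,000 quoted
```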






The study raises important questions about the role of high-sensitivity CRP in assessing cardiovascular risk.

The test is increasingly used by cardiologists but has not been considered a routine test for heart disease risk, mainly because its impact on treatment decisions has not been clear.

These findings, along with two other studies presented this weekend in New Orleans, could change this.

The studies, supported by the National Heart, Lung, and Blood Institute (NHLBI), showed the hsCRP test to be valuable for evaluating risk after a first heart attack or stroke.

In a written statement, NHLBI Director Elizabeth G. Nabel, MD, notes that the three studies provide the strongest evidence so far that hsCRP testing is a useful marker for cardiovascular disease.

Seaweed, Fungus and Algae Biofuel

An editorial by Ricardo Radulovich strongly advocates using seaweed, macro-algae, to solve our energy issues.

Until now, seaweed has been valued mainly as food, but also as fertiliser, animal feed, and recently for a growing phycocolloid industry producing algin, agar and carrageenan. But it could also be a major fuel.

Macro-algae (seaweeds) are cultivated at sea, mainly by simply tying them to anchored floating lines. Seaweeds do not require soil, and are already provided with all the water they need, a major advantage over land production of biofuels since water is the most limiting factor for most agricultural expansion, especially with climate change.

We have calculated that less than three per cent of the world's oceans — that's about 20 per cent of the land area currently used in agriculture — would be needed to fully substitute for fossil fuels.
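The two percentages in that claim are roughly consistent with each other. A sanity check, using approximate world ocean and agricultural land areas that I am assuming (they are not given in the article):

```python
# Rough global areas (assumed figures, not from the editorial).
ocean_km2 = 361e6    # total world ocean surface
ag_land_km2 = 49e6   # land currently in agricultural use

seaweed_area = 0.03 * ocean_km2              # ~10.8 million km^2
fraction_of_ag = seaweed_area / ag_land_km2  # ~0.22, close to the "20 per cent" claimed
```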


In March 2007, Tokyo University of Marine Science and Technology, the Mitsubishi Research Institute, and several companies announced a project to develop bioethanol from seaweed. The plan is to cultivate Sargasso seaweed in an area covering 3,860 square miles in the Sea of Japan. This will be harvested and converted into ethanol aboard ships, which will carry the biofuel to a tanker. The process is expected to yield 5 billion gallons of bioethanol in 3-5 years. [equal to about 326,000 barrels of oil per day, or one percent of OPEC oil production]
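The bracketed conversion checks out at 42 gallons per barrel, if we treat a barrel of ethanol as a barrel of oil (this ignores ethanol's lower energy density):

```python
# Convert 5 billion gallons per year of bioethanol to barrels per day.
gallons_per_year = 5e9
gallons_per_barrel = 42
barrels_per_day = gallons_per_year / gallons_per_barrel / 365  # ~326,000
```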


The Japanese seaweed fuel project would put Japan in the range of Brazil's biofuel output levels. Brazil and the United States are the current world leaders in biofuel production (Brazil uses sugarcane and the US mostly uses corn and soybeans). The USA and Brazil produce about 70% of the world's biofuel.

Currently about eight million tonnes of seaweed are produced each year, with a market of nearly $6 billion, primarily in China, Japan and Korea. The seaweed is grown for food.

Indonesia harvested 1,079,850 tons of seaweed in 2006 but is expected to reach 1.9 million tons in 2009. In September, South Korea's government signed a deal to lease 25,000 hectares (61,750 acres or about 90 square miles) of Indonesian coastal waters to grow seaweed for bioethanol fuel.

Italy is looking at seaweed biofuel.





Fungus Produces Diesel

A fungus has been discovered in the rainforest which could be used to produce biodiesel.


FURTHER READING
Biofuel news

Camelina (a weed that can grow in colder climates) biofuel feedstock.

Solazyme algae biodiesel

http://seaweed.nuigalway.ie/isa/

http://www.seaweed.ie/

Online algae database

The top five cultivated seaweeds in the world are Laminaria, Porphyra, Undaria, Eucheuma and Gracilaria. These together account for 5.97 million metric tonnes of seaweed production. Top 10 countries...

Bakken and Three Forks-Sanish Oil Update

North Dakota production was over 177,000 barrels per day in August 2008, with barrels per well increasing from 1000 last year to 1400 this year. This does not include about 50,000 bopd from Montana or the Saskatchewan Bakken oil.

The Three Forks-Sanish Formation, the geological layer below the Bakken oil formation, shows promise as well.

In April, the U.S. Geological Survey estimated that up to 4.3 billion barrels of oil can be recovered from the Bakken using current technology. The potential of the Three Forks-Sanish formation was factored into the agency's estimate for the Bakken, though it was based on production from the Antelope Field, said Rich Pollastro, a USGS geologist.

Pollastro said only about 50 wells have tapped the Three Forks-Sanish formation in the past, and all have used conventional vertical drilling. He said Continental's well was the first to use the horizontal drilling techniques that have been successful in the Bakken.

Many new wells are producing 700 barrels per day and more by tapping into the Three Forks-Sanish Formation.

XTO Energy is getting 1750 barrels per day out of some of its Three Forks-Sanish wells.





There may be a Bakken Pipeline spur which would help move more of the Bakken oil out of Saskatchewan and North Dakota.

November 09, 2008

Update on Hyperion Power Generation mini-nuclear reactor

The UK Guardian newspaper reports that Hyperion Power Generation CEO John Deal claims to have more than 100 firm orders, largely from the oil and electricity industries, but says the company is also targeting developing countries and isolated communities.

The company plans to set up three factories to produce 4,000 plants between 2013 and 2023. 'We already have a pipeline for 100 reactors, and we are taking our time to tool up to mass-produce this reactor.'

The first confirmed order came from TES, a Czech infrastructure company specialising in water plants and power plants. 'They ordered six units and optioned a further 12. We are very sure of their capability to purchase,' said Deal. The first one, he said, would be installed in Romania. 'We now have a six-year waiting list. We are in talks with developers in the Cayman Islands, Panama and the Bahamas.'


The six-year waiting list appears to mean five years until the first unit is delivered, then one hundred of the 15-ton reactors produced over the following 12 to 18 months, then scaling to 400-500 reactors every year.


Safety of this Liquid Metal Reactor

The patent for the reactor is here.

The liquid metal reactor takes advantage of the physical properties of a fissile metal hydride, such as uranium hydride, which serves as a combination fuel and moderator. The design is self-stabilizing and requires no moving mechanical components to control nuclear criticality. In contrast with customary designs, control of the nuclear activity is achieved through the temperature-driven mobility of the hydrogen isotope contained in the hydride. If the core temperature increases above a set point, the hydrogen isotope dissociates from the hydride and escapes out of the core, the moderation drops, and power production decreases. If the temperature drops, the hydrogen isotope is re-absorbed by the fissile metal hydride and the process is reversed. The hydride splits chemically when it gets too hot, much as boiling water cannot exceed the boiling point: you would have to keep the water under pressure to force higher temperatures.
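The self-regulation described above is a negative feedback loop. A toy numerical sketch (all constants are invented for illustration, not taken from the Hyperion design) shows how such a core settles at its set point without any control mechanism:

```python
# Toy model: moderation falls linearly as temperature exceeds the setpoint
# (hydrogen leaving the hydride), power tracks moderation, and core
# temperature relaxes toward a level set by the current power.
SETPOINT = 550.0  # hypothetical core operating temperature, deg C

def step(temp):
    moderation = max(0.0, 1.0 - 0.01 * (temp - SETPOINT))
    power = 100.0 * moderation  # thermal power, arbitrary units
    temp += 0.1 * ((5.0 * power + 50.0) - temp)
    return temp, power

temp = 700.0  # start well above the setpoint
for _ in range(200):
    temp, power = step(temp)
# temp settles at the setpoint; power settles at its design level
```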

Adapting the University Triga Reactor Safety Systems

Triga reactors were developed at General Atomics. They are teaching reactors safe enough to be operated by university students and are walk-away safe. Over 60 Triga reactors have been built, and some have been used for decades.

Here is an 11 page pdf history of the Triga reactors, which have had steady state power generation up to 16MW. There have been 25MW designs.

The safety systems will be similar but the reactor cores are different between the Triga (fuel rods in a pool type reactor) and the Hyperion Power Generation Uranium Hydride (liquid metal) reactor.

No Incremental Risk
If you were going to blow it up, it would take a lot of explosives - like blowing up a 15-20 ton buried bank vault: a lot of explosives to penetrate the concrete cask, and then more to blow through however many feet of dirt it is buried under.

It would not add much to the cost to add sensors and digital video camera security to these units, so extreme tunneling or attempts to move or blow one up should be easily detectable, and action could be taken.

For the amount of effort and explosives it would take, an attacker could instead take those explosives, add radioactive material (available in mines and in less secure facilities and sources), and put a dirty bomb anywhere. Thus there is no incremental risk.

The nuclear material is tougher to turn into nuclear bombs than raw uranium, which a terrorist could get from natural sources (mines, etc.). Again, no incremental risk: we are adding no new risk, as there is an easier existing path.

Economics
$25 million for each of the initial 25-30 MWe reactors.
For getting oil from oil shale, this system can supply heat instead of natural gas. Hyperion also offers a 70% reduction in operating costs (based on costs for field generation of steam in oil-shale recovery operations), from $11 per million BTU for natural gas to $3 per million BTU for Hyperion. Over five years, a single Hyperion reactor can save $2 billion in operating costs in a heavy oil field. Many of the initial one hundred orders are from oil and gas companies.



Here is a comparison to help put the system's potential into perspective. A single truck can deliver the HPM heat source to a site. The device is supposed to be able to produce 70 MW of thermal energy for 5 years. That means the truck will be delivering about 10.5 trillion BTUs to the site. Natural gas costs about $7 per million BTU, so that much heat would cost $73 million.
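Those heat and cost figures can be checked with quick arithmetic:

```python
# 70 MW thermal for 5 years, valued at $7 per million BTU for natural gas,
# compared against the ~$25M announced reactor price.
watts = 70e6
seconds = 5 * 365 * 24 * 3600
joules = watts * seconds
btu = joules / 1055            # ~10.5 trillion BTU, as stated
gas_cost = btu / 1e6 * 7       # ~$73 million worth of natural gas heat
ratio = gas_cost / 25e6        # ~3x the reactor's announced selling price
```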

That is about 3 times as much as the announced selling price for an HPM, but the advantage does not stop there: the HPM is targeted at places where there are no gas pipelines to deliver gas, so natural gas is not available at any price.

Instead, it would be better to compare the HPM to diesel fuel, which currently costs about 2 times as much per unit of useful heat as natural gas and still requires some form of delivery for remote locations. In some places, fuel transportation costs are two or three times as much as the cost of the fuel from the central supply points.

In certain very difficult terrains, or in places where there are people who like to shoot at tankers, delivery costs can be 100 times as much as the basic cost of the fuel.


Scaled Up

Initially these units will be in remote areas near oil sands projects, not directly under people's houses. Do people live directly over power transformers or oil refineries? The first few thousand can be placed on the sites of existing nuclear and coal plants, which have a few square miles of space. Even if there were eventually one for every ten or twenty thousand homes, they would be situated in an industrial-zoned area. For eastern Europe and island developments, the units will be sited several hundred meters from where people live.

Three factories from a small company are scheduled to produce 4,000 of these 15-ton reactors, each using 100-200 kg or so of uranium every 5-10 years. Make three hundred factories and produce 400,000 of these 15-ton reactors every five years: that is 16,000 tons of uranium per year (a fraction of what we now use for light water reactors) producing 10 TWe of power. The world currently uses about 15 TW of power. This system could provide virtually carbon- and pollution-free energy.
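The scale-up arithmetic above checks out, taking 200 kg of uranium (the high end of the fuel range) and 25 MWe (the low end of the power range) per reactor:

```python
# Hypothetical mass production: 300 factories, 400,000 reactors per 5 years.
reactors_per_5yr = 400_000
uranium_kg_each = 200
mwe_each = 25

uranium_tons_per_year = reactors_per_5yr * uranium_kg_each / 1000 / 5  # 16,000 t/yr
total_twe = reactors_per_5yr * mwe_each / 1e6                          # 10 TWe
```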

Reprocess the football-sized waste that is removed at the end of each 5-10 year cycle, and over the course of 15 years develop factory-mass-produced molten salt reactors for 99% efficient use of the uranium or thorium.

After 50-100 years, each of the units themselves would need to be decommissioned. If there were 80,000 per year in 50-100 years, that would be 1.6 million tons of material to handle each year. This is far less of an issue than the billions of tons of CO2 and particulates from coal, and less of an issue than the mercury, arsenic and toxic metals, which are often not contained, or the bits of uranium and thorium in coal that go up the smoke stacks at 20,000 tons/year.

FURTHER READING
This site has provided extensive past coverage of the uranium hydride reactor

Uranium hydride reactor update April 2008

Uranium hydride reactor could blunt peak oil


Generating heat to extract oil from the oilsands.

Powering Vasimr rocket engines for fast travel inside the solar system

Are disposable reactors a safe option

In the link "uranium hydride could blunt peak oil", this site reviewed the patent for this reactor.

They use 4.9% enriched uranium. Fissile fuel burnup of at least 50% should be achievable with adequate design. That is about 450 gigawatt-days per ton of uranium or thorium, roughly ten times more efficient than current nuclear reactors, leaving half as much unburned leftover uranium.
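The 450 GWd/t figure is consistent with a standard rule of thumb - roughly 0.95 MWd of heat per gram of heavy metal fissioned (my assumption, not stated in the article):

```python
# Burnup sanity check at 50% of the heavy metal fissioned.
mwd_per_gram_fissioned = 0.95   # rule-of-thumb fission energy yield
burnup_fraction = 0.5
grams_per_ton = 1e6

gwd_per_ton = mwd_per_gram_fissioned * burnup_fraction * grams_per_ton / 1000
# ~475 GWd/t, in line with the ~450 figure; a light water reactor manages
# roughly 45 GWd/t, hence the "about ten times" claim.
```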

Its fuel lasts about 5 years. Other reactors also require refueling; in this case, refueling is done by digging up the reactor if needed and having the manufacturer perform the refueling. In between, no people operate the reactor, because it is self-regulating. The manufacturer separates about a football-sized amount of material when taking the used fuel out.

Its parts: it is basically a hot tub full of uranium hydride with some hydrogen and some heat-exchange rods.

The right tub of materials regulates itself while generating electricity.