
August 28, 2009

Using Technology to Enable the Good Enough Revolution and Paths to Abundance

Wired describes the "Good Enough Revolution"

Wired makes the following points about the "Good Enough Revolution".

Eventually all markets reach a point where products gain more "value" from having core features that are extremely easy to use, convenient and cheap than from adding more features.

* In 2001, 181 million disposable cameras versus 7 million digital cameras
* Flip Ultra video camera costs $150 versus $8000 for midrange digital video cameras. (Flip Ultra has 17% of the video camera market). Flip has VGA quality versus hi-def competition.
* MP3 versus higher quality sound formats
* Skype and Voice over IP versus higher quality phone services
* Predator UAV versus expensive jets. (100-1000 times cheaper and offering constant presence over targets and potential targets)
* elawyers
* Two doctors working out of a microclinic could meet 80 percent of a typical patient's needs. With a hi-def video conferencing add-on, members could even link to a nearby hospital for a quick consult with a specialist. The per-member cost at a microclinic is roughly half that of a full Kaiser hospital



Beyond Cheap: the Path to Abundance
Cheaper can also offer more. If we bring the cost of most medical tests down to pennies per test and make them easy to perform, people would be able to self-test frequently, or even have constant monitoring from implants or wearable monitors, with the results continuously tracked.

Full automation of medical testing and biomarker tracking would drop the cost of medicine another 100 times below microclinic levels and enable prevention and cures at the earliest stages of disease development.

Cheap, plentiful and good enough evolves into abundance.

* Two doctors in a microclinic at half the cost of a hospital, meeting 80 percent of a typical patient's needs
* A nurse practitioner with a more automated microclinic at one fifth the cost of a hospital
* Automated medical kiosks at one tenth the cost of a hospital
* At-home medical testing and monitoring systems 100 times cheaper than a hospital

* Implantable and wearable devices: everyone has their blood, sweat and urine continuously monitored for cancer cells and other markers, and has genetic testing for less than a dollar. Cheap body and brain scans. Prevention of most diseases.
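A quick back-of-envelope sketch of the cost tiers listed above. The $200 hospital baseline is a purely illustrative figure of mine, not from the article; only the fractions come from the list.

```python
# Hypothetical baseline cost for a routine hospital-based care episode.
HOSPITAL_VISIT_COST = 200.0  # illustrative figure, not from the article

# Cost tiers relative to a hospital, as described in the list above.
tiers = {
    "two-doctor microclinic": 1 / 2,
    "nurse practitioner + automation": 1 / 5,
    "automated medical kiosk": 1 / 10,
    "at-home testing and monitoring": 1 / 100,
}

for name, fraction in tiers.items():
    print(f"{name:35s} ~${HOSPITAL_VISIT_COST * fraction:6.2f} per episode")
```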

For video cameras, the equivalent is having many always-on video cameras that use ambient or scavenged energy (body heat, solar power, etc.), with automatic wireless synchronization (WhiteFi [devices using the old analog TV spectrum] or another wireless system). The devices are so cheap you do not care if they get wrecked, and the update and sync are frequent and automatic, so very little data is lost under any circumstances. You are no longer taking an action to capture the moment; capture is automatic, and you instead choose when to pause or stop recording and retrieve the information as needed.
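A minimal sketch of that capture-by-default model. Everything here (the clip buffer size, the `sync_to_cloud` stub, the class name) is hypothetical; it only illustrates the inversion described above: recording runs continuously and syncs opportunistically, and the deliberate user action is pausing or retrieving, not starting.

```python
import time
from collections import deque

def sync_to_cloud(clip):
    # Hypothetical stand-in for WhiteFi or any other wireless sync channel.
    print(f"synced clip captured at {clip['t']:.0f}")

class AlwaysOnCamera:
    def __init__(self, buffer_clips=1000):
        # Ring buffer: old unsynced clips are simply overwritten, because the
        # device is cheap and sync happens often enough that little is lost.
        self.buffer = deque(maxlen=buffer_clips)
        self.recording = True

    def tick(self):
        """Capture one short clip and try to sync it; called continuously."""
        if self.recording:
            self.buffer.append({"t": time.time(), "frames": "..."})
        while self.buffer:            # opportunistic, automatic sync
            sync_to_cloud(self.buffer.popleft())

    def pause(self):
        # The only deliberate user action is stopping, not starting.
        self.recording = False

camera = AlwaysOnCamera()
for _ in range(3):
    camera.tick()
camera.pause()
```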

Nuclear Costs at BraveNewClimate and from Dan Yurman and Others

Recent nuclear power cost estimates – separating fact from myth at Brave New Climate by Barry Brook

David Walters at Daily Kos

What do we know for sure? We know, for sure, that the cost of the materials for an AP-1000 is less than 1.4 billion dollars. How do we know that? Because this is what the Chinese are saying it is going to cost to build their AP-1000s, give or take $100 million or so. This means that Westinghouse is charging, at most, about $1000/kW for installed components. Interesting, yes? We also can assume that Westinghouse is charging the Chinese the same as it is charging for the dozen or more new builds submitted to the US NRC.
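A back-of-envelope check on that $/kW figure. The quoted post does not state the plant rating, so the roughly 1,100 MWe of net output per AP-1000 used below is my assumption.

```python
# Rough $/kW check for the AP-1000 materials figure quoted above.
materials_cost = 1.4e9       # dollars, "less than 1.4 billion" per the quote
net_output_mwe = 1100        # assumed net electrical output per AP-1000 unit

cost_per_kw = materials_cost / (net_output_mwe * 1000)
print(f"~${cost_per_kw:.0f}/kW")   # about $1,270/kW at the full $1.4 billion;
                                   # a contract nearer $1.1 billion gives ~$1,000/kW
```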

Early this year I met with an EDF nuclear physicist, a far-left member of their union, at his nuclear plant in southern France and asked him about this. He said that at Flamanville they are already developing new concrete techniques (some, believe it or not, derived from French wine barrel making techniques!) to speed up and help in building the plant. They expect, still, that the plant will be over budget but not nearly as much as the Finnish version. People at EDF, both in management and in the unions, are very optimistic.


Dan Yurman at Idaho Samizdat explains the costs and bids for nuclear plants that were considered in Ontario, Canada

$26 billion is an aggregate number that includes two reactors, turbines, transmission and distribution infrastructure (power lines or T&D), plant infrastructure, and nuclear fuel for 60 years as well as decommissioning costs. [Every cost for pre-build, build, infrastructure upgrades, 60 years of operation and also decommissioning some decades after.] One of the most important numbers in the whole controversy has gone largely without notice: the delivered cost of electricity from the plants is in the range of five cents per kilowatt hour.



Charles Barton has an analysis of wind energy costs as compared to nuclear power

RELATED NEWS

South Korea could become the fourth country to develop plans to export reactors to India, after Russia, France and the USA. All these plans have come in a rush after a virtual nuclear trade embargo against India was dismantled last year.

Kepco's APR-1400 is the technology to be studied, a pressurized water reactor (PWR) developed from Westinghouse units imported in the 1970s. That deal included an element of knowledge transfer, and Korean firms have since mastered the design and manufacture of every single component of the APR-1400.


Latest Developments in Nanotechnology Presentation at Singularity University Slides



Slides from the presentation made on July 16, 2009 at the Singularity University by Brian Wang of Nextbigfuture.



Some other presentations from the Singularity University that are available on slideshare.

Singularity University Open Source Panel



Singularity University Spime Design Workshop



Singularity University at Youtube

Search of Youtube for Singularity University












Singularity University Semester Completion and Projects

The first Singularity University 9 week semester is completing today.

A goal of the Singularity University is to catalyze projects that could possibly help 1 Billion People in Under Ten Years.

The team projects from the first semester include the following reports:

* One Global Voice leverages mobile phone proliferation to accelerate economic development. It envisions a platform that will provide a set of modular programming tools accessible through a web portal, empowering individuals to create applications for education and commerce and linking together the developed and developing worlds.

* Gettaround addresses how an intelligent transportation grid can positively affect energy usage and slow climate change, as people value access over ownership of cars. The first step to the grid, Gettaround is a marketplace for peer-to-peer leasing of under-utilized car hours. It enables car owners to derive revenue from their idle cars, and for renters to have easy access to cars – affordably and conveniently.

* ACASA focuses on advances in rapid, additive manufacturing technologies to construct affordable and customizable housing in the developing world. Cost-efficient, environmentally sustainable solutions have the potential to create a transformative new paradigm for improving housing construction using local resources.

* XIDAR considers a new paradigm for disaster response, allowing users to overcome the communications network problems typical of crisis situations. The project enables innovative solutions to facilitate evacuation, medical triage and aid during natural disasters.




The ACASA work is looking to commercialize one of the items in the Nextbigfuture mundane singularity. ACASA is trying to develop printable buildings. Caterpillar Inc has been funding the contour crafting project since 2008. The work originated at the University of Southern California.

Contour Crafting is an effort to scale up rapid prototyping/manufacturing (a billion dollar industry for making three-dimensional parts) and inkjet printing techniques to the scale of building multi-story buildings and vehicles. The process could speed up construction in the trillion dollar (US only) construction industry by 200 times. Projections indicate costs would be around one fifth as much as conventional construction. (Land prices are unchanged, so the actual prices of homes would not change as much in, say, Hawaii, Tokyo, Manhattan or San Francisco.) Using this process, a single house or a colony of houses, each with possibly a different design, could be automatically constructed in a single run, with all the conduits for electrical, plumbing and air-conditioning embedded in each house.
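A small illustration of why the land caveat matters. The dollar figures below are hypothetical examples of mine, not from the article; only the one-fifth construction cost factor comes from the projection above, and it applies only to the construction portion of a home's price.

```python
# Hypothetical split of a home's price into land and construction.
examples = {
    "high land cost market": {"land": 600_000, "construction": 200_000},
    "low land cost market":  {"land":  50_000, "construction": 200_000},
}
CONSTRUCTION_COST_FACTOR = 1 / 5   # contour crafting projection from the article

for market, p in examples.items():
    before = p["land"] + p["construction"]
    after = p["land"] + p["construction"] * CONSTRUCTION_COST_FACTOR
    print(f"{market}: ${before:,.0f} -> ${after:,.0f} "
          f"({100 * (1 - after / before):.0f}% cheaper overall)")
```

The high-land-cost home only gets about 20% cheaper overall, while the low-land-cost home gets roughly 64% cheaper, which is the point the parenthetical is making.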

The contour crafting section of nextbigfuture articles.

This was also one of the technologies that Nextbigfuture has described as a seed of a manufacturing revolution.

There are many economic benefits to being able to build quickly while still maintaining high quality and adaptability.



Hopefully ACASA can be successful in accelerating the commercialization and deployment of printable buildings.

There is competition at the low end of rapid building production, where small buildings can be produced in a factory, but printable buildings have greater potential. It would be useful to expand the capabilities to fully printable infrastructure, or to get around certain infrastructure needs with independent power generation, wireless communication, capture of rainwater, use of wells and onsite management of waste.

Deploying printable buildings first in less developed countries is reasonable because of the hurdle of building codes and other legal and regulatory issues in developed countries.

FURTHER READING
On Thursday, July 16th (11:30-12:30), Brian Wang of Next Big Future spoke at the Singularity University on the "Latest Developments in Nanotechnology".

The presentation is uploaded to slideshare.

August 27, 2009

Russian Space Agency and Nasa Talk Joint Mars Mission in Order to Get Bigger Budgets

If Space Programs Were Richly Funded for Decades

IEEE Spectrum reports that Russia unveiled an ambitious three-decade plan for a manned space program this week at the International Aviation and Space Salon, MAKS-2009.

The Russian Federal Space Agency’s hope is that its plan will become the basis for a broad international effort to send humans to Mars and build a permanent base on the surface of the moon.

In contrast to NASA efforts, which would use the moon as a stepping-stone on the way to Mars, the latest Russian space doctrine aims for Mars first. To reach a Mars landing, RKK Energia, Russia’s premier developer of manned spacecraft, displayed a multitude of planned space vehicles, including a transport ship, a nuclear-powered space tug, and a planetary lander system. Together they would make up what the agency is calling the Interplanetary Expeditionary Complex.

Officials at the Russian space agency, Roskosmos, made no secret that these grand ambitions were not achievable within the current budget and capabilities of the Russian space program alone. Instead, they hoped to jump-start the idea of broader international cooperation, which could spread the cost of the manned space program.


NASA’s top official in Russia proposed Tuesday that the U.S. and Russian space agencies join forces to send a manned mission to Mars, RIA-Novosti reported.

White House Says Stay Within Current Budget

The White House told the panel to aim to stay within current budget estimates.

"If you want to do something, you have to have the money to do it," said panel member and former astronaut Sally Ride. "This budget is very, very, very hard to fit and still have an exploration program."




The options that face the White House come down to variations and combinations of these themes: Pay more, do less or radically change American space policy. The most radical idea would be to hand much of NASA's duties to private companies.

SpaceX Falcon 9 First Launch Planned by the End of 2009

SpaceX continues to plan to debut the new Falcon 9 rocket by the end of this year, but company engineers are still qualifying some parts of the vehicle for the rigors of launch.

"We're not down to an exact date, but we are targeting the end of the year. And so far, so good," said Tim Buzza, SpaceX's vice president of launch operations.

Obama's Space Commission members appear unanimous in advocating commercial transportation contracts, and in forsaking a return to the moon in favor of more ambitious, longer-term projects to explore the solar system.

Shifting to commercial-style NASA transportation contracts most likely would translate into many thousands of industry and NASA job losses. NASA came to its present dilemma partly because of an eroding budget picture, leaving it some $50 billion short of the projected cost of keeping the space station flying while also pursuing plans to return astronauts to the moon around 2020.




Nvidia CEO Predicts 570 Times More Powerful GPU in Six Years

TG Daily reports that Nvidia's CEO predicted that GPUs (graphics processing units) will increase in power by 570 times over six years (up to 2016) from current levels. This would require roughly tripling GPU performance every year.

Previously, William J. Dally, chief scientist at Nvidia Corp, predicted that Nvidia GPUs in 2015 will be implemented on 11 nm process technology and feature roughly 5,000 cores and 20 teraflops of performance. Current Nvidia GPUs have 500 gigaflops of performance in single precision, so 20 teraflops would be 40 times faster, while 570 times faster in 2016 would be 285 teraflops. However, if Huang was referring to double precision, then the increase would be from the current 100 gigaflops up to 57 teraflops of double precision performance in 2016. This seems to make more sense and is more consistent with the 20-teraflops-in-2015 statement.
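A quick check of the growth rate and endpoints implied by those numbers, using only the figures quoted above:

```python
# Annual growth factor implied by "570x over six years".
growth = 570 ** (1 / 6)
print(f"required annual speedup: {growth:.2f}x")   # ~2.9x, i.e. roughly tripling

# Endpoints in 2016 under the two readings of "current levels".
single_precision_2016 = 0.5 * 570   # teraflops, from ~500 gigaflops today
double_precision_2016 = 0.1 * 570   # teraflops, from ~100 gigaflops today
print(f"single precision: {single_precision_2016:.0f} TF")   # 285 TF
print(f"double precision: {double_precision_2016:.0f} TF")   # 57 TF
```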

3 teraflop GPGPU chips should be available from Nvidia in late 2009 or early 2010.

The current Tesla GPUs are running at 1.3-1.4 GHz and deliver about 1 teraflop, single precision, and less than 100 gigaflops, double precision. Valich speculates that a 2 GHz clock could up that to 3 teraflops of single precision performance, and, because of other architectural changes, double precision performance would get an even larger boost.


This statement was made as part of the "GPU Computing Revolution" keynote speech by Jen-Hsun Huang at the Hot Chips 21 conference.

Huang - who made his comments at the Hot Chips symposium at Stanford University - explained that such advances could enable the development of realtime universal language translation devices and advanced forms of augmented reality. Huang also discussed a number of "real-world" GPU applications, including energy exploration, interactive ray tracing and CGI simulations.


Nvidia website for its Tesla GPGPU computing solutions is here

There is a GPU technology conference Sept 30-Oct 2, 2009 in San Jose



The transcript (from Seeking Alpha) of Nvidia's August 6, 2009 quarterly earnings conference call:

After three years of evangelizing, GPU computing has surely reached the tipping point. CUDA has been adopted in a wide range of applications. In consumer applications, nearly every major consumer video application has been or will be accelerated by CUDA. We estimate there are over 1200 research papers based on CUDA. We’ve highlighted 500 of them on CUDAZone.com.

CUDA now accelerates Amber, an important molecular dynamics simulation program used by more than 60,000 researchers in academia and pharmaceutical companies worldwide to accelerate new drug discovery. CUDA sped up Amber 50 times.

For the financial market, Numerix and CompatibL announced CUDA support for their new counterparty risk application and achieved an 18 times speed-up. Numerix is used by approximately 375 financial institutions.

There are broad-ranging uses for CUDA including astrophysics, computational biology and chemistry, fluid dynamics simulation, electromagnetic interference, CT [image reconstruction], seismic analysis, ray tracing and more.

Another indicator of CUDA adoption is the ramp of our new TESLA GPU for computing business. There are now more than 700 GPU clusters installed around the world with new Fortune 500 customers ranging from Schlumberger and Chevron in the energy sector to BNP Paribas in banking.

And starting this fall with the launch of Microsoft’s Windows 7 and Apple’s Snow Leopard, GPU computing will go mainstream. In these new operating systems, the GPU will not only be the graphics processor but also a general purpose parallel processor accessible to any application.

Recently Jon Peddie, a leading industry analyst, forecast the global graphics market to grow nearly 22% in 2010, based in part on the rise of the GPU as a co-processor. The report states that the continued expansion and development of heterogeneous computing and GPU compute will stimulate growth in 2010, enabled by Apple's and Microsoft's new operating systems; new programming capabilities using OpenCL, DirectCompute, and NVIDIA's CUDA architecture will remove barriers to the exploitation of the GPU as a serious, economical and powerful co-processor in all levels of PCs.

TESLA is available as a module, a deskside personal supercomputer, or a server for high performance computing clusters. TESLA achieved its first significant quarter of revenue with approximately $10 million in sales. Virtually every major OEM, including [Cray], Dell, HP, IBM, Lenovo, SGI (Silicon Graphics), Sun, and Super Micro, now offers TESLA based solutions.

We have over 50 HPC-specialized VARs currently selling TESLA today. We estimate there are approximately 1,000 VARs actively involved in the HPC market which we have yet to engage.

We estimate TESLA to address a $5 billion market opportunity for us over the next three years.

We also know that high resolution displays and projectors are becoming more affordable than ever. Sony 4K digital projectors are very affordable, and people need scalable resolution, scalable visualization solutions to be able to address that, and so we created a new product called Quadro SVS. The Quadro SVS virtualizes both the application as well as the display, so you could literally run one application across up to four GPUs, completely virtualized and invisibly, and then you can take the output of that and literally drive it up to 32 million pixels without the application ever knowing anything about it. And so this virtualization technology, both at the GPU level as well as the display level, is a groundbreaking idea and it's something that we are really excited about.

TESLA servers consume nearly 20 times less power than a conventional CPU server and the reason for that is because of the amount of performance that you get out of it.



Nextbigfuture had an interview with Nvidia's Sumit Gupta.

Forbes had an interview in June 2009 with Huang



August 26, 2009

Multi-Petaflop Supercomputers From Now to 2011

Fujitsu 10 Petaflops by early 2011

Fujitsu is building the supercomputer for Japan's Institute of Physical and Chemical Research, known as RIKEN, said Takumi Maruyama, head of Fujitsu's processor development department, on the sidelines of the Hot Chips conference at Stanford University on Tuesday.

The system will be based on Fujitsu's upcoming Sparc64 VIIIfx processor, which has eight processor cores and will be an update to the four-core Sparc64 VII chip that Fujitsu released two years ago, Maruyama said.

A prior nextbigfuture article on the Fujitsu supercomputer

Blue Waters Up to 10 petaflops in 2011
Blue Waters is the name of a petascale supercomputer being designed and built as a joint effort between the National Center for Supercomputing Applications, the University of Illinois at Urbana-Champaign, and IBM. Expected to be completed in 2011, Blue Waters is expected to run science and engineering codes at sustained speeds of at least one petaflops, or one quadrillion floating point operations per second. This is nearly four times faster than IBM's Blue Gene L. One source has stated that Blue Waters may hit a peak system speed of 10 petaflops.

The University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications (NCSA) announced today that they have finalized their contract with IBM to build the world's first sustained petascale computational system dedicated to open scientific research. This leadership-class project, called Blue Waters, is supported by a $208 million grant from the National Science Foundation and will come online in 2011.

Upgraded Jaguar

New six-core "Istanbul" processors from AMD are being installed now and will rev up Jaguar's peak processing capability to "well over 2 petaflops" by the end of 2009. That's more than 2,000 trillion mathematical calculations per second.

The switch from quad-core to six-core processors will result in about a 70 percent performance gain and will also enhance memory performance. So Jaguar's 1.6 petaflops of peak processing should go to about 2.7 petaflops peak.

AMD believes that a proposed 12-core processor, code-named Magny-Cours, will give it a major advantage in 2010. That may be why Advanced Micro Devices Inc. said this week that it is releasing its six-core Opteron chip in June, well ahead of schedule, and plans to follow it early next year with the Magny-Cours chip, which will ship in eight- and 12-core models. After that, it plans a 16-core chip in 2011. If there were a 2010 chip refresh on the Jaguar supercomputer then it could go to about 5 petaflops of peak performance, and a 2011 chip refresh could provide about 7 petaflops.
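A rough sketch of where the 2.7, 5 and 7 petaflop figures above come from, assuming peak performance scales roughly with core count per socket. That is my simplifying assumption; clock speed, memory and interconnect changes are ignored.

```python
# Peak-performance projections for Jaguar under a cores-scale-linearly assumption.
peak_pf = 1.6                      # current quad-core peak, petaflops
peak_pf *= 1.7                     # ~70% gain from the six-core Istanbul upgrade
print(f"2009 (6-core Istanbul): ~{peak_pf:.1f} PF")   # ~2.7 PF

peak_pf *= 12 / 6                  # hypothetical 2010 refresh to 12-core Magny-Cours
print(f"2010 (12-core refresh): ~{peak_pf:.1f} PF")   # ~5.4 PF

peak_pf *= 16 / 12                 # hypothetical 2011 refresh to a 16-core part
print(f"2011 (16-core refresh): ~{peak_pf:.1f} PF")   # ~7.3 PF
```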



Sequoia: IBM and DOE Making 20 Petaflops for 2011

IBM has promised the DOE National Nuclear Security Administration a 20 petaflop supercomputer that is scheduled for delivery in 2011. The supercomputer will be called Sequoia. The computer should be ten times more energy efficient per calculation than current supercomputers.

The Sequoia effort includes two generations of IBM Blue Gene supercomputers that will deliver the next generation of advanced systems to weapon simulation codes being developed under the ASC program. ASC is a cornerstone of the National Nuclear Security Administration's (NNSA) program to ensure the safety, security and reliability of the nation's nuclear deterrent without underground testing -- Stockpile Stewardship. These two Blue Gene systems are "Dawn," a 500-teraflop system that was accepted by LLNL in March of 2009, and "Sequoia," a 20-petaflop system based on future Blue Gene technology, slated for delivery in 2011. Lawrence Livermore Selects TotalView Debugger for the 20 Petaflop System.

Among the features that TotalView Technologies will incorporate for the Dawn and Sequoia systems are user-programmable data display, fast conditional breakpoints and watchpoints, compiled expressions, asynchronous thread control, and full post-mortem debugging.

At 20 petaflops, Sequoia will be 34 times as powerful as LLNL's current Blue Gene/L, giving scientists a lot more computing cycles for weapons simulations and basic science research. "Sequoia represents a major challenge to code developers as the multi-core era demands that we effectively absorb more cores and threads per MPI task," said Mark Seager, Asst. Dept. Head for Advanced Computing Technology at LLNL. "This programming challenge can only be overcome with world class code development tools. Through our long-term partnership with cutting-edge technology companies like TotalView Technologies we are confident we can deliver on our demanding debugger scalability and usability requirements."

TotalView is a comprehensive source code analysis and memory error detection tool that dramatically enhances developer productivity by simplifying the process of debugging parallel, data-intensive, multi-process, multi-threaded or network-distributed applications. Built to handle the complexities of the world's most demanding applications, TotalView offers a number of advanced features that help speed development and eliminate bugs quickly, and is capable of scaling to thousands of processes or threads with applications distributed over multiple machines or processors.


Cloud Computing and Distributed Supercomputing

Folding@home, the largest distributed computing effort, has 8.3 petaflops of computing power in active utilization.

This effort represents one of the larger current "World Computers": about 400,000 machines are active for Folding@home; 30,000 GPGPUs provide 68% of the processing power and 36,600 PlayStation 3s provide about 25%, with conventional CPUs making up the rest. If participation continues to grow, and if by 2011 almost all of about 1 million active participating computers had 1 teraflops of performance, then the combined power would be 1 exaflop.
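The per-device arithmetic behind those shares, and the exaflop projection, as a quick sanity check. It uses only the figures quoted above and treats the 68% and 25% shares as fractions of the 8.3 petaflop total.

```python
total_pf = 8.3                      # current Folding@home throughput, petaflops

gpu_share, gpu_count = 0.68, 30_000
ps3_share, ps3_count = 0.25, 36_600

gpu_each = total_pf * 1e6 * gpu_share / gpu_count   # gigaflops per GPU
ps3_each = total_pf * 1e6 * ps3_share / ps3_count   # gigaflops per PS3
print(f"~{gpu_each:.0f} GF per GPU, ~{ps3_each:.0f} GF per PS3")  # ~188 GF, ~57 GF

# 2011 projection: ~1 million active machines at ~1 teraflop (1e12 flops) each.
projected_exaflops = 1_000_000 * 1e12 / 1e18
print(f"projected: {projected_exaflops:.0f} exaflop")   # 1 exaflop
```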


FURTHER READING
Sun Microsystems claimed a new high-water mark for server CPUs, unveiling Rainbow Falls, a 16-core, 128-thread processor, at the Hot Chips conference Tuesday (August 25). But analysts gave the IBM Power7 kudos as the more compelling achievement in the latest round of high-end server processors. Power7 packs as many as 32 cores supporting 128 threads on a four-chip module, with links to handle up to 32 sockets in a system. "It is scaling well beyond anything we've ever really seen before," said Peter Glaskowsky, a technology analyst for Envisioneering Group.


Fixing Mitochondria to Improve In Vitro Fertilization (IVF) and Fix Some Diseases, and Cheap IVF in Africa

1. The BBC reports that faulty mitochondria were successfully replaced in the eggs of monkeys and the eggs were fertilized and developed into healthy monkeys.

Faulty mitochondria affect about 15,000 babies every year and prevent some tens of thousands of women from having children. The US work, featured in the journal Nature, raises hopes of a treatment enabling women with defective eggs to have a child without using donor eggs.

The treatment would involve so-called "germ line" genetic changes which would be passed down through generations.

The genetic fault is contained in structures in the egg called the mitochondria, which are involved in maintaining the egg's internal processes.

If an egg with faulty mitochondria is fertilised the resulting child could have any of hundreds of different diseases including anaemia, dementia, hypertension and a range of neurological disorders.

US researchers have previously tried and failed to correct this defect by adding healthy donated mitochondria into the eggs of patients wishing to have children.

But these attempts resulted in birth defects - probably because mitochondria are so delicate that they are damaged when they are transplanted from one egg to another.

As a result, the treatment was banned by the US until it could be demonstrated that it was safe in animal experiments.

A group at the Oregon Health and Science University has now done just that.

They transferred the DNA needed to make a baby out of monkey eggs, leaving behind the potentially diseased genes in the mitochondria.

This was then transplanted into eggs emptied of DNA but containing healthy mitochondria.

The technique resulted in three healthy births with no sign of any birth defects.


2. New Scientist magazine reports that by the end of October, 2009, a clinic at the University of Khartoum plans to offer in vitro fertilisation to couples for less than $300, a fraction of its cost in the west.

Note: Lower-cost but safe methods could be adopted to lower the general cost of healthcare in the west. This would be an important factor in the overall public healthcare funding debate.

If successful, such efforts could lower the cost of IVF everywhere. In the US, the price of one round of treatment can be up to $12,000 and is rarely covered by health insurance. In the UK, it costs about £5000 ($8000), which the National Health Service may or may not pay for, depending on where a couple lives.




Some 10 to 30 per cent of African couples are infertile, often as a result of untreated sexually transmitted diseases, botched abortions and post-delivery pelvic infections.

To stimulate egg production, many clinics in the west prescribe genetically engineered or "recombinant" forms of follicle-stimulating hormone (FSH) because it can cause women to release a dozen or more eggs per cycle. That means some embryos can be frozen in case the first round of IVF doesn't work. Such drugs have the disadvantage of being enormously expensive, sometimes costing thousands of dollars per round of treatment.

In contrast, clomiphene is a generic drug which prompts the pituitary gland to pump out more FSH and costs just $11 for one round of treatment. It was used very successfully in the early years of IVF, inducing maturation of up to four viable eggs per cycle. That's far fewer than with injecting FSH directly, but since low-cost IVF facilities are unlikely to have the equipment or liquid nitrogen for freezing extra embryos, fewer eggs are needed anyway.

Using clomiphene, the ESHRE group plans to transfer no more than two embryos to the woman's uterus, while the LCIF initiative plans to transfer only one.

Combined with not freezing extra eggs, this reduces the chance of a successful pregnancy, but as clomiphene has fewer side effects than recombinant FSH, women may be more likely to try further rounds of IVF if earlier attempts fail. The ESHRE group estimates this will achieve a pregnancy rate of 15 to 20 per cent, lower than the European rate of 25 per cent and the US figure of 35 per cent.

Another big cost-saving has come in the use of incubators. Western doctors select the best embryos by allowing them to incubate for up to six days; those that fail to divide, or which show cellular defects, are then weeded out and the best transferred. But certain defects - multiple cell nuclei, for example - can be seen as early as the second day, and some embryos which fail in the artificial environment of a culture dish will develop normally in utero, according to Van Blerkom. On this basis, the ESHRE group plans to transfer the embryo on the first or second day after fertilisation.

Incubators themselves can also be made cheaper. Australian company Cryologic sells portable table-top incubators for less than $1000. These lack the fancy electronics and ability to change temperature of standard incubators, but this is unnecessary for IVF. Van Blerkom has used one to successfully incubate embryos and found that the batteries can be recharged with solar panels, also useful in countries where electricity outages are common. Meanwhile, the LCIF is counting on warm water baths to incubate embryos.

One company argues that incubators can be avoided completely, since a natural one - the woman herself - is already walking around. INVO Bioscience of Beverly, Massachusetts, recently launched the INVOcell, a small plastic capsule into which fertilised eggs are placed together with culture media. The capsule, encased in a protective shell, is then inserted into a woman's vagina for three days, which keeps the embryos at the desired temperature. After removal, doctors select the two best embryos and transfer them to the woman's uterus.

Company spokeswoman Katie Karloff claims that using the device - which costs between $85 in Africa and $185 in Europe - can cut the cost of IVF by half. It is also uniquely suited for places that frequently lose electrical power. Karloff reports that the INVOcell has now been used 85 times around the world, with 20 resulting pregnancies.

Cut-price $900 microscopes for confirming cell division can be easily adapted for minimal-cost clinics, says Van Blerkom, as can portable digital ultrasound machines that sell for less than $5000 - far below the typical $400,000 price tag for machines in western IVF clinics.




USA: Over Two Thousand Dams Near Population Centers Need Repair


More than 2,000 dams near population centers are in need of repair, according to statistics released this month by the Association of State Dam Safety Officials. [High hazard potential repairs are needed.]

The National Inventory of Dams (NID), which is maintained by the U.S. Army Corps of Engineers (USACE), shows that the number of dams in the U.S. has increased to more than 85,000, but the federal government owns or regulates only 11% of those dams.

Responsibility for ensuring the safety of the rest of the nation’s dams falls to state dam safety programs. Many state dam safety programs do not have sufficient resources, funding, or staff to conduct dam safety inspections, to take appropriate enforcement actions, or to ensure proper construction by reviewing plans and performing construction inspections. For example, Texas has only 7 engineers and an annual budget of $435,000 to regulate more than 7,400 dams. Alabama does not have a dam safety program despite the fact that there are more than 2,000 dams in the state. And in some states many dams are specifically exempted from inspection by state law. In Missouri there are 740 high hazard potential dams that are exempted because they are less than 35 feet in height.

In 2009, the Association of State Dam Safety Officials (ASDSO) estimated that the total cost to repair the nation’s dams totaled $50 billion and the needed investment to repair high hazard potential dams totaled $16 billion. These estimates have increased significantly since ASDSO’s 2003 report, when the needed investment for all dams was $36 billion and the needed investment for high hazard potential dams was $10.1 billion.

The 2009 report noted that an additional investment of $12 billion over 10 years will be needed to eliminate the existing backlog of 4,095 deficient dams. That means the number of high hazard potential dams repaired must be increased by 270 dams per year above the number now being repaired, at an additional cost of $850 million a year. To address the additional 2,276 deficient—but not high hazard—dams, an additional $335 million per year is required, totaling $3.4 billion over the next 10 years.
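The report's annual figures roughly add back up to its headline backlog number, which is a useful sanity check on the quoted numbers:

```python
years = 10
high_hazard = 850e6 * years        # 270 extra high-hazard repairs per year
other_deficient = 335e6 * years    # the 2,276 deficient but not high-hazard dams

print(f"high hazard:      ${high_hazard / 1e9:.1f}B")        # ≈ $8.5 billion
print(f"other deficient:  ${other_deficient / 1e9:.1f}B")    # ≈ $3.4 billion
print(f"total backlog:    ${(high_hazard + other_deficient) / 1e9:.1f}B")  # ≈ $12 billion
```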


History of Dam failures in the USA



20 megabyte, 168 page 2009 US Infrastructure Report



The five key solutions [from the infrastructure report] are:
* increase federal leadership in infrastructure;
* promote sustainability and resilience;
* develop federal, regional, and state infrastructure plans;
* address life cycle costs and ongoing maintenance;
* increase and improve infrastructure investment from all stakeholders.


RELATED READING
Deaths per terawatt hour for all energy sources

Russian dam blows a transformer

The Buffalo Creek Dam Accident:

The 15- to 20-foot black wave of water gushed at an average of 7 feet per second and destroyed one town after another. A resident of Amherstdale commented that before the water reached her town, "There was such a cold stillness. There was no words, no dogs, no nothing. It felt like you could reach out and slice the stillness." -- quote from Everything in Its Path, by Kai T. Erikson


August 25, 2009

Scientist Aims to Genetically Manipulate Chicken Embryos to Create Dinosaur Traits possibly leading to a Chickenosaurus



Maclean's, a Canadian newsmagazine, has coverage on "the quest to build a dinosaur".

Larsson is experimenting with chicken embryos to create the creature Horner describes: a “chickenosaurus,” they call it. If he succeeds, Larsson will have made an animal with clawed hands, teeth, a long, dinosaurian tail and ancestral plumage, one that shares characteristics with “the dinosaur we know that’s closest to birds, little raptors like the velociraptor,” Horner says.

The chickenosaurus will be a conversation piece, he says, sparking a public debate about evolution by winding its tape backwards for all to see. “Let’s put it this way,” Horner says. “You can’t make a dinosaur out of a chicken, if evolution doesn’t work.”


Physorg reports that Hans Larsson, the Canada Research Chair in Macro Evolution at Montreal's McGill University, said he aims to develop dinosaur traits that disappeared millions of years ago in birds.

Larsson believes by flipping certain genetic levers during a chicken embryo's development, he can reproduce the dinosaur anatomy, he told AFP in an interview.


From Maclean's:


The Larsson lab website is here



The dinosaur called an Oviraptor is also called the "scary chicken". It was about 6 feet tall.



This is not like Jurassic Park, where prehistoric DNA is revived, but rather like a new genetic sculpture, using manipulation of DNA to create something that looks like something else. It is like taking a new car and making it look like a Model T. A replica car is created.

Hans Larsson was interviewed by CTV (Canadian Television):

"We should be able to regenerate or essentially make the genetic program mimic the way it was at say, 150 million years ago, and grow a longer tail, change its plumage to something a little bit more primitive, have three-clawed fingers, some teeth," he said.

The idea for the project came about over a discussion with internationally renowned American paleontologist Jack Horner. Among other things, Horner served as technical adviser for the Jurassic Park films.

The two were talking about how to illustrate evolution. They decided that altering the development of chicken embryos could be "a very public, visual way of doing that," Larsson said.

"The fundamental questions are animal development. We're trying to find out what genes are turning on and off, how cells are moving within the embryo."

The study will focus on chicken eggs because birds are direct descendants of dinosaurs, Larsson said.

The project doesn't face ethical hurdles because none of the embryos would be hatched yet, Larsson said. The research is funded by the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chairs program and National Geographic.


Supercavitating Submarines and Possible Future Supercavitation Vehicles



From 2007, Seapower digital has more information on the Underwater Express program. Underwater Express is a DARPA project looking to create a 100 knot (110 mph) submarine that is 8 feet in diameter and 60 feet long and would have no surface signature. It would replace a 40 knot speedboat for special forces. It would probably have a thermal power source (maybe a nuclear engine or a diesel engine) powering water jets. A speedboat also makes a lot of noise, so detection by sonar/sound would not be much different, but an underwater delivery would have nothing to spot on the surface and would be over twice as fast. There is the challenge of detecting obstacles and controlling the supercavitating craft when it is surrounded by an air pocket in which sonar and sensors are useless.

One way around the issue is that blue lasers can be used to communicate through water, so a satellite or plane could provide an updated feed of what obstacles are in the path or are around the superfast submarine.

Potentially, supercavitation could reduce drag by one thousand times (the difference in drag between water and air). Current supercavitation technology reduces drag by about 60-70%.

A German company supposedly tested a 500 mph (800 kph) version of a supercavitating torpedo [the Barracuda] in 2005



The supercavitating underwater missile is a technology demonstration program for close-in defence against underwater targets. It is equipped with a solid-propellant rocket motor, inertial measurement unit, autopilot and a conical tip which can be moved by means of an actuator system. The rocket motor provides the missile with a submerged speed of more than 400km/hr. The inertial measurement unit and the autopilot stabilize the missile so that the heading is held. The flexible nose cone provides steering just as a missile's fins do. Due to its high submerged speed, it moves in an air bubble, the so-called cavitation bubble, wherein almost vacuum prevails, thus greatly reducing its water resistance and enabling the high speed. To date, around a dozen test models of the underwater missile have been built and tested successfully. The tests focused on stabilization, guidance and maximization of agility, which is of great advantage for engaging rapidly moving underwater targets. The supercavitating underwater missile is suited for use from submarines and surface vessels.




There is a site with fictional speculation for a game and other media, called Empire the Film, which speculates on supersonic supercavitation vehicles.

This would involve improving the reduction in drag from 60-70% to many times better. They consider supersonic aircraft carriers and underwater versions of jet fighters. The site has a lot of cool looking graphics (copyrighted, so go to the link to see them). Empire the Film also suggests nuclear powered vessels that use nuclear power to generate hydrogen from seawater, building up fuel for the rockets that power the supercavitation propulsion.

Supercavitating Catamaran/Trimaran Pontoons

The pontoons of a catamaran or trimaran could have supercavitation installed to reduce drag by 60-70% (or more for more advanced versions) to enable higher speed surface cargo transportation. A 70% reduction in drag would mean you could go about 1.8 times faster with the same thrust (72 knots instead of 40 knots). A 90% reduction in drag would mean about 3 times faster with the same thrust (120 knots instead of 40 knots).
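A minimal sketch of where figures like 1.8x and 3x come from, assuming drag scales with the square of speed and the available thrust is held fixed, so equilibrium speed scales as 1/√(drag coefficient). That is my simplifying assumption to match the figures above; under a constant-power assumption (power = drag × speed) the gains would be smaller, roughly 1.5x and 2.2x.

```python
def speed_multiplier(drag_reduction, constant="thrust"):
    """Speed gain from cutting the drag coefficient, relative to the baseline hull.

    constant="thrust": drag ~ C*v^2 balanced against fixed thrust -> v ~ C**-0.5
    constant="power":  power ~ C*v^3 held fixed                   -> v ~ C**(-1/3)
    """
    c = 1 - drag_reduction
    return c ** -0.5 if constant == "thrust" else c ** (-1 / 3)

for reduction in (0.7, 0.9):
    v = 40 * speed_multiplier(reduction)      # 40 knot baseline from the article
    print(f"{reduction:.0%} drag cut: {speed_multiplier(reduction):.1f}x -> {v:.0f} knots")
```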



Applying Stealth to Supercavitation

In seawater that is free of air bubbles or suspended sediment, sound travels at about 1,560 m/s. In dry air at 20 °C (68 °F), the speed of sound is 343 meters per second (1,125 ft/s). This equates to 1,236 kilometers per hour (768 mph), or about one mile in five seconds.

A supercavitating vessel could make a lot of noise. Applying stealth would mean reducing the sound and directing the detectable sound away from sensors in a predictable way. Being able to alter the detectable signature, or to create or deploy unmanned decoys, could be useful. Also, by travelling at high speed (with the sound it gives off not travelling that much faster than the vessel itself), the supercavitating vessel could use a detectable burst of speed and then go into a silent, stealthy mode. It would be known that the supercavitating vessel was in a general area, but not exactly where it was.


Supercavitation ventilation patent

Some consideration of supercavitating hydrofoils

Supercavitation at wikipedia

Worst Case Swine Flu Scenario - 90,000 deaths in the USA in 2009


[Chart above: swine flu deaths and hospitalizations that have already occurred in the USA.] A normal US flu season kills 36,000 people each year.

Flu season in the USA is from September to about March the next year.

Up to half of the population of the U.S. could come down with the swine flu and 90,000 could die this season, according to a dire report from the President's Council of Advisors on Science and Technology.



The report, which claims as many as 1.8 million people could end up in the hospital seeking treatment for the H1N1 virus, comes as government officials push drug companies to make a vaccine available next month.

The report says that under a worst-case scenario, between 60 and 120 million Americans could get sick with the swine flu and another 30 million could contract the virus but not show symptoms. Between 30,000 and 90,000 could die -- more than twice the annual average of deaths associated with the seasonal flu. Those deaths generally occur in people older than 65.

The council recommended that manufacturers begin to package the vaccine so that it could be used by those who are at high risk in September. All five manufacturers have already been asked by the government to bottle the vaccine when it's ready.

But health officials announced a delay in the vaccine production last week. Originally, the government expected 120 million doses to be available on Oct. 15, but it now estimates there will only be 45 million, with 20 million more each week through December.




FURTHER READING

The CDC website for H1N1 flu

Genetically engineered gut bacteria trigger intestinal cells to make insulin in mice

MIT Technology Review reports that friendly gut microbes that have been engineered to make a specific protein can help regulate blood sugar in diabetic mice, according to preliminary research presented this week at the American Chemical Society conference in Washington, D.C.
While the research is still in the very early stages, the microbes, which could be grown in yogurt, might one day provide an alternative treatment for people with diabetes.

People with type 1 diabetes lack the ability to make insulin, a hormone that triggers muscle and liver cells to take up glucose and store it for energy. John March, a biochemical engineer at Cornell University, in Ithaca, NY, and his collaborators decided to re-create this essential circuit using the existing signaling system between the epithelial cells lining the intestine and the millions of healthy bacteria that normally reside in the gut. These epithelial cells absorb nutrients from food, protect tissue from harmful bacteria, and listen for molecular signals from helpful bacteria. "If they are already signaling to one another, why not signal something we want?" asks March.

The researchers created a strain of nonpathogenic E. coli bacteria that produce a protein called GLP-1. In healthy people, this protein triggers cells in the pancreas to make insulin. Last year, March and his collaborators showed that engineered bacterial cells secreting the protein could trigger human intestinal cells in a dish to produce insulin in response to glucose. (It's not yet clear why the protein has this effect.)

In the new research, researchers fed the bacteria to diabetic mice. "After 80 days, the mice [went] from being diabetic to having normal glucose blood levels," says March. Diabetic mice that were not fed the engineered bacteria still had high blood sugar levels. "The promise, in short, is that a diabetic could eat yogurt or drink a smoothie as glucose-responsive insulin therapy rather than relying on insulin injections," says Kristala Jones Prather, a biochemical engineer at MIT, who was not involved in the research.




Creating bacteria that produce the protein has a number of advantages over using the protein itself as the treatment. "The bacteria can secrete just the right amount of the protein in response to conditions in the host," says March. That could ultimately "minimize the need for self-monitoring and allow the patient's own cells (or the cells of the commensal E. coli) to provide the appropriate amount of insulin when needed," says Cynthia Collins, a bioengineer at Rensselaer Polytechnic Institute, in Troy, NY, who was not involved in the research.

In addition, producing the protein where it's needed overcomes some of the problems with protein-based drugs, which can be expensive to make and often degrade during digestion. "Purifying the protein and then getting past the gut is very expensive," says March. "Probiotics are cheap--less than a dollar per dose." In underprivileged settings, they could be cultured in yogurt and distributed around a village.

The researchers haven't yet studied the animals' guts, so they don't know exactly how or where the diabetic mice are producing insulin. It's also not yet clear if the treatment, which is presumably triggering intestinal cells to produce insulin, has any harmful effects, such as an overproduction of the hormone or perhaps an inhibition of the normal function of the epithelial cells. "The mice seem to have normal blood glucose levels at this point, and their weight is normal," says March. "If they stopped eating, we would be concerned."

March's microbes are one of a number of new strains being developed to treat disease, including bacteria designed to fight cavities, produce vitamins and treat lactose intolerance. March's group is also engineering a strain of E. coli designed to prevent cholera. Cholera prevention "needs to be something cheap and easy and readily passed from village to village, so why not use something that can be mixed in with food and grown for free?" says March.


Chris Phoenix Suggests Enhancing Structural DNA to Make Centimeter Scale Constructs

If DNA-tagged molecular shapes (whether made of DNA with Rothemund staples, or Schafmeister polymers, or whatever) were allowed to self-assemble to a DNA framework, and then zinc or light (or whatever reagent) were added, then the shapes could bond quite strongly into a single large strong precise molecule. The molecule could be highly crosslinked, and thus stronger and stiffer than most protein, and certainly stronger and stiffer than just plain DNA.

How big could the molecule be? Well, let's not forget IBM's recent announcement of plans to template entire computer chips with surface-attached DNA. That implies that the molecule - a single, engineered molecule - could be centimeter-scale!

A few years ago, Drexler (IIRC) described two ways of fastening molecules together. One is a protein structure called a zinc finger, wherein several amino acids (cysteine and/or histidine) bind to a single zinc ion. Another is a pair of paired carbon atoms, which rearrange bonds when hit by light to form a link across the pair. There are many other ways of fastening molecules, as well.





August 24, 2009

Genescient Will Have Nutrigenomic Products, Fully Lab Tested, by End of 2009

Gregory Benford will be presenting an update of Genescient's work at the Singularity Summit in New York on October 3rd and 4th, 2009.

Our laboratory animals live for 5 times the normal lifespan. They have health and vigor. We use their genetic properties to find what works similarly in humans.

Genescient applies 21st century genomic technology to identify, screen and develop benign therapeutic substances to defeat the diseases of aging. Genescient's singular approach addresses the complex genomic networks that underlie aging and aging-associated diseases such as cardiovascular disease, Type II diabetes and neurodegenerative diseases. We will have nutrigenomic products, fully lab tested, by end of 2009


Genescient has released a research paper detailing their study of two common stimulants (caffeine and the stimulant in chocolate) and two common sedatives (valproic acid and lithium).

Genescient has found that caffeine consistently impaired mating success in experiments. By contrast, at normal doses theobromine (the chief stimulant in chocolate) was benign. Worse still, caffeine impaired survival and female reproduction. Again, theobromine proved relatively benign for survival and reproduction.




Genescient corporate overview

Our focus is to extend healthy human lifespan by using advanced genomics to develop therapeutic substances that attack the diseases of aging. We are the first company founded to exploit artificial selection of animal models for longevity.

Our extremely long-lived animal models (Drosophila melanogaster) have been developed over 700 generations. They are an ideal system for the study of aging and age-related disease because Drosophila metabolic and genetic pathways are highly conserved in humans.

Our sophisticated analysis cross-links gene function in Drosophila with their human orthologs, thus revealing the targets for therapeutic substance development. To date we have discovered over 100 of these genomic targets, all related to the primary diseases of aging.

This large library of targets enables Genescient to effectively select and test therapeutic drug candidates. To date, Genescient's "proof-of-concept" testing program has yielded a number of very promising therapeutic substances.

Genescient’s screening platform also enables us to partner with biotechnology and pharmaceutical companies to test and rapidly move forward promising drug candidates. In an era where drug failure at a late clinical trial phase can cost a company hundreds of millions of dollars, Genescient’s unparalleled screening technology helps pharmaceutical companies to rapidly eliminate poor candidates.


Henry Markram (the Project Director of the Blue Brain Project) Interview

Here is the Henry Markram interview by Sander Olson. [Note: there are 14 questions and answers and a copyright notice by Dr. Markram at the end.] Dr. Markram is the Project Director of the Blue Brain Project, and he recently predicted that the Blue Brain Project could have a human-level brain simulation within a decade. Of note in this interview:

- The Blue Brain Project is not an AI project, but an attempt to unlock the mysteries of the brain. Dr. Markram is confident that Blue Brain models will eventually supplant AI.

-Knowledge of the brain is increasing exponentially. We are currently gathering as much information on the brain's structure and function each year as was gained in the entire 20th century. Neuroscientists are currently producing about 50,000 peer-reviewed articles per year. The Blue Brain project was launched in part to organize and coordinate this research.

-The Blue Brain project currently has the capability of electronically simulating 100 million neurons/100 billion synapse models, but is constrained by lack of funds to buy a sufficiently powerful computer.

-It currently requires 10-100 seconds of computer time to simulate one second of neuronal activity, but future computers should be able to simulate neurons in close to real time.

-A grid computing program to simulate and "build" individual neurons will soon be unveiled, and it will run on individual PCs as a screen saver.

- The blue brain project should result in extremely powerful "liquid computers" that can handle infinite parallelization.

Henry Markram Interview

Question 1: Tell us about the Blue Brain project.

HM: The Blue Brain Project aims to build a 21st century research facility capable of constructing and simulating computer models of the brain. Such a facility will be capable of constructing brain models across different levels of biological detail and for different animal species, ages, and diseases. The target is to be able to construct models of the human brain within about a decade. The facility will serve to aggregate, organize and unify all fragments of data and knowledge of the brain (past and future), allow virtual experimentation to test hypotheses of brain function and dysfunction, and create a novel platform for virtualized medicine. A prototype facility has been completed, which is today capable of building any neural microcircuit or module of the brain with cellular level precision. The facility's capability will be extended over the course of the next 10 years to building whole brain models at the subcellular, molecular and genetic levels. The facility will be accessible as an internet portal with different levels of access to provide neuroscientists with virtual laboratories, hospitals with advanced diagnostic and treatment planning facilities, clinicians with a facility to explore hypotheses of disease for specific patients, pharmaceutical companies with the means to carry out virtual drug development, technology companies with tools to design the next generation of technologies, and schools and institutions of higher education with virtual tours into the brains of different animals, so students can see and learn how the brain is designed and functions, how it evolved, develops and then ages, and what can go wrong in different diseases. The facility will also be open for public virtual visits to allow anyone to better understand their own brain and what kind of reality it is genetically and environmentally programmed to create, and to find out how they can shape the reality they create for themselves.



Question 2: You recently predicted that an artificial brain could be built within 10 years. What properties will this brain have? How closely will it resemble a human brain?


HM: We are building brain models of different mammalian species (mouse, rat, cat, monkey) first, before we reach the human brain. We do this because we need to learn how to use less and less invasive data and more and more indirect data to build human brain models. We build these models by sticking as close to biology as possible. This is done by abstracting the biological components and processes into mathematical equations and then into computer models. We do not aim at a specific function – they should all appear if we build it correctly. Computational neuroscience over the past 50 years has been a theory-driven science, while Blue Brain is biology driven. When we reach the human brain, human-like perceptions and motor actions should emerge automatically. The model brains will be able to learn to do what we humans can learn to do, perform complex decision-making, and manifest emotions, intelligence and personalities. We see these all as straightforward emergent properties of the brain. Self-awareness and consciousness may also emerge if this phenomenon depends on neuronal, synaptic and/or molecular interactions. Anything that depends on the physical elements that can be measured in the brain should emerge if we are successful in building it accurately enough. If we inject some theories of the brain while we build it and ignore the biology, then we are back to square one with computational neuroscience and we will almost certainly fail.

Question 3: Ray Kurzweil predicts that our understanding of the human brain is
increasing exponentially. Do you agree?


HM: Kurzweil is neither entirely right nor entirely wrong; it depends on how you look at it. It is certainly true that we are probably generating more data and knowledge about the brain's structure and function in one year today than we generated in the entire 20th century. The amount of data and knowledge about the brain that will be obtained in the 21st century is vast beyond imagination. Neuroscientists today are producing over 50,000 peer-reviewed articles per year, and that number is growing exponentially. Machines and robotic technology can sequence and map parts of the brain at many levels, producing data thousands of times faster than any lab of the past. So yes, there is no doubt that we are generating a massive amount of data and knowledge about the brain, but this raises a dilemma about what the individual understands. No neuroscientist can read more than about 200 articles per year, and no neuroscientist is even remotely capable of comprehending the current pool of data and knowledge. Neuroscientists will almost certainly drown in data in the 21st century. So the fraction of the known knowledge about the brain that each person holds is actually decreasing(!) and will decrease even further until neuroscientists are forced to become informaticians or robot operators. This is one of the reasons that we launched the Blue Brain Project. We need a global agenda to bring the data and knowledge together in a single working platform – a model – so that each scientist can test their tiny fragment of understanding against all the data and knowledge that everyone has accumulated together. One needs to see and feel all the data and knowledge in one place. I believe that unless we succeed in a project like the Blue Brain Project, we will never understand the brain. It is the unifying strategy, much the same way that models unified understanding in so many past revolutions in science.

Question 4: You recently completed Phase one of the Blue Brain project. What
did Phase one accomplish?


HM: The Blue Brain Project is not aiming to build just a single model; we are building an international facility that has the capability to build brain models. The facility roadmap is to gradually expand its capability and capacity to build whole brain models at ever greater levels of resolution. Over the past 4 years we built a prototype facility as a proof of concept and targeted cellular level resolution. We also wanted to try to solve some fundamental challenges that, if we could not solve them, would mean that it is technically not possible to build biologically realistic brain models. We solved these challenges and built the first prototype facility that can now build neural microcircuits at cellular level resolution. On our current supercomputer we can build and simulate up to 100,000 biologically detailed neurons interconnected with 10 million synapses. The facility actually has the capability of building much larger neural circuits even today (100 million neurons with 100 billion synapses), but we can’t afford to buy the big computer to simulate them. The facility is unique because it is designed in a way that the models “absorb” biological data and knowledge and continuously become as realistic as the available data and knowledge allow.

In building this facility, we already discovered some fascinating principles of how neural microcircuits are designed, how complex neural states emerge, which elements contribute to specific neural states, and we are close to testing a fundamental theory of how the brain generates a perceptual reality – the neural code.

Question 5: Is there a phase two?

HM: Of course. There are many big phases and many more tiny steps before we reach the human brain. We are expanding the capability of the facility to build models at the sub-cellular (structures inside cells), molecular and genetic levels, and we are expanding the capacity of the facility to build and simulate larger models until we reach whole brain models. Each phase needs more computing resources and a bigger team of engineers and scientists to deal with each new level of issues.

Question 6: It takes electronic circuits about 100 seconds to emulate 1 second of
neuronal activity. How long before electronic circuits can emulate their biological counterparts in real time?


HM: Well, I am not sure where you got these numbers. Firstly, there is a difference between emulation and simulation; a neuron on a silicon chip emulates the behavior of a neuron, while a software model of a neuron simulates its behavior. We are simulating the brain, not emulating it. It is very difficult and prohibitively expensive to implement even simplified equations of complex neurons on a silicon chip. The most that is possible today at the cellular level is around 50 or so neurons. These analog VLSI (aVLSI) chips can actually emulate neuronal behavior in real time and even much faster than real time. Because of this limit, what people mostly do is put very simple neurons (very simplified equations) onto silicon chips. In these cases it is possible to build networks of thousands of what we call “point neurons”. There is a project called FACETS that has built a chip with over 100,000 neurons in a network, which actually runs the calculations 10,000 times faster than real time. DARPA is also trying to build such chips that can run even larger neural networks with intelligent synapses. From the perspective of Blue Brain, these projects are peripherally interesting as engineering projects that will probably build some mildly clever devices, but they do not come even close to the sophistication and capabilities of a Blue Brain model. Software models have the advantage that you can make the models as complex as you need to, but they have the drawback that they need very advanced supercomputers to get close to real time. When we started Blue Brain, 1 second of biological time took over 1,000 seconds to simulate, but we improved the software, and it now takes around 10-100 seconds. So it is still in slow motion. The future computers we are planning to build should get the simulations close to real time.

Question 7: How many different types of neurons exist in the human brain? What, besides morphology, differentiates these neurons?

HM: There are around 400 brain regions and each brain region contains neurons with different types of morphologies. Some brain regions contain only 2 or 3 types, while others contain up to 50 types. The neocortex has 48 main types of morphologies. The average is around 5 types, so there are around 2,000 different morphological types of neurons in the brain. Telling them apart, however, is not an entirely solved problem in neuroscience. It is like trying to mathematically describe the differences between any two trees in a forest. Neurons can also differ in many other ways. A very important way they differ is in their electrical “personalities”. There are about 15 classes of electrical personalities that a neuron can take on. Even neurons with the same morphology can take on different electrical personalities. The way neurons build their electrical personalities is to select which ion channels (membrane proteins that pass electrical current into the cell) are expressed by the genes. So if one looks at the genes that are switched on in neurons, one sees that they switch on different genes to make different ion channels and, through combinatorics, create their different electrical personalities. Neurons can also differ in the way they switch on other genes to build other types of proteins, so even neurons that look and behave the same can process information differently. So there are actually many thousands of different types of neurons at the morphological-electrical-molecular levels. If you then also consider that each neuron is plugged into the brain in a different way, then one realizes that each of your 100 billion neurons is unique and no neuron in our brains is the same as any neuron in anyone else’s brain. So the Blue Brain Project is trying to understand this complexity, rather than build simple models that do clever tricks.
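A quick back-of-envelope check of the arithmetic in the answer above, using only the averages quoted there (real counts vary widely by region, and the final product is just an upper bound, since not every morphology takes on every electrical personality):

brain_regions = 400
avg_morphology_types_per_region = 5      # quoted average; actual regions range from 2-3 types up to ~50
electrical_personality_classes = 15

morphological_types = brain_regions * avg_morphology_types_per_region
print(morphological_types)                                    # ~2,000 morphological types

# Upper bound if every morphology could pair with every electrical class --
# "many thousands" of morpho-electrical types, before molecular differences
# and wiring make each individual neuron unique.
print(morphological_types * electrical_personality_classes)   # ~30,000 combinations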

Question 8: What is liquid computing?

HM: Let me first explain that a Turing machine is a machine that can solve any problem if the problem is given to it in discretely timed batches. So a Turing machine is a universal computer for what is called “batch processing”. But what a Turing machine can’t do is solve problems universally while information is continuously coming in and disturbing it from finishing the operation it just started on. In other words, it can’t (without workarounds and cheating) strictly speaking solve problems presented to it on an analog time scale and produce answers on an analog time scale. A liquid computer, however, is a computer that can solve any problem in real time and at any time (not in discrete time). You can even call it “anytime processing”. So it is a universal theory for analog computing. You see, a big problem the brain has to solve is how to keep thinking about something it just saw while the world around it never stops sending it new information. If you send your computer new information continuously, it will not be able to do anything because it can’t finish one task before it has to start on another. The way liquid computing works is very much like an actual physical liquid. It makes sense of the perturbations rather than seeing them as a nuisance. We also call it high entropy computing or computing on transient states. This is a very important (but not complete) theory of how the brain works because it shows us how to tap into the vast amount of information that lies in a “surprise”. Another big challenge in understanding the brain is that it is always physically changing. Your brain right now is already different from what it was just 1 hour ago, and extremely different from what it was when you were 10 years old. So, because your brain is constantly different and because every moment in your life is potentially (hopefully) also novel, there is a very good chance that most of the time the responses produced in your brain are new to you (to your neurons) - never “seen” before. So if the brain produces a response that it never “experienced” before, how does it know what it means? The state that your brain is in right now never happened before, so how can your brain make sense of states it never saw before and connect them to all your moments before? Liquid computing provides a partial explanation for this problem by showing that the same state never actually needs to reoccur in the brain for you to make sense of those states – that is why we also call it computing on transient states. Liquid computing can in principle solve any problem instantaneously and keep solving problems in real time with infinite parallelization. But it is very difficult to build a good liquid computer. One of the benefits of Blue Brain is that it will be able to design and build extremely powerful liquid computers.
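As a rough illustration of "computing on transient states", here is a small Python sketch in the echo-state/reservoir style, a simplified discrete-time relative of the liquid computing idea described above (the real work uses spiking neural microcircuits; the network size, scaling factor and toy task below are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(0)
n_res = 200

# A fixed random recurrent network (the "liquid"), scaled so that perturbations
# fade over time rather than exploding -- a fading-memory condition.
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

def transient_states(u):
    """Drive the liquid with a continuous input stream and collect its transient states."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in * u_t)   # each new input perturbs the ongoing state; nothing is ever reset
        states[t] = x
    return states

# Toy "anytime" task: from the current transient state alone, read out what the
# input was 5 steps ago -- recovering recent history from a state that never repeats.
u = rng.uniform(-1.0, 1.0, size=2000)
X = transient_states(u)
delay = 5
states, targets = X[delay:], u[:-delay]

# Only this linear readout is trained (ridge regression); the liquid itself is never trained.
W_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(n_res), states.T @ targets)
print("readout/target correlation:", np.corrcoef(states @ W_out, targets)[0, 1])

The point of the sketch is that the liquid's state never repeats and is never allowed to settle, yet a simple readout can still recover recent history from whatever transient state the input stream has left it in.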

Question 9: The Blue Brain project is constrained by the lack of available computing power. Have you considered initiating a distributed computing project, along the lines of the protein folding or SETI projects?

HM: Indeed, CERN is our neighbor and they invented GRID computing. But there are different limits for distributed computing. The brain is a perfect democracy and no neuron can make a decision without first listening to thousands of others, so interconnect is critical. GRID computing is not ideal for brain simulations. But what we are doing is building a Blue GRID to help us build and analyze neurons, because that work just needs many processors working independently. Building biologically realistic neurons is even more challenging than simulating them, but this does not need supercomputing - it needs distributed computing. We call this project the “Adopt a Neuron Project” and it will soon go live. Anyone will be able to adopt a neuron and have it work as a screen saver while helping us build and analyze the neurons.
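The "Adopt a Neuron" client is not described in detail above, but the reason the workload suits distributed rather than supercomputing can be sketched: fitting a neuron model means scoring many candidate parameter sets, and each candidate can be simulated and scored on its own, with no communication between workers. Below is a hypothetical Python illustration; the scoring function, target values and parameter ranges are invented stand-ins, not Blue Brain's.

from multiprocessing import Pool
import random

def score_candidate(params):
    """Stand-in for simulating a neuron with these ion-channel densities and
    scoring it against recorded data: here, just distance from a made-up target."""
    target = (120.0, 36.0, 0.3)                     # hypothetical target conductances
    error = sum((a - b) ** 2 for a, b in zip(params, target))
    return error, params

def random_candidate(rng):
    return (rng.uniform(0, 200), rng.uniform(0, 80), rng.uniform(0, 1))

if __name__ == "__main__":
    rng = random.Random(0)
    candidates = [random_candidate(rng) for _ in range(10_000)]
    with Pool() as pool:                            # workers never need to talk to each other
        best_error, best_params = min(pool.map(score_candidate, candidates))
    print(f"best candidate {best_params} with error {best_error:.3f}")

Simulating the assembled circuit is the opposite case: every neuron depends on thousands of others at every time step, which is why that part needs a tightly interconnected supercomputer rather than a GRID.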

Question 10: Your Blue Gene/L supercomputer is vital to the research. Do you have plans to augment or replace the Blue Gene/L supercomputer with a more powerful model?

HM: Already done. Blue Gene/L is history - we now have a Blue Gene/P supercomputer. This allows us to make the step to molecular level resolution for the models.

Question 11: It appears that the brain incorporates hybrid digital-analogue computing techniques. To what extent can these techniques be emulated by a purely digital computer?

HM: This is a complex question that can’t be answered properly here. The issue is actually multidimensional because digital vs analog computation pertains to the discretization of space, time, amplitude and the identities of the elements. Tracking the configuration changes of molecules is still reasonable, but tracking every atom takes too many digital time steps to do for long. So it is not yet too serious a barrier for molecular level simulations, but simulating every atom’s position and movement in the brain would require the super-quantum computers of the 22nd century. In short, the numerical precision of digital computers is good enough to capture the analog resolution of amplitudes, spaces and identities that is relevant to measurable biological processes. PS: simulation, not emulation.

Question 12: Have you collaborated with any members of the AI community? Is
your project affecting the AI field?

HM: No, Blue Brain adopts a philosophy that is pretty much 180 degrees opposite to the philosophy in AI. In my view, AI is an extreme form of engineering and applied math where you try to come up with a God formula to create magical human powers. If you want to go into AI, I think you have to realize you are making the assumption that your formula will have to capture 11 billion years of evolutionary intelligence. In most cases, AI researchers do not even know what a neuron is, let alone how the brain works, but then they don’t need to because they are searching for something else. I don’t blame them for trying because, if you want to build clever devices today, it is much easier to ignore the brain - it is just too complex to harvest the technology of the brain. Look at speech recognition today – the best systems out there don’t use neural principles. Having said that, we all know how inadequate the current devices are, and that is just because AI can’t even come close to what the brain can do. Blue Brain is not trying to build clever devices; it is a biological project that will systematically reveal the secret formulas operating in the brain, but Blue Brain models and simpler derivative models will gradually replace all of AI.

Question 13: Who is funding your organization? To what extent is the Blue
Brain project constrained by limited funding?


HM: The Blue Brain Project is a project of the Swiss Federal Institute of Technology in Lausanne (EPFL), so I get funding from the EPFL (which means from the Swiss government), my research grants (European Union, foundations, etc.), some other entities and just one special visionary donor. Sure, we are limited by funding. It is a multi-billion dollar project (about the cost of one F-18 fighter jet). I have a roadmap to finish within 10 years, but the uncertainty is the funding. Naturally, I think this is the most important project the human race can ever undertake because it will explain how we create our individual realities and even explain reality itself. And then there are over 2 billion people on earth trapped in a distorted reality because of brain diseases. So how long should we wait?

Question 14: What do you estimate is the likelihood that an artificial brain based
on your research will become sentient?


HM: Well, this is also a loaded question because there are many preconceptions out there. Wikipedia says that “sentience is the ability to feel or perceive subjectively”. Not a bad definition at first sight, but actually nature (organic and inorganic) computes, and any computation can be argued to be subjective. Even a tree can be seen as making a subjective “decision” about what it is responding to. Feeling and perceptions require decisions, billions of tiny decisions that have been worked out over billions of years of evolution. All feeling and perception is therefore subjective. The Oxford Dictionary therefore drops this implied subjectivity and simply says that it is “the ability to feel or perceive things”. The “things” in their definition should give you a hint that they are also lost. Webster gives it a component of awareness – it must be aware it is feeling or perceiving (even slightly!). How different philosophies view “sentience” is discussed nicely on Wikipedia. Moralists tend to argue that sentience applies to all those that feel pain and pleasure (which would of course exclude a Buddha, since a Buddha has transcended pain and pleasure (and human morality)). Buddhists simply take sentient beings to be those that need our love and respect, and Western philosophers say you are sentient if you have the “ability to experience sensations” (“qualia”). I received an email once telling me that Blue Brain will become sentient if I give it two eyes.

What you should focus on is the ultimate philosophical question: Is a simulation of a particular reality identical to the particular reality it simulates? Your answer to this question should give you your own private answer to whether Blue Brain will become sentient according to whatever definition you want to use.

COPYRIGHTS FOR TEXT PROVIDED BY HENRY MARKRAM BELONG TO HENRY
MARKRAM. TEXT MAY ONLY BE DISTRIBUTED IN ITS EXACT FORM AND UNDER
QUOTATIONS. NO PARTS OF THE TEXT MAY BE EXTRACTED AND/OR CHANGED
WITHOUT THE WRITTEN PERMISSION OF HENRY MARKRAM.

Life-Extending Gene Therapy Progress and a Rundown of Life Extension Demonstrated in Mice that Could Apply to Humans

A new study from the University of Missouri may shed light on how to increase the level and quality of activity in the elderly. In the study, published in this week's edition of the journal PLoS ONE (Public Library of Science ONE), MU researchers found that gene therapy with a proven "longevity" gene energized mice during exercise and might be applicable to humans in the future.

Earlier work had shown that mice live longer when their genome is altered to carry a gene known as the mitochondria-targeted catalase gene, or MCAT. However, such germline genetic alterations would not be applicable to humans. MU researcher Duan and Dejia Li, a post-doctoral researcher working with Duan, took a different approach: they placed the MCAT gene inside a benign virus and injected the virus into the mice.

Once the mice were injected, Duan and Li tested them and found that they could run farther, faster and longer than mice of the same age and sex. Duan attributes this performance enhancement to MCAT and believes the gene is responsible for removing toxic substances, known as free radicals, from the mitochondria, the powerhouse of the cell. By using this specific gene therapy vector, the virus, to introduce the longevity gene, Duan and Li opened the possibility of human treatment.


This work is number 6 on a list assembled at Fighting Aging. Note: Fighting Aging has a lot more details and links for each item.

Twelve Life Extension Techniques Demonstrated in Mice

Fighting Aging has a rundown of life extension techniques demonstrated in mice.

Twelve of the most interesting methods I've seen in past years. Note: this omits a number of studies that show only small (less than 10%) increases in maximum mouse life span, and also leaves out some work in progress that looks likely to enhance life span.

1) Calorie Restriction, Intermittent Fasting, and Methionine Restriction

Imposition of calorie restriction in mice has been shown to extend life span by around 40% even when initiated comparatively late in life.

2) Growth Hormone Knockout, IGF-1 and Insulin Signalling Manipulation

A breed of dwarf mouse that entirely lacks growth hormone is the present winner of the Mprize for longevity, living 60-70% longer than the competition's standard laboratory mouse species. This is primarily interesting as a demonstration that insulin signalling and IGF-1 - intimately bound up with growth hormone - are very important to the operations of metabolism that determine life span. These dwarf mice are not very robust: whilst healthy and active, they wouldn't survive outside the laboratory or without good care due to their low body temperature.



3) Telomerase Plus p53

A Spanish group published a study in 2008 showing 50% life extension in mice by a suitable combination of enhanced telomerase and p53.

4) Inactivating the CLK-1 Gene

Reducing the activity of the mitochondria-associated gene clk-1 - lowering the amount of protein generated from its blueprint in other words - boosts mouse longevity by 30% or so.

5) SkQ, a Mitochondrially Targeted Ingested Antioxidant

A Russian researcher has demonstrated a form of antioxidant that can be targeted to the mitochondria even when ingested. Per the mitochondrial free radical theory of aging, anything that can reduce the damage mitochondria do to themselves via the free radicals they generate in the course of their operation should extend life span. Indeed, SkQ seems to boost mouse life span by about 30%.

6) Genetic Manipulation to Target Catalase to the Mitochondria

A couple of research groups have shown that, through either gene therapy or genetic engineering, levels of the naturally produced antioxidant catalase can be increased in the mitochondria. The treated mice lived 20 percent longer than normal mice.

7) Genetic deletion of pregnancy-associated plasma protein A (PAPP-A)

This is another method of reducing cancer incidence and also extending life span by 30% or so, but this time seemingly through manipulation of the insulin signalling system in a more subtle way than previous growth hormone knockout studies. The end results certainly look like a win-win situation: extended life span and less cancer with no downside.

8) Knockout of the adenylyl cyclase type 5 (AC5) gene

Mice lacking the gene for the AC5 protein, which strangely enough appears to be a crucial component of the opioid response in mammals in addition to its other roles, live 30% longer. This is suggested to be due to a more aggressive, effective repair and prevention response to oxidative damage.

9) Metformin used as a calorie restriction mimetic drug

The drug metformin has been demonstrated to act in some ways like calorie restriction in mouse biochemistry, producing a modest 10% gain in maximum life span.

10) FIRKO, or fat-specific insulin receptor knock-out mice

FIRKO mice have less visceral body fat than normal mice, even while eating at the same calorie levels. They live a little less than 20% longer, and this is taken as one line of evidence to show that possessing a lot of visceral fat is not good for longevity.

11) Removal of visceral fat by surgery

Continuing the fat theme, researchers demonstrated last year that you can extend the life span of mice by surgically removing excess visceral fat. It doesn't extend life as much as calorie restriction, but it is significant.

12) Overexpression of PEPCK-C, or phosphoenolpyruvate carboxykinase

In this case, researchers have no firm conclusion as to why and how this genetic manipulation works. As in a number of other cases, this investigation wasn't started as part of any aging or longevity study, and the longevity of these mice is a fortunate happenstance. Nonetheless, here we have a case of what appears to be a more than 50% life extension - though note that the formal life span study has not been published, so you might assume the reported gains refer to the outliers amongst these mice rather than the average.