
September 28, 2007

Craig Venter, billionaire geneticist, supports cognitive enhancement

Craig Venter indicates his support of the goal of cognitive enhancement, a core transhumanist objective, while discussing the movie Blade Runner

Blade Runner was a landmark film that was prescient in anticipating globalization, genetic engineering, and biometric security. It also influenced architecture, movies, and anime. It presented a plausible future environment. It projected ethnodemographic shifts (more Asians and Hispanics in the USA) and anticipated that large corporations would have technology superior to what is available to law enforcement.

Craig Venter says:
The movie [Blade Runner] has an underlying assumption that I just don't relate to: that people want a slave class. As I imagine the potential of engineering the human genome, I think, wouldn't it be nice if we could have 10 times the cognitive capabilities we do have? But people ask me whether I could engineer a stupid person to work as a servant. I've gotten letters from guys in prison asking me to engineer women they could keep in their cell. I don't see us, as a society, doing that


Ray Kurzweil's comment on Blade Runner:
"The scenario of humans hunting cyborgs doesn't wash because those entities won't be separate. Today, we treat Parkinson's with a pea-sized brain implant. Increase that device's capability by a billion and decrease its size by a hundred thousand, and you get some idea of what will be feasible in 25 years. It won't be, 'OK, cyborgs on the left, humans on the right.' The two will be all mixed up."

Google phone would be free with ads and probably out in the second half of 2008

Businessweek reports that wireless industry consultants and marketing executives with knowledge of Google's plans say it has been showing prototypes of a new phone to handset manufacturers and network operators for a couple of months. Industry sources don't expect one before the second half of 2008.

Combine Google's financial heft with its ultra-sophisticated ability to target ads to specific customers. "The day is coming when wireless users will experience nirvana scenarios--mobile ads tied to your individual behavior, what you are doing, and where you are," says Linda Barrabee, wireless analyst at researcher Yankee Group.

The more than 2 1/2 billion phones in use worldwide exceed the number of PCs and TVs combined. On Sept. 17, Google announced a Web program aimed at advertisers who have created sites for display on cell phones and other handheld devices. Like its online ad network, Google's AdSense for Mobile delivers ads relevant to the advertiser's mobile audience. Employing technologies that figure out where callers are and where they're headed boosts advertising prices by 50%.

If Google decides to spend the $4.6 billion that may be needed to win the spectrum auction, analysts speculate that it has several options: continue its broadband expansion, or perhaps buy a wireless carrier, such as beleaguered Sprint Nextel (S). Then it could launch the first ad-supported, and free, nationwide phone service. "Google is the first gambler sitting down with as big a bankroll as the carriers have," says John du Pre Gauntt, a wireless industry analyst for researcher eMarketer. "By playing in wireless, they have caused people to look at the industry in a different way."


Google is buying mobile social networking sites like Zingku and dodgeball.com

September 27, 2007

The struggle over high-risk, high-payoff research

Computerworld discusses the impact of Sputnik on the development of computer technology, the internet, and high-risk/high-payoff technology research.

The article is making the case that the United States science and technology research community has seen a return to a culture which is less likely to pursue high risk/high payoff technology research.

There is a struggle between those who want more high-risk, high-payoff scientific and technological research and development, and those who want only timid, incremental goals and who ridicule even the description of a high-payoff technological possibility.


DARPA people are trying to defend themselves from the charge that they are not interested in high-risk, high-payoff research and are leaving the United States open to another nation surprising the United States with an unchallenged success in a high-payoff research area.

"DARPA continues to be interested in high-risk, high-payoff research," says DARPA spokesperson Jan Walker.


Walker offers the following projects as examples of DARPA's current research efforts:

- Computing systems able to assimilate knowledge by being immersed in a situation
- Universal [language] translation
- Realistic agent-based societal simulation environments
- Networks that design themselves and collaborate with application services to jointly optimize performance
- Self-forming information infrastructures that automatically organize services and applications
- Routing protocols that allow computers to choose the best path for traffic, and new methods for route discovery for wide area networks
- Devices to interconnect an optically switched backbone with metropolitan-level IP networks
- Photonic communications in a microprocessor having a theoretical maximum performance of 10 TFLOPS (trillion floating-point operations per second)


The Wall Street Journal has journalists arguing against artificial intelligence projects with greater-than-human AGI goals.

There are those, like Dale Carrico, who argue against talking about "superlative technology": potentially high-payoff technology like molecular nanotechnology and greater-than-human artificial general intelligence.

There are many others who argue against projects with aggressive goals in energy, space and nanotechnology. Often these are the same people who lament the lack of adequate technological solutions for climate change, peak oil and other potential societal problems.

Many seem to indicate that there is a culture that encourages timid technological goals:
Farber sits on a computer science advisory board at the NSF, and he says he has been urging the agency to "take a much more aggressive role in high-risk research." He explains, "Right now, the mechanisms guarantee that low-risk research gets funded. It's always, 'How do you know you can do that when you haven't done it?' A program manager is going to tell you, 'Look, a year from now, I have to write a report that says what this contributed to the country. I can't take a chance that it's not going to contribute to the country.'"

A report by the President's Council of Advisors on Science and Technology, released Sept. 10, indicates that at least some in the White House agree. In "Leadership Under Challenge: Information Technology R&D in a Competitive World," John H. Marburger, science advisor to the president, said, "The report highlights in particular the need to ... rebalance the federal networking and IT research and development portfolio to emphasize more large-scale, long-term, multidisciplinary activities and visionary, high-payoff goals."

According to the Committee on Science, Engineering and Public Policy at the National Academy of Sciences, U.S. industry spent more on tort litigation than on research and development in 2001, the last year for which figures are available. And more than 95% of that R&D is engineering or development, not long-range research, Lazowska says.


The former head of ARPA, Charles M. Herzfeld, speaks on the old and the new situation

We created the whole artificial intelligence community and funded it. And we created the computer science world. When we started [IPTO], there were no computer science departments or computer science professionals in the world. None.

There certainly has been a change, and it's not for the better. But it may be inevitable. I'm not sure one could start the old ARPA nowadays. It would be illegal, perhaps. We now live under tight controls by many people who don't understand much about substance.

What was unique about IPTO was that it was very broad technically and philosophically, and nobody told you how to structure it. We structured it. It's very hard to do that today.


Interviewer Question: But why? Why couldn't a Licklider come in today and do big things?

Because the people that you have to persuade are too busy, don't know enough about the subject and are highly risk-averse. When President Eisenhower said, "You, Department X, will do Y," they'd salute and say, "Yes, sir." Now they say, "We'll get back to you." I blame Congress for a good part of it. And agency heads are all wishy-washy. What's missing is leadership that understands what it is doing.

If the system does not fund thinking about big problems, you think about small problems.


Thus the big ideas for big problems have gone mostly outside the system.

SENS, Strategies for Engineered Negligible Senescence (for radical life extension), raises private funds

The Singularity Institute and companies working on AGI are outside mainstream government and corporate funding.

The nanofactory collaboration is privately funded with some use of university resources controlled by the researchers.

There was a small UK government funded project for software control of matter

Robert Bussard's nuclear fusion project was funded by the Navy

Tri-alpha Energy's colliding beam fusion was privately funded for over 40 million dollars

The NASA Institute for Advanced Concepts program was cancelled

I think at least 20% of research funds (government and corporate) should be devoted to high-risk/high-payoff research. This is a model that Google is using with substantial success.

FURTHER READING
The problem of false negatives in the selection of technology development projects: not choosing to pursue a technology development project which in fact would have succeeded and should have been chosen for development.

New Gigapixel Cameras

Wide-angled gigapixel satellite surveillance: Researchers at Sony and the University of Alabama have come up with a wide-angle camera that can image a 10-kilometre-square area from an altitude of 7.5 kilometres with a resolution better than 50 centimetres per pixel.

The system means that existing image-capture chips with sub-gigapixel resolution could be used to form fast gigapixel images.
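The pixel arithmetic is straightforward. Below is a minimal Python sketch; reading the imaged area as 10 km by 10 km and assuming a 10-megapixel tile chip are illustrative assumptions, not figures from the article.

# Rough pixel-count arithmetic for the surveillance scenario described above.
# Assumptions (not from the article): the imaged area is 10 km x 10 km and
# each tile chip captures 10 megapixels.
area_side_m = 10_000                  # 10 km on a side
ground_resolution_m = 0.5             # "better than 50 centimetres per pixel"
pixels_per_side = area_side_m / ground_resolution_m
total_pixels = pixels_per_side ** 2
print(f"at least {total_pixels / 1e6:.0f} megapixels")                      # 400

chip_megapixels = 10                  # assumed sub-gigapixel tile sensor
chips_needed = total_pixels / (chip_megapixels * 1e6)
print(f"roughly {chips_needed:.0f} tile chips to cover the focal plane")    # ~40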


Each chip is receiving the light from one tube.

They are building an array of light-sensitive chips that each record small parts of a larger image, placed at the focal plane of a large multiple-lens system. The camera would have gigapixel resolution and be able to record images at a rate of 4 frames per second.

The team suggests that such a camera mounted on an aircraft could provide images of a large city by itself. This would even allow individual vehicles to be monitored without any danger of losing them as they move from one ground level CCTV system to another.

The camera could have military applications too, says the team. Mounted on the underside of an unmanned aerial vehicle, the gigapixel camera could provide almost real-time surveillance images of large areas for troops on the ground.


Researchers at Carnegie Mellon University, in collaboration with scientists at NASA’s Ames Research Center, have built a low-cost robotic device that enables any digital camera to produce breathtaking gigapixel (billions of pixels) panoramas, called GigaPans.

The Gigapan is part of the global connection project

It is also part of the Carnegie Mellon Robot 250 project

FURTHER READING
Gigapixel and terapixel pictures have been taken before. The new systems make it easier and cheaper for more people to do it.

There are single image sensors able to capture 111 million pixels

Common resolutions on commercial digital cameras

High resolution cameras available in 2006

September 26, 2007

Energy supplies by source in the USA 2002 to 2006

US energy used by source from 2002 to 2006
Here is the breakdown of energy used in the USA by source from 2002 to 2006.

Nnadir has a further breakdown of the non-fossil fuel portion


Non-fossil fuel source: share of non-fossil total / share of all US energy

Nuclear: 54.5% / 8.2%
Conventional hydro: 19.2% / 2.9%
Wood: 14.0% / 2.1%
Other biofuels, including ethanol: 5.0% / 0.7%
Garbage burning (waste): 2.7% / 0.4%
Geothermal: 2.3% / 0.3%
Wind: 1.7% / 0.3%
Solar: 0.4% / 0.07%
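The two columns in the table above are related by a single scale factor: the non-fossil share of total US energy, about 15% (implied by the nuclear row, 8.2 / 54.5). A small Python sketch to check the numbers (remaining differences are rounding):

# Consistency check on the two columns above. The second column is the first
# column scaled by the non-fossil share of total US energy (~15%).
non_fossil_share_of_total = 8.2 / 54.5
sources = {
    "Nuclear": 54.5,
    "Conventional hydro": 19.2,
    "Wood": 14.0,
    "Other biofuels, including ethanol": 5.0,
    "Garbage burning (waste)": 2.7,
    "Geothermal": 2.3,
    "Wind": 1.7,
    "Solar": 0.4,
}
for name, pct_of_non_fossil in sources.items():
    pct_of_total = pct_of_non_fossil * non_fossil_share_of_total
    print(f"{name}: {pct_of_total:.2f}% of all US energy")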


RELATED NEWS
The first official new US nuclear license application has been filed for a 2,700 MW twin reactor in Texas

Quantum computing photon qubit communication advances

Two major steps toward putting quantum computers into real practice — sending a photon signal on demand from a qubit onto wires and transmitting the signal to a second, distant qubit — have been brought about by a team of scientists at Yale.

They report that superconducting qubits, or artificial atoms, have been able to communicate information not only to their nearest neighbor, but also to a distant qubit on the chip. This research now moves [this type of] quantum computing from “having information” to “communicating information.”

The first breakthrough reported is the ability to produce on demand — and control — single, discrete microwave photons as the carriers of encoded quantum information.

They added a second qubit and used the photon to transfer a quantum state from one qubit to another. This was possible because the microwave photon could be guided on wires — similarly to the way fiber optics can guide visible light — and carried directly to the target qubit. “A novel feature of this experiment is that the photon used is only virtual,” said Majer and Chow, “winking into existence for only the briefest instant before disappearing.”

Together the new Yale research constitutes the first demonstration of a “quantum bus” for a solid-state electronic system. This approach can in principle be extended to multiple qubits, and to connecting the parts of a future, more complex quantum computer.


There are many competing methods for making quantum computers. This approach requires precise manipulation of single photons. Other approaches, like adiabatic quantum computers, could succeed with a lower degree of physical control and precision. This approach could ultimately have higher potential, especially if molecular nanotechnology helps enable the required levels of precision and control.

Coal compared to green measures

There is a website that compares various CO2 reduction efforts against the increased CO2 from new coal plants. There are 151 coal plants in various stages of construction in the USA.

Hat tip to Kirk at the thorium energy blog for finding this

Many efforts do not equal the CO2 of even one medium sized coal plant for one year.

All the corporate efforts and plans involving more efficient light bulbs and slightly more efficient cars, reducing all college campuses to zero emissions, and a cooperative effort by 11 Northeastern and Mid-Atlantic states to reduce their CO2 emissions to 1990 levels by 2014 would, combined, be offset by 18 new medium-sized coal plants.

Those plans should still go forward, but new coal needs to be stopped and old coal needs to be replaced. The best versions of the climate change bills would increase the cost of coal power and make utilities switch to nuclear power and renewables. With the right terms, where international substitution is not counted, the McCain/Lieberman climate stewardship bill has been projected by the EIA (Energy Information Administration) to reduce coal from supplying 50% of electrical power now to 11% by 2030.

If the societal costs of coal are added to its price via legislation, then the market will be forced to address this and replace coal. The replacement will mostly be nuclear energy, because it is the next most affordable source after coal when coal's damage is not priced in.

Forbes takes a look at the progress of the energy and climate change bills. Based on the progress and the differences in the bills that need to be reconciled, it appears that meaningful energy and climate change bills will not pass until 2009.

The [Bush presidential] administration opposes mandatory emissions targets, and there is probably not enough time to get a meaningful climate change bill to the president's desk by the end of the year.

A spokesman for House Speaker Nancy Pelosi, D-Calif., says "her priority right now is to get those [energy] bills reconciled and to get them to the president's desk."

That doesn't mean the issues aren't near the top of the legislative--or corporate--agenda. It stands to reason that industry would want to see legislation, if it is to come, passed while a Republican is in the White House. Already, the major industries that would be most affected have spent heavily in an effort to influence lawmakers. According to the Center for Responsive Politics, an organization that tracks political spending, electric utilities have ponied up $49.5 million in lobbying so far this year. The automotive and oil and gas industries are not far behind, spending $37 million and $36 million, respectively.


The two climate change bills with the most support are the Bingaman/Specter and Lieberman/Warner bills

The "Low Carbon Economy Act of 2007," which was introduced by Senators Jeff Bingaman (D-NM), Chairman of the Committee on Energy and Natural Resources, and Arlen Specter (R-PA) in July with endorsements from some major electric utilities and labor unions. The second is a 16-page outline of a cap-and-trade proposal by Senators Joe Lieberman (I-CT) and John Warner (R-VA) that was released in August. They have been collecting input on the outline and intend to introduce a bill for consideration in the fall. Chairman Boxer has said that the Lieberman/Warner (America’s Climate Security Act) bill, when ready, will be the primary vehicle in the EPW committee.


Comparing the Bingaman/Specter and Lieberman/Warner proposals:
Emission targets
The Bingaman/Specter bill and the Lieberman/Warner outline would both establish cap-and-trade programs to limit emissions beginning in 2012. Bingaman/Specter calls for GHG reductions to 2006 levels by 2020. The Lieberman/Warner outline is somewhat more stringent, calling for a 10% reduction below 2005 levels by 2020.

Regulation
The Bingaman/Specter bill would regulate carbon dioxide (CO2) emissions from petroleum and natural gas consumption on an "upstream" basis, by regulating petroleum refiners and natural gas processors. The bill would reach CO2 emissions from coal consumption on a "downstream" basis, i.e., by regulating large consumers of coal (primarily coal-fired electric power plants).

The proposal by Senators Lieberman and Warner also would regulate petroleum on an "upstream" basis. However, they otherwise would regulate large sources of CO2 emissions on a "downstream" basis. Their program would require practically all electric power plants to submit allowances. It also would regulate industrial and commercial facilities that emit 10,000 metric tons or more of CO2-equivalent GHG emissions per year.

Offset and international credit provisions
Both proposals would allow for the use of offset credits for emission reductions achieved by projects outside the scope of the emissions cap. Bingaman/Specter would streamline approval procedures for certain types of projects and authorize the President to promulgate rules allowing a regulated entity to use allowances or credits from foreign programs to cover up to 10% of its annual allowance requirements.

Senators Lieberman and Warner propose to limit the use of domestic offset credits to 15% of a regulated entity’s allowance requirements, but would have a slightly more generous limit for use of foreign credits than the Bingaman/Specter bill (15%).


FURTHER READING:
A collection of quotes and links to reactions by environmental groups on the Lieberman/Warner bill

The 17-page PDF outline of the Lieberman/Warner bill

Comparing all of the climate change bills


The first of the two pages of bill comparisons, which has the leading bills under consideration

Spintronics: Quantum spin hall effect could be future of computers

Stanford physics Professor Shoucheng Zhang says a new generation of semiconductors, designed around the phenomenon known as the Quantum Spin Hall Effect, could keep Moore's law in force for decades to come.

Using special semiconductor material made from layers of mercury telluride and cadmium telluride, the experimenters employed quantum tricks to align the spin of electrons like a parade of tops spinning together. Under these extraordinary conditions, the current flows only along the edges of the sheet of semiconductor.

The electrons' strange behavior constitutes a new state of matter, Zhang said, joining the three states familiar to high school science students—solids, liquids, gases—as well as more unworldly states such as superconductors, where electrons flow with no resistance. He describes the quest for new states of matter as the holy grail of condensed matter physics.

Similar effects have been demonstrated before, but only at extremely cold temperatures and under the effects of powerful magnetic fields—conditions that cannot exist inside the common computer. "What we managed to do is basically get rid of the magnetic field," Zhang said.

September 25, 2007

False negatives in the selection of technology projects

I developed this comment as part of a response to an article at Amor Mundi.

I am in favor of using smaller amounts of money to try a lot of different approaches to solving societal problems or reaching societal goals. I am applying an extension of some of the arguments used in support of Open Innovation.

One of the proponents of open innovation tracked the projects that Xerox corporation chose to stop funding. He found that the projects that went outside Xerox for funding and formed companies eventually, over 20 years, exceeded the entire value of Xerox.

This was a demonstration of the opportunity cost of being unable to accommodate alternative business models and of making errors in judgement about which projects will or will not work: the cost of false negatives.

For society, large opportunity costs can translate into missed opportunities to save lives. We can look with hindsight on trillion-dollar mistakes made by different countries in the past. Those choices led to greater impoverishment and increased losses of life. China's grounding of the treasure fleets, and destruction of the records of them, in 1433. The collective "not invented here" syndrome that prevented China from successfully copying Japan's Meiji restoration, which adopted the industrial revolution from western countries. The choices in the United States and other countries to cut back on nuclear power development from the 1970s until recently, instead of copying France's move to 78% nuclear power, have cost tens of thousands of lives every year from increased air pollution. I consider delays in pursuing space colonization, molecular nanotechnology and artificial general intelligence to be in the same category of historic mistake. I think these issues will be rectified, but in hindsight the delays will be viewed as having been costly to society.

Could relabelling MNT or AGI bring success in mainstream funding?

Certain kinds of Artificial Intelligence (AI) actually have large (multi-billion $) mainstream success. Mainstream AI runs a lot of the financial trading in the world. However, many possibly high potential AI goals do not fit within the mainstream structures.

Mainstream nanotechnology also has large funding success, although most of that is relabelled from the chemical industry.

Tying the two together
Sometimes superlative technology projects have to take root and thrive outside of the mainstream for a time.

We do not know when false-negative errors in technology development choices are being made. Therefore, we should strive to develop some efficient societal means to support potentially high-value false negatives. If it is a mundane potential false negative, then it would not be that bad a mistake, and presumably mainstream progress could substitute for it.

Alternative technology and systems should be developed and piloted in small trials and then expanded. The systems should be adjusted to encourage and enable more experimentation so that better ways can be found and proven. I feel that the Google model of 80% of resources on the mainstream and 20% on nearly unlimited experimentation is something approaching the correct ratio. A larger share should be spent on prizes based on actually achieving various stretch goals, instead of constantly paying for institutions and buildings based on political considerations and reputation, without any adjustment based on performance.

Proposals should not be penalized because the goals would target superlative performance.

Equality, democracy, technology, and the connection to bad individual and societal choices

I think that some levels of inequality are inevitable when freedom of choice is valued and exercised. If we want to let someone choose a lifestyle which ultimately makes them poorer then inequality would result when other people make choices that result in greater personal wealth.

If a person chooses never to learn how to become a moderately successful investor in any asset class, then they will end up poorer than those that do.

If a person chooses a career in a field which will not be as highly valued or even chooses no education then that is their choice.

China has had the economic burden of carrying its inefficient state sector. Other societies have the economic burden of carrying the segments of society who make what have proven to be economically bad life choices. Many of those choices could have been shown, at the time they were being made, to be almost certain to turn out badly: choosing alcohol, tobacco and drug use; choosing education in poorly compensated areas; choosing inadequate education (like stopping before the end of high school); choosing not to figure out aspects of how to become economically successful.

Societies also make what turn out to be bad technological funding choices. The NASA space program is often cited as technology funding. However, I would say it is predominantly driven as a set of projects for political support, with technology development or goals as a secondary or tertiary side effect and cover story.

There is a fraction of societal resources devoted to international and domestic charity, aid and support programs. A richer and more capable society will be able to continue those programs in an expanded way just by not lowering the fraction devoted to them. The China example shows that even though high growth increased inequality, more people were raised out of poverty by allowing high growth to occur. China also shows that it is in the interest of those who are well off to help those who were not able to benefit. The interests are the prevention of societal unrest and an increase in the numbers able to participate as consumers and as contributors to societal development.

More on the SENS3 conference

Matthew S. O’Connor (a.k.a. Dr. Okie) reports on the SENS3 conference on life extension at the Ouroboros website. Okie is currently a postdoctoral fellow in the Bioengineering Department at UC Berkeley, in the laboratory of Dr. Irina Conboy.

Biomedical remediation (essentially the brain-child of Aubrey de Grey) is moving along quickly.
Two teams have identified strains of bacteria capable of using 7-ketocholesterol (one precursor of the poorly defined lipofuscin) as an energy source. The next goal is to clone the genes. After that they want to purify the enzyme responsible, feed it to people, and see if it will break down our lipofuscin.


Okie's issue:
is that they are trying to solve a problem that hasn’t been proved to be a problem yet. Lipofuscin accumulation has long been associated with aging in many tissues, but never (as far as I am aware) proved to be responsible for any illness, ailment, or disease. Now, don’t get me wrong, Aubrey makes an excellent argument for this being a serious problem with no traditional biomedical solution in sight, but it’s still just theory.


In Okie's opinion:
wound healing and artificial repair was the most provocative and promising aspect of the research at SENS 3.

Cato Laurencin has an approach called “regenerative engineering.”

Okie's comment:
It was amazing, however, to see someone actually using a few in something practical! In my opinion, this is the reality of regenerative medicine: an innovative surgeon combining technology and knowledge of biology to partially repair injuries such that they will heal as well, or better than they started.


Dr. Laurencin showed results from his work on 3D absorbable poly L-lactide (PLLA) scaffolds that seem to promote recovery from surgery much more efficiently than traditional methods. This is a microsphere-based scaffold, which promotes efficient invasion and engraftment of osteoblasts to help repair bone. He is also investigating surfaces with nano-scale grooves, which are more conducive to mesenchymal stem cell proliferation.


There is scarless repair of brain tissue using nanofibers to accelerate the healing:
Rutledge Ellis-Behnke spoke on his work with SAPNS: Self Assembling Peptide Nanofiber Scaffold. Essentially, he squirts a solution containing these nanofibers into wound sites and reportedly achieves amazing results. He reports dramatic recovery from serious brain injury: both scarless repair of bulk brain tissue removal and reinnervation. In addition, he claims that the nanofibers can dramatically stop bleeding in wounds (he showed video of this). These results are so dramatic that they are almost unbelievable.


From the research paper [they are stopping bleeding in the brain in 15 seconds]:

This novel therapy stops bleeding without the use of pressure, cauterization, vasoconstriction, coagulation, or cross-linked adhesives. The self-assembling solution is nontoxic and nonimmunogenic, and the breakdown products are amino acids, which are tissue building blocks that can be used to repair the site of injury. Here we report the first use of nanotechnology to achieve complete hemostasis in less than 15 seconds, which could fundamentally change how much blood is needed during surgery of the future.


Two groups and three speakers addressed the issue of aged muscle, muscle regeneration, and muscle stem cells.

One group's work supports the idea that muscle stem cells remain intrinsically young, even while their tissue ages around them. Another group revitalized old muscle stem cells.

A startup company, Sangamo, has developed gene editing, which is different from gene therapy. It’s not introducing exogenous DNA into your cells; it’s editing your genomic DNA.

One drawback of Dr. Cui's GIFT method of cancer treatment is that it requires 10 donors for every recipient. I speculate that we will probably need to use telomere manipulation and culturing of cells to increase the volume of cells available for donation.

Quantum dots made brighter by a factor of 108 to 550

By placing quantum dots on a specially designed photonic crystal, researchers at the University of Illinois have demonstrated enhanced fluorescence intensity by a factor of up to 108. Potential applications include high-brightness light-emitting diodes, optical switches and personalized, high-sensitivity biosensors.

A quantum dot is a tiny piece of semiconductor material 2 to 10 nanometers in diameter (a nanometer is 1 billionth of a meter). When illuminated with invisible ultraviolet light, a quantum dot will fluoresce with visible light.

To enhance the fluorescence, Cunningham and colleagues at the U. of I. begin by creating plastic sheets of photonic crystal using a technique called replica molding. Then they fasten commercially available quantum dots to the surface of the plastic.

Quantum dots normally give off light in all directions. However, because the researchers’ quantum dots are sitting on a photonic crystal, the energy can be channeled in a preferred direction – toward a detector, for example.

While the researchers report an enhancement of fluorescence intensity by a factor of up to 108 compared with quantum dots on an unpatterned surface, more recent (unpublished) work has exceeded a factor of 550.

“The enhanced brightness makes it feasible to use photonic crystals and quantum dots in biosensing applications from detecting DNA and other biomolecules, to detecting cancer cells, spores and viruses,” Cunningham said. “More exotic applications, such as personalized medicine based on an individual’s genetic profile, may also be possible.”

Nanosensors help find how cancer establishes foothold in the body

Researchers at the Carnegie Institution have found a key biochemical cycle that suppresses the immune response, thereby allowing cancer cells to multiply unabated. The research shows how the biomolecules responsible for healthy T-cells, the body’s first defenders against hostile invaders, are quashed, permitting the invading cancer to spread. The same cycle could also be involved in autoimmune diseases such as multiple sclerosis.

This news combines with other recent news about how some people are highly immune to cancer and how that stronger immunity could be transferred to other people with a blood transfusion. Cancer is the second most common cause of death in the United States after heart and coronary disease. If cancer were cured, there would be about a 20% reduction in annual deaths.
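The 20% figure follows from cancer's share of total US deaths. A back-of-the-envelope Python sketch, using rough mid-2000s figures that are my own assumptions rather than numbers from the article:

# Back-of-the-envelope check on the "about 20% reduction in annual deaths" claim.
# The figures below are approximate mid-2000s US numbers, used for illustration only.
us_deaths_per_year = 2_400_000          # approximate total US deaths per year
us_cancer_deaths_per_year = 560_000     # approximate US cancer deaths per year
reduction = us_cancer_deaths_per_year / us_deaths_per_year
print(f"curing cancer would cut annual deaths by roughly {reduction:.0%}")   # ~23%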

Cancer uses a double-pronged attack on T-cells, which are the body's defense against disease. By knowing how cancer and other diseases defeat the immune system, researchers can create ways to disrupt the attack on T-cells, kill the cancer cells, reverse the effects or create immunity.

The scientists used special molecular “nanosensors” for the work. “We used a technique called fluorescence resonance energy transfer, or FRET, to monitor the levels of tryptophan, one of the essential amino acids human cells need for viability,” explained lead author Thijs Kaper. “Humans get tryptophan from foods such as grains, legumes, fruits, and meat. Tryptophan is essential for normal growth and development in children and nitrogen balance in adults. T-cells also depend on it for their immune response after invading cells have been recognized. If they don’t get enough tryptophan, the T-cells die and the invaders remain undetected.”

The scientists looked at the chemical transformations that tryptophan undergoes as it is processed in live human cancer cells. When tryptophan is broken down in the cancer cells, an enzyme (dubbed IDO) forms molecules called kynurenines. This reduces the concentration of tryptophan in the local tissues and starves T-cells for tryptophan. A key finding of the research was that a transporter protein (LAT1), present in certain types of cancer cells, exchanges tryptophan from the outside of the cell with kynurenine inside the cell, resulting in an excess of kynurenine in the body fluids, which is toxic to T-cells.

“It’s double trouble for T-cells,” remarked Wolf Frommer. “Not only do they starve from lack of tryptophan in their surroundings, but it is replaced by the toxic kynurenines, which wipes T-cells out.”

“Our FRET technology with the novel tryptophan nanosensor has an added bonus,” said Thijs. “It can be used to identify new drugs that could reduce the ability of cancer cells to uptake tryptophan or their ability to degrade it. We believe that this technology could be a huge boost to cancer treatment.”

September 24, 2007

AI in the 1950s and now, or did you make that nematode smarter than humans yet?

Lee Gomes of the Wall Street Journal asks:
But don't singularity people know that AI researchers have been trying to make such machines since the 1950s, without much success?


Lee is referring to the goal of making machines smarter than people.
The singularity he refers to is the technological singularity.

The Technological Singularity is the hypothesized creation, usually via AI or brain-computer interfaces, of smarter-than-human entities who rapidly accelerate technological progress beyond the capability of human beings to participate meaningfully in said progress.


In 1960, a new computer was the PDP-1, and throughout the fifties IBM had the IBM 700/7000 series.


An IBM 704

Today's computers are over a billion times more powerful than those machines.
The most powerful computers of today have about a petaflop of performance. That is one billion megaflops. The first megaflop computer did not exist until 1964 with the CDC 6600.
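The unit arithmetic behind that comparison, as a tiny Python sketch (the CDC 6600 is rounded to one megaflop for illustration):

# Unit arithmetic for the performance comparison above.
petaflop = 1e15        # floating-point operations per second
megaflop = 1e6
print(f"{petaflop / megaflop:.0e} megaflops in a petaflop")        # 1e+09: one billion

cdc_6600_flops = 1 * megaflop     # roughly the first megaflop-class machine (1964)
print(f"a petaflop machine is ~{petaflop / cdc_6600_flops:.0e} times faster than a CDC 6600")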

Today, Nvidia is bringing multiple teraflops of processing power to individuals using the Tesla GPGPU computers

So the early AI pioneers had the equivalent task of producing artificial intelligence using less brainpower than a nematode.

A nematode has the equivalent of about one MIPS of processing power.


Scale of equivalent brainpower, from "When will computer hardware match the human brain?" by Hans Moravec, written in 1997

Currently AI is widely used for programmed financial trading, which is a highly lucrative and influential activity. It also receives a lot of money for development, research and improvement.

We are now in the range of mouse brain level of hardware.

My predictions on artificial general intelligence (AGI) are that hardware does matter as an enabling capability.

Some have indicated that AGI could be achieved with less raw processing capability, but this only scales down so far. Clearly there are limits to how efficiently the hardware or the neurons can be made to perform. Early AI workers clearly had no chance to succeed in the goal of AGI. It is now getting progressively easier as the hardware becomes better. We still do not know when it will become easy enough. I think we will need far more processing power than the human processing equivalent. My reason is that Microsoft creates better versions of the Excel computer program by wasting a lot of computer resources. It is easier to program a certain capability with a wasteful use of resources than with maximum efficiency. Wasting resources makes the complexity of the programming simpler. Once a certain level of AGI capability is achieved, the system can start helping to make its own implementation more efficient.

UPDATE: The cleverness of the AI programmers matters
I forgot to mention that the efficiency of algorithms has also improved since the 1950s and 1970s. For certain problems, like factoring large numbers, there has been dramatic improvement in algorithms. In 2007 the quadratic sieve is the best public algorithm for factoring integers under 100 digits. In 1977 Pollard’s rho algorithm was among the best factoring algorithms.
The Blue Gene supercomputer running the old 1977 algorithm would take 12 years to factor a 90-digit number. An Apple IIc running the modern algorithm would take 3.3 years to factor the same number.
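For concreteness, here is a minimal Python sketch of Pollard's rho, the mid-1970s algorithm mentioned above. It is illustrative only; the quadratic sieve that superseded it for larger numbers is considerably more involved.

import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of a composite number n (Pollard's rho, 1975)."""
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        y = x
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # "tortoise" takes one step
            y = (y * y + c) % n          # "hare" takes two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means this attempt failed; retry
            return d

print(pollard_rho(10403))                # 10403 = 101 * 103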

Algorithmic efficiency in searching the space of 5 × 10^20 possible checkers positions allowed only 1 out of every 5 million possible positions to be checked in order to solve checkers.
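In absolute terms that works out as follows (a quick sketch):

# Number of checkers positions actually evaluated, given the figures above.
total_positions = 5e20
fraction_checked = 1 / 5_000_000          # "1 out of every 5 million"
print(f"about {total_positions * fraction_checked:.0e} positions searched")   # ~1e+14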

Quantum computers, and algorithms designed for them, could greatly increase the efficiency of searching large combinatorial spaces. If large-scale quantum computers become available, that could be another path to solving aspects of artificial intelligence. This goes to the concept that artificial intelligence does not have to solve the problem of intelligence in the same way as biological systems. If the problems are solved faster and in an improved fashion, then you can get superior results and performance.

Am I concerned that a 1995-vintage VCR could be programmed into an AI? No.

How about a supercomputer in ten years with a thousand petaflops, a million-qubit quantum computing coprocessor, and a billion-neuron coprocessor rendered in hardware? Hmm. Perhaps.

How about in twenty years, with a molecular nanotechnology (MNT) enabled machine with a trillion petaflops, a trillion-qubit quantum coprocessor and a thousand trillion neurons in one integrated device, itself integrated with nanosensors and mobile agents? I, for one, welcome our AGI (artificial general intelligence) overlords, etc. Partially kidding. I expect us to use these more powerful systems just as today we use mundane AI and the Google system to enhance our productivity. We will have tighter coupling and higher-bandwidth communication with multiple intelligent systems. I do not compete with Deep Blue at playing chess (or even with some PC-based chess programs), so in the future we will need to adjust to roles compatible with the new technology.



FURTHER READING:
Artificial intelligence in the 1990s

Artificial intelligence in the 2000s

Efficient Wireless power transmission

WiPower uses induction (magnetic coupling) to transfer power from the base station to the receiving devices. They are claiming 68% efficiency in the transfer of power and believe that they can achieve 80% efficiency. A regular power cord is 58% efficient.
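To make those efficiency figures concrete, a small Python sketch (the 10 W load is an arbitrary example, not a WiPower specification):

# Wall-socket power needed to deliver a 10 W load at each claimed efficiency.
load_watts = 10                           # arbitrary example load
for label, efficiency in [("WiPower, claimed today", 0.68),
                          ("WiPower, target", 0.80),
                          ("Regular power cord, as cited", 0.58)]:
    print(f"{label}: {load_watts / efficiency:.1f} W drawn from the wall")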