Author Archive: Linda Hernandez

Artificial Intelligence And Quantum Computing

Is quantum computing the next frontier for artificial intelligence? Many scientists believe our brains are themselves quantum computers of a sort, storing and transmitting information at remarkable speed through some form of quantum mechanics. In fact, the idea has led to all sorts of conspiracy-like theories about consciousness.

But here is an interesting article about how the brain works, how computers as they stand are limited, and how quantum mechanics may break the next barrier in AI and robotics technology (not to mention other industries as well):

But common sense is more than lots of knowledge. If that were the whole story, then all we would have to do is build bigger and faster computers, put hordes of graduate students to work typing in more facts and rules, and the machines would get smarter. But it doesn’t quite happen that way. Things go on deep in the mind that we don’t fully understand, that are different in kind.

It is almost mystical. People talk about intuition, insight, inspiration, gestalt–not because those words explain anything, but because they capture this sense of a spontaneous, holistic part of ourselves forever beyond understanding.

On the other hand, maybe that part of the mind is not magical so much as just . . . hidden. Maybe, for all our sense of being self-aware, we are almost oblivious to what really goes on in our brains.

Certainly that is what psychologists and neuroscientists seem to be telling us. Consider the capabilities involved in strolling down the sidewalk, for example. Leaving aside such things as balance and coordination, you still have to see where you are going, which means you somehow have to make sense out of the ever-changing swirl of motion and color and light and shadow.

To accomplish this, you have at your command roughly 100 million receptor cells–the rods and cones–in the retina of each eye. The retina also contains four other layers of nerve cells; all together the system probably makes the equivalent of 10 billion calculations a second before the image information even gets to the optic nerve. And once the visual data reaches the brain, the cerebral cortex has more than a dozen separate vision centers to process it. In fact, it has been estimated that vision in one form or another involves some 60 percent of the cortex.

Of course, you remain blissfully unaware of all this. You simply glance across the street and think, “Oh, there’s Sally.”

But now consider what that involved. Somehow, without your conscious mind being aware of it, you compared the visual image from across the street with all the remembered images of all the millions of people and trees and dogs and ashtrays that you have seen in your life. And once you found the right image, you matched the face with Sally’s name; with a whole set of shared experiences; with how you feel about her; with the outrageous thing your boss did that morning that you can’t wait to tell her about. And all this is simply there, pouring into your conscious mind before you’ve even lifted your arm to wave.

Try another example. You beat your brains out against a problem at work and then, “Aha!”–the solution flashes on in neon lights. Now, how did you do that?

Nobody else knows either. But a big part of it–in fact, a big part of human problem solving in general–seems to be that jolt of recognition, that ability to suddenly see things as a whole. Simon and his colleagues at Carnegie-Mellon have shown in a number of experiments that experts rarely use logic and reason to solve problems, say, in physics. Instead, they just seem to look at a problem and say, “Aha! That’s a conservation of energy problem,” or “Aha! That’s an ideal gas law problem.” Unlike novices, who painfully step their way through, experts seem to store the appropriate problem-solving sequences so that they are simply there when needed.

AI researchers have only just begun to explore this dynamic, holistic part of the mind. They’ve built programs that can learn, at least in a limited sense. They’ve built programs that can “recognize” things or that can reorganize their knowledge. They’ve even built programs that make analogies, which may be a key to the whole thing: After all, people learn by analogy and even come up with creative new ideas by analogy.

None of these programs can perform with anything like the ease at human command. In part, this is because it is almost impossible, by definition, to look into the unconscious part of the mind. So it is hard to know what the programs are supposed to be modeling and how they ought to be constructed. (To get a feel for the problem, try to catch yourself in the act of having an idea. Now figure out where it came from.)

But just as important is the fact that the current generation of computer hardware is simply not up to the task. While signals move much faster through silicon than through nerve cells, almost all modern computers still chug through problems one step at a time. The brain beats them out at things like instantaneous recognition and “Aha!” problem-solving because it has millions or billions of neurons working simultaneously.
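That serial-versus-parallel contrast is easy to sketch in code. Below is a toy “recognition” task (all names and data here are made up for illustration): the serial version compares one stored pattern at a time, the way a conventional processor steps through memory, while the parallel version dispatches every comparison at once, loosely the way billions of neurons fire simultaneously. Python threads don’t deliver true hardware parallelism, so this is a sketch of the structure of the computation, not a benchmark.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "recognition": match a seen pattern against stored memories.
# A pattern is just a tuple of numbers; similarity is negative
# squared error. Purely illustrative -- not real vision code.

def similarity(seen, stored):
    return -sum((a - b) ** 2 for a, b in zip(seen, stored))

def recognize_serial(seen, memory):
    # One comparison per step, like a conventional CPU: time taken
    # grows with the number of stored patterns.
    scores = [similarity(seen, m) for m in memory]
    return scores.index(max(scores))

def recognize_parallel(seen, memory):
    # Every comparison dispatched at once; on truly parallel
    # hardware the total time would stay roughly constant no
    # matter how many patterns are stored.
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda m: similarity(seen, m), memory))
    return scores.index(max(scores))

memory = [(0, 0, 0), (5, 5, 5), (9, 1, 4)]
seen = (5, 4, 6)
```

Both versions pick the same best match; the difference the excerpt describes is purely in how the work is scheduled.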

In fact, the effort to build computers organized more like the brain is one of the frontiers of AI research. Computers are being designed that will have up to one million processors operating in parallel. AI researchers are devising new ways to program these machines–and it may well be that the exercise will teach us new ways to think about thinking.

Waldrop, M. Mitchell. “Machinations of thought.” Science ’85 6 (1985): 38+.

Bill Gordon, an editor and writer at We Hate Malware who writes extensively about malware and hacking, is a strong voice in the debate over quantum computing. Although quantum computing has a dark side (the breaking of encryption, for example), it also has many uses for good. Knowing humanity, though, we are surely not ready to unleash it just yet – or are we? Gordon suspects that if quantum computing were achievable today, a government lab would probably already be building it, while quietly keeping the technology from appearing anywhere else for now. Just think: what if Russia or North Korea achieved quantum computing first? They could break into the encrypted databases of the United States.
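The encryption worry can be made concrete with a toy example. RSA, the workhorse of public-key encryption, is secure only because factoring its public modulus is infeasible on classical computers; Shor’s algorithm running on a large enough quantum computer would make that factoring easy. The sketch below uses deliberately tiny, insecure numbers to show that once an attacker has the factors, the private key and the message fall out immediately:

```python
# Toy RSA with deliberately tiny numbers (never secure in practice).
# The point: RSA's secrecy rests entirely on the difficulty of
# factoring n. Shor's algorithm on a quantum computer could factor
# n efficiently -- that is the "hacking of encryption" worry.

p, q = 61, 53            # secret primes
n = p * q                # public modulus (3233)
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent (requires Python 3.8+)

message = 42
cipher = pow(message, e, n)          # anyone can encrypt with (n, e)

# An attacker who manages to factor n recovers the private key
# and reads the message:
attacker_phi = (61 - 1) * (53 - 1)   # from the factors of n
attacker_d = pow(e, -1, attacker_phi)
recovered = pow(cipher, attacker_d, n)   # == 42
```

For real key sizes (2048-bit moduli and up), that factoring step is the only thing standing between the attacker and the plaintext, which is why a working quantum computer would be such a security earthquake.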

Computer security is a HUGE part of the quantum computing puzzle, as is the “singularity,” the feared accidental creation of a super-powerful intelligence through the combination of artificial intelligence and quantum computers.

Technological Improvements And NASA

It’s great to see that NASA has had such success lately with the arrival of the Juno spacecraft in orbit around Jupiter. The event was met with much pomp and circumstance, although the actual “arrival” was a touch anticlimactic.

It’s interesting to look back and see where we thought we would be by now. Has technological advancement in space travel slowed down? Many think it has. We need newer, more groundbreaking technologies in energy and materials. This excerpt from a 1984 article shows just how far we have come, however:

The combination of a shuttle and space station along with continued technological advances in such areas as data transmission and microprocessors plus the maturity of space-related science disciplines offer significant increases in our research capabilities during the 1990’s. The transition to the era of the space station will be complex and challenging. It will require bringing closer together NASA’s manned and unmanned programs, which had been only loosely coupled until the advent of the shuttle.

In this article, we will illustrate how these new capabilities could be used. Since we are in the design phase of the space station project, it is crucial that scientists in different disciplines consider how they might best use the planned research facilities. To put this future in perspective, we will begin with a survey of the current U.S. program for space science.

The Current Program

Voyager 2 will explore Uranus and Neptune. The Galileo mission will send a probe into the Jovian atmosphere and will provide long-term synoptic observations of the Jovian cloud system, its moons, and its extended magnetosphere. The Venus Radar Mapper, with its synthetic aperture radar, will look through the thick clouds of Venus and map the topography of that planet, and the Mars Geochemistry and Climatology Orbiter will survey the global distribution of the elements on the Martian surface and record the climatic changes over a Martian year. Subsequent planetary exploration will focus on extended detailed studies of cometary nuclei and representative asteroids, and further study of the Saturnian system, including Titan, is also under consideration.

In earth science, research satellites such as Nimbus 7, the Solar Mesosphere Explorer, and the International Sun Earth Explorers will continue to provide data along with meteorological and land satellites. The Upper Atmosphere Research Satellite will provide data on the stratosphere to determine how the chemical, dynamic, and radiative processes of this region determine the structure of the ozone layer. The Ocean Topography experiment together with a new research scatterometer will provide observations of the large-scale circulation of the oceans and their response to the atmospheric winds.

Our understanding of solar and terrestrial physics will be advanced by the three-dimensional exploration of the heliosphere by the International Solar Polar mission, a joint effort with the European Space Agency. In the planning phases is an International Solar Terrestrial Physics program to better understand the sun and its coupling to the earth’s magnetosphere and upper atmosphere. In the astrophysics area, truly dramatic advances are expected. The combination of missions now planned includes the Space Telescope, the Cosmic Background Explorer, the Extreme Ultraviolet Explorer, and the Gamma Ray Observatory, along with the development of a new generation of observing instruments on shuttle Spacelab flights and major new observatories such as the Advanced X-ray Facility and the Space Infrared Telescope Facility. These missions should provide an unprecedented increase in astrophysical knowledge.

In the near term, however, the most valuable elements of the current NASA program are the 17 active scientific satellites returning data to investigators. These missions are the sources of the results presented at meetings and published in the journals, and they maintain the vitality and productivity of our space science program.

On the one hand, the U.S. space science program is in a well-balanced state with a level of financial support well above that of Western Europe, Japan or the U.S.S.R. However, there has been a long-term change toward sustained observations from larger, more complex, longer-lived observatories and planetary orbiters. This evolution has occurred as the exploratory phase of space research has been completed. These programmatic changes, as well as a drop in the level of financial support (Fig. 2), have led to a dramatic decrease in the number of launches of science missions, from an average of six per year in the late 1960’s to 1.5 per year in the 1980’s. The changes in funding for space science are complex, with the large peaks from 1964 to 1966 and from 1972 to 1975 caused primarily by transient increases in the planetary program. Decreases in flight programs and funding since 1965 have forced dramatic reductions in many space research groups.

Frost, Kenneth J., and Frank B. McDonald. “Space research in the era of the space station.” Science 226 (1984): 1381+.


What do you think? Do you think that, because we aren’t in a “space race” anymore as we were from the ’60s through the ’80s, we have slowed down in terms of groundbreaking tech and innovation?