In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. -- NY Times

The scary thing is that I actually understood today's NY Times front-page story, "Brainlike Computers, Learning From Experience." In the late 80s I really did study this stuff in the MA and PhD computer science programs at CUNY/Brooklyn College. I did get my MA and also a PhD -- ABTandMC, all but thesis and most courses -- actually, two courses were all I took toward the PhD.
The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The "new" method of computing is based on a brain-based design called "neural networks," and if I hadn't lost my two-volume texts on the subject last year during Hurricane Sandy, I would be pawing my way through them right now. You see, somewhere around 1988-89 I actually took a course on neural networks at Brooklyn College.
Interestingly, it was in the psych department and given by a psych teacher, not by anyone from the computer science department, which at the time looked down on this "advanced" stuff. In fact, one of my other AI courses was taught by a physics teacher -- and both these guys were shunned by the CIS department "scholars," who felt the program should be business-oriented. But I digress.
I think we had to write a program modeling the reaction of a single neuron to a stimulus and how it would use feedback to adjust itself to changing light conditions.
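I don't have the original assignment anymore, but a minimal sketch of the idea -- one artificial neuron adjusting its weight from a feedback signal as the light level changes -- might look something like this in Python. The names, numbers, and delta-rule-style update here are my guesses for illustration, not the actual coursework:

```python
# A toy single "neuron": weighted input -> output, with a feedback
# (error-driven) update so it tracks a changing light level.
# All names and constants are illustrative, not from the original assignment.

def neuron_output(weight, bias, light):
    """Simple linear response of one neuron to a light stimulus."""
    return weight * light + bias

def train_step(weight, bias, light, target, rate=0.05):
    """Adjust weight and bias from the error (feedback) signal."""
    error = target - neuron_output(weight, bias, light)
    weight += rate * error * light
    bias += rate * error
    return weight, bias

weight, bias = 0.0, 0.0
# Pretend the "correct" response is 0.8 * light, and the light keeps changing.
for step in range(200):
    light = (step % 10) / 10.0          # changing light conditions
    target = 0.8 * light                # desired response
    weight, bias = train_step(weight, bias, light, target)

print(round(weight, 2), round(bias, 2))  # weight ends up near 0.8
```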
I believed even 25 years ago that a computer could be built to mimic the human brain, and that when that happened we were finished. HAL would look like a chump by comparison.
I.B.M. announced last year that it had built a supercomputer simulation of the brain that encompassed roughly 10 billion neurons — more than 10 percent of a human brain. It ran about 1,500 times more slowly than an actual brain. Further, it required several megawatts of power, compared with just 20 watts of power used by the biological brain. Running the program, known as Compass, which attempts to simulate a brain, at the speed of a human brain would require a flow of electricity in a conventional computer that is equivalent to what is needed to power both San Francisco and New York, Dr. Modha said.

Oh, oh. We are at 10 percent, and once they figure out how to reduce the power consumption, Skynet here we come.
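A quick back-of-the-envelope check on those numbers (my arithmetic, not the article's, and it crudely assumes power scales with speed): a few megawatts times a 1,500x speedup lands in gigawatt territory, which really is city-scale power next to the brain's 20 watts.

```python
# Back-of-envelope scaling, assuming power grows roughly with speed
# (a crude assumption; the article only gives rough figures).
sim_power_watts = 3e6        # "several megawatts" -- assume ~3 MW
slowdown = 1500              # simulation runs ~1,500x slower than a brain
brain_power_watts = 20

real_time_estimate = sim_power_watts * slowdown
print(f"~{real_time_estimate / 1e9:.1f} GW to run at brain speed")
print(f"vs the brain's {brain_power_watts} W "
      f"(~{real_time_estimate / brain_power_watts:.0e}x more)")
```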
Thus my fascination with the Terminator movies, where individual smart computers networked to form Skynet, which then programmed itself to wipe out all traces of humans. Think of the equivalent of the leading ed deform robots who want to wipe out all traces of real educators.
Then we saw "Her" the other day, where a smartphone operating system morphs into a romantic partner and then (spoiler alert) joins up with others to form a network -- what some reviews bill as a benign version of Skynet.
The point is that this will not be one computer but the networking of millions of computers that will team up to doom us. Already I can't drive five blocks without my GPS (and this from a formerly great map reader/navigator).
I took as many courses in artificial intelligence (1984-89) as I could because I didn't trust my real intelligence. I was interested in artificial vision -- I wanted to make sure the future Geordi on Star Trek could see without those goofy goggles. In fact, my last course was in pattern recognition, which relates to artificial vision, and the prof even offered me a chance to work with him -- probably getting coffee -- but the math threw me. Our eyes work on a system of differential equations, or something like that.
I also took a course in natural language processing, where you program a computer to engage in a conversation -- I did Dear Blabby, a gossiping Jewish mother. And a course in expert systems, which handle things like environmental catastrophes -- detecting where a chemical leak might be coming from. (Thank goodness for my very smart fellow teacher/computer geeks Ira Goldfine and the late Jim Scoma for holding my hand through all this.)
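For the curious, those late-80s conversation programs were mostly pattern-matching tricks. Something in the spirit of my Dear Blabby might look like the little Python toy below -- this is a from-memory sketch, and the patterns and replies are invented, not the original code:

```python
# A tiny ELIZA-style responder in the spirit of "Dear Blabby."
# Patterns and replies are invented for illustration only.
import random
import re

RULES = [
    (r"\bmy (son|daughter)\b", ["Your {0}? You should call more often."]),
    (r"\bI feel (.+)", ["Why do you feel {0}? Did you eat something?"]),
    (r"\bwork\b", ["Work, work, work. And when do I see you?"]),
]
DEFAULT = ["Interesting. So, are you seeing anyone nice?",
           "That's what you tell your mother?"]

def blabby(line):
    # Try each pattern; fill the reply template with whatever matched.
    for pattern, replies in RULES:
        match = re.search(pattern, line, re.IGNORECASE)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULT)

print(blabby("I feel tired lately"))
print(blabby("My son never visits"))
```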
“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist. Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.

Far off? Really? How long before an Arnold-like robot lands naked in your backyard?
===
For Geeks only
Neural networks only make sense when computers are built using different concepts from the von Neumann machines upon which almost all current computers are based. I remember arguing this point with people when I attended two American Association for Artificial Intelligence conventions ('89 in Seattle and '90 in Minneapolis). I was one of the few who thought it possible to build machines with billions of processors at a time when most computers had only one -- in some sense most still do. Some pushed the idea of parallel processing with a bunch of processors, but that was strange stuff; the neural net idea was the only one that made sense to me. So read this part of the article to get a sense of where we are 25 years later -- and I figure that another 25 might just do it. Hmmm, if I can only get to 95.
Read the entire article here
Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s. They generally store that information separately in what is known, colloquially, as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

The data — for instance, temperatures for a climate model or letters for word processing — are shuttled in and out of the processor’s short-term memory while the computer carries out the programmed action. The result is then moved to its main memory.

The new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

They are not “programmed.” Rather the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows in to the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.

“Instead of bringing data to computation as we do today, we can now bring computation to data,” said Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort. “Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.”
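To make the "weighted connections that spike" idea a little more concrete, here is a very small toy sketch of my own, written in ordinary Python on an ordinary von Neumann machine, so it only illustrates the idea -- it is not how IBM's chip or any real neuromorphic hardware is actually built. Incoming data pushes a unit's potential up through weighted connections; when the potential crosses a threshold, the unit "spikes" and the weights on the active inputs get nudged:

```python
# Toy leaky integrate-and-fire unit with a crude Hebbian-style weight update.
# Purely illustrative of "weighted connections + spikes."
import random

class SpikingUnit:
    def __init__(self, n_inputs, threshold=1.0, leak=0.9, rate=0.01):
        self.weights = [random.uniform(0, 0.5) for _ in range(n_inputs)]
        self.threshold = threshold
        self.leak = leak          # potential decays a bit each step
        self.rate = rate          # how fast the weights adapt
        self.potential = 0.0

    def step(self, inputs):
        # Integrate the weighted inputs, with leak.
        self.potential = self.leak * self.potential + sum(
            w * x for w, x in zip(self.weights, inputs))
        if self.potential >= self.threshold:
            # Spike: reset the potential, strengthen weights on active inputs.
            self.potential = 0.0
            self.weights = [w + self.rate * x
                            for w, x in zip(self.weights, inputs)]
            return 1
        return 0

unit = SpikingUnit(n_inputs=3)
for t in range(50):
    inputs = [1, 0, 1] if t % 2 == 0 else [0, 1, 0]   # two alternating patterns
    if unit.step(inputs):
        print(f"t={t}: spike, weights={[round(w, 2) for w in unit.weights]}")
```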