
A Layman’s Guide to Artificial Intelligence
by Ben Margolis

Part One: Introduction to Neural Networks.

If you’re an executive making software purchasing decisions, you may have been asked to evaluate an artificial intelligence solution for your business. (This would be especially true if you’re one of my clients.) This paper is intended to be a briefing on the current state of the art, and to separate fact from science fiction regarding a topic that is much more often discussed than it is understood.

Note: The following document is specifically not an “Idiot’s Guide,” nor is it “for Dummies.” However, if you believe that you need one of those, try reading this anyway. You may just end up feeling better about yourself.

New Improved Intelligence Substitute!
Artificial cheese. Artificial wood paneling. Artificial diamonds. Artificial flavor. Artificial color. Artificial turf. Artificial limbs. All of these things are very similar. They are all extremely poor imitations of what they purport to be. Artificial Intelligence is no different. The real kind is (and probably always will be) better.

When one mentions Artificial Intelligence, visions of a chrome Arnold Schwarzenegger tracking down Sarah Connor come to mind. Older readers may think of HAL 9000 killing astronauts, and film buffs may even remember The Forbin Project taking over all the world’s nuclear missiles (which also happens in the Arnie movies). The point is that Hollywood movies, and their written form, modern fiction, all seem to reflect a great fear and misunderstanding about computers: that Artificial Intelligence is somehow better than the real kind.

So please put aside all your dreams of HAL 9000, Terminators, or Steven Spielberg’s robot children who want their mommies (although we will get back to HAL a little later).

Real AI
Today when we say Artificial Intelligence, what we usually mean is computer software that is somehow able to adapt to new conditions. This can be as simple as minor self-adjustment, or as complex as complete self-programming.

The first and simplest of these would be a user-preference system.

Simply by recording how many times a user clicks on one choice and not another, the computer can move the popular choice higher on the list, or even pre-select it for the user. By adding up the preferences of many, even thousands or millions of users, a system can inter-relate user preferences on a global scale. “People who liked this form of A.I. also liked the following types of A.I.” That thing. It’s used at Amazon, and it’s how Google pops up that list of suggestions while you type.
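If you want to see just how unglamorous this is, here is a minimal sketch in Python of the single-user version (the menu choices and names are made up for illustration): count the clicks, sort the menu, and pre-select the favorite.

from collections import Counter

class PreferenceList:
    """Keeps a menu of choices ordered by how often each has been clicked."""

    def __init__(self, choices):
        self.clicks = Counter({choice: 0 for choice in choices})

    def record_click(self, choice):
        # Every click nudges that choice toward the top of the list.
        self.clicks[choice] += 1

    def ordered_choices(self):
        # Most-clicked first; this is what the user sees next time.
        return [choice for choice, _ in self.clicks.most_common()]

    def suggested(self):
        # Pre-select the single most popular choice.
        return self.ordered_choices()[0]

menu = PreferenceList(["News", "Sports", "Weather"])
for pick in ["Weather", "Weather", "Sports", "Weather"]:
    menu.record_click(pick)

print(menu.ordered_choices())   # ['Weather', 'Sports', 'News']
print(menu.suggested())         # 'Weather'

The million-user version at Amazon or Google is the same idea with a much bigger table of counts and some clever statistics layered on top.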

Knowledgebases

Next, in order of both complexity and resemblance to actual intelligence, would be the “Knowledge Base,” which is similar to, and sometimes called, an “Expert System.” Here the knowledge and experience of one or more people is entered into a computer, usually using simple database technology, or even web publishing techniques. Links are made between relevant entries. Wikipedia and the Microsoft Support Knowledgebase are well-known examples of this.

If there is a difference between the Knowledgebase and the Expert System, it would be in the interface. While the Knowledgebase simply displays facts and links, the Expert System presents itself as an entity that claims to explain things to the user. It “asks” a series of questions (i.e. requests input or presents links to other, more relevant entries) and then “answers” them by presenting stored information, and asking more questions. The Windows “Troubleshooter” applications would be examples here.
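Under the friendly interface, the mechanics can be about this simple. Below is a minimal sketch in Python of a troubleshooter-style Expert System; the questions and answers are invented for illustration, and a real system would pull them from a database of expert knowledge.

# The "knowledge" lives entirely in this stored structure of questions,
# answers, and the links between them.
TROUBLESHOOTER = {
    "question": "Does the printer's power light come on?",
    "no": {"answer": "Check that the power cable is plugged in."},
    "yes": {
        "question": "Does the printer appear in your list of devices?",
        "no": {"answer": "Reinstall the printer driver."},
        "yes": {"answer": "Clear the print queue and try printing again."},
    },
}

def run_expert_system(node):
    # The "entity" part: keep asking questions until we reach a stored answer.
    while "question" in node:
        reply = input(node["question"] + " (yes/no) ").strip().lower()
        node = node["yes"] if reply.startswith("y") else node["no"]
    print(node["answer"])

run_expert_system(TROUBLESHOOTER)

Notice that all of the “expertise” lives in the stored entries; the program itself just walks from question to question.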

Neural Networks
The third (and to me the most interesting) form of A.I. is the attempt to use computers to actually simulate the process of biological thought. This is the so-called “Neural Network” (which is itself a confusing term), and it involves such exciting things as “fuzzy math” and “emergence.”

This is the fictional technology referred to in Arnie’s Terminator movies, and it bears a very close resemblance to the fictional Heuristic ALgorithm of Arthur C. Clarke’s HAL 9000. If you ask me, it stands the best chance of saying “Good Morning, Dave” to you some day and really meaning it.

To understand the Neural Network (including what it is and why we call it that), we have to take a bit of a history lesson (don’t worry, there won’t be any names or dates, mainly because I don’t want to look them up).

Our story starts in the 1850s or thereabouts, when men in woolen coats were just starting to look at human cells under decent microscopes. They looked at muscle cells, and skin cells, and brain cells, otherwise known as neurons. Neurons were different. Neurons were connected to each other, not just to the ones right around them (like muscle cells and skin cells) but to other cells great distances away, on a cellular scale that is. There were complex branching parts and long dangly bits, and right then, without any evidence to suggest so, they began to think that maybe, just maybe, the information of the brain was stored in these connections, and not in the cells themselves. The so-called “connectionist” theory of neurology was born.

Fast forward a hundred years or so, and we’re in the 1950s, when men in lab coats, with better microscopes and Petri dishes and electrodes, were taking a better look at neurons. They were able to put living neurons in dishes, run electrical signals through them, and record the results. Neurons did very interesting things with electricity. If you gave them a low voltage, they put out a higher voltage. If you gave them a higher voltage, they put out a lower voltage. If you gave them an even higher voltage, they put out a different voltage altogether.

What was even stranger was that as they conducted these experiments, the voltage output changed. It was almost as if each neuron could sense what voltage was expected (i.e. the electrical potential of the circuit) and after a few repeated zaps it was adjusting itself accordingly. Eureka!

So in the 1950’s they thought they had figured out the nature of intelligence itself: Neurons clearly worked in groups, networks if you will. They sent each other electrical signals and “learned” to send different signals based on the feedback they got from other neurons.

Signals that worked were repeated; ones that didn’t were adjusted until a signal was found that did work. This was the electro-chemical basis of learning. (We should note here that they began to use the term “neural network” in the 1950s to describe these biological systems, long before the phrase “computer network” was in common use.)

Neurologists began writing postulates to describe the interaction of these signals, and electrical engineers began to design electrical circuits that imitated these postulates. Intelligence became artificial when the first analog electrical neural network simulators were created.

One of the first of these prototype smart electrical circuits was called the “Perceptron”; a later, more advanced version was called the “Cognitron.” Realize that these devices were basically self-adjusting relays, far less complex than the self-adjusting carburetor in any modern car. But used in groups, connected in “networks,” they showed a remarkable ability to output the precise electrical signals desired, after a significant amount of this feedback “training.”
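To give a feel for what a “self-adjusting relay” means in practice, here is a minimal sketch in Python of a single Perceptron-style unit. It is not any particular historical design, just the standard textbook learning rule: produce a signal, compare it with the desired signal, and nudge the weights until the two agree. Here it is trained, purely by feedback, to behave like a logical AND gate.

def train_perceptron(samples, epochs=20, rate=0.1):
    # Start with no opinion at all: zero weights, zero bias.
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Fire (output 1) only if the weighted sum crosses the threshold.
            total = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if total > 0 else 0
            # Feedback "training": nudge the weights toward the desired signal.
            error = target - output
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Teach the unit to behave like an AND gate, purely from examples.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(and_samples)
for inputs, _ in and_samples:
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    print(inputs, "->", 1 if total > 0 else 0)

That little feedback loop, repeated a few dozen times, is the whole of the “training.”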

Books were written: clearly, all we would need to do was wire together enough of these Perceptrons and we would have a machine as smart as a person. If only it were that easy.

Fast forward another thirty years, and we’re in the 1980s. Our neural network simulators have gone from being electrical to electronic. Intel Corporation, in addition to computer chips, also makes chips with thousands of electronic neurons on them. The state of the art, called the “Multi-Layer Perceptron,” is used in the top-secret guidance systems of cruise missiles and so-called “smart bombs.” And so, one of the very first things we do with Artificial Intelligence is to violate the First Law of Robotics.

Fortunately, neural network simulators have other, less malevolent uses as well. For one, they are being used by neurologists to assist in the study of actual biological neurons, in the same way a physics simulator helps an engineer design a bridge. They are proving very useful in things like text recognition, and seem to have promise in investment analysis (which we’ll get to in Part Three: Neural Networks, Potentials and Limitations).

But something else happened by the 1980s: the development of the PC, the general-purpose computer, and with it the ability to make a whole new kind of neural network simulator.

If they invented it today, I’m sure they would call it the “Virtual Neural Network,” but they didn’t use the word “virtual” in that way back then. They called it a Digital Neural Network, which really wasn’t accurate either, because it’s really just a piece of software.

The PC-based Digital Neural Network Simulator of the 1980s (which is essentially the same software we still use today) has all the advantages over a hard-wired electrical or electronic neural network that Virtual Reality has over real reality.

For one, a virtual neural network doesn’t have to be built. (No special chips or hardware are needed.) It’s just generated in the computer’s memory. You can make them and delete them and make new ones. You can make huge ones, rearrange how they are connected, or try entirely new experimental structures. And best of all, anyone with a PC can do this.
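For example, here is a minimal sketch in Python (using the NumPy library) of such a virtual neural network: a few arrays of numbers in memory, created in whatever shape you like, thrown away, and rebuilt bigger. The layer sizes and weights are arbitrary illustration values, and this particular network hasn’t been trained to do anything yet.

import numpy as np

def make_network(layer_sizes, seed=0):
    # Create one random weight matrix connecting each layer to the next.
    rng = np.random.default_rng(seed)
    return [rng.standard_normal((n_in, n_out)) * 0.1
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def run_network(weights, inputs):
    # Numbers go in, get passed from layer to layer, numbers come out.
    signal = np.asarray(inputs, dtype=float)
    for w in weights:
        signal = np.tanh(signal @ w)   # each layer squashes its weighted sums
    return signal

# Build one, run it, throw it away, build a much bigger one -- all in memory.
small = make_network([3, 5, 2])
print(run_network(small, [0.2, -0.7, 1.0]))

huge = make_network([3, 200, 200, 2], seed=1)
print(run_network(huge, [0.2, -0.7, 1.0]))

No soldering iron, no special chip, and the “huge” version costs one extra line.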

Today, neural network software is everywhere. It’s in the OCR programs they use at the Post Office and in your scanner software on your PC, it’s in the handwriting recognition program in your smartphone, and it’s even running in the background of the better computer games, making your digital opponents more challenging and less random.

It’s important to note that today’s state-of-the-art, software-based Multi-Layer Perceptron is just a mathematical computer model, not of a brain, but of an electrical circuit designed to imitate what we thought brains did in 1952. I know it sounds silly when I put it in those terms; I mean, why on Earth would people spend time and effort making computers that can run Windows 7 try to imitate a 1950s electrical circuit?

Because it works. Like a charm!

Now, over the years, we have taken an even better look at actual biological neurons, and it turns out that they are much more complex than we thought in 1952. For instance, now that we have much smaller electrodes and much better microscopes, we can see that if we apply the same voltage to a slightly different place on the neuron, just a tiny distance away, we get radically different results. And we have found previously unknown signals within the signals neurons send to one another.

Real biological neural networks are far more complex than the digital simulations of them we make in our computers. My guess is the ratio could be tens, hundreds, perhaps thousands to one. That is, it might take a virtual neural network of several hundred neurons to equal the complexity and learning potential of a single biological neuron. This means that a machine capable of actual human “thought” would be hundreds of times more complex than we originally anticipated. It was only by making some small steps in this direction that we realized just how big and far away the goal really is.

But none of that matters! Because we don’t need computers to be as smart as people. (I don’t think we want them to be.) Computers do a fine job of handling huge amounts of information for us without actually understanding any of it. And the small, special-purpose virtual neural networks we build today work the same way. The OCR programs reliably convert pictures of words into words without actually “reading” them, and our hurricane prediction systems work as well as they do without any idea of what a hurricane actually is.

It’s just a computer: numbers go in, numbers come out. Neural network software is just a different way to process those numbers. A biologically inspired and amazingly flexible way to process numbers, but in the end, just a way to process numbers.

I read a news story on the web the other day called “Rise of the Neurobots!” (with the exclamation point, please), which, as far as I could tell, was about guys in a lab who had PCs running virtual neural network software hooked up to radio-controlled toy cars, sorry, I mean “robots.”

The neural nets get input from sensors in the toy cars, sorry, robots, and the output of the neural nets makes the things drive around. It’s an early attempt at training neural networks using real-world inputs. (Let’s put aside the fact that experimental equipment being experimented on, in a laboratory, hardly qualifies as a “Rise.”)

The story goes on to claim that the various neurobots each had their own “personalities.” I think they really mean that Unit One slams itself into the wall repeatedly, while Unit Two only does that for ten or fifteen minutes before changing direction. I really don’t think they meant that one was “witty” and the other “urbane.”

So now when someone says “neural network,” you’ll know that they don’t mean a new way to connect multiple computers; they mean a single piece of software running on a single computer (IBM’s experimental chessbot systems may be an exception). And you’ll understand that, despite the claims made by reporters who don’t really get it, computers are not yet thinking. Although we now do have ways of processing numbers that are sort of like thinking.

And to understand more about that you’ll want to read:

A Layman’s Guide to Artificial Intelligence
Part Two: How a Neural Network works
Part Three: Neural Networks, Potentials and Limitations

 
