
Monday, July 26, 2010

Of Soft Brains and Software Brains

An uninterrupted flood of information constantly bombards our senses: images, sounds, smells, tastes, textures, temperatures. We cope with it only because our brains make us aware of a mere fraction of that information. We blissfully remain unaware of the rest.



This pruning of information is extremely efficient. Professor Thomas Landauer(1), while working at Bell Laboratories, measured the amount of information that the human brain can handle in a second. It turns out that we can only memorise information at a maximum rate of 2 bits per second, which is very little. The Internet connection that most of us have at home can transfer data at least a million times faster than that. We only manage to accumulate so many memories over the years because our brains don't actually save all the details all the time.

The brain records each event in a web of correlations and associations with other events. This way of storing information is very efficient but can become a source of frustration when we try to remember specific facts. We are sometimes unable to retrieve the piece of information we are looking for. And then what finally pops into our conscious mind is a very subjective reconstruction of the original event.
To solve this problem, over the millennia, we have developed increasingly powerful methods for storing and retrieving information in more permanent and reliable ways. This path has taken us from painted rocks and clay tablets to modern computers.

Like all the other tools we have invented since the first chipped stone, computers provide an extension of our capabilities. They reliably store enormous amounts of data for us and operate on them at great speed. The question is whether we can make them think like us. Sufficiently powerful hardware does not automatically result in an Artificial Intelligence. An AI also depends on the availability of specially designed software and of efficient ways of interacting with the real world.

For more than half a century, AI researchers have been looking for ways of modelling a human-like intelligence with software. One of the hardest problems encountered has been the modelling of what we call common sense. For example, we wouldn't dream of picking up a fork to eat soup, but a computer would need to be told that a spoon is what you use when you want to eat something liquid. We unconsciously use millions of such simple rules in our everyday activities. The task of capturing them all is enormous.

Fortunately, we don't need to endow a computer with the whole knowledge of humanity in order to make it intelligent. A bushman roaming the Kalahari Desert is as intelligent as any other human being, despite the fact that he has probably never even seen most of the objects we use in our daily life. Considerations of this type have motivated AI researchers to create what are commonly known as expert systems (ES).

The purpose of an ES is to solve problems in a particular knowledge domain. By restricting the scope of the program, the researchers bring the amount of information down to manageable levels. All ESs work by asking questions until they can propose one or more possible solutions. They collect the symptoms of the problem, diagnose possible causes, and tell you what solutions are linked to the causes they have identified.

An ES bases its reasoning on a series of rules stored in its knowledge base. Each rule encodes an elementary step that a human expert would take when attempting to identify the problem. For example, a motor mechanic knows that an engine only starts if the battery is charged, the starter motor is in order, there is petrol in the tank, etc. Therefore, to determine why a particular car doesn’t start, the mechanic checks the battery, listens to the starter motor, looks at the petrol gauge, etc. To make a motor mechanic ES, you would have to program into it a rule for each logical step the human expert would perform. For example, among many others, you would also include the rule that the engine only starts if the battery is charged. You would then need to link that rule to the description of how to check the status of the battery and what to do if the battery is flat.
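To make the idea concrete, here is a minimal sketch in Python of how a handful of such rules might be encoded. The questions, causes, and remedies are purely illustrative, and a real ES shell would do much more: chain rules together, attach probabilities, and explain its reasoning.

# A minimal illustrative sketch, not a real expert system shell: each rule links
# a question (a symptom check) to a possible cause and a suggested remedy.

RULES = [
    # (question to ask, cause if the answer is "no", suggested remedy)
    ("Is the battery charged?",      "flat battery",         "recharge or replace the battery"),
    ("Does the starter motor turn?", "faulty starter motor", "repair or replace the starter motor"),
    ("Is there petrol in the tank?", "empty tank",           "fill up the tank"),
]

def diagnose():
    """Ask each question in turn and collect the possible causes."""
    findings = []
    for question, cause, remedy in RULES:
        answer = input(question + " (y/n) ").strip().lower()
        if answer == "n":
            findings.append((cause, remedy))
    return findings

if __name__ == "__main__":
    for cause, remedy in diagnose():
        print(f"Possible cause: {cause}. Suggested action: {remedy}.")
    # A real engine would also chain rules (one conclusion can trigger further
    # questions) and list the rules it used, which is how an ES explains itself.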

In an ES, the whole problem-specific logic is stored in the rules, while the software is only an engine that, given the appropriate rules, could solve any problem. The key task of extracting knowledge from a human expert and formulating the corresponding ES rules is difficult, because most experts are not aware of why they do what they do. The specialists who know how to tap into the minds of human experts are not programmers, because they don't need to write any software. They call themselves knowledge engineers.

An ES can explain how it reaches its conclusions by listing all the rules its engine has encountered while solving the problem, and some ESs can provide several solutions, each with an associated probability. Several companies have developed commercial ESs, in particular to support medical practitioners in their diagnostic decisions(2). These systems can be very useful, especially in third world countries, where the number of doctors is very limited. Medical ESs can also help general practitioners with the correct diagnosis of rarely encountered diseases.

To be able to provide more than one solution, ESs need to cope with problems they don't have enough information to solve. This is not trivial, because computers are based on binary logic. While we can weigh partially defined factors and arrive at conclusions that are reasonable or likely, computers only know yes and no. We use approximate values every day, but computers only accept precise inputs and provide precise answers. What makes computers able to handle partially defined problems is a technique called fuzzy logic.

While standard logical variables can only be true or false, fuzzy variables can have several values. With fuzzy logic, values like somewhat likely, fairly dark, quite heavy, and not too hot are perfectly valid. You only need to define in advance what those fuzzy values mean in terms of a continuous quantity. For example, somewhat likely might mean that you consider an event with a probability of occurring between 51% and 60%. Similarly, not too hot might mean a temperature between 30 and 35 °C. With fuzzy logic a computer can handle cases in which value ranges overlap. A computer using crisp (as opposed to fuzzy) logic, in the absence of a value like somewhat likely, would be forced to consider as true an event with a probability of 51%. This might lead to confusing results.
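As a rough illustration of how a crisp number gets turned into those graded labels, here is a small Python sketch. The temperature ranges and the trapezoidal membership function are assumptions, chosen only to show the idea of overlapping, partial truth.

# A minimal sketch of fuzzy membership: a crisp temperature is translated into
# degrees of membership (0.0 to 1.0) in overlapping labels. The ranges below
# are illustrative assumptions, not standard values.

def trapezoid(x, a, b, c, d):
    """Membership rises from a to b, stays at 1 between b and c, falls to 0 at d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def fuzzify_temperature(t_celsius):
    return {
        "not too hot": trapezoid(t_celsius, 28, 30, 35, 37),
        "hot":         trapezoid(t_celsius, 33, 36, 45, 50),
    }

print(fuzzify_temperature(34))   # e.g. {'not too hot': 1.0, 'hot': 0.333...}
# A temperature of 34 °C is fully "not too hot" and, at the same time,
# partially "hot": both labels hold to some degree, which crisp logic cannot express.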

While ESs manage to reproduce the thinking of human brains in restricted knowledge domains and handle partially defined problems with fuzzy logic, they will never be able to simulate a human mind in its entirety. This is because ESs are based on trees of logical choices codified in advance, while the neurons in our brain have a high level of connectivity that continuously evolves. The connections within and between areas of the brain are responsible for our intuition and hunches, on which innovation rests. To develop truly intelligent machines, we need to understand and reproduce the mechanisms of our mind, not just some of its results.

We have more or less one hundred billion neurons in our nervous system. Each one of them can receive signals from up to ten thousand other neurons and send a signal to other neurons via a single output. When the sum of the signals received through its inputs exceeds a certain threshold, a neuron fires by doubling the electrical potential of its output(3). The firing lasts about one millisecond, after which the neuron rests for at least 10 milliseconds before being ready to fire again.
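A toy model in Python may help visualise that firing cycle: a unit that sums its inputs, fires when the sum crosses a threshold, and then stays silent for a refractory period. The 1 millisecond spike and the 10 millisecond rest come from the paragraph above; everything else (the threshold, the input values) is an arbitrary assumption.

# A toy threshold neuron with a refractory period (times in milliseconds).
# The 1 ms spike and the 10 ms rest come from the text; the rest is illustrative.

class ToyNeuron:
    def __init__(self, threshold=1.0, refractory_ms=10):
        self.threshold = threshold
        self.refractory_ms = refractory_ms
        self.rest_until = -1          # time before which the neuron cannot fire

    def step(self, t_ms, inputs):
        """Return True if the neuron fires at time t_ms given its input signals."""
        if t_ms < self.rest_until:
            return False              # still recovering from the last spike
        if sum(inputs) >= self.threshold:
            self.rest_until = t_ms + 1 + self.refractory_ms   # 1 ms spike + rest
            return True
        return False

neuron = ToyNeuron()
for t in range(0, 30):
    if neuron.step(t, inputs=[0.4, 0.7]):   # constant drive above the threshold
        print(f"spike at t = {t} ms")
# Prints spikes roughly every 11 ms: the refractory period limits the firing rate.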

At birth the neurons are only partially connected with each other. They can also form and modify connections easily. This explains, for example, why children can learn languages with much less effort than adults. As we grow up, more stable paths form within our brains, and our behavioural patterns become more difficult to change. Special neurons called mirror neurons seem to be very important for our learning processes. The special characteristic of these neurons is that they fire not only when we experience an emotion, but also when we see somebody else experiencing it. By doing so, they help us learn by imitation how to react to external events and how to behave in our community. They make us cry while we watch a dramatic or sentimental film because, through the mirror neurons, we experience the same feelings as the characters in the film. By simulating in our brain the actions of others, the mirror neurons also help us predict what the people we observe will do next.

Researchers have developed electronic circuits and software to simulate the workings of interconnected neurons. These artificial neural networks (ANN) are much more promising than ESs in the quest for true AI, but a lot of work still needs to be done. Like their natural counterparts, the artificial neurons accept a number of inputs and fire a single output. But, unlike what happens in the brain, the connections between neurons in an artificial network remain unchanged after the initial setting. What you can modify in ANNs is the way in which each neuron responds to its inputs and what level of signal it sends to its output.
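In software, such an artificial neuron usually boils down to a weighted sum of the inputs pushed through a squashing function; the weights are precisely the part that gets adjusted during learning. A minimal sketch, with made-up weights:

import math

# A single artificial neuron: a weighted sum of its inputs squashed into the
# range 0..1 by a sigmoid function. The weights and the bias are what learning
# adjusts; the values below are arbitrary, just for illustration.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

print(neuron_output([0.2, 0.9, 0.5], weights=[1.5, -0.8, 0.3], bias=0.1))
# -> about 0.46; a strongly positive total would push the output towards 1 ("firing")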

What makes ANNs very useful for many applications, for example in computer vision, is their capability of learning. Optical Character Recognition (OCR) programs often use ANNs to identify printed and, somewhat less successfully, handwritten characters. These systems include at least three layers of artificial neurons. The first operation of such an OCR system is to scan the image of a character, break it down into a number of cells, and measure the darkness of each one of them. Figure 1 shows an example with the digit 9 scanned into 144 cells. The numbers in the top row show the levels of darkness of its cells, from 0 to 255.

Figure 1: Sampling of an image for OCR
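To give a feel for the sampling step just described, the following sketch splits an image into a 12 by 12 grid (the 144 cells of the example) and averages the darkness of the pixels in each cell. The blank test image is of course a stand-in for a real scan.

# A sketch of the sampling step: average the darkness (0 = white, 255 = black)
# of the pixels falling into each cell of a 12 x 12 grid, giving the 144 values
# fed to the first layer of the network. The image here is just a nested list
# of pixel darkness values; a real OCR system would read an actual scan.

def sample_cells(pixels, grid=12):
    height, width = len(pixels), len(pixels[0])
    cell_h, cell_w = height // grid, width // grid
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            block = [pixels[y][x]
                     for y in range(gy * cell_h, (gy + 1) * cell_h)
                     for x in range(gx * cell_w, (gx + 1) * cell_w)]
            cells.append(sum(block) // len(block))   # average darkness 0..255
    return cells                                     # 144 values for a 12 x 12 grid

# Example with a blank 48 x 48 "scan" (all white):
blank = [[0] * 48 for _ in range(48)]
print(len(sample_cells(blank)))   # -> 144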

Once the scan of a character is complete, the OCR system passes the list of darkness values on to all neurons of the first ANN layer. Depending on their initial settings, some of the first-layer neurons fire their outputs, which causes some of the neurons of the second layer to fire as well. The firing progresses through the layers until the neurons of the last layer fire. At this point, the OCR system uses the outputs of the last layer to reconstruct an image of the character and compares it with the initial scan, cell by cell. It then adjusts the settings of the output neurons to produce a better output. These corrections propagate back through the ANN, causing an adjustment of all other neurons in the network, layer by layer. The OCR system repeats the whole procedure until the adjustments it has to make on the output neurons become smaller than a predefined value. Through this training process, the ANN becomes able to recognise the scanned character. Things are in reality a bit more complicated, because the system must be able to recognise many characters, not just one as in the example. But this should give you an idea of how the learning process works.
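For readers who like to see the mechanics, here is a small sketch in the spirit of that description: a three-layer network, written with NumPy, trained to reproduce its own input and corrected layer by layer until the remaining error is small. The layer sizes, the learning rate, and the stopping threshold are arbitrary assumptions, and a real OCR system would of course train on many labelled characters rather than on a single random "scan".

import numpy as np

# A toy three-layer network trained to reproduce its input: the corrections
# computed at the output neurons are propagated back to the hidden layer,
# and the cycle repeats until the remaining error per cell is small.

rng = np.random.default_rng(0)
x = rng.random((1, 144))              # one scanned character: 144 darkness values, scaled to 0..1

w1 = rng.normal(0, 0.1, (144, 30))    # input layer  -> hidden layer
w2 = rng.normal(0, 0.1, (30, 144))    # hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    hidden = sigmoid(x @ w1)          # firing of the hidden layer
    output = sigmoid(hidden @ w2)     # reconstructed image, cell by cell

    error = output - x                # comparison with the original scan
    # Corrections for the output neurons, then propagated back to the hidden layer
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ w2.T) * hidden * (1 - hidden)

    w2 -= 0.5 * hidden.T @ grad_out
    w1 -= 0.5 * x.T @ grad_hid

    if np.max(np.abs(error)) < 0.05:  # stop when the corrections become small
        print(f"trained after {step} steps")
        break

print("largest remaining error per cell:", round(float(np.max(np.abs(error))), 3))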

The current research on computer vision focuses on recognising three-dimensional objects. It will still take a while before a computer is able to distinguish between, say, an apple and a pear, but we are getting there. This research is essential if we want computers to be able to learn from their physical environment, as humans do in their childhood.

Despite the fact that our brains are the result of billions of years of natural evolution, it takes us several years to learn how to read and make sense of what we read. We keep learning for the entire duration of our lives and yet we only scratch the surface of what there is to be learned. This is because the total volume of knowledge of humanity is enormous and keeps expanding faster than any of us can hope to keep up with. But computers already store most of that knowledge and, within the next one or two decades, they will be able to begin understanding it. We will soon develop the software necessary for computers to continue their instruction on their own. When that happens, they will no longer need us and, bit by bit (pun intended!), they might just decide to take charge. Today's computers are very dumb compared to us, but this shouldn't fool you into feeling safe in the supremacy of the human race.


Notes:
(1) http://www.colorado.edu/whitepages/ldapdrill.xml?cnfull=100034363 and http://www.pearsonkt.com/bioLandauer.shtml
(2) For example, MatheMEDics® (http://mathemedics.com/) states that it "develops and markets Web-based interactive medical decision support software for physicians, consumers and managed care providers". At http://easydiagnosis.com/ you can try some of their ESs.
(3) The potential of the output changes from 40-60 millivolts to 90-100 millivolts.
