Thinking machines
When the first computers appeared (and that happened not a mere seventy years ago but much earlier: a prototype of a calculating machine, a forerunner of future computers, was described by Leonardo da Vinci), everyone expected that a machine thinking like a human would soon follow; computers only needed to become more powerful. Computers do double their performance every eighteen months to two years. But hardly anyone would claim that their intelligence grows at any comparable rate. If we are speaking of "intelligence" at all, the machine has none to show.
If humanity develops in arithmetic progression and bacteria in geometric progression, then information technology develops according to a power law. And nevertheless, the prospect of creating a "thinking" computer now seems even more distant than it did twenty or thirty years ago.
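The contrast between these growth laws is easy to see numerically. The sketch below is purely illustrative: the rates and the horizon are assumptions chosen for the comparison, not figures from the text.

```python
# Compare the three growth laws mentioned above:
# arithmetic progression (linear), power law, geometric progression (exponential).
def linear(n):      return 10 * n   # arithmetic: +10 per step
def power(n):       return n ** 3   # power law: n cubed
def exponential(n): return 2 ** n   # geometric: doubles each step

for n in (5, 10, 20, 40):
    print(n, linear(n), power(n), exponential(n))
# By n = 40 the geometric progression (2**40, over a trillion)
# dwarfs both the power law (64 000) and the linear growth (400).
```

Whatever law one assigns to information technology, the point of the paragraph stands: raw growth of any kind has not translated into growth of machine intelligence.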
An obvious question arises: "Why are other areas of information technology developing at a fantastic pace, while the creation of artificial intelligence (AI), perhaps the most promising direction of all, is marking time?" What has jammed the thinking machine?
As an answer, experts usually say that modern computers lack the computing power to match human thinking. According to Intel Corporation, for example, computers will equal the human brain in computing power by 2020 and will then reach a level of intelligence comparable to a human's.
But, I think, first, this is unlikely to happen so soon, and second, what exactly will we count as the "level of human intelligence": the Turing test or an IQ score? The insufficient-power argument is quite vulnerable. A single computer may indeed fall far short of the brain's power, but if you combined millions of modern computers, you would probably get a system no weaker than that computer of 2020. Would it display outstanding intelligence? Hardly. Besides, it is far from clear what computing power thought processes actually require.
On the one hand, a person can do things that require gigantic computing resources. For example, in 0.3-0.5 seconds our brain builds a scene of three-dimensional objects from their two-dimensional images and recognizes those objects, and it is not troubled by the fact that the objects may be moving, that the person and his head may be moving, that the images reach the brain "upside down," that is, inverted, that our eye is far from an ideal optical instrument, and so on.
However, such abilities are largely innate and are therefore not in themselves a sign of high intelligence. Birds, for example, react much faster (in under 0.2 seconds) and can do as much or more, and they are far better at navigating three-dimensional space, yet a person still somehow surpasses them in mind. As for flexible, effective behavior, which is considered a person's main advantage, such behavior seems to rely on rather limited resources.
A person's short-term memory holds only 7 ± 2 conventional units (objects). Try doing two different things at once and you will struggle. It is believed that an experienced physicist uses about as many techniques and heuristics as there are pages in a general physics course, that is, a few thousand at most.
Computers can store far more information. Another explanation sometimes offered for the sluggish development of AI is that the physics of thought processes is very different from the physics at work in a computer and therefore cannot be modeled on one, or rather, on an abstract computer such as a Turing machine. This, too, is a weak excuse.
First, it is unclear why the physics of these processes could not be simulated on a computer. A nuclear explosion, for example, is also very different from the physics of a computer, yet it is quite amenable to modeling. Second, in certain domains computers are fairly successful at complex problems; they play chess superbly, for instance, although no intelligence is involved there: just mechanical enumeration of options.
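The "mechanical enumeration of options" behind chess programs can be sketched in a few lines. This is a minimal, hypothetical illustration of plain minimax search over a toy game tree, not the code of any real chess engine:

```python
# Plain minimax: exhaustively enumerate every line of play and pick
# the move with the best guaranteed outcome. No heuristics, no
# "understanding" of the game -- just mechanical enumeration.
def minimax(node, maximizing=True):
    if isinstance(node, int):          # leaf: a final position score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Toy tree: two moves for us, each answered by two opponent replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # → 3, the best outcome we can guarantee
```

Real chess programs add pruning and evaluation heuristics, but the core remains this same blind traversal of options.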
But why, then, can nothing more be achieved? One gets the impression that the development of AI is held back by a misunderstanding of some fundamental mechanism of human thinking, not so much at the level of physics as closer to psychology.
To understand what the matter is, we must start with the concept of "intelligence." No one will give an exact definition, but intuitively intelligence is the ability to solve effectively the problems and tasks that arise before a person. To solve a problem means to achieve some goal, some result, starting from given conditions.
Intelligence, then, is the ability to find algorithms and to form means and ways of achieving a goal under given conditions. From this definition it follows that the goal is one thing and the means of achieving it are quite another, and the two, generally speaking, need not be connected at all.
The principle of separating means and goals, that is, the idea that one goal can be achieved in many ways and one tool applied to different goals, underlies all kinds of human activity, including AI itself. In AI this principle was adopted in the early 1960s by the field's founders, Newell and Simon, who tried to build a "General Problem Solver" (GPS) on its basis.
The universal solver did not work out: for any goal there are, in principle, infinitely many methods of achieving it, and enumerating them in search of a sufficiently effective one can very quickly overflow the memory of the most powerful computer. This problem was named the "combinatorial explosion," and Newell and Simon never solved it.
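The combinatorial explosion is easy to quantify. If a search faces b options at each step and needs d steps to reach a goal, a blind enumeration must consider on the order of b to the power d candidate paths. The numbers below are illustrative assumptions, not measurements:

```python
# Size of a blind search tree: branching factor b, depth d -> b**d states.
def states(branching, depth):
    return branching ** depth

for depth in (5, 10, 20, 40):
    print(depth, states(10, depth))
# With just 10 choices per step, a 40-step search already faces 10**40
# candidate paths -- far beyond any conceivable memory or lifetime.
```

This is why adding hardware does not dissolve the problem: doubling computer power every two years merely adds one level of depth to a binary search tree.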
Since then, the search for ways to make enumeration more efficient (heuristics, effective knowledge representation, machine learning of various kinds) has been the main occupation of AI developers. But the principle of separating ends and means does not apply to human thinking. In response to the demands of a situation, the brain always forms the goal and the means of achieving it simultaneously.
This happens through self-organization in certain structures of the brain, and the self-organization proceeds so as to minimize the cost of forming the goal and the means. A goal here need not be understood, as usual, as some conscious result; it is any acceptable, expedient state to be achieved.
A means is any activity needed to achieve the goal: such activity may remain unconscious, or it may issue in a dissertation or a memorandum on preparing for a world war. The self-organization of chaos is an automatic process and cannot be consciously controlled. The main advantage of forming means and goals simultaneously is that only those goals get formed for which effective means already exist; the problem of the "combinatorial explosion" is thereby removed.
It should be noted that the internal effectiveness of the self-organization mechanism, the fact that it always generates some goal and some means without searching through all possible options, has, generally speaking, nothing to do with external effectiveness, that is, with how adequate the formed goal and means are to the actual situation. This makes it possible to understand the cases where the goal is clear but it is unclear by what means to achieve it, even though ample opportunities exist, which would seem to contradict the hypothesis.
The assumption is that in such cases the formed goal and means are inadequate to the situation, but their inadequacy is not sufficiently recognized. Rejecting the principle of separating means and goals seems at first glance completely absurd, since, as noted above, this principle permeates every sphere of human activity.
Suppose, for example, you need to get to the 20th floor. You can take the elevator. If the elevator is out of order, you can walk up the stairs. If the stairs have collapsed, you can climb up the wall like a spider. Outwardly it seems the goal is the same and only the means of achieving it differ. But if you look closely, in terms of internal mental processes these are entirely different goals.
Anyone not suffering from claustrophobia, the fear of confined spaces, can ride the elevator to the 20th floor. You will walk up to the 20th floor only if you really have to. And only the hero of the Spider-Man films can climb the wall. It appears that the internal independence of means and goals is a psychological illusion, like the illusion that a person reacts instantly to external stimuli (sound, light), although in reality the reaction takes time.
Both illusions arise because it is very difficult to do anything while simultaneously observing your own actions. The idea of simultaneously forming goals and means makes it possible to understand non-obvious features of the most ordinary actions. For example, every day a person performs a great many quite rational and effective actions without burdening his head with lengthy reasoning.
You are hungry, so you go to the refrigerator, open it, and take out something edible. The actions are perfectly sensible, and thought arises only when you consider what exactly to take out (and the refrigerator must be stocked so there is something to choose from). But hunger as a motive can be satisfied in many ways: you can go to the store, fly to a restaurant in New York, steal a sausage, or eat the neighbor's dog. Among these options, using the refrigerator is usually the most reasonable, but its reasonableness could only be established by rationally comparing it with the alternatives. Yet no one does that!
Likewise, you do not have to simply open the refrigerator: you can punch a hole in it, blow it up, or throw it from the 20th floor to get at its contents. These are all stupid ideas, but, again, no one compares them with one another to choose the optimal one. One could say it is simply habit: having once found food in the refrigerator, you will look for it there ever after.
Habit explains something, but it cannot explain the selectivity and purposefulness of actions. Indeed, if a person is going somewhere and a door stands in his way, he will open it automatically, but no one opens every door he sees. The idea of simultaneous formation explains everyday rationality by the fact that the brain tries to build achievable goals at minimal cost, and habitual modes of action are precisely what minimize the cost of self-organization.
Yet the habitual action is not merely repeated: a purposeful process, with the means needed to achieve the goal, is always built anew from current conditions. That is why, when the situation changes, there is a quick switch to a new goal. Such brain activity lets people adapt effectively to the most varied conditions, but if for some reason the jointly formed goal and means turn out to be inadequate to the situation, the person finds himself in serious difficulty.
This is seen very clearly in puzzles. Everyone has had to solve puzzles, that is, problems for which one has enough knowledge but which are composed so cleverly that they are hard to solve; yet hardly anyone has wondered why puzzles are possible at all.
Where do they come from? After all, if there is enough knowledge to solve a problem, it should always be solvable in a finite number of steps leading from the conditions to the desired result, or vice versa, as happens with school arithmetic exercises. But with puzzles it does not work that way.
Take one well-known puzzle as an example: "Build four equilateral triangles out of six matches." You can push the matches around the table as long as you like and never arrive at the right answer: a tetrahedron. Why is a problem with such a simple statement so hard to solve, given that most solvers know something about solid geometry, pyramids, and so on?
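A simple counting argument shows why no flat arrangement can work and why the tetrahedron does. Four triangles need 4 × 3 = 12 sides, but only 6 matches are available, so every match must serve as a side of two triangles at once, which is exactly the edge-sharing of a tetrahedron (4 triangular faces on 6 shared edges):

```python
# Counting argument for the six-match puzzle.
faces, sides_per_face, matches = 4, 3, 6
sides_needed = faces * sides_per_face            # 12 triangle sides in total
triangles_per_match = sides_needed / matches     # how hard each match must work
print(triangles_per_match)  # → 2.0: every match must border two triangles,
                            # a sharing only the tetrahedron achieves
```

The brain that has silently added "in the plane" to the conditions never gets to run this argument, because in the plane such total edge-sharing among four equilateral triangles is unattainable.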
From the standpoint of the idea of joint formation, since the problem speaks of matches and triangles, which habitually lie in a plane, the brain automatically reformulates the puzzle as "build four equilateral triangles out of six matches in the plane," and as means of solving it activates various knowledge about triangles, ways of arranging matches, and so on.
Such a problem cannot be solved, and moving to the "real" conditions is very difficult, because the brain's current state corresponds, as it were, to the bottom of a potential well, out of which it is very hard to jump in order to form the goal and means anew.
Indeed, puzzles are usually solved not through logical reasoning of the form "action A produced the wrong result A1, therefore action B will lead to the correct result B1," but after long, chaotic attempts, trial and error, when the solver suddenly shouts "Eureka!" like Archimedes and gives the correct answer.
This means that as a result of chaotic activity the brain has been "shaken" into another state, in which a new formation of goals and means became possible and led to the solution. The feature of thinking that makes puzzles possible is not as trivial as it might seem. The wheel, too, is a vital invention underlying our way of life, and it is known that the pre-Columbian civilizations of America did not know the idea of the wheel.
Probably this was because they were not acquainted with the principle of rotation on an axle; and yet they had children's toys based on that very principle. They had plenty of relevant knowledge, and a need for wheels as well, but the right formation of means and goals never occurred, so the Indians never invented the wheel, which is probably part of why they perished when they met the Spaniards, who arrived fully armed and on wheels!
Let us note for ourselves that a rule for testing the hypothesis of the simultaneous formation of goal and means still needs to be devised. But granting that the idea has some bearing on building AI, let us return to the question of why AI develops so slowly. Probably the answer is that the developers in this field, without realizing it, are trying to imitate something human intelligence does not do at all.
They assume that in his thinking a person proceeds from the principle of separating goals and means, and relies on formal logic to find means suitable for achieving the goal. In fact a person, processing a huge amount of information, simply forms goals and means suited to each other, and rational thinking is used only to the extent needed for that formation. Thus AI developers have for many years been building "extra" intelligence, and that is very hard to do, especially when you do not understand what you are doing.
Paradoxically, although everyone says that intelligence comparable to the human will not be created soon, even simple intelligent programs applicable only in a narrow domain surpass human intelligence in one respect: they do not suffer from the puzzle problem.
Indeed, give even the weakest chess program a chess study and it will methodically enumerate the options in search of a solution. This may last indefinitely, but it will never happen that the program simply stares "blankly" at the position and displays: "I have no ideas."
What will happen to artificial intelligence next? Different paths are possible. One is to keep moving in the direction already set, that is, starting from the principle of separating means and goals, to try to build a system that behaves reasonably in the situations people act in every day. It may very well be that such a system can never be built, no matter what computers are used.
Since the surrounding world contains an infinite number of things with an infinite number of properties and relations among them, entering all of this into a computer, whether directly or through learning, may remain an overwhelming task for a very long time. And without establishing all the relations among objects, it is quite likely that no means of achieving a goal will be found at all, or that the search for means will lead to nothing meaningful.
Indeed, the history of AI is full of cases where developers of intelligent systems achieved success at the level of "toy" models and loudly promised to move on quickly to real problems, after which no one ever heard of them again.
The other direction is to understand better how self-organization proceeds in the human brain and to imitate that process. The self-organization in the brain may differ from the processes used in today's AI work, but it can probably be algorithmized.
True, so far people have not managed to imitate nature closely in their artificial analogues of natural things: the car replaces the horse but cannot jump, and the airplane, though a "mechanical bird," does not flap its wings. The same may happen with human intelligence.
Perhaps a combination of the two paths would be best: in new, uncertain situations such a system would think like a person, and in other cases it would use more effective approaches. True, it should be noted that the idea of the joint formation of goals and means can be restated as "no one understands what he is doing." Comment is superfluous.
Living next to an intelligence that rests on such a principle of mutual misunderstanding and is at the same time somehow superior to humans would probably not be very comfortable; it is a built-in guarantee of conflict. So I have come to think it may be just as well that artificial intelligence is so hard to create. There is solid hope that it will never be created at all. A machine has no intelligence, has never had any, and never will, because a machine has no consciousness of its own. The myth of artificial intelligence will evaporate by itself once commercial sales of these glorified calculators fall. A machine, passing off the desired for the actual, can solve any problem, provided a person has already solved it before. Intelligence and algorithm are incompatible.
Publication certificate No. 223122801161