Artificial intelligence is a branch of computer science that studies whether rational reasoning and action can be provided by computer systems and other devices. In most of the problems it addresses, the algorithm for the solution is not known in advance.

There is no exact definition of this science, since philosophy has not settled the nature and status of human intelligence. Nor are there precise criteria by which computers could presently be judged to have achieved "rationality"; however, at the dawn of artificial intelligence a number of hypotheses were proposed, such as the Turing test and the Newell-Simon physical symbol system hypothesis. Today there are many approaches to understanding the task of AI and to the creation of intelligent systems in general.

One of the classifications identifies two approaches to the development of AI:

  1. Descending, semiotic – the creation of symbolic systems that model high-level mental processes: thinking, reasoning, speech, emotions, creativity, etc.;
  2. Ascending, biological – the study of neural networks and evolutionary computation modeling intellectual behavior on the basis of smaller “non-intellectual” elements.

The field is connected with psychology, neurophysiology, transhumanism and other disciplines. Like the rest of computer science, AI relies on a mathematical apparatus. Philosophy and robotics are of particular importance to it.

Artificial intelligence is a relatively young area of research: it originated in 1956. Its historical path resembles a sine wave, with each "take-off" initiated by a new idea. At the moment its development is on the "decline," giving way to the application of results already achieved in other areas of science, industry, business and even everyday life.

Learning approaches

There are various approaches to the construction of AI systems; at the moment, four quite different ones can be distinguished:

Logical approach

The basis of the logical approach is Boolean algebra. Every programmer is familiar with it and with logical operators from the moment of mastering the IF statement. Boolean algebra was further developed into the predicate calculus, which extends it with subject symbols, relations between them, and the quantifiers of existence and universality. Virtually every AI system built on the logical principle is a theorem-proving machine: the initial data are stored in a database as axioms, and the rules of logical inference describe the relationships between them. Each such machine also has a goal-generation unit, and the inference system tries to prove the goal as a theorem. If the goal is proved, tracing the applied rules yields the chain of actions needed to achieve it (such systems are known as expert systems). The power of such a system is determined by the capabilities of the goal generator and of the theorem-proving machine.

A comparatively new direction, fuzzy logic, makes the logical approach more expressive. Its main difference is that the truth of a statement can take, besides yes/no (1/0), intermediate values: "I don't know" (0.5), "the patient is more likely alive than dead" (0.75), "the patient is more likely dead than alive" (0.25). This approach is closer to human thinking, since a person rarely answers questions with only yes or no.
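As a minimal sketch of the fuzzy-logic idea, the following Python snippet treats truth as a number between 0 and 1 and combines statements with the common min/max (Zadeh) operators; the predicate names and numeric values are illustrative assumptions, not part of any particular system.

```python
# Fuzzy truth values: degrees between 0.0 and 1.0 instead of strict True/False.
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)          # Zadeh conjunction

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)          # Zadeh disjunction

def fuzzy_not(a: float) -> float:
    return 1.0 - a            # standard negation

# Illustrative truth degrees (hypothetical example from the text).
patient_is_alive = 0.75       # "more likely alive than dead"
patient_is_responsive = 0.5   # "I don't know"

# Rule: alive AND responsive -> stable; the conclusion inherits a degree of truth.
stable = fuzzy_and(patient_is_alive, patient_is_responsive)
print(stable)                 # 0.5
```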

Structural approach

The structural approach refers to attempts to build AI by modeling the structure of the human brain. One of the first such attempts was Frank Rosenblatt's perceptron. The main simulated structural unit in perceptrons (as in most other variants of brain modeling) is the neuron. Later, other models emerged, mostly known under the term neural networks (NNs). These models differ in the structure of individual neurons, in the topology of the connections between them, and in the learning algorithms. Among the best-known NN variants today are networks trained with error backpropagation, the Hopfield network, and stochastic neural networks. In a broader sense, this approach is known as connectionism.
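To make the perceptron concrete, here is a minimal sketch of Rosenblatt's learning rule applied to the logical AND function; the learning rate, epoch count, and zero initialization are illustrative choices rather than part of the original formulation.

```python
# A single perceptron: weighted sum of inputs plus bias, thresholded at zero.
def train_perceptron(samples, epochs=20, lr=0.1):
    n = len(samples[0][0])
    w = [0.0] * n             # start from zero weights
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            output = 1 if activation > 0 else 0
            error = target - output
            # Shift weights toward the correct answer (perceptron rule).
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Logical AND as (inputs, target) pairs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
for x, t in data:
    out = 1 if sum(wi * xi for wi, xi in zip(weights, x)) + bias > 0 else 0
    print(x, "->", out, "(target", t, ")")
```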

Evolutionary approach

When constructing AI systems under this approach, the main attention is devoted to building the initial model and the rules by which it can change (evolve). The model itself can be built by a variety of methods: a neural network, a set of logical rules, or any other model. After that, the computer is started and, by testing the candidate models, it selects the best of them and generates new models from them according to a wide variety of rules. Among evolutionary algorithms, the genetic algorithm is considered the classic one.
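The following is a minimal sketch of a genetic algorithm on the toy "one-max" problem (maximizing the number of ones in a bit string); the population size, mutation rate, and generation count are illustrative assumptions chosen only to keep the example small.

```python
import random

def one_max(genome):
    # Fitness: how many bits are set to 1.
    return sum(genome)

def evolve(length=20, pop_size=30, generations=50, mutation_rate=0.02):
    # Initial population of random bit strings.
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents.
        population.sort(key=one_max, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover and mutation produce the next generation.
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        population = children
    return max(population, key=one_max)

best = evolve()
print(best, one_max(best))
```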

Imitation approach

This approach is classical for cybernetics, with one of its basic concepts being the black box. The object whose behavior is simulated is precisely such a "black box": it does not matter what is inside it or how it functions; what matters is that our model behaves the same way in similar situations. Thus, another human thinking pattern is modeled here: the ability to copy what others do without understanding why it is done. This ability often saves a person significant time, especially at the beginning of life.
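As a minimal sketch of the black-box idea, the snippet below never inspects the internals of the box: it only records input/output behavior and imitates it with a nearest-observation lookup. Both the box and the imitator here are hypothetical, chosen only to illustrate the premise that the model should answer the same way in similar situations.

```python
def black_box(x: float) -> float:
    # Internals are treated as unknown to the modeller.
    return 3.0 * x + 1.0

# Observe the box's behaviour on a few inputs.
observations = [(x, black_box(x)) for x in range(-5, 6)]

def imitate(x: float) -> float:
    # Answer as the box did on the closest situation seen so far.
    nearest_x, nearest_y = min(observations, key=lambda pair: abs(pair[0] - x))
    return nearest_y

print(black_box(2.3), imitate(2.3))  # similar answers without knowing the internals
```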

Hybrid intelligent systems attempt to unite these areas: expert inference rules can be generated by neural networks, while generative rules are obtained through statistical learning.