There are many approaches to AI. Beyond the "five major schools" we are familiar with, the author of the article discussed here subdivides the field further, sorting out at least 17 distinct methods and displaying them in a single infographic. The author argues that these AI methods are far from equal, and that backing the wrong camp can be fatal.
"Artificial Intelligence" is a notoriously vague term, in part because it was coined in 1955 under rather audacious circumstances:
We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire.
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.
- A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence; J. McCarthy et al., August 31, 1955.
More than half a century on, AI bears the imprint of many other disciplines. For a long time the field was ruled by the symbolists, whose rule-based systems did essentially no learning at all. In the 1980s a new kind of approach began to appear, which we call machine learning; most of it was what we might now describe as shallow learning. The biggest change of the last decade, however, is that we stumbled upon deep learning, which has proved startlingly effective and can seem almost omnipotent.
Of course, this is a greatly simplified history of AI. In reality the AI world contains many different methods and schools. Pedro Domingos describes five such "tribes" in his book The Master Algorithm. A user named solidrocketfuel on Y Combinator's Hacker News, not to be outdone, posted that AI has at least "21 different genres."
For anyone planning to work in AI, it is important to understand the differences between these schools and methods. AI is not a homogeneous field, but one of ongoing disputes between rival approaches. The picture below gives an overview:
Symbolists: Those who use rule-based symbolic logic to make inferences. Much of classical AI revolves around this approach: systems built in Lisp and Prolog fall into this category, as do Semantic Web methods using RDF and OWL. One of the most ambitious attempts was Cyc, started by Doug Lenat in the 1980s, which tried to encode our understanding of the world as logical rules. The main drawback of the approach is its brittleness: a rigid knowledge base tends to break down on edge cases, yet in the real world such ambiguity and uncertainty are unavoidable.
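To make the symbolist idea concrete, here is a minimal forward-chaining rule engine: knowledge is stored as explicit if-then rules over a fact base, and inference repeatedly applies the rules until nothing new can be derived. The facts and rules are invented toy examples, not from Cyc or any real system.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # fire a rule when all its premises are known and it adds something new
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("bird",), "has_feathers"),
    (("bird", "can_fly"), "can_migrate"),
]
derived = forward_chain({"bird", "can_fly"}, rules)
print(sorted(derived))
# → ['bird', 'can_fly', 'can_migrate', 'has_feathers']
```

The brittleness the text describes shows up immediately: a penguin would need explicit exception rules, because nothing here can express "birds usually fly."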
Evolutionists: Those who apply evolutionary processes such as crossover and mutation to evolve intelligent behavior. This approach is usually known as genetic algorithms (GA). GAs have even been used in place of gradient descent in deep learning, so it is not an isolated method. This school also studies cellular automata, such as Conway's Game of Life, and complex adaptive systems (CAS).
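The crossover/mutation loop can be sketched in a few lines. Below is a toy genetic algorithm for the classic OneMax problem (maximize the number of 1-bits in a string); the population size, mutation rate, and selection scheme are illustrative choices, not tuned or taken from any particular system.

```python
import random

random.seed(0)

def fitness(bits):
    # OneMax: fitness is simply the number of 1-bits
    return sum(bits)

def crossover(a, b):
    # single-point crossover: splice a prefix of one parent onto the other
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(bits, rate=0.02):
    # flip each bit independently with a small probability
    return [1 - bit if random.random() < rate else bit for bit in bits]

def evolve(n_bits=20, pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection (elitist)
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))   # close to the optimum of 20
```

Note that no gradient is ever computed; selection pressure alone drives the population toward the optimum, which is why GAs can substitute for gradient descent when the objective is not differentiable.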
Bayesians: A group that reasons with probability distributions and their dependencies. Probabilistic graphical models (PGMs) are this school's general-purpose tool, and its main computational workhorse is Monte Carlo sampling. The approach shares with the symbolist method the property that results come with some kind of interpretation; a further advantage is that the uncertainty of a result can be quantified. Edward is a library that combines this approach with deep learning.
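A small sketch of the Bayesian workflow with Monte Carlo sampling: start from a uniform prior over a coin's bias p, observe 7 heads in 3+7 flips, and estimate the posterior mean by self-normalized importance sampling from the prior. This toy setup (and the sample count) is my own illustration; the exact posterior here happens to be Beta(8, 4), whose mean is 8/12.

```python
import random

random.seed(42)

def likelihood(p, heads=7, tails=3):
    # probability of the observed flips given bias p (binomial, constant dropped)
    return p ** heads * (1 - p) ** tails

# draw candidate biases from the uniform prior, weight each by its likelihood
samples = [random.random() for _ in range(100_000)]
weights = [likelihood(p) for p in samples]

# self-normalized importance-sampling estimate of the posterior mean
posterior_mean = sum(p * w for p, w in zip(samples, weights)) / sum(weights)
print(posterior_mean)   # close to the exact value 8/12 ≈ 0.667
```

The same weighted samples also give an uncertainty estimate for free (e.g. a weighted variance), which is the "measure of uncertainty" advantage the text mentions.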
Kernel Conservatives: Before deep learning, one of the most successful methods was the support vector machine (SVM), which Yann LeCun once dismissed as glorified template matching. This school's signature technique, the kernel trick, turns nonlinearly separable problems into linearly separable ones. Its researchers prize the mathematical elegance of their methods, and regard the deep learning school as alchemists reciting incantations without understanding the consequences.
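The kernel trick can be verified directly in a few lines. A kernel k(x, y) computes an inner product in a higher-dimensional feature space without ever constructing that space; the sketch below checks this for the homogeneous quadratic kernel k(x, y) = (x·y)² on 2-D vectors, whose explicit feature map is φ(x) = (x₁², √2·x₁x₂, x₂²). The specific vectors are arbitrary illustrations.

```python
import math

def poly_kernel(x, y):
    # quadratic kernel: square of the ordinary dot product, O(d) work
    return sum(a * b for a, b in zip(x, y)) ** 2

def feature_map(x):
    # the explicit 3-D feature space the kernel implicitly works in
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

x, y = (1.0, 2.0), (3.0, 0.5)
implicit = poly_kernel(x, y)
explicit = sum(a * b for a, b in zip(feature_map(x), feature_map(y)))
print(implicit, explicit)   # equal up to float rounding
```

An SVM only ever needs such inner products between training points, so swapping the dot product for a kernel lets a linear separator act in the richer space at no extra cost, which is exactly how nonlinear problems become linear.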
Tree Huggers: Those who use tree-based models, such as random forests and gradient-boosted decision trees. These are essentially trees of logical rules that recursively split the domain to build a classifier. The approach remains remarkably effective in many Kaggle competitions, and Microsoft has proposed an approach that combines tree-based models with deep learning.
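The "recursively split the domain" idea looks like this in miniature: a toy tree builder on 1-D data that greedily picks the threshold minimizing misclassifications, then recurses on each side. This is a bare sketch of the principle, not a production CART or boosting implementation.

```python
def majority(labels):
    # most common label in a list
    return max(set(labels), key=labels.count)

def misclassified(points):
    # errors if this group were predicted by majority vote
    labels = [y for _, y in points]
    return sum(y != majority(labels) for y in labels)

def build_tree(points, depth=0, max_depth=3):
    """points: list of (x, label). Returns a (threshold, left, right) node or a leaf label."""
    labels = [y for _, y in points]
    if depth == max_depth or len(set(labels)) == 1:
        return majority(labels)
    best = None
    for t in sorted({x for x, _ in points}):      # try every observed value as a split
        left = [p for p in points if p[0] < t]
        right = [p for p in points if p[0] >= t]
        if left and right:
            err = misclassified(left) + misclassified(right)
            if best is None or err < best[0]:
                best = (err, t, left, right)
    if best is None:
        return majority(labels)
    _, t, left, right = best
    return (t, build_tree(left, depth + 1, max_depth),
               build_tree(right, depth + 1, max_depth))

def predict(tree, x):
    # walk the rule tree until reaching a leaf label
    while isinstance(tree, tuple):
        threshold, left, right = tree
        tree = left if x < threshold else right
    return tree

data = [(1, "a"), (2, "a"), (3, "b"), (4, "b")]
tree = build_tree(data)
print(predict(tree, 1.5), predict(tree, 3.5))   # → a b
```

Random forests and gradient boosting are ensembles of many such trees; the readable if-then structure of each split is why these models keep a foot in the symbolist camp.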
Connectionists: These researchers believe that intelligence emerges from simple, highly interconnected mechanisms. The first concrete form of this idea was the perceptron, which appeared in the late 1950s. The method has died out and been resurrected several times since; its latest incarnation is deep learning.
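The perceptron mentioned above fits in a few lines: a linear threshold unit trained with the classic error-driven update rule. Learning the logical AND of two inputs is my own toy example (integer weights and a unit learning rate keep the arithmetic exact).

```python
def train_perceptron(data, epochs=10):
    # weights and bias start at zero
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for x, target in data:
            # fire if the weighted sum crosses the threshold
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - pred          # -1, 0, or +1
            # classic perceptron rule: nudge weights toward the target
            w[0] += error * x[0]
            w[1] += error * x[1]
            b += error
    return w, b

# truth table for AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print(preds)   # → [0, 0, 0, 1]
```

A single unit like this cannot learn XOR, the limitation that famously stalled the connectionists; stacking such units into layers and training them end to end is, in essence, deep learning.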
Deep learning itself contains many sub-schools, including:
The Canadian Conspirators: Hinton, LeCun, Bengio, and colleagues, who advocate end-to-end deep learning with no manual feature engineering.
The Swiss Posse: Essentially Schmidhuber's LSTM school, which solves perception problems with combinations of RNNs. This group also claims to have invented GANs, which LeCun has called "the coolest idea in machine learning in the last 20 years."
The British AlphaGoists: Those who believe that AI = deep learning + reinforcement learning, although LeCun has quipped that reinforcement learning is merely the cherry on the cake. DeepMind is the main proponent of this faction.
Predictive Learners: A term Yann LeCun uses for unsupervised learning, which remains an unsolved area of AI. However, I tend to believe that the solution lies in "meta-learning."