Artificial neural network

1940

In the late 1940s, D. O. Hebb created a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning.
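
Hebb's hypothesis is often summarized as "cells that fire together wire together": a connection is strengthened in proportion to the correlation of pre- and post-synaptic activity. A minimal sketch in Python of this classic rule; the learning rate and activity values here are illustrative assumptions, not Hebb's original formulation:

    import numpy as np

    eta = 0.1                      # illustrative learning rate
    x = np.array([1.0, 0.0, 1.0])  # pre-synaptic activity on three inputs
    y = 1.0                        # post-synaptic neuron fires
    w = np.zeros(3)

    # Hebb's rule: dw_i = eta * x_i * y, so weights grow with
    # correlated pre/post activity.
    w += eta * x * y
    print(w)  # -> [0.1 0.  0.1]: active inputs strengthened, inactive unchanged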

1960

The basics of continuous backpropagation were derived in the context of control theory by Kelley in 1960, using principles of dynamic programming.

1961

Bryson derived the basics of continuous backpropagation in 1961, also in the context of control theory and using principles of dynamic programming.

1965

The first functional networks with many layers were published by Ivakhnenko and Lapa in 1965, as the Group Method of Data Handling.

1970

Research stagnated after Minsky and Papert (1969), who showed that basic perceptrons were incapable of computing the exclusive-or (XOR) function and that contemporary computers lacked sufficient power to process useful neural networks. In 1970, Seppo Linnainmaa published the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions.
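
The XOR limitation is easy to verify numerically: the four XOR input pairs are not linearly separable, so no single linear threshold unit can classify them all correctly. A minimal brute-force sketch in Python; the weight grid is an illustrative assumption, but the conclusion holds for any weights:

    import itertools
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])  # XOR truth table

    # Scan a grid of weights and biases for a single linear threshold unit.
    grid = np.linspace(-2, 2, 41)
    found = False
    for w1, w2, b in itertools.product(grid, repeat=3):
        out = (X[:, 0] * w1 + X[:, 1] * w2 + b > 0).astype(int)
        if np.array_equal(out, y):
            found = True
            break

    print(found)  # -> False: no linear threshold reproduces XOR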

1973

In 1973, Dreyfus used backpropagation to adapt the parameters of controllers in proportion to error gradients.
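
"In proportion to error gradients" is the core idea of gradient descent: each parameter is nudged opposite its partial derivative of the error. A minimal sketch, assuming a one-parameter controller gain k and a quadratic error (both illustrative, not Dreyfus's setup):

    # Tune a controller gain k so that the output k * u reaches a setpoint.
    u, setpoint = 2.0, 10.0   # illustrative plant input and target
    k, lr = 0.0, 0.05         # initial gain and learning rate

    for _ in range(100):
        error = k * u - setpoint   # signed output error
        grad = 2 * error * u       # d(error^2)/dk by the chain rule
        k -= lr * grad             # update proportional to the gradient

    print(round(k, 3))  # -> ~5.0, since 5.0 * 2.0 = 10.0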

1980

The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled increasing transistor counts in digital electronics; this provided more processing power for the development of practical artificial neural networks in the 1980s.

1982

In 1982, Paul Werbos applied Linnainmaa's AD method to neural networks in the way that became widely used.

Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision-making tasks. Self-learning in neural networks was introduced in 1982 with the Crossbar Adaptive Array (CAA), a neural network capable of self-learning.

1986

In 1986, Rumelhart, Hinton and Williams showed that backpropagation learned interesting internal representations of words as feature vectors when trained to predict the next word in a sequence.
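
Backpropagation applies the chain rule layer by layer, which lets a network with a hidden layer solve problems such as XOR that a single perceptron cannot (see the 1970 entry). A minimal sketch of a two-layer network trained by backpropagation; the layer sizes, seed and learning rate are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)               # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)    # chain rule, output layer
        d_h = (d_out @ W2.T) * h * (1 - h)     # chain rule, hidden layer
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(0)

    # Should approach [0, 1, 1, 0]; convergence may vary with the seed.
    print(out.round(2).ravel())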

1992

In 1992, max-pooling was introduced to help with least-shift invariance and tolerance to deformation to aid 3D object recognition.
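
Max-pooling reduces each local patch of a feature map to its maximum, so small shifts of the input leave the pooled output largely unchanged. A minimal 2x2-pooling sketch in Python; the array sizes are illustrative:

    import numpy as np

    def max_pool_2x2(fmap):
        """Non-overlapping 2x2 max-pooling of a 2D feature map."""
        h, w = fmap.shape
        return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

    fmap = np.zeros((4, 4))
    fmap[1, 0] = 1.0                    # one strong activation
    shifted = np.roll(fmap, 1, axis=1)  # shift it one pixel to the right

    # Both pooled maps are identical: the one-pixel shift stays inside
    # the same 2x2 window, so max-pooling absorbs it.
    print(max_pool_2x2(fmap))
    print(max_pool_2x2(shifted))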

2009

Between 2009 and 2012, ANNs began winning prizes in ANN contests, approaching human-level performance on various tasks, initially in pattern recognition and machine learning.

Graves et al.'s long short-term memory (LSTM) networks won three competitions in connected handwriting recognition in 2009 without any prior knowledge about the three languages to be learned.

2012

In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images.

Ciresan and colleagues built the first pattern recognizers to achieve human-competitive/superhuman performance on benchmarks such as traffic sign recognition (IJCNN 2012).




All text is taken from Wikipedia. Text is available under the Creative Commons Attribution-ShareAlike License.

Page generated on 2021-08-05