Sunday, 5 August 2018

Artificial Intelligence (Part VIII): Deep Learning in Artificial Intelligence


Know in Detail About Deep Learning in Artificial Intelligence


Here we continue with the last part of our blog series on artificial intelligence. Those who missed the seventh part can read it here; it will help you connect with this final part, which discusses deep learning in artificial intelligence. Let us explore the topic in more detail. In the words of Steve Polyak:

"Before we work on artificial intelligence why don’t we do something about natural stupidity? So, how would you weigh in? What’s your opinion about artificial intelligence?"

What is Deep Learning?


Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a "credit assignment path" (CAP) depth of seven. Many deep learning systems need to be able to learn chains ten or more causal links in length. Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.

According to one overview, the expression "deep learning" was introduced to the machine learning community by Rina Dechter in 1986 and gained traction after Igor Aizenberg and colleagues introduced it to artificial neural networks in 2000. The first functional deep learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965. These networks are trained one layer at a time. Ivakhnenko's 1971 paper describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning.

Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980. In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application, CNNs already processed an estimated 10% to 20% of all the checks written in the US. Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions. CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by DeepMind's "AlphaGo Lee", the program that beat a top Go champion in 2016.
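To make the idea of CAP depth concrete, here is a minimal sketch in Python (NumPy) of a feedforward network with six hidden layers and one output layer, i.e. a CAP depth of seven. The layer sizes, ReLU activation and random initialisation are illustrative assumptions on our part, not details taken from any of the works mentioned above.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    rng = np.random.default_rng(0)

    # Input of size 4, six hidden layers of 16 units, output layer of 3 units.
    # Six hidden layers + one output layer gives a CAP depth of seven.
    sizes = [4, 16, 16, 16, 16, 16, 16, 3]
    layers = [(rng.normal(scale=0.1, size=(a, b)), np.zeros(b))
              for a, b in zip(sizes[:-1], sizes[1:])]

    def forward(x):
        # Every weighted layer adds one link to the causal chain the
        # network can represent.
        for W, b in layers[:-1]:
            x = relu(x @ W + b)
        W, b = layers[-1]
        return x @ W + b  # linear output layer

    print(forward(np.ones(4)))  # three output values

Each pass through the loop is one link in the credit assignment path, which is why adding hidden layers makes the network "deeper" in exactly the sense described above.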

Recurrent Neural Network


Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs), which are in theory Turing complete and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning. RNNs can be trained by gradient descent but suffer from the vanishing gradient problem. In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network, published by Hochreiter & Schmidhuber in 1997. LSTM is often trained by connectionist temporal classification (CTC). At Google, Microsoft and Baidu this approach has revolutionised speech recognition. For example, in 2015, Google's speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users. Google also used LSTM to improve machine translation, language modelling and multilingual language processing. LSTM combined with CNNs also improved automatic image captioning and a plethora of other applications.
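Since LSTM does so much of the work described above, a minimal single LSTM cell, written in the same NumPy style as the earlier sketch, may help show what its gates actually do. The class name, sizes and initialisation are our own illustrative assumptions; real systems use optimised library implementations rather than hand-rolled cells like this.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class LSTMCell:
        # A single LSTM cell: input, forget and output gates plus a
        # candidate cell state, each computed over the concatenated
        # [current input, previous hidden state] vector.
        def __init__(self, n_in, n_hidden, seed=0):
            rng = np.random.default_rng(seed)
            shape = (n_in + n_hidden, n_hidden)
            self.Wi, self.Wf, self.Wo, self.Wc = (
                rng.normal(scale=0.1, size=shape) for _ in range(4))
            self.bi, self.bf, self.bo, self.bc = (
                np.zeros(n_hidden) for _ in range(4))

        def step(self, x, h_prev, c_prev):
            z = np.concatenate([x, h_prev])
            i = sigmoid(z @ self.Wi + self.bi)      # input gate
            f = sigmoid(z @ self.Wf + self.bf)      # forget gate
            o = sigmoid(z @ self.Wo + self.bo)      # output gate
            c_hat = np.tanh(z @ self.Wc + self.bc)  # candidate cell state
            c = f * c_prev + i * c_hat              # cell state keeps long-range memory
            h = o * np.tanh(c)                      # new hidden state
            return h, c

    # The unrolled depth grows with the sequence length, so a 10-step
    # input behaves like a 10-layer-deep computation.
    cell = LSTMCell(n_in=3, n_hidden=5)
    h, c = np.zeros(5), np.zeros(5)
    for x in np.random.default_rng(1).normal(size=(10, 3)):
        h, c = cell.step(x, h, c)
    print(h)

The forget gate is the design choice that matters here: because the cell state is updated additively rather than rewritten at every step, gradients can flow across long sequences, which is how LSTM sidesteps the vanishing gradient problem mentioned above.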

