Feedforward Neural Networks (FFNNs)
Feedforward neural networks were the first type developed and remain the most common type of neural network in use today. Information moves through an FFNN in one direction only, with no feedback loops. These networks range from simple to complex depending on how many hidden layers they contain; multiple hidden layers allow multiple stages of processing (such networks are known as deep feedforward neural networks). A single-layer FFNN, known as a perceptron, consists of only an input layer and an output layer and is the simplest form. Feedforward networks are commonly used for object recognition. Shallow FFNNs compute quickly but lack the capacity required for deep learning.
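The one-directional flow described above can be sketched in a few lines of NumPy. The layer sizes, weights, and activation function here are illustrative placeholders, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common hidden-layer activation: keeps positives, zeroes negatives.
    return np.maximum(0.0, x)

def feedforward(x, W1, b1, W2, b2):
    # Information flows strictly input -> hidden -> output; nothing loops back.
    hidden = relu(W1 @ x + b1)
    return W2 @ hidden + b2

x = rng.normal(size=3)                            # input layer: 3 features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)     # hidden layer: 4 neurons
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)     # output layer: 2 neurons
y = feedforward(x, W1, b1, W2, b2)
print(y.shape)  # (2,)
```

Removing the hidden layer (a single `W @ x + b`) gives the perceptron case; stacking more hidden layers gives the deep feedforward case.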
FFNNs often use a logistic (sigmoid) function, which maps a numerical value to a probability between 0 and 1 that can be thresholded into a "yes" or "no" decision. This makes the function useful in classification and decision making. A special type of FFNN called a radial basis function (RBF) neural network activates each hidden neuron according to the distance of the input from that neuron's center point. RBF networks have been used in power system restoration to determine repair priority.
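Both functions mentioned above are short enough to show directly; the `gamma` width parameter of the radial basis unit is an illustrative choice:

```python
import numpy as np

def logistic(z):
    # Squashes any real number into (0, 1); thresholding at 0.5
    # yields the "yes"/"no" decision described above.
    return 1.0 / (1.0 + np.exp(-z))

def rbf(x, center, gamma=1.0):
    # Radial basis unit: activation depends only on the distance
    # of the input x from the unit's center.
    return np.exp(-gamma * np.sum((x - center) ** 2))

print(logistic(0.0))                                     # 0.5, the decision boundary
print(rbf(np.array([1.0, 1.0]), np.array([1.0, 1.0])))   # 1.0, maximal at the center
```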
Recurrent Neural Networks (RNNs)
Recurrent neural networks allow information to flow in loops: as each new input arrives, neurons also receive data retained from the previous cycle. RNNs can perform more complicated tasks than FFNNs, but they are slower. Long short-term memory (LSTM) RNNs contain memory cells that hold data across many more cycles than traditional RNNs can. This memory makes RNNs well suited to sequence tasks such as sentence building in text and speech recognition.
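A minimal sketch of the recurrence, assuming a plain (non-LSTM) cell with random placeholder weights: the hidden state `h` is what carries data from one cycle into the next.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x_t, h_prev, Wx, Wh, b):
    # The hidden state h_prev feeds back in, giving the network its memory
    # of previous cycles.
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

Wx = rng.normal(size=(4, 3))   # input-to-hidden weights
Wh = rng.normal(size=(4, 4))   # hidden-to-hidden (feedback) weights
b = np.zeros(4)
h = np.zeros(4)                # initial hidden state
for t in range(5):             # process a 5-step sequence
    x_t = rng.normal(size=3)
    h = rnn_step(x_t, h, Wx, Wh, b)   # h loops back into the next step
print(h.shape)  # (4,)
```

An LSTM replaces `rnn_step` with a gated cell that decides what to keep, forget, and output, which is what lets it retain information for much longer.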
Convolutional Neural Networks (CNNs)
In convolutional neural networks, the neurons in each layer are arranged in three dimensions (width, height, and depth), whereas conventional neural networks use a two-dimensional arrangement. CNNs contain convolutional layers, in which each neuron processes only a small patch of the input rather than the whole of it. Information is processed in these local patches, then pooled into a smaller summary and sent toward the output. CNNs are commonly used for complex data processing like machine vision and self-driving vehicles. They are the most common neural networks used in AI applications.
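The patch-then-pool idea can be sketched with plain NumPy; the 2x2 kernel here is an illustrative edge detector, not part of any particular network:

```python
import numpy as np

def conv2d(image, kernel):
    # Each output value is computed from only a small patch of the input,
    # unlike a fully connected layer that sees everything at once.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    # Pooling keeps only the strongest response in each region,
    # shrinking the feature map before it moves toward the output.
    h, w = x.shape
    trimmed = x[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, -1.0], [1.0, -1.0]])   # crude vertical-edge detector
features = max_pool(conv2d(image, kernel))
print(features.shape)  # (2, 2): 6x6 input -> 5x5 feature map -> pooled 2x2
```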
Generative Adversarial Networks (GANs)
Generative adversarial networks are paired networks that train each other. One side, the generator, produces synthetic samples from random input; the other side, the discriminator, judges whether each sample is real or generated, given the problem the network is trying to solve. Because the two sides are constantly learning from each other, GANs are harder to deceive than other networks. GANs are used in image processing and analysis, including tumor diagnosis.
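A structural sketch of the two sides, with untrained placeholder parameters: a real GAN would repeatedly update the generator to fool the discriminator while the discriminator learns to resist, but the division of labor looks like this.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def generator(noise, w, b):
    # Maps random noise to a synthetic sample (here, a single number).
    return w * noise + b

def discriminator(sample, v, c):
    # Outputs the probability that a sample is real rather than generated.
    return logistic(v * sample + c)

real_sample = 5.0                                   # stand-in for real data
fake_sample = generator(rng.normal(), w=1.0, b=0.0)  # synthetic sample
p_real = discriminator(real_sample, v=1.0, c=0.0)
p_fake = discriminator(fake_sample, v=1.0, c=0.0)
# Training would push p_real toward 1 and p_fake toward 0, while the
# generator adjusts w and b to push p_fake back up.
print(p_real, p_fake)
```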
Modular Neural Networks (MNNs)
Modular neural networks consist of two or more neural networks, often of different types, working together to perform a complex task. Each network operates independently on a sub-task aimed toward the same output. An MNN is faster than a single network attempting the whole task on its own, and it can perform exceptionally complex tasks that individual networks cannot. Research is underway using MNNs to develop biometric and human emotion recognition systems.
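The arrangement can be sketched as independent "expert" sub-networks whose outputs a simple combiner merges. The sub-task names and the averaging combiner are illustrative assumptions, not drawn from any particular system:

```python
import numpy as np

rng = np.random.default_rng(0)

def expert(x, W):
    # A small independent sub-network; each expert has its own weights.
    return np.tanh(W @ x)

def combine(outputs, weights):
    # Weighted average of the experts' outputs into one shared result.
    return sum(w * o for w, o in zip(weights, outputs)) / sum(weights)

x = rng.normal(size=3)
expert_a = expert(x, rng.normal(size=(2, 3)))   # e.g. a face-shape sub-network
expert_b = expert(x, rng.normal(size=(2, 3)))   # e.g. a skin-texture sub-network
y = combine([expert_a, expert_b], weights=[0.5, 0.5])
print(y.shape)  # (2,)
```

Because the experts never exchange data, they can run in parallel, which is the source of the speed advantage noted above.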