Discussion surrounding artificial intelligence (AI) has grown immensely since 2015. Tech billionaires exchange differing opinions about its safety on Twitter. Research and development have reached new heights at IBM and Google. Even the media portrays AI on a personal level, and corporations see it as an immensely helpful tool, especially in IIoT applications.
For all its hype, however, the vocabulary of AI is murky. Terms like machine learning and deep learning are often used interchangeably. Other confusing labels, such as simple AI versus limited memory AI, appear alongside words like robots and virtual agents. With the right vocabulary, one can separate fact from fiction and possibility from Hollywood.
Artificial intelligence is the umbrella term for technology that imitates human intelligence. Siri's speech recognition, the iPhone X's Face ID facial recognition and Alexa's virtual agency are all products of AI design. AIs use high-powered GPUs with vast storage to handle floods of data and mimic "parallel processing": the simultaneous processing of multiple external inputs. The end goal is for machines to exhibit independent decision making rather than merely following their programming.
One of AI's tools is "machine learning," which rose to prominence in the 1990s. It is the technique behind preference guessing, limited facial recognition and other "narrow" AI applications that have a single task to fulfill. In 2012, Andrew Ng of Stanford and Google extended machine learning into "deep learning" to pursue more advanced, "general" AI: a computer that can adapt to new situations and solve many kinds of problems.
In machine learning, computer scientists and technicians initially set up a machine with software rules and "neural networks." Like human neural pathways, these computational pathways process information, building on multiple interpretations to approximate a right answer; unlike their biological counterparts, they are built from designed, discrete layers with directed propagation.
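That layered, one-directional structure can be sketched in a few lines of Python. The layer sizes, weights and sigmoid squashing function below are illustrative assumptions, not any particular product's design:

```python
import math

def sigmoid(x):
    # Squash a neuron's weighted sum into the range (0, 1).
    return 1 / (1 + math.exp(-x))

def forward(layers, inputs):
    """Propagate inputs through discrete layers in one direction only."""
    activation = inputs
    for weights, biases in layers:
        # Each neuron combines every output of the previous layer.
        activation = [
            sigmoid(sum(w * a for w, a in zip(neuron_weights, activation)) + b)
            for neuron_weights, b in zip(weights, biases)
        ]
    return activation

# Toy network: 2 inputs -> 2 hidden neurons -> 1 output.
toy_net = [
    ([[0.5, -0.4], [0.3, 0.8]], [0.0, 0.1]),  # hidden layer
    ([[1.2, -0.7]], [0.05]),                  # output layer
]
output = forward(toy_net, [1.0, 0.0])
```

Each bracketed pair is one layer's weights and biases; stacking more pairs deepens the network.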
They then feed data sets to the machines and "train" them: using learning algorithms, the computers adjust themselves until they produce the desired outputs. If trained successfully, they can make accurate predictions about upcoming and previously unseen data sets.
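The train-then-predict loop can be sketched with a single artificial neuron learning the logical OR function under the classic perceptron rule; both the neuron and the toy data set are illustrative assumptions, not the article's specific systems:

```python
def predict(w, b, inputs):
    # Fire (output 1) when the weighted sum of inputs clears the threshold.
    return 1 if sum(wi * xi for wi, xi in zip(w, inputs)) + b > 0 else 0

def train(samples, epochs=10, lr=0.5):
    """Perceptron rule: nudge the weights toward each wrong answer's fix."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(w, b, inputs)
            w = [wi + lr * error * xi for wi, xi in zip(w, inputs)]
            b += lr * error
    return w, b

# Training data: the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
# If training succeeded, the neuron now predicts every sample correctly.
trained_ok = all(predict(w, b, x) == y for x, y in data)
```

After training, the neuron generalizes the pattern it was fed rather than looking the answers up.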
"Deep learning," on the other hand, employs many-layered "deep neural networks." The network grows in both the size and the number of its layers, and data sets expand from thousands to millions of inputs. Each bit of data goes through multiple stages of processing, each stage refining the result toward a more accurate outcome. In this way, machines learn successive levels of representation and distinguish between images, sounds and text.
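The jump from a shallow network to a deep one amounts to stacking many layers, each re-representing the previous layer's output. A minimal sketch, assuming arbitrary layer sizes and random untrained weights purely for illustration:

```python
import math
import random

random.seed(0)  # reproducible toy weights

def sigmoid(x):
    # Squash a weighted sum into the range (0, 1).
    return 1 / (1 + math.exp(-x))

def make_layer(n_in, n_out):
    # Hypothetical small random weights; real networks learn these by training.
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    return weights, [0.0] * n_out

def deep_forward(inputs, layer_sizes):
    """'Deep' means many stacked layers: each one processes the previous
    layer's output at a higher level of representation."""
    sizes = [len(inputs)] + layer_sizes
    activation = inputs
    for n_in, n_out in zip(sizes, sizes[1:]):
        weights, biases = make_layer(n_in, n_out)
        activation = [sigmoid(sum(w * a for w, a in zip(ws, activation)) + b)
                      for ws, b in zip(weights, biases)]
    return activation

result = deep_forward([0.2, 0.9, 0.4], [8, 8, 8, 1])  # four stacked layers
```

Adding layers to `layer_sizes` deepens the stack without changing any other code, which is why depth scales so readily in practice.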
AI machines exist on four levels: simple (reactive), limited memory, theory of mind and self-awareness. No publicly known device has a theory of mind, and self-awareness is the pipe dream of the field; such creations would include fully autonomous robots and cloud-based virtual assistants like Jarvis from Iron Man. For now, society must settle for AI devices that recognize images better than humans and create medical treatment plans as effectively as a human doctor.