Technically, this is a mighty big question, and the answer varies depending on whom you ask. Marketers, along with their content writing and copywriting associates, tend to overuse the term AI when their clients are actually deploying unsupervised machine learning or deep learning algorithms.
In terms of machine learning vs. AI, data scientists, machine learning engineers, and academics hold differing opinions. However, the most common framework holds that machine learning operates as a subset of AI, and deep learning is (in turn) a subset of machine learning. Essentially, both machine learning and deep learning are methods for building an artificially intelligent system.
This makes sense considering that machines need to be taught the who, what, when, where, and how of learning before they can become autonomous systems. Unfortunately, for those of us who prefer hard-and-fast definitions and clear distinctions, there is no universally agreed-upon paradigm for what constitutes an “intelligent” system. Everyone, including AI researchers, is on a continual learning path that amounts to an “I know it when I see it” frame of reference.
Amidst the noise of academic argument, there is an implicitly shared vision of what AI is or will be. As the former dean of Carnegie Mellon’s Computer Science Department, Andrew Moore, stated in a 2017 interview with Forbes: “Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.”
Who is actually using AI?
If we use Moore’s definition of AI, we can get a clearer picture of who is actually using AI vs. those who are claiming to be “AI-empowered” but are merely pumping up their SEO for marketing purposes.
Consider self-driving cars. Safe driving requires an aggregation of intelligent functions (even for so-called “bad drivers”) and instantaneous decision-making based on a variety of external inputs and cognitive processes. The key here is autonomous decision-making on the part of the machine, which has varying levels of control over the vehicle. Driving conditions can change rapidly and carry many interdependencies, including predicting the behaviour of other drivers. Thus, smart cars are genuinely utilising AI, as the car (and its requisite algorithms) “behave in ways that we thought required human intelligence.”
AI moves beyond simple if-then recommendation systems: there is a behavioural component, where the system not only chooses the best option but independently acts on that decision. The distinguishing feature is autonomy. AI learns, analyses, and then reacts based on prior learning, weighting current conditions against likely future ones.
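The contrast can be sketched in a few lines of code. This is purely illustrative, with made-up names and values: the first function is a static if-then rule that only suggests, while the second weighs learned value estimates against current conditions and acts on the best option itself.

```python
# Illustrative sketch only: a fixed if-then rule vs. a policy that acts
# autonomously using learned value estimates. All names and numbers are
# invented for the example.

def if_then_recommender(battery_level: float) -> str:
    # Static rule: the system merely suggests; a human decides and acts.
    return "suggest charging" if battery_level < 0.2 else "no action"

def autonomous_policy(battery_level: float, learned_value: dict) -> str:
    # Learned values (e.g. from prior experience) estimate how good each
    # action is likely to be given current conditions; the system then
    # acts on the best option rather than merely recommending it.
    scores = {
        "drive to charger": learned_value["charge"] * (1 - battery_level),
        "continue route": learned_value["continue"] * battery_level,
    }
    return max(scores, key=scores.get)  # act on the highest-valued option

print(autonomous_policy(0.1, {"charge": 1.0, "continue": 0.8}))
# With a nearly empty battery, charging scores highest, so the policy
# chooses "drive to charger" on its own.
```

The point of the sketch is the second function’s last line: the decision is weighted by current conditions and executed without a human in the loop.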
Chatbots are an example of primitive AI (though they’re improving). The customer poses a question, and the AI attempts to understand it and either provide the correct information or route the customer to a human. We’re still superior when it comes to the nuance of human communication (if you’ve ever worked in customer service or IT support, you understand this completely). However, the distributed computational capacity AI relies on supports rapid learning and, eventually, faster behavioural adaptation.
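The chatbot flow just described (understand the question, answer if possible, otherwise hand off to a person) can be sketched minimally. The intents, answers, and keyword matching below are illustrative assumptions; a real chatbot would use a trained intent classifier rather than substring matching.

```python
# Minimal sketch of the chatbot flow: try to understand the question,
# answer if an intent matches, otherwise route to a human.
# The FAQ entries and matching logic are invented for illustration.

FAQ = {
    "reset password": "Use the 'Forgot password' link on the login page.",
    "billing": "Invoices are available under Account > Billing.",
}

def handle_question(question: str) -> str:
    q = question.lower()
    # Naive "understanding": a keyword match stands in for a real intent model.
    for intent, answer in FAQ.items():
        if intent in q:
            return answer
    # No confident match: hand off to a person, as current chatbots often do.
    return "Routing you to a human agent."

print(handle_question("How do I reset password?"))
# Matches the "reset password" intent and returns its canned answer.
```

The fallback branch is where today’s chatbots most often end up, which is exactly the human-superiority gap the paragraph above describes.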
Despite the sci-fi proclamations of AI eventually becoming our machine overlords, we’re still many years away from completely autonomous AI, which can only emerge if we cede control. In the meantime, as AI evolves (and it should, if we’re benchmarking it against the threshold of human intelligence), it can be leveraged to improve numerous aspects of the human world it exists within.