While AI adoption is a hot topic in the business and tech sectors, most of the media -- well-known publications included -- aren't well versed in its intricacies. Adding to this issue are differing opinions among academics and other AI experts as to what constitutes AI. So, you'll frequently see a general description that comes down to machines demonstrating the same, or similar, cognitive functions as their human creators (e.g., perceiving, reasoning, learning, and problem-solving).
Why is this important, and what does this have to do with the so-called AI talent shortage?
The gap between what AI actually "is" and what we want it to be leaves a large opening for marketing departments -- and publications chasing higher click-through rates -- to make bold claims about who is using AI and who the AI experts are. Similarly, given the high salary potential and the topic's "hotness," many would-be AI "experts" are self-reporting their expertise (a huge red flag).
This creates and perpetuates a preponderance of noise that distorts the statistics on the AI talent shortage. Plus, anyone can claim to be an "AI specialist" on their LinkedIn profile (just because someone has a degree in computer science, that doesn't automatically translate into AI expertise).
Consequently, we need valid measurements for determining (a) the difference between advanced machine learning methods and AI, and (b) who has achieved a certain level of AI knowledge and experience. For now, the focus is on the latter rather than the former.
The AI Skillset
If we accept the premise that AI is essentially a machine-generated version of human cognition, then we have a clearer view of the requisite AI skillset. Beginning with the machine end of the AI equation, AI experts should have advanced math, statistics, and programming education; these are the fundamental components of machine language/communications.
But, our definition of AI extends into human cognition as well: decision-making, perception, language (human), meaningfulness, rationality and irrationality, heuristics, rhetoric, social theory, game theory, etc. AI experts should also have a solid understanding of human cognition; it's the primary substantive pattern for AI. We cannot escape it, and perhaps AI will eventually reveal its own psychology. But, for now, it's an AI cornerstone. Indeed, many, if not all, AI degree programs require such coursework.
Does this mean that extensive hands-on experience with AI research should be discounted in favor of formal education? No. But, claims of being an AI expert should be met with a healthy amount of skepticism (trust, but verify). The same can be said for clickbait news regarding “advances in AI”, reports of companies “using” AI, and the AI talent shortage.
So, is there such a shortage?
For most, acquiring the requisite STEM skills is arduous enough -- even when it isn't cognitively rigorous, it is time-consuming. Understanding human cognition, applying it to machines, and then recalibrating as needed adds yet another dimension of expertise to the AI skillset mix.
Consequently, the answer is yes: there is an AI talent shortage, just as there is a shortage of enterprises that are currently implementing AI (rather than advanced machine learning). The world is still learning about where AI will take us, and the fear-mongering is rampant. That's what happens when speculation outpaces a scientific approach to assessing what AI "is" and who qualifies as an expert.
Some may protest that this is "gatekeeping." But considering that AI continues to permeate all areas of our lives -- including, but not limited to, criminal profiling and health monitoring -- we need to ensure that those who are creating and monitoring AI systems are highly qualified to do so.