Despite the prevalence of the phrase artificial intelligence in today’s discourse, you might be surprised to learn that the term has no single, universally accepted definition. The truth is, what we call AI today is really a collection of technologies and computational approaches to solving increasingly complex problems with computing power.
But while today’s AI is capable of doing some incredible things, it’s a long way off from the artificial intelligence we’ve long seen depicted in science fiction. Today’s AI isn’t sentient — no matter what some people might claim — and is a long way off from trying to take over the world.
How does the Defense Department define artificial intelligence?
The topic of AI has been the subject of discussion within America’s defense and government apparatus more than once, as the lack of a formal definition of AI makes drafting legislation that applies to it extremely difficult.
In order to regulate a technology, especially one as broadly encompassing as AI, the government needs to be able to differentiate it from other technologies through a useful definition.
The first federal definition of AI came in the 2019 National Defense Authorization Act, commonly known as the Defense Budget. It leveraged a commonly accepted framework of four goals for these systems, first established by Stuart J. Russell and Peter Norvig in their 1995 textbook Artificial Intelligence: A Modern Approach.
Per the Defense Department’s budgetary documents, AI can be summed up as systems that:
- Think like humans (e.g., neural networks)
- Act like humans (e.g., natural language processing)
- Think rationally (e.g., logic solvers)
- Act rationally (e.g., intelligent software agents embodied in robots)
Of course, this, like many common definitions of AI, is such a broadly encompassing set of parameters that the title Artificial Intelligence is really given to a large and varied group of different technologies, including:
- Machine learning (ML)
- Deep learning (DL)
- Generative adversarial networks (GANs)
- Supervised learning algorithms
- Reinforcement learning (RL) algorithms
The four categories of Artificial Intelligence
The level of advancement offered by any of these AI systems or agents is often further lumped into one of four categories as defined by Arend Hintze, a researcher and professor of integrative biology at Michigan State University.
These categories give us the means to rate the level of complexity and capability offered by an AI system or agent. In a way, you can think of them as levels — as AI becomes more advanced and capable, it progresses from Level 1 to Level 2, and so on.
These categories of AI capability are:
- Reactive Machines AI: AI that reacts to some input with some output, but cannot learn.
- Limited Memory AI: AI that stores data to make better predictions, enabling some degree of machine learning.
- Theory of Mind AI: AI that is closer to human levels of cognition, with some degree of self-awareness that falls short of true sentience.
- Self-Aware AI: The fully realized boogeyman or savior many already mistakenly believe AI has become. This is the complex AI we’ve seen depicted in science fiction.
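The difference between the first two categories is the only one that matters for today’s systems, and it can be made concrete with a toy example. The sketch below (an illustration, not anything from the Defense Department’s framework; the agents, names, and thresholds are invented for clarity) contrasts a Reactive Machine, which maps input to output through a fixed rule, with a Limited Memory agent, which stores past data and lets that data shift its behavior:

```python
# Toy "thermostat agents" mapping a temperature reading to a fan setting.
# These are illustrative stand-ins, not real AI systems.

def reactive_agent(temp_c: float) -> str:
    """Reactive Machine: a fixed input-to-output rule. It never learns."""
    return "fan_on" if temp_c > 25.0 else "fan_off"

class LimitedMemoryAgent:
    """Limited Memory: stores past readings and uses them to adjust its
    threshold -- a crude stand-in for the learning real ML systems do."""

    def __init__(self, threshold: float = 25.0):
        self.threshold = threshold
        self.history: list[float] = []

    def observe(self, temp_c: float) -> None:
        self.history.append(temp_c)
        # Re-center the threshold on the running average of observed data.
        self.threshold = sum(self.history) / len(self.history)

    def act(self, temp_c: float) -> str:
        return "fan_on" if temp_c > self.threshold else "fan_off"

agent = LimitedMemoryAgent()
for reading in (20.0, 30.0, 40.0):
    agent.observe(reading)

print(agent.threshold)                        # 30.0: stored data shifted it
print(reactive_agent(28.0), agent.act(28.0))  # fan_on fan_off
```

After seeing three readings, the limited-memory agent answers the same 28-degree input differently than the reactive one, because its stored history has moved its threshold. That dependence on accumulated data is, in miniature, what puts today’s machine-learning systems in the second category rather than the first.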
Within the framework of this rubric, today’s AI systems fall into the second category, Limited Memory, which is not as impressive as many may have thought. According to one Defense Department assessment published in 2017, most experts believe we’re still decades away from fielding truly sentient AI, despite our rapid progress in recent years.
In other words, there’s a reasonable argument to be made that AI is more a marketing term in common discourse than a technical one. The truth is, the systems and algorithms we call AI are actually a variety of different technologies leveraged toward different ends, with their common factor being the overarching intent to meet the four criteria laid out by the 2019 Defense Budget: to think and act like rational humans.
Of course, there’s a big difference between acting like a rational human and actually thinking like one, and while we’re getting close to the former, we’re still a long way off from the latter.