Blog

Artificial Intelligence vs Computational Intelligence


Author: Jeff, Senior Research Scientist
Five-minute read

Suffice it to say, Artificial Intelligence (AI) is enjoying its moment in the sun. From science to games to art, AI has worked its way into nearly everything. But what about AI’s oft-overlooked cousin, Computational Intelligence (CI)? The two terms are often used synonymously, which makes sense given that CI is considered a sub-field of AI: the term AI dates back to the 1956 Dartmouth workshop, and CI grew out of the same research community, with the name taking hold in the early 1990s. However, AI and CI are quite different in form, function and application. This blog aims to shed some light on the AI vs. CI debate and on how AIS is using CI to improve context-aware optimization.

AI

From a 50,000-foot view, AI is a branch of computer science that aims to give machines the ability to make decisions based on pre-trained knowledge. In other words, AI aims to give an algorithm a facsimile of human cognition: the ultimate goal of an AI algorithm is to use prior information to make future decisions. Currently, the most prominent example of applied AI is the generative large language model (LLM), such as ChatGPT.
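
To make “prior information in, future decision out” concrete, here is a deliberately tiny sketch (not drawn from any particular product): a bigram model that “trains” on a scrap of text and then uses that stored knowledge to predict the next word.

```python
# Minimal sketch: "train" on prior text, then use that stored knowledge
# to make a future decision (predict the next word).
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Build the pre-trained knowledge: counts of which word follows which."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str):
    """Decision step: choose the most likely follower seen during training."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("the cat ate its cat food while the dog watched the cat")
print(predict_next(model, "the"))  # -> 'cat', the most common follower of 'the' above
```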

CI

While CI falls under the umbrella of AI, it is unique in one key way: CI is used to make predictions in inherently uncertain and unstructured environments. AI algorithms are designed to deal with noise and randomness in their training data, but they assume their target application environment is deterministic. For example, an LLM knows it is working with discrete units of text; regardless of the input and output, it will be dealing with exact words. Put simply, “cat” goes in, “dog” comes out.

CI algorithms, on the other hand, are built to work with imprecise data collected in an environment that is itself imprecise. They neither expect deterministic inputs nor promise deterministic outputs. An LLM expects “cat” and does not understand “ecat” or “catd”; a CI algorithm can resolve those almost-words back into “cat” by understanding the core concepts behind the text.
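
As a toy illustration of that idea (and only an illustration: real CI techniques such as fuzzy systems, neural networks and evolutionary algorithms go far beyond this), a few lines of Python can map noisy tokens back onto a known vocabulary.

```python
# Toy sketch: recover "cat" from noisy inputs like "ecat" or "catd" by
# scoring each token against a known vocabulary with fuzzy string matching.
from difflib import get_close_matches

VOCABULARY = ["cat", "dog", "food", "ate", "its", "the"]  # the assumed known context

def clean_word(noisy: str) -> str:
    """Map a noisy token to the closest known word, or keep it if nothing is close."""
    matches = get_close_matches(noisy.lower(), VOCABULARY, n=1, cutoff=0.6)
    return matches[0] if matches else noisy

print([clean_word(w) for w in "the catd ate its ecat food".split()])
# -> ['the', 'cat', 'ate', 'its', 'cat', 'food']
```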

For example, one common application of CI is positioning and localization with limited GPS access. Onboard positioning sensors, such as wheel encoders and gyroscopes, provide noisy, messy data. GPS, by contrast, provides far more accurate data, but only when it is available. A traditional localization algorithm leans on its most accurate input, meaning it will fail without near-constant GPS updates. A CI solution that cleverly combines all of the data sources, however, can maintain a highly accurate position through periods of GPS dropout. This ability to work with such uncertain information comes from CI’s ability to understand context.
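
The sketch below shows the flavor of such a solution in one dimension, with made-up numbers. Production systems typically use Kalman or particle filters; even a simple blend of dead reckoning and intermittent GPS fixes, though, keeps the position estimate alive through a dropout.

```python
# Minimal 1-D sketch (hypothetical numbers, not AIS's algorithm): dead-reckon
# from a noisy wheel encoder and blend in GPS fixes whenever they arrive.

def fuse_position(encoder_deltas, gps_fixes, gps_weight=0.3):
    """encoder_deltas: per-step displacement estimates (noisy but always available).
    gps_fixes: absolute position fixes, or None during dropout."""
    position = 0.0
    track = []
    for delta, gps in zip(encoder_deltas, gps_fixes):
        position += delta                      # predict: dead reckoning
        if gps is not None:                    # correct: pull toward the GPS fix
            position += gps_weight * (gps - position)
        track.append(position)
    return track

deltas = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8]        # encoder says roughly 1 m per step
gps    = [1.0, 2.0, None, None, 5.0, 6.0]      # GPS drops out for two steps
print([round(p, 2) for p in fuse_position(deltas, gps)])
```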

Context-Aware CI

The main feature that distinguishes CI from the rest of AI is its ability to use environmental context. CI algorithms keep track not only of their outputs but also of how accurate those outputs have been over time. In human-cognition terms, CI uses its “context clues” to help solve a problem; those context clues are the accuracy of the algorithm’s outputs over time as a function of the accuracy of its inputs over time.

As such, a CI algorithm can figure out that “ecat” and “catd” mean “cat”, so long as the word appears in a known context, e.g., a sentence like “the catd ate its ecat food”. In the more complicated positioning and navigation example, using context looks like this (a code sketch follows the list):

  • The CI algorithm takes input from GPS and onboard sensors.
  • The CI algorithm determines the relative accuracy of each sensor using previous highly accurate measurements.
  • The CI algorithm learns about the uncertainty in the data and environment.
  • The CI algorithm re-optimizes in real-time to decide which data sources to use for positioning.

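A hedged sketch of that loop is below. It is illustrative only (the sensor names, numbers and weighting rule are assumptions, not AIS’s algorithm): each sensor’s recent error against the last trusted fix becomes its “context clue”, and the fused estimate is re-weighted in real time toward whichever sensors have been behaving well.

```python
# Illustrative context-aware fusion loop: track each sensor's recent error
# against trusted measurements, then weight sensors by how well they have behaved.
import statistics

class ContextAwareFuser:
    def __init__(self, sensor_names):
        self.errors = {name: [1.0] for name in sensor_names}  # running error history

    def update_context(self, readings, trusted_value):
        """When a highly accurate measurement (e.g., a GPS fix) is available,
        record how far off each sensor was: these are the context clues."""
        for name, value in readings.items():
            self.errors[name].append(abs(value - trusted_value) + 1e-6)

    def fuse(self, readings):
        """Re-optimize in real time: lower recent error means higher weight."""
        weights = {n: 1.0 / statistics.mean(self.errors[n][-5:]) for n in readings}
        total = sum(weights.values())
        return sum(readings[n] * w for n, w in weights.items()) / total

fuser = ContextAwareFuser(["encoder", "gyro"])
fuser.update_context({"encoder": 10.4, "gyro": 11.5}, trusted_value=10.0)  # GPS available
print(round(fuser.fuse({"encoder": 20.3, "gyro": 21.8}), 2))               # GPS dropout
```
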
Developing CI at AIS

Team AIS is currently studying the intersection of CI and machine learning for complex sensor fusion applications akin to the GPS-denied localization problem described above. Solving for position from noisy data is a well-known CI sensor fusion application. Building on this foundational research, AIS seeks to understand how CI can carry over to developing, training and fielding advanced perception algorithms. AIS is leading research into whether perception algorithms, e.g., target detection and tracking, can combine many visual sensors, such as cameras, IR sensors and LIDAR, to provide a more accurate solution than current single-sensor systems.

Our research in this area seeks to develop new perception algorithms based on context-aware CI and to build new perception systems around them. To do this, we are leveraging sensor modeling and simulation (M&S) tools. With M&S, we can not only create massive sets of sensor data but also retain perfect knowledge of that data’s context. This allows us to develop and train new sensor fusion techniques and to create the context-aware CI optimization needed to deploy them on advanced, multi-sensor perception systems.
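
The appeal of M&S is easy to show in miniature. In the sketch below (hypothetical sensor names and noise levels, not our actual simulation tools), the simulator generates every reading from a known ground truth, so each virtual sensor’s error statistics can be measured exactly and fed back into the fusion and optimization work described above.

```python
# Sketch of the M&S idea: synthesize readings from a known ground truth,
# then measure each virtual sensor's error exactly because truth is known.
import random

def simulate_sensors(true_positions, noise_std):
    """Produce synthetic readings for several modeled sensors."""
    return {
        sensor: [p + random.gauss(0.0, std) for p in true_positions]
        for sensor, std in noise_std.items()
    }

truth = [float(t) for t in range(10)]  # perfect knowledge of the data's context
readings = simulate_sensors(truth, {"camera": 0.2, "ir": 0.5, "lidar": 0.05})

# With ground truth in hand, per-sensor error statistics are trivial to compute:
for sensor, values in readings.items():
    mean_abs_err = sum(abs(v - t) for v, t in zip(values, truth)) / len(truth)
    print(sensor, round(mean_abs_err, 3))
```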
