
Explainable AI vs. Expert Systems vs. LLMs

[Old Nassau, Princeton University - Office of Communication]
 

- Overview

We are currently experiencing an AI hype cycle (again). Enterprises have high hopes for artificial intelligence (AI) and must decide on the scope and form of its future use.

Alongside the enthusiasm, questions have been raised: How does AI "think" and make decisions, what results does it produce, and which data can it access in the process? For many decision-makers, the way AI works lacks explainability.

In practice this can have serious consequences, because different techniques produce different results. Decision-makers need a certain level of understanding to choose the right AI technology for their problem.

 

- Expert Systems vs. LLMs

Expert systems and Large Language Models (LLMs) are both AI technologies, but they differ in how they acquire knowledge, how they reason, and how flexible they are:

  • Knowledge and learning: Expert systems use explicit, predefined knowledge from human experts, while LLMs learn implicitly from large datasets.
  • Reasoning: Expert systems use rule-based reasoning, while LLMs use statistical pattern recognition and prediction.
  • Flexibility: LLMs can cover a wider range of topics, but may not be as in-depth as expert systems in specific areas.
  • Explainability: Expert systems can usually trace and explain their reasoning step by step, whereas LLMs generally cannot.

Expert systems originated in the 1970s as the first attempt to use technology to mimic human decision-making. They consist of two main components: a knowledge base that contains facts and rules, and an inference engine that draws conclusions or makes decisions. 
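
The sketch below illustrates this structure, assuming a toy loan-approval domain invented purely for illustration: a small set of facts, a handful of if-then rules, and a forward-chaining inference engine that applies the rules until nothing new can be derived (Python).

  # Minimal expert-system sketch: an explicit knowledge base of facts and
  # if-then rules, plus a forward-chaining inference engine. The toy
  # loan-approval domain and all rule names are illustrative assumptions.

  facts = {"income_stable", "low_debt"}

  # Each rule: if every condition is already a known fact, assert the conclusion.
  rules = [
      ({"income_stable", "low_debt"}, "creditworthy"),
      ({"creditworthy"}, "approve_loan"),
  ]

  def forward_chain(facts, rules):
      """Apply rules repeatedly until the fact base stops growing."""
      derived = set(facts)
      trace = []                      # record rule firings for explanation
      changed = True
      while changed:
          changed = False
          for conditions, conclusion in rules:
              if conditions <= derived and conclusion not in derived:
                  derived.add(conclusion)
                  trace.append(f"{sorted(conditions)} -> {conclusion}")
                  changed = True
      return derived, trace

  conclusions, trace = forward_chain(facts, rules)
  print(conclusions)  # includes 'creditworthy' and 'approve_loan'
  print(trace)        # the firing order doubles as a human-readable explanation

Because the rules are explicit and the engine is deterministic, the trace of rule firings is itself the explanation of the decision, which is exactly the explainability advantage noted above.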

LLMs, such as ChatGPT, encode a large amount of world knowledge from text datasets and can help solve complex decision-making tasks. They are also good at summarizing long texts, such as articles, research papers, or news reports, by extracting key information and providing concise summaries.
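
As a rough illustration of the summarization use case, the sketch below uses the Hugging Face transformers library with a pretrained summarization checkpoint; the specific model name and the sample text are assumptions, and any instruction-tuned LLM or hosted API could stand in.

  # Summarization with a pretrained language model via the Hugging Face
  # `transformers` pipeline. The checkpoint below is an assumption; any
  # capable LLM or hosted API could be substituted.
  from transformers import pipeline

  summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

  article = (
      "Expert systems encode human knowledge as explicit rules, while large "
      "language models learn statistical patterns from vast text corpora. "
      "Each approach trades explainability against breadth and flexibility."
  )

  result = summarizer(article, max_length=40, min_length=10, do_sample=False)
  print(result[0]["summary_text"])  # a short machine-generated summary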

The deterministic nature of expert systems also ensures consistent results for the same inputs, which simplifies the explanation of decisions and outcomes. LLMs, by contrast, are trained on huge, constantly growing amounts of data and generate outputs probabilistically, so the same prompt can yield different answers.

 

- Explainable AI

Explainable artificial intelligence (XAI) is a set of tools and processes that help users understand the output of machine learning (ML) algorithms. 

XAI can help users trust the results and decisions of AI models, and can be used to improve model performance and debug models. XAI can also help organizations meet regulatory requirements, such as the General Data Protection Regulation (GDPR), which gives individuals a right to meaningful information about automated decisions that affect them.

XAI can be used to describe an AI model, its expected impact, and potential biases. Explanations can be targeted at users, operators, or developers, and can help address concerns such as user adoption, governance, and systems development. 

Some examples of XAI tools include:

  • AutoML Tables, BigQuery ML, and Vertex AI: Can generate feature attributions for model predictions.
  • What-If Tool: Can be used to visually investigate model behavior.
  • SHapley Additive exPlanations (SHAP): An algorithm that explains a prediction by mathematically computing how each feature contributed to the prediction.
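
To make the last item concrete, the sketch below shows a typical SHAP workflow with the open-source shap package and a scikit-learn model; the dataset and model choice are illustrative assumptions, not a recommended setup.

  # Feature attribution with SHAP: fit a small model, then compute how much
  # each feature pushed an individual prediction up or down. The dataset and
  # model choice are illustrative assumptions.
  import shap
  from sklearn.datasets import load_diabetes
  from sklearn.ensemble import RandomForestRegressor

  X, y = load_diabetes(return_X_y=True)
  model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

  # TreeExplainer computes Shapley values efficiently for tree ensembles.
  explainer = shap.TreeExplainer(model)
  shap_values = explainer.shap_values(X[:5])  # shape: (5 samples, 10 features)

  # The attributions are additive: the expected value plus the per-feature
  # contributions recover the model's prediction for that sample.
  print(shap_values[0])
  print(explainer.expected_value + shap_values[0].sum(), model.predict(X[:1])[0])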

 

 

[More to come ...]

