
This is the third and final piece in a series examining the most effective uses of LLMs and why a more holistic approach to AI combined with different forms of reasoning is needed to help us make better decisions.

How can we deliver a more holistic approach to AI that helps make better decisions when it really matters? Before we conclude this series with a solution, let’s trace our thesis through the previous two articles.

The first article looked at the sudden mass appeal of AI and how Large Language Models (LLMs) are being viewed as the answer to solving complex problems. It outlined limitations with LLMs resulting from their reliance on natural language and statistical modeling, and explained why formal languages are an essential part of building more holistic AI.

The second article explored in more detail why LLMs are incapable of delivering the complex reasoning capabilities that businesses now expect AI to deliver. It defined what complex business problems are and what complex reasoning is. It showed why complex reasoning requires formal mathematical algorithms, and why relying on LLMs to solve complex problems is dangerous.

In this final article in the series, I will answer the questions I posed at the start: how can we build more holistic AI to help make reliable, accurate, and transparent decisions when the stakes are high?

The answer lies in using LLMs to create fluent natural language interfaces to formal systems capable of complex reasoning. This will unlock the full potential of Generative AI to solve complex problems for businesses today.

That is what we have done at Elemental Cognition (EC).

EC has built a neuro-symbolic AI platform that enables businesses to deploy reliable, accurate, transparent complex reasoning applications. We do this by integrating LLMs with a general-purpose reasoning engine that uses formal and efficient mathematical algorithms.

Simple analogies are a great way to understand technology, which is why we have coined this architecture the LLM Sandwich.

The LLM Sandwich: stacking the benefits of natural and formal language

What exactly is the LLM Sandwich? It is a neuro-symbolic AI architecture where fine-tuned LLMs are the bread and the EC reasoning engine is at the center. LLMs help mediate the acquisition and delivery of knowledge to and from humans. They codify natural language rules and constraints into formal representations that the reasoning engine can understand and use in its reasoning process. The reasoning engine then solves the problem by using the best combination of formal mathematical algorithms for the task.
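The flow described above can be sketched in a few lines of code. This is a deliberately tiny illustration, not EC's actual API: the function names are invented, and the "LLM" stages are stubbed out, but the division of labor is the point. Language models translate at the edges; a formal solver, not the LLM, produces the answer.

```python
# Illustrative sketch of the LLM Sandwich flow (hypothetical names, not EC's API).
# Stage 1: an LLM turns a natural-language rule into formal constraints.
# Stage 2: a symbolic solver (not the LLM) finds the answer.
# Stage 3: an LLM renders the result back into natural language.

from itertools import product

def formalize(nl_rules):
    """Stand-in for the top slice of bread: in the real system an LLM
    would translate natural language into formal constraints."""
    # "the two numbers sum to 10 and differ by 2" becomes:
    return [lambda x, y: x + y == 10,
            lambda x, y: x - y == 2]

def solve(constraints, domain=range(11)):
    """Stand-in for the filling: exhaustive search over a finite domain,
    the simplest possible formal reasoning step."""
    for x, y in product(domain, repeat=2):
        if all(c(x, y) for c in constraints):
            return {"x": x, "y": y}
    return None

def verbalize(solution):
    """Stand-in for the bottom slice: an LLM would phrase this fluently."""
    return f"The numbers are {solution['x']} and {solution['y']}."

answer = verbalize(solve(formalize("two numbers sum to 10 and differ by 2")))
print(answer)  # The numbers are 6 and 4.
```

Because the middle stage is a deterministic solver, the answer is exact and repeatable, which is precisely what a statistical text generator cannot guarantee on its own.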

In any sandwich, the bread is essential. However, a sandwich made only of bread does not satisfy the need we have when we want a sandwich. The part we're really after is the filling; the bread makes it easier to interact with that filling.

In a similar way, LLMs play an essential role in this architecture. They unlock a huge opportunity to interact in natural language with a general-purpose reasoning engine that is capable of reliable, interactive problem-solving using formal mathematical algorithms. The crucial point is that the reasoning engine solves the problem, not the LLM. This is how we can satisfy the business need for AI to solve complex problems reliably, accurately, and transparently, while still making it easy to interact with AI.

Our customers and partners see several significant benefits to this approach.

  1. User-friendly, reliable problem-solving. Enabling people to interact directly with the formal reasoning engine in natural language means they can explore the trade-offs inherent in complex problems and make the optimal decision every time.
  2. Provably correct answers. EC AI’s reasoning engine uses mathematically precise, objectively correct decision procedures to determine exactly how to interpret each expression, and how the truth values of expressions affect one another. Its answers are derived from true expressions, which are ultimately grounded in known facts.
  3. Total decision transparency. This is not a black box generating answers using statistical analysis of word distributions. Results are predictable, repeatable, and there is a fully traceable decision logic you can see to justify every decision.
  4. Efficient run-time compute costs. EC AI’s general-purpose reasoning engine solves problems efficiently, using rigorous reasoning performed outside the LLMs. It can run on a phone with 8-24 GB of RAM and scale easily in the cloud.

None of these benefits are possible with an AI architecture that relies on LLMs alone. Simply fine-tuning LLMs, using Retrieval-Augmented Generation (RAG), or applying popular prompt-engineering strategies like chain-of-thought cannot achieve them.

Now that we understand the benefits, let’s dig a little deeper into what each part of the LLM Sandwich architecture is doing, and how it contributes to the whole.

The LLM Sandwich: the architecture slice by slice

Here is an overview of the LLM Sandwich architecture. Let’s start with the filling and work our way out to the bread.

The EC Reasoning Engine solves hard problems.

The core of EC’s neuro-symbolic AI architecture is the reasoning engine. It combines multiple powerful and precise reasoning strategies that work together to solve hard problems efficiently with traceable logic.

This includes formal systems and computational techniques drawn from mathematical constraint modeling, efficient constraint propagation and backtracking, possible-worlds reasoning and optimization, linear and non-linear programming, operations research, and dependency management and truth maintenance, among others.
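To give a flavor of two of the techniques named above, here is a textbook-style sketch of constraint propagation (in its simplest form, forward checking) combined with backtracking, applied to the classic problem of coloring a map so that no two neighboring regions share a color. This is a generic illustration, not EC's engine.

```python
# Toy constraint solver: forward checking (a simple form of constraint
# propagation) plus backtracking search, on map coloring of Australia.
# This is a generic textbook sketch, not EC's reasoning engine.

NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLORS = ["red", "green", "blue"]

def solve(assignment, domains):
    if len(assignment) == len(NEIGHBORS):
        return assignment
    # Heuristic: pick the unassigned region with the smallest domain.
    region = min((r for r in NEIGHBORS if r not in assignment),
                 key=lambda r: len(domains[r]))
    for color in domains[region]:
        pruned = {}
        for n in NEIGHBORS[region]:          # propagate: prune neighbor domains
            if n not in assignment and color in domains[n]:
                domains[n] = [c for c in domains[n] if c != color]
                pruned[n] = color
        if all(domains[n] for n in NEIGHBORS[region] if n not in assignment):
            result = solve({**assignment, region: color}, domains)
            if result:
                return result
        for n, c in pruned.items():          # backtrack: restore pruned values
            domains[n] = domains[n] + [c]
    return None

solution = solve({}, {r: list(COLORS) for r in NEIGHBORS})
print(solution)
```

Propagation shrinks the search space before the solver ever guesses, and backtracking guarantees that every dead end is undone cleanly, which is why the result is exact rather than probabilistic.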

The reasoning engine, not the LLM, generates answers that are provably correct. The reasoning engine itself is general purpose and is not fine-tuned based on the problem it is solving.

Fine-tuned LLMs make knowledge accessible.

Fine-tuned LLMs bridge the gap between human knowledge and the reasoning engine.

The EC AI platform uses LLMs on one end to capture knowledge from documents and experts in a form that can be consumed by the reasoning engine. This knowledge captures the facts, rules, and constraints of the target application. We also use LLMs on the other end to deliver answers from the reasoning engine and interact with end users in natural language.

The LLMs do not generate the answers themselves; instead, they sandwich the reasoning engine, which produces the accurate answers, between them. Hence, the LLM Sandwich.

Formal knowledge models enable reliable precision.

At EC, we have developed our own language, called Cogent, so that anyone can easily build formal knowledge models. Cogent reads like English but is directly executable code. This is a major innovation in automatic programming that I will explore in more detail in later articles.

Cogent is transparent and easy to read like natural language, but also precise, unambiguous, and rigorous. It fulfills the same function as existing formal knowledge models that read more like math or code, but enables anyone to build and manage these models. These models are continuously refined and validated for logical consistency by the reasoning engine.
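Since Cogent's syntax is not shown in this article, here is a generic, hypothetical illustration of the underlying idea: a rule stated in plain English paired with a formal, directly checkable meaning. Nothing below is Cogent; it simply shows how an English-readable statement can correspond to executable logic.

```python
# Hypothetical illustration of an English-readable rule with a formal,
# executable meaning. This is NOT Cogent syntax, which is not shown in
# this article; it is a generic sketch of the concept.

def rule_each_flight_departs_after_previous_arrival(itinerary):
    """Every flight must depart after the previous flight arrives."""
    return all(later["depart"] > earlier["arrive"]
               for earlier, later in zip(itinerary, itinerary[1:]))

itinerary = [
    {"leg": "JFK-LHR", "depart": 18, "arrive": 30},   # hours since trip start
    {"leg": "LHR-SIN", "depart": 33, "arrive": 46},
]
print(rule_each_flight_departs_after_previous_arrival(itinerary))  # True
```

The appeal of a language like Cogent is that the readable sentence and the checkable logic are one and the same artifact, so a domain expert can audit the rule without reading code like the function body above.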

Formal knowledge models are the glue between the LLMs and the reasoning engine. To extend the sandwich analogy, they are the cheese that melts to bring the whole sandwich together. They are all that is needed to power a complex reasoning application that delivers accurate and optimal answers every time.

Cloud APIs power fast and scalable app deployment.

Customized to a business’s knowledge model and generated automatically by EC AI, callable cloud APIs enable any multi-modal application frontend to be connected directly to the reasoning engine.

This enables businesses to rapidly deploy AI applications capable of complex reasoning.

The whole architecture can be trained jointly.

At EC, every element of the sandwich can be trained. Our reasoning engine, its formal knowledge models, and its interactions with LLMs can be jointly trained and fine-tuned to work together efficiently through reinforcement learning.

This continuously improves the tight integration between natural language and formal reasoning, something LLMs alone will not achieve.

A more holistic approach to AI that solves complex problems using complex reasoning

Using this neuro-symbolic AI architecture, EC is currently powering applications across a wide range of use cases and industries, for example:

  1. Generating optimal plans for complex round-the-world travel that satisfy all the shifting constraints of real-time flight availability, customer preferences, and business rules.
  2. Analyzing complex investment scenarios to optimize financial portfolios and make major investment decisions.
  3. Accelerating complex pharmaceutical literature review to find new targets for molecules, or secondary indications for existing drugs.
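To make the first use case concrete, here is a deliberately tiny sketch of choosing flights under hard constraints while optimizing cost. All flights, prices, and rules below are invented for illustration; a real deployment reasons over live availability, preferences, and business rules at far greater scale.

```python
# Toy flight selection: satisfy hard constraints, then optimize cost.
# All data and rules are invented for illustration only.

from itertools import product

outbound = [{"airline": "A", "price": 400, "arrive_day": 1},
            {"airline": "B", "price": 300, "arrive_day": 2}]
inbound  = [{"airline": "A", "price": 350, "depart_day": 4},
            {"airline": "B", "price": 250, "depart_day": 3}]

def feasible(out, back):
    same_airline = out["airline"] == back["airline"]            # business rule
    enough_time = back["depart_day"] - out["arrive_day"] >= 2   # customer need
    return same_airline and enough_time

options = [(o, b) for o, b in product(outbound, inbound) if feasible(o, b)]
best = min(options, key=lambda ob: ob[0]["price"] + ob[1]["price"])
total = best[0]["price"] + best[1]["price"]
print(best[0]["airline"], total)  # A 750
```

Note that the cheapest individual flights (both on airline B) do not form a feasible plan; the optimal answer emerges only from reasoning over the constraints jointly, which is exactly what statistical text generation cannot guarantee.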

The holistic approach to AI I outlined here offers the best of both worlds. It uses LLMs for what they are great at: manipulating natural language and making it easier to interact with computer systems. It also combines formal mathematical algorithms into a general-purpose reasoning engine capable of reliably solving hard problems using complex reasoning.

I believe there is no better way to achieve more reliable, accurate, and transparent generative AI than the approach outlined here. We will soon release a whitepaper and report comparing the performance of EC AI in complex reasoning scenarios against what is widely considered one of the best-in-class LLMs available: GPT-4. The performance difference is dramatic.

At EC, we find that these results highlight the dangers of relying on LLMs alone for solving complex problems. Our hope is that sharing the results will continue to drive the unprecedented amount of innovation in AI we are seeing in the industry by demonstrating how EC is solving these complex problems efficiently and reliably today.

The impact AI can have on business and society has never been greater. It is critical we adopt reliable, accurate, and transparent AI when the stakes are high, and we can’t afford to be wrong.