This is the third and final piece in a series examining the most effective uses of LLMs and why a more holistic approach to AI combined with different forms of reasoning is needed to help us make better decisions.

How can we deliver a more holistic approach to AI that helps make better decisions when it really matters? Before we conclude this series with a solution, let’s trace our thesis through the previous two articles.

The first article looked at the sudden mass appeal of AI and how Large Language Models (LLMs) are being viewed as the answer to solving complex problems. It outlined limitations with LLMs resulting from their reliance on natural language and statistical modeling, and explained why formal languages are an essential part of building more holistic AI.

The second article explored in more detail why LLMs are incapable of the complex reasoning that businesses now expect AI to deliver. It defined what complex business problems are and what complex reasoning is. It showed why complex reasoning requires formal mathematical algorithms, and why relying on LLMs to solve complex problems is dangerous.

In this final article in the series, I will answer the questions I posed at the start: how can we build more holistic AI to help make reliable, accurate, and transparent decisions when the stakes are high?

The answer lies in using LLMs to create fluent natural language interfaces to formal systems capable of complex reasoning. This will unlock the full potential of Generative AI to solve complex problems for businesses today.

That is what we have done at Elemental Cognition (EC).

EC has built a neuro-symbolic AI platform that enables businesses to deploy reliable, accurate, and transparent complex reasoning applications. We do this by integrating LLMs with a general-purpose reasoning engine that uses formal and efficient mathematical algorithms.

Simple analogies are a great way to understand technology, which is why we have coined this architecture the LLM Sandwich.

The LLM Sandwich: the architecture slice by slice

Here is an overview of the LLM Sandwich architecture. Let's start with the filling and work our way out to the bread.

The EC Reasoning Engine solves hard problems.

The core of EC’s neuro-symbolic AI architecture is the reasoning engine. It combines multiple powerful and precise reasoning strategies that work together to solve hard problems efficiently with traceable logic.

These include formal systems and computational techniques drawn from mathematical constraint modeling, efficient constraint propagation and backtracking, possible-worlds reasoning and optimization, linear and non-linear programming, operations research, and dependency management and truth maintenance, among others.
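For readers who want a feel for this style of reasoning in code, here is a deliberately tiny sketch in Python of backtracking search over constraints. The problem, names, and code are invented for this article; they illustrate the general technique only, not EC's engine, which layers far more sophisticated propagation and optimization on top of ideas like these.

```python
# Toy constraint solver: backtracking search that checks constraints on each
# partial assignment and backtracks when one is violated.
# Illustrative only -- this is not EC's reasoning engine.

def solve(domains, constraints, assignment=None):
    """Assign a value to every variable so that all constraints hold."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment                       # every variable assigned
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        # Prune early: reject any value that already violates a constraint.
        if all(check(candidate) for check in constraints):
            result = solve(domains, constraints, candidate)
            if result is not None:
                return result
        # Otherwise backtrack and try the next value.
    return None

def differ(assignment, x, y):
    """Constraint: x and y must get different values (trivially satisfied
    until both variables have been assigned)."""
    return x not in assignment or y not in assignment or assignment[x] != assignment[y]

# Example: three back-to-back meetings, two rooms, and adjacent meetings
# must not share a room.
domains = {"kickoff": [1, 2], "review": [1, 2], "retro": [1, 2]}
constraints = [
    lambda a: differ(a, "kickoff", "review"),
    lambda a: differ(a, "review", "retro"),
]
print(solve(domains, constraints))  # e.g. {'kickoff': 1, 'review': 2, 'retro': 1}
```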

The reasoning engine, not the LLM, generates answers that are provably correct. The reasoning engine itself is general purpose and is not fine-tuned based on the problem it is solving.

Fine-tuned LLMs make knowledge accessible.

Fine-tuned LLMs bridge the gap between human knowledge and the reasoning engine.

The EC AI platform uses LLMs on one end to capture knowledge from documents and experts in a form that can be consumed by the reasoning engine. This knowledge captures the facts, rules, and constraints of the target application. We also use LLMs on the other end to deliver answers from the reasoning engine and interact with end users in natural language.

The LLMs do not generate the answers themselves; instead, they sandwich the reasoning engine, which produces accurate answers, between them. Hence, the LLM Sandwich.
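As a rough sketch of that flow, the outline below uses entirely hypothetical function names and stub bodies, since EC's internal interfaces are not described in this article; it only shows where each slice of the sandwich sits.

```python
# Hypothetical sketch of the LLM Sandwich data flow. The function names,
# signatures, and types are illustrative placeholders, not EC's interfaces.

def llm_extract_knowledge(documents: list[str]) -> dict:
    """Top slice of bread: an LLM turns documents and expert input into
    formal facts, rules, and constraints (placeholder stub)."""
    ...

def reasoning_engine_solve(knowledge_model: dict, question: str) -> dict:
    """The filling: the reasoning engine, not an LLM, computes the answer,
    so the result is provably consistent with the knowledge model (stub)."""
    ...

def llm_explain(solution: dict, question: str) -> str:
    """Bottom slice of bread: an LLM renders the engine's answer and its
    reasoning trace back into fluent natural language (stub)."""
    ...

def answer_question(documents: list[str], question: str) -> str:
    knowledge_model = llm_extract_knowledge(documents)
    solution = reasoning_engine_solve(knowledge_model, question)
    return llm_explain(solution, question)
```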

Formal knowledge models enable reliable precision.

At EC, we have developed our own language, called Cogent, so that anyone can easily build formal knowledge models. Cogent reads like English but is directly executable code. This is a major innovation in automatic programming that I will explore in more detail in later articles.

Cogent is transparent and easy to read like natural language, but also precise, unambiguous, and rigorous. It fulfills the same function as existing formal modeling languages that read more like math or code, but it enables anyone to build and manage these models. The models are continuously refined and validated for logical consistency by the reasoning engine.

Formal knowledge models are the glue between the LLMs and the reasoning engine, or, to stretch the sandwich analogy, the cheese that melts to hold the whole thing together. They are all that is needed to power a complex reasoning application that delivers accurate and optimal answers every time.
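This article does not show Cogent syntax itself, but the general idea of an English-like statement compiling down to something executable can be sketched with an invented example: a plain-language rule and a hand-written formal encoding of it.

```python
# Illustration only: the English-like rule and the encoding below are invented
# for this article. They show the general idea -- readable statements that
# become executable constraints -- not Cogent itself.

# A rule a domain expert might state in plain language:
#   "Every connecting flight must depart at least 90 minutes after the
#    previous flight arrives."

MIN_CONNECTION_MINUTES = 90

def valid_connection(arrival_minute: int, departure_minute: int) -> bool:
    """Executable form of the rule, which a reasoning engine could check or
    enforce for every pair of connecting flights in an itinerary."""
    return departure_minute - arrival_minute >= MIN_CONNECTION_MINUTES

print(valid_connection(600, 700))  # 100-minute layover -> True
print(valid_connection(600, 650))  # 50-minute layover -> False
```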

Cloud APIs power fast and scalable app deployment.

Callable cloud APIs, customized to a business's knowledge model and generated automatically by EC AI, allow any multi-modal application frontend to connect directly to the reasoning engine.

This enables businesses to rapidly deploy AI applications capable of complex reasoning.
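As a purely hypothetical illustration of what calling such a generated API might look like from a frontend: the endpoint, payload shape, and model name below are invented for this sketch, not EC's published interface.

```python
# Hypothetical example of a frontend calling a generated reasoning API.
# The URL, authentication, and payload fields are placeholders.
import requests

response = requests.post(
    "https://api.example.com/reasoning/solve",        # placeholder endpoint
    headers={"Authorization": "Bearer <token>"},
    json={
        "knowledge_model": "round_the_world_travel",   # invented model name
        "query": "Find an itinerary visiting Tokyo, Nairobi, and Lima "
                 "under $6,000 with no overnight layovers.",
    },
    timeout=30,
)
print(response.json())  # the engine's answer plus its reasoning trace
```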

The whole architecture can be trained jointly.

At EC, every layer of the sandwich can be trained. Our reasoning engine, its formal knowledge models, and its interactions with LLMs can be jointly trained and fine-tuned to work together efficiently through reinforcement learning.

This continuously improves the tight integration between natural language and formal reasoning, something LLMs alone will not achieve.

A more holistic approach to AI that solves complex problems using complex reasoning

Using this neuro-symbolic AI architecture, EC is currently powering applications across a wide range of use cases and industries, for example:

  1. Generating optimal plans for complex round-the-world travel that satisfy all the shifting constraints of real-time flight availability, customer preferences, and business rules.
  2. Analyzing complex investment scenarios to optimize financial portfolios and make major investment decisions.
  3. Accelerating complex pharmaceutical literature review to find new targets for molecules, or secondary indications for existing drugs.

The holistic approach to AI I outlined here offers the best of both worlds. It uses LLMs for what they are great at: manipulating natural language and making it easier to interact with computer systems. It also combines formal mathematical algorithms into a general-purpose reasoning engine capable of reliably solving hard problems using complex reasoning.

I believe the approach outlined here is the best way to achieve more reliable, accurate, and transparent generative AI. We will soon release a whitepaper and report comparing the performance of EC AI in complex reasoning scenarios against what is widely considered one of the best-in-class LLMs available: GPT-4. The performance difference is dramatic.

At EC, we believe these results highlight the dangers of relying on LLMs alone to solve complex problems. Our hope is that sharing them will help sustain the unprecedented pace of innovation in AI by demonstrating how EC is solving these problems efficiently and reliably today.

The impact AI can have on business and society has never been greater. It is critical we adopt reliable, accurate, and transparent AI when the stakes are high, and we can’t afford to be wrong.

Benjamin Gilbert
EVP of Marketing,
Elemental Cognition


