
We are thrilled that Elemental Cognition’s latest research – Braid: Weaving Symbolic and Neural Knowledge Into Coherent Logical Explanations – was selected for publication at the AAAI-22 Conference. In light of this, we want to share more of our thinking about the shortcomings of existing AI approaches and how we aim to address them.

Not all AI is created equal

In recent years, Deep-Learning techniques have had a tremendous impact on a variety of AI applications related to speech and natural language processing (NLP), image recognition, and video analytics. From achieving “super-human” performance on academic benchmarks to powering numerous real-world applications such as Google search and Facebook image tagging, neural models have dominated the AI landscape.

However, despite their stellar performance in these specific uses, Deep-Learning models by themselves suffer from a lack of explicability – that is, they cannot explain the decisions they make (appearing as “black boxes”) – nor do they perform well beyond the domain they are trained on. Recent analyses of neural models for language understanding reveal them to be surprisingly brittle, often highly sensitive to regularities in the training data (getting the right answer for the wrong reasons).

At a more fundamental level, the Deep-Learning approach to AI is flawed because it leaves out a crucial element of human learning.

People understand language by building mental representations of the words they read, using their common sense to ensure those representations are coherent. A Deep-Learning system, on the other hand, uses statistical probability to take its best guess at what words or phrases come next. Hence, it is unsurprising that the top neural Question Answering systems fail to answer story-comprehension questions that a 5th grader can.
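To make the contrast concrete, here is a minimal sketch of “guessing what comes next” from corpus statistics alone. The toy bigram model below is purely illustrative (real systems learn these statistics with large neural networks), but the objective is the same: predict the next word, with no mental model behind it.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: the corpus and counts are invented for
# illustration. Real systems learn these statistics with large neural
# networks, but the objective is the same: guess what comes next.
corpus = ("zoey puts her plant near a sunny window . "
          "the plant looks green and healthy").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else "<unk>"

print(predict_next("plant"))  # a statistical guess, with no mental model behind it
```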

Adding Logic

Recognizing these inherent weaknesses of neural models for language understanding, researchers have shown a surge of interest in so-called neuro-symbolic approaches, which combine neural techniques for statistical pattern recognition with symbolic representations for logical coherence and explicability. Prominent work in this area includes IBM’s Logical Neural Networks, NL-Prolog, and Logic Tensor Networks.

The basic idea behind all of these approaches is to change the neural architecture to encode logical entities, relationships, and rules or constraints. At their core, though, the models remain fully differentiable and are trained “end-to-end” using backpropagation. While promising, more work needs to be done to validate whether these neural solutions address the key issues: transparent reasoning, generalization beyond the trained domain, and scalability to large amounts of common-sense knowledge.
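To illustrate what “fully differentiable” logic means in practice, here is a minimal sketch in the spirit of Logic Tensor Networks – our own toy construction, not the published implementation. Truth values are real numbers in [0, 1], connectives become differentiable operations, and a rule’s degree of violation serves as the training loss.

```python
import torch

# Minimal sketch of differentiable ("soft") logic, in the spirit of
# Logic Tensor Networks -- illustrative only, not the published system.
# Truth values live in [0, 1]; AND is the product t-norm, and the
# implication A -> B takes the Reichenbach form 1 - a + a*b.
params = torch.zeros(3, requires_grad=True)  # raw scores for facts A, B, C

def truth(p):            # squash raw scores into [0, 1] truth values
    return torch.sigmoid(p)

def and_(a, b):          # product t-norm
    return a * b

def implies(a, b):       # Reichenbach implication
    return 1 - a + a * b

opt = torch.optim.SGD([params], lr=1.0)
for _ in range(200):
    a, b, c = truth(params)
    # Constraint: (A AND B) -> C should hold; its violation is the loss,
    # so backpropagation nudges the truth values toward consistency.
    loss = 1 - implies(and_(a, b), c)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(truth(params).detach())  # truth values adjusted to satisfy the rule
```

Because every operation is differentiable, gradient descent can push the system toward logical consistency – but the “reasoning” is implicit in the learned numbers rather than an inspectable proof, which is exactly the transparency concern raised above.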

We take a fundamentally different approach to this problem at Elemental Cognition.

Our patented neuro-symbolic reasoner, Braid, is at its core a symbolic reasoning engine that builds an explicit logical mental model of the language it is reading. It is continuously fed by neural functions that provide probabilistic inputs – semantic interpretations of the text, similarity functions for aligning entities and relationships, or logical rules for reasoning. This design combines the symbolic system’s strengths of full transparency and logical reasoning with the neural model’s power to handle linguistic variation and ambiguity. Moreover, it also enables effective collaborative learning.
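In rough pseudocode terms, the division of labor might look like the sketch below. This is our illustration of the design as described here – the interfaces and names (NeuralParser, RuleGenerator, SymbolicCore) are hypothetical, not Elemental Cognition’s actual APIs.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Statement:
    text: str
    confidence: float  # neural components attach probabilities to their outputs

class NeuralParser(Protocol):
    def interpret(self, sentence: str) -> list[Statement]: ...

class SimilarityFn(Protocol):
    def match(self, a: Statement, b: Statement) -> float: ...

class RuleGenerator(Protocol):
    # A rule is (premises, conclusion); rules are proposed per goal, on demand.
    def rules_for(self, goal: Statement) -> list[tuple[list[Statement], Statement]]: ...

class SymbolicCore:
    """The symbolic engine owns the explicit mental model and the proof
    search; every probabilistic judgment is delegated to a neural function."""

    def __init__(self, parser: NeuralParser, sim: SimilarityFn, rules: RuleGenerator):
        self.parser, self.sim, self.rules = parser, sim, rules
        self.model: list[Statement] = []  # the explicit logical mental model

    def read(self, sentence: str) -> None:
        # The parser proposes multiple plausible interpretations; the
        # symbolic side keeps those that cohere with the model so far.
        for stmt in self.parser.interpret(sentence):
            if self.coheres(stmt):
                self.model.append(stmt)

    def coheres(self, stmt: Statement) -> bool:
        # Placeholder for logical consistency checking against the model.
        return stmt.confidence > 0.5
```

The key point is the direction of control: the symbolic core drives the process and remains fully inspectable, while the neural functions supply scored candidates rather than final answers.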

To put it simply – we use neural networks for what they’re good at, then add logic, transparency, explicability, and collaborative learning.

Here’s how it works.

Deeper Language Understanding with Braid

Consider the following story:

Fernando and Zoey go to a plant sale. They buy mint plants. They like the minty smell of leaves. Zoey puts her plant near a sunny window. The plant looks green and healthy!

The question we would like to answer is:

Why does Zoey place the plant near the window?

State-of-the-art deep learning systems today would not be able to answer this simple question in a way that makes sense.

Questions like this are part of the “Template of Understanding” defined in our ACL 2020 paper, which is designed to test an AI system’s deep understanding of a piece of text. In that paper, we demonstrated that purely neural QA systems do very poorly at this task.

For example, a QA model built by fine-tuning GPT-3 – considered one of the most advanced deep-learning models – generated the following response to the question:

Why does Zoey place the plant near the window?

Because she wanted to see it from a distance

Now, we give Elemental Cognition’s Braid system an opportunity to answer the same question:

Why does Zoey place the plant near the window?

Zoey possesses a plant, therefore she wants it to be healthy. This motivates her to move the plant to the window. Moving the plant to the window leads to it being healthy for the following reasons: After she moves the plant, it is near to the window. The window is in contact with sunlight, which is ambient. As a result, the plant is in contact with sunlight, so it becomes healthy.

In addition to its reasoning engine, Elemental Cognition’s AI answers this question correctly by using:

  • Our neural semantic parser (“Spindle”), to interpret the text of the story (generating multiple plausible interpretations which are then logically validated by the symbolic reasoner),
  • Various neural similarity functions to match concepts – e.g. the “put” action in the story and “place” action in the question are synonymous in this context – a trivial point for humans, but a potential challenge for symbolic systems, and
  • A “dynamic rule generator” based on our earlier work on GLUCOSE (EMNLP 2020) that injects common-sense rule knowledge (such as “plants need sunlight to be healthy”) on the fly into the reasoning process (a minimal sketch of this on-the-fly injection follows this list). This is a huge departure from traditional symbolic systems, which operate on a static, pre-compiled knowledge base of rules (Braid incorporates a static KB as well as dynamic rules from a neural model).
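To make that last point concrete, here is a minimal, hypothetical sketch of backward chaining with on-the-fly rule injection. The rule format and the `neural_rules` stub are assumptions for illustration – they are not Braid’s actual implementation.

```python
# Hypothetical sketch of backward chaining with on-the-fly rule injection.
# The rule format and the `neural_rules` stub are illustrative assumptions,
# not Braid's actual implementation.

static_kb = [
    # (premises, conclusion) -- a pre-compiled rule base
    (["plant is near window", "window is in sunlight"], "plant is in sunlight"),
]

def neural_rules(goal):
    """Stand-in for a GLUCOSE-style dynamic rule generator: given a goal,
    propose plausible common-sense rules that conclude it."""
    if goal == "plant is healthy":
        return [(["plant is in sunlight"], "plant is healthy")]
    return []

def prove(goal, facts, depth=3):
    """Backward chaining: a goal holds if it is a known fact, or if some
    rule concludes it and all of that rule's premises can be proven."""
    if goal in facts:
        return True
    if depth == 0:
        return False
    # Try the static KB first, then fall back to dynamically generated rules.
    candidates = [r for r in static_kb if r[1] == goal] + neural_rules(goal)
    return any(all(prove(p, facts, depth - 1) for p in premises)
               for premises, _ in candidates)

facts = {"plant is near window", "window is in sunlight"}
print(prove("plant is healthy", facts))  # True, via the injected rule
```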

More details on how Braid works can be found in Elemental Cognition’s soon-to-be-published AAAI 2022 paper. As excited as we are about this publication, the AAAI paper describes only one piece of the entire Braid engine: the backward-chaining reasoner. Braid also contains a forward-chaining inference component and a constraint solver, both of which we will save for a future blog post.

Continuous Collaborative Learning

Finally, a unique aspect of Elemental Cognition’s reasoning engine is the way it supports continuous learning.

Elemental Cognition’s AI presents the user with complete, natural-language explanations for its answers, generated by its core symbolic reasoning engine. When an explanation is incorrect, the user can pinpoint exactly which parts of it are wrong, and this granular feedback is used as a training signal to improve future analysis and reasoning. For example, an incorrectly interpreted fact in an explanation is used to improve the underlying semantic parsing model; an invalid rule is used to improve the neural rule generator; and so on. In this way, Braid can learn from real-time user feedback on its explanations and improve its understanding of language over time. This kind of focused learning is not possible in other neuro-symbolic approaches, which rely on much coarser-grained QA ground truth.
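As a rough sketch of how such granular feedback could be routed (the component names and data structures here are hypothetical, not Elemental Cognition’s internals): each explanation step carries provenance, so a user’s judgment on that step becomes a training example for exactly the component that produced it.

```python
from dataclasses import dataclass

# Hypothetical sketch of routing granular feedback to the component that
# produced each step of an explanation; names are illustrative only.

@dataclass
class ExplanationStep:
    text: str        # e.g. "The window is in contact with sunlight."
    source: str      # which component produced it: "parser", "rule_gen", "similarity"
    payload: object  # the interpretation, rule, or match being explained

training_queues = {"parser": [], "rule_gen": [], "similarity": []}

def record_feedback(step: ExplanationStep, is_correct: bool) -> None:
    """Turn a user's judgment on one explanation step into a training
    example for whichever neural component produced that step."""
    training_queues[step.source].append((step.payload, is_correct))

# A user flags one step of an explanation as wrong:
bad_rule = (["plant is near window"], "plant can be seen from a distance")
step = ExplanationStep("Being near the window lets Zoey see the plant.",
                       "rule_gen", bad_rule)
record_feedback(step, is_correct=False)
# The negative example now targets the rule generator specifically,
# rather than a coarse right/wrong signal on the final answer.
```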

At Elemental Cognition, we are excited about deploying Braid in real-world applications. Watch the demo video below of our Travel Assistant, which uses Braid to help customers build round-the-world trips. We look forward to refining and improving its capabilities across various domains and use cases.

Benjamin Gilbert
EVP of Marketing,
Elemental Cognition

