Many versions of how Elemental Cognition was founded have been cropping up recently. I am flattered by the interest, but I am also excited to share my personal story.

It’s a story that involves building IBM Watson and leading the team that achieved Watson’s landmark win on Jeopardy! A career journey that brought me into the world’s largest hedge fund, Bridgewater. A personal journey shaped by my father’s near-death experience. And finally, a story that led to realizing my vision for a more holistic AI by founding Elemental Cognition.

In high school, I dreamed of working at the IBM T.J. Watson Research Center and becoming an IBM Fellow. After I finished my master’s degree, some IBM researchers attended an AI presentation I gave, and within a few weeks I had an offer from one of the world’s most prestigious research organizations. My dream came true.

I started working on applications of AI to manufacturing shop-floor scheduling. But within a year, the second AI winter hit, and IBM was shutting down AI projects because the world was concerned AI would threaten human jobs. Sound familiar? I didn’t want to work there if I couldn’t work on an AI project. So, I quit and returned to school to finish my PhD.

In people’s minds, my story begins with IBM Watson. But it actually starts much earlier, with a personal experience: a time when the stakes were too high and making a purely statistical bet was unacceptable.

Not long after I finished my PhD, my sister and I threw a party for my father’s 70th birthday at a restaurant in New Rochelle, NY. Soon after a loving toast, my father went into cardiac arrest.

It felt like ages before the ambulance arrived. The medics administered emergency procedures and rushed him to a nearby hospital. My mother, sister, uncle, and I, along with far too many friends and family, followed from the restaurant.

A resident came out of the emergency center, took me aside, and told me my father was brain dead and that I needed to sign a DNR – do not resuscitate – as they were performing “heroic measures” to keep him alive, but he was no longer there. We were all stunned and full of despair. I asked the resident, “How do you know he is brain dead?”

“Well, someone with a history of cardiovascular disease, at that age, with a prior heart attack, going into cardiac arrest, and having to wait that long for an ambulance before resuscitation just is. I’m sorry, but there is a 98% chance he is brain dead. We know this is difficult, but signing the DNR is the right thing to do.”

Given the gravity of the risk, that probability just didn’t work for me. I asked, “So, to be clear, given his history, there is a 2% chance he is not brain dead. If the 100 or so people at the restaurant were in his situation, at least two of them would not be brain dead.” The resident passed me off to the chief cardiologist, who explained the situation again, and I said, “I understand the statistical probabilities. But there is a 2% probability he is not brain dead. I need to know whether my father, the patient right there (who I could see being kept alive with tubes and defibrillators), has a functioning brain or not. What direct evidence do you have about him?”

The chief cardiologist thought for a moment and said, “His pupils were widely dilated when he came in.” I countered, “Are there other reasons his pupils might be dilated?” He thought for a moment and said, “Well, yes, they do give him a drug in the ambulance that might dilate his pupils.”

My sister said to me, “What are you doing? Why are you questioning the chief cardiologist? Daddy will be a living vegetable; the doctor is telling us he is already dead, and you are not accepting it.”

I said, “The doctor, so far, is saying that in a population of 100 people in similar circumstances, two may not be brain dead. We still know nothing about whether Daddy, specifically, is brain dead.”

To make a long story short, I did not sign the DNR. What followed was a series of difficult decisions, which eventually required moving him to another hospital. When we arrived at the new hospital, I rode up in the elevator with his seemingly lifeless body. His body was blue and cold, and his pupils were indeed dilated. It didn’t look good.

While his chances seemed slim, I needed direct evidence to make a call. In the morning, I returned to his room, and against all odds, he was sitting up in bed. He had zero brain damage. And he said to me with all the energy I loved him for, “Ha! So I understand you thought I was dead.” Shocked and overwhelmed with joy, I responded, “I was the only one who admitted I did not know one way or the other!” My father lived for nearly another year. In that time, the family healing that occurred transformed our past and future relationships for the better.

AI must raise the bar, not mimic our cognitive flaws

After this experience with my father, I had a new outlook on what AI should be. We need to understand the different kinds of inference that we, or an AI, can make.

There are two forms of reasoning. One finds correlations in data. It depends entirely on what is in the data and what is not; it is biased by the data. It induces, generalizes, and then extrapolates statistical averages to make predictions.

The other form of reasoning endeavors to get the specific facts and to apply stepwise logical reasoning to the specifics of each case. Both are very useful. With lots of data, statistical bets are easier than thinking a problem through, but they are not always right. In the case of my father, the stakes were too high, and probabilities weren’t going to cut it for my family.
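To make the distinction concrete, here is a minimal sketch in Python, loosely modeled on the scene in the emergency room. Every name, number, and rule in it is an illustrative assumption of mine, not a clinical fact or a description of any real system.

```python
# A toy contrast between the two forms of inference described above.
# All names, numbers, and rules are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Patient:
    age: int
    prior_heart_attack: bool
    pupils_dilated: bool
    given_pupil_dilating_drug: bool


def statistical_bet(patient: Patient) -> str:
    """Form 1: extrapolate a population-level base rate to the individual.
    The 98% figure is the base rate quoted in the story, applied to anyone
    matching the profile, regardless of their specific evidence."""
    if patient.age >= 70 and patient.prior_heart_attack:
        return "98% likely brain dead (population average)"
    return "insufficient data for a bet"


def case_specific_reasoning(patient: Patient) -> str:
    """Form 2: step through the direct evidence for this one patient,
    checking each observation for alternative explanations."""
    if patient.pupils_dilated and patient.given_pupil_dilating_drug:
        # The drug explains away the dilated pupils, so they are not
        # evidence of brain death for this patient.
        return "inconclusive: need a direct measurement of brain function"
    if patient.pupils_dilated:
        return "weak evidence of brain death: seek direct confirmation"
    return "no direct evidence of brain death"


dad = Patient(age=70, prior_heart_attack=True,
              pupils_dilated=True, given_pupil_dilating_drug=True)
print(statistical_bet(dad))           # plays the averages
print(case_specific_reasoning(dad))   # demands evidence about this patient
```

The first function returns the same answer for any patient matching the profile; the second changes its answer the moment a single fact about this specific patient, like the drug given in the ambulance, is taken into account.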

That experience seeded my vision for bringing these two very different forms of AI together to help humanity make better decisions.

In the meantime, my desire to care for my father during his last year brought me back to Westchester, where I applied for a job at IBM Research for the second time, leading to the creation of Watson.

While Watson was successful at Jeopardy!, I envisioned AI that could do more. Leaving IBM was extremely difficult, especially given my long history with, and deep admiration for, the company. Despite my success and the promise of rising ever higher within IBM, I left. I wanted to get beyond Watson, personally and professionally.

The investment market for AI 10 years ago was different from what it is today. Bound by a two-year non-compete agreement, I was limited to finding a partner who shared my vision for the future of AI. Bridgewater hired me for what they saw in my integrity, scientific commitment, and entrepreneurial spirit.

I wanted to explore combining predictive analytics and data-driven techniques with more explainable symbolic models and inference. I fully expected that, ultimately, Bridgewater wasn’t where I would realize my vision for AI; they were not a technology company. As my IBM non-compete neared its end, I informed them of my intention to leave and either join a more traditional tech company or raise funding for a start-up to pursue my broader vision for AI.

Bridgewater was interested in my vision of a more profound, intelligent thought partner capable of learning from data and performing complex, transparent reasoning. I shared my ambitious plans, which Bridgewater found exciting and wanted to support. Greg Jensen was especially interested in my approach to AI and was personally and professionally committed to continuing to invest in a future where we both thought AI would be a dominant force. They offered to fund my project and give me significant ownership.

This was the beginning of Elemental Cognition.

Elemental Cognition is my vision for a more holistic AI

I wanted to move beyond surface-level language predictions. Language is not primary but secondary to a greater intelligence based on formalization and reasoning. I wanted to enable humans to interact with more intelligent systems that could reason and clarify ambiguities in language, not just through statistics but by combining statistics with formal systems of logic and reasoning.

In 2019, EC spun off entirely from Bridgewater. With patented technology that reflected my broad vision for a more holistic AI and with a great core team, I dedicated myself to shaping and commercializing our approach to solve a broad class of business problems.

Today, the events around my father’s cardiac arrest underscore that we must look to AI to raise the bar for human cognition. We must intelligently acknowledge and combine different forms of inferences to help us make better, more personal, more transparent, and more caring decisions.

We often make decisions based on statistical correlation rather than causation. We see unexpected side effects because these decisions are statistical bets that do not consider the specific causal analysis of what is going on.

It’s important to recognize the difference between justifying an answer because you understand the causal reasoning behind it and making statistical bets based on patterns and trends. If an AI system identifies patterns that are too complex for us to grasp, then we won’t be able to understand how it makes its decisions.

This doesn’t mean we shouldn’t use the system, but we must understand that it’s a bet. It is only a hypothesis that assumes the future will be consistent with the past in ways that are entirely limited by the training data.

Elemental Cognition aims to break down human cognition into elemental parts and reassemble it to enhance overall cognition. By understanding and combining different statistical, logical, and linguistic inferences, we can be transparent and accountable for AI-generated decisions.

Recent breakthroughs in large language models (LLMs) are helping to accelerate that vision, but they are only one part of a more holistic AI with a reasoning engine at its core. This is what I call the LLM sandwich: LLMs are the bread, with the reasoning engine at the center. LLMs help mediate the acquisition and delivery of knowledge to and from humans. They help formalize sloppy thoughts into formal representations that can interact with a reasoning engine. The reasoning engine then generates enormously powerful, provably correct problem solvers we can interact with in natural language.
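As a rough illustration of the shape of the sandwich, here is a minimal sketch in Python. The llm_formalize and llm_render functions are hypothetical placeholders standing in for language-model calls, and the toy forward-chaining reasoner in the middle is my own stand-in, not Elemental Cognition’s engine.

```python
# A minimal sketch of the "LLM sandwich" pattern. The llm_* functions are
# hypothetical stubs for language-model calls; the middle layer is a toy
# forward-chaining reasoner, not any real product's engine.

def llm_formalize(question: str) -> dict:
    """Top slice: an LLM turns sloppy natural language into a formal
    problem. Stubbed here with a fixed parse for illustration."""
    # "Every A is a B, and x is an A. Is x a B?"
    return {"rules": [("A", "B")], "facts": [("x", "A")], "query": ("x", "B")}


def reasoning_engine(problem: dict) -> bool:
    """The filling: a deterministic reasoner whose every step can be
    checked. Here, simple forward chaining over is-a rules."""
    facts = set(problem["facts"])
    changed = True
    while changed:
        changed = False
        for entity, category in list(facts):
            for premise, conclusion in problem["rules"]:
                if category == premise and (entity, conclusion) not in facts:
                    facts.add((entity, conclusion))
                    changed = True
    return problem["query"] in facts


def llm_render(answer: bool) -> str:
    """Bottom slice: an LLM turns the verified answer back into natural
    language. Stubbed here."""
    return ("Yes, that conclusion provably follows." if answer
            else "No, that conclusion does not follow from the premises.")


question = "Every A is a B, and x is an A. Is x a B?"
print(llm_render(reasoning_engine(llm_formalize(question))))
```

The point of the shape is that the answer itself comes from the deterministic middle layer, whose steps can be inspected and verified, while the LLMs only translate between natural language and the formal representation.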

Greg Jensen and I continue to share this vision. In his words, “Leveraging work from OpenAI, Anthropic, and Elemental Cognition, Bridgewater’s new AI fund combines LLMs with ML-based predictive analytics and formal reasoning systems in a general architectural AI pattern. This AI architecture is Dave’s ‘LLM Sandwich.’ AIA, our automated investment associate, uses this architecture to enable a more reliable AI that can perform well and formally explain its thinking.”

We are at a crossroads where AI can raise the bar and help humans make better decisions when it comes to the biggest challenges we face. Our future depends on building AI that is transparent, accountable, and accurate. Constructing AI that can communicate with humans and reliably solve their hardest problems is the driving force behind my work and the ethos of Elemental Cognition.