By Jaspreet Bindra & Vivek Saxena
The German astronomer Johann Galle was the first to observe the planet Neptune and confirm its existence. However, this was not a serendipitous observation; it was the result of a remarkable leap of reasoning and science. Just weeks earlier, the mathematician Urbain Le Verrier had postulated the existence of an unseen, mysterious planet whose gravitational pull was making Uranus wobble, and had calculated an almost exact location for it.
When Galle turned his telescope to the predicted coordinates, Neptune appeared within one degree of where Le Verrier had said it would be!
Philosophy & AGI?
You must be wondering what this has to do with Indian philosophy and AGI (Artificial General Intelligence). Let’s talk about the Vedic concept of Pramanas, or the Six Ways of Knowing: these are perception (pratyakṣa), inference (anumāna), comparison (upamāna), verbal testimony (śabda), postulation (arthāpatti), and non-perception (anupalabdhi). We can hypothesise how each of these relates to some of the fundamental tenets of training AI models – for example, how AI-infused autonomous vehicles use inference, or anumana, to predict and react to road situations, such as slowing down when pedestrians approach.
The two AI leaders, Google DeepMind and OpenAI, were built to achieve the holy grail of AGI – OpenAI dreams of developing AGI that benefits all of humanity, while DeepMind wants to “solve intelligence” and leverage that to solve all of humanity’s problems.
So far, chasing this dream has meant spending billions of dollars on Nvidia-led computing, constantly tweaking algorithms, and hoovering up ever more of an increasingly scarce resource: data. Sceptics argue that just doing more of the same, though bigger and better, will not lead to this elusive goal.
The Art Of Postulating
In this article, we propose how arthapatti – the ability to postulate an unseen reality from indirect evidence – can be leveraged in this quest. Galle could observe the elusive Neptune only because of the postulation Le Verrier had made; that leap of reasoning was the bedrock of the discovery.
Today’s AI operates much like Le Verrier’s mathematical model, capturing and processing immense amounts of data, drawing inferences, and making predictions. AI effectively employs five of the six pramanas – perception, inference, comparison, verbal testimony, and non-perception – as we explained in our earlier article. But it has yet to fully master arthapatti, the art of postulating hidden truths from indirect evidence. We believe this missing piece could be the key to AI’s transition from narrow intelligence (ANI) to AGI, enabling machines to make intuitive leaps similar to those made by the great thinkers of history.
One approach to making this happen involves embodied intelligence – allowing AI to interact with the physical world, much as humans generate hypotheses based on experience. For instance, self-driving cars and robotics experiments already show how real-world interaction can lead to postulation, as these systems learn from tactile feedback and adapt to unseen scenarios.
Another important area is social intelligence. Humans use social cues and the Theory of Mind to infer others’ thoughts and behaviours, a skill AI lacks but is beginning to develop. By improving AI’s ability to model beliefs and intentions, we could enable AI systems to postulate in complex social environments. The emerging field of Social AI, with groundbreaking bots like Replika and now Character.ai, is helping infuse social and behavioural data into existing data sets.
Ethics & Morality For AI
Another approach would be to inculcate ethical and moral reasoning into AI models. Currently, AI follows predefined rules; integrating moral reasoning into AI systems could allow them to navigate ambiguous situations, weighing conflicting values to hypothesise about long-term outcomes.
The more AI collaborates with human beings through chatbots and copilots, the more it could enhance its ability to postulate in real-world scenarios, offering suggestions, solicited or otherwise, and alternative perspectives. For example, if you see wet ground but no rain, you might postulate that it rained earlier or at night.
Even though you didn’t see the rain, the wetness of the ground leads you to infer it. These contextual and lateral thinking abilities and postulations are part of what makes us human. Humans can also explain such reasoning, and this kind of explainability needs to be built into AI systems, so that they move beyond being black boxes and develop the capacity to reflect on and justify their reasoning, refining their hypotheses as new data arrives.
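For readers who like to see ideas made concrete, here is a minimal sketch in Python of one way such postulation – inference to the best explanation – might be mechanised. The hypotheses, priors, and scores are entirely invented for illustration; this is not how any production AI system actually works.

# Arthapatti-style postulation, sketched as inference to the best
# explanation: score each candidate hypothesis by its prior plausibility
# times how well it explains the observation, and pick the best one.
# All hypotheses and numbers below are made up for illustration.

def best_explanation(observation, hypotheses):
    """Return the hypothesis that best explains the given observation."""
    return max(
        hypotheses,
        key=lambda h: h["prior"] * h["explains"].get(observation, 0.0),
    )

hypotheses = [
    {"name": "it rained overnight",  "prior": 0.3, "explains": {"wet ground": 0.9}},
    {"name": "a sprinkler ran",      "prior": 0.2, "explains": {"wet ground": 0.8}},
    {"name": "someone washed a car", "prior": 0.1, "explains": {"wet ground": 0.4}},
]

print(best_explanation("wet ground", hypotheses)["name"])
# Prints "it rained overnight": the unseen rain is postulated
# from its visible effect, even though no rain was ever observed.

The point of the sketch is the shape of the reasoning, not the numbers: the system commits to an unobserved cause because it best accounts for the observed effect.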
Finally, while AI relies heavily on logical algorithms, humans often use intuition and heuristics to make rapid judgments when information is limited. Integrating this kind of intuitive thinking could give AI the flexibility it needs to form postulations in complex, dynamic environments.
The great philosopher Bertrand Russell once remarked, “The method of ‘postulating’ what we want has many advantages; they are the same as the advantages of theft over honest toil,” implying that postulation is an intellectually dishonest shortcut.
Yet, as we push the boundaries of AI, it seems that reaching AGI might actually require teaching machines to do just that – bypass the laborious process of gathering evidence and reasoning properly and, instead, make intuitive leaps and jump to conclusions.
This might be flawed and might not give a perfect outcome, but no one claims that humans are perfect. Thus, the secret to building smarter machines could lie in making them a little more flawed. Like us.
(Bindra is the co-founder of AI&Beyond; Saxena is the co-founder & CEO of Thinkly App)