Can artificial intelligence emulate the brain?

AI is architected along the lines of the human brain, to simulate the workings of neurons.

By Shalini Verma

Published: Sun 17 Jan 2021, 8:43 PM

Last updated: Sun 17 Jan 2021, 10:28 PM

It’s no secret that when one thinks about intelligence, one looks no further than the human brain. For AI researchers, the first point of reference has always been our brain. It’s a template that AI technologists have been using to address the rather pointed question that mathematician Alan Turing asked in 1950: Can machines think?

AI is architected along the lines of the human brain, to simulate the workings of neurons. Deep neural networks are modelled on the behaviour of human neurons: each artificial neuron is connected to other neurons and fires when the signals it receives cross a certain threshold. At a simplistic level, this is true of neurons in our brain as well, where the response from one neuron to another is carried by a disproportionate change in spike frequency.
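The threshold behaviour described above can be sketched in a few lines of Python. This is a minimal illustration, not code from any real AI system; the weights and threshold are invented for the example:

```python
def neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hypothetical example: a neuron that fires only when both inputs are active.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # weighted sum 1.2 crosses 1.0, so it fires: 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # weighted sum 0.6 stays below 1.0: 0
```

A deep neural network is, in essence, many such units wired together in layers, with the weights tuned by training rather than set by hand.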

So how far can AI go in trying to emulate the brain? Retail giant Walmart’s termination of robots that tracked inventory in store aisles is a cautionary tale. It cancelled the order after trialling 500 robots over five years. Walmart found humans to be far more versatile, able to perform a variety of jobs such as managing cash registers and moving between the stock room and the sales floor, whereas the robots were suited only to checking inventory on shelves. Most algorithms powering chatbots and other applications are similarly trained for single-function tasks. This lack of versatility is the fundamental challenge facing the AI industry today. There is a good reason for it.

Neurons in our brains constantly and relentlessly process data from our environment, a feat that has baffled scientists for centuries. This relentless cerebral activity allows us to perform tasks both simple and complex: walking, driving, cycling or climbing a mountain. By some estimates, the brain can store at least 1 petabyte (1,000 terabytes) of information.

Robots would need the same level of relentless data crunching to perform even simple tasks without fumbling. Admittedly, robotics has advanced to the point that in restaurants, factories and offices with repetitive, menial or dangerous tasks, we are more likely than ever to work alongside robots. But they are no thinking machines. Not even close.

In his book The Brain from Inside Out, Gyorgy Buzsaki posits that the brain is not a passive organ that merely takes in external stimuli but one that actively searches through alternative possibilities, trying out various options. According to him, the brain actually constructs information. Many researchers like him feel that the brain’s complex, interconnected neural networks are not just triggered by microbes in our stomach but are also tightly coupled with the external world. Hence, the complex and dynamic neural patterns extend beyond a single brain into an entire population! It is, therefore, difficult to understand cerebral activity by studying specific sensory processes while ignoring the composite functioning of the networks.

The initial struggle for neuroscientists was to ascertain whether mental processing is localised in a section of the brain or whether the brain functions as a whole. The latter view partly followed the legacy of the French philosopher Descartes, who believed in the mind as a unitary whole. For the longest time, scientists sparred over these opposing views. The current view is somewhere in between: some activities, like vision, are more or less localised, but most involve neurons from different parts of the brain and hence require a composite use of it. Some researchers describe this as emergent behaviour, which arises when many entities act together in a complex manner. These intricate causal relationships within and beyond the brain are not well understood.

This also means that the human brain is far too complex and therefore largely remains an inscrutable black box. We have little understanding of what happens inside our brain’s neural networks, let alone individual neurons. Despite centuries of work, we cannot accurately predict what would happen if we tinkered with the activity of a neural network. Researchers have fiddled with a mouse’s brain to conjure a memory of a smell that never occurred. Yet even the mouse brain, with its roughly 70 million neurons, remains largely uncharted. The human brain, with more than a thousand times as many, is quite another challenge.

The more we study the brain, the more we realise its extraordinary nonlinearity. Let’s not forget that it is a living organ that has evolved over millions of years, adapting to a changing environment and to our body structure. This organic growth has left the brain a messy patchwork. It is not a neatly stacked device, and it remains deeply obscure to researchers.

The AI industry’s problems are startlingly similar, given that artificial neural networks were inspired by the human brain. Computer scientist Yann LeCun, inventor of the convolutional neural network, feels that engineering is messy. Arguably, engineering is messier still when it tries to replicate the human brain. Many AI scientists have acknowledged that they have no idea what goes on inside deep neural networks. Some even celebrate this opacity as inevitable. Yet in applied AI, visibility is absolutely necessary, which has given rise to the recent industry focus on explainable AI.

The fundamental quandary of AI is that its architecture is modelled on a science that is still evolving. So long as the underlying know-how is incomplete, AI will remain a trial-and-error science, riddled with tweaks and optimisations. Many researchers contend that the quest for making machines think like humans is as obscure as alchemy.

Through the years, research on human and artificial intelligence has been converging and diverging. Yet the two are joined at the hip. The more we understand the human brain, the better the prospects of making machines intelligent. Conversely, AI can help us better understand the human brain. Elon Musk’s pet project Neuralink is researching the brain-machine interface, an ambitious project that promises to understand the brain through embedded chips. While the brain will remain AI’s template, it is possible that AI will first draw inspiration from humbler brains in animal species like the mouse.

Shalini Verma is CEO of PIVOT Technologies
