
Chris Horn: AI is getting smarter

A new approach counters the widely held belief that AI systems must be scaled to colossal numbers of neurons to solve real-world problems

So far this year, generative artificial intelligence has dominated the innovation sector. Photograph: iStock

So far this year, generative artificial intelligence (AI) has dominated the innovation sector. The consequences for career paths and for venture investment are widely debated. But despite the almost magical qualities of generative chatbots and the artistic capabilities of image generators, the underlying mechanisms remain relatively obscure. If we are to use AI systems to drive our cars, diagnose our health and further automate our public services, then they need to be safe, foolproof and comprehensible. Furthermore, they require considerable computing resources for training, and are rarely carbon neutral.

The foundational tool of AI is a computer mimic of a biological neuron. Like each of the 80 billion or so neurons in our brain, a machine neuron receives incoming signals from other neurons and sensors. It adds these input signals together, favouring some over others. If the sum exceeds some threshold, an output signal is generated. The weighting given to each input, and the threshold itself, are adjustable parameters that regulate the neuron’s behaviour. Training the neuron means giving it numerous sets of illustrative inputs and examining each output. If an output is incorrect, the neuron’s parameters are gently tweaked and the training repeated.
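To make this mechanism concrete, here is a minimal sketch in Python. It is purely illustrative and not the code of any real AI system: the 0-or-1 output, the function names and the perceptron-style learning rule are all simplifying assumptions.

    def neuron_output(inputs, weights, threshold):
        # Weighted sum of incoming signals; the weights favour some inputs over others.
        total = sum(w * x for w, x in zip(weights, inputs))
        # Generate an output signal only if the sum exceeds the threshold.
        return 1 if total > threshold else 0

    def train_step(inputs, target, weights, threshold, rate=0.1):
        # If the output is incorrect, gently tweak each parameter and try again.
        output = neuron_output(inputs, weights, threshold)
        error = target - output
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        threshold = threshold - rate * error
        return weights, threshold

Repeating train_step over many illustrative input sets is, in miniature, the training loop described above.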

In an AI application such as automatically driving a car, there can be hundreds of thousands of machine neurons, each with its own collection of multiple parameters, and many millions of connections between them all. In turn, numerous sample training sets are used to tune the huge number of parameters. And, after all that, the system may still behave poorly or even dangerously if it is confronted with a situation not adequately covered by its training.
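A rough back-of-the-envelope calculation (the numbers here are illustrative, not from the researchers) shows the scale: a network of 100,000 neurons in which each neuron receives signals from 1,000 others already carries 100,000 × 1,000 = 100 million weights, plus 100,000 thresholds, and every one of those parameters must be tuned by the training process.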

It is beyond human capabilities to fully understand the role of each trained machine neuron in such an artificial system, given the intractable scale and complexity of the neuron network involved. Can we ever be really certain that such a system is correct and safe, particularly if it faces scenarios it has not previously encountered?

Researchers led by Prof Daniela Rus at the Massachusetts Institute of Technology, working with the Technische Universität Wien, have achieved a remarkable improvement in the design of artificial neurons. Their approach counters the widely held belief that AI systems always have to be scaled to colossal numbers of neurons to be able to solve real-world problems.

By more accurately modelling the real behaviour of biological neurons studied in a worm species (Caenorhabditis elegans), they have built the core of an AI-driven car with just 19 of their new neurons. Given such a low number, it becomes feasible to understand the role of each artificial neuron in the compact system.

Furthermore, the innovation appears to adapt well to new situations and tasks it has never encountered in its training. An experiment confirmed that an AI-controlled drone trained to navigate itself to a visual target through a forest in summer automatically deduced how to accomplish the same task in winter snow conditions and, as impressively, in the entirely different setting of an urban university campus.

How did Prof Rus and her research colleagues achieve their breakthrough?

Most AI systems that predict behaviour over time use periodic samples of their environment – for example, weather sensor readings every hour or stock market trades every second. But for the robotic systems in which the research team specialise, it is important to track and predict a robot’s movement continuously. Rather than the sequences of discrete steps used in almost all neuron-based AI systems, the team instead exploit calculus and treat time as a continuous flow. Their improved model of a biological neuron uses a differential equation to capture how the neuron continuously evolves and adapts over time, and extends the usual parameters of an artificial neuron with a varying responsive delay.
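As a simplified sketch of the idea (a toy model of my own, assuming a single neuron and simple Euler integration; the team’s actual networks couple many such equations), the state of a continuous-time neuron can evolve under a differential equation whose time constant itself responds to the input:

    def simulate_continuous_neuron(signal, dt=0.01, tau_base=1.0):
        # The state x obeys dx/dt = -x / tau + input, where the time constant
        # tau shrinks as the input grows: a "varying responsive delay".
        # The form of tau below is an assumption for illustration only.
        x = 0.0
        trace = []
        for s in signal:
            tau = tau_base / (1.0 + abs(s))   # input-dependent delay
            x += dt * (-x / tau + s)          # one Euler integration step
            trace.append(x)
        return trace

Because the state is defined at every instant rather than only at sampling points, such a neuron can track a robot’s movement continuously instead of in discrete hops.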

A key requirement for an artificial neuron is the ability to go back and tweak its parameters during training when its output is not as desired. In the case of the new neuron design, the team’s initial approach used a standard mathematical technique to estimate the amount by which each parameter needed to be separately adjusted for each incorrect output. However, this approach was relatively cumbersome to compute, making the entire effort questionable compared with more standard neural networks based on time sampling. But in early 2022, the researchers discovered a more elegant solution that was considerably faster. As a result, their “closed-form continuous-time” neural networks appear to be an extraordinary breakthrough.
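A toy analogy (my own, and far simpler than the team’s actual mathematics) shows why a closed-form solution matters. For the linear equation dx/dt = -(x - A)/tau with constant input A, one can either step an ODE solver thousands of times or evaluate the exact answer in a single expression:

    import math

    def euler_solution(x0, A, tau, t, steps=10_000):
        # Numerical route: repeatedly step the equation dx/dt = -(x - A)/tau.
        dt = t / steps
        x = x0
        for _ in range(steps):
            x += dt * (-(x - A) / tau)
        return x

    def closed_form_solution(x0, A, tau, t):
        # Analytic route: the same answer from one expression, no stepping.
        return A + (x0 - A) * math.exp(-t / tau)

    print(euler_solution(0.0, 1.0, 0.5, 2.0))        # approximately 0.98168
    print(closed_form_solution(0.0, 1.0, 0.5, 2.0))  # 0.98168...

The two routes agree, but the closed form is dramatically cheaper to compute – which is the character of the speed-up the researchers found.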

Much research remains to be done, not least to explore whether the continuous-time neural approach can be more widely applied, including to text-based systems such as generative AI chatbots. Nevertheless, the team have amply demonstrated that, for at least some applications, AI need not require brute force and immensely large computational systems, but rather a smarter approach to design.