Unveiling the Power of Biological Neurons

Table of Contents

  1. Introduction
  2. Artificial Neural Networks
  3. The Birth of Machine Learning
  4. The Perceptron Model
  5. Training Neural Networks
  6. Biological Neurons
  7. The Structure of a Neuron
  8. Electrical Excitability of Neurons
  9. Voltage-Gated Ion Channels
  10. Dendrites and Information Processing
  11. Dendritic Action Potentials
  12. Logical Operations and XOR
  13. Single Cortical Neurons as Deep Artificial Neural Networks
  14. Conclusion

Artificial Neural Networks: Exploring the Complexity of Biological Neurons

The year 2022 can easily be called the year of neural networks. We got a language model that can write a surprisingly decent essay, an AI art generator, and even a platform that turns any image into anime. All of this was made possible by the development of artificial neural networks, a type of computing system modeled as a network of interconnected nodes that can learn to solve problems by recognizing patterns hidden in the training data. If you go online and Google "artificial neural networks," you are likely to see statements such as "they work pretty much like your brain." Well, you see, such claims, although attractive, can be a bit misleading, because biological neurons are actually much more powerful than previously thought.

In this article, we will explore the computational complexity of individual neurons in your brain, which function essentially like full-blown neural networks themselves, equipped with insane information processing capabilities, as well as some of the physiological mechanisms that account for this computational complexity. Before we dive into the details, let's first take a look at the history of artificial neural networks and how they were inspired by the descriptions of biological neurons.

The Birth of Machine Learning

The birth of machine learning as we know it can be traced back to 1943, when Warren McCulloch and Walter Pitts introduced a simple mathematical model of the nerve cell, the idea that later became the perceptron. Despite the fancy name, the idea is quite simple. The perceptron was created to function like an individual nerve cell, which in this view works like a simple summator and comparator. This just means that it receives an input set of numbers, multiplies them by some coefficients (also called weights), sums everything together, and compares the result with a threshold. If the resulting value exceeds the threshold, the perceptron sends a one as an output to its neighbors.
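To make this concrete, here is a minimal sketch of that summator-and-comparator idea in Python; the specific weights, threshold, and inputs are arbitrary illustrative values, not anything taken from the original model.

```python
# A minimal perceptron: weighted sum of inputs compared against a threshold.
# All numbers here are arbitrary, chosen only to illustrate the mechanism.

def perceptron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Example: two inputs with equal weights and a threshold of 1.0
print(perceptron([1, 1], weights=[0.7, 0.7], threshold=1.0))  # -> 1
print(perceptron([1, 0], weights=[0.7, 0.7], threshold=1.0))  # -> 0
```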

You interconnect a bunch of these perceptrons with each other so that the output of one serves as an input to a downstream perceptron, add an input and an output layer, and boom, you've got a neural network. Training the network means adjusting the values of those weights so that the network maps each input to the correct output; a sketch of one classic weight-update rule follows below. From that point on, however, the fields of machine learning and neurobiology pretty much went their separate ways.
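Here is a minimal sketch of that weight-update idea, using the classic perceptron learning rule on a toy, linearly separable set of points. The data, learning rate, and number of passes are arbitrary illustrative choices, and modern multi-layer networks are trained with gradient-based methods instead.

```python
# Sketch of the classic perceptron learning rule on a toy linearly separable
# task. The data and hyperparameters are arbitrary illustrative choices.

def train_perceptron(data, lr=0.1, epochs=20):
    weights = [0.0, 0.0]
    bias = 0.0  # plays the role of a (negative) threshold folded into the sum
    for _ in range(epochs):
        for (x1, x2), target in data:
            output = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
            error = target - output
            # Nudge each weight in the direction that reduces the error.
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

# Toy task: label is 1 for points above the diagonal (x2 > x1), 0 otherwise.
points = [((0.1, 0.9), 1), ((0.2, 0.8), 1), ((0.4, 0.9), 1),
          ((0.9, 0.1), 0), ((0.8, 0.3), 0), ((0.7, 0.2), 0)]
w, b = train_perceptron(points)
print("learned weights:", w, "bias:", b)
```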

Biological Neurons

Over the years, people have invented a bunch of activation functions, organized neurons into a multitude of different network architectures, come up with algorithms to change the weights efficiently, and much more. But because we still refer to the nodes in such networks as neurons, a lot of people believe that biological neurons in the brain function exactly like their perceptron counterparts. The main goal of this article is to exonerate biological neurons and show that single cells are much more computationally powerful and sophisticated than you might think.

To understand the computational complexity of a neuron, it's helpful to remind ourselves of the basic biology behind neural computations. If you open any neuroscience textbook, one of the first things you'll see is the structure of a typical neuron, usually consisting of the dendrites, the soma (or cell body), and an axon. Let's put dendrites aside for now; they will become key players later on.

You probably know that the key property of neurons is that they are electrically excitable cells, which means they have the capacity to generate brief electrical pulses that are propagated to other neurons, forming the basis of communication between cells. In biological systems, electric charge is carried by ions such as sodium, potassium, chloride, and calcium, which float both inside and outside the cells in different proportions. Cells are separated from the outside world by a lipid membrane, a barrier normally impermeable to ions. However, neurons possess special proteins forming channels through which specific ions can cross the membrane, and these channels can open and close through a variety of mechanisms, as we'll see shortly.

So by regulating the flow of ions through these channels, cells can control the balance of electric charges and thus the membrane voltage. When positive ions flow into the cell, they are said to depolarize the membrane, increasing the voltage and making the potential more positive; the opposite holds for negative ions. From the whole zoo of ion channels, we'll be mostly interested in the so-called voltage-gated channels, which open and close depending on the value of the membrane potential.
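To get an intuition for how an inward (depolarizing) current pushes the membrane potential toward a firing threshold, here is a toy leaky integrate-and-fire simulation in Python. It deliberately leaves out the real channel biophysics (there are no explicit sodium or potassium conductances), and every constant is an arbitrary illustrative value rather than a measured one.

```python
# Toy leaky integrate-and-fire neuron: a constant inward current depolarizes
# the membrane until the voltage crosses a threshold, producing a "spike" and
# a reset. All constants are arbitrary; this is not a biophysical model.

V_REST = -70.0       # resting membrane potential (mV)
V_THRESHOLD = -55.0  # spike threshold (mV)
V_RESET = -75.0      # post-spike reset value (mV)
TAU = 20.0           # membrane time constant (ms)
DT = 0.1             # integration time step (ms)

def simulate(input_current, duration_ms=100.0):
    v = V_REST
    spike_times = []
    for step in range(int(duration_ms / DT)):
        # Leak pulls the voltage back toward rest; input current depolarizes it.
        dv = (-(v - V_REST) + input_current) / TAU * DT
        v += dv
        if v >= V_THRESHOLD:            # threshold crossed: emit a spike
            spike_times.append(step * DT)
            v = V_RESET                 # and reset the membrane potential
    return spike_times

print(simulate(input_current=20.0))  # stronger current -> more frequent spikes
```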

Logical Operations and XOR

If you consider the geometric representation of the perceptron's threshold inequality (the weighted sum of the two inputs compared against the threshold), it's easy to see that it essentially defines a line separating the x-y plane into two halves. Similarly, if a perceptron had three inputs, the corresponding equation would describe a plane cutting 3D space into two halves, and so forth. That's why a single perceptron can function as a classifier when the two output classes lie on opposite sides of that line, in other words, when they are linearly separable.
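Spelled out in standard textbook notation (not notation taken from this article), the two-input perceptron's threshold inequality and the boundary line it draws in the plane are:

$$
w_1 x_1 + w_2 x_2 \;\ge\; \theta
\qquad\Longleftrightarrow\qquad
x_2 \;\ge\; \frac{\theta - w_1 x_1}{w_2} \quad (\text{for } w_2 > 0),
$$

so the set of points satisfying the equality, $w_1 x_1 + w_2 x_2 = \theta$, is exactly the separating line: inputs on one side of it produce an output of 1, inputs on the other side produce 0.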

Returning to our logic gates, the inputs are limited to 0 and 1. If we visualize the AND gate, it's easy to see that there is a line separating the true outputs from the false ones, so a perceptron can act as an AND gate, and it can be turned into an OR gate just by lowering the threshold. The XOR gate, however, is different: there is no line that separates the zero outputs from the ones, which makes XOR a linearly non-separable function. This is why performing the XOR operation requires a multi-layered network, and the same was long believed to be true for biological neurons, namely that individual cells can't compute the XOR function, until, of course, that paper came out.
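To make the separability argument concrete, here is a small sketch with hand-picked weights and thresholds: a single threshold unit acting as an AND or OR gate, plus a two-stage combination that computes XOR. The particular numbers are arbitrary; many choices work.

```python
# A single threshold unit can implement AND and OR, but XOR needs two stages.
# The weight and threshold values below are arbitrary; many choices work.

def unit(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0

def AND(a, b):
    return unit([a, b], weights=[1, 1], threshold=1.5)

def OR(a, b):
    # Same weights as AND, just a lower threshold.
    return unit([a, b], weights=[1, 1], threshold=0.5)

def XOR(a, b):
    # Not computable by one unit: combine two of them instead.
    # XOR(a, b) = (a OR b) AND NOT (a AND b)
    return unit([OR(a, b), AND(a, b)], weights=[1, -1], threshold=0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> AND:", AND(a, b), "OR:", OR(a, b), "XOR:", XOR(a, b))
```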

Single Cortical Neurons as Deep Artificial Neural Networks

This study suggests that single cortical neurons, with their non-linear dendritic integration properties, are indeed sophisticated computational units in their own right, and that their computation is comparable to that of a multi-layered convolutional network, which, if you think about it, is a pretty mind-blowing thing for a single cell to do. It also offers a practical advantage for modeling individual neurons more efficiently: even a deep neural network with eight layers runs about 2,000 times faster than the detailed model, which requires solving a myriad of partial differential equations.
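As a loose illustration of the surrogate-modeling recipe behind that speed-up: train a deep network to imitate the input-output mapping of an expensive detailed model, then query the cheap network instead of re-running the simulation. Everything below (the stand-in "detailed model" function, the layer sizes, the use of scikit-learn) is an assumption made for illustration only; the study itself used a multi-layered convolutional network trained to reproduce the detailed neuron model's responses, which is far richer than this toy.

```python
# Illustrative sketch of surrogate modeling: fit a deep network to imitate an
# expensive "detailed model". The detailed model here is a toy stand-in
# function, not a biophysical neuron simulation; all sizes are arbitrary.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def detailed_model(x):
    # Stand-in for an expensive simulation (in the real case: solving many
    # coupled differential equations describing the neuron's membrane).
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.5 * x[:, 2] ** 2

X_train = rng.uniform(-1, 1, size=(5000, 3))
y_train = detailed_model(X_train)

# A deep but cheap fully connected surrogate (eight hidden layers of 64 units,
# purely as an arbitrary choice echoing the "eight layers" mentioned above).
surrogate = MLPRegressor(hidden_layer_sizes=(64,) * 8, max_iter=1000,
                         random_state=0)
surrogate.fit(X_train, y_train)

# Once trained, the surrogate answers instantly instead of re-running the
# detailed simulation, which is where the claimed speed-up comes from.
X_test = rng.uniform(-1, 1, size=(1000, 3))
print("R^2 on held-out inputs:", surrogate.score(X_test, detailed_model(X_test)))
```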

In conclusion, even at the level of single cells, the brain is incredibly complex and fascinating, and next time you hear statements that individual neurons essentially function like linear summators, you will take those claims with a grain of salt.
