Artificial neural networks, built from simplified approximations of biological neurons, play a central role in contemporary artificial intelligence. Learning in these networks relies on backpropagation of error, a method that runs into serious difficulties once the networks become very deep.
One primary challenge is that the error signal has only a minimal effect on the earliest layers: its magnitude shrinks as it is propagated backward, so producing meaningful weight changes in those layers requires processing an enormous number of training examples, as the sketch below illustrates. This makes it hard to believe that our brains fine-tune motor actions using only end-level outcomes.
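To make this concrete, here is a minimal sketch, assuming a plain NumPy toy network with sigmoid activations and Gaussian, Xavier-style weights; the depth, width, and variable names are illustrative choices, not taken from any particular system. It pushes an error signal backward through a deep stack and prints its norm, which shrinks rapidly on the way to the early layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

depth, width = 30, 64
# Xavier-style scale keeps activations in sigmoid's responsive range.
weights = [rng.normal(0.0, 1.0 / np.sqrt(width), (width, width))
           for _ in range(depth)]

# Forward pass: store each layer's activation for the backward pass.
a = rng.normal(0.0, 1.0, width)
activations = [a]
for W in weights:
    a = sigmoid(W @ a)
    activations.append(a)

# Backward pass: propagate an error signal from the output toward the
# input, opposite to the direction of the forward signal.
delta = np.ones(width)  # stand-in for dLoss/dOutput
for layer in reversed(range(depth)):
    s = activations[layer + 1]  # this layer's sigmoid output
    delta = weights[layer].T @ (delta * s * (1.0 - s))
    if layer % 5 == 0:
        print(f"layer {layer:2d}: ||error signal|| = "
              f"{np.linalg.norm(delta):.3e}")
```

Because the sigmoid's local derivative never exceeds 0.25, each backward step tends to shrink the signal by a constant factor, so the norm printed for the earliest layers is many orders of magnitude smaller than at the output.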
Another issue is physiological. Backpropagation transmits error values in the direction opposite to that of the primary signal. The brain's neural network would therefore need an immense number of neurons dedicated to carrying signals from the points where outcomes are resolved back to the preceding layers.