A New Model of the Brain’s Real-Life Neural Networks

Researchers from the Cyber-Physical Systems Group at the USC Viterbi School of Engineering, together with the University of Illinois at Urbana-Champaign, have developed a new model of how information deep in the brain can flow from one network to another, and of how these neuronal network clusters self-optimize over time.

Their work, “Network Science Characterizes Brain-Derived Neuronal Cultures from Quantitative Phase Imaging Data,” is believed to be the first study to observe this self-optimization phenomenon, and to compare it against existing models, in an in vitro neuronal network.

Their findings may open new research directions for biologically motivated artificial intelligence, brain cancer detection and diagnosis, and contribute to or inspire new Parkinson’s treatment strategies.

The team investigated the structure and development of neuronal networks in the brains of mice and rats to identify connectivity patterns. Paul Bogdan, an associate professor of electrical and computer engineering and an author of the study, explained the context of this work by describing how the brain performs decision-making. He points to the brain activity that occurs when someone is counting cards.

He said that the brain cannot actually remember all the card options; rather, it is “operating a sort of model of uncertainty.” The brain, he says, is drawing a great deal of information from neurons through all of its connections.

The dynamic clustering that occurs in this scenario enables the brain to estimate varying degrees of uncertainty, arrive at broad probabilistic descriptions, and understand which situations are less likely to occur.

“We observed that the brain’s network has an extraordinary ability to minimize latency, maximize throughput, and maximize robustness, all while doing so without a central manager or coordinator,” said Bogdan, who holds the Jack Munushian Early Career Chair in the Ming Hsieh Department of Electrical and Computer Engineering.

“This means that neuronal networks interact with each other and connect to each other in a way that rapidly enhances network performance, yet the rules for connecting are unknown.”
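To make those properties concrete, the sketch below computes rough proxies for them on a synthetic graph: average shortest-path length standing in for latency, global efficiency for throughput, and the size of the largest connected component after random node failures for robustness. It uses the networkx library and a small-world graph as placeholder data; it illustrates the metrics named above, not the analysis pipeline used in the study.

    # Illustrative only: proxy metrics for latency, throughput, and robustness
    # on a synthetic small-world graph standing in for a neuronal culture.
    import random
    import networkx as nx

    def latency_and_throughput(G):
        """Average shortest-path length ~ latency; global efficiency ~ throughput."""
        return nx.average_shortest_path_length(G), nx.global_efficiency(G)

    def robustness(G, fraction=0.1, trials=20, seed=0):
        """Average relative size of the largest component after random node failures."""
        rng = random.Random(seed)
        scores = []
        for _ in range(trials):
            H = G.copy()
            failed = rng.sample(list(H.nodes), int(fraction * H.number_of_nodes()))
            H.remove_nodes_from(failed)
            giant = max(nx.connected_components(H), key=len)
            scores.append(len(giant) / G.number_of_nodes())
        return sum(scores) / len(scores)

    G = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=42)
    lat, thr = latency_and_throughput(G)
    print(f"latency proxy {lat:.2f}, throughput proxy {thr:.3f}, robustness {robustness(G):.3f}")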

To Bogdan’s surprise, no classical mathematical model employed by neuroscience was able to accurately replicate this dynamic emergent connectivity phenomenon.

Using multifractal analysis and a novel imaging technique called quantitative phase imaging (QPI), developed by study co-author Gabriel Popescu, a professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign, the research team was able to model and analyze this phenomenon with high accuracy.
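As a rough illustration of the multifractal side of that analysis, the sketch below estimates generalized (Renyi) dimensions D(q) of a 2-D intensity map by box counting, the kind of measurement a QPI frame could supply. The data here is synthetic and the implementation is deliberately minimal; the study’s actual imaging and analysis pipeline is far more involved.

    # Minimal multifractal box-counting sketch on a synthetic 2-D intensity map.
    import numpy as np

    def generalized_dimensions(image, qs, box_sizes):
        """Estimate Renyi dimensions D(q) by box counting over a normalized measure."""
        measure = image / image.sum()
        dims = {}
        for q in qs:
            xs, ys = [], []
            for eps in box_sizes:
                n = measure.shape[0] // eps
                # Sum the measure inside each eps-by-eps box.
                boxes = measure[:n * eps, :n * eps].reshape(n, eps, n, eps).sum(axis=(1, 3))
                p = boxes[boxes > 0]
                if q == 1:                        # information dimension (q -> 1 limit)
                    ys.append(np.sum(p * np.log(p)))
                else:
                    ys.append(np.log(np.sum(p ** q)) / (q - 1))
                xs.append(np.log(eps))
            dims[q] = np.polyfit(xs, ys, 1)[0]    # D(q) is the slope versus log(eps)
        return dims

    rng = np.random.default_rng(0)
    frame = rng.random((256, 256)) ** 3           # heterogeneous stand-in for a phase image
    print(generalized_dimensions(frame, qs=[0, 1, 2], box_sizes=[2, 4, 8, 16, 32]))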

Health applications

The findings of this research may have a significant impact on the early detection of brain tumors. A better topological map of the healthy brain and its activity to compare against, obtained by imaging the dynamic connectivity between neurons during various cognitive functions without more invasive procedures, would make it easier to detect structural abnormalities early.

Co-author Chenzhong Yin, a Ph.D. student in Bogdan’s Cyber-Physical Systems Group, said, “Cancer spreads in small groups of cells and cannot be detected by fMRI or other scanning techniques until it is too late.”

“But with this method we can monitor and detect abnormal microscopic differences between neurons, allowing for early detection and even prediction of disease,” Yin said.

Researchers are now trying to perfect their algorithms and imaging tools for use in monitoring these complex neuronal networks that reside inside a living brain.

This may have additional applications for diseases such as Parkinson’s, which involves the loss of neuronal connections between the brain’s left and right hemispheres.

“By placing an imaging device on the brain of a living animal, we can monitor and observe things such as how neuronal networks grow and shrink, how memory and cognition form, whether a drug is effective, and ultimately how learning happens. Then we can begin to design better artificial neural networks which, like the brain, will have the ability to self-adapt.”

Applications for artificial intelligence

Bogdan said, “Having this level of accuracy can give us a clearer picture of the inner workings of the biological brain and how we can replicate those in an artificial brain.”

As humans, we have the ability to learn new tasks without forgetting old ones. Artificial neural networks, however, suffer from what is known as the catastrophic forgetting problem. We see this when we try to teach a robot two consecutive tasks, such as climbing stairs and then turning off the light.
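A minimal way to see catastrophic forgetting, using a toy model rather than a robot: the sketch below trains a single logistic regression on a synthetic task A, then fine-tunes it on a task B whose labels conflict, and reports how accuracy on task A collapses. All data and parameters are made up purely for illustration.

    # Toy demonstration of catastrophic forgetting with plain NumPy.
    import numpy as np

    rng = np.random.default_rng(1)

    def make_task(center_pos, center_neg, n=200):
        """Two Gaussian blobs labeled 1 and 0."""
        x = np.vstack([rng.normal(center_pos, 0.5, (n, 2)),
                       rng.normal(center_neg, 0.5, (n, 2))])
        y = np.concatenate([np.ones(n), np.zeros(n)])
        return x, y

    def train(w, b, x, y, lr=0.1, epochs=300):
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(x @ w + b)))    # sigmoid prediction
            w = w - lr * x.T @ (p - y) / len(y)
            b = b - lr * np.mean(p - y)
        return w, b

    def accuracy(w, b, x, y):
        return np.mean(((x @ w + b) > 0) == (y == 1))

    # Task B labels the same two regions the opposite way, so naive sequential
    # training on B overwrites what was learned on A.
    xa, ya = make_task(center_pos=(2, 2), center_neg=(-2, -2))
    xb, yb = make_task(center_pos=(-2, -2), center_neg=(2, 2))

    w, b = np.zeros(2), 0.0
    w, b = train(w, b, xa, ya)
    print("task A accuracy after learning A:", accuracy(w, b, xa, ya))
    w, b = train(w, b, xb, yb)
    print("task A accuracy after learning B:", accuracy(w, b, xa, ya))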