Researchers use statistical physics and "toy models" to explain how neural networks avoid overfitting and stabilize learning in high-dimensional spaces.
Tech Xplore on MSN
A simple physics-inspired model sheds light on how AI learns
Artificial intelligence systems based on neural networks—such as ChatGPT, Claude, DeepSeek or Gemini—are extraordinarily ...
Harvard University physicists have created a simplified mathematical model to study how neural networks learn, using statistical physics to uncover underlying patterns. The approach, likened to early ...
Physics meets AI: Harvard scientists applied renormalization theory to a simplified model, revealing how large neural networks stabilize learning in high‑dimensional spaces.
Scaling mystery solved?: ...
The rapid ascent of large-scale artificial intelligence has provided neuroscience with a new set of powerful tools for modeling complex cognitive functions.
Researchers are training neural networks to make decisions more like humans do. The science of human decision-making is only just beginning to be applied to machine learning, but developing a neural network ...
“Neural networks are currently the most powerful tools in artificial intelligence,” said Sebastian Wetzel, a researcher at the Perimeter Institute for Theoretical Physics. “When we scale them up to ...
Researchers have devised a way to make computer vision systems more efficient by building networks out of computer chips’ logic gates. Networks programmed directly into computer chip hardware can ...
Researchers Dr. Yuval Hart and Oded Wertheimer from the Psychology department and the Edmond and Lily Safra Center for Brain Science (ELSC) at The Hebrew University of Jerusalem have developed a new ...