Residual neural network
Deep learning method / From Wikipedia, the free encyclopedia
A residual neural network (also referred to as a residual network or ResNet)[1] is a seminal deep learning model in which the weight layers learn residual functions with reference to the layer inputs. It was developed in 2015 for image recognition and won that year's ImageNet Large Scale Visual Recognition Challenge (ILSVRC).[2][3]
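The residual formulation can be illustrated with a minimal sketch. The example below assumes a residual block built from two fully connected weight layers with ReLU activations (the original paper uses convolutional layers); the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): the weight layers learn the residual
    F(x) = y - x rather than the full mapping y."""
    out = relu(w1 @ x)    # first weight layer + nonlinearity
    out = w2 @ out        # second weight layer
    return relu(out + x)  # identity skip connection, then nonlinearity

# With zero weights, F(x) = 0 and the block passes a non-negative
# input through unchanged, so extra blocks cannot hurt by default.
x = np.array([1.0, 2.0, 3.0])
w = np.zeros((3, 3))
y = residual_block(x, w, w)
```

This illustrates why residual blocks ease optimization: learning the identity mapping only requires driving the residual weights toward zero.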
![]() | This article may require cleanup to meet Wikipedia's quality standards. The specific problem is: article is not written in an encyclopaedic tone and suffers from many failures to follow simple style guidelines. (February 2024) |
![A residual block: the input x bypasses the weight layers via an identity skip connection and is added to their output F(x)](http://upload.wikimedia.org/wikipedia/commons/thumb/b/ba/ResBlock.png/640px-ResBlock.png)
ResNet behaves like a highway network whose gates are opened through strongly positive bias weights.[4] This enables deep learning models with tens or hundreds of layers to train easily and to achieve better accuracy as depth increases. Identity skip connections, often referred to as "residual connections", are also used in the original LSTM network,[5] transformer models (e.g., BERT and GPT models such as ChatGPT), the AlphaGo Zero system, the AlphaStar system, and the AlphaFold system.
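The relationship to highway networks can be sketched as follows. This example uses an uncoupled-gate variant of a highway layer (the original highway network couples the gates as C = 1 - T); all names are illustrative. When the gate biases are strongly positive, both sigmoid gates saturate near 1 and the layer reduces to the residual form H(x) + x.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, wh, wt, bt, wc, bc):
    """Uncoupled highway layer: y = T(x) * H(x) + C(x) * x,
    where H is the transform path and T, C are learned sigmoid gates."""
    h = np.tanh(wh @ x)        # transform path H(x)
    t = sigmoid(wt @ x + bt)   # transform gate T(x)
    c = sigmoid(wc @ x + bc)   # carry gate C(x)
    return t * h + c * x

# With strongly positive gate biases, both gates saturate near 1,
# so the layer approaches y = H(x) + x, i.e. a residual block.
x = np.array([0.5, -0.5])
wh = np.eye(2)
wt = wc = np.zeros((2, 2))
big_bias = 20.0 * np.ones(2)
y = highway_layer(x, wh, wt, big_bias, wc, big_bias)
```

In this saturated regime the learned gating disappears and only the additive skip connection remains, which is the ResNet design.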