
VGG Net






VGG Net is a pre-trained convolutional neural network (CNN) introduced by Simonyan and Zisserman of the Visual Geometry Group (VGG) at the University of Oxford in 2014[1]; it was the first runner-up in the classification task of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014. VGG Net was trained on the ImageNet ILSVRC data set, which covers 1,000 object classes and is split into 1.3 million training images, 100,000 testing images and 50,000 validation images[2]. The model achieved 92.7% top-5 test accuracy on ImageNet[3]. VGG Net has been applied successfully in real-world tasks such as estimating heart rate from body motion and detecting pavement distress.[4][5]
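As a quick illustration of how the pre-trained model is typically used, the minimal sketch below loads VGG-16 with ImageNet weights and classifies a single image. The tf.keras framework and the file name cat.jpg are assumptions for illustration, not part of the cited sources.

    # Sketch only: classify one image with a pre-trained VGG-16 (ImageNet weights).
    import numpy as np
    from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
    from tensorflow.keras.preprocessing import image

    model = VGG16(weights="imagenet")                  # downloads the pre-trained ImageNet weights

    img = image.load_img("cat.jpg", target_size=(224, 224))   # VGG expects 224x224 RGB input
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    preds = model.predict(x)
    print(decode_predictions(preds, top=5)[0])         # top-5 ImageNet class predictions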

VGG Net learns to extract features (it acts as a feature extractor) that distinguish objects, and these features can then be used to classify unseen images. VGG was designed to improve classification accuracy by increasing the depth of the CNN. VGG 16 and VGG 19, with 16 and 19 weight layers respectively, have both been used for object recognition. VGG Net takes 224×224 RGB images as input and passes them through a stack of convolutional layers with a fixed filter size of 3×3 and a stride of 1. Five max-pooling layers are interleaved with the convolutional layers to down-sample the input representation (image, hidden-layer output matrix, etc.)[6]. The stack of convolutional layers is followed by three fully connected layers with 4096, 4096 and 1000 channels, respectively. The last layer is a soft-max layer[7]. The figure below shows the VGG network structure.
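To make the layer pattern concrete, the following sketch builds the VGG-16 configuration described above: five blocks of 3×3 convolutions separated by 2×2 max pooling, then three fully connected layers and a soft-max output. Using tf.keras is an assumption for illustration; the original model was not defined this way.

    from tensorflow.keras import layers, models

    def vgg16_sketch(num_classes=1000):
        # Five conv blocks: (number of 3x3 conv layers, filters per layer)
        cfg = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]
        inputs = layers.Input(shape=(224, 224, 3))                # 224x224 RGB input
        x = inputs
        for n_convs, filters in cfg:
            for _ in range(n_convs):
                x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
            x = layers.MaxPooling2D(pool_size=2, strides=2)(x)    # down-sample by 2
        x = layers.Flatten()(x)
        x = layers.Dense(4096, activation="relu")(x)
        x = layers.Dense(4096, activation="relu")(x)
        outputs = layers.Dense(num_classes, activation="softmax")(x)  # soft-max output
        return models.Model(inputs, outputs)

    model = vgg16_sketch()
    model.summary()   # 13 convolutional + 3 fully connected = 16 weight layers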

Although VGG Net has been effective for object recognition, it does not work well for scene recognition. Places205-VGGNet is a variant of VGG Net adapted for this task: trained on the Places205 data set and evaluated on the MIT67, SUN397 and Places205 benchmarks, it can recognize scene images. Places205-VGGNet uses a corner-cropping strategy and a multi-scale cropping method during training[8].
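The cropping strategy can be pictured with the rough sketch below (my own reconstruction, not the authors' released code): each image is resized to several scales, and the four corner crops plus the center crop are taken at each scale. The specific scale values are assumptions.

    from PIL import Image

    def multi_scale_corner_crops(img, scales=(256, 288, 320), crop=224):
        """Resize to several square scales (a simplification), then take the
        four corner crops and the center crop at each scale."""
        crops = []
        for s in scales:
            resized = img.resize((s, s))
            offsets = [(0, 0), (s - crop, 0), (0, s - crop), (s - crop, s - crop),  # corners
                       ((s - crop) // 2, (s - crop) // 2)]                          # center
            for left, top in offsets:
                crops.append(resized.crop((left, top, left + crop, top + crop)))
        return crops

    # Example (hypothetical file name): 3 scales x 5 crops = 15 crops per image
    # crops = multi_scale_corner_crops(Image.open("scene.jpg"))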

Downsides of VGG Net:

• Computationally expensive: training and inference are slow
• Uses a large amount of memory
• Very large number of parameters, roughly 138 million for VGG-16 (see the worked estimate after this list)[9]
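As a back-of-the-envelope illustration of the parameter count (my own arithmetic, not a figure from the cited sources), the first fully connected layer alone accounts for the bulk of VGG-16's roughly 138 million parameters:

    # The last conv block outputs a 7x7x512 feature map; connecting it densely
    # to 4096 units dominates the total parameter count.
    fc1_weights = 7 * 7 * 512 * 4096       # 102,760,448 weights
    fc1_biases = 4096
    print(fc1_weights + fc1_biases)         # 102,764,544 -> roughly 103 million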

Figure: VGG network structure


This article "VGG Net" is from Wikipedia. The list of its authors can be seen in its historical and/or the page Edithistory:VGG Net. Articles copied from Draft Namespace on Wikipedia could be seen on the Draft Namespace of Wikipedia and not main one.

  1. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in International Conference on Learning Representations (ICLR), San Diego, 2015.
  2. A. Gulli and S. Pal, Deep Learning with Keras, Mumbai: Packt Publishing, 2017.
  3. "VGG16 – Convolutional Network for Classification and Detection," 20 November 2018. [Online]. Available: https://neurohive.io/en/popular-networks/vgg16/. [Accessed 2019 April 30].
  4. H. Lee and M. Whang, "Heart Rate Estimated from Body Movements at Six Degrees of Freedom by Convolutional Neural Networks," Sensors, vol. 18, pp. 1-19, 2018.
  5. K. Gopalakrishnan, S. Khaitan and A. Agrawal, "Deep Convolutional Neural Networks with transfer learning for computer vision-based data-driven pavement distress detection," Construction and Building Materials, vol. 157, pp. 322-330, 2017.
  6. "Max-pooling / Pooling," 2018. [Online]. Available: https://computersciencewiki.org/index.php/Max-pooling_/_Pooling. [Accessed 2019 April 30].
  7. "ImageNet: VGGNet, ResNet, Inception, and Xception with Keras," Pyimagesearch, 20 March 2017. [Online]. Available: https://www.pyimagesearch.com/2017/03/20/imagenet-vggnet-resnet-inception-xception-keras/. [Accessed 2019 April 30].
  8. L. Wang, S. Guo, W. Huang and Y. Qiao, "Places205-VGGNet Models for Scene Recognition," China, 2015.
  9. L. Hulstaert, "Going deep into image classification," Towards Data Science, 28 March 2018. [Online]. Available: https://towardsdatascience.com/an-overview-of-image-classification-networks-3fb4ff6fa61b. [Accessed 2019 April 30].