MobileNetV2 demo in PyTorch: https://pytorch.org/hub/pytorch_vision_mobilenet_v2/
MobileNetV2 paper: https://arxiv.org/pdf/1801.04381.pdf

Articles with good illustrations:
  https://predictiveprogrammer.com/famous-convolutional-neural-network-architectures-2/
  https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d
  https://towardsdatascience.com/visualizing-convolution-neural-networks-using-pytorch-3dfa8443e74e

Inverted residuals:
  https://towardsdatascience.com/mobilenetv2-inverted-residuals-and-linear-bottlenecks-8a4362f4ffd5

MobileNetV2 architecture parameters (t, c, n, s), as tabulated in the paper:
  t = expansion factor for the bottleneck layers (hidden channels per input channel)
  c = number of output channels
  n = number of times the block is repeated (the n's sum to the layer count)
  s = stride of the first block in each repeated sequence

================

Programs included in this folder:
  mb_demo1       loads an image and prints the top-5 classifications.
  mb_demo2       loads several images and prints their feature-vector representations.
  MobileNet.fsm  runs the current camera image through the network and prints the top-5 classifications.