[3]
D. Al Bashish, M. Braik, and S. Bani-Ahmad, "A framework for detection and classification of plant leaf and stem diseases", 2010 International Conference on Signal and Image Processing, IEEE, 2010.
[4]
H. Al-Hiary, "Fast and accurate detection and classification of plant diseases", Int. J. Comput. Appl., vol. 17, no. 1, pp. 31-38, 2011.
[6]
S. Arivazhagan, "Detection of unhealthy region of plant leaves and classification of plant leaf diseases using texture features", Agric. Eng. Int. CIGR J., vol. 15, no. 1, pp. 211-217, 2013.
[9]
A. Krizhevsky, I. Sutskever, and G.E. Hinton, "ImageNet classification with deep convolutional neural networks", Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[10]
C. Szegedy, "Going deeper with convolutions", 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 2015, pp. 1-9.
[12]
J. Amara, B. Bouaziz, and A. Algergawy, "A deep learning-based approach for banana leaf diseases classification", Datenbanksysteme für Business, Technologie und Web (BTW 2017)-Workshopband, 2017.
[18]
A. Camargo, and J.S. Smith, "An image-processing based algorithm to automatically identify plant disease visual symptoms", Biosystems Engineering, vol. 102, no. 1, pp. 9-21, 2009.
[19]
T.M. Prajwala, A. Pranathi, K.S. Ashritha, N.B. Chittaragi, and S. Koolagudi, "Tomato leaf disease detection using convolutional neural networks", Eleventh International Conference on Contemporary Computing (IC3), IEEE, 2018.
[21]
P.M. Mainkar, S. Ghorpade, and M. Adawadkar, "Plant leaf disease detection and classification using image processing techniques", Int. J. Innovative Emerging Res. Engineering, vol. 2, no. 4, pp. 139-144, 2015.
[22]
S. Wallelign, M. Polceanu, and C. Buche, "Soybean plant disease identification using convolutional neural network", Florida Artificial Intelligence Research Society Conference (FLAIRS-31), 2018, pp. 146-151.
[23]
I. Goodfellow, H. Lee, Q.V. Le, A. Saxe, and A.Y. Ng, "Measuring invariances in deep networks", Advances in Neural Information Processing Systems, 2009, pp. 646-654.
[24]
G. Larsson, M. Maire, and G. Shakhnarovich, "FractalNet: Ultra-deep neural networks without residuals", arXiv preprint arXiv:1605.07648, 2016.
[25]
C. Cortes, X. Gonzalvo, V. Kuznetsov, M. Mohri, and S. Yang, "AdaNet: Adaptive structural learning of artificial neural networks", Proceedings of the 34th International Conference on Machine Learning, JMLR, vol. 70, 2017, pp. 874-883.
[27]
M. Tan, and Q.V. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks", arXiv preprint arXiv:1905.11946, 2019.
[28]
J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, "Graph neural networks: A review of methods and applications", arXiv preprint arXiv:1812.08434, 2018.
[29]
Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel, "Gated graph sequence neural networks", arXiv preprint arXiv:1511.05493, 2015.
[30]
P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, "Graph attention networks", arXiv preprint arXiv:1710.10903, 2017.
[31]
R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, "Natural language processing (almost) from scratch", J. Mach. Learn. Res., vol. 12, pp. 2493-2537, 2011.
[32]
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition", Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[33]
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[36]
Y. Bengio, "Deep learning of representations for unsupervised and transfer learning", JMLR: Workshop and Conference Proceedings, vol. 27, pp. 17-37, 2012.
[37]
G. Mesnil, Y. Dauphin, X. Glorot, S. Rifai, Y. Bengio, I. Goodfellow, E. Lavoie, X. Muller, G. Desjardins, D. Warde-Farley, and P. Vincent, "Unsupervised and transfer learning challenge: a deep learning approach", JMLR: Workshop and Conference Proceedings, vol. 27, pp. 97-111, 2012.
[39]
F.N. Iandola, S. Han, M.W. Moskewicz, K. Ashraf, W.J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size", arXiv preprint arXiv:1602.07360, 2016.
[40]
S.L. Smith, P.J. Kindermans, C. Ying, and Q.V. Le, "Don't decay the learning rate, increase the batch size", arXiv preprint arXiv:1711.00489, 2017.
[42]
R.K. Srivastava, K. Greff, and J. Schmidhuber, "Highway networks", arXiv preprint arXiv:1505.00387, 2015.
[43]
J. Moody, and C. Darken, "Learning with localized receptive fields", Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann, 1989, pp. 133-143.
[44]
W. Luo, Y. Li, R. Urtasun, and R. Zemel, "Understanding the effective receptive field in deep convolutional neural networks", Advances in Neural Information Processing Systems, 2016, pp. 4898-4906.
[45]
K. Simonyan, and A. Zisserman, "Very deep convolutional networks for large-scale image recognition", arXiv preprint arXiv:1409.1556, 2014.
[46]
K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.
[47]
K. He, X. Zhang, S. Ren, and J. Sun, "Identity mappings in deep residual networks", European Conference on Computer Vision (ECCV), 2016, pp. 630-645.
[49]
A.G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient convolutional neural networks for mobile vision applications", arXiv preprint arXiv:1704.04861, 2017.
[51]
F. Chollet, "Xception: Deep learning with depthwise separable convolutions", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1251-1258.
[52]
M. Lin, Q. Chen, and S. Yan, "Network in network", arXiv preprint arXiv:1312.4400, 2013.
[53]
Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning", Nature, vol. 521, pp. 436-444, 2015.
[54]
I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, 2016.
[55]
C.Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu, "Deeply-supervised nets", Artificial Intelligence and Statistics (AISTATS), pp. 562-570, 2015.
[56]
V. Nair, and G.E. Hinton, "Rectified linear units improve restricted Boltzmann machines", Proceedings of the 27th International Conference on Machine Learning (ICML'10), Omnipress, Madison, WI, USA, pp. 807-814, 2010.
[57]
W. Liu, Y. Wen, Z. Yu, and M. Yang, "Large-margin softmax loss for convolutional neural networks", International Conference on Machine Learning (ICML), 2016.
[60]
S.L. Smith, P.J. Kindermans, C. Ying, and Q.V. Le, "Don't decay the learning rate, increase the batch size", arXiv preprint arXiv:1711.00489, 2017.
[61]
E. Hoffer, I. Hubara, and D. Soudry, "Train longer, generalize better: Closing the generalization gap in large batch training of neural networks", Advances in Neural Information Processing Systems, 2017, pp. 1731-1741.
[62]
M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, and M. Kudlur, "TensorFlow: A system for large-scale machine learning", 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265-283.
[65]
D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning representations by back-propagating errors", Nature, vol. 323, no. 6088, pp. 533-536, 1986.
[66]
D. Kingma, and J. Ba, "Adam: A method for stochastic optimization", International Conference on Learning Representations (ICLR), 2015.
[67]
S. Ioffe, and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift", ICML, 2015.
[68]
N. Srivastava, G.E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting", J. Mach. Learn. Res., vol. 15, no. 1, pp. 1929-1958, 2014.