[1] E. Suryawati, R. Sustika, R. S. Yuwana, A. Subekti, and H. F. Pardede, "Deep Structured Convolutional Neural Network for Tomato Diseases Detection," in 2018 International Conference on Advanced Computer Science and Information Systems, IEEE, 2018, pp. 385-390.
[2] S. Sladojevic, M. Arsenovic, A. Anderla, D. Culibrk, and D. Stefanovic, "Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification," Computational Intelligence and Neuroscience, vol. 2016, p. 3289801, 2016.
[3] M. R. Howlader, U. Habiba, R. H. Faisal, and M. M. Rahman, "Automatic Recognition of Guava Leaf Diseases Using Deep Convolution Neural Network," in 2019 International Conference on Electrical, Computer and Communication Engineering, IEEE, 2019, pp. 1-5.
[4] S. P. Mohanty, D. P. Hughes, and M. Salathé, "Using Deep Learning for Image-Based Plant Disease Detection," Frontiers in Plant Science, vol. 7, no. 1419, Sep. 2016.
[5] G. Wang, Y. Sun, and J. Wang, "Automatic Image-Based Plant Disease Severity Estimation Using Deep Learning," Computational Intelligence and Neuroscience, vol. 2017, 2017.
[6] B. A. Ashqar and S. S. Abu-Naser, "Image-Based Tomato Leaves Diseases Detection Using Deep Learning," International Journal of Academic Engineering Research, vol. 2, no. 12, pp. 10-16, 2018.
[7] E. C. Too, L. Yujian, S. Njuki, and L. Yingchun, "A Comparative Study of Fine-Tuning Deep Learning Models for Plant Disease Identification," Computers and Electronics in Agriculture, vol. 161, pp. 272-279, 2019.
[8] M. H. Saleem, J. Potgieter, and K. M. Arif, "Plant Disease Classification: A Comparative Evaluation of Convolutional Neural Networks and Deep Learning Optimizers," Plants, vol. 9, no. 10, p. 1319, Oct. 2020.
[9] L. Tan, J. Lu, and H. Jiang, "Tomato Leaf Diseases Classification Based on Leaf Images: A Comparison between Classical Machine Learning and Deep Learning Methods," AgriEngineering, vol. 3, no. 3, pp. 542-558, Jul. 2021.
[10] Y. Borhani, J. Khoramdel, and E. Najafi, "A Deep Learning Based Approach for Automated Plant Disease Classification Using Vision Transformer," Scientific Reports, vol. 12, p. 11554, 2022.
[11] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation," in 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587.
[12] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders, "Selective Search for Object Recognition," International Journal of Computer Vision, 2013, pp. 154-171.
[13] R. Girshick, "Fast R-CNN," in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440-1448.
[14] K. He, X. Zhang, S. Ren, and J. Sun, "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, 2015, pp. 1904-1916.
[15] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, 2017, pp. 1137-1149.
[16] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature Pyramid Networks for Object Detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[17] Z. Cai and N. Vasconcelos, "Cascade R-CNN: Delving into High Quality Object Detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[18] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[19] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single Shot MultiBox Detector," in European Conference on Computer Vision, Springer, Cham, 2016.
[20] J. Redmon and A. Farhadi, "YOLO9000: Better, Faster, Stronger," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[21] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal Loss for Dense Object Detection," in Proceedings of the IEEE International Conference on Computer Vision, 2017.
[22] J. Redmon and A. Farhadi, "YOLOv3: An Incremental Improvement," arXiv preprint arXiv:1804.02767, 2018.
[23] M. Tan, R. Pang, and Q. V. Le, "EfficientDet: Scalable and Efficient Object Detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
[24] A. Bochkovskiy, C. Y. Wang, and H. Y. M. Liao, "YOLOv4: Optimal Speed and Accuracy of Object Detection," arXiv preprint arXiv:2004.10934, 2020.
[25] Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, "YOLOX: Exceeding YOLO Series in 2021," arXiv preprint arXiv:2107.08430, 2021.
[26] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-Based Learning Applied to Document Recognition," Proceedings of the IEEE, vol. 86, no. 11, 1998, pp. 2278-2324, doi: 10.1109/5.726791.
[27] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[28] H. Guo and S. Wang, "Long-Tailed Multi-Label Visual Recognition by Collaborative Training on Uniform and Re-Balanced Samplings," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 15089-15098.
[29] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," in International Conference on Learning Representations, 2015.
[30] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going Deeper with Convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1-9.
[31] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778, doi: 10.1109/CVPR.2016.90.
[32] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely Connected Convolutional Networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700-4708.
[33] Y. Huang, Y. Cheng, D. Chen, H. Lee, J. Ngiam, Q. V. Le, and Z. Chen, "GPipe: Efficient Training of Giant Neural Networks Using Pipeline Parallelism," arXiv preprint arXiv:1811.06965, 2018.
[34] M. Tan and Q. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks," in Proceedings of the 36th International Conference on Machine Learning, 2019, pp. 6105-6114.
[35] M. Tan and Q. Le, "EfficientNetV2: Smaller Models and Faster Training," in Proceedings of the 38th International Conference on Machine Learning, 2021, pp. 10096-10106.
[36] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5 MB Model Size," arXiv preprint arXiv:1602.07360, 2016.
[37] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," arXiv preprint arXiv:1704.04861, 2017.
[38] X. Zhang, X. Zhou, M. Lin, and J. Sun, "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[39] A. Gholami, K. Kwon, B. Wu, Z. Tai, X. Yue, P. Jin, S. Zhao, and K. Keutzer, "SqueezeNext: Hardware-Aware Neural Network Design," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018.
[40] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design," in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
[41] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510-4520.
[42] M. Tan, B. Chen, R. Pang, V. Vasudevan, M. Sandler, A. Howard, and Q. V. Le, "MnasNet: Platform-Aware Neural Architecture Search for Mobile," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
[43] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, Q. V. Le, and H. Adam, "Searching for MobileNetV3," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.
[44] Y. Li, Y. Chen, X. Dai, D. Chen, M. Liu, L. Yuan, Z. Liu, L. Zhang, and N. Vasconcelos, "MicroNet: Towards Image Recognition with Extremely Low FLOPs," arXiv preprint arXiv:2011.12289, 2020.
[45] F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang, and X. Tang, "Residual Attention Network for Image Classification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[46] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, "Squeeze-and-Excitation Networks," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132-7141, doi: 10.1109/CVPR.2018.00745.
[47] X. Wang, R. Girshick, A. Gupta, and K. He, "Non-Local Neural Networks," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 7794-7803, doi: 10.1109/CVPR.2018.00813.
[48] S. Woo, J. Park, J. Lee, and I. S. Kweon, "CBAM: Convolutional Block Attention Module," in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
[49] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu, "Dual Attention Network for Scene Segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
[50] Z. Huang, X. Wang, Y. Wei, L. Huang, H. Shi, W. Liu, and T. S. Huang, "CCNet: Criss-Cross Attention for Semantic Segmentation," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.
[51] Y. Cao, J. Xu, S. Lin, F. Wei, and H. Hu, "GCNet: Non-Local Networks Meet Squeeze-Excitation Networks and Beyond," in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2019.
[52] Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, and Q. Hu, "ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
[53] M. L. Zhang, Y. K. Li, X. Y. Liu, and X. Geng, "Binary Relevance for Multi-Label Learning: An Overview," Frontiers of Computer Science, vol. 12, 2018, pp. 191-202.
[54] M. Zhang and Z. Zhou, "A Review on Multi-Label Learning Algorithms," IEEE Transactions on Knowledge and Data Engineering, vol. 26, 2014, pp. 1819-1837.
[55] A. N. Tarekegn, M. Giacobini, and K. Michalak, "A Review of Methods for Imbalanced Multi-Label Classification," Pattern Recognition, vol. 118, 2021.
[56] Y. Wei, W. Xia, J. Huang, B. Ni, J. Dong, Y. Zhao, and S. Yan, "CNN: Single-Label to Multi-Label," arXiv preprint arXiv:1406.5726, 2014.
[57] J. Zhang, Q. Wu, C. Shen, J. Zhang, and J. Lu, "Multilabel Image Classification with Regional Latent Semantic Dependencies," IEEE Transactions on Multimedia, vol. 20, no. 10, 2018.
[58] S. M. García, C. J. Mantas, J. G. Castellano, and J. Abellán, "Using Credal C4.5 for Calibrated Label Ranking in Multi-Label Classification," International Journal of Approximate Reasoning, vol. 147, 2022, pp. 60-77.
[59] R. Caruana, "Multitask Learning," Machine Learning, vol. 28, no. 1, pp. 41-75, 1997.
[60] Y. Sun, Y. Chen, X. Wang, and X. Tang, "Deep Learning Face Representation by Joint Identification-Verification," in Advances in Neural Information Processing Systems, 2014, pp. 1988-1996.
[61] A. Kendall, Y. Gal, and R. Cipolla, "Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7482-7491.
[62] I. Kokkinos, "UberNet: Training a Universal Convolutional Neural Network for Low-, Mid-, and High-Level Vision Using Diverse Datasets and Limited Memory," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5454-5463.
[63] I. Misra, A. Shrivastava, A. Gupta, and M. Hebert, "Cross-Stitch Networks for Multi-Task Learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[64] Y. Gao, J. Ma, M. Zhao, W. Liu, and A. L. Yuille, "NDDR-CNN: Layerwise Feature Fusing in Multi-Task CNNs by Neural Discriminative Dimensionality Reduction," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3200-3209.
[65] S. Liu, E. Johns, and A. J. Davison, "End-to-End Multi-Task Learning with Attention," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
[66] D. Xu, W. Ouyang, X. Wang, and N. Sebe, "PAD-Net: Multi-Tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[67] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, "mixup: Beyond Empirical Risk Minimization," in International Conference on Learning Representations, 2018.
[68] G. Song, Y. Liu, and X. Wang, "Revisiting the Sibling Head in Object Detector," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020.
[69] Y. Wu, Y. Chen, L. Yuan, Z. Liu, L. Wang, H. Li, and Y. Fu, "Rethinking Classification and Localization for Object Detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020.
[70] M. Ji, K. Zhang, Q. Wu, and Z. Deng, "Multi-Label Learning for Crop Leaf Diseases Recognition and Severity Estimation Based on Convolutional Neural Networks," Soft Computing, vol. 24, no. 20, pp. 15327-15340, 2020.
[71] 黃宣哲, "A Plant Diseases and Pests Recognition System Based on Multi-Label Learning," 2021.