DEFECT DETECTION MODEL OF INFRARED PHOTOVOLTAIC MODULE BASED ON KNOWLEDGE DISTILLATION

Wang Yin, Zhang Jie, Xie Gang, Zhao Zhicheng, Hu Xiao, Wu Xiaohui

Acta Energiae Solaris Sinica ›› 2025, Vol. 46 ›› Issue (7) : 653-662. DOI: 10.19912/j.0254-0096.tynxb.2024-0402
Special Topics of Academic Papers at the 97th Annual Meeting of the China Association for Science and Technology


Abstract

To address the difficulty of deploying infrared photovoltaic (PV) module defect detection models on edge devices, caused by the growing parameter counts and computational complexity of object detection models, a model-compression-based defect detection algorithm, T-DINO (Tiny DINO), is proposed. With ResNet-101 as the teacher network and ResNet-18 as the student network, a dynamic adaptive distillation method is introduced: the difference between the teacher's and student's attention weights drives efficient knowledge transfer in feature-based distillation and also serves as guiding knowledge for the student in output-response (logit) distillation. As a result, model complexity and parameter count are greatly reduced with minimal loss of accuracy. In addition, a fusion module, the CSF Block (Conv-Self Attention Block), is proposed to jointly model local and global features and improve detection accuracy. Experiments on a self-constructed infrared PV module fault dataset show a 77.3% reduction in parameters, a 69.3% reduction in computational complexity, and a 5.2% increase in AP50 compared with the baseline DINO (ResNet-101) model. The results indicate that the compressed model is suitable for deployment on edge devices and meets the requirements of practical infrared PV module defect detection.
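The following is a minimal PyTorch sketch of the distillation idea described above: a feature-distillation term weighted by the teacher-student attention gap, combined with the standard temperature-scaled logit (response) distillation. The function names, the softmax-based weighting scheme, the loss mixing factors, and the simplified classification-style logits are illustrative assumptions, not the authors' released code; in the paper the losses are applied within DINO's detection pipeline.

```python
import torch
import torch.nn.functional as F


def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Spatial attention map: channel-wise mean of squared activations,
    flattened and L2-normalised, so teacher/student channel counts may differ."""
    att = feat.pow(2).mean(dim=1)               # (N, H, W)
    return F.normalize(att.flatten(1), dim=1)   # (N, H*W)


def tdino_kd_loss(t_feat, s_feat, t_logits, s_logits, labels,
                  temperature: float = 2.0, alpha: float = 0.5):
    """Attention-gap-weighted feature loss plus KL-based logit distillation.
    Assumes t_feat and s_feat are already aligned to the same spatial size
    (e.g. via interpolation)."""
    t_att, s_att = attention_map(t_feat), attention_map(s_feat)

    # Per-sample attention gap; samples where the student deviates most from
    # the teacher receive a larger weight (one plausible "dynamic adaptive" rule).
    gap = (t_att - s_att).pow(2).sum(dim=1)              # (N,)
    weight = torch.softmax(gap.detach(), dim=0) * gap.numel()
    feat_loss = (weight * gap).mean()

    # Output-response distillation with the usual softened KL divergence.
    logit_loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    hard_loss = F.cross_entropy(s_logits, labels)
    return hard_loss + alpha * feat_loss + (1 - alpha) * logit_loss
```

A similarly hedged sketch of a Conv-Self Attention fusion block in the spirit of the CSF Block described above: a convolutional branch captures local features, a multi-head self-attention branch captures global context, and the two are fused through a residual sum. The exact branch layout and fusion rule are assumptions.

```python
import torch
import torch.nn as nn


class CSFBlockSketch(nn.Module):
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        # Local branch: depthwise + pointwise convolution.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: multi-head self-attention over flattened spatial tokens
        # (channels must be divisible by num_heads).
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        local_feat = self.local(x)
        tokens = self.norm(x.flatten(2).transpose(1, 2))        # (N, H*W, C)
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).reshape(n, c, h, w)
        return x + local_feat + global_feat                      # residual fusion
```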

Key words

defect detection / DINO / knowledge distillation / model compression / edge equipment

Cite this article

Wang Yin, Zhang Jie, Xie Gang, Zhao Zhicheng, Hu Xiao, Wu Xiaohui. DEFECT DETECTION MODEL OF INFRARED PHOTOVOLTAIC MODULE BASED ON KNOWLEDGE DISTILLATION[J]. Acta Energiae Solaris Sinica, 2025, 46(7): 653-662. https://doi.org/10.19912/j.0254-0096.tynxb.2024-0402
