
Computer Engineering (计算机工程)



GroupMNL: A Robust Convolutional Neural Network Design Method for Internet of Things Devices

  • Published: 2025-08-12

Abstract: With the widespread application of Internet of Things (IoT) devices, efficiently deploying robust Convolutional Neural Networks (CNNs) on resource-constrained IoT devices has become a significant challenge. Existing methods in which cloud servers assist IoT devices in training CNN models can reduce the volume of parameters transmitted between the cloud and the devices, but they neither reduce the computation required for inference nor offer strong robustness. To address these problems, a group-based method for generating filters using multiple nonlinear transformation functions (GroupMNL) is proposed. First, a small number of standard filters are randomly generated in each convolutional layer to serve as seed filters. The seed filters are divided into groups, and a different type of nonlinear transformation function is applied to each group to generate diverse filters on demand; because the generated filters introduce no learnable parameters, the number of learnable parameters in the CNN model is reduced. Second, the seed filters and the generated filters are concatenated to form a complete convolutional layer, and a group convolution mechanism is introduced into the convolution operation to reduce the model's computational cost. Finally, group normalization is introduced and combined with the regularization effect of the multiple nonlinear transformation functions to further enhance the model's robustness. Experimental results show that, compared with the standard model, a ResNet101 model built with GroupMNL reduces the number of learnable parameters by 87%, lowers the computational cost by 71%, and improves robustness by 6.09%.
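The abstract describes the pipeline only at a high level, so the following is a minimal PyTorch sketch of the idea: a handful of learnable seed filters per layer is expanded by fixed nonlinear transformation functions into a full filter bank, the bank drives a grouped convolution, and group normalization follows. The class name GroupMNLConv2d, the particular function set (tanh, sin, softsign, erf), the fixed scaling used to obtain several generated filters per seed, and the seed-to-output ratio are all illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Fixed, parameter-free nonlinear transformation functions. The paper's
# exact function set is not given in the abstract; these are assumptions.
TRANSFORMS = (torch.tanh, torch.sin, F.softsign, torch.erf)

class GroupMNLConv2d(nn.Module):
    """Illustrative GroupMNL-style layer (hypothetical name and API).

    Only the seed filters are learnable; the rest of the filter bank is
    generated on demand from the seeds by fixed nonlinear transforms,
    so the expansion adds no learnable parameters.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3,
                 seed_ratio=4, conv_groups=4, stride=1, padding=1):
        super().__init__()
        assert out_channels % seed_ratio == 0
        num_seeds = out_channels // seed_ratio      # e.g. learn only 1/4 of the filters
        assert num_seeds % len(TRANSFORMS) == 0     # one seed group per transform
        assert in_channels % conv_groups == 0 and out_channels % conv_groups == 0
        self.stride, self.padding, self.conv_groups = stride, padding, conv_groups
        # Seed filters: the only learnable convolution parameters. Grouped
        # convolution also shrinks each filter to in_channels/conv_groups depth.
        self.seeds = nn.Parameter(torch.empty(
            num_seeds, in_channels // conv_groups, kernel_size, kernel_size))
        nn.init.kaiming_normal_(self.seeds)
        self.expansions = seed_ratio - 1            # generated filters per seed
        # Group normalization, per the abstract's final step.
        self.gn = nn.GroupNorm(conv_groups, out_channels)

    def forward(self, x):
        # Split the seeds into one group per transformation function.
        seed_groups = self.seeds.chunk(len(TRANSFORMS), dim=0)
        generated = []
        for fn, g in zip(TRANSFORMS, seed_groups):
            # Apply this group's transform at a few fixed scales to obtain
            # several diverse filters per seed, with no new parameters.
            generated += [fn(g * float(s + 1)) for s in range(self.expansions)]
        # Concatenate seed and generated filters into the full bank, then
        # run a grouped convolution to cut the computational cost.
        weight = torch.cat([self.seeds, *generated], dim=0)
        out = F.conv2d(x, weight, stride=self.stride,
                       padding=self.padding, groups=self.conv_groups)
        return F.relu(self.gn(out))

# Quick check: 64 -> 128 channels with 3x3 kernels. A standard convolution
# would learn 128*64*9 = 73,728 weights; here only the 32 seed filters
# (32*16*9 = 4,608 weights) are learnable, and the grouped convolution
# performs 1/4 of the multiply-accumulates per output position.
layer = GroupMNLConv2d(64, 128)
y = layer(torch.randn(1, 64, 32, 32))   # -> torch.Size([1, 128, 32, 32])
```

Note that in this sketch gradients still flow from the generated filters back to the seeds through the fixed transforms; whether the paper backpropagates through the generated filters or detaches them is not stated in the abstract, so that detail is also an assumption.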