
Computer Engineering

   

Super-Resolution-Driven Climate Downscaling via Implicit Neural Representation

  

Published: 2025-11-27


Abstract: High-resolution climate data is crucial for production and livelihoods at local and regional scales, and deep learning-based downscaling can effectively bridge the gap between existing low-resolution climate data and application requirements. However, existing methods are often constrained by fixed scaling factors, which makes training costly in multi-scale scenarios, and their outputs on climate data tend to be blurred and inaccurate in high-frequency details. To address these limitations, this study proposes a deep learning super-resolution network that fuses implicit neural representation with adaptive feature encoding for arbitrary-scale climate downscaling. Specifically, a dynamic pixel feature aggregation module adjusts the feature encoding process through a learnable modulator so that pixel features adapt to different scaling factors, and an image-level implicit neural representation predicts continuous-domain pixel values by fusing linear coordinate-difference features with nonlinear neighborhood features via an attention mechanism. Combined with a high-order degradation training strategy, experiments on the ECMWF HRES and ERA5 datasets show that the proposed method improves PSNR by at least 0.7 dB over fixed-scale methods and by at least 0.48 dB over existing arbitrary-scale methods at the ×2 scaling factor. These results indicate that the proposed approach outperforms existing methods and offers a more flexible and efficient solution for meteorological data processing.
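
To make the arbitrary-scale idea concrete, the sketch below shows a generic LIIF-style implicit decoder in PyTorch: an MLP predicts a pixel value from the nearest low-resolution feature, the coordinate offset to that feature's cell, and the target cell size, which is how the network "sees" the scaling factor and why a single trained model can serve ×2, ×3.3, or ×4 queries. This is a minimal illustration of the general technique under stated assumptions, not the paper's network; the names (ImplicitDecoder, query_hr, make_coord) and the plain MLP head are assumptions, and the paper's dynamic pixel feature aggregation module and attention-based feature fusion are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_coord(h, w, device=None):
    """Centre coordinates of an h x w pixel grid, normalised to [-1, 1]."""
    ys = (torch.arange(h, device=device, dtype=torch.float32) + 0.5) / h * 2 - 1
    xs = (torch.arange(w, device=device, dtype=torch.float32) + 0.5) / w * 2 - 1
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gy, gx], dim=-1)          # (h, w, 2) in (y, x) order


class ImplicitDecoder(nn.Module):
    """MLP mapping (LR feature, coordinate offset, target cell size) -> value."""

    def __init__(self, feat_dim, hidden=256, out_dim=1):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2 + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, feat, rel_coord, cell):
        return self.mlp(torch.cat([feat, rel_coord, cell], dim=-1))


def query_hr(feat_map, hr_coords, cell, decoder):
    """Predict values at arbitrary high-resolution coordinates.

    feat_map:  (B, C, h, w) encoder features of the low-resolution field
    hr_coords: (B, N, 2) query coordinates in [-1, 1], (y, x) order
    cell:      (B, N, 2) size of one HR pixel; this carries the (possibly
               non-integer) scaling factor to the decoder
    """
    B, C, h, w = feat_map.shape
    grid = hr_coords.flip(-1).unsqueeze(1)        # grid_sample expects (x, y)

    # nearest low-resolution feature for each query point
    feat = F.grid_sample(feat_map, grid, mode="nearest", align_corners=False)
    feat = feat.squeeze(2).permute(0, 2, 1)       # (B, N, C)

    # coordinate of that feature's cell centre, sampled the same way
    lr_coord = make_coord(h, w, feat_map.device).permute(2, 0, 1)
    lr_coord = lr_coord.unsqueeze(0).expand(B, -1, -1, -1).contiguous()
    lr_coord = F.grid_sample(lr_coord, grid, mode="nearest", align_corners=False)
    lr_coord = lr_coord.squeeze(2).permute(0, 2, 1)

    rel_coord = hr_coords - lr_coord              # offset inside the LR cell
    return decoder(feat, rel_coord, cell)         # (B, N, out_dim)


if __name__ == "__main__":
    enc_feat = torch.randn(1, 64, 32, 32)         # stand-in for encoder output
    scale = 3.3                                   # arbitrary, non-integer factor
    H = W = int(32 * scale)
    coords = make_coord(H, W).view(1, -1, 2)
    cell = torch.tensor([2.0 / H, 2.0 / W]).view(1, 1, 2).expand(1, H * W, 2)
    out = query_hr(enc_feat, coords, cell, ImplicitDecoder(feat_dim=64))
    print(out.view(1, H, W, 1).shape)             # torch.Size([1, 105, 105, 1])
```

Because the decoder is queried per coordinate rather than per fixed upsampling grid, the same weights can reconstruct any target resolution; the cell-size input plays the role that the learnable modulator plays in the paper's adaptive feature encoding.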
