
计算机工程 (Computer Engineering)


Topic-Enhanced Deep Learning Model for POI Recommendation with Sampling Networks

  • Published: 2026-03-30

Abstract: The rapid growth of tourism has made personalized point-of-interest (POI) recommendation a primary means of improving user experience, but recommendation faces feature-extraction difficulties caused by extremely sparse interactions and by the fragmentation and semantic discreteness of short reviews. Traditional probabilistic topic models rely on word co-occurrence statistics and therefore struggle to capture latent semantic correlations, while iterative deep learning frameworks based on back-propagation are prone to gradient instability and training inefficiency. This paper proposes DeepTSN, a deep learning recommendation framework that integrates semantic-enhanced topic modeling. A semantic clustering-enhanced topic modeling method, SynTopic, strengthens short-text representation: a large language model builds an initial topic library, and BERT-Chinese semantic clustering with an adaptive optimization strategy prunes redundant topics and merges similar ones, extracting deep topic features that compensate for missing information. Multi-source heterogeneous features are then integrated into high-dimensional vectors that model deep user-POI interactions and capture complex non-linear relationships. A sampling network is further integrated: it reconstructs the data distribution through adaptive probability-density sampling and determines the network weights analytically via a constructive learning mechanism, which effectively suppresses interference from missing data, resolves convergence difficulties, and improves both recommendation accuracy and training efficiency. Experiments on multi-source datasets show that DeepTSN outperforms baseline models in real-world and public scenarios with varying interaction densities, reducing MAE by up to 21.34% and 12.72% and MSE by up to 22.89% and 7.32%, respectively, while cutting runtime by about 61.69% and peak memory usage by about 72.87%.