
Computer Engineering (计算机工程)

• Development Research and Engineering Application •

Sports Video Annotation Based on 2D Human Joint Features

CUI Yun-xiang   

  1. (Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China)
  • Received: 2013-04-01  Online: 2014-04-15  Published: 2014-04-14
  • About the author: CUI Yun-xiang (born 1987), male, M.S. candidate; main research interest: video annotation.
  • Funding: Key Program of the National Natural Science Foundation of China (61232015).

Abstract: Video annotation labels video content with semantic index information so that videos can be retrieved more easily. Existing video annotation work relies on low-level visual features, which are difficult to apply directly to the professional human actions in sports video. To address this problem, 2D human joint features extracted from the video image sequence are used to build a professional action knowledge base for annotating professional actions in sports video. The method employs dynamic programming to compare the differences between human actions in two videos, and incorporates a co-training learning algorithm to realize semi-automatic annotation. Experiments on tennis match videos show that the action annotation accuracy of the proposed algorithm reaches 81.4%, an improvement of 30.5% over the existing professional action annotation algorithm.
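
The comparison step described in the abstract lends itself to a short illustration. The Python sketch below aligns two sequences of 2D joint points with a DTW-style dynamic-programming recurrence and returns a normalized difference score; the frame distance, the 14-joint layout, the synthetic motions, and the function names are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch (not the paper's implementation): a DTW-style dynamic-programming
    # alignment that scores how different two sequences of 2D human joint points are.
    # Frame distance, joint count, and normalization are assumptions for demonstration.
    import numpy as np

    def frame_distance(joints_a, joints_b):
        """Mean Euclidean distance between corresponding joints of two frames, each (num_joints, 2)."""
        return float(np.linalg.norm(joints_a - joints_b, axis=1).mean())

    def action_difference(seq_a, seq_b):
        """Dynamic-programming alignment cost between two joint sequences.

        seq_a: (len_a, num_joints, 2), seq_b: (len_b, num_joints, 2).
        Returns the best alignment cost, normalized by the combined sequence length.
        """
        len_a, len_b = len(seq_a), len(seq_b)
        cost = np.full((len_a + 1, len_b + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, len_a + 1):
            for j in range(1, len_b + 1):
                d = frame_distance(seq_a[i - 1], seq_b[j - 1])
                # Standard recurrence: match frames, or skip a frame in either sequence.
                cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
        return cost[len_a, len_b] / (len_a + len_b)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 30)
        base = rng.random((14, 2))                                       # a static 14-joint pose
        serve = base + 0.3 * np.sin(2 * np.pi * t)[:, None, None]        # smooth synthetic motion
        serve_resampled = serve[::2] + 0.01 * rng.standard_normal(serve[::2].shape)
        other = base + 0.3 * np.cos(4 * np.pi * t[:25])[:, None, None]   # a different motion
        # The resampled copy of the same motion should score lower than the different motion.
        print(action_difference(serve, serve_resampled))
        print(action_difference(serve, other))

A matcher of this kind can compare an unlabeled clip against reference sequences in an action knowledge base and assign the label of the closest match; the co-training step mentioned in the abstract would then use two such views of the data to grow the labeled set semi-automatically.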

Key words: video annotation, shape context, dynamic programming, co-training, semi-automatic annotation, sports video
