Abstract: Crowded pedestrian detection is an active research topic in the field of pedestrian detection. An improved YOLOv7 object detection algorithm is proposed for crowded pedestrian detection scenarios, where detectors are prone to missed detections and false positives. To address the loss of target features of occluded pedestrians in crowded scenes, the BiFormer vision transformer module and an improved RC-ELAN module are incorporated into the backbone network; the introduced self-attention mechanism and attention module make the backbone focus more on the important features of occluded pedestrians and effectively mitigate the impact of missing features on detection. To address the problem that small-target pedestrians are easily missed in crowded scenes, an improved neck network incorporating the BiFPN idea is adopted; by introducing transposed convolution and an improved Rep-ELAN-W module, the model can efficiently exploit the small-target feature information in low- and medium-resolution feature maps, effectively improving its small-target pedestrian detection performance. To address the low training efficiency of the original loss function, the Efficient-CIoU loss function is introduced so that the model converges further and reaches higher accuracy. Finally, experiments on the WiderPerson crowded pedestrian detection dataset, which contains a large number of small and occluded pedestrians, show that in crowded scenes the improved algorithm outperforms YOLOv7 by 0.025 AP50 and 0.028 AP50:95, YOLOv5 by 0.099 AP50 and 0.071 AP50:95, and YOLOX by 0.123 AP50 and 0.107 AP50:95.
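
The exact Efficient-CIoU formulation is defined in the paper body; as a hedged illustration only, the sketch below shows the EIoU-style bounding-box loss that such CIoU refinements typically build on, with separate width and height penalties added to the IoU and center-distance terms. The function name and the plain `(x1, y1, x2, y2)` box format are assumptions for this example, not the paper's implementation.

```python
def eiou_loss(box_p, box_g):
    """EIoU-style loss for a predicted and a ground-truth box,
    each given as (x1, y1, x2, y2).

    L = 1 - IoU + d_center^2 / c^2 + d_w^2 / c_w^2 + d_h^2 / c_h^2,
    where c is the diagonal of the smallest enclosing box and
    c_w, c_h are its width and height.
    """
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g

    # Intersection and union -> IoU
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    area_p = (px2 - px1) * (py2 - py1)
    area_g = (gx2 - gx1) * (gy2 - gy1)
    union = area_p + area_g - inter
    iou = inter / union if union > 0 else 0.0

    # Smallest enclosing box of the two boxes
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw * cw + ch * ch + 1e-9  # squared diagonal, eps avoids /0

    # Normalized squared center distance (as in DIoU/CIoU)
    dx = (px1 + px2) / 2 - (gx1 + gx2) / 2
    dy = (py1 + py2) / 2 - (gy1 + gy2) / 2
    dist_pen = (dx * dx + dy * dy) / c2

    # Separate width/height penalties (the EIoU refinement over CIoU's
    # single aspect-ratio term), which speed up box regression
    pw, ph = px2 - px1, py2 - py1
    gw, gh = gx2 - gx1, gy2 - gy1
    w_pen = (pw - gw) ** 2 / (cw * cw + 1e-9)
    h_pen = (ph - gh) ** 2 / (ch * ch + 1e-9)

    return 1.0 - iou + dist_pen + w_pen + h_pen
```

For identical boxes every term vanishes and the loss is 0; for disjoint boxes the IoU term alone already contributes 1, and the distance and size penalties push the loss higher, which is what drives faster convergence than plain IoU-based losses.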