
Trying to collect models and methods that have public code and, realistically, run in real time on at least a single video... writing these down since they seem worth coming back to someday.
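
Since the selection criterion here is "real time on at least one video", one quick sanity check is to compare a model's throughput against the video's own frame rate. A minimal sketch, where `model` and its expected input format are placeholders rather than any of the repos' actual APIs:

```python
import time

import cv2
import torch

def realtime_check(model, video_path, device="cuda"):
    cap = cv2.VideoCapture(video_path)
    source_fps = cap.get(cv2.CAP_PROP_FPS)
    n_frames = 0
    start = time.perf_counter()
    with torch.no_grad():
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # BGR uint8 HxWx3 -> normalized 1x3xHxW float tensor
            x = torch.from_numpy(frame).permute(2, 0, 1).float().div(255.0)
            model(x.unsqueeze(0).to(device))  # placeholder forward pass
            n_frames += 1
    cap.release()
    if device.startswith("cuda"):
        torch.cuda.synchronize()  # flush queued GPU work before timing
    model_fps = n_frames / (time.perf_counter() - start)
    print(f"source {source_fps:.1f} fps vs model {model_fps:.1f} fps")
    return model_fps >= source_fps
```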

 

action recognition

 

Watch Only Once: An End-to-End Video Action Detection Framework (ICCV 2021)
https://openaccess.thecvf.com/content/ICCV2021/papers/Chen_Watch_Only_Once_An_End-to-End_Video_Action_Detection_Framework_ICCV_2021_paper.pdf

 

https://github.com/wei-tim/YOWO

 

GitHub - wei-tim/YOWO: You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization


 

https://github.com/yjh0410/YOWOv2

 

GitHub - yjh0410/YOWOv2: The second generation of YOWO action detector.

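Both YOWO generations are clip-based: the model consumes a short window of consecutive frames and predicts boxes plus action classes for the newest (key) frame. A minimal sketch of that sliding-window feeding pattern, assuming a generic `detector` callable (not the actual YOWO/YOWOv2 interface):

```python
from collections import deque

import cv2
import torch

CLIP_LEN = 16  # YOWO-style models typically use 8- or 16-frame clips

def run_on_video(detector, video_path, device="cuda"):
    cap = cv2.VideoCapture(video_path)
    buffer = deque(maxlen=CLIP_LEN)  # sliding window of recent frames
    results = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        buffer.append(torch.from_numpy(rgb).permute(2, 0, 1).float() / 255)
        if len(buffer) < CLIP_LEN:
            continue  # warm up until the first full clip is available
        # (1, 3, T, H, W): time is the depth axis the 3D-CNN branch sees
        clip = torch.stack(list(buffer), dim=1).unsqueeze(0).to(device)
        with torch.no_grad():
            # placeholder call: predictions are for the newest (key) frame
            results.append(detector(clip))
    cap.release()
    return results
```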

 

tracking

https://github.com/megvii-research/MOTRv2

 

GitHub - megvii-research/MOTRv2: [CVPR2023] MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors

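The core idea of MOTRv2 is to stop making the end-to-end tracker learn detection from scratch: boxes from a frozen pretrained detector (YOLOX in the paper) become proposal queries for the transformer tracker. A rough sketch of that interface, where `detector`, `tracker`, and the query-handling methods are illustrative placeholders, not the repo's API:

```python
import torch

def track_video(detector, tracker, frames):
    track_queries = None  # carries object identities across frames
    results = []
    for frame in frames:
        with torch.no_grad():
            boxes = detector(frame)  # (N, 4) boxes from the frozen detector
        # detections become proposal queries instead of learned anchors
        proposals = tracker.embed_proposals(boxes)  # placeholder method
        # the transformer jointly updates existing track queries (kept IDs)
        # and proposal queries (candidate new objects)
        track_queries, tracks = tracker.update(frame, track_queries, proposals)
        results.append(tracks)  # per-frame boxes with persistent track IDs
    return results
```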

 
