Motion Keypoint Detection AI Competition

[yanado] | Private 3rd place | 6.53785 | ensemble

2021.04.08 23:34

Code download
eff-pose: https://drive.google.com/drive/folders/141aEzKQYBy95lpCljjwhhtSFHh2s-t6j?usp=sharing
colab x: https://drive.google.com/file/d/1CDdqhvu-uTIa6P-lRtwKqaC6U63sJQNr/view?usp=sharing (see todo.txt)
colab: https://drive.google.com/drive/folders/141aEzKQYBy95lpCljjwhhtSFHh2s-t6j?usp=sharing

[requirements]
requirements.txt from OpenPose and EfficientPose
tqdm
pandas

[preprocess]
detection, rotation, crop, flip
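The crop and flip steps above have to be applied to the keypoint labels as well as the image. A minimal numpy sketch of those coordinate transforms (the function names are hypothetical, not taken from the released code):

```python
import numpy as np

def crop_keypoints(keypoints, x0, y0):
    """Shift (x, y) keypoints into the coordinate frame of a crop
    whose top-left corner is at (x0, y0) in the original image."""
    return np.asarray(keypoints, dtype=float) - np.array([x0, y0], dtype=float)

def flip_keypoints(keypoints, image_width):
    """Mirror (x, y) keypoints horizontally across an image of the
    given pixel width (x' = width - 1 - x, y unchanged)."""
    flipped = np.asarray(keypoints, dtype=float).copy()
    flipped[:, 0] = image_width - 1 - flipped[:, 0]
    return flipped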

[inference]
EfficientPose (optimized for single-person pose) (link: adapted from https://github.com/daniegr/EfficientPose)
OpenPose (added to improve performance) (link: https://github.com/CMU-Perceptual-Computing-Lab/openpose)
Run in order: detection -> EfficientPose -> OpenPose -> ensemble

[ensemble, postprocess]
ensemble: compute the standard deviation of the predicted points, remove any point that deviates from the mean point by more than 1.8 standard deviations, then average the remaining points and use that as the final estimate.
Missing values: gather statistics on the distances between points from train_df, then pick the closest point.
Keypoints that cannot be measured directly: the added keypoints are predicted from the remaining points.
(ex) instep: estimated as the midpoint of the heel and big_toe detected by OpenPose
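The outlier-rejecting average and the instep midpoint rule above can be sketched as follows. This is one reading of the description, assuming the deviation is measured as the Euclidean distance of each prediction from the mean point; the function names are hypothetical:

```python
import numpy as np

def ensemble_point(preds, k=1.8):
    """preds: (N, 2) array of (x, y) predictions for one keypoint from
    the different models. Drop predictions whose distance from the mean
    exceeds k standard deviations, then average the survivors."""
    preds = np.asarray(preds, dtype=float)
    mean = preds.mean(axis=0)
    dists = np.linalg.norm(preds - mean, axis=1)
    std = dists.std()
    # If all predictions coincide (std == 0), keep everything.
    keep = dists <= k * std if std > 0 else np.ones(len(preds), dtype=bool)
    if not keep.any():
        return mean  # fall back to the plain mean if everything was dropped
    return preds[keep].mean(axis=0)

def instep_from_heel_and_toe(heel, big_toe):
    """Estimate the instep keypoint as the midpoint of heel and big_toe."""
    return (np.asarray(heel, dtype=float) + np.asarray(big_toe, dtype=float)) / 2
```

For example, one far-off prediction among several agreeing ones is rejected before averaging, so a single bad model does not drag the ensemble estimate away.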

Code