
[DL] - MediaPipe / Video Object Detection

KimTory 2022. 3. 6. 14:31

👉 Development environment: Jupyter Notebook, Python 3.9, MediaPipe, OpenCV
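
Both libraries install straight from PyPI (pip install mediapipe opencv-python). A quick sanity check that the environment is wired up, as a minimal sketch:

import cv2
import mediapipe as mp

# Confirm OpenCV is importable and the MediaPipe face detection solution is available
print("OpenCV:", cv2.__version__)
print("MediaPipe face_detection loaded:", mp.solutions.face_detection is not None)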

✍ Source Code

import cv2
import mediapipe as mp


# Face detection: declare the modules used to detect faces and to draw markers on the detected regions
mp_face_detection = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils


# For video file input:
cap = cv2.VideoCapture("c:\\face_video.mp4")  # source video
# min_detection_confidence ranges from 0 to 1; the higher the value, the stricter the detection
# at 0.5 there were occasional false detections, so it was raised to 0.7
with mp_face_detection.FaceDetection(model_selection=0, min_detection_confidence=0.7) as face_detection:
    while cap.isOpened():
        success, image = cap.read()
        if not success:
            break

        # Mark the frame as read-only and convert BGR -> RGB before passing it to MediaPipe
        image.flags.writeable = False
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        results = face_detection.process(image)

        # Draw the face detection annotations on the image.
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

        if results.detections:
            # Each detection carries six relative keypoints
            for detection in results.detections:
                mp_drawing.draw_detection(image, detection)
                print(detection)  # print the detection info: score, bounding box (x, y, width, height), keypoints, etc.

                # Pick out specific keypoints
                keypoints = detection.location_data.relative_keypoints
                right_eye = keypoints[0]  # right eye
                left_eye = keypoints[1]   # left eye
                nose_tip = keypoints[2]   # nose tip

                h, w, _ = image.shape  # height and width of the source frame

                # Convert the relative coordinates to pixel coordinates, wrapped in () to form tuples
                right_eye = (int(right_eye.x * w), int(right_eye.y * h))
                left_eye = (int(left_eye.x * w), int(left_eye.y * h))
                nose_tip = (int(nose_tip.x * w), int(nose_tip.y * h))

                # Draw circles on the left/right eyes and the nose
                cv2.circle(image, right_eye, 50, (255, 0, 0), 10, cv2.LINE_AA)   # radius 50, BGR blue
                cv2.circle(image, left_eye, 50, (0, 255, 0), 10, cv2.LINE_AA)    # radius 50, BGR green
                cv2.circle(image, nose_tip, 50, (0, 255, 255), 10, cv2.LINE_AA)  # radius 50, BGR yellow

        # The source is a video file, not a webcam, so the frame is not flipped (a webcam variant is sketched after this block)
        # Resize the display to half the original size
        cv2.imshow('MediaPipe Face Detection', cv2.resize(image, None, fx=0.5, fy=0.5))
        if cv2.waitKey(1) == ord('q'):  # press "q" to quit
            break

cap.release()  # release the video capture
cv2.destroyAllWindows()
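
As noted in the comments, the frame is not flipped because the source is a file rather than a webcam. For a live camera, the usual pattern is to open a device index and mirror the frame before display; a minimal sketch of that variant (the device index 0 and the window title are assumptions about your setup):

import cv2
import mediapipe as mp

mp_face_detection = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # 0 = default webcam (assumed device index)
with mp_face_detection.FaceDetection(model_selection=0, min_detection_confidence=0.7) as face_detection:
    while cap.isOpened():
        success, image = cap.read()
        if not success:
            continue  # for a live stream, skip dropped frames instead of exiting

        results = face_detection.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
        if results.detections:
            for detection in results.detections:
                mp_drawing.draw_detection(image, detection)

        # Mirror the frame so the preview behaves like a selfie view
        cv2.imshow('MediaPipe Face Detection (webcam)', cv2.flip(image, 1))
        if cv2.waitKey(1) == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()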

 

Detection object info (output of print(detection))
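
Besides the six keypoints, the printed detection message also carries a confidence score and a relative bounding box (all coordinates are fractions of the frame size). A minimal sketch of reading those fields, meant to sit inside the for detection in results.detections: loop above; the drawing color and thickness are arbitrary choices:

# Inside the `for detection in results.detections:` loop above
score = detection.score[0]  # detection confidence, 0 ~ 1
box = detection.location_data.relative_bounding_box  # xmin, ymin, width, height as fractions of the frame

h, w, _ = image.shape
x, y = int(box.xmin * w), int(box.ymin * h)
bw, bh = int(box.width * w), int(box.height * h)

# Draw the bounding box and the score on the frame
cv2.rectangle(image, (x, y), (x + bw, y + bh), (0, 0, 255), 3)
cv2.putText(image, f"{score:.2f}", (x, y - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2, cv2.LINE_AA)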

 

✍ Output / left and right eyes and nose marked separately

→ After face detection, red dots mark the keypoints so the ears, eyes, nose, and mouth can be told apart (a labeled-keypoint sketch follows below)
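
If the keypoints need to be told apart by name rather than by color, each one can be labeled directly. A minimal sketch, assuming the standard MediaPipe keypoint order (right eye, left eye, nose tip, mouth center, right ear tragion, left ear tragion) and the same c:\face_video.mp4 source:

import cv2
import mediapipe as mp

mp_face_detection = mp.solutions.face_detection

# Standard keypoint order used by MediaPipe face detection (an assumption worth double-checking
# against the FaceKeyPoint enum in your MediaPipe version)
KEYPOINT_NAMES = ["right eye", "left eye", "nose tip", "mouth", "right ear", "left ear"]

cap = cv2.VideoCapture("c:\\face_video.mp4")
with mp_face_detection.FaceDetection(model_selection=0, min_detection_confidence=0.7) as face_detection:
    while cap.isOpened():
        success, image = cap.read()
        if not success:
            break

        results = face_detection.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
        if results.detections:
            h, w, _ = image.shape
            for detection in results.detections:
                # Write each keypoint's name next to its pixel position
                for name, kp in zip(KEYPOINT_NAMES, detection.location_data.relative_keypoints):
                    pt = (int(kp.x * w), int(kp.y * h))
                    cv2.circle(image, pt, 5, (0, 0, 255), -1, cv2.LINE_AA)  # red dot
                    cv2.putText(image, name, (pt[0] + 10, pt[1]),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2, cv2.LINE_AA)

        cv2.imshow('MediaPipe Face Detection - labeled keypoints', cv2.resize(image, None, fx=0.5, fy=0.5))
        if cv2.waitKey(1) == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()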