
Bounding box is inaccurate #1237

Open
darshg321 opened this issue Apr 19, 2023 · 2 comments

Comments

@darshg321
I have this code:
```python
boxes, probs = mtcnn.detect(rgb_frame)

if boxes is not None:
    for box, prob in zip(boxes, probs):
        if prob < min_probability:
            continue
        x, y, w, h = box.astype(int)

        face = rgb_frame[y:y+h, x:x+w]

        tensor_image = mtcnn(face)

        face_embedding = facenet_model(tensor_image.unsqueeze(0)).detach().numpy()[0]

        match = face_matching(face_embedding, embeddings, 0.4)
        if match:
            print('Match found')
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        else:
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255), 2)
```

When I try to draw a rectangle around the faces using the bounding box coordinates, the top-left corner is accurate, but the height and width of the boxes are much larger than the face, and both are sometimes randomly far longer than they should be. Why is this happening? I am not passing any parameters to the MTCNN object.

@Devang-C

The issue you're facing with the bounding box coordinates, where the height and width of the boxes are larger than the actual face and sometimes vary randomly, might be due to a mismatch between the input image sizes expected by MTCNN and by the face recognition model.

MTCNN expects the input image size to be relatively small (typically a few hundred pixels), while the face recognition model may require a larger input image size (e.g., 160x160 pixels) for accurate feature extraction. When you extract the face region using the bounding box coordinates from MTCNN, it may result in a face image that doesn't match the size expectation of the face recognition model.

To resolve this issue, you need to ensure that the face region extracted from MTCNN is resized to match the input size required by the face recognition model. You can use OpenCV's resize function or other image resizing methods to resize the face region before feeding it into the face recognition model.

I hope this solves the issue. If you need any more help, feel free to reply.

@ammar3010

ammar3010 commented Sep 20, 2023

I am facing the same issue. I am detecting faces and saving the cropped face images to my local drive to further train a facial recognition model, but the bounding box is only accurate at the top-left corner; the other corners are expanded. Below is my code:
```python
from facenet_pytorch import MTCNN
import cv2
import os

mtcnn = MTCNN(keep_all=True, device='cuda:0')
dir_path = 'assets/raw/ammar'
save_path = 'assets/face_raw/ammar'

files = os.listdir(dir_path)

for file in files:
    img = cv2.imread(dir_path + "/" + file, cv2.COLOR_BGR2RGB)
    boxes, _ = mtcnn.detect(img)

    x = int(boxes[0][0])
    y = int(boxes[0][1])
    w = int(boxes[0][2])
    h = int(boxes[0][3])

    crop_img = img[y:y+h, x:x+w]

    cv2.imwrite(save_path + "/" + file, crop_img)
```

I'm stuck here and cannot find a solution.
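One thing worth checking in both snippets: in facenet_pytorch, `MTCNN.detect` returns each box as corner coordinates `[x1, y1, x2, y2]`, not `(x, y, width, height)`. Slicing with `y:y+h, x:x+w` therefore produces a crop whose bottom-right corner overshoots, which matches the symptom described (top-left accurate, other corners expanded). A minimal sketch with a dummy frame and a hypothetical box:

```python
import numpy as np

# Dummy 480x640 RGB frame and a detection as MTCNN.detect returns it:
# corner coordinates [x1, y1, x2, y2], NOT (x, y, width, height).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
box = np.array([100.0, 120.0, 220.0, 280.0])  # hypothetical detection

x1, y1, x2, y2 = box.astype(int)

# Correct crop: slice between the two corners.
face = frame[y1:y2, x1:x2]
print(face.shape)  # (160, 120, 3): height 280-120, width 220-100

# Buggy crop: treating (x2, y2) as (width, height) inflates the region.
wrong = frame[y1:y1 + y2, x1:x1 + x2]
print(wrong.shape)  # (280, 220, 3): much larger than the face
```

With corner-based slicing, `cv2.rectangle(frame, (x1, y1), (x2, y2), ...)` also draws the box correctly without adding the coordinates together.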
