
Hello, my faiss has run into a problem; #24

Open
Reggieharden opened this issue Dec 17, 2020 · 12 comments

Comments

@Reggieharden

My versions are CUDA 9.0 and faiss-gpu 1.5.0.
The error reported is: TypeError: bruteForceKnn() takes exactly 10 arguments (12 given)

But if I upgrade faiss (also installed via conda), I run into the earlier issue: Faiss assertion 'err__ == cudaSuccess' failed in void faiss::gpu::runL2Norm

Is my only option to upgrade CUDA to 10.0 and try again?

@zzw-zwzhang
Collaborator

I have not run into the same problem. Have you solved it?

@zzw-zwzhang
Collaborator

zzw-zwzhang commented Jan 11, 2021

I have tested it on different CUDA versions (9.2 and 10.2); this error should have nothing to do with the CUDA version. You can install faiss-gpu with the following command: pip install faiss-gpu==1.6.3.

My machine version:

  1. CUDA=10.2, python=3.6, pytorch=1.6, torchvision=0.7.0, faiss-gpu=1.6.3
  2. CUDA=9.2, python=3.6, pytorch=1.6, torchvision=0.7.0, faiss-gpu=1.6.3

If you use faiss-gpu==1.6.5, the following issue will occur: yxgeee/SpCL#22.
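As a side note for anyone debugging this: bruteForceKnn is faiss's exact L2 nearest-neighbour search, and on small inputs you can cross-check whatever faiss build you end up with against a plain numpy stand-in. This is an illustrative sketch, not the faiss API; the name brute_force_knn and the shapes are made up here:

```python
import numpy as np

def brute_force_knn(queries, database, k):
    """Exact k-nearest-neighbour search by squared L2 distance.

    A plain-numpy stand-in for what faiss's GPU brute-force search
    computes; handy for sanity-checking faiss results on small inputs.
    """
    # Squared L2 via the expansion |q - d|^2 = |q|^2 - 2 q.d + |d|^2
    q_norm = (queries ** 2).sum(axis=1, keepdims=True)   # (num_q, 1)
    d_norm = (database ** 2).sum(axis=1)                  # (num_d,)
    dist = q_norm - 2.0 * queries @ database.T + d_norm   # (num_q, num_d)
    idx = np.argsort(dist, axis=1)[:, :k]
    return np.take_along_axis(dist, idx, axis=1), idx

rng = np.random.default_rng(0)
db = rng.standard_normal((100, 8)).astype(np.float32)
q = db[:5]  # queries taken from the database, so each nearest neighbour is itself
dists, ids = brute_force_knn(q, db, k=3)
```

If a faiss build returns clearly different neighbours than this on the same small arrays, the problem is the installation rather than your data.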

@Reggieharden
Author

Sorry, I found that OpenUnReID raises the following error when evaluating on the VehicleID dataset:
AssertionError: Error: all query identities do not appear in gallery

Is something going wrong when the data is split?

@yxgeee
Contributor

yxgeee commented Jan 16, 2021

@zwzhang121 Please help solve it.

@muzishen

muzishen commented Feb 7, 2021

Sorry, I found that OpenUnReID raises the following error when evaluating on the VehicleID dataset:
AssertionError: Error: all query identities do not appear in gallery

Is something going wrong when the data is split?

The error occurs because all camera IDs are 0: the VehicleID ranking code then discards every gallery image. I think this is a bug.
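A minimal numpy sketch of why this happens, assuming the standard re-ID junk-removal mask (the names and values below are illustrative, not OpenUnReID code):

```python
import numpy as np

# The standard re-ID protocol removes gallery entries that share BOTH the
# query's pid and camid ("same-camera junk"). VehicleID has no camera
# labels, so every camid defaults to 0, and the filter then removes every
# correct match for every query.
q_pid, q_camid = 1, 0
g_pids = np.array([1, 1, 2, 3])
g_camids = np.zeros(4, dtype=int)  # all camera IDs are 0

remove = (g_pids == q_pid) & (g_camids == q_camid)
keep = ~remove  # no gallery image with the query's identity survives
```

With no surviving correct match for any query, num_valid_q stays 0 and the "all query identities do not appear in gallery" assertion fires.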

@zzw-zwzhang
Collaborator

Yeah, I will fix this error.

@yxgeee
Contributor

yxgeee commented Feb 7, 2021

@Reggieharden @muzishen

Hi, actually the VehicleID dataset provides neither camera IDs nor an official query/gallery split, so the conventional metrics (e.g. mAP, CMC) do not support it. Following previous works, a dataset-specific evaluation protocol is used instead: compute the accuracies over randomly split query and gallery sets, averaged over ten splits. Unfortunately, OpenUnReID does not support such an evaluation metric yet, so it raises errors when you use VehicleID as a target-domain dataset or as an unlabeled dataset for unsupervised learning. @zwzhang121 is now in charge of upgrading the codebase and hopefully he can help improve it. Pull requests are also welcome.
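The protocol described above (one gallery image per identity, the rest as queries, averaged over ten random splits) can be sketched as follows. This is a hypothetical helper, not OpenUnReID code; vehicleid_split_eval and score_fn are made-up names:

```python
import numpy as np

def vehicleid_split_eval(pids, score_fn, num_trials=10, seed=0):
    """Sketch of the VehicleID protocol: per trial, randomly pick ONE
    image per identity as gallery and use the remaining images as
    queries, then average score_fn(query_idx, gallery_idx) over trials."""
    rng = np.random.default_rng(seed)
    pids = np.asarray(pids)
    scores = []
    for _ in range(num_trials):
        gallery = []
        for pid in np.unique(pids):
            members = np.flatnonzero(pids == pid)
            gallery.append(rng.choice(members))   # one image per identity
        gallery = np.array(gallery)
        query = np.setdiff1d(np.arange(len(pids)), gallery)
        scores.append(score_fn(query, gallery))
    return float(np.mean(scores))

# toy usage: 7 images of 3 identities -> gallery always has 3 images
mean_gallery = vehicleid_split_eval([0, 0, 0, 1, 1, 2, 2],
                                    lambda q, g: float(len(g)))
```

In a real evaluation, score_fn would rank queries against the gallery and return CMC/top-k accuracy for that split; the averaging over ten trials is what makes the reported numbers stable.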

@muzishen

muzishen commented Feb 7, 2021

@Reggieharden @muzishen

Hi, actually the VehicleID dataset does not provide camera IDs, as well as the query and gallery split. […]

Yes, you are right! Do you have plans to update the results for vehicle re-ID?

@zzw-zwzhang
Collaborator

@muzishen I will solve it and release the results on MODEL_ZOO before the Spring Festival, thanks.

@muzishen

muzishen commented Feb 7, 2021

@muzishen I will solve it and release the results on MODEL_ZOO before the Spring Festival, thanks.

Looking forward to your results! Thank you again for your great work!

@JonasZero

@Reggieharden @muzishen
Hi, actually the VehicleID dataset does not provide camera IDs, as well as the query and gallery split. […]

Yes, you are right! Do you have plans to update the results of vehicle reid.

I have the same question.

@JonasZero

JonasZero commented Oct 15, 2022


Fortunately, I found a solution at https://github.com/Jakel21/vehicle-ReID-baseline/blob/master/vehiclereid/eval_metrics.py.

import numpy as np


def eval_vehicleid(distmat, q_pids, g_pids, q_camids, g_camids, max_rank):
    """Evaluation with the VehicleID metric.

    Key: the gallery contains one image for each test vehicle, and the
    other test images are used as queries.
    """
    num_q, num_g = distmat.shape

    if num_g < max_rank:
        max_rank = num_g
        print('Note: number of gallery samples is quite small, got {}'.format(num_g))

    indices = np.argsort(distmat, axis=1)
    matches = (g_pids[indices] == q_pids[:, np.newaxis]).astype(np.int32)

    # compute CMC curve for each query
    all_cmc = []
    all_AP = []
    num_valid_q = 0.  # number of valid queries

    for q_idx in range(num_q):
        # The standard re-ID protocol removes gallery samples sharing both
        # the query's pid and camid:
        #   q_pid = q_pids[q_idx]
        #   q_camid = q_camids[q_idx]
        #   order = indices[q_idx]
        #   remove = (g_pids[order] == q_pid) & (g_camids[order] == q_camid)
        # VehicleID has no camera information, so keep every gallery image.
        keep = np.ones(num_g, dtype=bool)
        # compute CMC curve
        raw_cmc = matches[q_idx][keep]  # binary vector; positions with value 1 are correct matches
        if not np.any(raw_cmc):
            # true when the query identity does not appear in the gallery
            continue

        cmc = raw_cmc.cumsum()
        cmc[cmc > 1] = 1

        all_cmc.append(cmc[:max_rank])
        num_valid_q += 1.

        # compute average precision
        # reference: https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Average_precision
        num_rel = raw_cmc.sum()
        tmp_cmc = raw_cmc.cumsum()
        tmp_cmc = [x / (i + 1.) for i, x in enumerate(tmp_cmc)]
        tmp_cmc = np.asarray(tmp_cmc) * raw_cmc
        AP = tmp_cmc.sum() / num_rel
        all_AP.append(AP)

    assert num_valid_q > 0, 'Error: all query identities do not appear in gallery'

    all_cmc = np.asarray(all_cmc).astype(np.float32)
    all_cmc = all_cmc.sum(0) / num_valid_q
    mAP = np.mean(all_AP)

    return all_cmc, mAP
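To sanity-check the average-precision step in the function above, here is the same arithmetic on a tiny hand-made match vector (raw_cmc below is a made-up example, not real ranking output):

```python
import numpy as np

# Given a binary vector of correct matches over the ranked gallery, AP is
# the mean of precision@k taken at each correct-match position.
raw_cmc = np.array([1, 0, 1, 0])  # correct matches at ranks 1 and 3
num_rel = raw_cmc.sum()
tmp_cmc = raw_cmc.cumsum()                                  # [1, 1, 2, 2]
tmp_cmc = np.asarray([x / (i + 1.0) for i, x in enumerate(tmp_cmc)]) * raw_cmc
AP = tmp_cmc.sum() / num_rel                                # (1/1 + 2/3) / 2

# The CMC step saturates the cumulative sum at 1, so rank-1 accuracy is 1
# for this query because its first match is correct.
cmc = raw_cmc.cumsum()
cmc[cmc > 1] = 1
```

Working a tiny example like this by hand is a quick way to confirm an evaluation routine before trusting its numbers on a full dataset.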
