RuntimeError: Error(s) in loading state_dict for DINO:
    size mismatch for transformer.decoder.class_embed.0.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.0.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.decoder.class_embed.1.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.1.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.decoder.class_embed.2.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.2.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.decoder.class_embed.3.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.3.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.decoder.class_embed.4.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.4.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.decoder.class_embed.5.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.5.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.enc_out_class_embed.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.enc_out_class_embed.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for label_enc.weight: copying a param with shape torch.Size([92, 256]) from checkpoint, the shape in current model is torch.Size([6, 256]).
    size mismatch for class_embed.0.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.0.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for class_embed.1.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.1.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for class_embed.2.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.2.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for class_embed.3.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.3.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for class_embed.4.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.4.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for class_embed.5.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.5.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
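The trace indicates a checkpoint trained with 91 classes (COCO-style, plus a 92-row `label_enc`) being loaded into a model configured for 5 classes, so every classification-head tensor has a different first dimension. One common workaround (a sketch, not the repo's official fix; `load_matching_weights` is a hypothetical helper name) is to drop the shape-mismatched tensors and load the rest with `strict=False`, then fine-tune the freshly initialized heads:

```python
import torch
import torch.nn as nn

def load_matching_weights(model, checkpoint_state_dict):
    """Load only checkpoint tensors whose shapes match the current model,
    skipping e.g. classification heads sized for a different class count.
    Returns the list of skipped keys so they can be inspected."""
    model_state = model.state_dict()
    filtered = {
        k: v for k, v in checkpoint_state_dict.items()
        if k in model_state and v.shape == model_state[k].shape
    }
    skipped = [k for k in checkpoint_state_dict if k not in filtered]
    # strict=False tolerates the keys we deliberately left out.
    model.load_state_dict(filtered, strict=False)
    return skipped

# Toy demonstration: a "checkpoint" whose head has 91 classes,
# loaded into a current model whose head has 5 classes.
pretrained = nn.Sequential(nn.Linear(256, 256), nn.Linear(256, 91))
current = nn.Sequential(nn.Linear(256, 256), nn.Linear(256, 5))
skipped = load_matching_weights(current, pretrained.state_dict())
# The 91-class head tensors are skipped; the shared layer is copied.
```

For the actual DINO checkpoint this would skip the `class_embed.*`, `transformer.*class_embed*`, and `label_enc` tensors seen in the trace; those layers then start from random init and need retraining on the 5-class dataset. Alternatively, retraining with `num_classes` set to match the checkpoint avoids the mismatch entirely.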