Hello, my dataset is self-made and its format is similar to VOC (XML annotations). It was converted using Roboflow. This error occurs when I run training with Mask R-CNN. How should I adjust my dataset? Could you please help me solve this?
This is the annotation format in my dataset:

```xml
<annotation>
    <folder></folder>
    <filename>China_Drone_000000_jpg.rf.07a740aaf2b5fb932dcf2001c49eaa6e.jpg</filename>
    <path>China_Drone_000000_jpg.rf.07a740aaf2b5fb932dcf2001c49eaa6e.jpg</path>
    <source>
        <database>roboflow.com</database>
    </source>
    <size>
        <width>640</width>
        <height>640</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>D10</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <occluded>0</occluded>
        <bndbox>
            <xmin>155</xmin>
            <xmax>239</xmax>
            <ymin>336</ymin>
            <ymax>376</ymax>
        </bndbox>
    </object>
</annotation>
```
This is the error output:

```
/home/asus/miniconda3/envs/rcnn/bin/python /home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/train.py
Namespace(amp=False, aspect_ratio_group_factor=3, batch_size=2, data_path='', device='cuda:1', epochs=26, lr=0.004, lr_gamma=0.1, lr_steps=[16, 22], momentum=0.9, num_classes=6, output_dir='./save_weights', pretrain=True, resume='', start_epoch=0, weight_decay=0.0001)
Using cuda device training.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_000256_jpg.rf.ab91e2b57ecad4b64d9a84b0b5700b48.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_001391_jpg.rf.96d5321ab049c025d7fa8f65d7d30684.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_000170_jpg.rf.190d0cf1c18f88b7cc0e37f616fb25e2.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_000256_jpg.rf.e173a96af0d22482bbba8089cca73a4f.xml, skip this annotation file.
Using [0, 0.5, 0.6299605249474366, 0.7937005259840997, 1.0, 1.2599210498948732, 1.5874010519681994, 2.0, inf] as bins for aspect ratio quantization
Count of instances per bin: [4150]
Using 2 dataloader workers
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_001134_jpg.rf.51e0a6c595b78bce2d912dfd6ceaf882.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_000256_jpg.rf.4911be1b87dec91644f29aa503d5d777.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_001134_jpg.rf.404195a7aa8cf62f9742f7e4c0da359c.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_000218_jpg.rf.112dd0036ba94a1cc2d52ff0d6e22ed5.xml, skip this annotation file.
_IncompatibleKeys(missing_keys=[], unexpected_keys=['fc.weight', 'fc.bias'])
_IncompatibleKeys(missing_keys=['roi_heads.box_predictor.cls_score.weight', 'roi_heads.box_predictor.cls_score.bias', 'roi_heads.box_predictor.bbox_pred.weight', 'roi_heads.box_predictor.bbox_pred.bias', 'roi_heads.mask_predictor.mask_fcn_logits.weight', 'roi_heads.mask_predictor.mask_fcn_logits.bias'], unexpected_keys=[])
/home/asus/miniconda3/envs/rcnn/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3526.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Traceback (most recent call last):
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/train.py", line 240, in <module>
    main(args)
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/train.py", line 139, in main
    mean_loss, lr = utils.train_one_epoch(model, optimizer, train_data_loader,
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/train_utils/train_eval_utils.py", line 32, in train_one_epoch
    loss_dict = model(images, targets)
  File "/home/asus/miniconda3/envs/rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/asus/miniconda3/envs/rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/network_files/faster_rcnn_framework.py", line 94, in forward
    detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
  File "/home/asus/miniconda3/envs/rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/asus/miniconda3/envs/rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/network_files/roi_head.py", line 548, in forward
    gt_masks = [t["masks"] for t in targets]
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/network_files/roi_head.py", line 548, in <listcomp>
    gt_masks = [t["masks"] for t in targets]
KeyError: 'masks'
```
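The `KeyError: 'masks'` happens because Mask R-CNN's RoI head expects each training target dict to contain a `"masks"` entry (per-instance segmentation masks) alongside `"boxes"` and `"labels"`, and the box-only VOC annotations provide nothing to build one from. The proper fix is segmentation annotations (e.g. a COCO-segmentation export instead of VOC boxes). As a crude stopgap only — this is my own sketch, not code from the repo — one could synthesize rectangular masks from the boxes, which makes training run but teaches the model box-shaped masks:

```python
import numpy as np

def boxes_to_masks(boxes, height, width):
    """Build crude rectangular instance masks of shape (N, H, W) from
    [xmin, ymin, xmax, ymax] boxes. Pixels inside each box are 1, else 0.

    Workaround only: real instance segmentation needs polygon/mask labels.
    """
    masks = np.zeros((len(boxes), height, width), dtype=np.uint8)
    for i, (xmin, ymin, xmax, ymax) in enumerate(boxes):
        masks[i, int(ymin):int(ymax), int(xmin):int(xmax)] = 1
    return masks
```

In a dataset's `__getitem__` one would then add `target["masks"] = torch.as_tensor(boxes_to_masks(boxes, h, w))` next to the existing `"boxes"` and `"labels"` entries. If actual mask quality matters, training a plain Faster R-CNN on these boxes, or re-exporting the Roboflow project with segmentation labels, would be the better route.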