When I experiment with ShapeNet's full point clouds, the segmentation accuracy matches what the paper reports. However, when I deployed the network in my own project, I found that when the point cloud is incomplete because it was captured from a single camera viewpoint, the segmentation is too poor to be usable. Is there any way to improve segmentation accuracy on such partial, single-view point clouds? I haven't found any notable research in this area; perhaps you know of an engineering method to address it?
Figure 1 shows the part-segmentation result when the full ShapeNet point cloud is used as input; Figures 2 and 3 show the part-segmentation results on point clouds captured from a single camera.