Too large memory required? #96

Open
whu-lyh opened this issue May 26, 2022 · 0 comments
Comments

whu-lyh commented May 26, 2022

Hi koide, thank you for your work. I tried your registration method, specifically the fast voxelized GICP (FastVGICP), on a denser point cloud, and found that correspondence search requires too much memory, as the following code shows:

template <typename PointSource, typename PointTarget>
void FastVGICP<PointSource, PointTarget>::update_correspondences(const Eigen::Isometry3d& trans) {
  voxel_correspondences_.clear();
  auto offsets = neighbor_offsets(search_method_);

  // One candidate correspondence per input point per neighbor offset.
  unsigned int scaled_pts_num = input_->size() * offsets.size();
  std::vector<std::vector<std::pair<int, GaussianVoxel::Ptr>>> corrs(num_threads_);
  for (auto& c : corrs) {
    c.reserve(scaled_pts_num / num_threads_);
  }
  ...
  voxel_correspondences_.reserve(scaled_pts_num);
  ...
}

or, in more detail, at

voxel_correspondences_.reserve(input_->size() * offsets.size());

and at

voxel_mahalanobis_.resize(voxel_correspondences_.size());

My question is: when the input cloud contains a large number of points, reserving one correspondence per point per neighbor offset makes the memory cost very large, even for a simple 7-neighbor (DIRECT7) search. I understand this is done so that neighboring points can be queried quickly, but the memory cost is still quite high, and my laptop doesn't have enough memory for such a large allocation.
