2-D global (3 DoF) localization questions #18

Open

WilliamWoo45 opened this issue Jun 7, 2020 · 21 comments

@WilliamWoo45

Hi Carlos!

First of all, thanks for sharing your research work, which is quite fascinating.

I've successfully installed from source and have recently been looking through your code. I'm trying to run your algorithm on my own 2-D laser data to accomplish 3 DoF (position_x, position_y, angle_yaw) localization. However, I have a few questions that I hope you can clarify:

  1. Since I'm trying to implement 2-D global (3 DoF) localization, I'm wondering whether I should use guardian_localization.launch or dynamic_robot_localization_system.launch?
  2. I've built maps of the environment with the 2-D SLAM algorithms Hector-SLAM and Google Cartographer, obtaining occupancy grid maps (OGMs) of type (.pgm | .yaml) and (.pbstream), respectively. Can I use the obtained OGMs as the reference maps for localization, or do I first have to convert the OGMs to point cloud maps?
  3. Could you detail the steps to implement 2-D global (3 DoF) localization on my own 2-D laser data? I checked the Usage section but still have no idea how to do this.

By the way, I posted issue #17 by mistake; you can delete it. Sorry about that.

Thanks a lot and looking forward to your early reply!

Best Regards,
William

@carlosmccosta
Owner

Hello :)

  1. Both guardian_localization.launch and dynamic_robot_localization_system.launch have a configuration of the drl pipeline prepared for 3 DoF robot localization, so you can use either of them. However, dynamic_robot_localization_system.launch is more up to date with new developments (for example, lookup tables for faster closest-point search).
  2. drl can use a reference point cloud directly or an occupancy grid received from a ros topic (which is converted internally to a point cloud). You just need to have a map_server running with your .pgm / .png map and then specify the topic to drl (the default is /map); see the sketch after this list.
  3. Regarding the steps for your own 2-D laser data:

  • The default configurations of drl should be generic enough to be usable for your 3 DoF localization use case.
  • By default, the global localization pipeline uses geometric features for finding the robot's initial pose and then relies on point cloud matching algorithms for tracking the robot pose.
  • You can customize the pipeline and its parameterization for your particular use case.
  • The parameters are listed in yaml/schema/drl_configs.yaml.
  • An overview of the pipeline is given in docs/perception-overview.pdf and described in my dissertation and associated papers.
  • The discussion in issue #14 may also be helpful.
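
A minimal sketch of the map-as-topic setup mentioned in point 2 (assuming your map files are named mymap.yaml / mymap.pgm and drl is left on its default /map topic):

# serve the slam-built occupancy grid (map_server publishes it on /map by default)
rosrun map_server map_server mymap.yaml
# confirm the topic that drl will convert to a point cloud is available
rostopic info /map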

Let me know how drl performs in your use case.

P.S.
Before testing drl, please do a git pull to get the latest commits and then recompile.

Have a nice day :)

@WilliamWoo45
Author

Good morning Carlos!
Thanks for your speedy reply.

I've specified the critical parameters and ran the algorithm following your suggestions.
I opened four terminals and ran the following commands in order:
(1) roslaunch dynamic_robot_localization dynamic_robot_localization_system.launch
(2) rosrun map_server map_server /opt/ros/kinetic/share/husky_navigation/maps/mymap.yaml
(3) rosbag play --clock /home/williamwoo/Z_husky_S1-B4/husky-tf-scan-07-02-B-block-amcl-way1.bag
(4) rviz

However, there seems to be a problem with missing TFs between [laser] and [odom], and no robot shows up in rviz.
Launch command line screenshots:
commandline1
commandline2
commandline3

Rviz:
rviz

The tf_echo odom laser:
tf_echo

The tf tree, modified launch file, .yaml file, .pgm file, and 3 DoF localization rosbag are uploaded for your reference.

Any idea for this problem?

Cheers :)

Best Regards,
William

@carlosmccosta
Owner

Good night :)

I looked at the files you uploaded and tested the rosbag.
You were close; you just needed to set the sim time parameter and add a static transform for the missing TF before running rosbag play:

roscore
rosparam set /use_sim_time true
rosrun tf2_ros static_transform_publisher 0 0 0 0 0 0 odom base_link

Then:

cd Carlos_Dynamic_Robot_Localization
rviz
rosrun map_server map_server mymap.yaml
roslaunch dynamic_robot_localization_system.launch
rosbag play --clock husky-tf-scan-07-02-B-block-amcl-way1.bag

drl uses the ros tf conventions for navigation, which state that:

  • The sensors are calibrated in relation to base_link
  • Odometry information is given as a TF between base_link and odom frames
  • A localization system (such as drl) will publish TF between odom and map

In your case, if you do not have odometry information, you just specify an identity transformation with a static transform publisher.
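
To verify the resulting tf chain laser -> base_link -> odom -> map (frame names assumed from your bag), you can inspect the tf tree:

# render the current tf tree to frames.pdf
rosrun tf view_frames
# prints the transform only if the full chain is connected
rosrun tf tf_echo map laser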

Have a nice day :)

@WilliamWoo45
Author

WilliamWoo45 commented Jun 10, 2020

Hi Carlos!
Thanks for your detailed explanation.

I followed your steps and there are no more warnings about the tf between laser and odom. However, I think there are still some problems:

  1. The laser scan shown in rviz is not moving
  2. An error shows in rviz under LaserScan -> Transform
    rviz2
    For frame [laser]: No transform to fixed frame [map]. TF error: [Lookup would require extrapolation into the future. Requested time 1562070669.875814000 but the latest data is at time 1562070668.289503000, when looking up transform from frame [laser] to frame [map]]
  3. Warnings keep appearing in the terminal (about the EuclideanTransformationValidator rejecting a new pose at time 1562070670.157474292 due to root_mean_square_error / max_outliers_percentage):
    screen1
    screen2

I tried to fine-tune the parameters following your suggestions about the initial algorithm parameterization, but the results show no difference. I even manually set the ground-truth initial 3 DoF pose in dynamic_robot_localization_system.launch, and the result still looks the same.

I also tried another repetitive environment and the problems are kind of the same:
rviz3

So I'm wondering: is there any parameter that needs specific tuning? Or is the experimental corridor environment too repetitive/ambiguous for the robot to match features?

Best Regards,
William

@carlosmccosta
Owner

Hello :)

  1. When there is no odometry information, drl or any other localization system requires environment geometry to estimate the robot motion. For example, for 3 DoF localization, corners are very good for motion estimation because they constrain all 3 degrees of freedom. Straight corridors only constrain 2 degrees of freedom (the geometry is the same along the corridor, which may cause a localization system to either keep the alignment still or slide it along the corridor). A good wheel encoder can help significantly in these situations. The second rviz screenshot shows a much better environment for robot localization.
  2. drl has a built-in alignment validator / outlier detector, used by the EuclideanTransformationValidator module. When drl detects too much discrepancy between the alignment of the sensor data and the map over a given period of time, it will stop tracking and try to find the initial pose of the robot using feature matching. During this time, following the ros standard, it will stop publishing the tf between odom and map, which results in rviz showing that error (stopping the publishing of the tf odom to map is the standard way to inform other nodes that the localization system lost tracking, and as such, navigation should either stop the robot or trust the odometry data for a short time).
  3. The errors of the EuclideanTransformationValidator are there to let you know that drl detected too much discrepancy between the aligned sensor data and the map, which can affect the precision and reliability of the localization. drl has a configurable blind window time, in which it avoids triggering the feature matching. This allows the robot to be temporarily blinded by people in front of its sensors; after they move away, drl will resume tracking.

drl publishes two topics with the inliers and outliers that you can view in rviz (inliers -> points that are considered correctly aligned, outliers -> sensor points that are too far from any map data). This can give you a good idea of how drl evaluates the overlap of the sensor data with the map.

For initial alignment, the critical parameters are the keypoint detector and descriptor, along with the sample_consensus_initial_alignment modules.

For tracking, the critical parameters are the correspondence distance, the maximum number of iterations and the time limit given to iterative closest point.

If you have a sensor with a lot of noise, you may need to increase the distance of the outlier detector.

If you expect that the robot will only see a small part of the map (due to occlusions or people walking around), you should increase the maximum allowed percentage of outliers (1 -> 100%; I do not recommend setting it higher than 0.9 -> I think an overlap of at least 10% is required -> the confidence in any map matching localization system is inversely proportional to the % of outliers -> the higher the overlap between sensor data and the map, the more confident you can be that the localization system is working properly). A sketch of these thresholds follows below.
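
For illustration, a hedged sketch of how those validator thresholds could be raised in a launch file (the parameter paths below are assumptions; the authoritative names are in yaml/schema/drl_configs.yaml):

<!-- hypothetical parameter paths; confirm against drl_configs.yaml -->
<param name="dynamic_robot_localization/transformation_validators/euclidean_transformation_validator/max_outliers_percentage" type="double" value="0.85"/>
<param name="dynamic_robot_localization/transformation_validators/euclidean_transformation_validator/max_root_mean_square_error" type="double" value="0.1"/>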

For testing the tracking only, you can give an initial alignment pose using rviz (in the map tf frame).

You can see which modules are being loaded in dynamic_robot_localization_system.launch, which lists the defaults.

More information about the parameters is available in drl_configs.yaml.

You can adjust the logging level of drl and pcl using the ros_verbosity_level and pcl_verbosity_level parameters.

Have a nice day,

@WilliamWoo45
Author

Good afternoon Carlos!
Thanks again for your speedy reply.

I've looked through all your suggestions. I've recorded a new rosbag that includes the odometry topic /husky_velocity_controller/odom and fine-tuned the parameters you mentioned.
Besides, I also updated my roslaunch file, especially the initial pose and the odometry topic.
However, the results show no difference from the previous ones. There are still warnings about root_mean_square_error and "Not enough correspondences found". Besides, there is no tracking and no laser scans showing in rviz.
warn6

So I'm wondering what the possible solutions for my case could be? Using the same rosbag, I've included a video of running the AMCL algorithm with the specified ground-truth initial pose. I was hoping I could achieve something similar with your algorithm, or even better than AMCL.

You can adjust the logging level of drl and pcl using the ros_verbosity_level and pcl_verbosity_level parameters

By the way, I'm wondering what the function of the logging level is? I tried adjusting it, but the results seem unchanged.

Cheers! :)

Best Regards,
William

@carlosmccosta
Owner

Good night :)

drl follows the ros navigation recommendations, and since it does not need the twist information present in nav_msgs/Odometry, it retrieves the 6 DoF pose of the odometry from the tf base_link -> odom.
As such, you need a node that subscribes to the odometry topic and publishes the associated tf.
You can use the pose_to_tf_publisher for that; an example launch file is below:

<?xml version="1.0" encoding="UTF-8"?>
<launch>
	<include file="$(find pose_to_tf_publisher)/launch/pose_to_tf_publisher.launch">
		<arg name="odometry_topic" default="/husky_velocity_controller/odom"/>
		<arg name="map_frame_id" default="odom"/>
		<arg name="odom_frame_id" default=""/>
		<arg name="base_link_frame_id" default="base_link"/>
		<arg name="publish_rate" default="-1"/>
		<arg name="publish_initial_pose" default="false"/>
		<arg name="transform_pose_to_map_frame_id" default="false"/>
	</include>
</launch>

The odometry parameter present in dynamic_robot_localization_system.launch was for letting the laserscan_to_pointcloud assembler adjust how many scans it assembles into each point cloud it publishes (which is subscribed by drl).
Since you have only 1 laser, you should configure it to assemble just 1 scan when the robot is moving and allow it to merge several scans when the robot is stopped or moving slowly.
If you do not want to use this feature, which is highly dependent on the accuracy of your odometry information, just disable it by setting the odometry topic to an empty string, as sketched below.
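
A hedged sketch of disabling the assembler's odometry-based merging (the arg name below is an assumption; check the assembler section of dynamic_robot_localization_system.launch for the real one):

<!-- hypothetical arg name for the laserscan_to_pointcloud assembler -->
<arg name="laserscan_to_pointcloud_odometry_topic" default=""/>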

You must not set the odometry topic for the pose_to_tf_publisher used by drl.
That instance must only publish the tf associated with the msgs coming from the drl localization_pose or from the rviz /initialpose.

Below is a video showing how drl performed in the second bag file (drl stopped tracking when the robot reached the wall because the laser scans in the /scan topic were temporarily missing for a few seconds).
https://www.dropbox.com/s/u35jk0vqhnrzot7/2020-06-11-B4-test-drl.mp4?dl=0

You can see that drl is much more accurate and efficient than amcl (when using lookup tables, drl took less than 5 ms to process your laser data -> you can inspect this information in the localization_times topic).
amcl uses a completely different approach to localization, namely a particle filter with statistics / heuristics for choosing the best particle and for rearranging new particle candidates as the robot moves.
It is also very tricky to configure its odometry coefficients, especially if your robot wheels have a lot of slippage.
drl, on the other hand, uses point cloud matching algorithms that were built for accurate and efficient alignment of point clouds, but they require a rough alignment to converge to a good solution.
That is why drl uses the iterative closest point (ICP) algorithm for accurate and efficient local tracking, and feature matching for estimating a rough initial pose of the robot when tracking is lost.
For best accuracy the map / occupancy grid should have walls that are just 1 pixel thick.
Your map has walls that are several pixels thick. This may cause the matching algorithms to oscillate slightly, because the laser scan matching can slide within the pixels of the walls.
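
To inspect the processing times mentioned above (assuming the localization_times topic lives under the drl namespace, like the other drl topics):

rostopic echo /dynamic_robot_localization/localization_times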

The logging levels allow you to control the level of detail of the messages shown in the terminal.
The debug level will show you verbose output, giving detailed information about drl operations in each module, while the fatal level will show pretty much no output at all from drl (you can also set the logging level within the pcl code).
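
A hedged sketch of raising both verbosity levels (these parameter names appear later in this thread; exposing them as launch args, as shown here, is an assumption):

<arg name="ros_verbosity_level" default="DEBUG"/>
<arg name="pcl_verbosity_level" default="DEBUG"/>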

P.S.
pose_to_tf_publisher discards poses in the past, and as such, before restarting the rosbag play, you need to restart the launch files of pose_to_tf_publisher (for the odometry tf) and drl.

Have a nice day :)

@WilliamWoo45
Author

WilliamWoo45 commented Jun 12, 2020

Hi Carlos!
Thanks for your detailed guidance.

Following your suggestions, I've removed the odometry_topic in dynamic_robot_localization_system.launch and configured the number of assembled laser scans to one.

<?xml version="1.0" encoding="UTF-8"?>
<launch>
	<include file="$(find pose_to_tf_publisher)/launch/pose_to_tf_publisher.launch">
 		<arg name="odometry_topic" default="/husky_velocity_controller/odom"/>
		<arg name="map_frame_id" default="odom"/>
		<arg name="odom_frame_id" default=""/>
		<arg name="base_link_frame_id" default="base_link"/>
		<arg name="publish_rate" default="-1"/>
		<arg name="publish_initial_pose" default="false"/>
		<arg name="transform_pose_to_map_frame_id" default="false"/>
	</include>
</launch>

I was wondering, should it be <arg name="map_frame_id" default="map"/> and <arg name="odom_frame_id" default="odom"/>? Besides, I've put this into a new launch file named odom.launch under the path ~/catkin_ws_drl/src/pose_to_tf_publisher/launch. Then I ran the commands in the following order:

  1. roscore
  2. rosparam set /use_sim_time true
  3. rosrun tf2_ros static_transform_publisher 0 0 0 0 0 0 odom base_link
  4. roslaunch pose_to_tf_publisher odom.launch
  5. roslaunch dynamic_robot_localization dynamic_robot_localization_system.launch
  6. rosrun map_server map_server /opt/ros/kinetic/share/husky_navigation/maps/mymap.yaml
  7. rviz
  8. rosbag play --clock /home/williamwoo/Documents/husky2-pose-_2020-06-11-B4-test1-new.bag

Now I can see the laser scans moving in rviz.
rviz1
However, the laser scans still do not match the map, so the localization still fails. I'm curious how you tuned the parameters, ran DRL on my rosbag file, and obtained the good results shown in your video https://www.dropbox.com/s/u35jk0vqhnrzot7/2020-06-11-B4-test-drl.mp4?dl=0 ?
Would it be ok for you to share your launch files and .yaml files?

Besides, I'm also wondering where exactly I should set the initial pose. Do I only need to set it in dynamic_robot_localization_system.launch, or should I also set it in pose_to_tf_publisher.launch? Or should I manually set it in the rviz interface using the 2D Pose Estimate arrow?

Thanks a lot :)

Best Regards,
William

@carlosmccosta
Owner

Good night :)

The pose_to_tf_publisher package was initially developed for publishing the tf odom -> map from pose stamped msgs (which led to the names you see in the launch file), but it evolved to publish other things, namely the tf from odometry, which should be between the tf frames base_link -> odom.
I left the names of the arguments as map, odom and base_link because that is the typical use case (but they can be configured to other frames).
For publishing the tf base_link -> odom from the odometry messages, please use the configurations that I specified earlier.

You also cannot have two sources of the tf base_link -> odom.
You either use odom.launch or the static_transform_publisher.
If you use both, the tf base_link -> odom will oscillate between the identity pose and the cumulative pose from the wheel encoders. This will cause issues in tracking the robot pose.
The odometry information must be stable and evolve over time in a continuous way, without sudden jumps.

I did not tune drl for your rosbags.
I made the video with the configurations that I have used in the past for our mobile platforms, which are the defaults in dynamic_robot_localization_system.launch.

You can set the initial pose of the robot in its respective section of dynamic_robot_localization_system.launch, which will forward it to drl and pose_to_tf_publisher.
While drl is running, you can also set it in rviz using the 2D Pose Estimate arrow (rviz must be in the map frame and publishing it to /initialpose).
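
Equivalently, the initial pose can be published from the terminal (a sketch assuming drl subscribes to /initialpose just as rviz publishes it; the pose values are placeholders):

rostopic pub -1 /initialpose geometry_msgs/PoseWithCovarianceStamped "{header: {frame_id: map}, pose: {pose: {position: {x: 0.0, y: 0.0, z: 0.0}, orientation: {w: 1.0}}}}"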

Have a nice day :)

@WilliamWoo45
Author

WilliamWoo45 commented Jun 15, 2020

Hi Carlos:

Good Monday! Thanks for your patient explanations.
Sorry for the delayed reply; my lab was closed over the weekend due to COVID-19.

I've looked through your suggestions. I chose to use odom.launch instead of the static_transform_publisher. Hence, the commands were adjusted and run in the following order:

  1. roscore
  2. rosparam set /use_sim_time true
  3. roslaunch pose_to_tf_publisher odom.launch
  4. roslaunch dynamic_robot_localization dynamic_robot_localization_system.launch
  5. rosrun map_server map_server /opt/ros/kinetic/share/husky_navigation/maps/mymap.yaml
  6. rviz
  7. rosbag play --clock /home/williamwoo/Documents/husky2-pose-_2020-06-11-B4-test1-new.bag

However, I still cannot get localization results similar to yours. The laser scans move but sometimes oscillate. See below:
rviz1

rviz2

I'm still wondering what the possible reason could be for the different localization performance with the same rosbag on your side and on mine. I cannot figure this out. Am I leaving any configuration behind?
I also tried recording a new rosbag, but the localization performance shows no difference from the previous one.

Have a nice day! :)

Best Regards,
William

@carlosmccosta
Owner

Good night :)

First, confirm that you have the latest version of drl:

roscd dynamic_robot_localization
git pull
catkin build

Then, be careful how you start the launch files.
If you have the launch files in the Carlos_Dynamic_Robot_Localization folder, then you should:

cd Carlos_Dynamic_Robot_Localization
roslaunch odom.launch
roslaunch dynamic_robot_localization_system.launch

Otherwise you might not be running the launch files that you modified.
I retested on my laptop and it all seems to be working as expected, similar to the video I made previously.

The oscillations seen in the video I made (from 10 s to 15 s) were due to the laser field of view changing drastically: the wall in front of the robot temporarily became invisible to the lidar, and since the robot was seeing just the two parallel corridor walls, one degree of freedom became unconstrained by the sensor data, allowing the matching algorithms to slide the sensor data along the corridor. You can see that in the beginning of the video the oscillations did not happen, because the sensor data had several wall corners that constrained the 3 DoF pose of the robot, allowing stable matching. For minimizing the oscillations further, the map should have walls that are 1 pixel thick; otherwise the matching algorithms may oscillate the sensor data within the walls.

You can set ros_verbosity_level to DEBUG to see in the terminal what drl is doing and diagnose the problem, and if necessary also set pcl_verbosity_level to DEBUG.

You should know that drl comes preconfigured with timeouts for the registration algorithms.

If your PC is less powerful than mine, you may need to increase these timeouts to give the algorithms more time to converge to better solutions, and also tune the convergence thresholds; see the sketch below.
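
A hedged illustration of such tuning (the parameter names are hypothetical stand-ins; the authoritative names are in yaml/schema/drl_configs.yaml):

<!-- hypothetical parameter paths for the ICP tracking matcher -->
<param name="dynamic_robot_localization/tracking_matchers/convergence_time_limit_seconds" type="double" value="2.0"/>
<param name="dynamic_robot_localization/tracking_matchers/max_number_of_registration_iterations" type="int" value="250"/>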

Have a nice day :)

@WilliamWoo45
Author

WilliamWoo45 commented Jun 18, 2020

Good Morning Carlos!

First of all, sincerely thanks again for your detailed and patient explanations.

I followed and tried your suggestions. However, I still cannot obtain the ideal localization results on my laptop with an AMD FX-9830P RADEON R7 CPU and 32 GB of memory. I set both ros_verbosity_level and pcl_verbosity_level to DEBUG to diagnose the possible problem, but the DEBUG output looks quite ok without any serious warnings; screenshots follow:

debug1

debug2

I've also recorded the whole DEBUG process, which can be seen here

I think it is probably because my laptop is not powerful enough to run the DRL. Hence, I borrowed another laptop, installed DRL, and ran it. It works!!! :)

Now I have another question. I found that if the initial pose is not given, it is quite hard for the DRL to successfully localize the robot. I've recorded another two rosbags, namely test2 and test3. The ground-truth initial poses for these two rosbags are (44, 18.8, 0) and (1.5, 8, 0). With the initial pose, DRL can quickly and accurately localize the robot, as shown:
test2:
way13_with_initial

test3:
way19_with_initial

Without the initial pose, DRL fails to localize the robot in most cases (more than 90% fail on my laptop), as shown:
test2:
way13_no_initial

test3:
way19_no_initial

So I'm wondering: is it because the feature matching technique that DRL uses performs poorly in such repetitive and ambiguous environments? Could you run these two rosbags on your laptop without giving the initial pose and check the performance of DRL?

Thanks again and cheers!

Best Regards,
William

@carlosmccosta
Owner

Hello :)

I made some improvements to drl and its default configurations.
drl should now work better at feature matching for maps created using slam.

roscd dynamic_robot_localization
git pull
roscd pose_to_tf_publisher
git pull
catkin build

However, feature matching is more reliable for small maps with unique environment geometry.
Large and repetitive environments are not ideal for feature matching because they have similar feature descriptors scattered throughout the environment.
If you crop your map with gimp to the regions in which you have sensor data in the rosbags, you will see that drl's reliability in estimating the robot's initial pose increases significantly.

While testing the new rosbags and using rviz to set the initial pose, I noticed that you need to add to odom.launch the line below; otherwise the pose_to_tf_publisher that is sending the tf base_link -> odom will receive the initial pose from rviz and cause problems.

<arg name="pose_with_covariance_stamped_topic" default="" />

I also added the option to disable the discarding of older poses. This way, you can restart the rosbags without needing to restart drl and odom.launch.
For that you need to add:

<arg name="discard_older_poses" default="false" />

to the pose_to_tf_publisher includes (odom.launch and the one used by drl).

According to these benchmarks, my CPU is around 40% faster than yours.

But you can tune drl to use less CPU, for example:

  • Reducing the number of points that are used for alignment.
  • Reducing the search radius for the alignment algorithms.
  • Tuning the convergence parameters for the alignment algorithms to accept the point cloud registration sooner / with less accuracy.
  • Reducing the size of the map to only the regions in which the robot will be moving (you can either change the map directly with gimp or use a crop_box filter for the reference point cloud). You can see the point cloud used as reference in the topic /dynamic_robot_localization/reference_pointcloud (see the check below this list).
  • Updating the map so that the walls are 1 pixel thick. This is important to improve the stability and accuracy of the point cloud matching, but it will also speed it up, because there will be far fewer points in the map. The lower the number of points in the reference and sensor point clouds, the faster the algorithms run (requiring less CPU for tracking the robot pose). When I make maps using slam, I usually manually clean them up and improve them in gimp (examples for the tum dataset and our previous lab).
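
A quick way to check how many points the reference map contains (the topic name is from the list above; --noarr omits the raw data array so only the metadata, such as width and height, is printed):

rostopic echo --noarr -n 1 /dynamic_robot_localization/reference_pointcloud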

Have a nice day :)

@WilliamWoo45
Author

WilliamWoo45 commented Jun 22, 2020

Good Afternoon Carlos!

Thanks a lot for your step by step guidance and explanations.

Following your suggestions:

  1. I've done the git pull and updated drl and its configurations.
  2. I've added the lines <arg name="pose_with_covariance_stamped_topic" default="" /> and <arg name="discard_older_poses" default="false" /> to the corresponding launch files.
  3. Actually, the laptop that I borrowed has the same CPU as yours (Intel-Core-i7-6700HQ, 4 Cores, 8 Threads @2.6 GHz), so I no longer need to worry about the high CPU consumption problem. drl now works pretty nicely on the borrowed laptop.

As for the localization problem in repetitive environments, I agree that:

feature matching is more reliable for small maps with unique environment geometry. Large and repetitive environments are not ideal for feature matching.

However, I cannot crop the map with gimp to the regions in which I have sensor data in the rosbags, because in a real global localization case no one knows where exactly the robot is (which is what I suppose the global localization algorithm is for).
As another way to test drl, I did input the ground-truth initial pose and found that drl works pretty nicely at almost all the waypoints (i.e. where the robot starts the global localization) that I randomly selected, which demonstrates its excellent pose-tracking capability. But without an accurate initial pose, drl succeeds in only about 20~30% of the localization cases. This is also my motivation to develop a new global localization algorithm to cope with repetitive environments.

Thanks again!

Best Regards,
William

@carlosmccosta
Owner

Hello :)

I developed and tested drl initially for ship building environments within the FP7 Carlos project. In the project use cases the robot would be operating inside a large room.

In your use case, you have the robot moving inside long corridors of a large building, so a different approach to global localization will likely be required, which means you have the opportunity to research in this area :)

As a quick test, you could try other algorithms for global localization: for example, run amcl in the beginning until it converges to an acceptable solution and then send amcl's pose to drl as the initial pose for achieving better tracking accuracy. If for some reason drl loses tracking after a while, you could revert to amcl for finding a new initial pose, which would later be used by drl again. This approach would be a supervisor over the 2 localization systems (alternating between them); a minimal sketch follows.
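
A minimal handover sketch (assuming amcl's standard /amcl_pose output and the /initialpose topic discussed earlier; a real supervisor would watch amcl's covariance and drl's validator instead of being hand-triggered):

# capture amcl's current pose estimate, stripping the trailing '---' separator
rostopic echo -n 1 /amcl_pose | head -n -1 > /tmp/amcl_pose.yaml
# hand it to drl as the initial pose, then let drl take over the tracking
rostopic pub -1 /initialpose geometry_msgs/PoseWithCovarianceStamped -f /tmp/amcl_pose.yaml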

Have a nice day :)

@WilliamWoo45
Author

Hi Carlos.

Sincerely, thanks again for all your help and guidance!
The approach of combining amcl and drl seems doable. I'm willing to try it.

Take care and all the best :)

Best Regards,
William

@carlosmccosta
Owner

Hello :)

Let me know how drl performs when integrated with amcl or with your own algorithms.
It's always great to have feedback from the ros community.

All the best :)

@WilliamWoo45
Author

WilliamWoo45 commented Oct 12, 2020

Hi Carlos! @carlosmccosta

How have you been recently? I hope everything's fine.
These days I'm trying out other 2-D laser-based global localization methods apart from your DRL, AMCL, Google Cartographer, and MRPT localization. Do you have any recommendations for other open-source algorithms?

Thanks a lot :)

Best Regards,
William

@carlosmccosta
Owner

Hello :)

A few links as a starting point:

If you find packages that work well for global localization, please let me know :)

Have a nice day :)

@WilliamWoo45
Author

Dear Carlos,

Noted with thanks!
I'll let you know if I find one.

Thanks and take care!

Best Regards,
William

@poornimajd

@WilliamWoo45 and @carlosmccosta

So I'm wondering: is it because the feature matching technique that DRL uses performs poorly in such repetitive and ambiguous environments? Could you run these two rosbags on your laptop without giving the initial pose and check the performance of DRL?

Hello, I am also facing a similar issue with repetitive, ambiguous, and also featureless environments. I would be really grateful if, by any chance, you have come across any localization algorithm that performs well in such scenarios; if you could just provide a link, that would be nice. (The links mentioned earlier do not really fit my case.)
Thank you
