
Navigation delay during multi-robot planning with different paths #252

Open
Vishal24rg opened this issue Oct 12, 2022 · 13 comments

@Vishal24rg

Hi Team,

Video.mp4

We have two delivery robots, delivery1 and delivery2. We issued dispatch_go_to_place commands to both robots simultaneously and observed that they planned entirely different paths to their target locations, as shown in the video.
The issue is that delivery2 (the robot on the left-hand side) does not move and keeps waiting until delivery1 (the robot on the right-hand side) has reached its target position.

delivery2 started moving only after delivery1 reached its target location, even though their paths are different.
This behavior hurts productivity and adds unnecessary delay to robot movement even though the path is free.
Is there any way to avoid such a delay?
Please find the attached video for reference.
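For anyone trying to reproduce this scenario from a script, below is a minimal sketch that publishes two near-simultaneous requests through RMF's task API topic. The fleet, robot, and place names are placeholders, and the request payload (a single-place patrol used as a stand-in for go_to_place) may need adjusting for your RMF version.

#!/usr/bin/env python3
# Sketch only: dispatch two go_to_place-style requests back to back by
# publishing to RMF's task API topic. The JSON schema below (a one-place
# "patrol" standing in for go_to_place) and the fleet/robot/place names are
# assumptions and may differ across RMF versions.
import json
import uuid

import rclpy
from rclpy.node import Node
from rclpy.qos import QoSDurabilityPolicy, QoSProfile, QoSReliabilityPolicy
from rmf_task_msgs.msg import ApiRequest


def make_request(fleet: str, robot: str, place: str) -> ApiRequest:
    msg = ApiRequest()
    msg.request_id = f'direct_{uuid.uuid4()}'
    msg.json_msg = json.dumps({
        'type': 'robot_task_request',  # direct the task at a specific robot
        'fleet': fleet,
        'robot': robot,
        'request': {
            'category': 'patrol',
            'description': {'places': [place], 'rounds': 1},
        },
    })
    return msg


def main():
    rclpy.init()
    node = Node('simultaneous_dispatch_test')
    qos = QoSProfile(
        depth=10,
        reliability=QoSReliabilityPolicy.RELIABLE,
        durability=QoSDurabilityPolicy.TRANSIENT_LOCAL,
    )
    pub = node.create_publisher(ApiRequest, 'task_api_requests', qos)

    # Send both requests back to back so the fleet adapter plans them together.
    pub.publish(make_request('deliveryFleet', 'delivery1', 'target_1'))
    pub.publish(make_request('deliveryFleet', 'delivery2', 'target_2'))

    rclpy.spin_once(node, timeout_sec=1.0)  # give the middleware time to deliver
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()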

@mxgrey
Contributor

mxgrey commented Oct 12, 2022

Thanks for reporting this issue. Are you able to share your map package and a set of launch files that can recreate this issue so we can debug it on our end?

@ghost

ghost commented Jun 28, 2023

Hi @mxgrey
I have also run into this issue in my tests. Please see the following two test cases.

  • Test case WITH the delay issue:

robot1 spawns at P13, robot2 spawns at P10

request robot2 to go to P8

after 1 second, request robot1 to go to P11

As you can see, robot1 did not move and kept waiting until robot2 had arrived at P8, even though there is no overlap between the two paths.
Could you please explain why there is such a delay and how to avoid it?

delay.mp4
  • Test case WITHOUT the delay issue:

robot1 spawns at P13, robot2 spawns at P9

request robot2 to go to P8

after 1 second, request robot1 to go to P11

The result is as expected: robot1 began to move immediately.
https://github.com/open-rmf/rmf/assets/118786311/bdfa1711-926d-43d9-8be5-815a710b428d

The only difference between the two test cases is the initial position of robot2.

@mxgrey
Contributor

mxgrey commented Jun 28, 2023

Could you please explain why there is such a delay and how to avoid it?

I've seen this happen before, but I haven't tracked down a cause because I've found it difficult to reproduce. The intended behavior is for robot1 to start moving once robot2 reaches P9, but it seems that dependency is not being calculated correctly. Is this being done in simulation and does it happen consistently? If so, you could share the launch files. I should be able to debug and fix this behavior if I can just reproduce it consistently.

That being said, the next generation of RMF will be migrating to a more robust approach to traffic dependency management so the issue should be solved either way eventually.
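To make the intended behavior above concrete, here is a simplified, conceptual model of a plan that carries a dependency on another robot's progress. This is not the rmf_traffic API, only an illustration of why robot1 should be released as soon as robot2 clears P9, while the reported bug behaves as if the dependency pointed at robot2's final goal P8.

# Conceptual sketch only, not the rmf_traffic API: a plan holds dependencies
# of the form "wait until <robot> has cleared <checkpoint>".
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Dependency:
    on_robot: str    # the robot being waited on
    checkpoint: str  # the waypoint that robot must clear first


@dataclass
class PlannedPath:
    robot: str
    waypoints: list
    dependencies: list = field(default_factory=list)

    def can_start(self, progress: dict) -> bool:
        """progress maps robot name -> waypoints that robot has already cleared."""
        return all(
            dep.checkpoint in progress.get(dep.on_robot, [])
            for dep in self.dependencies
        )


# Intended behavior: robot1 only needs robot2 to clear P9.
intended = PlannedPath('robot1', ['P13', 'P11'],
                       [Dependency('robot2', 'P9')])

# Observed bug: the dependency effectively points at robot2's goal, P8.
buggy = PlannedPath('robot1', ['P13', 'P11'],
                    [Dependency('robot2', 'P8')])

# robot2 has left P10 and cleared P9 but has not yet arrived at P8.
progress = {'robot2': ['P10', 'P9']}
print(intended.can_start(progress))  # True  -> robot1 should already be moving
print(buggy.can_start(progress))     # False -> the delay seen in the video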

@ghost

ghost commented Jun 29, 2023

Is this being done in simulation and does it happen consistently?

This is being done with my own fleet adapter, not rmf_demos, so I don't think my launch file would help you investigate this issue. I think a better way would be for me to reproduce the issue and save the log for you to debug.
Could you please tell me how to enable debug-level logging in RMF core?

I ran this case multiple times. Sometimes robot1 starts to move when robot2 reaches P9, and sometimes only when robot2 reaches P8. The unexpected behavior happens very frequently, so I can reproduce this issue easily.

@mxgrey
Contributor

mxgrey commented Jun 29, 2023

I think a better way would be for me to reproduce the issue and save the log for you to debug.

The log won't provide enough information to debug the internals. We're going to improve debug logs in the next generation.

If you're able to provide the .building.yaml I can try to reproduce it in simulation.

@ghost

ghost commented Jun 29, 2023

@mxgrey Sure, you can get test.building.yaml from the following link.
I had to change the file extension from .yaml to .txt to attach it.
test.building.txt

@ghost

ghost commented Jul 4, 2023

@mxgrey
Hi, Grey
Have you found any clues? Do you need more information about the test case?

Thanks
Stella

@mxgrey
Contributor

mxgrey commented Jul 19, 2023

Apologies for the delay on this.

I did manage to reproduce the problem and I found a solution. I've incorporated the solution into this PR (which is the main thing I've been working on this past month). Once that PR is merged, simply switching to that branch should fix the problem.

Additionally, with that PR you can consider switching to the new EasyFullControl Python API, which should both drastically simplify your fleet adapter code and fix some known pain points that can cause misbehavior. You shouldn't have to switch to this new API to benefit from the traffic dependency fix, though.
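For reference, here is a rough sketch of what adopting the EasyFullControl Python API looks like. The module and function names below are approximate (written from memory) and may differ between rmf_ros2 versions; treat them as placeholders and use the rmf_demos fleet adapter as the authoritative example.

# Approximate sketch of an EasyFullControl-based fleet adapter; the names here
# are assumptions and may not match your rmf_ros2 version exactly.
import rmf_adapter
from rmf_adapter import easy_full_control as rmf_easy

# Fleet configuration and nav graph, e.g. as exported for the demos.
fleet_config = rmf_easy.FleetConfiguration.from_config_files(
    'fleet_config.yaml', 'nav_graph.yaml')

rmf_adapter.init_rclcpp()
adapter = rmf_adapter.Adapter.make('delivery_fleet_adapter')
fleet_handle = adapter.add_easy_fleet(fleet_config)

# Each robot is then registered on fleet_handle with callbacks that bridge
# navigation, stop, and action requests to the robot's own API, after which
# the adapter is started and the process is kept alive (e.g. by spinning a
# rclpy node alongside it).
adapter.start()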

@ghost

ghost commented Jul 20, 2023

@mxgrey This is absolutely great news for me! Thank you very much!
I will test it ASAP and give feedback here.

@ghost

ghost commented Jul 20, 2023

@mxgrey Could you please tell me the merge plan for this fix? I can hardly wait to test it. :)
Thanks

@mxgrey
Contributor

mxgrey commented Jul 24, 2023

We're adding one more small piece to the API for more configuration options, and then we're going to continue testing various use cases for a while. I expect it will be merged into main and released to the rolling distro of the ROS buildfarm by August 4.

In the meantime, if you're building RMF from source, you can go into src/rmf/rmf_ros2 in your colcon workspace and run

$ git fetch
$ git checkout reactive_easy_full_control

to get the relevant code. Then go back to the root of your colcon workspace and build as normal.

@stella-ccyydy

I noticed that a new release was delivered on August 10th.
I checked the changelogs, and it seems this release did not include the code change for this bug, right?
If so, could you please tell me the plan for merging the fix into the main branch?
I did try to get the code the way you mentioned above, but I always ran into compile errors.

@mxgrey
Contributor

mxgrey commented Aug 21, 2023

A bit more testing was needed before the last sync, but the changes are merged into main now. We have another release scheduled for this Friday, so the binaries should be available after that.

You can try updating to all the latest main branches; everything should compile.
