
Issue: Bug/Performance Issue [Physical Robot] - Wrong orientation for Grasp Pose on Sawyer Robot #112

Open
nalindas9 opened this issue Apr 2, 2020 · 7 comments


@nalindas9

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • Python version: 3.7.12
  • Installed using pip or ROS: pip
  • Camera: Kinect (modified with same intrinsic parameters mentioned in primesense.intr)
  • Gripper: Generic 2 finger gripper available in gazebo
  • Robot: Sawyer (by Rethink Robotics)
  • GPU model (if applicable): Nvidia Geforce GTX 1060

Describe what you are trying to do
I am trying to query the grasp pose obtained from the pretrained Dexnet 2.0 and plan to it with ROS MoveIt in the Gazebo simulator for the Sawyer robot. I have already successfully queried the grasp pose and planned the arm to reach the desired goal pose.
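
For reference, a minimal sketch of the planning step (assuming MoveIt's Python move_group interface; the "right_arm" group name and "base" frame are placeholders, and the pose values are the ones reported further below, so this is illustrative rather than my exact code):

    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import PoseStamped

    # Illustrative values: grasp pose in the robot base frame (see logs below).
    x, y, z = 0.4776, 0.0374, -0.1455
    qw, qx, qy, qz = 0.0176, -0.8249, 0.0104, 0.5650

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("dexnet_grasp_planner")
    group = moveit_commander.MoveGroupCommander("right_arm")  # placeholder group name

    goal = PoseStamped()
    goal.header.frame_id = "base"  # placeholder base frame
    goal.pose.position.x, goal.pose.position.y, goal.pose.position.z = x, y, z
    goal.pose.orientation.x = qx
    goal.pose.orientation.y = qy
    goal.pose.orientation.z = qz
    goal.pose.orientation.w = qw

    group.set_pose_target(goal)
    group.go(wait=True)  # plan and execute to the grasp pose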

Describe current behavior
The problem I am facing is that even though the end effector reaches the correct position, it does not reach the correct goal orientation.

Describe the expected behavior
The correct goal orientation is for the end effector to approach from the top and grasp the object. However, it is currently approaching parallel to the table rather than from the top.

Describe the input images
  • Color test image (screenshot attached)
  • Segmask input image (screenshot attached)
  • Dexnet output grasp image (screenshot attached)

Describe the physical camera and robot setup
(Gazebo screenshot of the camera and robot setup attached)

The Kinect camera is attached to the head.

Other info / logs
Grasp pose obtained from Pretrained Dexnet 2.0:
The action grasp pose obtained is:
Tra: [-0.04116296 -0.02320094  0.78583838]
Rot: [[ 0.          0.75538076 -0.65528613]
      [ 0.          0.65528613  0.75538076]
      [ 1.         -0.          0.        ]]
Qtn: [ 0.64328962 -0.29356169 -0.64328962 -0.29356169]
from grasp to primesense_overhead

Final grasp pose obtained in Robot base coordinates after all transformations:
Final T matrix:
 0.3614   -0.0370   -0.9317    0.4776
 0.0027   -0.9992    0.0407    0.0374
-0.9324   -0.0172   -0.3610   -0.1455
 0         0         0         1.0000

Quaternion: 0.0176 -0.8249 0.0104 0.5650 (W,X,Y,Z)
Final Comments
As you can see from the pose above in robot coordinates, the translation [0.4776, 0.0374, -0.1455] (X, Y, Z in meters, from the base link of the robot) is correct.
The end effector reached this goal pose accurately.
However, the obtained rotation is not correct. For the end effector to approach the object from the top, the orientation should be similar to [0 1 0 0] (w,x,y,z). However, this is not how the end effector is approaching the object.
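
As a quick sanity check (assuming the base Z-axis points up and the gripper Z-axis points out of the gripper), a quaternion of [0 1 0 0] (w,x,y,z) is a 180-degree rotation about X, which points the gripper Z straight down at the table:

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    # scipy expects (x, y, z, w); [0 1 0 0] in (w, x, y, z) order is (1, 0, 0, 0) here.
    top_down = R.from_quat([1, 0, 0, 0])
    print(top_down.as_matrix())
    # [[ 1.  0.  0.]
    #  [ 0. -1.  0.]
    #  [ 0.  0. -1.]]
    # The gripper +Z axis maps to base -Z, i.e. pointing down toward the table.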

I suspect this could be because of how the end effector frame is defined. I am defining the end effector frame as follows:
(screenshot of the end effector frame definition attached)

where the x, y, z axes are as follows (diagram attached):

Could you let me know how you are defining the end effector frame on the ABB YuMi robot you are using for your setup?

Also, can you let me know if you think there is some other reason for this?
Thank you!

@nalindas9 nalindas9 changed the title Issue: Bug/Performance Issue [Physical Robot] Issue: Bug/Performance Issue [Physical Robot] - Wrong orientation for Grasp Pose on Sawyer Robot Apr 2, 2020
@visatish
Collaborator

visatish commented Apr 3, 2020

Hi @nalindas9,

First and foremost, thanks for providing thorough information about your setup - you don't know how many times we have to go back and forth asking for more information haha. It really makes it easier.

I believe the issue is related to a discrepancy in conventions (possibly the end effector frame); however, I'm not entirely sure what the issue might be since I haven't worked as much with this part of the system. I'm cc'ing @jeffmahler who I think can be of more help.

By the way, do note that the network you are using was trained a bit differently from the current setup (i.e. camera was overhead and not at an angle, gripper was different, not sure if the height to the workspace is the same - ours was 60cm); however, it does seem like you are still getting reasonable grasps so I wouldn't be too concerned. Just want to put this disclaimer out there.

Thanks,
Vishal

@nalindas9
Author

Hi @visatish,

Thanks for your prompt reply. Yes, I also believe that there is a discrepancy in frame assignment for the end effector which is causing this problem. @jeffmahler, can you verify this? If so, it would be helpful for everyone using Dexnet if you could give the end-effector frame assignment which you used for the Yumi robot.

PS: I even tried it with other camera configurations, e.g. with the camera overhead at 55 cm, 63 cm, and 70 cm. In each case, the resulting grasp orientation was the same, and the grasp translation was almost the same across all three configurations. Yes, I am getting reasonable grasps with Q >= 0.50 most of the time.

Thanks,
Nalin

@nalindas9
Author

Hi @visatish, @jeffmahler,

It would be helpful if you could let me know the end-effector frame assignment asap. Do let me know if I can provide any further information about my setup.
Thanks.

@jeffmahler
Contributor

@nalindas9 Very sorry for the delay. I am not on here very often and I just saw your LinkedIn message.

I believe the issue is, as you suspect, due to the difference between the YuMi gripper frame and the Sawyer gripper frame. I am not 100% sure what the correct transformation is based on this information, although I suspect you need to right-multiply your rotation matrix by

[[0,0,1],
 [1,0,0],
 [0,1,0]]

If this is not correct, I recommend visualizing the grasp pose in 3D coordinates in RViz / Gazebo and pasting the image here. There is some permutation of the coordinate axes that needs to take place.
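
A quick numpy sketch of that suggestion, using the rotation block of the final grasp pose posted above (R_grasp is just an illustrative name):

    import numpy as np

    # 3x3 rotation block of the final grasp pose in robot base coordinates (from above).
    R_grasp = np.array([[ 0.3614, -0.0370, -0.9317],
                        [ 0.0027, -0.9992,  0.0407],
                        [-0.9324, -0.0172, -0.3610]])

    # Suggested right-multiply: permutes the gripper axes
    # (new x = old y, new y = old z, new z = old x).
    P = np.array([[0, 0, 1],
                  [1, 0, 0],
                  [0, 1, 0]])

    R_corrected = R_grasp @ P
    print(R_corrected)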

@nalindas9
Author

nalindas9 commented Apr 17, 2020

@jeffmahler
Hi Jeff,

Thanks for your response! I right-multiplied my final grasp pose (obtained after all transformations) by the rotation matrix you suggested. Here is what I obtained:

Grasp from Dexnet 2.0

(image attached)

Final grasp pose without multiplying by the rotation matrix

Pose obtained:

    0.3614   -0.8605   -0.3590    0.4231
   -0.0022   -0.3858    0.9226    0.1043
   -0.9324   -0.3327   -0.1413   -0.1567
         0         0         0    1.0000

Transformation: tcb*inv(trot)*toc

tcb --> transformation from robot base frame to kinect frame
inv(trot) --> transformation from kinect frame (in Rviz) to camera coordinate frame of Dexnet grasp image
toc --> final grasp pose obtained from Dexnet 2.0
(RViz and Gazebo screenshots attached)

Final grasp pose after multiplying by the rotation matrix

Pose obtained:

    -0.8605   -0.3590    0.3614    0.4231
   -0.3858    0.9226   -0.0022    0.1043
   -0.3327   -0.1413   -0.9324   -0.1567
         0         0         0    1.0000

Transformation: tcb*inv(trot)*toc*tnewrot

tnewrot -->

[[0,0,1],
 [1,0,0],
 [0,1,0]]
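
For reference, the same chain in numpy (tcb, trot, and toc stand for the 4x4 homogeneous transforms described above; here they are placeholders to be filled in with the actual values):

    import numpy as np

    # Placeholders: replace with the actual 4x4 homogeneous transforms.
    tcb = np.eye(4)    # robot base frame to kinect frame
    trot = np.eye(4)   # kinect frame (in RViz) to Dexnet camera coordinate frame
    toc = np.eye(4)    # final grasp pose obtained from Dexnet 2.0

    # The suggested 3x3 correction, padded to a 4x4 homogeneous transform.
    tnewrot = np.eye(4)
    tnewrot[:3, :3] = np.array([[0, 0, 1],
                                [1, 0, 0],
                                [0, 1, 0]])

    # Corrected grasp pose in robot base coordinates: tcb * inv(trot) * toc * tnewrot
    t_final = tcb @ np.linalg.inv(trot) @ toc @ tnewrot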

(RViz and Gazebo screenshots attached)

Comments

As you can see, the gripper now approaches from the top rather than parallel to the table. However, the orientation of the Sawyer gripper about the Z-axis is still a little off from the pose obtained from Dexnet 2.0.
I believe some further permutation is needed here to account for the difference in frame conventions between the YuMi and the Sawyer grippers.

Can you let me know what it could be? It would also be useful if you could let me know the frame convention you used for the YuMi gripper.

Thanks for your help!

@jeffmahler
Contributor

Ah yes, so there is a 90 degree rotation about the z-axis needed. I think the correct right-multiply matrix is:

[[0,0,1],
 [0,1,0],
 [-1,0,0]]
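
For what it's worth, a quick numpy check confirms that this matrix is just the earlier permutation with an extra 90-degree rotation about the gripper z-axis folded in (the sign of the angle depends on convention):

    import numpy as np

    P_old = np.array([[0, 0, 1],
                      [1, 0, 0],
                      [0, 1, 0]])
    P_new = np.array([[ 0, 0, 1],
                      [ 0, 1, 0],
                      [-1, 0, 0]])

    # 90-degree rotation about z (negative angle in the right-handed convention).
    theta = np.deg2rad(-90)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])

    print(np.allclose(P_old @ Rz, P_new))  # True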

@ShrutheeshIR

Hello @nalindas9.
I am working on something similar, and I have a question about implementing the same thing in the Gazebo simulator using ROS. Could you give me a brief overview of how to go about it? I have the robotic arm model files. Are you using MoveIt! to configure the planning? Also, if you are using MoveIt!, have you just specified a few poses for the end effector (the parallel jaw in this case), or are you using their other packages like 'Pick and Place'/'Grasping'?
Thanks!
