
Announcement: Introducing the RealSense Tracking Camera T265 #3129

Closed
MartyG-RealSense opened this issue Jan 23, 2019 · 153 comments
Labels
announcement tracking 6-DOF tracking, SLAM and T26x series

Comments

@MartyG-RealSense
Collaborator

Hi everyone,

A new RealSense camera product called the RealSense Tracking Camera T265 is now listed for pre-order on Intel's online Click store, with pre-orders due to ship in the week of March 4, 2019.

"Intel RealSense Tracking Camera T265 is a new class of stand-alone Simultaneous Localization and Mapping device, for use in robotics, drones and more".

https://click.intel.com/order-intel-realsense-tracking-camera-t265.html

@barnjamin
Contributor

Thanks. The specifications table is a little light on details. Any chance of some numbers like FPS and depth accuracy?

@dorodnic dorodnic added announcement tracking 6-DOF tracking, SLAM and T26x series labels Jan 23, 2019
@dorodnic
Contributor

dorodnic commented Jan 23, 2019

Thanks @MartyG-RealSense

Also, I'll try to answer some of the questions here.

FAQ

  1. Is this a Depth-Camera?
    No, the T265 does not offer any Depth information. It can provide IMU (Accel and Gyro), two monochrome fisheye streams, and 6-DOF pose data.

  2. Can I use it together with a Depth-Camera?
    Yes, this device can be used with RealSense (and most likely any 3rd-party) depth cameras. It is passive and comes with an IR-cut filter.

  3. What platforms are going to be supported?
    At launch the T265 will be supported via librealsense on Windows and Linux, and via ros-realsense.
    macOS and Android support are planned but not yet scheduled, as is OpenVR integration.

  4. What are the system requirements?
    Any board with USB2 capable of polling 6-DOF packets 260 times a second should be sufficient. The device was validated on the Intel NUC platform.

  5. Will the same SLAM be available for the D435i as a software package?
    We are still looking into it, please stay tuned.
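
For reference, a minimal sketch of how the 6-DOF pose stream can be polled through the librealsense C++ API (assuming a 2.x release with T265 support; follows the rs-pose example pattern):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_POSE, RS2_FORMAT_6DOF);  // 6-DOF pose packets
    pipe.start(cfg);

    while (true)
    {
        rs2::frameset frames = pipe.wait_for_frames();
        rs2_pose pose = frames.get_pose_frame().get_pose_data();
        std::cout << "x=" << pose.translation.x
                  << " y=" << pose.translation.y
                  << " z=" << pose.translation.z << "    \r";
    }
}
```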

@dorodnic dorodnic pinned this issue Jan 23, 2019
@sam598

sam598 commented Jan 23, 2019

@dorodnic Will the pose timestamps be able to sync with realsense depth cameras? Or will there be some method of synchronizing the pose data with other hardware?

@dorodnic
Contributor

@sam598 - As far as I know there is no hardware-sync, but you could query T265 and D400 clocks with ~2-4ms accuracy each.
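
Since there is no hardware sync, any alignment has to be done in software. A hedged sketch of one way to compare frames from the two devices on the host, using the per-frame time-of-arrival metadata (the serial numbers are placeholders; assumes librealsense 2.x):

```cpp
#include <librealsense2/rs.hpp>

// Helper: host arrival time of a frame in milliseconds (falls back to the frame timestamp).
double host_arrival_ms(const rs2::frame& f)
{
    if (f.supports_frame_metadata(RS2_FRAME_METADATA_TIME_OF_ARRIVAL))
        return static_cast<double>(f.get_frame_metadata(RS2_FRAME_METADATA_TIME_OF_ARRIVAL));
    return f.get_timestamp();
}

int main()
{
    rs2::pipeline t265_pipe, d400_pipe;
    rs2::config t265_cfg, d400_cfg;
    t265_cfg.enable_device("T265_SERIAL");                     // placeholder serial number
    t265_cfg.enable_stream(RS2_STREAM_POSE, RS2_FORMAT_6DOF);
    d400_cfg.enable_device("D400_SERIAL");                     // placeholder serial number
    d400_cfg.enable_stream(RS2_STREAM_DEPTH);
    t265_pipe.start(t265_cfg);
    d400_pipe.start(d400_cfg);

    rs2::frameset tf = t265_pipe.wait_for_frames();
    rs2::frameset df = d400_pipe.wait_for_frames();

    // Offset between the most recent pose and depth frames, as seen from the host clock.
    double dt_ms = host_arrival_ms(tf.get_pose_frame()) - host_arrival_ms(df.get_depth_frame());
    (void)dt_ms;   // use this to pair frames that are close in time
}
```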

@dorodnic
Contributor

dorodnic commented Jan 23, 2019

Fairly short (24sec) recording of T265 data:


Accelerometer and Gyro + Dual Fisheye + POSE data


3D View showing device trajectory

Download sample *.bag file (1GB) and open with Intel RealSense Viewer 2.18+
(click Add Source > From File... to load the ROS-bag recording)
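
The recording can also be replayed programmatically instead of in the Viewer. A minimal sketch (the file name is a placeholder for the downloaded bag):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_device_from_file("t265_sample.bag");   // placeholder path to the downloaded bag
    pipe.start(cfg);

    for (int i = 0; i < 300; ++i)                      // read some frames from the recording
    {
        rs2::frameset frames = pipe.wait_for_frames();
        if (rs2::pose_frame pose = frames.get_pose_frame())
        {
            rs2_pose data = pose.get_pose_data();      // trajectory sample from the recording
            (void)data;
        }
    }
}
```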

@JBBee
Contributor

JBBee commented Jan 23, 2019

This product looks amazing. Does it provide any support for persistence? Ideally, a way to fetch and save the map on a host and then restore it and re-localize to that map?

Thanks.

@dorodnic
Contributor

dorodnic commented Jan 23, 2019

@JBBee - right now save/load_relocalization_map is still not part of librealsense API, but it is part of device capabilities and we are planning to support it at product launch next month.

@nominator

@dorodnic

@JBBee - right now save/load_relocalization_map is still not part of librealsense API, but it is part of device capabilities and we are planning to support it at product launch next month.

I would give this +100 likes if I could. I have been waiting so long for a proper tracking solution from RealSense. After reading the initial info on the T265, the first question in my mind was whether it would allow persistence. AND IT WILL :) so cool!!

Other nice-to-have features could be:
1- Occlusion (virtual objects occluded by physical geometry) in AR
2- Plane detection (walls, floors, angled surfaces) for placing virtual objects
3- Ray-casts to detect touch targets in 3D space, etc.

Thanks

@dorodnic
Contributor

Appreciate the feedback.
Since this device is based on the Myriad VPU, we might be able to add more CV functionality in the future.
However, right now we are focusing on the V-SLAM aspect.

@nominator

@dorodnic That will be awesome. Until then, is it possible for 3rd-party developers to use the raw data streams from the camera to implement these CV features on the host? I just want to make sure there is nothing in the SDK or the device that would prevent it.

@dorodnic
Contributor

@nominator - the camera can provide:

  1. Standalone 6-dof
  2. Accel & Gyro
  3. Dual fisheye monochrome (8-bit) streams @ 848x800 30FPS

You can ask for any or all of these.
When connected via USB3 you should be able to reliably get all of them together; when connected via USB2 we cannot guarantee that no fisheye frames will be dropped due to bandwidth.

We can publish additional sample data (*.bag) if you'd like to take a closer look at the streams (auto-exposure performance, IMU stability, etc...)
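
A hedged configuration sketch for requesting all three stream groups at once through the librealsense C++ API (stream names as in a 2.x release with T265 support; fisheye indices 1 and 2 are the left and right imagers):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_POSE, RS2_FORMAT_6DOF);   // 6-DOF pose
    cfg.enable_stream(RS2_STREAM_ACCEL);                   // accelerometer
    cfg.enable_stream(RS2_STREAM_GYRO);                    // gyroscope
    cfg.enable_stream(RS2_STREAM_FISHEYE, 1);              // left fisheye
    cfg.enable_stream(RS2_STREAM_FISHEYE, 2);              // right fisheye
    pipe.start(cfg);

    while (true)
    {
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::video_frame left  = frames.get_fisheye_frame(1);
        rs2::video_frame right = frames.get_fisheye_frame(2);
        rs2::pose_frame  pose  = frames.get_pose_frame();
        (void)left; (void)right; (void)pose;

        // Accel/gyro samples arrive as motion frames; fetch them by stream when present.
        if (auto gyro = frames.first_or_default(RS2_STREAM_GYRO))
        {
            rs2_vector w = gyro.as<rs2::motion_frame>().get_motion_data();
            (void)w;
        }
    }
}
```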

@HippoEug

HippoEug commented Jan 24, 2019

I'm just curious about some basic questions, since I am new to 3D imaging:

  1. Although this T265 comes with the SLAM algorithm which helps in tracking and robotic vision, it does not offer any depth data to the user. So what is the purpose of the D400 family now? They do provide the depth data, but without any proprietary SLAM algorithm wouldn't it be difficult to do reconstruction of 3D objects?

  2. I am also curious about the major hardware/SDK differences between what the T265 offers vs. something like the D435i. What additional hardware would the D435i need to include the features offered on the T265?

Edit: Just saw this: "The T265 complements Intel’s RealSense D400 series cameras, and the data from both devices can be combined for advanced applications like occupancy mapping, improved 3D scanning and advanced navigation and collision avoidance in GPS-restricted environments."

Will there be a camera like the T265 in the future that also provides depth data to the user like the D435i?

@nominator

@dorodnic That would be great!! Please do, so that until the camera ships we can start playing around with the data and think about how best to use it to add the features required for AR applications. We are currently doing an AR MVP for a client, and being able to show them advanced capabilities enabled by RealSense will be awesome.

@mohammedari

Cooool, a product I have waited a long time for!

Does the T265 have the capability of real-time sharing of a relocalization map between two devices over Ethernet?
I would like to put a T265 on each of my drones and try to share their positions and orientations for a collaborative task.

@sirisian

sirisian commented Jan 25, 2019

Does the T265 have the capability of real-time sharing of a relocalization map between two devices over Ethernet?

I just pre-ordered one of these to play with and might buy more. This is exactly my question, and I think a lot of people will want/need this. Basically, allow 2+ sensors to work from the same shared internal map so they move in the same space. There are two use cases: one where two or more sensors are placed statically relative to each other and move together, increasing their accuracy, and the other where they move independently in the same space.

My high-level plan is to track multiple controllers (and maybe HMDs?), turning them into inside-out trackers. I'd need an API feature that allows the sensors to synchronize/merge their internal maps wirelessly. (So this isn't a case where two sensors are attached to the same computer.)

@dorodnic The image you linked shows 30 fps for the video. That's not the sampling rate, is it?

@dorodnic
Contributor

Hi @sirisian @mohammedari
Sorry for the delay, I wanted to make sure to consult with my colleagues before giving an answer.
First, regarding the sampling rate - core pose data is available at 260 Hz. Fish-eye streams are at 30 FPS.
Regarding multiple devices, what you are asking for is not part of the device's built-in capabilities, but rather a higher-level use case. The following should work: map the environment using one device, save the map to a file, send the file over the network, and load it onto the second device. The second device should relocalize relative to the map, bringing the two devices into the same coordinate space.
Please note that the devices themselves do not have any on-board connectivity, nor will the networking code be provided in the SDK; it is too usage-specific.
Keeping the devices synchronized is also up to you. There is no API (at least for the moment) to merge two maps; you can only fetch/store a map.
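
For the map-transfer flow described above, the relevant calls that later appeared in the librealsense C++ API are rs2::pose_sensor::export_localization_map and import_localization_map. A hedged sketch, leaving out the actual file/network transfer between hosts:

```cpp
#include <librealsense2/rs.hpp>
#include <vector>

// Device A: map the environment, then export the map (the pipeline must be stopped first).
std::vector<uint8_t> export_map(rs2::pipeline& pipe)
{
    rs2::pose_sensor tm_sensor =
        pipe.get_active_profile().get_device().first<rs2::pose_sensor>();
    pipe.stop();
    return tm_sensor.export_localization_map();
}

// Device B: import the map before starting, so it relocalizes into the same coordinate space.
void start_with_map(rs2::pipeline& pipe, rs2::config& cfg, const std::vector<uint8_t>& map_bytes)
{
    rs2::pose_sensor tm_sensor = cfg.resolve(pipe).get_device().first<rs2::pose_sensor>();
    tm_sensor.import_localization_map(map_bytes);
    pipe.start(cfg);
}
```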

@peci1

peci1 commented Jan 26, 2019

@dorodnic If you could also create a bag file showing performance under the influence of vibrations, it'd be great. I mean vibrations from a drone, and vibrations from some kind of ground mobile robot/vehicle....

@dorodnic
Contributor

Hi @peci1
We will try to capture one, but I don't have anything prepared.
You are raising a good point. This is part of the reason why wheel odometry can be added as an additional input, and is recommended for robot scenarios.
Handheld and HMD-based use cases are easier in that regard, but we also did a couple of test flights with the device to make sure the drone use case can be supported. This is of course somewhat anecdotal evidence, so we will try to perform more evaluations and publish more data to help you make the right call.

@delmottea

The video stream is 30 FPS, but is 30 FPS also what is used internally for the computation? I guess the accelerometer provides the 260 Hz data and there is some interpolation between the processed images, but 30 FPS seems a little low for fast movement (even though, if it is global shutter with a low exposure time and hardware sync, it may be enough).

What is the exposure time? Is it possible to modify it?

Is there a timestamp for the video stream that we can match to the position?

I saw it's not possible to add additional processing on top of the 6-DOF tracking, but is it possible to completely replace the tracking with another task, as one would on a normal Movidius compute stick, and use it as a standard stereo camera with processing capability? I'm thinking of tracking or object detection that could be done on-camera, streaming only the detections instead of the full image.

Is it possible to synchronize the clocks of multiple cameras together, like the D400 series with a sync cable?

@dorodnic
Contributor

Hi @delmottea -

Is it possible to synchronize the clocks of multiple cameras together, like the D400 series with a sync cable?

There is no external sync connector on this device.

What is the exposure time? Is it possible to modify it?

The T265 performs auto-exposure on board. Actual exposure and gain are reported with every frame via metadata.

If you are implementing your own 6-DOF algorithm (and not enabling the POSE stream), you can also set a manual exposure value (this is not yet in the LRS API, but should be possible according to the device specifications).

The video stream is 30 FPS, but is 30 FPS also what is used internally for the computation? I guess the accelerometer provides the 260 Hz data and there is some interpolation between the processed images, but 30 FPS seems a little low for fast movement

IMU rates are 200 Hz for Gyro and 62 Hz for Accel. Every 33ms the device corrects IMU drift using visual data from fisheye cameras.

Is there a timestamp for the video stream that we can match to the position?

Yes, there are hardware timestamps on all streams, and a method to query the device's current time (the latter is not yet exposed in LRS, but will be).
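
A small hedged sketch of reading that per-frame metadata and the hardware timestamp from a fisheye frame (assumes a frameset obtained from wait_for_frames() on a librealsense 2.x pipeline, and that the fisheye frames expose the standard exposure/gain metadata fields):

```cpp
#include <librealsense2/rs.hpp>

// Read exposure/gain metadata and the hardware timestamp from the left fisheye frame.
void inspect_fisheye(const rs2::frameset& fs)
{
    rs2::video_frame fe = fs.get_fisheye_frame(1);   // left fisheye imager
    if (!fe) return;

    if (fe.supports_frame_metadata(RS2_FRAME_METADATA_ACTUAL_EXPOSURE))
    {
        long long exposure = fe.get_frame_metadata(RS2_FRAME_METADATA_ACTUAL_EXPOSURE);
        long long gain     = fe.get_frame_metadata(RS2_FRAME_METADATA_GAIN_LEVEL);
        (void)exposure; (void)gain;
    }

    double ts = fe.get_timestamp();                              // timestamp in milliseconds
    rs2_timestamp_domain dom = fe.get_frame_timestamp_domain();  // which clock it refers to
    (void)ts; (void)dom;
}
```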

I saw it's not possible to add additional processing on top of the 6-DOF tracking, but is it possible to completely replace the tracking with another task, as one would on a normal Movidius compute stick, and use it as a standard stereo camera with processing capability? I'm thinking of tracking or object detection that could be done on-camera, streaming only the detections instead of the full image.

Unlike the NCS, the T265 is not a general-purpose compute device, meaning we do not provide an SDK for development of custom firmware on top of its sensors, at least not at the moment.

This device was designed and optimized for the problem of inside-out tracking. Low-power plug-and-play V-SLAM is a significant part of its value proposition. For general-purpose CV, an NCS + standard webcam would probably make more sense.

@nominator

@dorodnic Since the device is capable of saving and loading maps, is there going to be a limit on the area size that can be tracked and persisted using the T265? I mean, to relocalize, the T265 will need to search through its saved map for matching features. If the maps are too large to cache in the onboard memory, would it then stream them from the connected host computer? Similarly, while tracking and building the map, can the host receive the map data as a stream to save it during tracking?

@ev-mp ev-mp unpinned this issue Feb 2, 2019
@ev-mp ev-mp pinned this issue Feb 2, 2019
@r9112345

r9112345 commented May 8, 2019

Hi, I am doing research on visual-inertial odometry and the raw data from the T265 seems very useful. In the comments above I saw "Every 33 ms the device corrects IMU drift using visual data from the fisheye cameras." Is it possible to get raw IMU data without drift correction?

@r9112345

@MartyG-RealSense Thanks for the response! You said in #3987 "If you are implementing your own 6-DOF algorithm (and not enabling the POSE stream)". Does this mean I will get raw IMU data (without being corrupted by drift correction) after disabling POSE stream?

@MartyG-RealSense
Collaborator Author

@r9112345 You're very welcome! :) I will refer your question onward to @dorodnic who was the source of that quote.

Additionally, Intel's Phillip Schmidt, who is a T265 expert, offers advice about the raw IMU data here:

IntelRealSense/realsense-ros#763 (comment)

@GOBish

GOBish commented May 28, 2019

I'm looking to use a D435 and T265 together, as in the demos, on a mobile platform (so battery powered). Does anyone have a recommendation for either a single-board computer or a NUC to use? I think I need two USB 3.0 ports, so I would need at least an Up Squared rather than just the UP board, correct? Would love to hear what you would do for this kind of project. Thanks!

@MartyG-RealSense
Collaborator Author

MartyG-RealSense commented May 29, 2019

An Up Squared has been shown to be able to handle two D415 on the same board, so a D435 and T265 pairing should be fine in theory.

There is a long discussion about using the Up Board or Up Squared with two cameras at the link below. Click on the 'More Answers' link at the bottom of that page to see the full discussion.

https://forums.intel.com/s/question/0D50P0000490XyySAE/aaeon-up-board-setup-with-2x-intel-realsense-depth-camera-d415?language=en_US

@GOBish

GOBish commented May 29, 2019

Thank you for the reply and link!

@patrickpoirier51

Confirmed this to work on the UP board's big brother, the AAEON PICO-APL3:
https://www.aaeon.com/en/p/pico-itx-boards-pico-apl3

The main issue is how to run a fully online SLAM system, including loop closure, in real time on these entry-level Intel processors.

@GOBish

GOBish commented May 29, 2019

If you were to go with a NUC, which would you get? The expensive ones are really nice, but which of the lower-cost ones would you choose?

@MartyG-RealSense
Collaborator Author

After Intel released the super-powerful but high-priced NUC 8 VR kit in Spring 2018 (multiple USB 3 ports and powerful graphics), they followed it up with the announcement of a budget-priced 2018 range with a more modest spec.

https://forums.intel.com/s/question/0D50P0000490VcOSAU/new-budgetprice-intel-nuc-models?language=en_US

@GOBish

GOBish commented May 29, 2019

great info, thanks!

@GOBish

GOBish commented Oct 25, 2019

@dorodnic and @MartyG-RealSense, has there been any development with real-time sharing of a relocalization map between two devices? (As was discussed in #3129 (comment).)
I understand we can create a map and then load it, but it would be amazing to be able to create a map on the fly with two T265s and have the two units know where each other is in the map.

@MartyG-RealSense
Collaborator Author

I do not have enough knowledge of T265 to comment on the practicality of a shared map - one of the Intel guys such as Dorodnic will be better equipped to comment on that. It was established recently though that multiple T265s can be put on separate threads. Please read downwards from the comment linked to below.

#4961 (comment)

@tensarflow

tensarflow commented Mar 29, 2020

Hey All! Amazing work, really appreciate it! Two Questions:

About the relocalization in a known map: it is my understanding that this feature is not really accurate (kidnapped-robot example?). Is this true? And if so, would it be possible to use detection of AprilTags at known positions to re-initialize the current position of the robot, and maybe even correct the <1% drift of the T265 on long runs? My system needs to travel long distances with high accuracy.

The other question is about wheel odometry. I think this has been asked a few times above, but since quite some time has passed, I'd like to ask again. Currently I couldn't really find a documented API to provide wheel odometry to the T265, besides ROS. If there is no other way to provide this data, I would have to build ROS on my system just for that.
(EDIT: Okay, I found a few Python examples, but no C++ examples. Is there a guide for C++?)
Also, since the required odometry data has to be a velocity, I would like to know if I could use this data channel to feed in any other sensor data converted to velocity - for example, other IMUs, GPS, or even an external EKF running on my host whose output velocity I pass to the T265 as "wheel odometry".
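
For reference, a hedged sketch of what the wheel-odometry input looks like through the librealsense C++ API (rs2::wheel_odometer, with method names as they appear in the SDK headers to the best of my knowledge). The calibration file name and the velocity value are placeholders; the JSON is assumed to describe the odometer-to-camera extrinsics, as in the Python examples:

```cpp
#include <librealsense2/rs.hpp>
#include <fstream>
#include <iterator>
#include <vector>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_POSE, RS2_FORMAT_6DOF);

    rs2::pipeline_profile profile = cfg.resolve(pipe);
    rs2::wheel_odometer wo = profile.get_device().first<rs2::wheel_odometer>();

    // Load the odometry calibration before starting the pipeline.
    std::ifstream calib("calibration_odometry.json", std::ios::binary);   // placeholder file
    std::vector<uint8_t> buf((std::istreambuf_iterator<char>(calib)),
                              std::istreambuf_iterator<char>());
    wo.load_wheel_odometery_config(buf);

    pipe.start(cfg);

    uint32_t frame_num = 0;
    while (true)
    {
        rs2::frameset fs = pipe.wait_for_frames();
        (void)fs;

        // Velocity from your odometry source (m/s, in the odometer frame) - placeholder value.
        rs2_vector v{ 0.1f, 0.0f, 0.0f };
        wo.send_wheel_odometry(0 /*sensor id*/, frame_num++, v);
    }
}
```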

@neilyoung
Contributor

About the relocalization in a known map: it is my understanding that this feature is not really accurate (kidnapped-robot example?). Is this true? And if so, would it be possible to use detection of AprilTags at known positions to re-initialize the current position of the robot, and maybe even correct the <1% drift of the T265 on long runs? My system needs to travel long distances with high accuracy.

No. This feature works pretty well. If the robot is released again in an environment it has already seen and mapped, it is quite easily possible to re-orientate and determine the correct (relocated) position. But you need to help the T265 via set/get_static_node and re-project everything from the retrieved static node outside the camera, in your code. See https://neilyoung.serveblog.net for examples and working code.

You don't even need AprilTags for this. I find this "AT help" scenario a bit unrealistic: you need to do the AT detection outside the camera, in your code, which immediately raises the requirements on the host hardware. The AT detection also has a cost, and it leaves you more or less without orientation for as long as you are still looking for ATs. Of course, once you have a corrective in the form of a detected AT, you can use this information instead of get_static_node to re-project.

I'm not sure where this 1% drift figure comes from. At small scale the drift can be pretty low. At large scale the drift can be well over 1%, even outdoors.

My personal opinion of wheel odometry is: I can't really imagine that a wheel - especially a sliding and spinning wheel - can improve the accuracy of the entire system much, so I never used it. But I might be wrong. However, I was always pretty satisfied with the accuracy indoors w/o wheel odometry, but I don't count in centimetres... IMHO this is overkill.

@tensarflow

tensarflow commented Mar 29, 2020

Thanks for the quick reply!

About the ATs, actually I think you could let the T265 take care of computation as explained here, and then just take the pose data.

Also, I didn't quite understand how I am supposed to set/get these static points. Is it like saying "You are at position x,y,z right now!"? Because if so, wouldn't that mean that I would have to actively put the sensor at a position I know (a reference position) and then run the get_static_node code from there? And how accurate would that be, since I won't put the sensor in exactly the same position each time?
From that point of view I think using ATs is actually pretty useful, since the calibration would happen passively, without my interference.

But as I said, I haven't had that breakthrough in my head yet and I have some research to do, so don't try to explain basic concepts here (if it's going to be too much trouble) :)

@neilyoung
Contributor

neilyoung commented Mar 29, 2020

Also, I didn't quite understand how I am supposed to set/get these static points. Is it like saying "You are at position x,y,z right now!"? Because if so, wouldn't that mean that I would have to actively put the sensor at a position I know (a reference position) and then run the get_static_node code from there?

In my code it is used like this:
set_static_node expects a string and pose data. And yes, the referencing must be done exactly as you describe, but just ONCE PER MAP: while mapping, you need to move the sensor exactly to a known point in space which you would like to make your coordinate origin, and point the camera exactly in the direction you would like to assign to the node. So you have a world reference coordinate (it can be GPS (WGS84) or X/Y/Z) and a heading. This is kept in the metadata of the app. The T265 just stores the current pose under a given name in the map.

Later, wherever you are in the real world, if you are using the same map and call get_static_node, it will return the same pose, but now in the current reference frame of the camera (which changes with every start, as you know). My code does all the math to use this new coordinate w.r.t. the defined coordinate origin. You don't have to be at the same point in the real world at which you initially stored the node.

AT is not my favourite, since it requires you to place printed markers at exact positions in your world. This might work for a PoC, but customers will not like it :) Promised. AT is also a format that does not allow you to store generic data; it's not QR. You would also need a translation layer that converts a detected AT into the world coordinates you need.

Give my solution a try, it's easy. There are a lot of videos explaining everything. You don't even necessarily need a floor plan: a photo of a true-to-scale hand-drawn rectangle on a piece of paper will do...
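
A hedged sketch of the set/get_static_node flow described above, using the librealsense C++ API (rs2::pose_sensor). The node name "origin_node" is just an example:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_POSE, RS2_FORMAT_6DOF);

    rs2::pipeline_profile profile = pipe.start(cfg);
    rs2::pose_sensor tm_sensor = profile.get_device().first<rs2::pose_sensor>();

    // Mapping session: hold the camera at the chosen real-world origin, pointing along the
    // chosen heading, then store the current pose under a name of your choice.
    rs2::frameset fs = pipe.wait_for_frames();
    rs2_pose pose = fs.get_pose_frame().get_pose_data();
    tm_sensor.set_static_node("origin_node", pose.translation, pose.rotation);

    // Later session, same map: retrieve that node. The pose comes back expressed in the
    // current (per-session) reference frame and can be used to re-project coordinates.
    rs2_vector node_pos;
    rs2_quaternion node_orient;
    if (tm_sensor.get_static_node("origin_node", node_pos, node_orient))
    {
        // node_pos / node_orient now define the world origin in this session's frame.
    }
}
```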

@tensarflow

Thank you very much for your explanation, it really helped! I'll give it a try, once my camera arrives. 👍

@neilyoung
Contributor

Thanks. I really would appreciate some feedback. I'm pretty convinced by the accuracy shown indoors; I haven't seen better yet. But I'm on my own, with no feedback.

@tensarflow

Yeah, sure, I'd totally try it out and give some feedback. Checked your website out btw, impressive accuracy actually. I think I'm going to use something like ATs or ArUco codes anyway, since I'm trying to do some shelf picking with a robot arm and I really need millimetric accuracy. That's why I thought, why not use the same markers to kind of "fine-tune" my relocalization. But if it really is that good, maybe I won't need them after all. I'm definitely going to give feedback. Thanks again. :)

@stevemartinov

Hi, any news on the maximum map size of the T265? Did it increase?

@kevinxu918

Hello, I am using a T265 with a Jetson NX. Every time I boot up the NX and the T265, I get errors saying "Error booting T265" and "No RealSense devices were found!". I can unplug the USB cable and plug it back in to resolve the issue, but it is annoying to do that every time I power-cycle the drone. Is there any way to fix that?

Also, I often see the error "SLAM_ERROR Speed" - what causes that, and how can I resolve it?

Normally the drone flies well, but sometimes it suddenly flies away and loses control. The position data looks odd - why is that? Is it caused by the "SLAM_ERROR Speed" error?

Thanks

@mikeh9

mikeh9 commented Jul 13, 2023

It's a known issue that you have to unplug / plug in the T265 after boot. I have not found a solution for this.

I don't think I have seen the error "SLAM_ERROR Speed".

What happens is that vibration causes the T265 to lose its tracking; I have seen this. One solution is to mount the T265 with vibration dampeners. Another solution is to monitor the T265 tracking quality: if it is bad, you can switch your drone to manual.

@kevinxu918

It's a known issue that you have to unplug / plug in the T265 after boot. I have not found a solution for this.

I don't think I have seen the error "SLAM_ERROR Speed".

What happens is that vibration causes the T265 to lose its tracking; I have seen this. One solution is to mount the T265 with vibration dampeners. Another solution is to monitor the T265 tracking quality: if it is bad, you can switch your drone to manual.

Thanks very much for your kind reply. Is there any good substitute for the T265?

And how can I monitor the tracking quality?

@mikeh9

mikeh9 commented Jul 14, 2023

Check out this video for an alternative. It is using wireless tracking.

Tracking quality can be monitored with two different confidence variables:
tracker_confidence and mapper_confidence
example: https://dev.intelrealsense.com/docs/rs-pose
source: https://github.com/IntelRealSense/librealsense/blob/master/src/rs.cpp#L2453

I use tracker_confidence. It should always be 3 in order to trust the data coming from the T265. If it drops below 3, then set an alarm and switch to manual control.
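
A minimal sketch of that check against the pose stream (librealsense C++ API; the alarm action is a placeholder):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_POSE, RS2_FORMAT_6DOF);
    pipe.start(cfg);

    while (true)
    {
        rs2::frameset frames = pipe.wait_for_frames();
        rs2_pose pose = frames.get_pose_frame().get_pose_data();

        // tracker_confidence: 0 = failed, 1 = low, 2 = medium, 3 = high
        if (pose.tracker_confidence < 3)
        {
            std::cout << "Tracking degraded - switch to manual control\n";   // placeholder alarm
        }
    }
}
```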

@kevinxu918

Check out this video for an alternative. It is using wireless tracking.

Tracking quality can be monitored with two different confidence variables: tracker_confidence and mapper_confidence example: https://dev.intelrealsense.com/docs/rs-pose source: https://github.com/IntelRealSense/librealsense/blob/master/src/rs.cpp#L2453

I use tracker_confidence. It should always be 3 in order to trust the data coming from the T265. If it drops below 3, then set an alarm and switch to manual control.

Thanks very much Mike for your recommendation!
