
Using keep() method to buffer frames to avoid frame drops. #6146

Closed
alizarghami opened this issue Mar 28, 2020 · 6 comments

@alizarghami


Required Info
Camera Model D415
Operating System & Version Ubuntu 18
Platform PC/Raspberry Pi
SDK Version 2 ..
Language Python
Segment IoT

Issue Description

I have a very simple question: how does the frames' keep() method work? I haven't found any documentation for it, and I didn't understand the examples I found.
I learned from some other issues on GitHub (like #1000) that frames have a keep() method that can be used to save frames for future processing, but I can't understand how it works. For example, in this piece of code:

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

while True:
    frame = pipeline.wait_for_frames()
    frame.keep()

Where are we keeping the frame?
How can I access it later?
Also how can I set the buffer size used for keeping the frames in Python?

Can you please provide me a simple Python example?

Thank you

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 28, 2020

Keep() stores the frames in memory instead of writing them continuously to storage such as a hard disk. At the end of the recording process, once you close the pipeline, you can perform batch-processing operations on the frames (e.g. applying post-processing) and then save them to storage in a single action.

The main disadvantage of using Keep() is that because the frames are stored in memory, only short recording sequences are possible. The maximum recommended recording duration would be around 30 seconds if your computer has a large amount of memory; otherwise, 10 seconds may be a more practical limit for computers with smaller memory capacity.
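As a rough illustration of that pattern (a sketch only, not tested against a live device; the frame count and stream settings are assumptions), the flow looks like this:

```python
import pyrealsense2 as rs

# Sketch: record with keep(), then batch-process after the pipeline is closed.
# Assumes a connected RealSense device and default stream settings.
pipeline = rs.pipeline()
pipeline.start()

kept = []                      # hold references so the framesets stay alive
try:
    for _ in range(300):       # ~10 s at 30 fps; frames accumulate in RAM
        frames = pipeline.wait_for_frames()
        frames.keep()          # detach the frameset from the SDK's internal pool
        kept.append(frames)    # keep() alone does not make it retrievable later
finally:
    pipeline.stop()

# Batch processing after recording has stopped, e.g. saving depth data
for frames in kept:
    depth = frames.get_depth_frame()
    # ... process / save the depth frame here ...
```

The key point is that keep() prevents the SDK from recycling the frame's memory, while the Python list is what lets you get back to it afterwards.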

The link below has a Keep() script for Python that you can compare to your own code.

#3164 (comment)

In regard to altering frame queue size to change the balance between performance and latency, @dorodnic has some advice here:

#5041 (comment)

This documentation link has more information about frame buffering management:

https://github.com/IntelRealSense/librealsense/wiki/Frame-Buffering-Management-in-RealSense-SDK-2.0

@alizarghami
Author

Hey @MartyG-RealSense
Thanks for your fast reply. I found some very useful information in the references you provided.
I'm not trying to store the frames to disk; keeping them in memory is exactly what I need.

From the Frame Buffering Management documentation I learned that besides Pipeline, which I was using, there are other classes for accessing frames from the camera (frame_queue and syncer) that are considered low-level compared to Pipeline. While the Pipeline's buffer capacity can only be changed in the RealSense SDK source code, we can set the capacity of frame_queue and syncer and do some sort of buffering, although that might affect the camera's performance (causing latency, etc., according to #5041). So it might be a good idea for me to experiment with frame_queue and syncer.

Please correct me if I'm wrong.
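For what it's worth, one way to experiment along those lines (a sketch under the assumption that pipeline.start() accepts a frame_queue as the frame callback, which the SDK supports; the capacity value is arbitrary) is to route frames into a frame_queue instead of polling the pipeline directly:

```python
import pyrealsense2 as rs

# Sketch only (needs a connected device): deliver frames into a frame_queue
# so the queue can absorb bursts while processing is slow.
queue = rs.frame_queue(50)         # buffer up to 50 framesets (assumed value)
pipeline = rs.pipeline()
config = rs.config()
pipeline.start(config, queue)      # frames now arrive in the queue

try:
    for _ in range(100):
        frames = queue.wait_for_frame()   # blocks until a frameset is available
        # slow per-frame processing can happen here; frames queue up meanwhile
finally:
    pipeline.stop()
```

The trade-off, as noted in #5041, is that a deep queue adds latency: frames you dequeue may be noticeably older than the most recent capture.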

What I don't understand is that while the Frame Buffering Management documentation has some clear examples, I can't find any sign of the keep() method in them. Also, the code in #3164 that you mentioned is very much like mine: it reads a frame using frame = pipeline.wait_for_frames() and then uses frame.keep() to keep it in memory, but never accesses it afterwards. I assume that when you save a frame in memory you somehow need to retrieve it later in order to process it. Am I right, or am I missing something?

Let me explain briefly what I'm trying to do.
I get frames from the camera, and I usually don't need to do much processing on them. However, once in a while I might need to process a few consecutive seconds of frames (say 2-4 seconds, which would be 50-100 frames at 25 fps, each of which might take 1 second to process). The numbers might not be accurate. I'm trying to reduce my frame drops as much as possible.
My problem feels very similar to #1000, and since they found keep() to be their solution, I think it might also work for me.

@MartyG-RealSense
Collaborator

My understanding is that if you are only using one stream (e.g. depth) then the frame queue size can be set to '1', and if you are using two streams (e.g. depth and RGB) then it can be set to '2'. However, the default frame queue settings are already optimised for low lag. There is nothing to stop you from experimenting with the values, bearing in mind that streaming may get worse if the frame queue values are wrong.

The majority of people tend to use the pipeline method in their programs, e.g. pipe.start() and pipe.stop().

The documentation considers the syncer class to be a high-level function that can be used to "synchronize any set of different asynchronous streams with respect to hardware timestamps". So whether you need to make use of it or use the more common pipeline method may depend on the kinds of processing that your application is doing.

https://github.com/IntelRealSense/librealsense/blob/master/doc/api_arch.md#high-level-pipeline-api

There are not many examples of the Keep() function in use, perhaps because the limited recording time does not suit many users' needs. The only other Python code example for Keep() that I could find was this one:

#3121 (comment)

There are a couple of more advanced uses of Keep() demonstrated in the links below, though they are in the C++ language:

#1942 (comment)

#2223

Usually with Keep(), the processing of the collected frames happens after the pipeline has been closed and the recording of new frames has stopped. This could be applying post-processing and/or aligning the frames.
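As a sketch of that post-recording step (assumptions: depth and color streams were recorded, and `kept` is a list of framesets saved with keep(); not tested against a live device):

```python
import pyrealsense2 as rs

# Sketch: align and post-process framesets AFTER pipeline.stop().
# `kept` is assumed to be a list of framesets stored with keep() during recording.
align = rs.align(rs.stream.color)     # align depth to the color viewpoint
temporal = rs.temporal_filter()       # one example post-processing filter

for frames in kept:
    aligned = align.process(frames)           # returns an aligned frameset
    depth = aligned.get_depth_frame()
    filtered = temporal.process(depth)        # smooth depth across frames
    # ... convert with np.asanyarray(filtered.get_data()) and save to disk ...
```

Because the camera has already stopped, this loop can take as long as it needs without causing frame drops.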

@dorodnic would be better able to advise you on how to use Keep() to achieve the project goal described at the bottom of your original message.

@MartyG-RealSense
Collaborator

This case will be closed after 7 days from the time of writing this message if there are no further comments. Thanks!

@asb2111991

Hello @alizarghami, did you figure out how to use the keep() method?

@alizarghami
Author

alizarghami commented Oct 2, 2021

Hello @asb2111991
Actually, I gave up on that. I think keep() just stores the frame in memory; however, if you want to retrieve it, you have to manually keep a reference to it. If that's right, it is not exactly what I was looking for, though it probably speeds up retrieving the frame.
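The "keep a reference yourself" point can be illustrated with plain Python (no camera needed; DummyFrame is a made-up stand-in for a RealSense frame): a bounded deque acts as the ring buffer, and anything that falls off the end becomes eligible for garbage collection.

```python
from collections import deque

class DummyFrame:
    """Stand-in for a RealSense frame; real code would call frame.keep() too."""
    def __init__(self, number):
        self.frame_number = number

buffer = deque(maxlen=4)          # ring buffer holding the last 4 frames

for n in range(10):
    frame = DummyFrame(n)
    # with pyrealsense2 you would call frame.keep() here before storing it
    buffer.append(frame)          # the reference is what makes it retrievable

print([f.frame_number for f in buffer])   # → [6, 7, 8, 9]
```

keep() only stops the SDK from recycling the frame's memory; without a container like this holding the reference, the frame is still lost to your program.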

The following code is the best example I can give you:

# Imports

import cv2
import numpy as np
import pyrealsense2 as rs


# Initialization

pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    
queue = rs.frame_queue(capacity=100)


# Main
pipeline.start(cfg)

for i in range(100):
    # get frame from pipeline
    frame = pipeline.wait_for_frames()
    frame_number = frame.frame_number
    
    frame.keep()
    queue.enqueue(frame)
    
pipeline.stop() 
    

for i in range(100):  
    frame = queue.wait_for_frame()
    frame_number = frame.frame_number

    color_frame = frame.as_frameset().get_color_frame()
    color_image = np.asanyarray(color_frame.get_data())
    
    cv2.rectangle(color_image, (10, 2), (100,20), (255,255,255), -1)
    cv2.putText(color_image, str(frame_number), (15, 15),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5 , (0,0,0))
    cv2.imshow('Color image', color_image.astype(np.uint8))
    keyboard = cv2.waitKey(30)
    if keyboard == ord('q') or keyboard == 27:
        break

Also check this for a better example.
