Using keep() method to buffer frames to avoid frame drops. #6146
Keep() stores the frames in memory instead of writing them continuously to storage such as a hard disk. At the end of the recording process, once you close the pipeline, you can perform batch-processing operations on the frames (e.g. applying post-processing) and then save them to storage in a single action.

The main disadvantage of using Keep() is that, because the frames are stored in memory, only short recording sequences are possible. The maximum recommendable recording duration would be around 30 seconds if your computer has a large amount of memory. Otherwise, 10 seconds may be a more practical limit for computers with smaller memory capacity.

The link below has a Keep() script for Python that you can compare to your own code.

In regard to altering the frame queue size to change the balance between performance and latency, @dorodnic has some advice here:

This documentation link has more information about frame buffering management:
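To see why the recording duration is limited, it helps to estimate the memory footprint. Here is a back-of-envelope sketch; the stream settings (1280x720 depth, 16 bits per pixel, 30 fps) are assumptions for illustration, not figures from this thread:

```python
# Rough memory budget for keep(): frames stay in RAM until the
# pipeline is closed, so total usage grows linearly with duration.
# Assumed stream settings (illustrative): 1280x720 depth, z16
# format (2 bytes per pixel), 30 fps.
width, height = 1280, 720
bytes_per_pixel = 2
fps = 30

frame_bytes = width * height * bytes_per_pixel   # one depth frame

def footprint_mb(seconds):
    """RAM needed to keep `seconds` of depth frames, in mebibytes."""
    return frame_bytes * fps * seconds / (1024 ** 2)

print(round(footprint_mb(10)))   # ~527 MB for 10 s
print(round(footprint_mb(30)))   # ~1582 MB for 30 s
```

A 30-second depth-only recording already approaches 1.6 GB, and adding an RGB stream roughly doubles that, which is consistent with the 10-to-30-second guidance above.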
Hey @MartyG-RealSense

From the Frame Buffering Management documentation I learned that, besides the pipeline interface I was using, there are other classes for accessing frames from the camera (frame_queue and syncer), which are considered low-level in comparison to the pipeline. While the pipeline's buffer capacity can only be changed in the RealSense SDK source code, we can set the capacity of frame_queue and syncer and do some sort of buffering, though it might affect the camera's performance (cause latency and ... according to #5041). So it might be a good idea for me to experiment with frame_queue and syncer. Please correct me if I'm wrong.

What I don't understand is that while the Frame Buffering Management documentation has some clear examples, I can't find any sign of the keep() method being used in them. Also, the code in #3164 that you mentioned is very much like mine. It reads a frame using

So let me explain briefly what I'm trying to do.
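The capacity-versus-latency trade-off mentioned above can be sketched with a plain `collections.deque` standing in for a bounded frame queue. This is not the pyrealsense2 API, just a toy model of the drop-oldest behaviour a small capacity produces when the consumer is slower than the camera:

```python
from collections import deque

class BoundedFrameQueue:
    """Toy stand-in for a bounded frame queue (e.g. a small capacity).

    A larger capacity drops fewer frames but hands the consumer older
    (higher-latency) frames; a smaller one stays fresh but drops more.
    """
    def __init__(self, capacity):
        self._q = deque(maxlen=capacity)
        self.dropped = 0

    def enqueue(self, frame):
        if len(self._q) == self._q.maxlen:
            self.dropped += 1          # oldest frame is evicted
        self._q.append(frame)

    def poll(self):
        return self._q.popleft() if self._q else None

q = BoundedFrameQueue(capacity=2)
for frame_number in range(5):   # camera produces 5 frames,
    q.enqueue(frame_number)     # consumer never keeps up

print(q.dropped)                # 3 frames were evicted
print(q.poll(), q.poll())       # only the newest two (3 and 4) remain
```

With a real `rs.frame_queue`, the SDK manages this eviction internally; the sketch only shows why tuning the capacity shifts the balance discussed in #5041.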
My understanding is that if you are only using one stream (e.g. depth) then the frame queue size can be set to '1', and if you are using two streams (e.g. depth and RGB) then it can be set to '2'. However, the default frame queue settings are already optimised for low lag. There is nothing to stop you from experimenting with the values though, bearing in mind that streaming may get worse if the frame queue values are wrong.

The majority of people tend to use the pipeline method in their programs, e.g. pipe.start() and pipe.stop(). The documentation considers the syncer class to be a high-level function that can be used to "synchronize any set of different asynchronous streams with respect to hardware timestamps". So whether you need to make use of it, or use the more common pipeline method, may depend on the kinds of processing your application is doing.

There are not many examples of use of the Keep() function. This may be because the limited recording time does not suit many users' needs. The only other Python code example for Keep() that I could find was this one:

There are a couple of more advanced uses of Keep() demonstrated in the links below, though they are in the C++ language:

Usually with Keep(), the processing of the collected frames happens after the pipeline has been closed and the recording of new frames stops. This could be applying post-processing and/or aligning the frames. @dorodnic would be better able to advise you on how to use Keep() to achieve the project goal described at the bottom of your original message.
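The record-then-batch-process control flow described above can be sketched as follows. The camera objects here are hypothetical stubs, not pyrealsense2 (`StubFrame` and `capture` are stand-ins for `rs.frame` and `pipe.wait_for_frames()`); only the shape of the loop mirrors the real usage, i.e. call keep() on each frame inside the capture loop, stop the pipeline, then process the retained list in one pass:

```python
# Sketch of the keep()-then-batch pattern with stub objects.
# In librealsense, keep() tells the SDK not to recycle a frame's
# buffer, so a Python reference to the frame stays valid; here the
# stub just records that keep() was called.

class StubFrame:
    """Hypothetical stand-in for rs.frame."""
    def __init__(self, number):
        self.number = number
        self.kept = False

    def keep(self):
        self.kept = True   # real SDK: pin the underlying buffer

def capture(n_frames):
    """Hypothetical stand-in for repeated pipe.wait_for_frames()."""
    return (StubFrame(i) for i in range(n_frames))

kept_frames = []
for frame in capture(5):        # recording phase
    frame.keep()                # prevent the buffer being reused
    kept_frames.append(frame)   # hold a reference for later

# pipe.stop() would go here -- recording is over.

# Batch phase: post-process and save everything in one action.
processed = [f.number * 2 for f in kept_frames]   # e.g. a filter pass
print(processed)
```

The answer to "where are we keeping the frame?" in this pattern is simply: in your own Python list; keep() only guarantees the SDK will not reclaim the frame's memory while you hold it.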
This case will be closed after 7 days from the time of writing this message if there are no further comments. Thanks!
Hello @alizarghami, did you figure out how to use the keep() method? |
Hello @asb2111991 The following code is the best example I can give you:
Also check this for a better example. |
Issue Description
I have a very simple question. How does the frames' keep() method work? I didn't find any documentation for it and didn't understand the examples I found.
I learned from some other issues on GitHub (like #1000) that frames have a keep() method that can be used to save frames for future processing. But I can't understand how it works. For example, in this piece of code:
Where are we keeping the frame?
How can I access it later?
Also how can I set the buffer size used for keeping the frames in Python?
Can you please provide me a simple Python example?
Thank you