models/sam-2/ #14830
Replies: 20 comments 56 replies
-
Hey everyone! We wanted to give you a quick update: our team is currently busy integrating SAM 2 (Segment Anything Model 2) into the Ultralytics package. This new model is super exciting, with features like real-time performance and the ability to segment objects it has never seen before. We're really pumped about what SAM 2 can do, but it's still in the works. Thanks for your patience as we get everything set up! We'll keep you posted as we make progress. Stay tuned!
-
Could you please provide an example of how to perform inference on a video? I would appreciate it if you could include details on the necessary steps and code snippets. Thank you!
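Until official examples land in the docs, here is a minimal sketch of video inference with the ultralytics `SAM` class. The `sam2_b.pt` weight name, `stream=True` behaviour, and `Results.save` call are assumptions based on the current package API, so verify against your installed version:

```python
def mask_summary(result):
    """Count how many masks a single frame's Results object holds."""
    masks = getattr(result, "masks", None)
    return 0 if masks is None else len(masks)

def run_video_inference(video="path/to/video.mp4"):
    """Requires `pip install ultralytics`; sam2_b.pt auto-downloads on first use."""
    from ultralytics import SAM  # imported here so the helper above stays standalone

    model = SAM("sam2_b.pt")
    # stream=True yields one Results object per frame instead of buffering a list
    for i, result in enumerate(model(video, stream=True)):
        print(f"frame {i}: {mask_summary(result)} masks")
        result.save(filename=f"frame_{i:05d}.jpg")  # annotated frame to disk
```

Calling `run_video_inference("myvideo.mp4")` writes one annotated JPEG per frame; point or box prompts can be passed as extra keyword arguments to segment specific objects.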
-
Is it usable with the ultralytics package at the moment, or should we wait until the implementation is done?
-
Hey dear people,
-
How can you fine-tune this model for your specific needs?
-
When will the training option be enabled for SAM 2?
-
Does this support text prompts? For example, I want to search for buildings or cars in an aerial image or video.
-
Is there an API to fine-tune the model on our own labeled photos? Also, what are the schema and shape for the inputs?
-
Thank you very much for your beautiful work! I have a question regarding prompts with several points. When I prompt your integration of SAM 2 with several points I get several masks, one mask for each prompt point (which is what I want), while when I do the same thing using SAM 2 directly, I get one mask. Are there certain parameters that need to be used while initializing SAM 2 or while running the prediction?
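One likely cause: the ultralytics predictor treats a flat `points` list as independent prompts (one mask each), while SAM 2's own predictor treats all coordinates as a single prompt describing one object. A small sketch of the two nesting shapes; the nested-list prompt format is an assumption about the current ultralytics API, so verify it against your installed version:

```python
def as_separate_prompts(points):
    """Flat list of (x, y) -> one prompt per point -> one mask per point."""
    return [list(p) for p in points]

def as_single_prompt(points):
    """Nest all points inside one prompt -> one combined mask for one object."""
    return [[list(p) for p in points]]

def run_both(image="image.jpg"):
    """Requires ultralytics; shows the two call shapes side by side."""
    from ultralytics import SAM

    model = SAM("sam2_b.pt")
    pts = [(200, 150), (420, 300)]
    per_point = model(image, points=as_separate_prompts(pts), labels=[1, 1])
    combined = model(image, points=as_single_prompt(pts), labels=[[1, 1]])
    return per_point, combined
```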
-
How can I use this sample code to make a detection (with YOLOv8) and segment that detection through a video (more like a real-time video stream)? I have run the code and it worked, and I got a txt file which includes labels, but I didn't know what to do next.
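A sketch of chaining the two models per frame instead of going through the saved label txt file: run YOLOv8 detection, then feed the detected boxes to SAM 2 as box prompts. The model filenames and the `bboxes` keyword are assumptions based on the current ultralytics API:

```python
def boxes_to_prompts(xyxy_rows):
    """Convert rows of [x1, y1, x2, y2] into a plain list of float boxes for SAM."""
    return [[float(v) for v in row] for row in xyxy_rows]

def detect_then_segment(source="video.mp4"):
    """Requires ultralytics; streams frames, detects, then segments detections."""
    from ultralytics import SAM, YOLO

    detector = YOLO("yolov8n.pt")
    segmenter = SAM("sam2_b.pt")
    for i, det in enumerate(detector(source, stream=True)):
        boxes = boxes_to_prompts(det.boxes.xyxy.tolist())
        if not boxes:
            continue  # nothing detected in this frame
        seg = segmenter(det.orig_img, bboxes=boxes)
        seg[0].save(filename=f"seg_{i:05d}.jpg")  # segmented frame to disk
```

For a live stream, `source` can be a camera index or stream URL, since YOLO accepts those as sources.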
-
Hello! I am trying to test SAM 2, but it says "sam2_b.pt is not a supported SAM model. Available models are:". My code:

```python
from ultralytics import SAM

# Load a model
model = SAM("/backend/model/sam2_b.pt")

# Display model information (optional)
model.info()

# Run inference
model("Input_videos/video.mp4")
```
-
Hey there, I was wondering how I can set foreground and background points and then visualize the segmentation properly? Here is my current code, but it gives each point's segmentation a distinct color.
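A hedged sketch: pass `labels=1` for foreground points and `labels=0` for background points, then merge the returned per-prompt masks into one binary mask so the whole object is drawn in a single colour. The `labels` semantics and the `masks.data` layout are assumed from the current ultralytics API:

```python
import numpy as np

def union_mask(mask_stack):
    """Merge an (N, H, W) stack of binary masks into one (H, W) boolean mask."""
    m = np.asarray(mask_stack, dtype=bool)
    return m.any(axis=0)

def overlay_single_colour(image="image.jpg"):
    """Requires ultralytics; paints the union of all returned masks in green."""
    from ultralytics import SAM

    model = SAM("sam2_b.pt")
    # labels: 1 = foreground point, 0 = background point
    res = model(image, points=[[300, 200], [120, 80]], labels=[1, 0])
    merged = union_mask(res[0].masks.data.cpu().numpy())
    out = res[0].orig_img.copy()
    out[merged] = (0, 255, 0)  # one colour for the whole segmentation
    return out
```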
-
The web page shows the example below, but what options do I have for saving the results using ultralytics' built-in methods?
-
The SAM and SAM 2 documentation pages have an identical block called 'SAM comparison vs YOLOv8'. I'd like that section to show updated results, but if that is significant work, it would be better to remove it. If it is updated, enumerating all the SAM 2 variants would be nice.
-
Hello Glenn,
-
Hello sir, I am trying to run the auto-annotate code but I am getting an error. Please help me solve it. Code: `auto_annotate(data="defect_tyre/inside2.jpg", det_model="best.pt", sam_model="sam2_b.pt")` Error: Requirement: I want my model to improve its accuracy over time without the need for retraining each time I add more data. I'm looking for a way to enable my model to learn and adapt on its own, similar to reinforcement learning in computer vision. Could you suggest how to achieve this?
-
How can I use the mask data from the results object to create a mask image?
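One way, assuming `results[0].masks.data` is an (N, H, W) binary tensor as in current ultralytics builds: collapse the stack into a single 0/255 image and write it out.

```python
import numpy as np

def masks_to_image(mask_data):
    """Turn (N, H, W) or (H, W) binary mask data into a uint8 image (0 or 255)."""
    m = np.asarray(mask_data)
    if m.ndim == 2:
        m = m[None]  # promote a single mask to a stack of one
    return m.any(axis=0).astype(np.uint8) * 255

def save_mask_image(image="image.jpg", out="mask.png"):
    """Requires ultralytics and opencv-python."""
    import cv2
    from ultralytics import SAM

    model = SAM("sam2_b.pt")
    res = model(image, points=[[400, 300]], labels=[1])
    cv2.imwrite(out, masks_to_image(res[0].masks.data.cpu().numpy()))
```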
-
Hey! I am trying to do video segmentation with point prompting. But it seems to me that only the first frame is segmented with the prompt. For all the other frames, everything seems to get segmented. How can I resolve this?
-
Breaking News: 🚨 SAM 1.0 outperforms SAM 2.0 on low-contrast images like CT scan and MRI images! Use case with ultralytics: https://www.youtube.com/watch?v=vMI-TnyNLYU Please like and subscribe for more videos like the one above. #SAM #SAM2.0 #MetaAI #ImageSegmentation #AI #MachineLearning #DeepLearning #DataMask #BreakingNews
-
models/sam-2/
Discover SAM 2, the next generation of Meta's Segment Anything Model, supporting real-time promptable segmentation in both images and videos with state-of-the-art performance. Learn about its key features, datasets, and how to use it.
https://docs.ultralytics.com/models/sam-2/