help/ #8027
128 comments · 374 replies
-
I want to save the images in which an object is detected and the images in which no object is detected into different folders.
-
I'm seeking clarification regarding the imgsz parameter in YOLO (You Only Look Once) and its impact on image resizing and bounding boxes. In my dataset, all images have a consistent size of 1920x1080 pixels. If I set the imgsz parameter to 640, will the images be internally downscaled to 640x640 pixels by YOLO during training or inference? In the context of this resizing, I'm curious about the effect on bounding boxes. Do their coordinates change, or does YOLO handle the calculation of new bounding boxes internally to accommodate the downscaled images? I want to ensure that I understand how YOLO manages the resizing process and its implications for object detection accuracy. Any insights or pointers to relevant documentation would be greatly appreciated. Thank you.
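Normalized YOLO labels are fractions of the image width and height, so they survive resizing unchanged; the framework scales pixel coordinates internally. As an illustration (not the library's actual code), here is a sketch of how a 1920x1080 image maps to imgsz=640 under an aspect-preserving, letterbox-style resize; `letterbox_shape` is a hypothetical helper:

```python
def letterbox_shape(w, h, imgsz=640):
    """Resized (w, h) before square padding, keeping the aspect ratio."""
    r = min(imgsz / w, imgsz / h)
    return round(w * r), round(h * r)

# A 1920x1080 image is scaled to 640x360; the remaining rows are padding.
print(letterbox_shape(1920, 1080))  # (640, 360)
```

Because labels are normalized, a box at (0.5, 0.5) of the width/height stays at (0.5, 0.5) of the resized image content; no manual recalculation is needed.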
-
Hi, with regard to inference/predict in YOLOv8, how can you obtain the run number, or the path to runs/detect/predictXX where XX is the sequential number? The reason is that I would like to automatically get the image with all the bounding boxes, for example runs/detect/predict46. Using: Thanks
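A sketch of two ways to recover the output folder: recent ultralytics versions expose it directly on the results, and the fallback helper below (`latest_predict_dir` is a hypothetical name) just scans for the newest `predict*` directory:

```python
from pathlib import Path

# Option A (recent ultralytics versions expose the folder directly):
# from ultralytics import YOLO
# results = YOLO("yolov8n.pt").predict("image.jpg", save=True)
# print(results[0].save_dir)  # e.g. runs/detect/predict46

# Option B: locate the most recently modified predict* run folder yourself.
def latest_predict_dir(root="runs/detect"):
    """Return the newest runs/detect/predict* directory, or None."""
    dirs = [d for d in Path(root).glob("predict*") if d.is_dir()]
    return max(dirs, key=lambda d: d.stat().st_mtime) if dirs else None
```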
-
@glenn-jocher thanks for being so responsive and helpful. It's really impressive. My question is: what color does YOLOv8 use as infill when a training image is not square? Maybe I'm misunderstanding, but if the algorithm pads the image to a square size, I'm just curious what color it pads with (zeros, i.e., black?). If it makes any difference, I'm currently training a classification model.
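For detection, the letterbox step pads with mid-gray (114, 114, 114) rather than black; classification preprocessing typically resizes/crops instead of padding. A pure-NumPy sketch of the padding idea (nearest-neighbour resize for brevity; this is an illustration, not the library's implementation):

```python
import numpy as np

def letterbox_pad(img, size=640, color=114):
    """Pad an HxWx3 image to size x size with gray, keeping aspect ratio."""
    h, w = img.shape[:2]
    r = min(size / w, size / h)
    nh, nw = round(h * r), round(w * r)
    # nearest-neighbour resize via index sampling
    ys = np.linspace(0, h - 1, nh).astype(int)
    xs = np.linspace(0, w - 1, nw).astype(int)
    resized = img[ys][:, xs]
    # gray canvas, image content centered
    canvas = np.full((size, size, 3), color, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas
```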
-
Is it possible to combine two YOLOv8 weights?
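There is no built-in merge operation; averaging checkpoints is possible in plain PyTorch but rarely works well for detectors trained on different data (retraining on the combined dataset is usually the practical route). A generic sketch under that caveat; `average_state_dicts` is a hypothetical helper:

```python
def average_state_dicts(sd_a, sd_b, alpha=0.5):
    """Element-wise blend of two state dicts with identical keys and shapes.

    sd_a / sd_b would come from e.g. torch.load("a.pt")["model"].state_dict().
    """
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}
```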
-
Hello, I have a question regarding the
-
Hello, I was wondering if you could give me some clarification on the freeze parameter for YOLOv8. When training begins, the training script automatically prints the layers of the model. There seem to be 23 blocks but over 100 different components. In your examples, you always use In my personal training I used
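As I understand it, `freeze=N` operates on the top-level block indices shown in the printed model table (0 through N-1), not on the 100+ individual components. A sketch of that mapping, assuming Ultralytics' `model.{index}.` parameter naming; `block_index` and `is_frozen` are illustrative helpers, not library functions:

```python
def block_index(param_name):
    """Top-level block index of a parameter named like 'model.3.conv.weight'."""
    return int(param_name.split(".")[1])

def is_frozen(param_name, freeze=10):
    """With freeze=10, blocks 0-9 (the backbone) stop updating; the head trains."""
    return block_index(param_name) < freeze

print(is_frozen("model.3.conv.weight"))           # True
print(is_frozen("model.22.cv2.0.0.conv.weight"))  # False
```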
-
I was using YOLOv8 for number detection (meter readings) and it worked pretty well, but I need some small help.
-
Dear community, thank you to each of the members. I want to extract tree crown boundaries using the YOLOv8 model. After training the model, when I predict on an RGB image, each tree has multiple polygons, but it should be a single polygon per tree. Do you have any idea how to address this issue?
-
In my YAML file I have 11 labels with their respective values. When I use the save_txt command, the labels are saved in a text file, but I want the exact values to be saved, because label 10 has the value ".", which is important for the meter reading. How can I save the values in the text file rather than the label indices? Below is my YAML file: names:
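save_txt writes numeric class indices by design; one workaround is to post-process the predictions yourself, mapping each index through the model's names dict (in ultralytics, the ids come from `results[0].boxes.cls` and the dict from `model.names`). A sketch with an assumed excerpt of the names mapping; `ids_to_values` is a hypothetical helper:

```python
def ids_to_values(class_ids, names):
    """Map numeric class ids to their YAML `names` values (e.g. 10 -> '.')."""
    return [names[int(c)] for c in class_ids]

names = {0: "0", 1: "1", 10: "."}  # assumed excerpt of the data.yaml names
print(ids_to_values([1, 10, 0], names))  # ['1', '.', '0']
```

Joining the mapped values in reading order then yields the meter reading string, decimal point included.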
-
I have not found any resolution to the following issue with YOLOv8. Whenever I start training my model, the val-set score is immediately 1 for precision and recall. I believe this is an error, and it makes it really hard to monitor training. Here's an example:
-
I want to load the model before doing prediction, like this: model.load_state_dict(torch.load(opt.saved_model, map_location=device)). I want to load the model right after I run the file. I'm currently using a custom YOLOv8 model.
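An Ultralytics-trained checkpoint is normally loaded whole from the .pt file, so a manual load_state_dict step isn't needed. A sketch with an assumed weights path; `resolve_weights` is just an illustrative guard, not a library function:

```python
from pathlib import Path

def resolve_weights(path):
    """Fail early with a clear error if the checkpoint is missing."""
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(f"no checkpoint at {p}")
    return p

# Usage (assumed path):
# from ultralytics import YOLO
# model = YOLO(str(resolve_weights("runs/detect/train/weights/best.pt")))
# results = model("image.jpg")
```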
-
Hi, guys at Ultralytics. Still, there's the general route of training by defining everything as torch variables and then having a training loop.
-
Hello
-
I have trained YOLOv8n on a deck of cards. It was detecting most of the cards well, but there was confusion between similar cards such as 5 and 3. So I tried to fine-tune that best.pt model by providing datasets of only the 5 and 3 cards, but now it only detects 5 and 3, not the other cards. I have a lot of decks of cards to train, and I want a single model to detect them all. I have already tried to retrain that best.pt, but it forgets the previously trained deck. Is this possible, and if so, how? I could not find an appropriate answer on the official website. Please reply as soon as possible.
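Fine-tuning on only the confused classes overwrites what the model knew (catastrophic forgetting); the usual fix is to fine-tune on a dataset that still contains examples of all the cards. A sketch (assumed YOLO-format `images/` and `labels/` layout; `merge_datasets` is a hypothetical helper) that combines several datasets into one training folder:

```python
import shutil
from pathlib import Path

def merge_datasets(src_dirs, dst_dir):
    """Copy images/ and labels/ from several YOLO datasets into one."""
    dst = Path(dst_dir)
    for i, src in enumerate(map(Path, src_dirs)):
        for sub in ("images", "labels"):
            (dst / sub).mkdir(parents=True, exist_ok=True)
            for f in (src / sub).glob("*"):
                # prefix with the source index to avoid filename collisions
                shutil.copy(f, dst / sub / f"{i}_{f.name}")
```

Training on the merged folder (with a data.yaml listing every card class) keeps the old classes represented during fine-tuning.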
-
Hi, I noticed that engine.results.Results has a save function that saves an image. I would like to save a video with tracking results where the source is either an existing video or a camera stream. Does Ultralytics have a function to do this? I can't seem to find anything that supports saving video.
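Passing `save=True` to `track()` writes an annotated video under `runs/`; for custom sources such as a camera, a manual loop over streamed results also works. A sketch (filenames are assumptions, and opencv-python is assumed to be installed):

```python
import cv2
from ultralytics import YOLO

# Shortcut: model.track(source="input.mp4", save=True) saves a video for you.
# Manual equivalent, e.g. for a camera stream (source=0):
model = YOLO("yolov8n.pt")
writer = None
for result in model.track(source="input.mp4", stream=True):
    frame = result.plot()  # BGR frame with boxes and track ids drawn
    if writer is None:
        h, w = frame.shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter("tracked.mp4", fourcc, 30.0, (w, h))
    writer.write(frame)
if writer is not None:
    writer.release()
```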
-
Hello Ultralytics team,
Hope you are doing well.
I performed defect detection on an mp4 video, and it displays the defects with bounding boxes. I wanted to say thanks for your help.
But how do I save this resulting video file? And is it possible to add voice assistance to read out the defects? Kindly guide me.
…On Wed, Aug 7, 2024 at 10:26 AM Paula Derrenger ***@***.***> wrote:
@swarnalathayv <https://github.com/swarnalathayv> hi,
To display a video with bounding boxes using your model, ensure you're
using the latest version of the Ultralytics package. The show() method
should work correctly if the model and video are properly loaded. If you're
encountering errors, please check the following:
1. Ensure your video file path is correct.
2. Verify that your model is correctly loaded and compatible with the
video input.
If the issue persists, please provide the error message you're receiving.
For more detailed guidance, refer to our Ultralytics documentation
<https://docs.ultralytics.com/help/>.
--
Thanks & Regards
Swarnalatha YV
Sci/ Eng - SC
VSSC, Trivandrum.
Ph: 0471-256-4375
-
Hi! I am trying to make an auto-annotator-like function based on the ultralytics auto_annotate function (using YOLOv8 and SAM), which takes two images of the same scene (daylight and night time), creates masks from the daylight image, applies those masks to the night-time image, checks which masks contain objects that are still visible at night (some objects might be in a dark area at night and not visible), and finally deletes the masks of invisible objects, retaining only the masks of visible objects. My doubt is: how do I check the image pixel values inside a mask region?
-
Hello! What is the criterion used to select the best.pt model? Does it have to do with the loss or some other metric?
-
Excuse me, if I detect multiple objects of the same category at the same time, how can I distinguish them? For example, if we detect 10 people at the same time, how do we distinguish each person's ID?
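Plain detection has no notion of identity across frames; tracking does. In ultralytics, `model.track(..., persist=True)` assigns each person a stable id, exposed as `results[0].boxes.id`. A sketch; `group_by_id` is an illustrative helper for per-person logic:

```python
def group_by_id(track_ids, boxes):
    """Map each tracker id to its box, e.g. {3: [x1, y1, x2, y2], ...}."""
    return {int(i): list(b) for i, b in zip(track_ids, boxes)}

# In a real loop the inputs would come from:
# for result in model.track(source="people.mp4", stream=True, persist=True):
#     people = group_by_id(result.boxes.id, result.boxes.xyxy)
print(group_by_id([3, 7], [[0, 0, 10, 10], [20, 20, 30, 30]]))
```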
-
Hello Glenn,
Thanks for your previous response, it was very helpful! My new doubt is:
when I use SAM to get the masks for the objects in the image, the 'xy'
point arrays for the masks are too large (too many xy points). Is there a
way to make the generated masks have fewer 'xy' points?
Many thanks in advance
BR
Bhaavanesh Amarnath
…On Fri, Aug 16, 2024 at 9:34 PM Glenn Jocher ***@***.***> wrote:
To check the image pixel values inside a mask region, you can use the mask
to index the image array. Here's a simple example using NumPy:

```python
import numpy as np

# Assuming 'image' is your night-time image and 'mask' is your binary mask
masked_pixels = image[mask == 1]
# Now you can analyze the pixel values in 'masked_pixels'
```

This will give you the pixel values within the masked region. You can then
apply your visibility criteria to determine which masks to retain. For more
detailed guidance, please refer to our documentation:
https://docs.ultralytics.com/help/.
-
I need CCTV/camera spec requirements for YOLOv8 plate number detection and extraction. Thank you.
-
Hello, I am getting the following error during Triton inference. I followed the steps mentioned here: Also, if you can share the config.pbtxt file for the YOLO model mentioned in the above example, that would be great.

AttributeError: 'AutoBackend' object has no attribute 'task'

```python
from ultralytics import YOLO

# Load the Triton Server model
model = YOLO("http://localhost:8000/yolo", task="detect")

# Run inference on the server
results = model("sample_image.jpg")
```
-
Hello, I'm trying to use a 4070 Ti SUPER for training with Ultralytics. This GPU needs CUDA 12.4, so I use torch 2.4.1, but when I installed ultralytics I got an error that ultralytics only supports torch < 2.4.0. How do I use my GPU for training if ultralytics can't work with the CUDA version that my GPU needs?
-
Hi, I'm trying to repeat the following from the official instructions on the site:

```python
from ultralytics import SAM

# Load a model
model = SAM("sam2_b.pt")
```

but I get an error. Are SAM2 models unavailable? Thanks in advance!
-
May I ask why, when I package it as app.exe and run it, using CPU inference generates a new instance of app.exe?
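On Windows, a frozen (e.g. PyInstaller-built) executable re-executes itself whenever a child process is spawned, and DataLoader workers spawn processes even on CPU, which produces the extra app.exe instances. Guarding the entry point is the standard fix; `main` here is a placeholder for your inference code:

```python
import multiprocessing

def main():
    # load the model and run inference here (placeholder)
    pass

if __name__ == "__main__":
    # no-op when not frozen, required inside a frozen Windows executable
    multiprocessing.freeze_support()
    main()
```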
-
I have added a custom layer and configured the YAML file in a separate folder. But whenever I give the CLI command !yolo train data=C:/Users/Asus/YOLOv8_exp/dataset/data.yaml model=C:/Users/Asus/yolo_model_exp/ultralytics/ultralytics/cfg/models/v8/yolov8-MSRCRNet-Waternet.yaml epochs=10 imgsz=53, it doesn't override the default installed ultralytics folder. I want the custom model to run from C:/Users/Asus/yolo_model_exp/ultralytics/ultralytics. The error is attached: WARNING ⚠️ no model scale passed. Assuming scale='n'.
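The yolo CLI always resolves to the installed ultralytics package, not a local clone. A common workaround (paths copied from the command above; treat this as a sketch, and note imgsz=640 is assumed here since imgsz is typically a multiple of 32) is to install the clone in editable mode, or put it first on sys.path and use the Python API:

```python
import sys

# Alternatively: pip install -e C:/Users/Asus/yolo_model_exp/ultralytics
sys.path.insert(0, "C:/Users/Asus/yolo_model_exp/ultralytics")
from ultralytics import YOLO  # now imports the local clone if present

model = YOLO("C:/Users/Asus/yolo_model_exp/ultralytics/ultralytics/cfg/models/v8/yolov8-MSRCRNet-Waternet.yaml")
model.train(data="C:/Users/Asus/YOLOv8_exp/dataset/data.yaml", epochs=10, imgsz=640)
```

The scale warning itself is harmless: it appears because the YAML filename carries no n/s/m/l/x scale suffix, so scale 'n' is assumed.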
-
Hi, can I use a .pth file? I have trained a custom model using Detectron2 and now want to use that weight, which is saved as a .pth file, in the YOLO architecture. Please help.
-
Hi! I have a question: what are the CPU and RAM requirements for using this model to make image predictions? I mean for the prediction process rather than training. Thank you so much.
-
Hi there, thank you for your work! I noticed that the results of https://docs.ultralytics.com/models/yolo-world/ differ from https://huggingface.co/spaces/stevengrove/YOLO-World. I am wondering whether you have evaluated the performance difference between them. Can we load the open-source YOLO-World model in the ultralytics library? Thank you!
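Ultralytics does ship YOLO-World weights that load like any other model; results can differ from the upstream Hugging Face demo because of different weights, preprocessing, and default thresholds. A minimal sketch (filenames assumed; the weights download on first use):

```python
from ultralytics import YOLO

model = YOLO("yolov8s-world.pt")      # YOLO-World weights via ultralytics
model.set_classes(["person", "bus"])  # open-vocabulary text prompt
results = model.predict("image.jpg")
```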
-