How to use Flame to create a YoloV5 Model with RoboFlow datasets
This tutorial will teach you to make your own YoloV5 weights for Object Detection to use with the YoloV5 Object Detection engine.
How to Install Flame
Flame is a YoloV5 Model (Weights) Trainer created by Shinobi Systems. It is currently only tested to run on Ubuntu 22.04.3 and to be used with NVIDIA RTX cards.
To ensure you have the required NVIDIA drivers, you can install Shinobi and then install the YoloV5 plugin with NVIDIA support from the Shinobi Superuser panel.
1. Clone the Flame repository at https://gitlab.com/Shinobi-Systems/Flame and enter the directory.
git clone https://gitlab.com/Shinobi-Systems/Flame.git
cd Flame
2. Install the dependencies.
npm i
3. Start the app as a daemon.
npm i pm2 -g
pm2 start app.js --name flame
4. Access the web panel on the default port 8989.
http://SERVER_IP:8989/
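If the page does not load, it can help to confirm the daemon is actually listening. A minimal reachability check, assuming the default port 8989 and that curl is available on the server:

```shell
# Quick reachability check for the Flame panel (assumes default port 8989).
SERVER_IP=127.0.0.1   # replace with your server's address
curl -fsS --max-time 5 "http://${SERVER_IP}:8989/" > /dev/null \
  && echo "Flame panel is reachable" \
  || echo "Flame panel is not responding; try 'pm2 logs flame'"
```

If the panel is not responding, `pm2 logs flame` shows the app's recent output.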
Information about Pages
General information about the pages in the Flame web panel. See below for more details.
Training Control
You will see the status of the current training session here. Set the following fields to start training.
- Batch Size : In simple terms, a higher number requires more VRAM on the GPU.
- Epochs : Number of times the trainer iterates over the dataset during training. Beware of too many epochs, as they can cause "overfitting", which means the model looks mainly for features nearly identical to the training set. Keeping this looser allows the inference engine some leniency when appraising contour similarity.
- Base Weights : This is the base model we use to train the data. In short, there are 3 main models to start with, each for a different deployment scenario.
- yolov5s : Runs on Edge Servers, Lower Accuracy
- yolov5m : Runs on Mid-Level GPU, Medium Accuracy
- yolov5l : Requires a Powerful GPU, Higher Accuracy
- Device : The selected GPUs to train upon.
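To build intuition for how Batch Size and Epochs interact, here is a rough sketch of the arithmetic. The numbers are illustrative, not Flame defaults:

```shell
# Illustrative numbers only: how batch size and epochs translate into
# optimizer steps. One step processes one batch of images.
DATASET_SIZE=1200   # images staged for training
BATCH_SIZE=16       # images per step; higher needs more GPU VRAM
EPOCHS=100          # full passes over the dataset
STEPS_PER_EPOCH=$(( (DATASET_SIZE + BATCH_SIZE - 1) / BATCH_SIZE ))  # ceiling division
TOTAL_STEPS=$(( STEPS_PER_EPOCH * EPOCHS ))
echo "Steps per epoch: ${STEPS_PER_EPOCH}, total steps: ${TOTAL_STEPS}"
```

Doubling the batch size halves the steps per epoch but roughly doubles the VRAM needed per step.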
Dataset Viewer
- This allows us to see what files are set to be trained upon.
Training Sessions
- This will display the results from previous training sessions. You may see a selection of images; these are the images being trained and tested upon.
- Some images will display only the ID number of the class (starting from 0, based on the Classes provided), while others will show the complete name. When the complete name appears with a number, like "Person 0.9", the image is a test result from during training: the model is 90% confident it sees a "Person".
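That preview label can be split mechanically; a small sketch, assuming the label text is exactly "ClassName confidence" as described above:

```shell
# A training-preview label like "Person 0.9" is the class name followed by
# the detector's confidence (0.9 = 90%). Split it with shell expansion.
LABEL="Person 0.9"
CLASS_NAME=${LABEL% *}    # everything before the last space
CONFIDENCE=${LABEL##* }   # everything after the last space
echo "Class: ${CLASS_NAME}, confidence: ${CONFIDENCE}"
```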
Tests
- This tool can run a test independently of the training process. These tests are logged here. By default there may be only 2 images here for the test. You can add more images to the tester by simply adding images to the Test Sets.
Video Annotator (WIP, Incomplete)
- A tool to allow uploading a video and quickly annotating frames in it.
- This tool is incomplete; don't use it yet.
System Status
- This will allow you to see RAM usage and GPU usage. Since CPU usage is less relevant here, it is not included.
Software Used
- A list of software and repositories used to make Flame work.
Using RoboFlow Datasets
- Find the dataset you want in "YOLO v5 PyTorch" format and download it with "Show Download Code" selected.
- Try this step with this dataset of wine bottles : https://universe.roboflow.com/denis-shishkov-ldf7t/wine-bottles/dataset/4
- Click continue and open the "Raw URL" tab of the Your Download Code window. Copy the URL shown.
- Open the Flame interface, go to the Custom Datasets tab, paste the URL into the Download URL field, and press the download icon next to it.
- Once downloaded, the list will reload and present the new dataset.
- Find the dataset in the listing and click Link. The dataset will become staged for training.
- Open the Dataset Viewer to confirm which data is now staged.
- The Dataset Viewer previously displayed all images downloaded from Google Open Images instead of the Active Training datasets. Now it only displays the active datasets.
- You can repeat the above steps with multiple datasets from https://universe.roboflow.com and Link them all to be staged for training.
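For reference, a RoboFlow "YOLO v5 PyTorch" export typically unpacks into a data.yaml plus images/labels folders. The skeleton below recreates that layout locally just to illustrate what gets staged; the class name is taken from the wine-bottles example, and the exact contents of your export may differ:

```shell
# Sketch of the usual YOLO v5 PyTorch export layout (contents may vary).
mkdir -p dataset/train/images dataset/train/labels \
         dataset/valid/images dataset/valid/labels
cat > dataset/data.yaml <<'EOF'
train: ../train/images
val: ../valid/images
nc: 1                     # number of classes
names: ['Wine Bottle']    # example class from the wine-bottles dataset
EOF
ls dataset
```

Each image in train/images has a matching .txt file in train/labels holding its class IDs and bounding boxes.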
- Now open "Training Control" to begin training to create a model (weights) for YoloV5.
Training your Active Dataset
Open the "Training Control" tab to begin training to create a model (weights) for YoloV5. Below you will find information about this page.
Control | Action | Notes |
---|---|---|
Start Training | Begins training with the specified settings (batch size, epochs, etc.). | Use this after you have configured your parameters. |
Stop Training | Immediately stops any ongoing training. | Use if you need to cancel training without completing all epochs. |
Test Latest Weights | Evaluates the most recently saved model using your currently active Test Set. Results are saved in the Tests tab. | Useful to quickly check performance on validation/test data. |
Trail Log | Toggles whether the log view automatically follows the latest output or stays at the current scroll position. | Helpful for monitoring real-time events and diagnosing any issues. |
Get Summary | Retrieves the last epoch's summary and displays it. | Includes details like accuracy, loss values, and possibly confusion matrices. |
Clear | Resets the visible output areas (logs, metrics, etc.). | Allows you to start fresh without losing already saved model files. |
About the Training Control Options
Parameter | Purpose | Notes |
---|---|---|
Batch Size | Number of images processed before model parameters are updated. | Larger sizes may require more GPU memory but can lead to more stable updates. |
Epochs | Total number of passes through the training dataset. | Beware of too many epochs, as they can cause "overfitting", which means the model looks mainly for features nearly identical to the training set. Keeping this looser allows the inference engine some leniency when appraising contour similarity. |
Base Weights | Pre-trained YOLOv5 checkpoint to initialize training. | Options range from n (nano) through x (extra large). Select based on hardware/resources. |
Device | Hardware device for training (e.g., GPU index). | If you have multiple GPUs, you can pick which one to use (0 for the first GPU, 1 for the second). |
Shutdown on Completion | Automatically powers down the system after training completes. | Useful for saving energy, especially on remote or cloud-based systems. |
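These fields mirror the flags of the upstream YOLOv5 train.py that Flame drives under the hood. The mapping below is an assumption based on the upstream trainer, shown only to illustrate how the options fit together:

```shell
# Hypothetical mapping of Flame's form fields onto YOLOv5 train.py flags.
BATCH=16
EPOCHS=100
WEIGHTS=yolov5s.pt
DEVICE=0   # first GPU; use "0,1" to train on two GPUs
TRAIN_ARGS="--batch-size ${BATCH} --epochs ${EPOCHS} --weights ${WEIGHTS} --device ${DEVICE}"
echo "python train.py ${TRAIN_ARGS}"
```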
About the Base Weights
Base Weight | Size/Speed | Accuracy Potential | Recommended Use |
---|---|---|---|
yolov5n (nano) | Very small model size, fastest inference | Lower accuracy than larger models | Ideal for edge devices or applications needing very fast inference. |
yolov5s (small) | Small size, fast inference | Good balance of speed & accuracy | Good starting point for most general object detection tasks. |
yolov5m (medium) | Moderate size, slightly slower inference | Higher accuracy than n/s | Recommended if you need better accuracy and have moderate GPU resources. |
yolov5l (large) | Large size, slower inference | Higher accuracy | Useful for tasks requiring more precise detection at the cost of speed. |
yolov5x (extra large) | Largest size, slowest inference | Highest accuracy among n/s/m/l/x | Best for maximum accuracy if ample GPU memory and time are available. |
yolov5n6 | Very small model size, designed for larger images (P6 resolution) | Lower accuracy than s6/m6/l6/x6 | When you need faster inference for higher resolution input with minimal resources. |
yolov5s6 | Small model, designed for larger images | Good balance for higher resolution | Great for slightly more complex scenarios at higher resolution. |
yolov5m6 | Moderate model size for higher resolution | Higher accuracy than s6 | Similar to yolov5m but tailored for larger input sizes. |
yolov5l6 | Large model for higher resolution | Higher accuracy than m6 | For detailed tasks on large images where speed is less critical. |
yolov5x6 | Largest model for higher resolution | Highest accuracy in the 6 series | If maximizing accuracy on large images is a priority and GPU resources are abundant. |
How to use a created Model (Weights set) with Shinobi
- Open Flame
- Go to Training Sessions
- Find the session you want to use and click Package.
- Wait until a notice appears asking you to Download.
- You can find these packages in the Saved Packages page.
- Extract the contents of the downloaded zip into the weights folder of the YoloV5 PyTorch Plugin for Shinobi.
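A sketch of that extraction step, assuming the plugin lives under ~/Shinobi/plugins/yolo-v5-pt and the package zip is in ~/Downloads; adjust both paths to your setup:

```shell
# Example paths only; point these at your actual Shinobi install and package.
PLUGIN_DIR="$HOME/Shinobi/plugins/yolo-v5-pt"
mkdir -p "$PLUGIN_DIR/weights"
unzip -o "$HOME/Downloads/flame-package.zip" -d "$PLUGIN_DIR/weights" || true
ls "$PLUGIN_DIR/weights"   # should now contain your trained .pt file
```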
Installing the YoloV5 Shinobi Plugin without Superuser
This method also runs the plugin as a separate daemon from the Shinobi core process (camera.js).
- Download this zip and copy the yolo-v5-pt folder into your Shinobi/plugins folder.
- https://gitlab.com/Shinobi-Systems/shinobi-plugins/-/tree/master/plugins/yolo-v5-pt
- Enter the yolo-v5-pt folder and run npm i.
- Copy the contents of the downloaded model package (the one from Training Sessions) into the weights folder.
- Simply start the plugin by running
node shinobi-yolov5-pt.js
- Daemonize the plugin by running the following. You need to be root to do this.
pm2 start shinobi-yolov5-pt.js && pm2 save