How many Streams can my Machine do?
We all come to this question when doing our first setup. Here are some pointers to make understanding this a little easier.
Basic Estimator for Hardware Requirements
A tool is being developed to assist in understanding this. You may try it here.
At present it helps you estimate the following:
- Likely RAM usage
- Likely CPU usage
- Likely storage use
These estimates are based on the bitrate and number of cameras you specify.
The values shown are approximations and should only be used as a preliminary guide.
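As a rough sanity check on the storage estimate, continuous recording at a constant bitrate can be sized by hand. The camera count and hours below are illustrative values, not output from the tool; only the 2 Mbit/s figure comes from the recommendation later in this article.

```shell
#!/bin/sh
# Rough storage estimate for continuous recording at a constant bitrate.
# Assumed values for illustration: 4 cameras, 2 Mbit/s each, recording 24/7.

BITRATE_MBPS=2        # per-camera bitrate in Mbit/s
CAMERAS=4
HOURS_PER_DAY=24

# Mbit/s -> MB/hour: multiply by 3600 seconds, divide by 8 bits per byte.
MB_PER_HOUR=$(( BITRATE_MBPS * 3600 / 8 ))
MB_PER_DAY=$(( MB_PER_HOUR * HOURS_PER_DAY * CAMERAS ))

echo "Per camera: ${MB_PER_HOUR} MB/hour"
echo "All ${CAMERAS} cameras: ${MB_PER_DAY} MB/day (~$(( MB_PER_DAY / 1000 )) GB/day)"
```

At 2 Mbit/s a single camera writes roughly 900 MB per hour, so four cameras fill about 86 GB per day of continuous recording.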
Technical Reasoning
The answer to "How many Streams can my Machine do?" looks complicated at first but is simple once the moving parts are understood.
The number of streams a machine can handle is based on a few factors:

- Bitrate coming from the camera.
  - Set this to 2 Mbit/s for generally good quality with 10 to 15 fps streams at 4MP and 5MP resolutions.
  - This is an option set within the camera itself, not in Shinobi. See Optimization for more information.
- FPS and Resolution
  - These matter a lot less than bitrate but still factor in.
- CPU architecture of the machine running Shinobi
  - x86-64 CPUs (also known as x64, x86_64, AMD64 and Intel 64) work fairly well even when asked to encode a fair number of cameras.
    - Examples: Intel i3, i5, i7, Xeon and AMD Ryzen
  - ARM CPUs have a difficult time doing encoding at this time.
    - Examples: Jetson Nano and Raspberry Pi 4
My story? I have run ~43 cameras, a mix of 4MP/5MP cameras running 10 to 15 fps at 2 Mbit/s... on a Jetson Nano. No encoding. Just streaming and continuous recording.
No Encoding and Encoding
You've probably heard the terms encode and decode before. This is what they mean in the context of FFmpeg (and Shinobi):
- Decode : The action of consumption. When your FFmpeg process makes the connection to the camera and begins to receive video stream data.
  - If you do not optimize your camera's settings, most importantly bitrate, you won't be able to support many cameras.
  - If you decode an H.264 stream you will be able to support more cameras.
  - If you decode an H.265 stream you will be able to support more cameras IF your hardware has an HEVC decoder. If it does not, you might be beating up your CPU. You may want to consider GPU decoding if you already have a GPU.
  - If you decode an MJPEG stream you won't be able to support many cameras. MJPEG is an older method of streaming IP camera video data. If your cameras are using it, please check whether they can be switched to H.264.
  - Hardware acceleration can play a part here, but it isn't necessary if adjustments are made within the camera's internal settings. See Hardware Acceleration for more information.
- Encode : The action of creation for distribution. When your FFmpeg process creates frames with CPU power for things like your Detection Engines or JPEG-based Stream Types.
  - Detection Engines can be Motion Detection or Object Detection.
    - Using Motion Detection will encode frames.
    - Using Object Detection will encode frames. All plugins require encoding frames.
  - Stream Types are the kinds of data Shinobi will provide through its API. They are the streams shown in the Shinobi dashboard.
    - Enabling the JPEG API will encode frames.
    - Using the MJPEG Stream Type will encode frames.
    - Using the Base64 Stream Type will encode frames.
  - Hardware acceleration can play a part here, but it isn't necessary if adjustments are made within the camera's internal settings. See Hardware Acceleration for more information.
- No Encoding (Not-Encoding, the "copy" encoder) : The default options in the Monitor Settings window are already preset to not encode, so it uses very little CPU.
  - This means that if you choose not to encode, you will be able to support many cameras.
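In FFmpeg terms, the difference between "No Encoding" and encoding can be sketched as below. The RTSP URL and output names are placeholders, and the commands are only printed here rather than run, since no camera is available; Shinobi assembles equivalent FFmpeg commands internally.

```shell
#!/bin/sh
# Illustration only: placeholder camera URL, commands printed, not executed.

SOURCE="rtsp://user:pass@192.168.1.10:554/stream"

# "No Encoding" (copy): the camera's H.264 packets are copied straight into
# the recording container. No frames are created, so CPU use stays very low.
COPY_CMD="ffmpeg -i $SOURCE -c copy record.mp4"

# Encoding: new frames are created with CPU power, e.g. a JPEG stream for a
# detection engine or the MJPEG Stream Type. This is the CPU-heavy path.
ENCODE_CMD="ffmpeg -i $SOURCE -f image2pipe -c:v mjpeg -q:v 5 pipe:1"

echo "$COPY_CMD"
echo "$ENCODE_CMD"
```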
Optimization
You may review this article to better understand possible option choices for the best performance with Shinobi.
Focus on Bitrate. Use the recommended value, or tweak it if quality suffers from this adjustment.
If you want to use Shinobi to trigger recordings on an event like Object Detection or Motion Detection (Traditional Recording), then follow the I-frame settings as well. This option is sometimes named "key frame", among a variety of other labels. They all mean the same thing: the size of the "group of frames". A bigger group usually means smaller recording file sizes, but a smaller group usually means less delay in your streams.
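When a camera expresses the key frame interval as a number of frames, the arithmetic is just frames per second times the desired seconds between key frames. The values below are illustrative assumptions, not settings pulled from any specific camera.

```shell
#!/bin/sh
# Key frame (I-frame) interval as a group-of-frames size.
# Assumed values: 15 fps stream, one key frame every 2 seconds.

FPS=15
KEYFRAME_SECONDS=2

GOP=$(( FPS * KEYFRAME_SECONDS ))
echo "Set the I-frame interval to ${GOP} frames"   # 30 frames at 15 fps
```

A larger `KEYFRAME_SECONDS` shrinks recordings but makes streams slower to start; a smaller one does the opposite.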
Hardware Acceleration
The Juice. Sorry to say that when doing decoding or encoding you may be limited by your GPU vendor. For example, some NVIDIA cards only allow 3 streams to run concurrently. This limit covers encoding and decoding combined, meaning if you choose to both decode and encode one stream, it uses two of the available sessions.
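A GPU-accelerated pipeline might look like the sketch below. The flags assume an FFmpeg build compiled with NVIDIA NVDEC/NVENC support, and the camera URL is a placeholder; the command is printed rather than executed here.

```shell
#!/bin/sh
# Sketch only: assumes FFmpeg built with NVDEC/NVENC; placeholder URL.

SOURCE="rtsp://user:pass@192.168.1.10:554/stream"

# GPU decode (NVDEC via the cuvid decoder) plus GPU encode (NVENC) on one
# stream. On a card limited to 3 concurrent sessions, this uses two of them.
HW_CMD="ffmpeg -hwaccel cuda -c:v h264_cuvid -i $SOURCE -c:v h264_nvenc -b:v 2M out.mp4"

echo "$HW_CMD"
```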
What is the point?
Well I like to use my GPU mainly for AI operations, like Object Detection. NVIDIA GPUs work great with the provided TensorFlow plugin and License Plate Recognition.
- TensorFlow Plugin (Object Detection) : https://gitlab.com/Shinobi-Systems/Shinobi/-/tree/dev/plugins/tensorflow
- OpenALPR Plugin (License Plate Recognition) : https://gitlab.com/Shinobi-Systems/Shinobi/-/tree/dev/plugins/openalpr
- Face-API.js Plugin (Face Recognition) : https://gitlab.com/Shinobi-Systems/Shinobi/-/tree/dev/plugins/face